Strings are poor substitutes for capabilities
Cairo Jackson
Greenhorn
Joined: Jan 18, 2007
Posts: 14
posted
Mar 09, 2007 01:25:00
0
I am reading the book Effective Java, written by Joshua Bloch. I'm now at Item 32: Avoid strings where other types are more appropriate.
I can't understand the last point: Strings are poor substitutes for capabilities. I copied the part that I don't understand:
----------------------------------------------------------------------
Occasionally, strings are used to grant access to some functionality. For example, consider the design of a thread-local variable facility. Such a facility provides variables for which each thread has its own value. When confronted with designing such a facility several years ago, several people independently came up with the same design in which client-provided string keys grant access to the contents of a thread-local variable:
public class ThreadLocal {
    private ThreadLocal() { } // Noninstantiable

    // Sets the current thread's value for the named variable.
    public static void set(String key, Object value);

    // Returns the current thread's value for the named variable.
    public static Object get(String key);
}
The problem with this approach is that the keys represent a shared global namespace. If two independent clients of the package decide to use the same name for their thread-local variable, they unintentionally share the variable, which will generally cause both clients to fail. Also, the security is poor; a malicious client could intentionally use the same key as another client to gain illicit access to the other client's data. This API can be fixed by replacing the string with an unforgeable key (sometimes called a capability):
public class ThreadLocal {
    private ThreadLocal() { } // Noninstantiable

    public static class Key {
        Key() { }
    }

    // Generates a unique, unforgeable key
    public static Key getKey() {
        return new Key();
    }

    public static void set(Key key, Object value);
    public static Object get(Key key);
}
While this solves both of the problems with the string-based API, you can do better. You don't really need the static methods any more. They can instead become instance methods on the key, at which point the key is no longer a key: it is a thread-local variable. At this point, the noninstantiable top-level class isn't doing anything for you any more, so you might as well get rid of it and rename the nested class to ThreadLocal:
public class ThreadLocal {
    public ThreadLocal() { }
    public void set(Object value);
    public Object get();
}
This is, roughly speaking, the API that java.util.ThreadLocal provides. In addition to solving the problems with the string-based API, it's faster and more elegant than either of the key-based APIs.
----------------------------------------------------------------------
I really don't understand what he wants to say, especially with the ThreadLocal example. Can anyone tell me what "Strings are poor substitutes for capabilities" actually means? Perhaps with some simpler sample code?
Thank you very much.
David O'Meara
Rancher
Joined: Mar 06, 2001
Posts: 13459
I like...
posted
Mar 09, 2007 01:40:00
0
Forget ThreadLocal for now; if you haven't already used it, it will just confuse the matter.
Imagine you have a nasty caching mechanism that is maintained globally and uses String values as keys for the data. (Don't laugh, I've seen things that were conceptually the same.) You give data a name, put it in the cache, then pull it out by name.
Now you have only a single namespace: all the String keys you use live in the same pool of cached-object names. If you read and write two objects called "mydata" from different locations in your application, they will collide and interfere with each other.
Here the capability we're looking for is the unique keying of cached data. Strings look like a good fit and are easy to use, but they don't actually provide the capabilities required.
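To make the collision concrete, here is a minimal, runnable sketch of the two designs. All class and method names (StringKeyedCache, KeyedCache, newKey, and so on) are invented for illustration; they are not a real caching API:

```java
// Hypothetical string-keyed cache: any two clients that pick the
// same name silently share (and clobber) each other's entry.
class StringKeyedCache {
    private static final java.util.Map<String, Object> cache =
            new java.util.HashMap<String, Object>();
    static void put(String name, Object value) { cache.put(name, value); }
    static Object get(String name) { return cache.get(name); }
}

// Capability style: the key object itself grants access. Two clients
// can never collide, because every `new Key()` is a distinct object.
class KeyedCache {
    static final class Key { }  // unforgeable key
    private static final java.util.Map<Key, Object> cache =
            new java.util.HashMap<Key, Object>();
    static Key newKey() { return new Key(); }
    static void put(Key key, Object value) { cache.put(key, value); }
    static Object get(Key key) { return cache.get(key); }
}
```

The string-keyed version silently overwrites client A's entry when client B happens to pick the same name; the capability version cannot collide because each Key instance is a distinct, unforgeable object.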
Anupam Sinha
Ranch Hand
Joined: Apr 13, 2003
Posts: 1090
posted
Mar 15, 2007 04:28:00
0
Hi
I am also reading the same book, and I am at the same item. Can you please throw some more light on it? Especially considering that a thread-local variable is in any case local to a thread, how can there be a key collision?
David O'Meara
Rancher
Joined: Mar 06, 2001
Posts: 13459
I like...
posted
Mar 15, 2007 04:58:00
0
Because the String key doesn't specify where the name is valid during the execution of the thread. If multiple sections of code set/get the thread-local value under the same String key, there is no implicit mechanism to protect against this namespace collision. If you want a locking or key mechanism, build one, but be wary of using Strings, as they are a poor substitute for the real capability.
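This is exactly the hole that java.lang.ThreadLocal closes: the variable object itself is the unforgeable key, so two independently created variables can never collide, even within the same thread. A small sketch (the Holders class is just a hypothetical place to keep the two variables):

```java
// java.lang.ThreadLocal: the variable object acts as the key.
// Two independently created variables cannot collide, even if both
// authors would have chosen the name "mydata" in a string-keyed design.
class Holders {
    static final ThreadLocal<String> FIRST  = new ThreadLocal<String>();
    static final ThreadLocal<String> SECOND = new ThreadLocal<String>();
}
```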
#include <ucred.h>
int getpeerucred(int fd, ucred_t **ucred);
The getpeerucred() function returns the credentials of the peer endpoint of a connection-oriented socket (SOCK_STREAM) or STREAM fd at the time the endpoint was created or the connection was established. A process that initiates a connection retrieves the credentials of its peer at the time the peer's endpoint was created. A process that listens for connections retrieves the credentials of the peer at the time the peer initiated the connection.
When successful, getpeerucred() stores the pointer to a freshly allocated ucred_t in the memory location pointed to by the ucred argument if that memory location contains the null pointer. If the memory location is non-null, it will reuse the existing ucred_t.
When ucred is no longer needed, a credential allocated by getpeerucred() should be freed with ucred_free(3C).
Not all fields of the ucred_t may be available for all peer endpoints and all callers.
Upon successful completion, getpeerucred() returns 0. Otherwise, it returns -1 and errno is set to indicate the error.
The getpeerucred() function will fail if:
EAGAIN There is not enough memory available to allocate sufficient memory to hold the user credential. The application can try again later.
EBADF The fd argument is not a valid file descriptor.
EFAULT The pointer location pointed to by the ucred_t ** argument points to an invalid, non-null address.
EINVAL The socket is connected but the peer credentials are unknown.
ENOMEM The physical limits of the system are exceeded by the memory allocation needed to hold the user credential.
ENOTCONN The socket or STREAM is not connected or the STREAM's peer is unknown.
ENOTSUP This operation is not supported on this file descriptor.
See attributes(5) for descriptions of the following attributes:
door_ucred(3DOOR), ucred_get(3C), attributes(5), connld(7M)
The system currently supports both sides of connection endpoints for local AF_UNIX, AF_INET, and AF_INET6 sockets, /dev/tcp, /dev/ticots, and /dev/ticotsord XTI/TLI connections, and pipe file descriptors sent using I_SENDFD as a result of the open of a named pipe with the "connld" module pushed. | https://backdrift.org/man/SunOS-5.10/man3c/getpeerucred.3c.html | CC-MAIN-2021-21 | refinedweb | 357 | 53.41 |
This article is a very simple introduction to writing a Windows Forms application for the Microsoft .NET Framework using C#. The sample application demonstrates how to create and lay out controls on a simple form and how to handle mouse click events. The application displays a form showing the attributes of a file. This form is similar to the Properties dialog box of a file (right-click on a file and click the Properties menu item). Since attributes of a file will be shown, the sample also shows how to use file I/O operations in the .NET Framework.
Like every Win32 application source file, we would normally start with the inclusion of some header files. C# does not make use of header files; it uses namespaces for this purpose. Most of the C# core functionality is implemented in the System namespace. For forms applications, the functionality is included in the System.WinForms namespace. Therefore, right at the top of our source file we need to reference these namespaces.
using System;
using System.WinForms;
We will need some more namespace definitions, but I will explain them as we go along with this sample application. Like every C# application, a Windows Forms application is defined as a class. Since we will be making use of the Form class, our class needs to derive from it.
public class WinFileInfo : Form
The next thing that needs to be identified is the entry point for the application. Unlike Win32 applications, the method Main (not WinMain) will be the entry point for this application.
public static void Main ()
{
    Application.Run (new WinFileInfo ());
}
The Application class provides static methods to manage an application, such as running it, closing it, and managing Windows messages. In the Main method of the application we start running the application using the Run method of the Application class. Later on we will call the Exit method on the Application class to stop the application, or in other words, close the form.
Since we are writing a GUI application, we need some controls on the form. Because this application has been written using only the .NET SDK, no wizard or tools have been used. So how can we put controls on the form and define their locations and sizes? This is very much like writing a resource file for a Win32 application. The System.WinForms namespace contains definitions for all the common controls we will use on forms or in Windows Forms applications, e.g. Button, CheckBox, RadioButton, etc. For more information, check the online documentation for the .NET SDK. The Control class defines the base class for controls and contains the base methods. All the controls override the virtual methods specific to their functionality.

First, let's see what the output of this Form application looks like.

The application does the layout of controls in the InitForm method.
public WinFileInfo ()
{
    InitForm ();
}
Setting the Text property of the Form object will set the title of the form.
this.Text = "File Information Application";
The client size of the form is controlled by the ClientSize property of the form.
this.ClientSize = new Size(400, 280);
The controls in a Forms application follow a hierarchical pattern, i.e., each control acts as a container for other controls. For a complex GUI design, controls can be layered on top of each other and then pushed back or brought forward as needed. In this particular application, the Form acts as a container for a Panel, a GroupBox and a bunch of other controls like buttons, static text, etc. The GroupBox acts as a container for three check boxes. The Panel acts as a container for an edit control and a static text (in Forms terms, a Label control). The following code adds the check boxes to the GroupBox control container using the Add method of the Controls collection.
wndAttribBox.Controls.Add (wndArchiveCheck);
wndAttribBox.Controls.Add (wndReadOnlyCheck);
wndAttribBox.Controls.Add (wndHiddenCheck);
The other method of adding controls to the controls collection is to directly create an array of control objects and set the All property of the control collection of the Form to that array. The following code sets the child controls in the Form container.
this.Controls.All = new Control [] {
    wndPanel, wndAttribBox, wndFileExistCheck,
    wndLocationLabel, wndLocation,
    wndCreateTimeLabel, wndLastAccessLabel, wndLastWriteLabel,
    wndCreateTime, wndLastAccessTime, wndLastWriteTime,
    wndFindButton, wndCloseButton
};
Every Control in the .NET Framework is an Object that implements methods, properties and events. An object is created using the new operator in C#. The same concept applies to controls too. I will describe the creation of a Button control, setting its properties like size and location and then event handlers for the click on the button.
First we need to define the control variables for the WinFileInfo class. The following code shows how the various controls have been defined and created.
Button   wndFindButton    = new Button ();
Button   wndCloseButton   = new Button ();
CheckBox wndFileExistCheck = new CheckBox ();
CheckBox wndArchiveCheck   = new CheckBox ();
CheckBox wndReadOnlyCheck  = new CheckBox ();
CheckBox wndHiddenCheck    = new CheckBox ();
The best thing about the .NET Framework is that we do not have to worry about releasing the variables allocated on the heap using the new operator. Garbage collection will do its magic (trust MS on this) when the variable is not in use anymore.
The following code shows how the various properties for the Find button have been set. The name of the button's properties are very self-explanatory but I will try to describe them one by one.
// Set some properties for the Find button.
wndFindButton.Text = "Find";
wndFindButton.TabIndex = 0;
wndFindButton.Anchor = AnchorStyles.BottomRight;
wndFindButton.Size = new Size (72, 24);
wndFindButton.Location = new Point (110, 250);
wndFindButton.Click += new EventHandler (this.buttonFind_click);
The Text property is used to set the text that will be displayed on the button. This is like calling the SetWindowText Win32 API on a button control.

The TabIndex property sets the control's position in the tab order, i.e., where this control stands in the sequence when a user navigates through the controls using the Tab key. This is like setting the tab order using the Visual Studio control wizard. A TabIndex of 0 indicates that this control will be the first to gain focus when the Tab key is pressed.

The Anchor property is very important if you want to fix the location of a control with respect to some fixed point of its parent control or container. For example, in this case I want the button to always stay anchored to the bottom-right corner of the form, no matter what. So if the user tries to resize the form, the button will reposition itself to stay anchored to the bottom-right corner.

The Size property fixes the size of a control.

The Location property sets the location of the control with respect to its parent control.
The most important setting is how to handle the event raised when the user clicks on this control. This is done by adding an event handler to the Click event of the Control object. We create a new event handler using the EventHandler object, passing the method that will handle the event as an argument to the EventHandler constructor. The EventHandler created is then added to the Click event of the Button.
This way we can create controls dynamically and set their properties and event handling methods. After creating all the controls, do not forget to add them to the parent container, such as the Form, Panel, GroupBox, etc.
Like every Win32 control, we can customize the appearance and actions of all the controls. For example, which button should handle the message when the user presses ENTER? Setting the AcceptButton property of the form does this.
this.AcceptButton = wndFindButton;
The following code from the InitForm method shows some of the other properties that can be customized.
// We don't need a maximize box for this form.
this.MaximizeBox = false;

// Set the Find button as the ACCEPT button for this form.
this.AcceptButton = wndFindButton;

// Set the Close button as the Cancel operation button.
this.CancelButton = wndCloseButton;

// Set the start position of the form to be the center of the screen.
this.StartPosition = FormStartPosition.CenterScreen;

// And then activate the Form object.
this.Activated += new EventHandler (this.WinFileInfo_activate);
Yes, you can. For this you need to import the Win32 DLLs that implement the functions you need. For example, for API calls such as LoadImage and DeleteObject, we need to import the User32.dll and Gdi32.dll DLLs. This can be accomplished by using code as shown below.
//-------------------------------------------------
// We need to import the Win32 API calls used to deal with
// image loading.
//-------------------------------------------------
[DllImport("user32.dll")]
public static extern int LoadIcon(int hinst, String name);

[DllImport("user32.dll")]
public static extern int DestroyIcon(int hIcon);

[DllImport("user32.dll")]
public static extern int LoadImage(int hinst, String lpszName, uint uType,
                                   int cxDesired, int cyDesired, uint fuLoad);

[DllImport("gdi32.dll")]
public static extern int DeleteObject(int hObject);
This application shows the attributes of files, which means we need to do some file I/O operations. The .NET Framework provides a File object for I/O operations. We can create a File object by providing a file name as a parameter to the constructor.
File myFile = new File(wndFileName.Text);
Then we can make use of the Attributes property of the File object to get the attributes of the file. The attributes of the file are defined in the FileSystemAttributes enumeration. For all the available enum values, look in the documentation.
FileSystemAttributes attribs = myFile.Attributes;
After that, we can do logical operations on the Attributes value to check whether a particular file attribute is set or not. For example, to see whether a file is of the Archive type, the following code can be used.
if ((attribs & FileSystemAttributes.Archive) == FileSystemAttributes.Archive)
{
    wndArchiveCheck.Checked = true;
}
One thing that needs to be noticed is that the result of a logical AND ('&') operation on an enumeration is an enumeration, not a Boolean value. Therefore the resultant value is compared against an enum value. There is one more gotcha in this operation: if you do not put the logical operation, (attribs & FileSystemAttributes.Archive), in parentheses, the compiler will throw an error. Right now I do not know if this is a bug in the compiler or if that is the way it is intended, but for now put the logical operations in parentheses.
To get the file's creation date, use the CreationTime property of the File object. This returns a DateTime object, and you can then use the Format method of the DateTime object to get the string representation of the file's creation date. Check the online documentation for the DateTime object to see the available format values.
DateTime timeFile = myFile.CreationTime;
wndCreateTime.Text = timeFile.Format ("F", DateTimeFormatInfo.InvariantInfo);
Yes, you can. Use Visual Studio to generate a resource script file (.RC file) and add the resources you want to it. In this sample application, I have added an icon and a bitmap to the resource file. Then use the command-line RC compiler; this will generate a .RES file. For example, in this application the WinFileInfo.rc file generates the WinFileInfo.res file. Then, using the /win32res compiler option, the resources can be embedded in the application. One file that has to be copied into the folder is afxres.h; otherwise the RC file will not compile.
Yes, it too can be done. There are two ways to accomplish this. One is to use the /win32icon compiler option and specify the icon's file name. The disadvantage of this approach is that you cannot then embed resources in the application using the /win32res compiler option. The other approach is adding the icon to the resource file and making sure that it has the lowest ID. You can then embed the resource file using the /win32res compiler option. This way you will see your icon associated with the application in an Explorer window.
I have used a makefile that is provided with the .NET SDK. I had to make changes in master.mak and the makefile so that it compiles only my application, not the whole bunch of sample applications. I also did some extra work in the makefile to add some extra information to the output file. Here is how the makefile looks.
!include master.mak

_IMPORTS=$(_IMPORTS) /r:System.WinForms.DLL /r:System.DLL /r:Microsoft.Win32.Interop.DLL /r:System.Drawing.DLL /r:System.Net.DLL
_WIN32RES=$(_WIN32RES) /win32res:WinFileInfo.res
#_WIN32ICON=$(_WIN32ICON) /win32icon:FormIcon.ico
_XMLDOC=$(_XMLDOC) /doc:WinFileInfo.xml

all: WinFileInfo.exe

WinFileInfo.exe: WinFileInfo.cs
You must be wondering what that _XMLDOC is. This is a very interesting feature of the C# compiler that helps you create documentation for the methods of your class. But you have to follow very specific rules for the tags and where to place them. I have done this for a couple of methods in this application; you can look for more details in the online documentation. The following code shows how XML documentation is generated for the method ButtonClose_Click.
///<summary>
///ButtonClose_Click Method:
///<para>
///This method is invoked when the Close button on the dialog box is clicked.
///This method closes the application.
///</para>
///</summary>
This is just a very simple Windows Forms application. It should give you a starting point to build upon. I am not an expert on C# or Forms applications. I am learning too, so I thought I would share the experience with you. Please do send your comments and suggestions to me. I will try to improve this application and add some more advanced features.
The latest version is now RTM compliant. Here is a brief discussion of the changes.
The application needs to refer to some system assemblies like System.DLL, System.Windows.Forms.DLL, etc. Earlier we had to specify all these assembly references through the /reference switch. But now the C# compiler looks for a CSC.rsp response file in the project directory. If the response file is not found in the project directory, it looks for the file in the directory from which the compiler was invoked. Since we are assuming that you don't have the IDE installed, the compiler will be invoked from the "Windows Folder"\Microsoft.NET\"CLR Version" directory. You will find a CSC.rsp file there. Take a look at this file: you will see that it already adds references to the most commonly referenced assemblies. Therefore you don't need to specify them through the /reference switch anymore.
If you want the compiler to ignore the inclusion of the default CSC.rsp file, use the /noconfig compilation switch. If you include your own CSC.rsp file in the project folder, its settings override the settings specified in the global CSC.rsp file. For more details, please look in the documentation for this compiler switch. Unfortunately you cannot specify the /noconfig switch in the Visual Studio .NET IDE.
There are namespace changes, attribute name and value changes and some method name changes in the new code from the previous version that was written for Beta1. Other than that I did not have to make a whole lot of changes.
Next we will try to write an article that shows how an ASP.NET application can be written without the help of the IDE, and how ASP.NET references the assemblies at compile time and run time.
Dependency Injection - An Introductory Tutorial
This article discusses dependency injection in a tutorial format. It covers some of the newer features of Spring DI such as annotations, improved XML configuration and more.
Dependency Injection
Dependency Injection (DI) refers to the process of supplying an external dependency to a software component. DI can help make your code architecturally pure. It aids in design by interface as well as test-driven development by providing a consistent way to inject dependencies. For example, a data access object (DAO) may depend on a database connection. Instead of looking up the database connection with JNDI, you could inject it.
One way to think about a DI container like Spring is to think of JNDI turned inside out. Instead of an object looking up other objects that it needs to get its job done (dependencies), a DI container injects those dependent objects. This is the so-called Hollywood Principle, “Don't call us” (lookup objects), “we’ll call you” (inject objects).
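That "JNDI turned inside out" idea can be sketched in a few lines. The names below (Registry, LookupDao, InjectedDao) are invented for illustration; the Registry stands in for a JNDI-style lookup service:

```java
// Stand-in for a JNDI-style registry that hands out named resources.
class Registry {
    static Object lookup(String name) { return "connection-for-" + name; }
}

// Lookup style: the object reaches out and pulls its dependency in.
class LookupDao {
    Object connection() { return Registry.lookup("jdbc/mainDb"); }
}

// Injection style: the container (or caller) pushes the dependency in;
// the DAO never knows where the connection came from.
class InjectedDao {
    private final Object connection;
    InjectedDao(Object connection) { this.connection = connection; }  // constructor injection
    Object connection() { return connection; }
}
```

LookupDao is welded to the registry and to the name "jdbc/mainDb"; InjectedDao can be handed a real connection in production and a fake one in a unit test.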
If you have worked with CRC cards you can think of a dependency as a collaborator, i.e., an object that another object needs to perform its role.
Let's say that you have an automated teller machine (ATM) and it needs the ability to talk to a bank. It uses what it calls a transport object to do this. In this example, a transport object handles the low-level communication to the bank.
This example could be represented by either of the two interfaces as follows:
AutomatedTellerMachine interface
package com.arcmind.springquickstart;

import java.math.BigDecimal;

public interface AutomatedTellerMachine {
    void deposit(BigDecimal bd);
    void withdraw(BigDecimal bd);
}
ATMTransport interface
package com.arcmind.springquickstart;

public interface ATMTransport {
    void communicateWithBank(byte[] datapacket);
}
AutomatedTellerMachine implementation:
package com.arcmind.springquickstart;

import java.math.BigDecimal;

public class AutomatedTellerMachineImpl implements AutomatedTellerMachine {

    private ATMTransport transport;

    public void deposit(BigDecimal bd) {
        ...
        transport.communicateWithBank(...);
    }

    public void withdraw(BigDecimal bd) {
        ...
        transport.communicateWithBank(...);
    }

    public void setTransport(ATMTransport transport) {
        this.transport = transport;
    }
}
package com.arcmind.springquickstart;

public class SoapAtmTransport implements ATMTransport {
    public void communicateWithBank(byte[] datapacket) {
        ...
    }
}

package com.arcmind.springquickstart;

public class StandardAtmTransport implements ATMTransport {
    public void communicateWithBank(byte[] datapacket) {
        ...
    }
}

package com.arcmind.springquickstart;

public class SimulationAtmTransport implements ATMTransport {
    public void communicateWithBank(byte[] datapacket) {
        ...
    }
}
Notice the possible implementations of the ATMTransport interface. The AutomatedTellerMachineImpl does not know or care which transport it uses. Also, for testing and development, instead of talking to a real bank, you can use the SimulationAtmTransport.
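To see the injection itself with no container involved, here is a simplified, self-contained variant of the ATM example, with names shortened and String standing in for the real data types. The by-hand wiring at the end is roughly the call a DI container such as Spring would make for you from configuration:

```java
// Simplified, self-contained variant of the article's ATM example
// (names shortened; String stands in for the real data types).
interface Transport {
    String communicate(String packet);
}

class SimTransport implements Transport {
    public String communicate(String packet) { return "SIM:" + packet; }
}

class Atm {
    private Transport transport;  // the collaborator to be injected

    // Setter injection point -- a container like Spring calls this for us.
    public void setTransport(Transport transport) { this.transport = transport; }

    public String deposit(String amount) {
        return transport.communicate("deposit " + amount);
    }
}
```

Because Atm depends only on the Transport interface, swapping the simulation for a SOAP or socket transport is a one-line change at the injection point, and no code inside Atm is touched.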
ArcMind Inc., Rick's employer, offers Spring Framework training and consulting as well as JSF, GWT, JPA and more
dzoneCody replied on Tue, 2008/11/11 - 8:40am
Very nice article,
One of my biggest obstacles has been grasping DI conceptually. I think you did a good job in showing the "tutorial" perspective of DI. Can't wait to see more...
Rick Hightower replied on Thu, 2008/11/13 - 12:09pm
Jakob Jenkov replied on Tue, 2008/11/11 - 2:55pm
Hi Rick,
I just browsed the article quickly. Seems you have spent a long time putting this piece together! Kudos to you!
If I may comment a bit on the title, I think it is perhaps slightly misleading. This article, though definitely useful, is more of a "Dependency Injection with Spring" tutorial. Some of the concepts are unique to Spring, and the configuration examples definitely are too. But readers new to dependency injection could be led to believe that dependency injection is only something you can do with Spring. But there are Pico, Guice and Butterfly Container too, among others.
You do mention Spring early on page 1, so I guess the "mislead" reader is quickly corrected. Anyways, with frameworks growing as huge as Spring is, it is nice with "getting started / introductory" articles like this one.
I too wrote a tutorial on dependency injection, but my tutorial tries to explain the concept a bit more detached from any specific DI container (yet it shows examples using Butterfly DI Container). In the end of this tutorial I also explain when and when NOT to use dependency injection (according to my opinion / experience).
Rick Hightower replied on Tue, 2008/11/11 - 3:05pm
in response to: jj83777
digitalcorndawg replied on Tue, 2008/11/11 - 3:05pm
hovo73 replied on Tue, 2008/11/11 - 3:06pm
cmathias replied on Tue, 2008/11/11 - 3:09pm
Nice article Rick. I love the DI concept, but I struggled with it at first, as I imagine most developers do who have been authoring more traditional workhorse code. I find it amazing how using things like Spring let me get down to business...most of my code these days actually does only what it needs to do rather than taking twenty lines to set up the three important ones.
I think it's great that heavyweights like yourself that really 'get' this stuff take the time to turn back, and teach the rest who are lagging.
I would disagree with Jakob's observation as I feel you did mention alternatives more than once. (though yes, your samples are Spring). Spring has become a monster, but from my experiences so far, is still the sharpest knive in the drawer.
Keep it up Rick - the one I'm personally waiting for is the AspectJ introduction (specifically tapping cut points in compiled libraries). Can you just DI that into my head please?
tac replied on Tue, 2008/11/11 - 3:21pm
excellent concrete example of using dependency injection. I am a long-time Spring user but there were several useful pieces of information on some of the newer features. I will definitely add the 'p:' tags to my toolkit. Thanks,
-tom
Paul Hixson replied on Tue, 2008/11/11 - 3:24pm
Jakob Jenkov replied on Tue, 2008/11/11 - 3:59pm
Walter Bogaardt replied on Tue, 2008/11/11 - 4:38pm
Great explanation. I was wondering what happens to the role of the applicationContext with all the annotations. It was one of those todo things to look at when I had some cycles, and you cleared it up in a few lines.
I'd like to see something on the spring annotations features like @Bean and javaconfig, and spring or DI in the web world of servlets, jsf, jsp. Keep up the good articles.
Steven replied on Wed, 2008/11/12 - 6:41am
great read, but it feels like it missed a key part: actually integrating spring into an app.
i.e. starting it up, adding the context/listener to web.xml or something similar like even using the bean factory.
Stuart Halloway replied on Wed, 2008/11/12 - 9:15am
Hi Rick,
Responding per your request for comments. I found the tutotial easy-to-follow, but I don't want to follow where it goes. :-) I think DI is overrated, and both XML and Annotations are bad ways to do it. If a team is committed to DI, I would recommend a Groovy, JRuby, or Clojure based DSL.
Cheers,
Stuart
johnfryar replied on Wed, 2008/11/12 - 9:53am
You have particular skill with examples. The examples you provide here, as with your JSF tutorial on developerworks, are USEFUL and clearly illustrate some key concepts. The images you use to highlight how the xml is related to the java code and how the annotated classes related one-to-another are top-notch. You could probably expand that approach into a book and it would be very helpful to both newbies and experience Spring developers.
I agree with Jakob that the title is misleading. This really isn't about DI so much as it is about DI using Spring. That being said, it's a fantastic article for anyone looking for an introduction to DI using Spring.
I look forward to the follow-up articles.
Solomon replied on Wed, 2008/11/12 - 2:06pm
This is a follow up to your linkedIn question:
I'd like to revise this question a bit to: "When should you annotate a class with Injection configuration, and when should you externalize the Injection configuration?"
IMHO, the more generic question is key on how you should think about DI. 80-90% of the time you can annotate a class with DI info. The types of annotations you're adding are "I need an instance of X class injected here" (for example Spring's @Autowired and @Required, JEE's @Resource and Guice's @Inject), "I am available for use" (Spring's @Component/@Service/@Controller/@Repository, JEE's @Stateless/..., and Guice's @ImplementedBy/@Singleton/...), and "I need/am a specific instance of X class/interface" (Spring's @Qualifier, and Guice's @BindingAnnotation).
There are quite a few reasons for the other 10-20% of cases that don't make sense, and for those, DI configuration needs to be put somewhere else, for example in Spring XML, Spring JavaConfig or Guice Modules/Producers. Some of the reasons are:
1) The class is defined in a location that you don't have control over, such as a vendor jar.
2) The class encapsulates an environmental configuration, such as setting up a DataSource or Hibernate configuration
3) You need more than one instance of the same exact class
Some cases have a 100% need to be externalized (for example, database and hibernate). However, there are plenty of cases that are pretty tricky to either categorize or implement.
One quick example of a borderline common case is a Spring @Controller. I often configure MVC Controllers with a JSP file or Tile that I'd like to use. You can do that by injecting a String value (with a Qualifier "mySuccssView") into the controller; however, Spring doesn't let you use the @Autowired annotation, you have to use JEE's @Resource annotation, or use XML. By the way, the definition of the mySuccssView in Spring XML took 3 lines of code!!! Anyway, I was using WebSphere 6.1, and had to opt out of the @Resource because of classpath issues, so I had to use the XML config. The problem with the XML config is that @Controller servers two purposes: 1) defining a Role in the system, 2) defining the class as "I am available for use in a DI use" as part of classpath scanning and default autowiring. I wanted to set the value the value in XML, therefore the classpath scanning part of #2 was out of the question. To do that, I had to customize the component scanning with a Spring "context:exclude-filter."
The point of that example is that there are still plenty of gotchas that need to be ironed out. Guice is slightly ahead of Spring on those kinds of annotation gotchas. One of the most seemingly obvious abilities that Guice has is the import of key/value pairs into the module (a.k.a. "application context") as seen in this thread in the Guice News Group.
There's a lot more to this topic, but I hope that I've she some more light on this esoteric subject.
jordanz replied on Wed, 2008/11/12 - 5:42pm
Solomon replied on Wed, 2008/11/12 - 7:22pm
in response to: jordanz
Here we go again. This argument is so 2004. I assume that you've read good old Martin Fowler's article on the subject.
DI moved away from Singletons and JNDI lookups. IHMO, DI's been a great boon for the quality and clarity of my code base. XML and annotation aren't code, they are indeed declarative configuration and possibly meta-programming.
You are correct. The term "Configuration" does apply. Usually I think about "Configuration" in the context of environment related setup. DI does make environmental configuration a lot simpler. However, it also adds in the ability to separate out how a particular class does its work and participates with other classes from how that class is exposed and how that class finds instances of other classes.
Yeah, at the end of the day DI is "Configuration", but its configuration that inherently replaces the code you as a developer have to write for the types of things covered by Creational and Structural Design Patterns.
jordanz replied on Wed, 2008/11/12 - 7:46pm
in response to: sduskis
I've read most articles on the subject. Yes, the counter-arguments have been going on for a long time. That doesn't mean that the subject is settled.
Solomon replied on Wed, 2008/11/12 - 8:30pm
in response to: jordanz
I've read most articles on the subject. Yes, the counter-arguments have been going on for a long time. That doesn't mean that the subject is settled.[/quote]
You're right. The subject of DI is not settled yet. But dismissing it as "just configuration" and something that "was done in software practice for years" doesn't seem to do it justice. We mortals do need simple phrases for complex concepts that DI solves like dependency management, environmental configuration and separation of concerns. There are other methodologies of solving those same needs. There are also different imlementations of DI that have plenty of merit.
With that said, comments like "Why not call this what we've always called this: Configuration. Everyone knows what that is." don't seem to lead the direction of the communal conversation towards "settling" the subject.
jordanz replied on Wed, 2008/11/12 - 9:43pm
in response to: sduskis.
Solomon replied on Thu, 2008/11/13 - 10:06am
in response to: jordanz.
[/quote]
DiP defines how objects interact with each other and that an object that encapsulates some behavior (DAO, service, UI and etc) should never (or rarely) look up or instantiate another behavior-driven dependency. Those dependencies can be either it another object or some configuration setting. DiP says that in order to get those dependencies, some external assembly system (either a framework, or hand-coding) should assemble the application and "inject" specific dependencies as needed.
In other words, you can adhere to the DI Principle without any configuration. It's the Dependency Injection Frameworks that use configuration to create a solution that adheres to DiP.
ravihasija replied on Thu, 2008/11/13 - 11:23am
This is a great article. Led by example, and not dry by just providing theory. Simple, yet beautiful. Kudos to you for creating this. I have a firm grasp of what Spring and DI (in general) is.
I would love to have such articles on other topics like Hibernate, EhCache, Struts, EJB, JSP, etc to name a few. Probably they are already out there on this website. If not then, something to consider ;-)
Thank you so much!
Sincerely,
Ravi
Rick Hightower replied on Thu, 2008/11/13 - 12:10pm
Rick Hightower replied on Thu, 2008/11/13 - 2:35pm
in response to: ravihasija
Thanks Ravi. I really appreciate the positive feedback. It really helps. Thanks.
Bruno Sofiato replied on Fri, 2008/11/14 - 6:33pm
Cool article.
I personally don't like the current annotation usage model spread on the majority of the frameworks, guys configuration isn't meta-data. Annotations IMHO should be used to express semantical information within the code itself. The @Required annotations got it right. The ATMTransport is required by the AutomatedTellerMachineImpl class, thats an implementation detail of that class, the @Required is expressing a semantical info.
The @Autowired and the @Qualifier on the otherside are expressing config details. They are linking an property value to an named instance on the bean container. Should a class know which named instances will be injected on it's instances ? I think it should not.
Rick Hightower replied on Fri, 2008/11/14 - 11:08pm
in response to: Bruno Sofiato
Bruno Sofiato replied on Sat, 2008/11/15 - 10:42am
in response to: rhightower
Yes, some objects were made to collaborate with others, but maybe the property declaration would provide this kind of info.
I agree that annotations are a popular aproach these days, maybe it's a consequence of this grudge against XML based configuration that is wildspread among the Java developer's ranks.
Just my 2 cents.
William Willems replied on Mon, 2008/11/17 - 5:54am
nitin pai replied on Mon, 2008/11/17 - 10:13pm
Rick - Thanks a ton. This timing of this article has been perfect to match with my intentions. I wanted to make my team understand the concepts of Spring such as DI, IOC, AOP etc. And you have just eased out my work since I do not have to compile the articles myself. This article is simply to the point.
Eagerly waiting for your further articles on Spring features. And if you require any help just let me know. I am on the same track too :)
george.jiang replied on Tue, 2008/11/18 - 9:43pm
The best introduction to Spring 2.5 DI. Better than those published Spring books.
Rick, when will the intoduction article to Spring AOP be out? Thanks. | http://java.dzone.com/articles/dependency-injection-an-introd | crawl-002 | refinedweb | 2,847 | 55.84 |
There" encoding="utf-8"?> <rdf:RDF xmlns: <channel rdf: <title>SitePoint.com</title> <link></link> <description>SitePoint is the natural place to go to grow your online business.</description> <items> <rdf:Seq> <rdf:li rdf: </rdf:Seq> </items> </channel> <item rdf: can’t be accessed by the usual means.
Here’s the solution. First, check what the URI is for the namespace. In this case, the
dc: prefix maps to the URI:
<rdf:RDF xmlns:
Then use the
children method of the SimpleXML object, passing it that URI:
$feed = simplexml_load_file(''); foreach ($feed->item as $item) { $ns_dc = $item->children(''); echo $ns_dc->date; }
When you pass the namespace URI to the
children method, you get a SimpleXML collection of the child elements belonging to that namespace. You can work with that collection the same way you would with any SimpleXML collection.
You can use the
attributes method in the same way to obtain attributes with namespace prefixes.
October 20th, 2005 at 8:37 pm
Could we please have an example for the attributes please? I have failed to use the attributes method to get at a name spaced attribute contained in a name spaced element (and caused my scalp to bleed in the process!!!). In the end I regex’ed out the attribute to its own element so I could get SimpleXML to work; not so simple!
October 20th, 2005 at 9:34 pm
Hmm okay. To get at the
rdf:resourceattribute in the
rdf:litag in the example above:
October 22nd, 2005 at 10:03 am
Thanks for the pointers Kevin!
January 13th, 2006 at 3:34 pm
[...] This article reveals the ‘special’ steps one has to take to read XML in PHP 5 containing namespaces—more psychological support with a bit more technical help. Another article, “XML Namespaces” does more of the same. [...]
June 20th, 2006 at 11:46 pm
What about a line that has both a namespace and an attribute such as the following:
[broken code sample removed -Ed]
How would I grab the itunes:image reference in the preceding xml?
I’ve tried the following code snippet to no avail(and many others):
foreach($rss->channel as $channel)
{
$channel_itunes = $channel->children(’’);
$image = $channel_itunes->image['url'];
}
Any help would be greatly appreciated.
June 21st, 2006 at 9:06 am
joeblow,
Could you post your code sample again and escape your special characters (e.g. <)?
June 22nd, 2006 at 1:07 am
<channel>
<itunes:image href=”some.link.com” type=”video”>
<
I actually figured out the solution using your documentation and some experimentation.
foreach($rss->channel as $channel)
{
$channel_itunes = $channel->children(’’);
$image_items = $channel_itunes->attributes();
$image = $image_items['href'];
}
June 23rd, 2006 at 5:23 am
Is there any way to see what is contained in the buffer for $channel_itunes in the above example. I have tried the Zend debugger but is just states “Object of: SimpleXMLElement” and print_r gives me an empty SimpleXMLElement Object. If I could see the data that was contained in the buffer, I would be able to more accurately troubleshoot problems I was having without guessing as I am now.
Thanks.
March 29th, 2007 at 11:33 am
Is there any way to (automatically) get the xmlns:dc URL from the code? I’d like to find the URI in code, but all examples have it hard-coded.
March 29th, 2007 at 12:52 pm
Anonymous,
Sure — just use the getNamespaces method:
April 11th, 2007 at 3:53 am
Kevin,
Thank you for this article. I bought the book previously and this missing topic is just what I’m stuck on. If you could indulge my ignorance, I am still stuck on how to parse multiple namespace items with multiple attributes on the same level. I am trying to parse the yahoo weather rss feed:
<channel>
…
<yweather:location<yweather:location>
…
<yweather:astronomy<yweather:astronomy>
…
<channel>
how do I get to sunrise for instance? $sunrise = ?
Thanks!
November 1st, 2007 at 6:23 am
SimpleXML and namespaces are sooo gay. This should be handled the same as non-namespaced attributes.
December 4th, 2007 at 5:31 am
is there a way to escape “” in xml file only. I want those characters to be recognized in html though.
January 14th, 2008 at 8:29 am
That’s definitely dugg. Thank you!
September 1st, 2008 at 2:48 am
Great article. I have been trying to get a value from a node with a namespace for quite some time now and couldn’t quite figure it out.
Thanks.
December 24th, 2008 at 1:01 am
Kevin, great article! This helped a lot. Question for you: how would I handle multiple namespace prefixes, such as the following:
If I wanted to get the values of attribute one and two of the widget, how would that be done?
Thanks,
- MT
December 24th, 2008 at 1:02 am
Kevin, great article! This helped a lot. Question for you: how would I handle multiple namespace prefixes, such as the following:
[item:group]
[widget:typeA attr1="" attr2=""/]
[/item:group]
If I wanted to get the values of attribute one and two of the widget, how would that be done?
Thanks,
- MT
February 10th, 2009 at 2:23 am
That’s brilliant, thanks a lot! | http://www.sitepoint.com/blogs/2005/10/20/simplexml-and-namespaces/ | crawl-002 | refinedweb | 869 | 70.13 |
That disables code paths like these in keyboard.c: /* Determine how many characters we should *try* to read. */ #ifdef FIONREAD /* Find out how much input is available. */ if (ioctl (fileno (tty->input), FIONREAD, &n_to_read) < 0) And in sysdep.c: #ifdef FIONREAD status = ioctl (fd, FIONREAD, &avail); #else /* no FIONREAD */ /* Hoping it will return -1 if nothing available or 0 if all 0 chars requested are read. */ if (proc_buffered_char[fd] >= 0) avail = 1; else { avail = read (fd, &buf, 1); if (avail > 0) proc_buffered_char[fd] = buf; } #endif /* no FIONREAD */ So perhaps it's some glibc or kernel bug related to ioctl()? This issue is most likely not specific to Emacs. Has anyone contacted upstream about this bug? Here's some unrelated issues that I've run into with thu kFreeBSD port thus far that I haven't found bugs for: * The debian-installer doesn't ask for a keyboard layout. * Almost no docs, for solving e.g. that issue. I only found out about /etc/kbdcontrol.conf by asking around on IRC. This is to be expected, but presumably needs to be fixed before it can go into stable. | https://lists.debian.org/debian-bsd/2010/08/msg00053.html | CC-MAIN-2017-39 | refinedweb | 187 | 74.9 |
Call Tree as Icicle Chart
The Icicle chart is a graphical representation of the stack trace responsible for creating a selected object set.
As well as in Call Tree, calls are shown starting from the first call in the stack descending to the one that directly created the object set. Each call is shown as a horizontal bar which length depends on the size of objects allocated in the call's subtree. The more memory was allocated in the underlying subtree, the longer the bar. It's obvious that over the subtree, function calls (bars) can only reduce in size giving the subtree a look of an icicle.
Use the Icicle chart to get an overview of the stack trace in just one glance. Without digging into the stack trace, you can quickly determine what function in the subtree allocates most of memory.
Example
Consider the example below for better understanding of the Icicle chart.
As you can see, to analyze the Call Tree, you have to expand all subtrees in the stack and interpret the numbers in the Bytes and Bytes in subtree columns. In contrast, just a glance at the Icicle chart allows you to determine main memory generator in the stack - the "green icicle". A click on a function you're interested in will show you the corresponding stack trace (on the right).
The Allocated in function value shows how much memory was allocated directly in the selected function.
The Allocated in subtree value shows the amount of memory allocated in the selected function and all underlying functions in the subtree.
How the Icicle Chart Is Painted
Here are the rules of how dotMemory paints bars (functions) on the Icicle chart:
- Allocated memory
The more memory is allocated by a function, the larger the color value of the corresponding icicle bar. Thus, functions that do not create objects by themselves look pale. Vice versa, functions that allocate memory look darker.
- Branching
Each new call subtree (when a function calls two more other functions) in the stack is painted with a new color.
- System namespace
To help you distinguish system calls from other ones, all functions from the
Systemnamespace are painted in less saturated color. Consider the example below: The
ThemeCatalogRegistrarBaseclass doesn't belong to the
Systemnamespace.
- Functions with almost no allocations
For the sake of simplicity, when a number (more than five) of subsequent calls do not allocate memory*, dotMemory groups them into one bar. Typically, these are system calls that are of no interest for the analysis. In the stack trace, such groups of calls are shown as Folded items and the corresponding bars are painted with the horizontal line pattern.
To expand the bars, use the middle click.
Zooming In and Out
If you want to take a more detailed look at a certain call subtree, you can change the scale of the Icicle chart.
To zoom in on a call,. | https://www.jetbrains.com/help/dotmemory/2017.1/Icicles.html | CC-MAIN-2019-30 | refinedweb | 486 | 60.35 |
I am using Visual Studio 2012 RC (pretty much same as Visual Studio 2010) and I'm having some issues with includes. Never really understood how they work, always having problems, sadly. I have 3 projects
Project1 include paths: "." (project's directory, some files are in folders and it causes a lot include errors)
Project2 include paths: "..\Project1" (first project's directory)
Project3 include paths: ".." (solution directory)
Project2 includes some files from Project1:
#include <File1.h>
Project3 includes some files from Project2:
#include <Project2\File2.h> // we have solution folder in include list, gotta specify project folder
Everything seems fine, but when I try to compile Project3, it cannot find Project1 includes, which are included in Project2.
Adding include path for Project1 in Project3 seems wrong, as I'm not using it directly. I hoped someone could help me out to sort this.
Thank you in advance.
Edited by Ripiz, 02 June 2012 - 03:52 AM. | http://www.gamedev.net/topic/625751-visual-studio-include-issue/ | CC-MAIN-2016-26 | refinedweb | 156 | 58.99 |
import "golang.org/x/exp/shiny/widget/glwidget"
Package glwidget provides a widget containing a GL ES framebuffer.
GL is a widget that maintains an OpenGL ES context.
The Draw function is responsible for configuring the GL viewport and for publishing the result to the widget by calling the Publish method when the frame is complete. A typical draw function:
func(w *glwidget.GL) { w.Ctx.Viewport(0, 0, w.Rect.Dx(), w.Rect.Dy()) w.Ctx.ClearColor(0, 0, 0, 1) w.Ctx.Clear(gl.COLOR_BUFFER_BIT) // ... draw the frame w.Publish() }
The GL context is separate from the one used by the gldriver to render the window, and is only used by the glwidget package during initialization and for the duration of the Publish call. This means a glwidget user is free to use Ctx as a background GL context concurrently with the primary UI drawing done by the gldriver.
NewGL creates a GL widget with a Draw function called when painted.
Publish renders the default framebuffer of Ctx onto the area of the window occupied by the widget.
Package glwidget imports 6 packages (graph). Updated 2017-06-02. Refresh now. Tools for package owners. | http://godoc.org/golang.org/x/exp/shiny/widget/glwidget | CC-MAIN-2017-34 | refinedweb | 196 | 58.08 |
This blog is permanently closed.
For up-to-date information please follow to corresponding WebStorm blog or PhpStorm blog.
Auto update build version is 192 and can’t update itself, just redirects to EAP page, probably, this is misprint in build number.
Linux.
Worked for me. Windows 7.
same.
on OSX 10.8.2
Works fine on OSX 10.8.2
same here on OS X 10.8
Nice updated. But still no file watcher support on PHPStorm. Any idea on when this will be included?
We plan to publish it as a plugin in a few weeks as we get a bit more feedback on the usage. For now you can download zip version of a WebStorm, unzip and then use “Install plugin from disk” option. Sorry for inconvenience.
Then why have you announced it in this version if it is not here?
Here is the plugin:
You can install it from the IDE: Settings -> Plugins -> Browse Repo -> File Watcher -> Right click -> Install
File watch plugin needs more flexibility.
1. I don’t need it to call external command every single character I type. Do it on ‘save’
2. Allow option to watch ALL files, but to compile SOME. Now only way of doing this is external commands.
For example twitter bootstrap sources have ton of LESS files, all should be watched, but only one css should be generated.
Hi Andrew!
1) IntelliJ platform automatically synchronized in-memory content of a file which is used while editing to an actual psychical file on disk. So you don’t actual have a ‘save’ action. For your purposes you can use External Tool and call it manually.
2) You can manage it by creating your own scope and setting it up for a file watcher.
With your example you can use ‘Track only root files’ option of LESS watcher(Settings -> File Watcher -> LESS -> check ‘Track only root files’).
Strange. PhpStorm says the last version is 126.192, but there is no patch file (PS-126.92-126.192-patch-unix.jar).
Thanks for the update! Anyhow, it is generating an error at the moment.
404
Please investigate the issue accordingly. Thanks!
IDE get wrong link for patch, it should be:
I just got the 126.192 update as well…
I would love to have a path for xdebug.file_link_format to open PHP files in PHP Storm.
Please read discussion in. There is described few workarounds for that.
If using Darcula theme, “Mark modified tabs with asterisk” – that asterisk on the modified tab is just too dark (dark blue). One can barely see it. Still not fixed.
But good work, guys!
Think someone made a typo in the check-for-update data, as the IDE reports 126.192 is the latest available build. Everything points to build 162
@Everybody
Sorry, updates fixed.
Also, here is File Watchers temporary link for everybody willing to test.UPD: Please use plugin from the repository.
Please give it at least 25 minutes for CDN to get in sync.
Here is the plugin.
so many features are added with every EAP build, i can’t keep up… is it too much to ask for the user guide to be updated or to have videos added to jetbrainsTV for certain important features like the new PHPUnit functionality or many other important functionalities
good wok by the way
We are working on tutorials for the new stuff.
This is not a good build. Throws a lot of “AssertionError: $IntellijIdeaRulezzz: $IntellijIdeaRulezzz”. I have submitted it via internal reporting tool. Dont now if you guys receive them.
We do receive them and will fix reported problems in future builds.
Agree, for me it totally broke the SVN integration, first EAP which made me downgrade.
Ok, so how do I get LESS compiling to work in Windows? The screencast doesn’t tell me how to get and install the lessc “executable” which I apparently need?
That does only explain how to install it on Linux? Perhaps you mean ?
I must say, I am getting increasingly dissapointed by every new version of phpStorm. It is turning less and less into an IDE (with the emphasis on Integrated) and more into a fancy text-editor which can handle external tools. Simply installing (or even updating) the program and initializing a project is becoming a time-consuming task. Not to mention the fact that I need to install all these external programs to make use of certain functionality (like PEAR for phpBeautifier and lessc-script for compiling, etc.) Not only that, but setting and changing options is far from immediate or inuitive and requires one to go into settings which could just as easily have been implemented as a checkbox or equivalent on the main screen.
Try Simpless. I have used it from a long time. it’s take care of all cS s without any issue.
Nice Release.
Only issue is the field autocompletion/suggestion is annoying.
What about having the suggestion or matched arguments appear above or below the cursor and not inline?
I hope, that Darcula theme will be edited till next EAP. All in all, theme is great. Colors great, but when I place caret at some variable or method the only highlight I see is marks on scrollbar to the right. As I recall in 5.0.4 in normal theme it did add a pinkish background, or gray. Can’t recall for sure. But at least it was visible. Can’t say same about current state of Darcula theme.
Is there a way to disable horizontal scrolling? I can’t find anything relevant for “scroll” or “horizontal” in the Settings.
Additionally, is there any way to refactor a class into a new namespace?
“Use soft wraps”
Is there anyway to disable switching between tabs when scrolling horizontally? I use a trackpoint and I find it hard to scroll in a perfectly vertical direction so I end up accidentally switching to another tab.
I have exactly the same problem.
Thank you for figuring out it’s the horizontal scroll that’s causing tab changes!
I ended up disabling horizontal scrolling, but obviously this is far from an optimal solution…
I have the version of PhpStorm 5.0.4, try upgrading to the latest version and tells me I have the latest version. In the Update screen option change to “Early Acess Program”, however I keep saying that I have the latest version.
How I can upgrade to version “PhpStorm EAP build 126 162 6″ from the IDE is not possible?
Ubuntu 12.04
Thank you.
You do have the latest stable version (you can treat EAP version as Beta, since it is still in development).
Just grab it from EAP page:
How to use/configure file watchers? didn’t find it nor in menu, nor in docs.
You should install File Watcher plugin. Check this screencast:
Is there a way to keep the old icons (menu elements, folder, files …) ?
I really like better the old ones, I suppose is an easy import / library fix, just asking
Nope, unfortunately there’s no way.
Too bad, it would have been nice to keep some of the things we are used to along the way. However this is not the biggest concern at all I would say.
Indeed, most of people like new look or at least get used to it with time
If there’s a particular problem that e.g. icon looks confusing please report the issue to the tracker | http://blog.jetbrains.com/webide/2013/02/phpstorm-6-eap-build-126-162/ | CC-MAIN-2014-15 | refinedweb | 1,249 | 75.71 |
MLRun Serving Graphs¶
Overview¶
MLRun serving graphs let you easily build real-time data processing and advanced model serving pipelines, and deploy them to production quickly and with minimal effort.
The serving graphs can be composed of pre-defined graph blocks (model servers, routers, ensembles, data readers and writers, data engineering tasks, validators, etc.), or from native python classes/functions. Graphs can auto-scale and span multiple function containers (connected through streaming protocols).
Graphs can run inside your IDE or notebook for testing and simulation, and can be deployed as a production serverless pipeline with a single command. Serving graphs are built on top of Nuclio (a real-time serverless engine), MLRun Jobs, MLRun Storey (a native Python async and stream processing engine), and other MLRun facilities.
Accelerate performance and time to production¶
The underlying Nuclio serverless engine uses a high-performance parallel processing engine that maximizes the utilization of CPUs and GPUs, supports 13 protocols and invocation methods (HTTP, Cron, Kafka, Kinesis, etc.), and provides dynamic auto-scaling for HTTP and streaming. Nuclio and MLRun support the full life cycle, including automatic generation of micro-services, APIs, load balancing, logging, monitoring, and configuration management, allowing developers to focus on code and deploy to production faster with minimal work.
In this document¶
Examples¶
Simple model serving router¶
In order to deploy a serving function you need to import or create the serving function, add models to it, and deploy:
import mlrun

# load the sklearn model serving function and add models to it
fn = mlrun.import_function('hub://v2_model_server')
fn.add_model("model1", model_path={model1-url})
fn.add_model("model2", model_path={model2-url})

# deploy the function to the cluster
fn.deploy()

# test the live model endpoint
fn.invoke('/v2/models/model1/infer', body={"inputs": [5]})
The serving function supports the same protocol used in KFServing V2 and the Triton serving framework. In order to invoke a model, use the following URL:
<function-host>/v2/models/model1/infer.
See the serving protocol specification for details.
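For illustration, here is a sketch of what a V2-style request and response body can look like (the field names follow the KFServing/Open Inference V2 convention; the specific input values, response id, and output are hypothetical):

```python
import json

# hypothetical inference request; with a deployed function you would POST
# this body to http://<function-host>/v2/models/model1/infer
request_body = json.dumps({"inputs": [[5.1, 3.5, 1.4, 0.2]]})

# a typical V2-style response carries the model name and an "outputs" list
response_body = '{"id": "1", "model_name": "model1", "outputs": [0]}'

decoded = json.loads(response_body)
print(decoded["model_name"], decoded["outputs"])  # model1 [0]
```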
The model URL is either an MLRun model store object (starting with store://) or the URL of a model directory (in NFS, S3, v3io, Azure, etc., e.g. s3://{bucket}/{model-dir}). Note that credentials may need to be added to the serving function via environment variables or MLRun secrets.
See the scikit-learn classifier example which explains how to create/log MLRun models.
Writing your own serving class¶
You can implement your own model serving or data processing classes. All you need to do is inherit from the base model serving class and add your implementation of load() (download the model file(s) and load the model into memory) and predict() (accept the request payload and return prediction/inference results).
You can override additional methods: preprocess, validate, postprocess, and explain. You can also add a custom API endpoint by adding a method op_xx(event), which can be invoked by calling <model-url>/xx (where the operation is xx); see the model class API.
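As an illustration of the op_xx convention, the following plain-Python sketch (not the actual MLRun router code) shows how an operation name taken from the URL can be mapped to an op_<operation> method; the MyModel class and the myop operation are made up for this example:

```python
class MyModel:
    # adding a method named op_<operation> exposes a custom endpoint;
    # in MLRun it would be reachable via <model-url>/myop
    def op_myop(self, event):
        return {"echo": event}

def dispatch(model, operation, event):
    # resolve "<model-url>/<operation>" to the op_<operation> method
    handler = getattr(model, f"op_{operation}")
    return handler(event)

print(dispatch(MyModel(), "myop", {"x": 1}))  # {'echo': {'x': 1}}
```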
Minimal sklearn serving function example:¶
See the full Model Server example.
To test the function locally, use the mock server:
import mlrun
from sklearn.datasets import load_iris

fn = mlrun.new_function('my_server', kind='serving')

# set the topology/router and add models
graph = fn.set_topology("router")
fn.add_model("model1", class_name="ClassifierModel", model_path="<path1>")
fn.add_model("model2", class_name="ClassifierModel", model_path="<path2>")

# create and use the graph simulator
server = fn.to_mock_server()
x = load_iris()['data'].tolist()
result = server.test("/v2/models/model1/infer", {"inputs": x})
Advanced data processing and serving ensemble¶
MLRun serving graphs can host advanced pipelines that handle event/data processing, ML functionality, or any custom task. In the following example we build an asynchronous pipeline that pre-processes data, passes the data into a model ensemble, and finishes off with post-processing.
Create a new function of type serving from code and set the graph topology to async flow:
import mlrun

function = mlrun.code_to_function("advanced", filename="demo.py",
                                  kind="serving", image="mlrun/mlrun",
                                  requirements=['storey'])
graph = function.set_topology("flow", engine="async")
Build and connect the graph (DAG) using the custom function and classes and plot the result.
We add steps using the step.to() method (which adds a new step after the current one), or using the graph.add_step() method.
We use graph.error_handler() (applies to all steps) or step.error_handler() (applies to a specific step) if we want errors from the graph or from a step to be fed into a specific step (the catcher).
We can specify which step is the responder (returns the HTTP response) using the step.respond() method. If we don't specify a responder, the graph will be non-blocking.
# use a built-in storey class or our custom Echo class to create and link Task steps
graph.to("storey.Extend", name="enrich", _fn='({"tag": "something"})') \
     .to(class_name="Echo", name="pre-process", some_arg='abc').error_handler("catcher")

# add an Ensemble router with two child models (routes); the "*" prefix marks it as a router class
router = graph.add_step("*mlrun.serving.VotingEnsemble", name="ensemble", after="pre-process")
router.add_route("m1", class_name="ClassifierModel", model_path=path1)
router.add_route("m2", class_name="ClassifierModel", model_path=path2)

# add the final step (after the router) which handles post-processing and responds to the client
graph.add_step(class_name="Echo", name="final", after="ensemble").respond()

# add an error handling step that runs only when/if the "pre-process" step fails (keep after="")
graph.add_step(handler="error_catcher", name="catcher", full_event=True, after="")

# plot the graph (using Graphviz)
graph.plot(rankdir='LR')
Create a mock (test) server and run a test. Use wait_for_completion() to wait for the async event loop to complete:
server = function.to_mock_server()
resp = server.test("/v2/models/m2/infer", body={"inputs": data})
server.wait_for_completion()
Finally, you can deploy the graph as a real-time Nuclio serverless function with one command:
function.deploy()
Note
If you test a Nuclio function that has a serving graph with the async engine via the Nuclio UI, the UI may not display the logs in the output.
NLP processing pipeline with real-time streaming
In some cases you want to split the processing across multiple functions and use streaming protocols to connect those functions. In the following example, the data processing is done in the first function/container and the NLP processing in the second function (for example, if only that part needs a GPU).
See the full notebook example
# define a new real-time serving function (from code) with an async graph
fn = mlrun.code_to_function("multi-func", filename="./data_prep.py", kind="serving", image="mlrun/mlrun")
graph = fn.set_topology("flow", engine="async")

# define the graph steps (DAG); the ">>" steps are queues (streams) connecting the functions
graph.to(name="processing", handler="apply_processing")\
     .to(">>", "q1", path=internal_stream)\
     .to(name="nlp", class_name="ApplyNLP", function="enrich")\
     .to(">>", "output_stream", path=out_stream)

# specify the "enrich" child function, add extra package requirements
child = fn.add_child_function('enrich', './nlp.py', 'mlrun/mlrun')
child.spec.build.commands = ["python -m pip install spacy", "python -m spacy download en_core_web_sm"]
graph.plot()
Currently, queues support only the Iguazio v3io stream; Kafka support will be added soon.
The Graph State Machine
Graph overview and usage
MLRun graphs enable building and running DAGs (directed acyclic graphs). The first graph element accepts an Event object, transforms/processes the event, and passes the result to the next steps in the graph. The final result can be written out to some destination (file, DB, stream, etc.) or returned to the caller (one of the graph steps can be marked with .respond()).
The graph can host 4 types of steps:
Task – a simple execution step that follows other steps and runs a function, class handler, or REST API call. Tasks use one of many pre-built operators, readers, and writers; they can be standard Python functions or custom functions/classes, or can be an external REST API (the special $remote class).
Router – emulates a smart router with routing logic and multiple child routes/models (each one a task). The basic routing logic routes to the child routes based on the Event.path; more advanced or custom routing can be used. For example, the Ensemble router sends the event to all child routes in parallel, aggregates the results, and responds (see the example).
Queue – a queue or stream that accepts data from one or more source steps and publishes to one or more output steps. Queues are best used to connect independent functions/containers. A queue can run in-memory or be implemented using a stream, which allows it to span processes/containers.
Flow – a flow hosts the DAG with multiple connected tasks, routers, or queues. It starts with some source (HTTP request, stream, data reader, cron, etc.) and follows the execution steps according to the graph layout. A flow can have branches (in async mode), can produce results asynchronously (e.g. write to an output stream), or can respond synchronously when one of the steps is marked as the responder (step.respond()).
The graph server has two modes of operation (topologies):
router topology (default) – a minimal configuration with a single router and child tasks/routes. This can be used for simple model serving or single-hop configurations.
flow topology – a full graph/DAG. The flow topology is implemented using two engines: async (the default), which is based on Storey and an async event loop, and sync, which supports a simple sequence of steps.
Example for setting the topology:
graph = function.set_topology("flow", engine="async")
Graph context and Event objects
The Event object
The graph state machine accepts an Event object (similar to a Nuclio Event) and passes it along the pipeline. An Event object hosts the event body along with other attributes such as path (HTTP request path), method (GET, POST, ...), and id (unique event ID).
In some cases the events represent a record with a unique key, which can be read/set through event.key. Records also have an associated event.time, which by default is the arrival time but can also be set by a step.
Task steps are called with the event.body by default. If a task step needs to read or set other event elements (key, path, time, ...), set the task's full_event argument to True.
Task steps support optional input_path and result_path attributes, which control which portion of the event is sent as input to the step, and where the returned result is stored. For example, given an event body {"req": {"body": "x"}}, input_path="req.body" and result_path="resp", the step gets "x" as its input, and the event after the step is {"req": {"body": "x"}, "resp": <step output>}. Note that input_path and result_path do not work together with full_event=True.
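The addressing rules above can be illustrated in plain Python (this sketch shows the semantics only, and is not MLRun's actual implementation):

```python
# Plain-Python sketch of input_path / result_path addressing (illustration
# only, not MLRun code): walk the dotted path into the body, run the step on
# the selected value, and store the result back under result_path.
def run_step(step, body, input_path=None, result_path=None):
    value = body
    if input_path:
        for part in input_path.split("."):  # "req.body" -> body["req"]["body"]
            value = value[part]
    result = step(value)
    if result_path:
        body[result_path] = result          # update the event under "resp"
        return body
    return result

event = {"req": {"body": "x"}}
after = run_step(str.upper, event, input_path="req.body", result_path="resp")
# after == {"req": {"body": "x"}, "resp": "X"}
```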
The Context object
Step classes are initialized with a context object (when they have context in their __init__ args). The context is used to pass data and for interfacing with system services. The context object has the following attributes and methods.
Attributes:
logger - central logger (Nuclio logger when running in Nuclio)
verbose - will be True if in verbose/debug mode
root - the graph object
current_function - when running in a distributed graph, the current child function name
Methods:
get_param(key, default=None) - get graph parameter by key, parameters are set at the serving function (e.g.
function.spec.parameters = {"param1": "x"})
get_secret(key) - get the value of a project/user secret
get_store_resource(uri, use_cache=True) - get mlrun store object (data item, artifact, model, feature set, feature vector)
get_remote_endpoint(name, external=False) - return the remote nuclio/serving function http(s) endpoint given its [project/]function-name[:tag]
Response(headers=None, body=None, content_type=None, status_code=200) - create nuclio response object, for returning detailed http responses
Example of using the context:
if self.context.verbose:
    self.context.logger.info('my message', some_arg='text')
x = self.context.get_param('x', 0)
Error handling and catchers
Graph steps may raise an exception, and you may want an error-handling flow. It is possible to specify an exception-handling step/branch that is triggered on error. The error-handler step receives the event that entered the failed step, with two extra attributes: event.origin_state indicates the name of the failed step, and event.error holds the error string.
Use graph.error_handler() (applies to all steps) or step.error_handler() (applies to a specific step) if you want the error from the graph or from a step to be fed into a specific step (the catcher).
Example of setting an error catcher per step:
graph.add_step("MyClass", name="my-class", after="pre-process").error_handler("catcher")
graph.add_step("ErrHandler", name="catcher", full_event=True, after="")
Note: additional steps may follow the catcher step; see the full example above.
Exception stream: graph errors/exceptions can be pushed into a special error stream. This is very convenient for distributed and production graphs. Set the exception stream address (using a v3io stream URI):
function.spec.error_stream = 'users/admin/my-err-stream'
Implement your own task class or function
The graph executes built-in task classes or user-provided classes and functions. The task parameters include the following:
class_name (str) – the relative or absolute class name
handler (str) – the function handler (if class_name is not specified, it is the function handler)
**class_args – a set of class __init__ arguments
You can use any Python function by specifying the handler name (e.g. handler=json.dumps). The function is triggered with the event.body as the first argument, and its result is passed to the next step.
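The handler contract can be seen with a pair of plain functions (illustration only; the function names here are made up):

```python
# Each handler receives the previous step's output (starting from event.body)
# and its return value feeds the next step; chaining two handlers by hand:
def enrich(body):
    body["tag"] = "something"
    return body

def summarize(body):
    return {"keys": sorted(body)}

result = summarize(enrich({"x": 1}))
# result == {"keys": ["tag", "x"]}
```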
Instead, you can use classes, which can also store some step state/configuration and separate the one-time init logic from the per-event logic. The classes are initialized with the class_args; if the class __init__ args contain context or name, those are initialized with the graph context and the step name.
The class_name and handler specify a class/function name in globals() (i.e. this module) by default, or they can be full paths to the class (module.submodule.class), e.g. storey.WriteToParquet. You can also pass the module as an argument to functions such as function.to_mock_server(namespace=module); in that case the class or handler names are also searched for in the provided module.
When using classes, the class event handler is invoked on every event with the event.body. If the task step's full_event parameter is set to True, the handler is invoked with, and returns, the full event object. If you don't specify the class event handler, the class do() method is invoked.
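Putting these rules together, a minimal custom step class might look like the following sketch (the class and argument names are made up for illustration):

```python
# context and name are injected because they appear in __init__; "tag" is an
# ordinary class_arg; do() is the default per-event handler and receives
# event.body (unless full_event=True, in which case it gets the whole Event).
class AddTag:
    def __init__(self, context=None, name=None, tag="demo"):
        self.context = context
        self.name = name
        self.tag = tag

    def do(self, event):
        event["tag"] = self.tag
        return event

step = AddTag(name="enrich", tag="x")
out = step.do({"a": 1})
# out == {"a": 1, "tag": "x"}
```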
If you need to implement async behavior, subclass storey.MapClass.
Building distributed graphs
Graphs can be hosted by a single function (using zero to N containers), or span multiple functions, where each function can have its own container image and resources (replicas, GPUs/CPUs, volumes, etc.). A graph has a root function, which is where you configure triggers (HTTP, incoming stream, cron, ...), and optional downstream child functions.
You can specify the function attribute in Task or Router steps to indicate where the step should run. When the function attribute is not specified, the step runs on the root function. function="*" means the step can run on any of the child functions.
Steps on different functions should be connected using a Queue step (a stream).
Adding a child function:
fn.add_child_function('enrich', './entity_extraction.ipynb', image='mlrun/mlrun', requirements=["storey", "sklearn"])
see a complete example
Today's project involves automatically uploading electrical metering data to an FTPS server (explicit FTP over TLS, otherwise known as ESFTP). Shouldn't be a problem, since Python supports FTPS out of the box. Only it doesn't work. Here's the code:
import ftplib

ftp = ftplib.FTP_TLS('host', 'user', 'password')
ftp.cwd('directory')
with open('filename', 'rb') as f:
    ftp.storbinary('STOR filename', f)
After a successful initial connection, the data transfer connection fails:
*cmd* 'PASV'
*resp* '227 Entering Passive Mode (10,200,0,100,255,150).'
*cmd* 'CWD collect/CSV'
*resp* '250 CWD command successful'
*cmd* 'TYPE I'
*resp* '200 Type set to I'
*cmd* 'PASV'
*resp* '227 Entering Passive Mode (10,200,0,100,255,123).'
Traceback (most recent call last):
  File "metering/__main__.py", line 143, in main
    ftp.storbinary('STOR {}'.format(filename), stream)
  File ".../lib/python3.5/ftplib.py", line 503, in storbinary
    with self.transfercmd(cmd, rest) as conn:
  File ".../lib/python3.5/ftplib.py", line 398, in transfercmd
    return self.ntransfercmd(cmd, rest)[0]
  File ".../lib/python3.5/ftplib.py", line 793, in ntransfercmd
    conn, size = FTP.ntransfercmd(self, cmd, rest)
  File ".../lib/python3.5/ftplib.py", line 360, in ntransfercmd
    source_address=self.source_address)
  File ".../lib/python3.5/socket.py", line 711, in create_connection
    raise err
  File ".../lib/python3.5/socket.py", line 702, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host
The most confusing aspect was that the transfer worked perfectly well via FileZilla or Gnome "Connect to Server".
I eventually noticed a message in the FileZilla logs,
Server sent passive reply with unroutable address. Using server address instead. It turns out that the FTPS server was mis-configured and was replying to the
PASV command with an internal IP address that was not accessible from the public internet. It seems that this is a common enough configuration issue that some FTP clients detect the problem and use the existing server address instead. Python's FTP client doesn't do this though.
The solution was to sub-class the
FTP_TLS class and force it to ignore the response:
class FTP_TLS_IgnoreHost(ftplib.FTP_TLS):
    def makepasv(self):
        _, port = super().makepasv()
        return self.host, port

ftp = FTP_TLS_IgnoreHost('host', 'user', 'password')
The makepasv method parses the remote server's response to PASV and returns a host and a port. We're extending this method to throw away the returned host and use the one we already have from the original connection. The underscore is just a convention to indicate that we don't care about the first item in the tuple, the host (it's special in some languages like Prolog, but not in Python). Note of course that this approach doesn't attempt to detect unroutable addresses like FileZilla, it just assumes.
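If you did want FileZilla-style detection rather than always overriding, the standard library's ipaddress module can spot obviously unroutable (private/loopback) addresses; a sketch of that variant:

```python
import ftplib
import ipaddress

class FTP_TLS_FixUnroutable(ftplib.FTP_TLS):
    def makepasv(self):
        host, port = super().makepasv()
        # Only fall back to the control-connection host when the server
        # advertised a private (unroutable-from-the-internet) address.
        if ipaddress.ip_address(host).is_private:
            host = self.host
        return host, port
```

The 10.200.0.100 address in the log above sits in the private 10.0.0.0/8 range, so this variant would catch it.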
There are no doubt third-party packages that provide this functionality and more, but for such a small extension, it wasn't worth the hassle.
6 years, 9 months ago.
Possible to trigger Ticker interface using argument?
Is there any way to start a Ticker interface using an argument?
I know that InterruptIn does that, but I need my event-based function to repeat itself periodically when an event is triggered.
I tried using mbed-rtos, but my program always hangs, even when I did not include the header file in my main file.
1 Answer
6 years, 9 months ago.
Hi, have you checked the example in the handbook?

#include "mbed.h"

Ticker flipper;
DigitalOut led1(LED1);
DigitalOut led2(LED2);

void flip() {
    led2 = !led2;
}

int main() {
    led2 = 1;
    flipper.attach(&flip, 2.0); // the address of the function to be attached and the interval (2 seconds)

    // spin in a main loop; flipper will interrupt it to call flip
    while(1) {
        led1 = !led1;
        wait(0.2);
    }
}
Is this what you want to do?
Greetings
No sorry.
For example
#include "mbed.h"

Ticker flipper;
DigitalOut led1(LED1);
DigitalOut led2(LED2);
DigitalIn enable(p30);

void flip() {
    led2 = !led2;
}

int main() {
    led2 = 1;
    flipper.attach(&flip, 2.0, enable); // Ticker will only trigger when an event occurs on DigitalIn p30.

    while(1) {
        led1 = !led1;
        wait(0.2);
    }
}
That isn't possible, although you could make your own library which does it. Alternatively, in your interrupt handler you could simply check: if enable == 1, do that.

If you want, for example, flip to start being called after a button is pressed once, you can simply only attach it after the button has been pressed. posted by 25 Jan 2013
Hi, what do you mean it isn't possible? I tried the example from the handbook on my mbed and it works.

The led1 is blinking and the led2 blinks at the period that is specified (2 seconds).

Greetings posted by 25 Jan 2013
Your code is possible, but what WB IsMe would like to do (in the post above mine) isn't possible (without making a custom library to do it). posted by 25 Jan 2013
OK, then what you want is the Ticker function to be activated when some event occurs (like pressing a button). Is that right? If so, try this:

#include "mbed.h"

InterruptIn button(p5);
Ticker flipper;
DigitalOut led1(LED1);
DigitalOut led2(LED2);
int isPressed;

void flip() {
    led2 = !led2;
}

void eventFunction() {
    if(!isPressed) {
        flipper.attach(&flip, 2.0);
        isPressed = 1;
    } else {
        flipper.detach();
        isPressed = 0;
    }
}

int main() {
    isPressed = 0;
    led2 = 0;
    button.rise(&eventFunction);
    while(1) {
        led1 = !led1;
        wait(0.2);
    }
}

Greetings posted by 25 Jan 2013
On 29 Nov 01 at 16:49, jarmo kettunen wrote:
> gcc -O2 -Wall -DGLIBC_HEADERS -c iwlib.c
> In file included from iwlib.c:11:
> iwlib.h:91:8: warning: extra tokens at end of #endif directive
> iwlib.h:96:8: warning: extra tokens at end of #endif directive
> In file included from iwlib.h:42,
>                  from iwlib.c:11:
> /usr/include/linux/in.h:25: conflicting types for `IPPROTO_IP'
> /usr/include/netinet/in.h:32: previous declaration of `IPPROTO_IP'

iwlib.h (or any other userspace app) must not include <linux/*> and <asm/*> files. If it needs access to them for accessing the ioctl API (or anything else), the maintainer must create a stripped-down copy of these headers and distribute them with the app - which is btw the only correct way, as otherwise you cannot create a userspace app which will support more than one API version (and iw used a couple of incompatible APIs in the past...).
Platforms without a native SDK can use the Windows Azure Service Bus brokered messaging services (i.e. Topics and Queues) as well. It can be a little tricky, as it requires the use of the Windows Azure REST API, and there aren't a ton of public examples of how to do it! So in this blog post, I'll show you how to send a message to a Service Bus Topic from Salesforce.com. Note that this sequence resembles how you'd do this on ANY platform that can't use a Windows Azure SDK.
Creating the Topic and Subscription
First, I needed a Topic and Subscription to work with. Recall that Topics differ from Queues in that a Topic can have multiple subscribers. Each subscription (which may filter on message properties) has its own listener and gets their own copy of the message. In this fictitious scenario, I wanted users to submit IT support tickets from a page within the Salesforce.com site.
I could create a Topic in a few ways. First, there’s the Windows Azure portal. Below you can see that I have a Topic called “TicketTopic” and a Subscription called “AllTickets”.
If you’re a Visual Studio developer, you can also use the handy Windows Azure extensions to the Server Explorer window. Notice below that this tool ALSO shows me the filtering rules attached to each Subscription.
With a Topic and Subscription set up, I was ready to create a custom VisualForce page to publish to it.
Code to Get an ACS Token
Before I could send a message to a Topic, I needed to get an authentication token from the Windows Azure Access Control Service (ACS). This token goes into the request header and lets Windows Azure determine if I’m allowed to publish to a particular Topic.
In Salesforce.com, I built a custom VisualForce page with the markup necessary to submit a support ticket. The final page looks like this:
I also created a custom Controller that extended the native Accounts Controller and added an operation to respond to the “Submit Ticket” button event. The first bit of code is responsible for calling ACS and getting back a token that can be included in the subsequent request. Salesforce.com extensions are written in a language called Apex, but it should look familiar to any C# or Java developer.
Http h = new Http();
HttpRequest acReq = new HttpRequest();
HttpRequest sbReq = new HttpRequest();

// define endpoint and encode password
String acUrl = '';
String encodedPW = EncodingUtil.urlEncode(sbUPassword, 'UTF-8');
acReq.setEndpoint(acUrl);
acReq.setMethod('POST');

// choose the right credentials and scope
acReq.setBody('wrap_name=demouser&wrap_password=' + encodedPW + '&wrap_scope=');
acReq.setHeader('Content-Type', 'application/x-www-form-urlencoded');

HttpResponse acRes = h.send(acReq);
String acResult = acRes.getBody();

// clean up result to get usable token
String suffixRemoved = acResult.split('&')[0];
String prefixRemoved = suffixRemoved.split('=')[1];
String decodedToken = EncodingUtil.urlDecode(prefixRemoved, 'UTF-8');
String finalToken = 'WRAP access_token=\"' + decodedToken + '\"';
This code block makes an HTTP request to the ACS endpoint and manipulates the response into the token format I needed.
Code to Send the Message to a Topic
Now comes the fun stuff. Here’s how you actually send a valid message to a Topic through the REST API. Below is the complete code snippet, and I’ll explain it further in a moment.
// set endpoint using this scheme: https://<namespace>.servicebus.windows.net/<topic name>/messages
String sbUrl = '';
sbReq.setEndpoint(sbUrl);
sbReq.setMethod('POST');

// sending a string, and content type doesn't seem to matter here
sbReq.setHeader('Content-Type', 'text/plain');

// add the token to the header
sbReq.setHeader('Authorization', finalToken);

// set the Brokered Message properties
sbReq.setHeader('BrokerProperties', '{ \"MessageId\": \"{'+ guid +'}\", \"Label\":\"supportticket\"}');

// add a custom property that can be used for routing
sbReq.setHeader('Account', myAcct.Name);

// add the body; here doing it as a JSON payload
sbReq.setBody('{ \"Account\": \"'+ myAcct.Name +'\", \"TicketType\": \"'+ TicketType +'\", \"TicketDate\": \"'+ SubmitDate +'\", \"Description\": \"'+ TicketText +'\" }');

HttpResponse sbResult = h.send(sbReq);
So what’s happening here? First, I set the endpoint URL. In this case, I had to follow a particular structure that includes “/messages” at the end. Next, I added the ACS token to the HTTP Authorization header.
After that, I set the brokered messaging header. This fills up a JSON-formatted BrokerProperties structure that includes any values you needed by the message consumer. Notice here that I included a GUID for the message ID and provided a “label” value that I could access later. Next, I defined a custom header called “Account”. These custom headers get added to the Brokered Message’s “Properties” collection and are used in Subscription filters. In this case, a subscriber could choose to only receive Topic messages related to a particular account.
Finally, I set the body of the message. I could send any string value here, so I chose a lightweight JSON format that would be easy to convert to a typed object on the receiving end.
With all that, I was ready to go.
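Since nothing in this flow is Salesforce-specific, the same two calls can be sketched with any HTTP client. Here is a hedged Python illustration; the namespace, issuer name, and the ACS endpoint format (the historical WRAPv0.9 scheme) are assumptions you would adjust for your own account:

```python
import json
import urllib.parse
import urllib.request

def build_acs_request(namespace, issuer, key, topic):
    # POST form-encoded WRAP credentials to ACS; the response body contains
    # wrap_access_token=<url-encoded token>&wrap_access_token_expires_in=...
    acs_url = "https://{0}-sb.accesscontrol.windows.net/WRAPv0.9/".format(namespace)
    scope = "http://{0}.servicebus.windows.net/{1}".format(namespace, topic)
    body = urllib.parse.urlencode(
        {"wrap_name": issuer, "wrap_password": key, "wrap_scope": scope}
    ).encode("ascii")
    return urllib.request.Request(acs_url, data=body, method="POST")

def build_send_request(namespace, topic, token, payload, custom_props):
    # POST the message to https://<namespace>.servicebus.windows.net/<topic>/messages
    url = "https://{0}.servicebus.windows.net/{1}/messages".format(namespace, topic)
    headers = {
        "Authorization": token,  # 'WRAP access_token="..."'
        "Content-Type": "text/plain",
        "BrokerProperties": json.dumps({"Label": "supportticket"}),
    }
    headers.update(custom_props)  # e.g. {"Account": "Acme"}, used by subscription filters
    return urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"), headers=headers, method="POST"
    )
```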
Receiving From Topic
To get a message into the Topic, I submitted a support ticket from the VisualForce page.
I immediately switched to the Windows Azure portal to see that a message was now queued up for the Subscription.
How can I retrieve this message? I could use the REST API again, but let's show how we can mix and match techniques. In this case, I used the Windows Azure SDK for .NET to retrieve and delete a message from the Topic. I also referenced the excellent JSON.NET library to deserialize the JSON object to a .NET object. The tricky part was figuring out the right way to access the message body of the Brokered Message. I wasn't able to simply pull it out as a String value, so I went with a Stream instead. Here's the complete code block:
// pull Service Bus connection string from the config file
string connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];

// create a SubscriptionClient for interacting with the Topic
SubscriptionClient client = SubscriptionClient.CreateFromConnectionString(connectionString, "tickettopic", "alltickets");

// try and retrieve a message from the Subscription
BrokeredMessage m = client.Receive();

// if null, don't do anything interesting
if (null == m)
{
    Console.WriteLine("empty");
}
else
{
    // retrieve and show the Label value of the BrokeredMessage
    string label = m.Label;
    Console.WriteLine("Label - " + label);

    // retrieve and show the custom property of the BrokeredMessage
    string acct = m.Properties["Account"].ToString();
    Console.WriteLine("Account - " + acct);

    Ticket t;

    // yank the BrokeredMessage body as a Stream
    using (Stream c = m.GetBody<Stream>())
    {
        using (StreamReader sr = new StreamReader(c))
        {
            // get a string representation of the stream content
            string s = sr.ReadToEnd();

            // convert JSON to a typed object (Ticket)
            t = JsonConvert.DeserializeObject<Ticket>(s);
            m.Complete();
        }
    }

    // show the ticket description
    Console.WriteLine("Ticket - " + t.Description);
}
Pretty simple. Receive the message, extract interesting values (like the “Label” and custom properties), and convert the BrokeredMessage body to a typed object that I could work with. When I ran this bit of code, I saw the values we set in Salesforce.com.
Summary
The Windows Azure Service Bus brokered messaging services provide a great way to connect distributed systems. The store-and-forward capabilities are key when linking systems that span clouds or link the cloud to an on-premises system. While Microsoft provides a whole host of platform-specific SDKs for interacting with the Service Bus, there are platforms that have to use the REST API instead. Hopefully this post gave you some insight into how to use this API to successfully publish to Service Bus Topics from virtually ANY software platform.
Categories: .NET, Cloud, Salesforce.com, Windows Azure, Windows Azure Service Bus
Good stuff, thanks. But I keep getting a 401 – Invalid authorization token signature – when I send a message. The SWT token looks fine as far as I can tell, so I think it’s a problem with the way I’ve set up ACS. Any tips on that?
Hey Mike. Are you using the password or the key to authenticate? I sometimes get a messed up ACS account where I’m passing in the wrong credentials! Look at the ACS portal and try both values.
Is there also a way of doing this the other way round? So my Azure app adds a message to a Topic and a third-party customer who subscribes to that Topic using an HTTP endpoint (using a non-.NET system) receives this message?
Definitely. You can use the REST API to also pull from a Topic subscription. You could do this synchronously on demand, or have a background job that does this occasionally.
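For completeness, here is a hedged Python sketch of a destructive read from a subscription via the REST API. The "Receive and Delete" operation is a DELETE against the subscription's messages/head resource; the names below are placeholders:

```python
import urllib.request

def build_receive_request(namespace, topic, subscription, token, timeout=60):
    # DELETE https://<ns>.servicebus.windows.net/<topic>/subscriptions/<sub>/messages/head
    # removes and returns the next message (Receive and Delete semantics).
    url = (
        "https://{0}.servicebus.windows.net/{1}/subscriptions/{2}/messages/head"
        "?timeout={3}"
    ).format(namespace, topic, subscription, timeout)
    return urllib.request.Request(url, headers={"Authorization": token}, method="DELETE")
```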
Hi Richard
Thanks for your article. It was really helpful. I'm using the REST API to receive a message according to the above document on MSDN. However, I'm getting "The specified HTTP verb (POST) is not valid", an HTTP 400 error, when I try to retrieve the message. Any idea why? I'm using SAS for authentication. The send message works fine.
Cheers
Manoj
You are using Azure service bus in the cloud. If I were to use on premise Service Bus for windows server and I have the port 9355 open and ssl enabled, how can I send a message to the queue from salesforce and also pick up from the queue? Do I use Shared Access Signature Authentication? Is that similar to adding ACS token to the HTTP Authorization header?
Wow, not sure about using on-prem Service Bus with this. So you have a public IP and those ports open?
That’s the plan. We are planning to test it out. I was wondering if you had any suggestions.
Nothing comes to mind. If the ports are open, I don’t see any obvious reason it wouldn’t work. That said, you’ll want to be careful about security and opening up a service like that to public internet traffic.
@Chandra – Did you have any luck getting this to work?
No, We didn’t even try. We had to drop this plan.
Do you know what I should configure, security-wise, in Salesforce (in addition to the workflow rule and outbound message) to send the XML message to an Azure topic or queue?
You definitely have to make sure to have the endpoint added to the remote site settings
How to Publish Array Data for Multiple Servos on Arduino
I'm trying to publish an array in Python that is subscribed to by code on an Arduino that has callbacks for an Adafruit PWM servo controller. The code on the Arduino compiles and uploads to an Arduino Uno just fine. I'm not sure how to publish a multiarray for the following example: 3 servos that each have different integer angles between 0 and 180. I'm getting the following error messages when I run:
TypeError: Invalid number of arguments, args should be ['layout', 'data'] args are('{data: [20,50,100]}',)
How should I be publishing the multiarray in this example?
Python Publishing Code:
print "reco_event_servo_pwm: set up data values for servos"
servo_pub1 = rospy.Publisher('servo_pwm', UInt16MultiArray, queue_size=10)
n = 3
while n >= 0:
    servo_pub1.publish('{data: [20,50,100]}') # THIS IS THE STATEMENT THAT HAS ERRORS
    rate.sleep()
    n = n - 1
Arduino Code:
#if (ARDUINO >= 100)
  #include <Arduino.h>
#else
  #include <WProgram.h>
#endif

#include <Servo.h>
#include <ros.h>
#include <std_msgs/UInt16MultiArray.h>
#include <std_msgs/String.h>
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>

/////////////////////////////////////////////////////////////////////////////////

ros::NodeHandle nh;

// called this way, it uses the default address 0x40
Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver();

#define SERVOMIN 200 // this is the 'minimum' pulse length count (out of 4096)
#define SERVOMAX 400 // this is the 'maximum' pulse length count (out of 4096)

// our servo # counter
uint8_t servonum = 0;

void servo_ctlr_cb(const std_msgs::UInt16MultiArray& cmd_msg) {
  // servo1.write(cmd_msg.data); // set servo angle, should be from 0-180
  for (int i = 0; i < 3; i++) { // run for all servos for TESTing
    // PWM signal: channel, state where it goes high, state where it goes low (0-4095, deadband 351-362)
    pwm.setPWM(i, 0, cmd_msg.data[i]);
  }
}

ros::Subscriber<std_msgs::UInt16MultiArray> sub1("servo_pwm", servo_ctlr_cb);

void setup() {
  Serial.begin(9600);
  nh.initNode();
  nh.subscribe(sub1);

  pwm.begin();
  pwm.setPWMFreq(60); // Analog servos run at ~60 Hz updates

  for (int j = 0; j < 8; j++) { // initialize every thruster via channel (0-7) with a for-loop
    pwm.setPWM(j, 0, 351);
  }
}

// helper to set the pulse length from a time value
void setServoPulse(uint8_t n, double pulse) {
  double pulselength;
  pulselength = 1000000; // 1,000,000 us per second
  pulselength /= 60;     // 60 Hz
  pulselength /= 4096;   // 12 bits of resolution
  pulse *= 1000;
  pulse /= pulselength;
  pwm.setPWM(n, 0, pulse);
}

void loop() {
  nh.spinOnce();
  delay(20);
}
Hi there. Did you find a solution? I am facing something similar... I need to call the instruction to publish data from another void function, but it seems that the Python subscriber is not printing anything. With rqt_graph I can see that rosserial is publishing and the Python node is subscribing... but I can't see anything on the screen from rospy loginfo.
@subarashi I've moved your answer to a comment. Please remember that this is not a forum. Answers should be answers and anything else should really be a comment.
Bro, thanks, but if you won't help me figure out my problem, why are you taking time to move my comment? LOL
@subarashi It only takes a moment to move it and I'm just informing you about how the site works, which will help you (and others) in the long run
urn:lsid:ibm.com:blogs:entries-84a1e68f-fa2b-4d83-a240-5f565e7d6248 Mainframe Performance Topics with Martin Packer - Tags - rexx I'm a well-known mainframe performance guy, with almost 30 years of experience helping customers manage systems. I also dabble in lots of other technology. I've sought to widen the Performance role, incorporating aspects of infrastructural architecture. 19 2015-03-24T09:19:53-04:00 IBM Connections - Blogs urn:lsid:ibm.com:blogs:entry-a048c817-0009-445b-bb05-26213b0b4026 Refactoring REXX - Temporarily Inlined Functions MartinPacker 11000094DH active Comment Entries application/atom+xml;type=entry Likes 2013-07-16T09:52:14-04:00 2013-07-16T09:52:14-04:00 <p>You could consider this another in the Small Programming Enhancement (SPE) <img src="" class="smiley" alt=":-)" title=":-)" /> series. You’ll probably also notice I’ve been doing quite a lot of REXX programming recently. Anyway, here’s a tip for refactoring code I like.</p> <p>Suppose you have a line of code:</p> <pre><code>norm_squared=(i-j)*(i-j) </code></pre> <p>that you want to turn into a function.</p> <p>No biggie:</p> <pre><code>norm2: procedure parse arg x,y return (x-y)*(x-y) </code></pre> <p>and call it with:</p> <pre><code>norm_squared=norm2(i,j) </code></pre> .</p> <p>Try the following, though:</p> <pre><code>/* REXX */ do i=1 to 10 do j=1 to 10 say i j norm2(i,j) /* do never */ if 0 then do norm2: procedure parse arg x,y return (x-y)*(x-y) /* end do never */ end say "After procedure definition" end end exit </code></pre> <p <strong>did</strong> do this but it immediately failed once I tried it inside a loop: You learn and move on.)</p> <p>What this enables you to do is to develop the function “inline” and then you can move it later - to another candidate invocation or indeed to the end of the member (or even to a separate member).</p> <p>It saves a lot of scrolling about and encourages refactoring into separate routines. 
It’s not the same as an <a href="">anonymous function</a> but it’s heading in that direction, in terms of utility.</p> <p. <img src="" class="smiley" alt=":-)" title=":-)" /></p> You could consider this another in the Small Programming Enhancement (SPE) series. You’ll probably also notice I’ve been doing quite a lot of REXX programming recently. Anyway, here’s a tip for refactoring code I like. Suppose you... 1 0 14098c8484e-5285-4df6-9c57-1f5413905ade Sorting In REXX Made Easy MartinPacker 11000094DH active Comment Entries application/atom+xml;type=entry Likes... 0 0 380-d02a3e00-3d30-431f-a7ed-5d0c18f7d7d3 Min And Max Of Tokens In A String MartinPacker 11000094DH active Comment Entries application/atom+xml;type=entry Likes 2013-07-14T06:01:33-04:00 2013-07-14T06:01:33-04:00 <p>A couple of days ago I had a need to take a REXX string comprising space-separated numbers and find their minimum and maximum values. Here's the technique I used.</p> <p>(When I say "space-separated" there can be one or more spaces between the numbers, but there has to be at least one.)</p> <p>The solution has three components:</p> <ol> <li>The REXX SPACE function - to turn the list into a comma-separated string of numbers. (The second parameter is the number of so-called spaces to separate tokens with. The third is the actual character to use - in my case a comma.)</li> <li>The REXX MIN (or MAX) function to compute the minimum (or maximum) value from this comma-separated string. These functions take a set of parameters of arbitrary length and do the maths on them. 
Parameters are separated by commas, hence the need to use SPACE to make it so.</li>
<li>INTERPRET to glue 1 and 2 together.</li>
</ol>
<p>My need is relatively low volume, so the "health warning" about INTERPRET's performance is hardly relevant for <strong>my</strong> use case.</p>
<p>Here's the code:</p>
<pre><code>/* Return min and max value of string of space-separated numbers */
minAndMax: procedure
parse arg list
comma_list=space(list,,",")
interpret "minimum=min("comma_list")"
interpret "maximum=max("comma_list")"
return minimum maximum
</code></pre>
<p>It's relatively straightforward, taking a list of numbers and returning the minimum and maximum. You'll notice it doesn't check that the tokens really <strong>are</strong> numbers. If I were to extend it I'd probably check for two SLR conditions: Overflow ("<em>*</em>" or similar) and Missing Value ("---" or similar). I'd probably take some of the "List Comprehension" stuff I talked about in <a href="">Dragging REXX Into The 21st Century?</a> and apply it to the list.</p>
<p>And my code uses this to decide if I have a range of values or just a single one. In the former case it turns the pair of numbers into e.g. "1-5" and the latter just e.g. "4".</p>
<p>Of course there are other ways to do minimum and maximum for a list of numbers but this one seems the simplest and most elegant to me. "6 months later me" might take a different view. :-)</p>

<h2>Dragging REXX Into The 21st Century?</h2>
<p>MartinPacker, 2013-06-07</p>
<p>I like REXX but sometimes it leaves a little to be desired. This post is about a technique for dealing with some of the issues. I present it in the hope some of you will find it worth building on, or using directly.</p>
<p><strong>Note:</strong> I'm talking about <strong>Classic</strong> REXX and not <a href="">Open Object REXX</a>.</p>
<p><a href="">List Comprehensions</a> are widespread in modern programming languages - because they express concisely otherwise verbose concepts - such as looping.</p>
<p>Here's an example from javascript:</p>
<pre><code>var numbers = [1, 4, 9];
var roots = numbers.map(Math.sqrt);
/* roots is now [1, 2, 3], numbers is still [1, 4, 9] */
</code></pre>
<p>It's taken from <a href="">here</a> which is a good description of javascript's support for arrays.</p>
<p>Essentially it applies the square root function (<code>Math.sqrt</code>) to each element of the array <code>numbers</code>, using the <code>map</code> method. Even though it processes every element there's no loop in sight. This, to me, is quite elegant and very maintainable.
It gets rid of a lot of looping cruft that adds no value.</p>
<h3>My Challenge</h3>
<p>REXX's natural list structure is the blank-delimited token string, and I wanted map / filter / reduce style operations over strings like that - for example, finding every token that starts with "CICS". So I wrote three functions:</p>
<table>
<thead>
<tr>
<th style="text-align:left;">Function</th>
<th style="text-align:left;">Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left;">map</td>
<td style="text-align:left;">Applies a routine to each element</td>
</tr>
<tr>
<td style="text-align:left;">filter</td>
<td style="text-align:left;">Creates a subset of the string with each element being kept or discarded based on the routine's return value (1 to keep and 0 to throw the item away)</td>
</tr>
<tr>
<td style="text-align:left;">reduce</td>
<td style="text-align:left;">Produce a result based on an initial value and applying a routine to each element.</td>
</tr>
</tbody>
</table>
<p>Here's a simple version of filter:</p>
<pre><code>filter: procedure
parse arg f
/* Optional p1, p2, p3 sit between the routine name and the list */
p1=""; p2=""; p3=""
if arg()>2 then p1=arg(2)
if arg()>3 then p2=arg(3)
if arg()>4 then p3=arg(4)
list=arg(arg())
outlist=""
do i=1 to words(list)
  item=word(list,i)
  interpret "keepit="f"(item,p1,p2,p3)"
  if keepit=1 then outlist=outlist item
end
return space(outlist)
</code></pre>
<p>Variable "list" is the input space-separated list. "outlist" is the output list that filter builds - in the same space-separated list format.</p>
<p>Much of this is in fact parameter handling: The p1, p2, p3 optional parameters need checking for. But the "heavy lifting" comes in three parts:</p>
<ul>
<li><p>Breaking the string into tokens (or items, if you prefer).</p></li>
<li><p>Using <code>interpret</code> to invoke the filter function (named in variable f) against each token.</p></li>
<li><p>Checking the value of the keepit variable on return from the filter function:</p>
<p>If it's 1 then keep the item. If not then remove it from the list.</p></li>
</ul>
<p>I also wrote a filter called "grepFilter" (amongst others). Recall the example above where I wanted to find the string "CICS" at the beginning of a token. That could've been done with a filter that checked for <code>pos("CICS",item)=1</code>. That's obviously a very simple case. grepFilter, as the name suggests, uses grep against each token.
It worked nicely (though I suggest it fails my long-standing "minimise the transitions between REXX and Unix through BPXWUNIX" test).</p>
<p>And then I got playing with examples, including "pipelining" - from, say, map to filter to reduce - such as:</p>
<p><code>say reduce("sum",0,filter("gt",8,map("timesit",2,"1 2 3 4 5 6")))</code></p>
<h3>Issues</h3>
<p>There are a number of issues with this approach:</p>
<ul>
<li><p>You'll notice the function name (first parameter in the <code>filter</code> example above) is in fact a character string.</p>
<p>It's not a function reference as other languages would see it. REXX doesn't have a <strong>first class</strong> function data type. Suppose you didn't have a procedure of that name in your code: You'd get some weird error messages at run time. And while you can pass around character strings all you want the semantics are different from passing around function references.</p></li>
<li><p>The vital piece of REXX that makes this technique possible is the <code>interpret</code> instruction.</p>
<p>It's very powerful but comes at a bit of a cost: When the REXX interpreter starts it tokenises the REXX exec - for performance reasons. It can't tokenise the string passed to <code>interpret</code> ahead of time, so there's a performance cost to using <code>interpret</code>.</p></li>
<li><p>The requirement to write, for example</p>
<p><code>say reduce("sum",0,filter("gt",8,map("timesit",2,"1 2 3 4 5 6")))</code></p>
<p>rather than</p>
<p><code>say "1 2 3 4 5 6".map("timesit",2).filter("gt",8).reduce("sum",0)</code></p>
<p>is inelegant. Fixing this would require subverting a major portion of what REXX is. And that's <strong>not</strong> what I'm trying to do.</p></li>
<li><p>The need to apply a function to each item - particularly in the filter case - can be overkill.</p>
<p>In my Production code I can write</p>
<p><code>filter("item>8","1 2 4 8 16 32")</code></p>
<p>as I check the first parameter for characters such as ">" and "=".
So no filtering function required.</p></li>
<li><p>REXX doesn't have <a href="">anonymous functions</a> and I can't think of a way to simulate them. Can you? If you look at the linked Wikipedia entry it shows how expressive they can be.</p></li>
</ul>
<p>These are worth thinking about but not - I would submit - show stoppers. They just require care in using these techniques and sensible expectations.</p>
<h3>Conclusions</h3>
<p>It's perfectly possible to simulate <strong>some</strong> of these modern language features in Classic REXX. (With the limitations noted above.)</p>
<p>I'd note that "CMS Pipelines" would do some of this - but not all. And in any case most people don't have CMS Pipelines - whether on VM or ported to TSO. (TSO is my case, but mostly in batch.)</p>
<p>I don't believe "Classic" REXX to be under active development so asking for new features is probably a waste of time. Hence my tack of <strong>simulating</strong> them, and living with the limitations of the simulation: It still makes for clearer, more maintainable code.</p>
<p>Care to try to simulate other modern language features? <a href="">Lambda</a> or <a href="">Currying</a> would be pretty similar.</p>
<p>Of course if I had kept my blinkers on then I wouldn't know about all these programming concepts and wouldn't be trying to apply them to REXX. But where's the fun in <strong>that</strong>?</p>

<h2>REXX That's Sensitive To Where It's Called From</h2>
<p>MartinPacker, 2013-05-26</p>
<p>I have REXX code that can be called directly by TSO (in DD SYSTSIN data) or else by another REXX function. I want it to behave differently in each case:</p>
<ul>
<li>If called directly from TSO I want it to print something.</li>
<li>If called from another function I want it to return some values to the calling function.</li>
</ul>
<p>So I thought about how to do this.
The answer's quite simple: Use the parse source command and examine the second word returned.</p>
<p>Here's a simple example, the function <code>myfunc</code>.</p>
<pre><code>/* REXX myfunc */
interpret "x=1;y=2"
parse source . envt .
if envt="FUNCTION" then return x y
say x y
</code></pre>
<p>(It's probably a standard technique nobody taught me.) :-)</p>

<h2>Alternate Macro Libraries - Part 2</h2>
<p>MartinPacker</p>

<h2>Microwave Popcorn, REXX and ISPF</h2>
<p>MartinPacker, 2013-01-21</p>
<p>To me learning is like <a href="">Microwave Popcorn</a>.</p>
<p>Part of the fun of making popcorn is watching the bag and listening to the poppings: As each kernel pops it pushes the bag out.</p>
<p>And so it is with learning: Every piece of knowledge contributes to the overall shape.</p>
<p>Anyhow, enough of the homespun "philosophy". :-)</p>
<p>I was maintaining some ISPF REXX code recently and it caused me to come across two areas where REXX can really help with ISPF applications:</p>
<ul>
<li>Panel field validation.</li>
<li>File Tailoring</li>
</ul>
<p>The introduction of REXX support is not all that recent - I think z/OS R.6 and R.9 were the operative releases - but I think most people are unaware of these capabilities.</p>
<p>I'm not an ISPF application programmer so if you want the technical details look them up in the ISPF manuals. But here's the gist of why you might want to consider them.</p>
<h3>Panel Field Validation</h3>
<p>Traditionally you validate panel fields with VER() statements in the panel definition. You can now also embed REXX there, for validation that VER() alone can't express.</p>
<p>Of course you might be in a position to do this all in the REXX that causes the panel to be displayed in the first place.
But there are two reasons why I think you'd want to do it in the panel definition itself:</p>
<ul>
<li>It's a lot simpler than having the driving REXX redisplay the panel if the fields don't validate.</li>
<li>Keeping all the field validation logic together - VER() and REXX - is much neater.</li>
</ul>
<p>But you have the choice.</p>
<h3>File Tailoring</h3>
<p>Again driven by REXX, the code I maintain uses ISPF File Tailoring to create JCL from skeleton files, based on variables from ISPF panels.</p>
<p>You can write some quite sophisticated tailoring logic without using REXX. But with REXX you can do so much more.</p>
<p>(My first test case used the REXX strip() function to remove trailing blanks. Of course you can do that with )SETF without REXX.)</p>
<p>If you code )REXX var1 var2 ... then some REXX then a terminating )ENDREXX you can use the full power of REXX.</p>
<p>In the above var1 etc are quite important: If you want to use any of the File Tailoring variables (or set them in the REXX code) you have to list them.</p>
<p>Note: You can use say to write debugging info to SYSTSPRT.</p>
<p>I don't believe you can directly emit lines in REXX but you could set a variable to 1 or 0 and use )SEL to conditionally include text.</p>
<p>And certainly <b>I</b> feel my grasp of ISPF is that much better - but maybe because of the 2000 lines of ISPF REXX I reformatted and adopted in the process. :-)</p>
<h2>Drawing The Line</h2>
<p>MartinPacker, 2012-03-23</p>
<p>You'd think it would be pretty simple to draw a line. Right?</p>
<p>This post discusses an enhancement I'd like to make to my current reporting - and I'm pretty sure that <b>technically</b> I can do it. The question is whether I <b>should</b>.</p>
<p>Consider my current "Memory by address space within Service Class" graph. Here's a sample:</p>
<img src="" width="729" />
<p>And here's what I think I might like it to look like:</p>
<img src="" width="729" />
<p>Let's talk about:</p>
<ul>
<li>Motivation and Usage</li>
<li>Mechanics</li>
</ul>
<h3>Motivation and Usage</h3>
<p>When I throw graphs at you I see myself as "story telling". Hopefully an accurate story, certainly one I believe in. So, when working on my code I ask the question "how does this affect the story telling?"</p>
<p>Here's how I normally tell the (e.g) CPU story:</p>
<ol>
<li>Talk about CPU usage by processor pool by LPAR<sup>1</sup> and stacked up to give the machine view.</li>
<li>Break down CPU usage by WLM Workload and the Service Class<sup>2</sup> - again by pool.</li>
<li>Likewise by address space within a Service Class.</li>
<li>Possibly break down address space CPU to e.g. Transaction - assuming CICS or DB2 are "in play".</li>
</ol>
<p>When you've done that you certainly know where the CPU is going. You do the same thing for memory - right until you get to Step 4.</p>
<p>The concept of "capture ratio" is well known and bridges the gap between Step 1 and Step 2 - for CPU<sup>3</sup>. It doesn't make sense to draw the proposed line for this case.</p>
<p>To bridge between the Service Class level and the Address Space level (Step 2 to Step 3) I think a different treatment is required.
There are a number of reasons for this:</p>
<ul>
<li>Some service classes have no address spaces. And hence no memory. "Capture Ratio" may be 100% but unlikely to be computed that way. :-)</li>
<li>The chart I'm proposing has up to 15 address spaces on it. (We could make it more but then it becomes markedly less readable.) So, for a Service Class with more than 15 address spaces we miss some - as in this particular example. I'd like to show we had good (or bad) coverage of the "headline" Service Class number in these 15 address spaces. This works fine for CPU, memory and EXCPs.</li>
<li>Type 30 memory numbers behave badly and it would be nice to see how badly compared to the Service Class total. (Type 30 CPU numbers don't behave badly.)</li>
</ul>
<p>So I think the line that says what the total "should" be is ideal for this. Hence my proposal<sup>4</sup>.</p>
<h3>Mechanics</h3>
<p>Today the data is in two tables: A Service Class (Period) table and an Address Space table - both summarised at an interval level<sup>5</sup>. The former comes from RMF SMF 72 Subtype 3. The latter comes from SMF 30 Subtypes 2 and 3. It's always interesting handling two different data sources as if they might <b>magically</b> corroborate each other. How naive. :-)</p>
<p>In your case you can probably bring the two together quite neatly. Anyone know if MXG already does this?</p>
<h3>Conclusion</h3>
<p>So, why am I blogging about this? Two reasons:</p>
<ul>
<li>Because you might want to try the same depiction idea.</li>
<li>Because I'd like to know if you think this is a good idea.</li>
</ul>
<p>So I'd like your input on this. (Commenting here would be fine or any other way you want.) And maybe next time I crunch your data the story will be told just that little bit better. At least that's the plan.
:-)</p>
<hr />
<p><sup>1</sup> Nowadays those pools are: GCP, ICF, zIIP, zAAP, and IFL.</p>
<p><sup>2</sup> I've not found much value in breaking CPU usage down by Service Class Period.</p>
<p><sup>3</sup> For memory I handle it differently - because there are reported-on memory usages that are outside of the Workload / Service Class hierarchy. And I explicitly calculate an "Other" category - which has <b>never</b> turned out to be negative.</p>
<p><sup>4</sup> Today I'd be showing you two charts and inviting you to do the comparison. I hope my proposal makes this quicker and smoother.</p>

<h2>DB2 Package-Level Statistics and Batch Tuning</h2>
<p>MartinPacker</p>

<h2>Hackday 9 - REXX, Java and XML</h2>
<p>MartinPacker</p>

<h2>An Experiment With Job Naming Conventions</h2>
<p>MartinPacker</p>

<h2>XML, XSLT and DFSORT, Part One - Creating A Flat File With XSLT</h2>
<p>MartinPacker, 2011-05-14</p>
<p>This is the second part of a (currently) three-part series on processing XML data with DFSORT, given a little help from standard XML processing tools.
The first part - which you should read before reading on - is <a href="">here</a>.</p>
<p>To recap, getting XML data into DFSORT is a two stage process:</p>
<ol>
<li>Flatten the XML data so that it consists of records with fields in sensible places.</li>
<li>Process this flattened data with DFSORT / ICETOOL or something else, like REXX.</li>
</ol>
<p>This post covers the first part of this. You'll see how you can transform the XML file below into a Comma-Separated Variable (CSV) file.</p>
<p>Here's the source XML, complete with a few quirks:</p>
<table border="1">
<thead>
<tr><td><b>XML File To Be Processed</b></td></tr>
</thead>
<tbody><tr><td>
<pre>
<?xml version="1.0"?>
<mydoc>
  <greeting level="h1">
    Hello World!
  </greeting>
  <stuff>
    <item a="1">
      <b>1</b>
      <row>One</row>
    </item>
    <item a="12">
      <b>2</b>
      <row>Two</row>
    </item>
    <item a="903">
      <b>3</b>
      <row>
        Three
      </row>
    </item>
  </stuff>
</mydoc>
</pre>
</td></tr></tbody></table>
<p>Here's the resulting flat file:</p>
<table border="1">
<thead>
<tr><td><b>Resulting Flat File For Processing With DFSORT / ICETOOL</b></td></tr>
</thead>
<tbody><tr><td>
<pre>
"One",1
"Two",12
"Three",903
</pre>
</td></tr></tbody></table>
<p>I'm assuming you can read XML reasonably well. In this example we have three "item" <b>elements</b> as <b>children</b> of a "stuff" element. The "stuff" element is a child of the "mydoc" element. The "mydoc" element also contains a "greeting" element. Each "item" element has a single "row" child element and an "a" <b>attribute</b>.</p>
<p>To produce the output we need to find the "item" elements and pick up the "row" child element and the "a" attribute value. We write one record for each "item" element. (We ignore the "greeting" element entirely.)</p>
<p>I've deliberately formatted each "item" element slightly differently:</p>
<ol>
<li>The "a" attribute is on the same line as the "item" tag, and the "row" element fits entirely on one line.</li>
<li>The "a" attribute is on the next line, and the "row" element is on one line.</li>
<li>The "a" attribute is as in <b>1</b> but the "row" element text is split across three lines.</li>
</ol>
<p>In this example you scarcely need to write your own program. (Handling item 3, as I'll describe later, is the one case where a program might be better.)</p>
<p>Here's the XSLT stylesheet that produces the required output:</p>
<table border="1">
<thead>
<tr><td><b>XSLT Stylesheet</b></td></tr>
</thead>
<tbody><tr><td>
<pre>
<?xml version="1.0"?>
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">  <b>1</b>
<xsl:output method="text" encoding="IBM-1047"/>  <b>2</b>
<xsl:template match="/">
  <xsl:apply-templates select="/mydoc/stuff/item"/>  <b>3</b>
</xsl:template>
<xsl:template match="item">  <b>4</b>
  <xsl:text>"</xsl:text>  <b>5</b>
  <xsl:value-of select="normalize-space(row)"/>  <b>6</b>
  <xsl:text>",</xsl:text>  <b>7</b>
  <xsl:value-of select="@a"/>  <b>8</b>
  <xsl:text>&#10;</xsl:text>
</xsl:template>
</xsl:stylesheet>
</pre>
</td></tr></tbody></table>
<p>This is a fairly simple stylesheet. Here's how it works (and the numbered lines above correspond to the numbering below):</p>
<ol>
<li>Here we declare the level of the XSLT language to be 2.0. In fact there's nothing about this stylesheet that requires that language level.</li>
<li>Here we say we're creating a text file as output and that it will be EBCDIC (IBM-1047).</li>
<li>Here we search for the "stuff" element within the "mydoc" element - using the XPath language. In fact the only "stuff" element we'll match with is the one at the top of the XML <b>node tree</b> - because it's preceded by a "/". For each matched "stuff" element we apply the template below.</li>
<li>This template matches all "item" elements within the "stuff" element.</li>
<li>Here text starts to be written out for the record.
In this case the leading quote around the first piece of data.</li>
<li>Here the first piece of data is written out - the text value of the "row" element. We'll come back to the normalize-space() function in a minute.</li>
<li>Here a trailing quote and a comma are written out.</li>
<li>Here the value of the "a" attribute is written out. It needs no adjustment (in this example).</li>
</ol>
<p>If you want to get into XSLT I can recommend Doug Tidwell's <a href="">XSLT, Second Edition: Mastering XML Transformations</a> book. It's what I've used - with some additional research on the web (which didn't yield much additional insight).</p>
<p>I used the <a href="">Saxon B</a> (free) parser as it's the only one I can get my hands on that does XSLT 2.0. It's a java jar. You could use others, of course.</p>
<p>(If you specify version="1.0" for the stylesheet Saxon will issue a message informing you you're running a 1.0 stylesheet through a 2.0 processor. This has caused no problems whatsoever for me.)</p>
<p>Originally I downloaded Saxon to my Linux laptop and used it with an ASCII stylesheet and XML data. Transferring to z/OS was straightforward. This approach may work for you, if you're setting out to learn XSLT.</p>
<p>Learning and working with XSLT continues to be a journey of discovery. If I'm missing some tricks that you spot feel free to let me know. The next post in this series will be about the DFSORT counterpart.</p>
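Incidentally, if you want to sanity-check the expected flat file off-host before getting the stylesheet onto z/OS, the same flattening takes a few lines in a scripting language. Here's an illustrative sketch in Python's standard library (not part of the post's z/OS toolchain; the a="12" attribute on the second item is inferred from the "Two",12 record in the expected output):

```python
# Flatten the sample XML into the same CSV records the stylesheet
# produces: "row text",a-attribute - one record per "item" element.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<mydoc>
  <greeting level="h1"> Hello World! </greeting>
  <stuff>
    <item a="1"><b>1</b><row>One</row></item>
    <item a="12"><b>2</b><row>Two</row></item>
    <item a="903"><b>3</b><row>
      Three
    </row></item>
  </stuff>
</mydoc>"""

root = ET.fromstring(doc)
lines = []
for item in root.findall("./stuff/item"):
    # Equivalent of XSLT's normalize-space(): trim and collapse whitespace
    row_text = " ".join(item.findtext("row").split())
    lines.append('"{0}",{1}'.format(row_text, item.get("a")))

print("\n".join(lines))
# "One",1
# "Two",12
# "Three",903
```

The `greeting` element is ignored simply because the XPath only selects `item` elements, mirroring what the stylesheet does.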
using System.ComponentModel;

namespace HelloWorldViewModel
{
    public class HelloWorldModel : INotifyPropertyChanged
    {
        private string _name;
        private bool _isHavingGoodDay = true;

        /// <summary>
        /// The user's name
        /// </summary>
        public string Name
        {
            get { return this._name; }
            set
            {
                if (this._name != value)
                {
                    this._name = value;
                    this.RaisePropertyChanged("Name");
                    this.RaisePropertyChanged("Greeting");
                }
            }
        }

        /// <summary>
        /// Whether or not the user is having a good day
        /// </summary>
        public bool IsHavingGoodDay
        {
            get { return this._isHavingGoodDay; }
            set
            {
                if (this._isHavingGoodDay != value)
                {
                    this._isHavingGoodDay = value;
                    this.RaisePropertyChanged("IsHavingGoodDay");
                    this.RaisePropertyChanged("Greeting");
                }
            }
        }

        /// <summary>
        /// A greeting for the user
        /// </summary>
        public string Greeting
        {
            get
            {
                if (!string.IsNullOrEmpty(this.Name))
                {
                    if (this.IsHavingGoodDay)
                    {
                        return string.Format("Awesome; glad to see you {0}", this.Name);
                    }

                    return string.Format("Sorry to hear that, {0}. I hope things turn around.", this.Name);
                }

                return "Hello World!";
            }
        }

        /// <summary>
        /// Support Binding
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Helper method for raising the PropertyChanged event
        /// </summary>
        /// <param name="propertyName"></param>
        private void RaisePropertyChanged(string propertyName)
        {
            if (this.PropertyChanged != null)
            {
                this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }
}
The takeaways from this class: it implements INotifyPropertyChanged so the View can bind to it; the property setters raise PropertyChanged not only for themselves but also for the computed Greeting property that depends on them; and Greeting itself is read-only, derived entirely from the other two properties.
Now for the other side of the coin, the View.
<UserControl x:Class="HelloWorldViewModel.Page"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:app="clr-namespace:HelloWorldViewModel">
    <UserControl.Resources>
        <app:VisibilityConverter x:Key="VisibilityConverter" />
    </UserControl.Resources>
    <UserControl.DataContext>
        <app:HelloWorldModel />
    </UserControl.DataContext>
    <Grid>
        <Image Source="Smiley.jpg"
               HorizontalAlignment="Stretch"
               VerticalAlignment="Stretch"
               Visibility="{Binding IsHavingGoodDay, Converter={StaticResource VisibilityConverter}}" />
        <Image Source="Sad.jpg"
               HorizontalAlignment="Stretch"
               VerticalAlignment="Stretch"
               Visibility="{Binding IsHavingGoodDay, Converter={StaticResource VisibilityConverter}, ConverterParameter=Invert}" />
        <Border Width="800"
                HorizontalAlignment="Center"
                VerticalAlignment="Top"
                CornerRadius="0,0,125,125">
            <TextBlock Grid.Row="3"
                       Text="{Binding Greeting}"
                       HorizontalAlignment="Center" />
        </Border>
        <Border Width="500"
                HorizontalAlignment="Center"
                VerticalAlignment="Bottom"
                CornerRadius="125,125,0,0">
            <Grid Width="300"
                  VerticalAlignment="Top"
                  HorizontalAlignment="Center">
                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions>

                <TextBlock Grid.Row="0" />
                <TextBox Grid.Row="1" Text="{Binding Name, Mode=TwoWay}" />

                <CheckBox Grid.Row="2"
                          Content="I'm having a good day"
                          IsChecked="{Binding IsHavingGoodDay, Mode=TwoWay}" />
            </Grid>
        </Border>
    </Grid>
</UserControl>
What the View exhibits: the ViewModel is instantiated directly in XAML as the UserControl's DataContext; the controls bind to Name, IsHavingGoodDay, and Greeting rather than being populated from code; and the boolean IsHavingGoodDay drives the image visibility through a converter resource.
Now, let’s look at the code-behind for the View. This is by far the most important part of this sample.
using System.Windows.Controls;

namespace HelloWorldViewModel
{
    public partial class Page : UserControl
    {
        public Page()
        {
            InitializeComponent();
        }
    }
}
Yep, (essentially) no code in the code-behind. And yes, that is the most important part of the ViewModel pattern. You can have your View bound to your ViewModel using 1-way and 2-way binding, with logic in your ViewModel and not a lick of logic in your View’s code-behind.
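None of this notification machinery is unique to WPF or Silverlight, incidentally. Here's a rough sketch of the same idea in Python, with invented names; the plain callback stands in for what the binding engine does when it receives a PropertyChanged notification:

```python
# A framework-free sketch of the INotifyPropertyChanged idea:
# the model raises a named-property-changed event; observers re-read
# whatever depends on it. Names here are illustrative, not WPF APIs.

class HelloWorldModel:
    def __init__(self):
        self._name = ""
        self._listeners = []  # callables taking the changed property name

    def subscribe(self, listener):
        self._listeners.append(listener)

    def _raise_property_changed(self, prop):
        for listener in self._listeners:
            listener(prop)

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if self._name != value:
            self._name = value
            self._raise_property_changed("Name")
            self._raise_property_changed("Greeting")  # dependent property

    @property
    def greeting(self):
        return "Awesome; glad to see you {0}".format(self._name) if self._name else "Hello World!"


# A "view" that re-reads state when notified, as the binding engine would:
model = HelloWorldModel()
changed = []
model.subscribe(lambda prop: changed.append((prop, model.greeting)))
model.name = "Jeff"
print(changed)
# [('Name', 'Awesome; glad to see you Jeff'), ('Greeting', 'Awesome; glad to see you Jeff')]
```

Note the same design point as the C# class: setting `name` also announces `Greeting`, because the view has no way to know the computed property changed otherwise.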
As I mentioned in my ViewModel Pattern introductory post, it’s important to disallow any view-specific behavior from leaking into the ViewModel. The visibility of the two images in this sample are perfect examples of this point. The ViewModel exposes a boolean, not a Visibility property. The View can then do whatever it wants with that boolean. For all the ViewModel knows, the View doesn’t do anything with it, but in this case, there are 2 images that are toggled based on the value.
In future posts, I’ll talk about more ViewModel details that go beyond Hello World type applications. Most notably, I’ll be writing about service references and how a ViewModel should get data from services to be exposed to the View. Keep in mind that this pattern can be applied in either WPF or Silverlight.
You can download the source code for this application here.
Monday, October 27, 2008 11:08 AM
Jeff, great example! How can I write a converter so it sets visibility of the panel depending on two properties, not one? Can (or should) the converter access properties in the model class?

Here is the reason: There is a listbox (of persons) on the left side of the window and the 'Add' and 'Update' buttons underneath. When the user selects a row in the listbox, I need to display a read-only panel with the details on the right side of the window. When the user clicks the Update button, I need to hide the read-only panel and display the update panel. Same for the Add button, except I have to clear the selected item property first.

Therefore read-only panel visibility depends on two things: 1) an item is selected in the listbox and 2) it's not in update mode.

I got it to work by using two borders (one for each property) around the read-only panel and a lot of extra code. Is there a simpler way? Thanks!
I would create a single ViewModel property that represents the combined state that will drive the visibility.
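Sketched outside of C# for brevity (Python, with invented names), the combined-state idea looks like this; the View would then bind its panel visibility to the single derived property instead of juggling two inputs:

```python
# One derived property folds the two inputs into the state the view needs.
class PersonListModel:
    def __init__(self):
        self.selected_person = None
        self.is_updating = False

    @property
    def is_read_only_panel_visible(self):
        # Combined state: something is selected AND we're not mid-update.
        return self.selected_person is not None and not self.is_updating


m = PersonListModel()
m.selected_person = "Alice"
print(m.is_read_only_panel_visible)  # True
m.is_updating = True
print(m.is_read_only_panel_visible)  # False
```

In the real ViewModel, the setters for both inputs would also raise PropertyChanged for the derived property, just as the Name setter raises it for Greeting in the post's example.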
Good example, Jeff. This is really helpful for getting to know the ViewModel pattern.
Good work, keep it up
cool, thanks.
Thanks for this simple example for MVVM architecture
Hello there, I guess this is not right according to the MVVM concept! As I can see, you only create the Model that implements INotifyPropertyChanged and bind it directly in the XAML to the View, while you should create the ViewModel class and bind it to the View. Regards, Waleed
Waleed, this is just a very simple example; when I posted this I had realized that there weren't any ViewModel posts that showed this level of simplicity. Contrived, yes. I would argue though that this example has a ViewModel, but no Model. -Jeff
Thanks for this lovely example! Not much text, straight to the point! That's how I like it!
you should do one of these types of posts on commanding with mvvm with the various controls
Thanks for taking the time to do this; it was helpful for me
Where is the Model? Am I missing something here?
You were doing well until you started adding "VisibilityConverter" and mentioning that you would add more details about it in a future post. So much for a hello world sample. Better luck next time
How do I add the VisibilityConverter in XAML? It isn't showing up for me.
I am the Microsoft Development Lead for NuGet, the NuGet Gallery (), and RIA Services.
September 30, 2018.
A few weeks ago I published version
0.0.1 of the systems
library, and today I cut
0.0.2, which was pretty much exclusively focused on making it work
better in a jupyter notebook workflow, which is still one of the best tools
I've ever found for iterative exploration.
This post walks through the steps to get
systems running in a Jupyter notebook.
(You can see the final notebook on Github as well.)
The first step is getting Jupyter and systems installed:
mkdir jupyter
cd jupyter
python3 -m venv ./env
source ./env/bin/activate
pip install jupyter systems
Now that you've got that installed, you can create a notebook via:
jupyter notebook
That should open a new tab in your browser pointing at your Jupyter notebooks, but you can also go there via localhost:8888/tree.
Then go ahead and create a new notebook, giving it
a reasonable name, maybe
systems exploration or some such.
As a final step, verify that installation worked properly by writing this code into the first section.
from systems.parse import parse
from systems.viz import as_dot
from IPython.core.display import HTML
Then run the code by clicking the
Run button (this might take a while if this
is the first time you're running Jupyter). If it doesn't work, look at the terminal
where you ran
jupyter notebook earlier for error messages, and you'll have to
debug that before moving forward.
For example, on my computer I had to upgrade
prompt_toolkit before moving forward,
but that wouldn't be the case for most folks.
pip install --upgrade prompt_toolkit
Assuming you've gotten things working, next step is to start iterating on a model.
The example I'll use here is the same one from an early post on developing a hiring funnel, since the goal is to showcase using jupyter.
Your notebook's first cell should be importing the various dependencies:
from systems.parse import parse
from systems.viz import as_dot
from IPython.core.display import HTML
Then your second cell should include a multi-line string that includes your model and parsing the model. In theory you can split the definition and the parsing into separate cells, but I think combining them is better because it'll ensure you immediately get errors if you accidentally specify a specification with some problems.
spec = """
[Candidate] > Recruiters(3, 7) @ 1
[Candidate] > Prospect @ Recruiters * 3
Prospect > Screen @ 0.5
Screen > Onsite @ 0.5
# yeah and some more, eliding for example
"""
model = parse(spec)
Then you can render the model as a diagram:
as_dot(model)
For larger systems, you can change left-to-right rendering to top-to-bottom via:
as_dot(model, rankdir="TB")
And you can run the model and show the results as an inlined HTML table:
results = model.run(rounds=10)
rendered = model.render_html(results)
HTML(rendered)
With those pieces put together, you're good! You can start iterating as you like.
If you want to chart your outputs, that's pretty doable. Here's a quick example
using bokeh. First, you'll need to install
bokeh:
pip install bokeh
Then you'll need to restart your kernel, either using the
Kernel menu in your notebook,
or by terminating the notebook process and starting it again. Then create a new cell
with some additional imports and configuration:
from bokeh.plotting import figure, output_notebook, show
output_notebook()
Then you can render a line like this:
col = "Offer"
x = list(range(len(results)))
y = [row[col] for row in results]
p = figure(title=col)
p.line(x, y)
show(p)
This is going to output something quite simple, but you can work through the Bokeh documentation if you want to do something more interesting!
You can see the full example notebook on Github. | https://lethain.com/systems-jupyter-notebook/ | CC-MAIN-2019-51 | refinedweb | 637 | 63.19 |
Created on 2010-09-24 14:31 by jayt, last changed 2019-09-12 11:21 by shihai1991. This issue is now closed.
I want to create a custom interactive shell where I continually do
parse_args. Like the following:
import argparse
import shlex

parser = argparse.ArgumentParser()
command = raw_input()
while True:
    args = parser.parse_args(shlex.split(command))
    # Do some magic stuff
    command = raw_input()
The problem is that if I give it invalid input, it errors and exits
with a help message.
I learned from argparse-users group that you can override the exit method like the following:
class MyParser(ArgumentParser):
    def exit(self, status=0, message=None):
        # do whatever you want here
        pass
I would be nice to have this usage documented perhaps along with best practices for doing help messages in this scenario.
Do you want to work on a patch?
(Aside: you may want to learn about the cmd and shlex modules for read-eval-print-loop programs :)
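For reference, a minimal sketch of that kind of read-eval-print loop using cmd and shlex (the class and command names here are made up for illustration):

```python
import cmd
import shlex

class Shell(cmd.Cmd):
    prompt = '> '

    def do_echo(self, line):
        # shlex.split respects quoting, e.g. 'echo a "b c"' -> ['a', 'b c']
        print(shlex.split(line))

    def do_exit(self, line):
        return True  # returning a true value stops cmdloop()

# Shell().cmdloop()  # starts the interactive loop; commented out here
```

Each `do_<name>` method becomes a command, and cmd handles the prompt/dispatch loop for you.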
I am also trying to use argparse interactively, but in this case by combining it with the cmd module. So I'm doing something like below:
class MyCmd(cmd.Cmd):
    parser = argparse.ArgumentParser(prog='addobject')
    parser.add_argument('attribute1')
    parser.add_argument('attribute2')
    parser.add_argument('attribute3')

    def do_addobject(self, line):
        args = MyCmd.parser.parse_args(line.split())
        newobject = object(args.attribute1, args.attribute2, args.attribute3)
        myobjects.append(newobject)
I'm faced with the same problem that when given invalid input, parse_args exits the program completely, instead of exiting just to the Cmd shell.
I have the feeling that this use case is sufficiently common such that it would be good if people did not have to override the exit method themselves, and instead an alternative to parse_args was provided that only raises exceptions for the surrounding code to handle rather than exiting the program entirely.
You can always catch SystemExit.
In the short term, just catch the SystemExit.
In the slightly longer term, we could certainly provide a subclass, say, ErrorRaisingArgumentParser, that overrides .exit and .error to do nothing but raise an exception with the message they would have printed. We'd probably have to introduce a new Exception subclass though, maybe ArgumentParserExit or something like that.
Anyway if you're interested in this, please file a new ticket (preferably with a patch). Regardless of whether we ever provide the subclass, we certainly need to patch the documentation to tell people how to override error and exit.
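Such a subclass might look like the following sketch (the class and exception names are illustrative, not an existing API; note that overriding error() covers parse failures, while --help and --version still call exit() directly):

```python
import argparse

class ArgumentParserExit(Exception):
    """Raised instead of letting argparse call sys.exit()."""

class ErrorRaisingArgumentParser(argparse.ArgumentParser):
    def error(self, message):
        # argparse funnels parse failures through error(), so raising
        # here prevents the default "print usage and exit(2)" behavior
        raise ArgumentParserExit(message)

parser = ErrorRaisingArgumentParser(prog='demo')
parser.add_argument('--n', type=int)
try:
    parser.parse_args(['--n', 'oops'])
except ArgumentParserExit as e:
    print('caught:', e)
```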
I don't think it's best to create a new subclass to throw an ArgumentParserExit exception; if I read the stack trace I'd see that an ArgumentError was thrown, then caught, then an ArgumentParserExit was thrown, which IMHO is confusing. In the current design, parse_known_errors catches an ArgumentError and then exits. I propose that the user be optionally allowed to turn off the handling of ArgumentError and to handle it himself instead through an exit_on_argument_error flag.
Attached patch does this. Also I think this issue falls under component 'Lib' too.
FWIW unittest had a similar issue and it's been solved by adding an 'exit' argument to unittest.main() [0].
I think using an attribute here might be fine.
The patch contains some trailing whitespace that should be removed, also it might be enough to name the attribute "exit_on_error".
It should also include tests to check that the attribute is set with the correct default value and that it doesn't raise SystemExit when the attribute is False.
[0]:
Updated previous patch with test cases and renamed exit_on_argument_error flag to exit_on_error.
Looks good to me.
What is the status of this? If the patch looks good, then will it be pushed into 3.4?
It's great that this patch was provided. Xuanji, can you submit a contributor agreement, please?
The patch is missing an update to the documentation.
(Really the patch should have been in a separate issue, as requested, since this one is about improving the documentation for the existing released versions. I guess we'll have to open a new issue for updating the docs in the existing versions).
The patch doesn't work for 3.3 (I think it's just because the line numbers are different), but looking over what the patch does, it looks like parse_known_args will return a value for args if there is an unrecognized argument, which will cause parse_args to call error() (it should raise ArgumentError instead).
It doesn't look like xuanji has signed a CLA.
Should we create a new issue, and have someone else create a new patch, and let this issue just be about the docs?
Yes, I think opening a new issue at this point might be a good idea. The reason is that there are a changes either in place or pending in other issues that involve the parse_know_args code, so a new patch is probably required regardless.
I wish I had time to review and commit all the argparse patches, but so far I haven't gotten to them. They are on my todo list somewhere, though :)
The exit and error methods are mentioned in the 3.4 documentation, but there are no examples of modifying them.
16.4.5.9. Exiting methods
ArgumentParser.exit(status=0, message=None)
ArgumentParser.error(message)
test_argparse.py has a subclass that redefines these methods, though I think it is more complex than necessary.
class ErrorRaisingArgumentParser(argparse.ArgumentParser):
In , part of , which creates a parser mode that is closer to optparse in style, I simply use:
def error(self, message):
    usage = self.format_usage()
    raise Exception('%s%s' % (usage, message))
ArgumentParser.error = error
to catch errors. a Javascript port of argparse, adds a 'debug' option to the ArgumentParser, that effectively redefines this error method. They use that extensively in testing.
Another approach is to trap the sysexit. Ipython does that when argparse is run interactively.
Even the simple try block works, though the SystemExit 2 has no information about the error.
try:
    args = parser.parse_args('X'.split())
except SystemExit as e:
    print(e)
Finally, plac ( ) is a pypi package that is built on argparse. It has a well developed interactive mode, and integrates threads and multiprocessing.
I would like to send a patch for the issue. How do I start?
This issue is a duplicate of issue 9112 which was resolved by commit 9375492b
It is a good idea, so I updated the title and added PR 15362.
I am not sure whether there is a problem with xuanji's CLA or not.
New changeset f545638b5701652ffbe1774989533cdf5bc6631e by Miss Islington (bot) (Hai Shi) in branch 'master':
bpo-9938: Add optional keyword argument exit_on_error to argparse.ArgumentParser (GH-15362)
Thank you for your PR and for your time, I have merged the PR into master.
Stéphane, thanks for your good comment. Some of argparse's bpo issues are quite old ;)
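With that change merged (first released in Python 3.9), the new keyword can be exercised like this; note that a few error paths are known not to honor it yet, so this is the basic case only:

```python
import argparse

parser = argparse.ArgumentParser(exit_on_error=False)
parser.add_argument('--count', type=int)

try:
    parser.parse_args(['--count', 'abc'])
except argparse.ArgumentError as err:
    # invalid int value: with exit_on_error=False the error propagates
    # instead of printing usage and calling sys.exit(2)
    print('handled:', err)
```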
You can run this with a simple
main( ) method, dropping a few components onto the panel and putting it into a frame:
The code is pretty simple, but it has two big flaws. First, if the window is moved, the background won't be refreshed automatically.
paintComponent( ) only gets called when the user resizes the window. Second, if the screen ever changes, it won't match up with the background anymore.
You really don't want to update the screenshot often, though, because that involves hiding the window, taking a new screenshot, and then reshowing the window—all of which is disconcerting to the user. Actually detecting when the rest of the desktop changes is almost impossible, but most changes happen when the foreground window changes focus or moves. If you accept this idea (and I do), then you can watch for those events and only update the screenshot when that happens:
public class TransparentBackground extends JComponent
        implements ComponentListener, WindowFocusListener, Runnable {
    private JFrame frame;
    private Image background;
    private long lastupdate = 0;
    public boolean refreshRequested = true;

    public TransparentBackground(JFrame frame) {
        this.frame = frame;
        updateBackground( );
        frame.addComponentListener(this);
        frame.addWindowFocusListener(this);
        new Thread(this).start( );
    }
    // ...
}
First, make the panel,
TransparentWindow, implement
ComponentListener,
WindowFocusListener, and
Runnable. The listener interfaces will let the panel catch events indicating that the window has moved, been resized, or the focus changes. Implementing
Runnable will let the panel create a thread to handle custom
repaint( )s.
The implementation of
ComponentListener involves the four methods beginning with
component. They each simply call
repaint( ) so that the background will be updated whenever the user moves or resizes the window. Next are the two window focus handlers, which just call
refresh( ), as shown here:
public void refresh( ) {
    if (frame.isVisible( )) {
        repaint( );
        refreshRequested = true;
        lastupdate = new Date( ).getTime( );
    }
}

public void run( ) {
    try {
        while (true) {
            Thread.sleep(250);
            long now = new Date( ).getTime( );
            if (refreshRequested && ((now - lastupdate) > 1000)) {
                if (frame.isVisible( )) {
                    Point location = frame.getLocation( );
                    frame.hide( );
                    updateBackground( );
                    frame.show( );
                    frame.setLocation(location);
                    refresh( );
                }
                lastupdate = now;
                refreshRequested = false;
            }
        }
    } catch (Exception ex) {
        p(ex.toString( ));
        ex.printStackTrace( );
    }
}
refresh( ) ensures that the frame is visible and schedules a repaint. It also sets the
refreshRequested boolean to true and saves the current time, which will become very important shortly.
The
run( ) method sleeps constantly, waking up every quarter-second to see if a refresh has been requested, and whether it has been more than a second since the last refresh. If more than a second has passed and the frame is actually visible, then
run( ) will save the frame location, hide it, update the background, then put the frame back in place and call
refresh( ). This ensures that the background is never updated more than needed.
So, why all of this rigmarole about using a thread to control refreshing? One word: recursion. The event handlers could simply call
updateBackground( ) and
repaint( ) directly, but hiding and showing the window to generate the screenshot would cause more focus-changed events. These would then trigger another background update, causing the window to hide again, and so on, creating an infinite loop. The new focus events are generated a few milliseconds after
refresh( ) is processed, so simply checking for an
isRecursing flag wouldn't stop a loop.
All,
Apologies for the delay; here are lecture 3's notes.
- Tim
Unofficial Deep Learning Lecture 3 Notes
Where do we go from here?
- CNN Image intro <- we are here
- Structured neural net intro
- Language RNN intro
- Collaborative filtering intro
- Collaborative filtering in-depth
- Structured neural net in-depth
- CNN image in depth
- Language RNN in depth
Talking about the Kaggle command line
The unofficial Kaggle CLI tool keeps changing though. So, be careful with different versions.
Use below command to upgrade:
pip install kaggle-cli --upgrade
Note: that the specific name of a Kaggle challenge is listed as follows:
Specific name: planet-understanding-the-amazon-from-space
Don’t forget to enter your password.
CurlWget Chrome extension: every time you try to download something, there's a yellow button with a command-line version of the download. Paste that command into an AWS (or equivalent) console to download the data.
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import sys
sys.path.append('/home/paperspace/repos/fastai')
import torch
import fastai
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
1. Fastai Library Comparison: Short explanation on a quick and dirty Cats vs. Dogs.
Need the following folders:
- Train - with a subfolder for each class
- Valid
- Test
Assuming you download from Kaggle and unzip
from fastai.conv_learner import *

PATH = 'data/dogscats/'
Set image size and batch size
sz = 224; bs = 64
Training a model -> straight up
Note: this command will download the ResNet model. May take a few minutes, using ResNet50 to compare to Keras, will take about 10 mins to run afterwards.
By default all the layers frozen except the last few. Note that we need to pass
test_name parameter to
ImageClassifierData for future predictions.
tfms = tfms_from_model(resnet50, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs, test_name='test1')
learn = ConvLearner.pretrained(resnet50, data)  # deeper model like resnet50
%time learn.fit(1e-2, 3, cycle_len=1)
A Jupyter Widget
[ 0.       0.04488  0.02685  0.99072]
[ 1.       0.03443  0.02572  0.99023]
[ 2.       0.04223  0.02662  0.99121]
CPU times: user 4min 16s, sys: 1min 43s, total: 5min 59s
Wall time: 6min 14s
Note: precompute=True caches some of the intermediate steps which we do not need to recalculate every time. It uses cached non-augmented activations; that's why data augmentation doesn't work with precompute. Having precompute speeds up our work. Jeremy explains this during lecture 3.
Unfreeze the layers, apply a learning rate
bn_freeze - if you are using a deep network on a dataset very similar to the one it was pretrained on (dogs vs. cats is), this keeps the batch-normalization statistics from being updated.
Note: If Images are of size between 200-500px and arch > 34 e.g. resnet50 then add
bn_freeze(True)
learn.unfreeze()
learn.bn_freeze(True)
%time learn.fit([1e-5, 1e-4, 1e-2], 1, cycle_len=1)
A Jupyter Widget
[ 0.       0.02088  0.02454  0.99072]
CPU times: user 4min 1s, sys: 1min 5s, total: 5min 7s
Wall time: 5min 12s
Get the Predictions and score the model
%time log_preds, y = learn.TTA()
metrics.log_loss(y, np.exp(log_preds)), accuracy(log_preds, y)
CPU times: user 31.9 s, sys: 14 s, total: 45.9 s
Wall time: 56.2 s
(0.016504555816930676, 0.995)
2. Fastai Library Comparison: Keras Sample
Example of running on TensorFlow back-end
To install:
pip install tensorflow-gpu keras
%reload_ext autoreload
%autoreload 2
%matplotlib inline
PATH = "data/dogscats/"
sz = 224
batch_size = 64
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.layers import Dropout, Flatten, Dense
from keras.applications import ResNet50
from keras.models import Model, Sequential
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
Set paths
train_data_dir = f'{PATH}train'
validation_data_dir = f'{PATH}valid'
batch_size = 64
1. Define a data generator(s)
- data augmentation do you want to do
- what kind of normalization do we want to do
- create images from directly looking at it
- create a generator - then generate images from a directory
- tell it what image size, whats the mini-batch size you want
- do the same thing for the validation_generator, do it without shuffling, because then you can’t track how well you are doing
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(train_data_dir,
                                                    target_size=(sz, sz),
                                                    batch_size=batch_size,
                                                    class_mode='binary')
# validation set
validation_generator = test_datagen.flow_from_directory(validation_data_dir,
                                                        shuffle=False,
                                                        target_size=(sz, sz),
                                                        batch_size=batch_size,
                                                        class_mode='binary')
Note: class_mode=‘categorical’ for multi-class classification
2. Make the Keras model
- ResNet50 was used because Keras didn’t have ResNet34. This is for comparing apples to apples.
- Make base model.
- Make the layers manually which ones you want.
base_model = ResNet50(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
3. Loop through and freeze the layers you want
- You need to compile the model.
- Pass the type of optimizer, loss, and metrics.
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
4. Fit
- Keras expects the size per epoch
- How many workers
- Batchsize
%%time
model.fit_generator(train_generator,
                    train_generator.n // batch_size,
                    epochs=3,
                    workers=4,
                    validation_data=validation_generator,
                    validation_steps=validation_generator.n // batch_size)
6. We decide to retrain some of the layers,
- loop through and manually set layers to true or false.
split_at = 140
for layer in model.layers[:split_at]:
    layer.trainable = False
for layer in model.layers[split_at:]:
    layer.trainable = True
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
7. Closing Comments
PyTorch - a little early for mobile deployment.
TensorFlow - do more work with Keras, but can deploy out to other platforms, though you need to do a lot of work to get there.
3. Reviewing Dog breeds as an example to submit to Kaggle
how to make predictions - will use dogs / cats for simplicity. Jeremy uses Dog breeds for walkthrough.
By default, PyTorch gives back the log probability.
log_preds, y = learn.TTA(is_test=True)
probs = np.exp(log_preds)
Note:
is_test = True gives predictions on test set rather than validation set.
df = pd.DataFrame(probs)
df.columns = data.classes
df.insert(0,'id', [o[5:-4] for o in data.test_ds.fnames])
Explanation: Insert a new column at position zero named ‘id’. subset and remove first 5 and last 4 letters since we just need ids.
df.head()
with large files compression is important to speedup work
SUBM = f'{PATH}sub/'
os.makedirs(SUBM, exist_ok=True)
df.to_csv(f'{SUBM}subm.gz', compression='gzip', index=False)
Gives you back a URL that you can use to download onto your computer. For submissions, or file checking etc.
FileLink(f'{SUBM}subm.gz')
4. What about a single prediction?
assign a single picture
fn = data.val_ds.fnames[0]
fn
'valid/cats/cat.9000.jpg'
can always view the photo
Image.open('data/dogscats/'+fn)
Shortest way to do a single prediction
Make sure you transform the image before submitting to the learn.
im = val_tfms(open_image(PATH+fn))
learn.predict_array(im[None])
(Note the use of open_image instead of Image.open above - this divides by 255 and converts to np.array as is done during training)
Everything passed to or returned from models is assumed to be a mini-batch ("tensors"), so it should be a 4-d tensor: (batch size, channels, height, width). This is why we add another dimension via
im[None]
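The same indexing trick can be seen in plain NumPy (PyTorch tensors index the same way); the shape here assumes a channels-first image:

```python
import numpy as np

im = np.zeros((3, 224, 224), dtype=np.float32)  # one transformed image
batch = im[None]                                # add a leading mini-batch axis
print(im.shape, batch.shape)                    # (3, 224, 224) (1, 3, 224, 224)
```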
trn_tfms, val_tfms = tfms_from_model(resnet50,sz)
Predict dog or cat!
im = val_tfms(open_image('data/dogscats/'+fn))
preds = learn.predict_array(im[None])
np.argmax(preds)  # 0 is cat
0
5. Convolution: Whats happening behind the scenes?
Otavio Good’s Video
The theory behind Convolutional Networks, and Otavio Good demo of Word Lens, now part of Google Translate.
The video shows the illustration of the image recognition of a letter
A (for classification). Some highlights:
- Positives
- Negatives
- Max Pools
- Another Max Pools
- Finally, we compare it to a template of
A, B, C, D, E, then we get a % probability.
- Illustrating a pretrained model.
Spreadsheet Example - Convolution Layers
Definitions
Layers
- Input
- Conv1
- Conv2
- Maxpool
- Denseweights
- Dense activation
Example of Max pooling
Refer to entropy_example.xlsx.
Now, if we were to predict numbers (0-9) or categorical data, we'd have that many outputs from the fully connected layer. There is no ReLU after the fully connected layer, so we can have negative numbers. We want to convert these numbers into probabilities which are between 0 and 1 and add up to 1. Softmax is an activation function which helps here. An activation function is a function which we apply to activations. ReLU, i.e. max(0, x), which we have been using until now, is also an activation function. Such functions provide non-linearity. An activation function takes a number and spits out a single number.
Example of a softmax layer
Only ever occurs in the final layer. Always spits out numbers between 0 and 1. And the numbers added together gives us a total of 1. This isn’t necessary, we COULD tell them to learn a kernel to give probabilities. But if you design your architecture properly, you will build a better model. If you build the model that way, and it iterates with the proper expected output you will save some time.
1. Get rid of negatives
(Exponential column) - It also accentuates the numbers, which helps us because at the end we want one of them to have a high probability. Softmax picks one of the outputs with a strong probability.
Some basic properties:
$$ ln(xy) = ln(x) +ln(y) $$ $$ ln(\frac{x}{y}) = ln(x) - ln(y) $$ $$ ln(x) = y , e^y = x $$
2. then do the % proportion
$$ \frac{e^{x_i}}{\sum_j{e^{x_j}}} = probability$$
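The two steps above can be checked with a toy example (made-up logits):

```python
import math

logits = [2.0, 1.0, 0.1]                # raw activations from the last layer
exps = [math.exp(v) for v in logits]    # step 1: exponentiate (no more negatives)
total = sum(exps)
probs = [e / total for e in exps]       # step 2: each value's share of the total
print([round(p, 3) for p in probs])     # the shares sum to 1
```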
Image models (how do we recognize multiple items?)
import sys
sys.path.append('/home/paperspace/repos/fastai')
import torch
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
PATH = '/home/paperspace/Desktop/data/Planet: Understanding the Amazon from Space/'
list_paths = [f"{PATH}train-jpg/train_0.jpg", f"{PATH}train-jpg/train_1.jpg"]
titles = ["haze primary", "agriculture clear primary water"]
#plots_from_files(list_paths, titles=titles, maintitle="Multi-label classification")
f2 is
f_beta with
beta = 2, which weights false negatives much more heavily than false positives
def f2(preds, targs, start=0.17, end=0.24, step=0.01):
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        return max([fbeta_score(targs, (preds > th), 2, average='samples')
                    for th in np.arange(start, end, step)])
#from planet import f2
metrics = [f2]
Write any metric you like
Custom metrics from the
planet.py file
from fastai.imports import *
from fastai.transforms import *
from fastai.dataset import *
from sklearn.metrics import fbeta_score
import warnings

def f2(preds, targs, start=0.17, end=0.24, step=0.01):
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        return max([fbeta_score(targs, (preds > th), 2, average='samples')
                    for th in np.arange(start, end, step)])

def opt_th(preds, targs, start=0.17, end=0.24, step=0.01):
    ths = np.arange(start, end, step)
    idx = np.argmax([fbeta_score(targs, (preds > th), 2, average='samples')
                     for th in ths])
    return ths[idx]

def get_data(path, tfms, bs, n, cv_idx):
    val_idxs = get_cv_idxs(n, cv_idx)
    return ImageClassifierData.from_csv(path, 'train-jpg', f'{path}train_v2.csv', bs, tfms,
                                        suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')

def get_data_zoom(f_model, path, sz, bs, n, cv_idx):
    tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
    return get_data(path, tfms, bs, n, cv_idx)

def get_data_pad(f_model, path, sz, bs, n, cv_idx):
    transforms_pt = [RandomRotateZoom(9, 0.18, 0.1), RandomLighting(0.05, 0.1), RandomDihedral()]
    tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_pt, pad=sz//12)
    return get_data(path, tfms, bs, n, cv_idx)
f_model = resnet34
label_csv = f'{PATH}train_v2.csv'
n = len(list(open(label_csv))) - 1
val_idxs = get_cv_idxs(n)
We use a different set of data augmentations for this dataset - we also allow vertical flips, since we don’t expect vertical orientation of satellite images to change our classifications.
Here we’ll have 8 flips. 90, 180, 270 and 0 degree. and same for the side. We’ll also have some rotation, zooming, contrast and brightness adjustments.
data.val_ds returns a single item/image, say
data.val_ds[0].
data.val_dl returns a generator (data loader) which returns a mini-batch of items/images. We always get the next mini-batch.
def get_data(sz):
    tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
    return ImageClassifierData.from_csv(PATH, 'train-jpg', label_csv, tfms=tfms,
                                        suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')
PATH = '/home/paperspace/Desktop/data/Planet: Understanding the Amazon from Space/'
os.makedirs('data/planet/models', exist_ok=True)
os.makedirs('cache/planet/tmp', exist_ok=True)
label_csv = f'{PATH}train_v2.csv'
data = get_data(256)
x,y = next(iter(data.val_dl))
y
1 0 0 ... 0 1 1 0 0 0 ... 0 0 0 0 0 0 ... 0 0 0 ... ⋱ ... 0 0 0 ... 0 0 0 0 0 0 ... 0 0 0 1 0 0 ... 0 0 0 [torch.FloatTensor of size 64x17]
list(zip(data.classes, y[0]))
[('agriculture', 1.0), ('artisinal_mine', 0.0), ('bare_ground', 0.0), ('blooming', 0.0), ('blow_down', 0.0), ('clear', 1.0), ('cloudy', 0.0), ('conventional_mine', 0.0), ('cultivation', 0.0), ('habitation', 0.0), ('haze', 0.0), ('partly_cloudy', 0.0), ('primary', 1.0), ('road', 0.0), ('selective_logging', 0.0), ('slash_burn', 1.0), ('water', 1.0)]
One Hot Encoding:
Softmax - probabilities to make 1 choice
one-hot - each column only tracks 1 possible classification. e.g. 3 classes = 3 columns
Index - multi class stored as indices. Taken care of by fastai library.
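A multi-label target like the planet tags above can be encoded by hand to see what the library produces for us (the helper name is made up for illustration):

```python
def multi_hot(tag_string, classes):
    """Encode a space-separated tag string as a 0/1 vector over `classes`."""
    tags = set(tag_string.split())
    return [1.0 if c in tags else 0.0 for c in classes]

classes = ['agriculture', 'clear', 'cloudy', 'primary', 'water']
print(multi_hot('agriculture clear primary water', classes))
# -> [1.0, 1.0, 0.0, 1.0, 1.0]
```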
Sigmoid function
$$ \sigma(\alpha) = \frac{e^\alpha}{1+e^\alpha}$$
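Unlike softmax, sigmoid squashes each activation independently, which is what you want for multi-label targets; a small check (made-up activations):

```python
import math

def sigmoid(a):
    return math.exp(a) / (1 + math.exp(a))

activations = [3.2, -1.5, 0.0]            # one raw activation per label
probs = [sigmoid(a) for a in activations]
print([round(p, 3) for p in probs])       # each lies in (0, 1); they need not sum to 1
```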
plt.imshow(data.val_ds.denorm(to_np(x))[0]*1.4);
How do we use this?
resize the data from 256 down to 64 x 64.
Wouldn’t do this for cats and dogs, because it starts off nearly perfect. If we resized, we destroy the model. Most ImageNet models are designed around 224 which was close to the normal. In this case, since this is landscape, there isn’t that much of ImageNet that is useful for satellite.
So we will start small
sz=64
data = get_data(sz)
What does resize do?
It says: I will not use images larger than sz * 1.3, so go ahead and make new
jpgs where the smallest edge is that size. This will save a lot of time for processing. In general the image resize will take a center crop.
data = data.resize(int(sz*1.3), 'tmp')
Train our model
Note: Training implies improving filters/kernels and weights in Fully connected layers. On the other hand activations are calculated.
learn = ConvLearner.pretrained(f_model, data, metrics=metrics)
To view the model + the layers (only looking at 5)
list(learn.summary().items())[:5]
[('Conv2d-1', OrderedDict([('input_shape', [-1, 3, 64, 64]), ('output_shape', [-1, 64, 32, 32]), ('trainable', False), ('nb_params', 9408)])), ('BatchNorm2d-2', OrderedDict([('input_shape', [-1, 64, 32, 32]), ('output_shape', [-1, 64, 32, 32]), ('trainable', False), ('nb_params', 128)])), ('ReLU-3', OrderedDict([('input_shape', [-1, 64, 32, 32]), ('output_shape', [-1, 64, 32, 32]), ('nb_params', 0)])), ('MaxPool2d-4', OrderedDict([('input_shape', [-1, 64, 32, 32]), ('output_shape', [-1, 64, 16, 16]), ('nb_params', 0)])), ('Conv2d-5', OrderedDict([('input_shape', [-1, 64, 16, 16]), ('output_shape', [-1, 64, 16, 16]), ('trainable', False), ('nb_params', 36864)]))]
lrf = learn.lr_find()
learn.sched.plot()
lr = 0.2
Refit the model
Follow the last few steps on the bottom of the Jupyter notebook.
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
How are the learning rates spread per layer?
[split halfway, split halfway, always last layer only]
lrs = np.array([lr/9,lr/3,lr])
learn.unfreeze() learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
learn.sched.plot_loss()
Structured Data
Related Kaggle competition:
There’s really two types of data. Unstructured and structured data. Structured data - columnar data, columns, etc… Structured data is important in the world, but often ignored by academic people. Will look at the Rossmann stores data.
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.imports import *
from fastai.torch_imports import *
from fastai.structured import *
from fastai.dataset import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler
import operator
PATH = '/home/paperspace/Desktop/data/rossman/'
test = pd.read_csv(f'{PATH}test.csv', parse_dates=['Date'])
def concat_csvs(dirname):
    path = f'{PATH}{dirname}'
    filenames = glob.glob(f"{path}/*.csv")
    wrote_header = False
    with open(f"{path}.csv", "w") as outputfile:
        for filename in filenames:
            name = filename.split(".")[0]
            with open(filename) as f:
                line = f.readline()
                if not wrote_header:
                    wrote_header = True
                    outputfile.write("file," + line)
                for line in f:
                    outputfile.write(name + "," + line)
            outputfile.write("\n")
Feature Space:
- train: Training set provided by competition
- store: List of stores
- store_states: mapping of store to the German state they are in
- state_names: list of German state names
- googletrend: trend of certain google keywords over time, found by users to correlate well with given data
- weather: weather
- test: testing set
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
We’ll be using the popular data manipulation framework pandas. Among other things, pandas allows you to manipulate tables/data frames in python as one would in a database.
We’re going to go ahead and load all of our CSV’s as data frames into the list
tables.
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML
We can use
head() to get a quick look at the contents of each table:
- train: Contains store information on a daily basis, tracks things like sales, customers, whether that day was a holiday, etc.
- store: general info about the store including competition, etc.
- store_states: maps store to state it is in
- state_names: Maps state abbreviations to names
- googletrend: trend data for particular week/state
- weather: weather conditions for each state
- test: Same as training table, w/o sales and customers
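That quick look is one call per table. A stand-in sketch (the real frames come from the CSVs loaded above):

```python
import pandas as pd

# Stand-in for one of the loaded tables; the real data comes from the Rossmann CSVs
tables = [pd.DataFrame({"Store": [1, 1, 2], "Sales": [5263, 5020, 6064]})]

for t in tables:
    print(t.head())   # first five rows of each table, a quick sanity check
```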
This is very representative of a typical industry dataset.
The following returns summarized aggregate information for each table across each field.
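The summarizing code itself is not shown here; plain pandas gets close with DataFrame.describe, shown below on a stand-in frame (the fastai notebook uses its own summary helper, so this is only an approximation):

```python
import pandas as pd

# Stand-in for one of the loaded tables
train = pd.DataFrame({"Store": [1, 2, 3], "Sales": [5263, 6064, 8314]})

# include='all' summarizes numeric and non-numeric columns alike
print(train.describe(include='all'))
```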
#include <db.h> int DB->verify(DB *db, const char *file, const char *database, FILE *outfile, u_int32_t flags);
The
DB->verify() method verifies the integrity of all databases in
the file specified by the file
parameter, and optionally outputs the databases' key/data pairs to the
file stream specified by the outfile
parameter.
The
DB->verify() method does not perform any
locking, even in Berkeley DB environments that are configured with a
locking subsystem. As such, it should only be used on files that are
not being modified by another thread of control.
The
DB->verify() method may not be called after the
DB->open() method is called.
The DB handle may not be
accessed again after
DB->verify() is called, regardless of its
return.
The
DB->verify() method is the underlying method used by the db_verify utility. See the
db_verify utility
source code for an example of using
DB->verify() in an IEEE/ANSI Std
1003.1 (POSIX) environment.
The
DB->verify() method will return DB_VERIFY_BAD if a database is
corrupted. When the DB_SALVAGE flag is specified, the DB_VERIFY_BAD
return means that all key/data pairs in the file may not have been
successfully output. Unless otherwise specified, the
DB->verify()
method returns a non-zero error value on failure and 0 on success.
The database parameter is the database in the file on which the database checks for btree and duplicate sort order and for hashing are to be performed. See the DB_ORDERCHKONLY flag for more information.
The database parameter must be set to NULL except when the DB_ORDERCHKONLY flag is set.
The flags parameter must be set to 0 or the following value:
DB_SALVAGE
Write the key/data pairs from all databases in the file to the file stream named in the outfile parameter. Key values are written for Btree, Hash and Queue databases, but not for Recno databases.
The output format is the same as that specified for the db_dump utility, and can be used as input for the db_load utility.

In addition, the following flags may be set by bitwise inclusively OR'ing them into the flags parameter:
DB_AGGRESSIVE
Output all the key/data pairs in the
file that can be found. By default,
DB->verify() does not assume
corruption. For example, if a key/data pair on a page is marked as
deleted, it is not then written to the output file. When
DB_AGGRESSIVE is specified, corruption is assumed, and any key/data
pair that can be found is written. In this case, key/data pairs that
are corrupted or have been deleted may appear in the output (even if
the file being salvaged is in no way corrupt), and the output will
almost certainly require editing before being loaded into a database.
DB_PRINTABLE
When using the DB_SALVAGE flag, if characters in either the key or data items are printing characters (as defined by isprint(3)), use printing characters to represent them. This flag permits users to use standard text editors and tools to modify the contents of databases or selectively remove data from salvager output.
Note: different systems may have different notions about what characters are considered printing characters, and databases dumped in this manner may be less portable to external systems.
DB_NOORDERCHK
Skip the database checks for btree and duplicate sort order and for hashing.
The
DB->verify() method normally verifies that btree keys and
duplicate items are correctly sorted, and hash keys are correctly
hashed. If the file being verified contains multiple databases using
differing sorting or hashing algorithms, some of them must necessarily
fail database verification, because only one sort order or hashing algorithm can be checked at a time.
DB_ORDERCHKONLY
Perform the database checks for btree and duplicate sort order and for hashing, skipped by DB_NOORDERCHK.
When this flag is specified, a database parameter should also be specified,
indicating the database in the physical file which is to be checked.
This flag is only safe to use on databases that have already
successfully been verified using
DB->verify() with the DB_NOORDERCHK
flag set.
If the database was opened within a database environment, the
environment variable
DB_HOME may be used as the path of the
database environment home.
DB->verify() is affected by any database directory specified using the
DB_ENV->set_data_dir()
method, or by setting the "set_data_dir" string in the environment's
DB_CONFIG
file.
The
DB->verify()
method may fail and return one of the following non-zero errors:
EINVAL
If the method was called after DB->open() was called; or if an invalid flag value or parameter was specified.
Database and Related Methods | http://docs.oracle.com/cd/E17276_01/html/api_reference/C/dbverify.html | CC-MAIN-2015-48 | refinedweb | 726 | 51.38 |
Recently, we started using Jira at work to track some IT related things. Thus, I have quickly had to learn how to administer Jira. One thing that I really wanted to get set up and working well was the ability to respond to an email sent by the system via a reply email and have that filed in the ticket. That wasn’t too terribly hard to set up. First I created a mailbox for Jira to check in our mail system. Then I set up a mail handler to pull the reply emails in. I settled for using a “Add a comment before a specified marker or separator in the email body” handler so that I could provide a regular expression to define how to extract just the reply. The below screenshots show my setup for this.
The Split Regex field below is this /[Ff]{1}rom:[^\n]+{myuser}@{mydomain}\.{extention}/. Replace the {myuser}, {mydomain}, and {extention} parts with the email address of the account that Jira mails as. So, if your Jira system sends email as jira@coolstuff.org the above expressions would look like this /[Ff]{1}rom:[^\n]+jira@coolstuff\.org/. This will split the email where it sees the first line that looks something like this From: Jira [mailto:jira@coolstuff.org] or from: Jira [mailto:jira@coolstuff.org] … which is how Outlook formats its replies.
So, I got that part working great. Now I can go get a doughnut, right? Nope! Turns out every time I would reply, the issue icon and the image attachment in my email signature would get attached to the issue … over and over and over. So, before long I had a veritable glut of the same images attached to the issue. Grrrr!! So, I asked myself, "Self, what can we do about this?" To which I so helpfully replied to myself, "Go check the Atlassian Marketplace, Atlassian Community, and Google for an answer." After a couple of grueling hours of trying to find the answer, I came to the stark conclusion that there wasn't one. Double grrrr!!
After a bit of thinking I decided I could just scan the images against a set of MD5 hashes to exclude when the issue is updated and here is the fruit of my labor. This solution requires having ScriptRunner for Jira installed. If you don’t have it … well, you should. The possibilities are pretty endless with what you can do with it. I created a Script Listener that would respond to the “Issue Updated” event.
And the actual contents of the script file.
import java.security.*;
import com.atlassian.jira.issue.Issue;
import com.atlassian.jira.component.ComponentAccessor;
import com.atlassian.jira.issue.AttachmentManager;
import com.atlassian.jira.util.AttachmentUtils;

// Add new MD5 hashes to the below array to auto remove them when they are attached to the ticket.
// This is helpful to get rid of things like images in email signatures, JIRA issue type icons in the email, etc.
def deleteHashes = [
    "eaf938ae5025889b60029d6d839d19db", // JIRA blue check mark
    "f370264d9a3d1b92666419e6ecc102ef", // email signature logo v1
    "662b051e6082e4499079ddc18e5eb302", // email signature logo v2
    "a4ab3c522859297084064502477effd8"  // Pulse line icon
];

def issue = event.getIssue();
def attachments = issue.getAttachments();
def attachmentFile = null;
def bytes = null;
def md = MessageDigest.getInstance("MD5");
def digest = null;
def hash = "";
def manager = ComponentAccessor.getComponent(AttachmentManager)

for(a in attachments) {
    attachmentFile = AttachmentUtils.getAttachmentFile(a);
    bytes = getBytesFromFile(attachmentFile);
    digest = md.digest(bytes);
    hash = String.format("%032x", new BigInteger(1, digest));
    for(h in deleteHashes) {
        if(hash == h) {
            manager.deleteAttachment(a);
            break;
        }
    }
}

public byte[] getBytesFromFile(File file) throws IOException {
    // Get the size of the file
    long length = file.length();

    // You cannot create an array using a long type.
    // It needs to be an int type.
    // Before converting to an int type, check
    // to ensure that file is not larger than Integer.MAX_VALUE.
    if (length > Integer.MAX_VALUE) {
        log.info("File is too large!");
        // File is too large
        throw new IOException("File is too large!");
    }

    // Create the byte array to hold the data
    byte[] bytes = new byte[(int)length];

    // Read in the bytes
    int offset = 0;
    int numRead = 0;
    InputStream is = new FileInputStream(file);
    try {
        while (offset < bytes.length
               && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
            offset += numRead;
        }
    } finally {
        is.close();
    }

    // Ensure all the bytes have been read in
    if (offset < bytes.length) {
        log.info("Could not completely read file " + file.getName());
        throw new IOException("Could not completely read file " + file.getName());
    }
    return bytes;
}
Now, when I respond to an issue via email if any of the attachments on the issue match any of the MD5 hashes at the top of the script that attachment will get deleted from the issue. And if I find that there are other attachments we start seeing like this on a regular basis all I have to do is add the MD5 hash to the list and save the script … problem solved.
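If you need the hash for a new offending file, any MD5 tool will do: md5sum on Linux, certutil on Windows, or a couple of lines of Python. For example (a quick sketch that hashes in-memory bytes rather than a real logo file):

```python
import hashlib

def md5_hex(data: bytes) -> str:
    # Same 32-character lowercase hex string the listener script compares against
    return hashlib.md5(data).hexdigest()

print(md5_hex(b"example image bytes"))
```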
Now, about that doughnut.
Update: So, the whole editing the script and having others put in MD5 hashes part … yeah, that went over like a lead balloon. Here is an updated version that is much easier to administer.
Pingback: Remove Jira Issue Attachments by MD5 Hash Redux - I am Davin | https://iamdav.in/2018/02/22/remove-jira-issue-attachments-md5-hash/ | CC-MAIN-2021-43 | refinedweb | 859 | 58.08 |
Using the solution of a linear system, and splitting a matrix
Hi there,
I'm fairly new to Sage and Python, so I'm running into some basic problems here that I'd be happy to get your help with.
Here it is, actually, here they are: I'm generating a set of equalities and solving them with solve. A simple example:
import numpy as n;
m=4;
s = list(var('s_%d' % int(i)) for i in range(m));
eqns=[s_0+s_1==1,s_2-s_3==0];
sol=solve(eqns,s,solution_dict=True)[0]
This gives the solutions:
{s_1: -r2 + 1, s_0: r2, s_3: r1, s_2: r1}
My first question is, how do I create a matrix with the solutions? Say, something like:
M = n.zeros((2,2));
for i in range(2):
    for j in range(2):
        M[int(i),int(j)] = sol[s[i]] + sol[s[j+2]]
This is giving me the error: "TypeError: unable to simplify to float approximation"
My second question, would be, given the array M, how do I split it as r1 times a matrix, plus r2 times another matrix, plus a constant matrix? In the above example,
M = [[r1+r2, r1+r2],[r1-r2+1, r1-r2+1]] = r1 [[1,1],[1,1]] + r2 [[1,1],[-1,-1]] + [[0,0],[1,1]]
I'm interested in the matrices multiplying the still unknown coefficients. Maybe I should add that the number of equations in the problem I'm solving is much bigger than in this simple example, and therefore I cannot find this matrix decomposition by simply looking at it.
Thanks for the help! | https://ask.sagemath.org/question/8925/using-the-solution-of-a-linear-system-and-splitting-a-matrix/ | CC-MAIN-2018-13 | refinedweb | 270 | 52.73 |
Mark Hammond wrote:
> I struck a bit of a snag with the Unicode support when trying to use the
> most recent update in a C++ source file.
>
> The problem turned out to be that unicodeobject.h did a #include "wchar.h",
> but did it while an 'extern "C"' block was open. This upset the MSVC6
> wchar.h, as it has special C++ support.

Thanks for reporting this.

> Attached below is a patch I made to unicodeobject.h that solved my problem
> and allowed my compilations to succeed. Theoretically the same problem
> could exist for wctype.h, and probably lots of other headers, but this is
> the immediate problem :-)
>
> An alternative patch would be to #include "wchar.h" in PC\config.h outside
> of any 'extern "C"' blocks - wchar.h on Windows has guards that allow for
> multiple includes, so the unicodeobject.h include of that file will succeed,
> but not have the side-effect it has now.
>
> I'm not sure what the preferred solution is - quite possibly the PC\config.h
> change, but I've included the unicodeobject.h patch anyway :-)
>
> Mark.
>
> *** unicodeobject.h	2000/03/13 23:22:24	2.2
> --- unicodeobject.h	2000/03/14 01:06:57
> ***************
> *** 85,91 ****
> --- 85,101 ----
>   #endif
>
>   #ifdef HAVE_WCHAR_H
> +
> + #ifdef __cplusplus
> + } /* Close the 'extern "C"' before bringing in system headers */
> + #endif
> +
>   # include "wchar.h"
> +
> + #ifdef __cplusplus
> + extern "C" {
> + #endif
> +
>   #endif
>
>   #ifdef HAVE_USABLE_WCHAR_T

I've included this patch (should solve the problem for all included system
header files, since it wraps only the Unicode APIs in extern "C"):

--- /home/lemburg/clients/cnri/CVS-Python/Include/unicodeobject.h	Fri Mar 10 23:33:05 2000
+++ unicodeobject.h	Tue Mar 14 10:38:08 2000
@@ -1,10 +1,7 @@
 #ifndef Py_UNICODEOBJECT_H
 #define Py_UNICODEOBJECT_H

-#ifdef __cplusplus
-extern "C" {
-#endif

 /* Unicode implementation based on original code by Fredrik Lundh,
    modified by Marc-Andre Lemburg (mal@lemburg.com) according to the
@@ -167,10 +165,14 @@
 typedef unsigned short Py_UNICODE;

 #define Py_UNICODE_MATCH(string, offset, substring)\
     (!memcmp((string)->str + (offset), (substring)->str,\
              (substring)->length*sizeof(Py_UNICODE)))

+#ifdef __cplusplus
+extern "C" {
+#endif
+
 /* --- Unicode Type ------------------------------------------------------- */

 typedef struct {
     PyObject_HEAD
     int length;		/* Length of raw Unicode data in buffer */

I'll post a complete Unicode update patch by the end of the week
for inclusion in CVS.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:
Python Pages:
Explanation of Deep and Shallow Copying
When creating copies of arrays or objects one can make a deep copy or a shallow copy. This explanation uses arrays.
Recall array variables in Java are references (some folks say pointers, but there are differences between references and pointers).
Object and array variables refer to the actual object or array. (In pointer terms the object and array variables store the memory address of the actual object or array. I liken this to a house on a street where the only thing in the house is a piece of paper with the address of another house on the street. To beginning programming students this seems odd, but there are valid technical reasons for this approach.)
A shallow copy can be made by simply copying the reference.
public class Ex {

    private int[] data;

    // makes a shallow copy of values
    public Ex(int[] values) {
        data = values;
    }
}

Because the constructor copies only the reference, the new Ex object and the calling code now share one array. For example:

int[] vals = {1, 2, 3};
Ex e = new Ex(vals);
vals[0] = 42; // e's internal data now starts with 42 as well

This is confusing, because I didn't intentionally change anything about the object e refers to.
A deep copy means actually creating a new array and copying over the values.
public class Ex {

    private int[] data;

    // altered to make a deep copy of values
    public Ex(int[] values) {
        data = new int[values.length];
        for (int i = 0; i < values.length; i++)
            data[i] = values[i];
    }
}

The above code shows deep copying. Changes made by the caller to the original array no longer affect the Ex object's data.
CGI Developer's Guide
Chapter 2
The Basics
CONTENTS
- Hello, World!
- Outputting CGI
- Installing and Running Your CGI Program
- A Quick Tutorial on HTML Forms
- Accepting Input from the Browser
- A Simple CGI Program
- General Programming Strategies
- Summary
A few years ago, I was setting up World Wide Web pages for Harvard
college, and I wanted to include a page where people could submit
their comments about the pages. At the time, the Web was young
and the documentation scarce. I, like many others, depended on
the terse documentation and other people's code to learn how to
program CGI. Although this method of learning required some searching,
plenty of experimentation, and a lot of questions, it was very
effective. This chapter is a mirror of my early struggles with
CGI (with several refinements, of course!).
Although gaining a complete understanding and mastery of the Common Gateway Interface takes some time, the protocol itself is fairly simple. Anyone with some basic programming skills and familiarity with the Web is capable of quickly learning how to program fairly sophisticated CGI applications in the same way I and others learned a few years ago.
The objective of this chapter is to present the basics of CGI in a comprehensive and concise manner. Every concept discussed here is covered in greater detail in later chapters. However, upon finishing this chapter, you should be immediately capable of programming CGI applications. Once you reach that point, you have the option of learning the remaining subtle nuances of CGI either by reading the rest of this book or by simply experimenting on your own.
You can reduce CGI programming to two tasks: getting information
from the Web browser and sending information back to the browser.
This is fairly intuitive once you realize how CGI applications
are usually used. Often, the user is presented with a form to
complete, such as the one in Figure 2.1.
Once the user fills out this form and submits it, the information
is sent to a CGI program. The CGI program must then convert that
information into something it understands, process it appropriately,
and then send something back to the browser, whether it is a simple
acknowledgment or the results of a complex database search.
Figure 2.1 : A sample form.
In other words, programming CGI requires understanding how to get input from and how to send output back to the Web browser. What goes on between the input and output stages of a CGI program depends on what the developer wants to accomplish. You'll find that the main complexity of CGI programming lies in that in-between stage; after you figure out how to deal with the input and output, you have essentially accomplished what you need to know to become a CGI developer.
In this chapter, you learn the basic concepts behind CGI input and output as well as other rudimentary skills you need to write and use CGI, including how to create HTML forms and how to call your CGI programs. The chapter covers the following topics:
- The traditional "Hello, world!" program.
- CGI output: sending information back to the Web browser for display.
- Configuring, installing, and running your applications. You learn several different platforms and Web servers.
- CGI input: interpreting the information sent by the Web browser. You are also introduced to some useful programming libraries to help parse this input.
- A simple example: You will step through a simple example that encompasses all of the lessons in this chapter.
- Programming strategies.
Because of the nature of this chapter, I only casually discuss
certain topics. Don't worry; all of these topics are explored
in much more detail in the other chapters.
Hello, World!
You begin with the traditional introductory programming problem. You want to write a program that will display Hello, world! on your Web browser. Before you can write this program, you must understand what information the Web browser expects to receive from CGI programs. You also need to know how to run this program so you can see it in action.
CGI is language-independent, so you can implement this program in any language you want. A few different ones are used here to demonstrate this language independence. In Perl, the "Hello, world!" program looks like Listing 2.1.
Listing 2.1. Hello, world! in Perl.
#!/usr/local/bin/perl
# hello.cgi - My first CGI program
print "Content-Type: text/html\n\n";
print "<html> <head>\n";
print "<title>Hello, world!</title>";
print "</head>\n";
print "<body>\n";
print "<h1>Hello, world!</h1>\n";
print "</body> </html>\n";
Save this program as hello.cgi, and install it in the appropriate place. (If you are not sure where that is, relax; you'll learn this in "Installing and Running Your CGI Program," later in this chapter.) For most people, the proper directory is called cgi-bin. Now, call the program from your Web browser. For most people, this means opening the following Uniform Resource Locator (URL):

http://hostname/directoryname/hello.cgi

where hostname is the name
of your Web server, and directoryname
is the directory in which you put hello.cgi (probably cgi-bin).
Your Web browser should look like Figure 2.2.
Figure 2.2 : Your first CGI program, if all goes well, will display Hello, world!.
Dissecting hello.cgi
There are a couple of things worth mentioning about hello.cgi. First, you're using simple print commands. CGI programs do not require any special file handles or descriptors for output. In order to send output to the browser, simply print to stdout.
Second, notice that the content of the first print statement (Content-Type: text/html) does not show up on your Web browser. You can send whatever information you want back to the browser (an HTML page or graphics or sound), but first, you need to tell the browser what type of data you're sending it. This line tells the browser what sort of information to expect: in this case, an HTML page.
Third, the program is called hello.cgi. It's not always necessary to use the extension .cgi with your CGI program name. Although the source code for many languages also uses extensions, the .cgi extension is not being used to denote language type, but is a way for the server to identify the file as an executable rather than a graphic file or HTML or text file. Servers are often configured to only try to run those files which have this extension, displaying the contents of all others. Although it might not be necessary to use the .cgi extension, it's still good practice.
In summary, hello.cgi consists of two main parts:
- It tells the browser what kind of information to expect (Content-Type: text/html)
- It tells the browser what to display (Hello, world!)
Hello, World! in C
To demonstrate the language-independence of CGI programs, Listing 2.2 contains the equivalent hello.cgi program written in C.
Listing 2.2. Hello, world! in C.
/* hello.cgi.c - Hello, world CGI */
#include <stdio.h>
int main() {
printf("Content-Type: text/html\r\n\r\n");
printf("<html> <head>\n");
printf("<title>Hello, World!</title>\n");
printf("</head>\n");
printf("<body>\n");
printf("<h1>Hello, World!</h1>\n");
printf("</body> </html>\n");
}
Neither the Web server nor the browser care which language you use to write your program. Although every language has advantages and disadvantages as a CGI programming language, it is best to use the language with which you are most comfortable. (A more detailed discussion on choosing your programming language is in Chapter 1, "Common Gateway Interface (CGI).")
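For instance, here is a sketch of the same program in Python (not one of the chapter's listings; the interpreter path in the first line will vary by system):

```python
#!/usr/bin/env python
# hello.cgi - Hello, world! as a Python CGI (a sketch, not one of the chapter's listings)

import sys

def page():
    # The HTTP header, a blank line, then the HTML the browser displays
    return ("Content-Type: text/html\r\n\r\n"
            "<html> <head>\n"
            "<title>Hello, World!</title>\n"
            "</head>\n"
            "<body>\n"
            "<h1>Hello, World!</h1>\n"
            "</body> </html>\n")

if __name__ == "__main__":
    sys.stdout.write(page())
```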
Outputting CGI
You can now take a closer look at how to send information to the
Web browser. As you saw in the "Hello, world!" example,
Web browsers expect two sets of data (see Figure 2.3):
a header that contains information such as the type of information
to display (such as the Content-Type:
line) and the actual information (what shows up on the Web browser).
These two blocks of information are separated by a blank line.
Figure 2.3 : Browsers expect a header and the data from CGI programs, separated by a blank line.
The header is called an HTTP header. It provides important information about the information the browser is about to receive. There are several different types of HTTP headers, and the most common is the one you used previously: the Content-Type: header. You can use different combinations of HTTP headers by separating them with a carriage return and a newline (\r\n). The blank line separating the header from the data also consists of a carriage return and a newline (why you need both is described in detail in Chapter 4). You learn the other HTTP headers in Chapter 4; for now, you focus on the Content-Type: header.
The Content-Type: header describes the type of data the CGI is returning. The proper format for this header is
Content-Type: type/subtype

where type/subtype is a valid multipurpose Internet mail extensions (MIME) type. The most common MIME type is the HTML type: text/html (the type is text and the subtype is html). Table 2.1 lists a few of the more common MIME types you will see; a more complete list and discussion of MIME types is in Chapter 4.
Following the header and the blank line, you simply print the data as you want it to appear. If you are sending HTML, then print the HTML tags and data to stdout following the header. You can send graphics, sound, and other binary files as well simply by printing the contents of the file to stdout. There are some examples of this in Chapter 4.
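That idea can be sketched in Python; the file name and MIME type below are illustrative, and the helper writes to any byte stream so it can be exercised away from a Web server:

```python
import sys

def send_file(path, mime_type, out=None):
    """Emit a Content-Type header, a blank line, then the file's raw bytes."""
    if out is None:
        out = sys.stdout.buffer          # binary data must bypass text encoding
    out.write(("Content-Type: %s\r\n\r\n" % mime_type).encode("ascii"))
    with open(path, "rb") as f:
        out.write(f.read())

# e.g. send_file("logo.gif", "image/gif") from inside a CGI script
```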
Installing and Running Your CGI Program
This section digresses briefly from CGI programming and talks about configuring your Web server to use CGI and installing and running your programs. You learn a few different servers for different platforms here in some detail, but you will want to consult your server documentation for the best instructions.
All servers require space for the server files and space for the HTML documents. In this book, the server area is called ServerRoot and the document area is called DocumentRoot. On UNIX machines, the ServerRoot is typically in /usr/local/etc/httpd/ and the DocumentRoot is typically in /usr/local/etc/httpd/htdocs/. This is by no means necessarily true on your system, however, so make sure you replace all references to ServerRoot and DocumentRoot with your own ServerRoot and DocumentRoot.
When you access files using your Web browser, you specify the file in the URL relative to the DocumentRoot. For example, if you have the file /usr/local/etc/httpd/htdocs/index.html on your machine mymachine.org, you would access that file with the following URL:

http://mymachine.org/index.html
Configuring Your Server for CGI
Most Web servers are preconfigured to use CGI programs. There are generally two things that tell a server whether a file is a CGI application or not:
- A designated directory. Some servers enable you to specify that all files in a designated directory (usually, by default, called cgi-bin) are CGI.
- Filename extensions. Many servers are preconfigured to interpret all files ending in .cgi as CGI.
The designated directory method is somewhat of a historical relic (the earliest servers used this as their sole method for determining which files were CGI programs), but it has several advantages.
- It keeps CGI programs centralized, preventing your other directories from becoming cluttered.
- You are not restricted to any specific filename extension, so you can name files whatever you want. Some servers enable you to designate several different directories as CGI directories.
- It also gives you greater control over who can write CGI. For example, if you maintain a system with several users, and you don't want them to use their own CGI scripts without first auditing the programs for security reasons, you can designate only those files in a restricted, centralized directory as CGI. Users will then have to give you the CGI programs to install, and you can audit the code first to make sure there are no major security problems with the program.
Indicating CGI by filename extension can be useful because of
its flexibility. You are not restricted to one single directory
for CGI programs. Most servers can be configured to recognize
CGI by filename extension, although not all of them are configured
this way by default.
Installing CGI on UNIX Servers
No matter how your UNIX server is configured, you need to take a few steps to make sure your CGI applications run properly. Your Web server will normally be running as a non-existent user (that is, the UNIX user nobody, an account which has no file access rights, and can't be logged into). Consequently, compiled CGI applications should be world-executable and CGI scripts (written in Perl, Bourne shell, or another scripting language) should be both world-executable and world-readable.
If you are using a scripting language such as Perl or Tcl, make sure you specify the full path of your interpreter in the first line of your script. For example, a Perl script using perl in the /usr/local/bin directory should begin with the following line:
#!/usr/local/bin/perl
Some Common UNIX Servers
The NCSA and Apache Web servers have similar configuration files because the Apache server was originally based on the NCSA code. By default, they are configured to think any file in the cgi-bin directory (located by default in ServerRoot) is a CGI program. To change the location of your cgi-bin directory, you can edit the conf/srm.conf configuration file. The format for configuring this directory is
ScriptAlias fakedirectoryname realdirectoryname
where fakedirectoryname is the fake directory name (/cgi-bin) and realdirectoryname is the complete path where the CGI programs are actually stored. You can configure more than one ScriptAlias by adding more ScriptAlias lines.
The default configuration is sufficient for most people's needs. You should edit the line in the srm.conf file anyway to specify the correct realdirectoryname. If, for example, your CGI programs are located in /usr/local/etc/httpd/cgi-bin, the ScriptAlias line in your srm.conf file should resemble the following:
ScriptAlias /cgi-bin/ /usr/local/etc/httpd/cgi-bin/
To access or reference your CGI programs located in this directory, you would use the following URL:

http://hostname/cgi-bin/programname

where hostname is the host name of your Web server and programname is the name of your CGI. For example, suppose you copied the hello.cgi program into your cgi-bin directory (for example, /usr/local/etc/httpd/cgi-bin) on your Web server. To access your CGI, use the following URL:

http://hostname/cgi-bin/hello.cgi
If you want to configure either the NCSA or Apache server to recognize any file with the extension .cgi as CGI, you need to edit two configuration files. First, in the srm.conf file, uncomment the following line:
AddType application/x-httpd-cgi .cgi
This will associate the CGI MIME type with the extension .cgi. Now, you need to modify your access.conf file to enable CGIs to be executed in any directory. To do this, add the ExecCGI option to the Options line. It will probably look something like the following line:

Options Indexes FollowSymLinks ExecCGI
Now, any file with the extension .cgi is considered CGI; access it as you would access any file on your server.
The CERN server is configured in a similar fashion as the NCSA and Apache servers. Instead of ScriptAlias, the CERN server uses the command Exec. For example, in the httpd.conf file, you will see the following line:
Exec /cgi-bin/* /usr/local/etc/httpd/cgi-bin/*
Other UNIX servers are configurable in a similar fashion; check your server's documentation for more details.
Installing CGI on Windows
Most of the servers available for Windows 3.1, Windows 95, and Windows NT are configured using the file-extension method for CGI recognition. Generally, reconfiguring your Windows-based server simply requires running the server's configuration program and making the appropriate changes.
Configuring your server to correctly run scripts (such as Perl) is sometimes tricky. With DOS or Windows, you cannot specify the interpreter on the first line of the script like you can with UNIX. Some servers are preconfigured to associate certain filename extensions with an interpreter. For example, many Windows web servers will assume that files ending in .pl are Perl scripts.
If your server does not do this type of file association, you can define a wrapper batch file that calls both the interpreter and the script. As with the UNIX server, don't install the interpreter in either the cgi-bin directory or in any Web-accessible directories.
Installing CGI on the Macintosh
The two most established server options for the Macintosh are StarNine's WebStar and its MacHTTP predecessor. Both recognize CGIs by looking at the filename's extension.
MacHTTP understands two different extensions: .cgi and .acgi, which stands for asynchronous CGI. Regular CGI programs installed on the Macintosh (with the .cgi extension) will keep the Web server busy until the CGI is finished running, forcing the server to put all other requests on hold. Asynchronous CGI, on the other hand, will enable the server to accept requests even while running.
The Macintosh CGI developer using either of these Web servers should simply use the .acgi extension rather than the .cgi extension whenever possible. This should work with most CGI programs; if it doesn't seem to work, rename the program to .cgi.
Running Your CGI
After you've installed your CGI, there are several ways to run it. If your CGI is an output-only program, such as the Hello, world! program, then you can run it by simply accessing its URL.
Most programs are run as the back end to an HTML form. Before you learn how to get information from these forms, first read a brief introduction on how to create these forms.
A Quick Tutorial on HTML Forms
The two most important tags in an HTML form are the <form> and <input> tags. You can create most HTML forms using only these two tags. In this chapter, you learn these tags and a small subset of the possible <input> types or attributes. A complete guide and reference to HTML forms is in Chapter 3, "HTML and Forms."
The <form> Tag
The <form> tag is used to define what part of an HTML file is to be used for user input. It is how most HTML pages call a CGI program. The tag's attributes specify the program's name and location either locally or as a full URL, the type of encoding being used, and what method is being used to transfer the data to be used by the program.
The following line shows the specifications for the <form> tag:
<FORM ACTION="url" METHOD=[POST|GET]>
The ENCTYPE attribute is fairly unimportant and is usually not included with the <form> tag. For more information on the ENCTYPE attribute, see Chapter 3. For one use of ENCTYPE, see Chapter 14, "Proprietary Extensions."
The ACTION attribute references the URL of the CGI program. After the user fills out the form and submits the information, all of the information is encoded and passed to the CGI program. It is up to the CGI program to decode the information and process it; you learn this in "Accepting Input From the Browser," later in this chapter.
Finally, the METHOD attribute describes how the CGI program should receive the input. The two methods, GET and POST, differ in how they pass the information to the CGI program. Both are discussed in "Accepting Input From the Browser."
For the browser to be able to allow user input, all form tags and information must be surrounded by the <form> tag. Don't forget the closing </form> tag to designate the end of the form. You may not have a form within a form, although you can set up a form that enables you to submit parts of the information to different places; this is covered extensively in Chapter 3.
The <input> Tag
You can create text input bars, radio buttons, checkboxes, and other means of accepting input by using the <input> tag. This section only discusses text input fields. To implement this field, use the <input> tag with the following attributes:
<INPUT TYPE=text NAME="..." VALUE="..." SIZE=... MAXLENGTH=...>
NAME is the symbolic name of the variable that contains the value entered by the user. If you include the VALUE attribute, this text will be placed as the default text in the text input field. The SIZE attribute enables you to specify a horizontal length for the input field as it will appear on the browser. Finally, MAXLENGTH specifies the maximum number of characters the user can input into the field. Note that the VALUE, SIZE, and MAXLENGTH attributes are all optional.
Submitting the Form
If you have only one text field within your form, the user can submit the form by simply typing in the information and pressing Enter. Otherwise, you must have some way for the user to submit the information. The user submits information by using a submit button with the following tag:
<input type=submit>
This tag creates within your form a button labeled Submit. When the user has finished filling out the form, he or she can submit its content to the URL specified by the form's ACTION attribute by clicking the Submit button.
Accepting Input from the Browser
In previous examples, you saw how to write a CGI program that sends information from the server to the browser. In reality, a CGI program that only outputs data does not have many applications (but it does have some; see Chapter 4 for examples). More important is the capability of CGI to receive information from the browser, the feature that gives the Web its interactive nature.
A CGI program receives two types of information from the browser.
- First, it gets various pieces of information about the browser (its type, what it can view, the remote host name, and so on), the server (its name and version, the port it's running on, and so on), and the CGI program itself (the program name and where it's located). The server provides all of this information to the CGI program through environment variables.
- Second, the CGI program can get information entered by the user. This information, after first being encoded by the browser, is sent either through an environment variable (the GET method) or through the standard input (stdin, the POST method).
Environment Variables
Knowing what environment variables are available for the CGI program can be useful, both as a learning aid and as a debugging tool. Table 2.2 lists some of the available CGI environment variables. You can also write a CGI program that prints the environment variables and their values to the Web browser.
In order to write a CGI application that displays the environment variables, you have to know how to do two things:
- Determine all of the environment variables and their corresponding values.
- Print the results to the browser.
You already know how to do the latter. In Perl, the environment variables are stored in the associative array %ENV, which is keyed by the environment variable name. Listing 2.3 contains env.cgi, a Perl program that accomplishes our objective.
Listing 2.3. A Perl program, env.cgi, which outputs all CGI environment variables.
#!/usr/local/bin/perl
print "Content-type: text/html\n\n";
print "<html> <head>\n";
print "<title>CGI Environment</title>\n";
print "</head>\n";
print "<body>\n";
print "<h1>CGI Environment</h1>\n";
foreach $env_var (keys %ENV) {
print "<B>$env_var</B> = $ENV{$env_var}<BR>\n";
}
print "</body> </html>\n";
A similar program can be written in C; the complete code is in Listing 2.4.
Listing 2.4. env.cgi.c in C.
/* env.cgi.c */
#include <stdio.h>
extern char **environ;
int main()
{
char **p = environ;
printf("Content-Type: text/html\r\n\r\n");
printf("<html> <head>\n");
printf("<title>CGI Environment</title>\n");
printf("</head>\n");
printf("<body>\n");
printf("<h1>CGI Environment</h1>\n");
while(*p != NULL)
printf("%s<br>\n",*p++);
printf("</body> </html>\n");
return 0;
}
GET Versus POST
What is the difference between the GET and POST methods? GET passes the encoded input string via the environment variable QUERY_STRING, whereas POST passes it through stdin. POST is the preferable method, especially for forms with a lot of data, because there is no limit to how much information you can send. On the other hand, you are limited with the GET method by the amount of environment space you have. GET has some utility, however; this is discussed in detail in Chapter 5, "Input."
In order to determine which method is used, the CGI program checks the environment variable REQUEST_METHOD, which will be set to either GET or POST. If it is set to POST, the length of the encoded information is stored in the environment variable CONTENT_LENGTH.
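To make this concrete, here is a rough C sketch of the logic just described. It is not taken from the book's libraries; the function name read_raw_input and its error handling are my own, and a real program would validate the values further:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return a malloc'd copy of the raw, still-encoded input, or NULL on
   error.  GET: copy QUERY_STRING; POST: read CONTENT_LENGTH bytes
   from stdin. */
char *read_raw_input(void)
{
    char *method = getenv("REQUEST_METHOD");
    if (method == NULL)
        return NULL;

    if (strcmp(method, "GET") == 0) {
        char *qs = getenv("QUERY_STRING");
        if (qs == NULL)
            return NULL;
        char *copy = malloc(strlen(qs) + 1);
        if (copy != NULL)
            strcpy(copy, qs);
        return copy;
    }

    if (strcmp(method, "POST") == 0) {
        char *len_str = getenv("CONTENT_LENGTH");
        long len = (len_str != NULL) ? atol(len_str) : -1;
        if (len < 0)
            return NULL;
        char *buf = malloc((size_t)len + 1);
        if (buf == NULL)
            return NULL;
        if (fread(buf, 1, (size_t)len, stdin) != (size_t)len) {
            free(buf);
            return NULL;
        }
        buf[len] = '\0';
        return buf;
    }

    return NULL;   /* unsupported method */
}
```

The caller is responsible for freeing the returned string once the decoded values have been extracted from it.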
Encoded Input
When the user submits a form, the browser first encodes the information before sending it to the server and subsequently to the CGI application. When you use the <input> tag, every field is given a symbolic name, which can be thought of as the variable. The value entered by the user can be thought of as the value of the variable.
In order to specify this, the browser uses something called the URL encoding specification, which can be summed up as follows:
- Separate different fields with the ampersand (&).
- Separate name and values with equal signs (=), with the name on the left and the value on the right.
- Replace spaces with pluses (+).
- Replace all "abnormal" characters with a percent sign (%) followed by the two-digit hexadecimal character code.
Your final encoded string will look something like the following:
name1=value1&name2=value2&name3=value3 ...
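Decoding simply reverses these rules. The libraries introduced below handle this for you, but as an illustrative sketch (the function url_decode is my own, not part of cgi-lib.pl or cgihtml), an in-place decoder for a single name or value might look like this in C:

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Decode one URL-encoded value in place: '+' becomes a space and
   "%XX" becomes the character with hexadecimal code XX. */
void url_decode(char *s)
{
    char *out = s;
    while (*s) {
        if (*s == '+') {
            *out++ = ' ';
            s++;
        } else if (*s == '%' && isxdigit((unsigned char)s[1])
                             && isxdigit((unsigned char)s[2])) {
            char hex[3] = { s[1], s[2], '\0' };
            *out++ = (char)strtol(hex, NULL, 16);
            s += 3;
        } else {
            *out++ = *s++;
        }
    }
    *out = '\0';
}
```

Because the decoded text is never longer than the encoded text, decoding in place is safe.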
For example, suppose you had a form that asked for name and age. The HTML used to produce this form is in Listing 2.5.
Listing 2.5. HTML to produce the name and age form.
<html> <head>
<title>Name and Age</title>
</head>
<body>
<form action="/cgi-bin/nameage.cgi" method=POST>
Enter your name: <input type=text name="name"><p>
Enter your age: <input type=text name="age"><p>
<input type=submit>
</form>
</body> </html>
Suppose the user enters Joe Schmoe in the name field and 20 in the age field. The input will be encoded into the following input string:
name=Joe+Schmoe&age=20
Parsing the Input
In order for this information to be useful, you need to be able to parse the information into something your CGI programs can use. You learn strategies for parsing the input in Chapter 5. For all practical purposes, you will never have to think about how to parse the input because several people have already written freely available libraries that do the parsing for you. Two such libraries are introduced in this chapter in the following sections: cgi-lib.pl for Perl (written by Steve Brenner) and cgihtml for C (written by me).
The general idea for most of the libraries written in different languages is to parse the encoded string and place the name and value pairs into a data structure. There is a clear advantage to using a language that has built-in data structures such as Perl; however, most of the libraries for lower-level languages such as C and C++ include data-structure implementations and routines.
Don't worry about understanding every detail of the libraries; what is really important is to learn to use them as tools to make your job as a CGI programmer easier.
cgi-lib.pl
cgi-lib.pl takes advantage of Perl's associative arrays. The function &ReadParse parses the input string and keys each name/value pair by the name. For example, the appropriate lines of Perl necessary to decode the name/age input string just presented would be
&ReadParse(*input);
Now, if you want to see the value entered for "name," you can access the associative array variable $input{"name"}. Similarly, to access the value for "age," you look at the variable $input{"age"}.
cgihtml
C does not have any built-in data structures, so cgihtml implements its own linked list for use with its CGI parsing routines. It defines the structure entrytype as follows:
typedef struct {
char *name;
char *value;
} entrytype;
In order to parse the name/age input string in C using cgihtml, you would use the following:
llist input; /* declare linked list called input */
read_cgi_input(&input); /* parse input and place in linked list */
To access the information for the age, you could either parse through the list manually or use the provided cgi_val() function.
#include <stdlib.h>
#include <string.h>
char *age = malloc(sizeof(char) * strlen(cgi_val(input,"age")) + 1);
strcpy(age,cgi_val(input,"age"));
The value for "age" is now stored in the string age.
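One caution about the fragment above: if the field was not submitted, the lookup may not return a usable string (check cgihtml's documentation for its exact behavior). A defensive copying helper (my own sketch, not part of cgihtml) avoids crashing on a missing field:

```c
#include <stdlib.h>
#include <string.h>

/* Return a malloc'd copy of s, or a copy of "" when s is NULL. */
char *copy_value(const char *s)
{
    if (s == NULL)
        s = "";
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)
        strcpy(copy, s);
    return copy;
}
```

With this helper, the earlier fragment becomes char *age = copy_value(cgi_val(input, "age")); and the string should be freed when no longer needed.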
Chapter 5 goes into more depth for these and other libraries. For now, you're ready to combine your knowledge of input and output to write a full-fledged, yet simple, CGI program.
A Simple CGI Program
You are going to write a CGI program called nameage.cgi that processes the name/age form. The data processing (what I like to call the "in-between stuff") is minimal. nameage.cgi simply decodes the input and displays the user's name and age. Although there is not much utility in such a tool, this demonstrates the most crucial aspect of CGI programming: input and output.
You use the same form as described previously, calling the fields name and age. For now, don't worry about robustness or efficiency; solve the problem at hand using the simplest possible solution. The Perl and C solutions are shown in Listings 2.6 and 2.7, respectively.
Listing 2.6. nameage.cgi in Perl.
#!/usr/local/bin/perl
# nameage.cgi
require 'cgi-lib.pl';
&ReadParse(*input);
print "Content-Type: text/html\r\n\r\n";
print "<html> <head>\n";
print "<title>Name and Age</title>\n";
print "</head>\n";
print "<body>\n";
print "Hello, " . $input{'name'} . ". You are\n";
print $input{'age'} . " years old.<p>\n";
print "</body> </html>\n";
Listing 2.7. nameage.cgi in C.
/* nameage.cgi.c */
#include <stdio.h>
#include "cgi-lib.h"
int main()
{
llist input;
read_cgi_input(&input);
printf("Content-Type: text/html\r\n\r\n");
printf("<html> <head>\n");
printf("<title>Name and Age</title>\n");
printf("</head>\n");
printf("<body>\n");
printf("Hello, %s. You are\n",cgi_val(input,"name"));
printf("%s years old.<p>\n",cgi_val(input,"age"));
printf("</body> </html>\n");
return 0;
}
Note that these two programs are almost exactly equivalent. Each contains a parsing routine that occupies only one line yet handles all of the input (thanks to the respective library routines). The output is essentially a glorified version of your basic Hello, world! program.
Try running the program by filling out the form and pressing the Submit button. Assuming you enter Eugene for name and 21 for age, your result should resemble Figure 2.4.
Figure 2.4: The result of the nameage.cgi program.
General Programming Strategies
You now know all of the basic concepts necessary to program CGI. When you understand how CGI receives information and how it sends it back to the browser, the actual quality of your final product depends on your general programming abilities. Namely, when you program CGI (or anything for that matter), keep the following qualities in mind:
- Simplicity
- Efficiency
- Generality
The first two qualities are fairly common: try to make the code as readable and as efficient as possible. Generality applies more to CGI programs than to other applications. You will find as you start developing your own CGI programs that there are a few basic applications that you and everyone else want to do. For example, one of the most common and obvious tasks of a CGI program is to process a form and e-mail the results to a certain recipient. You might have several different forms you want processed, each with a different recipient. Instead of writing a CGI program for each different form, you can save time by writing a more general CGI program that works for all of the forms.
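To make the "general form-mailer" idea concrete, here is a minimal C sketch of the reusable part: composing one message body from arbitrary field names and values. It is illustrative only; it avoids any particular CGI library, the function name compose_mail is my own, and it does not actually send mail:

```c
#include <stdio.h>
#include <string.h>

/* Compose a "To: <recipient>" header followed by one "name = value"
   line per field into buf.  Returns the number of characters written,
   or -1 if the buffer is too small. */
int compose_mail(char *buf, size_t bufsize, const char *recipient,
                 const char *names[], const char *values[], int nfields)
{
    int n = snprintf(buf, bufsize, "To: %s\n", recipient);
    if (n < 0 || (size_t)n >= bufsize)
        return -1;
    for (int i = 0; i < nfields; i++) {
        int m = snprintf(buf + n, bufsize - (size_t)n, "%s = %s\n",
                         names[i], values[i]);
        if (m < 0 || (size_t)(n + m) >= bufsize)
            return -1;
        n += m;
    }
    return n;
}
```

A real program would pull the recipient from a hidden form field, fill names and values from the parsed input, and hand the finished text to a mail program; only that glue changes per form, which is exactly the generality argued for above.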
By touching upon all of the basic features of CGI, I have provided you with enough information to start programming CGI. However, in order to become an effective CGI developer, you need to have a deeper understanding of how CGI communicates with the server and the browser. The rest of this book focuses on the details that are skimmed over in this chapter and discusses strategies for application development and the advantages and limitations of the protocol.
Summary
This chapter rapidly introduced the basics behind CGI programming. You create output by formatting your data correctly and printing to stdout. Receiving CGI input is slightly more complex because it must be parsed before it can be used. Fortunately, several libraries already exist that do the parsing for you.
You should feel comfortable programming CGI applications at this point. The rest of this book is devoted to providing more details about the specification and offering tips and strategies for programming advanced, sophisticated applications. | http://www.webbasedprogramming.com/CGI-Developers-Guide/ch2.htm | CC-MAIN-2022-05 | refinedweb | 5,559 | 63.19 |
The Art of Java-Based HTTP Data Service Implementation and Testing (Part 2)
Want to see how to incorporate end-to-end testing for your Java HTTP data services? Take a look at how ActFramework and some associated tools can help.
In the previous article of this HTTP data service implementation and testing series, we walked through the process of creating a simple data service in Java with a built-in fully automated testing facility. In today's post, we will flesh out the project with RESTful service endpoints for a simple TODO task management system. We will also demonstrate how to create automated testing cases for all service endpoints. Here are the service endpoints we will code in today's journey:
- GET /todos: get all TODO items
- GET /todos/?q=?: query TODO items whose description matches the value passed in by "q"
- GET /todos/{id}: return the TODO item specified by ID
- POST /todos: add one TODO item
- DELETE /todos/{id}: remove the TODO item specified by ID
So let's start by creating our project. Type the following command in your console to generate the project:
mvn archetype:generate -B \
  -DgroupId=demo.todo \
  -DartifactId=todo-service \
  -DarchetypeGroupId=org.actframework \
  -DarchetypeArtifactId=archetype-simple-restful-service \
  -DarchetypeVersion=1.8.8.5
Now open the project in your IDE (we recommend using IntelliJ IDEA (Community Edition) for all coding exercises in this post). Once you are done, you should be able to see something like this:
Open the "pom.xml" file in your IDE and add the dependency of the database access plugin:
<dependency>
  <groupId>org.actframework</groupId>
  <artifactId>act-eclipselink</artifactId>
</dependency>
Here, we are using act-eclipselink, which is backed by EclipseLink, a JPA library, to provide database access capabilities for our TODO service. It can be swapped for act-hibernate as a drop-in replacement; no code needs to change with this dependency update.
The Server class is not used in the TODO application, so just remove that file and add the model class Todo into the project:
package demo.todo;

import act.util.SimpleBean;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity(name = "todo")
public class Todo implements SimpleBean {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Integer id;

    public String desc;
}
Here we have Todo implement SimpleBean so we can declare all fields as public; ActFramework will generate the Getter/Setter methods for the Todo class. When code invokes something like todo.desc = "Task A" outside of the Todo class, the invocation will be replaced with todo.setDesc("Task A"); likewise, String s = todo.desc will be replaced with String s = todo.getDesc();
The next class we need to add into the project is our RESTful service, TodoService:
package demo.todo;

import act.controller.annotation.UrlContext;
import act.db.jpa.JPADao;
import act.util.JsonView;
import org.osgl.mvc.annotation.DeleteAction;
import org.osgl.mvc.annotation.GetAction;
import org.osgl.mvc.annotation.PostAction;
import org.osgl.util.S;

import java.util.List;
import javax.inject.Inject;

@JsonView
@UrlContext("/todos")
public class TodoService {

    @Inject
    private JPADao<Integer, Todo> dao;

    @GetAction
    public List<Todo> list(String q) {
        return S.blank(q) ? dao.findAllAsList() : dao.q("desc like", q).fetch();
    }

    @GetAction("{id}")
    public Todo findById(int id) {
        return dao.findById(id);
    }

    @PostAction
    public int create(Todo todo) {
        dao.save(todo);
        return todo.id;
    }

    @DeleteAction("{id}")
    public void remove(int id) {
        dao.deleteById(id);
    }
}
Now we can start the app and try our TODO service endpoints. You can either launch the app in the IDE or via the mvn command.
To launch the app in IDEA, open the AppEntry class and run it as shown in the following screenshot:
To launch the app in a console, type the following mvn command:
mvn clean compile act:run
After the app has been launched, we can test it with an HTTP client tool such as Postman. Here, though, I will use a simple console tool named httpie to test our service endpoints.
First, create a TODO item:
luog@luog-X510UQR:~$ http POST localhost:5460/todos desc='Task A'
HTTP/1.1 201 Created
Content-Length: 12
Content-Type: application/json
Date: Wed, 30 May 2018 07:40:58 GMT
Server: act/1.8.8-RC8

{
    "result": 1
}
From the console log, we see the service returned successfully with the ID of the new TODO item.
Now that we have a record in the database, we can test the service endpoint that returns all TODO items in the system:
luog@luog-X510UQR:~$ http localhost:5460/todos
HTTP/1.1 200 OK
Content-Length: 26
Content-Type: application/json
Date: Wed, 30 May 2018 07:41:53 GMT
Server: act/1.8.8-RC8

[
    {
        "desc": "Task A",
        "id": 1
    }
]
Get the exact TODO item with the ID:
luog@luog-X510UQR:~$ http localhost:5460/todos/1
HTTP/1.1 200 OK
Content-Length: 24
Content-Type: application/json
Date: Wed, 30 May 2018 07:42:28 GMT
Server: act/1.8.8-RC8

{
    "desc": "Task A",
    "id": 1
}
Delete the TODO item by ID and verify the delete operation:
luog@luog-X510UQR:~$ http DELETE localhost:5460/todos/1
HTTP/1.1 204 No Content
Content-Type: application/json
Date: Wed, 30 May 2018 07:44:53 GMT
Server: act/1.8.8-RC8

luog@luog-X510UQR:~$ http localhost:5460/todos
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json
Date: Wed, 30 May 2018 07:44:59 GMT
Server: act/1.8.8-RC8

[]
Looks great! We're done with the implementation, and the manual testing shows the functions are all good.
Now we are going to implement fully automated end-to-end testing for our TODO application. The tool we are using is act-e2e, a plugin designed to enable easy implementation of end-to-end testing for ActFramework applications. Here is the feature list of act-e2e:
- End-to-end tests. Meaning tests are done by sending requests to the application via HTTP channel and verifying the response sent back by the application. The test is completely isolated from the application implementation.
- Test cases are defined in YAML files and are super easy to read and write. It just needs to define the request and the rules to verify the response — no need to understand the internal data structure of the application.
- Test environment management, including:
- Clear data before starting a test scenario
- Load test data before starting a test scenario
- The test data is defined in a YAML file called fixtures.
- Other tools
- Request template: It might need to define common attributes for all requests, in which case a request template can be defined and referenced by other requests.
- Last response data cache: Sometimes, it might need to refer to the responded data of the last interaction (e.g. ID of a newly created record) to construct the request of the next iteration.
The Scenario file (normally src/main/resources/e2e/scenarios.yml) is the center of e2e tests. Its structure is hierarchical: a scenario has a description and a list of interactions, and each interaction defines a request and the rules used to verify the response.
We are not going to enumerate all the syntax and semantics of the Scenario file used in act-e2e. Instead, we will take the journey of creating test scenarios for our TODO application, which should cover most of what goes into e2e Scenario files in an easy way.
Now let's open the src/main/resources/e2e/scenarios.yml file and remove its old content, which was created for the removed Service class. Then let's add a scenario with our first interaction for our TODO app:
Scenario(Main):
  description: Test TODO service
  interactions:
    - description: Add one todo item
      request:
        method: POST
        url: /todos
        params:
          desc: Task A
      response:
        json:
          result:
            - exists: true
It's pretty easy to tell that the first interaction tests the TODO-creation service endpoint. The request part is fairly straightforward, so let's put some notes on the response part: the json entry means the response shall be JSON data, and it must have a field named result. To understand the rationale behind that, let's go back to our TODO item creation service endpoint code:
@PostAction
@Transactional
public int create(Todo todo) {
    dao.save(todo);
    return todo.id;
}
At first glance it doesn't look right, as the service endpoint actually returns an integer, so where did that result field come from? The reason is that we have the @JsonView annotation on the TodoService class, which directs the framework to generate a valid JSON response for each request that returns data. Since a primitive value has no structure of its own, the framework wraps it in a "result" field, generating something like {"result": 1} as the response; that's the reason we wrote the response verification spec above.
Since we have the first interaction created, we can use the /~/e2e endpoint provided by the act-e2e plugin to test our scenarios file. Open the browser and navigate to http://localhost:5460/~/e2e; it should display something like:
This means our first run passed, which is good. Let's move ahead to our next interaction: copy the following code into your scenarios.yml file:
- description: Fetch todo item added in last interaction
  request:
    method: GET
    url: /todos/${last:result}
  response:
    json:
      desc: Task A
This time, the request deserves a little explanation. The aim is to send a request that fetches the TODO item created in the last interaction. The URL should be /todos/{id}, where id is the value of the result field of the JSON data returned in the last response, so we use the special notation ${last:result} to fetch that value in place. The response spec is pretty simple: it expects a JSON response with desc set to "Task A", exactly what we submitted in the last interaction when creating the item.
Refresh the browser and we should get something like:
The next interaction is to get the TODO list. Add the following code into your scenarios.yml file:
- description: Fetch todo item list
  request:
    method: GET
    url: /todos
  response:
    json:
      size: 1
      0:
        desc: Task A
The response spec of this one is a bit interesting. We want to verify the data looks like:
HTTP/1.1 200 OK
Content-Length: 26
Content-Type: application/json
Date: Wed, 30 May 2018 07:41:53 GMT
Server: act/1.8.8-RC8

[
    {
        "desc": "Task A",
        "id": 1
    }
]
In our response spec, we specified that the data should be JSON; size: 1 means that if it is a JSON array, it shall contain one element, and the 0 part specifies the verification rules for the first element in the array, which shall have a desc property of "Task A".
Now refresh the browser; it should show that all three interactions passed:
The next interaction tests the search endpoint, which is literally the same URL as our get-TODO-list service, but with the query parameter q specified:
- description: Search todo item list
  request:
    method: GET
    url: /todos
    params:
      q: A
  response:
    json:
      size: 1
      0:
        desc: Task A
This time, when we reload the browser, we get an oops:
Going back to the app console, we should find the following error message:
[FAIL] Search todo item list
error running scenario: Main
org.osgl.exception.UnexpectedException: Cannot verify value[0] with test [1]
	at org.osgl.util.E.unexpected(E.java:179)
	at act.e2e.Scenario.verifyValue(Scenario.java:569)
	at act.e2e.Scenario.verifyList(Scenario.java:509)
	at act.e2e.Scenario.verifyBody(Scenario.java:440)
	at act.e2e.Scenario.verify(Scenario.java:406)
Looks like it returned an empty JSON array. Hmm... what causes this discrepancy? The code is actually missing a % before the letter A in the query parameter q, as SQL expects % wildcards in the text when doing a like query. So let's add that and try again:
This is an even more shocking result, but actually not that scary: I didn't realize that % is a reserved character in YAML files. Let's wrap %A in quotes, so that your interaction now looks like this:
- description: Search todo item list
  request:
    method: GET
    url: /todos
    params:
      q: '%A'
  response:
    json:
      size: 1
      0:
        desc: Task A
Now refresh the browser; it gives us the green pass:
The last two interactions need to be added together: one deletes a TODO item by ID, and the other verifies the effect of the deletion:
- description: Delete todo item by ID
  request:
    method: DELETE
    url: /todos/${last:0.id}
- description: Verify delete effect
  request:
    method: GET
    url: /todos
  response:
    json:
      size: 0
Cool! No drama, it just passed:
So now let's say our TODO application is all done, with services implemented plus a set of fully automated end-to-end test cases! Let's take a look at code statistics:
luog@luog-X510UQR:/tmp/2/todo-service$ loc src
--------------------------------------------------------------------------------
 Language             Files        Lines        Blank      Comment         Code
--------------------------------------------------------------------------------
 XML                      1          115           20            7           88
 Java                     3           79           16            7           56
 YAML                     1           53            0            0           53
--------------------------------------------------------------------------------
 Total                    5          247           36           14          197
--------------------------------------------------------------------------------
Well, with just 56 lines of Java source code plus 53 lines of YAML, we get a workable and testable Java RESTful service with 5 endpoints! (The 88 lines of XML are the logback configuration that comes with the Maven archetype.) Who said Java can't be an agile language?!
To wrap up our story, we need to show how act-e2e helps in a CI service, as we don't expect CI to run a browser in order to test our scenarios, right? Here is how to get it done in your CI: just use the Maven command mvn -q clean compile act:e2e. It launches your application and then runs the e2e tests on whatever is defined in your scenario files:
luog@luog-X510UQR:/tmp/2/todo-service$ mvn -q clean compile act:e2e
Listening for transport dt_socket at address: 5005
(ActFramework banner)
  powered by ActFramework r1.8.8-RC8-7ed4

   version: v1.0-SNAPSHOT-db6c
  scan pkg: base
  base dir: /tmp/2/todo-service
       pid: 31412
   profile: e2e
      mode: DEV
       zen: Flat is better than nested.

2018-05-31 22:00:08,558 INFO a.Act@[main] - loading application(s) ...
2018-05-31 22:00:08,579 INFO a.a.App@[main] - App starting ....
2018-05-31 22:00:08,839 WARN a.h.b.ResourceGetter@[main] - URL base not exists: META-INF/resources/webjars
2018-05-31 22:00:08,863 WARN a.a.DbServiceManager@[main] - DB configuration not found. Will try to init default service with the sole db plugin: act.db.eclipselink.EclipseLinkPlugin@37ddb835
2018-05-31 22:00:10,529 WARN a.m.MailerConfig@[main] - smtp host configuration not found, will use mock smtp to send email
2018-05-31 22:00:11,041 WARN a.Act@[jobs-thread-2] - No data source user configuration specified. Will use the default 'sa' user
2018-05-31 22:00:11,041 WARN a.Act@[jobs-thread-2] - No database URL configuration specified. Will use the default h2 inmemory test database
2018-05-31 22:00:11,041 WARN a.Act@[jobs-thread-2] - JDBC driver not configured, system automatically set to: org.h2.Driver
2018-05-31 22:00:11,205 INFO o.xnio@[main] - XNIO version 3.3.8.Final
2018-05-31 22:00:11,232 INFO o.x.nio@[main] - XNIO NIO Implementation Version 3.3.8.Final
2018-05-31 22:00:11,392 INFO a.Act@[main] - network client hooked on port: 5460
2018-05-31 22:00:11,394 INFO a.Act@[main] - CLI server started on port: 5461
2018-05-31 22:00:11,396 INFO a.Act@[main] - app is ready at:
2018-05-31 22:00:11,397 INFO a.Act@[main] - it takes 4680ms to start the app
2018-05-31 22:00:11,449 INFO a.a.App@[jobs-thread-2] - App[todo-service] loaded in 2870ms
2018-05-31 22:00:11,455 INFO a.a.ApiManager@[jobs-thread-5] - start compiling API book
2018-05-31 22:00:11,606 INFO a.a.ApiManager@[jobs-thread-5] - API book compiled

Start running E2E test scenarios
================================================================================
MAIN Test TODO service
--------------------------------------------------------------------------------
[EL Info]: 2018-05-31 22:00:12.769--ServerSession(353000845)--EclipseLink, version: Eclipse Persistence Services - 2.7.1.v20171221-bd47e8f
[EL Info]: connection: 2018-05-31 22:00:12.94--ServerSession(353000845)--/file:/tmp/2/todo-service/./_default login successful
[PASS] Add one todo item
[PASS] Fetch todo item added in last interaction
[PASS] Fetch todo item list
[PASS] Search todo item list
[PASS] Delete todo item by ID
[PASS] Verify delete effect
--------------------------------------------------------------------------------
It takes 0s to run this scenario.
2018-05-31 22:00:13,391 INFO a.a.App@[Thread-6] - App shutting down ....
Summary
In this post, we went through the whole process of creating a workable and testable RESTful TODO service. It took no more than 20 minutes to get it working, in less than 60 lines of Java code. This is the core value of ActFramework: it focuses on expressiveness and makes web developers' lives easier.
See this discussion on Google Plus for information on other implementations for other editors.
Check the README on the GitHub page.
One of the common misunderstandings about JavaScript is that it does not provide encapsulation, and therefore is not fully capable of Object Oriented Programming. While ECMAScript 3 does not have syntax to specify which members of an object are private or read-only, this is addressed in ECMAScript 5 with the new Object API. However, even without the new syntax, it is perfectly possible to achieve encapsulation in JavaScript objects.
The key is understanding how scope works in JavaScript. The main thing to keep in mind is that JavaScript is function scoped: variables are visible anywhere within a function, including within functions which are defined within that function. This is the property of ‘closure’.
The other important thing to understand is that JavaScript does not have Classes, but rather constructor functions. So when you say 'var dog = new Dog()', what really happens is that the Dog constructor function gets called, with 'this' set to a fresh object whose prototype is the constructor's 'prototype' property. The prototype is simply another object, which defaults to a plain old Object, but can be extended or replaced in code.
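A minimal sketch of the idea (Dog and bark are just illustrative names, not from any real API):

```javascript
// A constructor function; by convention it is capitalized
function Dog() {
}

// The prototype defaults to a plain object, and we can add members to it
Dog.prototype.bark = function () {
    return 'Woof!';
};

var dog = new Dog();
```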
The effect of this is that the dog object will be able to use members which exist in the prototype object, so dog.bark() will work.
So how do we get ‘private’ members in objects? The trick is to use the closure property of the constructor function itself, and define some accessor methods within it:
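Something like this (again, the names are illustrative):

```javascript
function Dog(name) {
    // 'name' is trapped in this function's scope (a closure);
    // the only way to reach it from outside is via these accessors
    this.getName = function () {
        return name;
    };
    this.setName = function (newName) {
        name = newName;
    };
}

var rex = new Dog('Rex');
// rex.name is undefined -- the only access is through the accessors
```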
So now your dogs can be given a new name when they are created, but the name cannot be directly accessed or changed from the outside.
If you want to make private variables which are not arguments to the constructor, you can do so by simply declaring them at the top of the constructor function:
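For example ('legs' is an invented example variable):

```javascript
function Dog(name) {
    // a private variable which is not a constructor argument
    var legs = 4;

    this.getName = function () { return name; };
    this.getLegs = function () { return legs; };
}
```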
So we could continue in this way, and declare all the methods of the Dog inside the constructor function, and enjoy the benefits of having encapsulated properties. That would work fine, but the downside is that these functions are re-created every time the constructor is called. This is a waste of memory. I'm not sure how much of an issue this is in reality unless you're creating thousands of instances, but it is something to avoid if possible.
While it is not possible to completely avoid some duplicate functions, the memory used can be mitigated as follows: if you make getters/setters for all the properties that you want to be private, then you can define the rest of the methods on the prototype to use those getters/setters. This way the only duplicated functions are the getters/setters, which should be very lightweight (a couple of lines each), and shouldn't result in memory issues, unless you're making a particle system or something.
Here is an example of this:
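A sketch along those lines (illustrative names):

```javascript
function Dog(name) {
    // only these lightweight accessors are created per-instance
    this.getName = function () { return name; };
    this.setName = function (newName) { name = newName; };
}

// Heavier methods live on the prototype, shared by all instances,
// and reach the private state through the accessors
Dog.prototype.introduce = function () {
    return 'My name is ' + this.getName();
};
```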
So this way you have the protection of encapsulated state, and avoid duplicating all your methods in every instance, thanks to prototypal inheritance. Now getters and setters may be distasteful to you, but this is the best solution to the issue that I can see, and it has the additional benefit of adding a layer of abstraction to your code, so you could change the way the state values are implemented without changing a bunch of property references throughout the class.
As always, for inheritance to work with non-empty constructors, it is important that you explicitly call the prototype’s constructor:
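For example (Animal and Dog are illustrative):

```javascript
function Animal(name) {
    this.getName = function () { return name; };
}

function Dog(name) {
    // explicitly invoke the parent's constructor on this new instance
    Animal.call(this, name);
}

// set up the prototype chain
Dog.prototype = new Animal();
Dog.prototype.bark = function () { return 'Woof!'; };
```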
So we have private variables which can only be accessed by the get/set functions we declared in the constructor. But what about private functions? They too can be declared in the constructor function as inner functions:
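For instance ('capitalize' is an invented private helper):

```javascript
function Dog(name) {
    // a private function: callable here, invisible from outside
    function capitalize(s) {
        return s.charAt(0).toUpperCase() + s.slice(1);
    }

    this.getName = function () {
        return capitalize(name);
    };
}
```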
The limitation here is that while such inner functions can be called by other functions which are defined within the constructor, they cannot be called by prototype methods. You could put all methods which need to access the private methods also inside your constructor, but then you’re back to the memory issue.
A way around this is to wrap the entire definition of the constructor and prototype in a closure function, which contains the private methods. This is basically the module pattern, and indeed if you are defining a constructor function inside a module, you can take advantage of the hiding effect of the module.
But there is an important gotcha here: these methods (and any other vars they access) are static: they are shared between all instances created with the constructor function contained within the same module. So you need to be aware of this. This is not a problem if you have static constants and ‘pure functions’ without side-effects encapsulated in your module. But it is potentially dangerous that one instance could call a method which could affect other instances… a kind of state contamination which is best avoided. Static and stateful just don’t go well together.
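A sketch of the module pattern, including the shared-static gotcha just described (all names invented):

```javascript
var Dog = (function () {
    // static: shared by ALL instances created by this constructor
    var dogCount = 0;

    // a private "pure" helper with no side-effects -- safe to share
    function capitalize(s) {
        return s.charAt(0).toUpperCase() + s.slice(1);
    }

    function Dog(name) {
        dogCount += 1;
        this.getName = function () { return capitalize(name); };
    }

    Dog.prototype.census = function () {
        // every instance sees the same counter -- the gotcha in action
        return dogCount;
    };

    return Dog;
}());
```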
There’s also the issue that the module pattern may be too effective in preventing behaviour modification, and undermine the dynamic nature of the language.
Perhaps private instance methods are not absolutely necessary, and may even be detrimental. AFAIK Smalltalk does not have them either, and takes a similar approach to the one outlined here: encapsulating state by exposing only methods (strictly speaking, 'messages' in Smalltalk), not properties. And Smalltalk is the archetypal Object Oriented language.
To be honest, I've never really found myself feeling the need for all this privacy, but I'm excited to have found a way to do it without using a third party Class system for JavaScript.
If you are a web developer, you have most likely thought to yourself occasionally, “I really wish I didn’t have to hit refresh a million times a day to see my changes take effect…”. Well, finally there is an answer: LiveReload comes to the rescue with a tool which automatically refreshes the browser whenever some of the files used by it are changed.
The 2.0 version is a nice Mac App, which I am using at home, but I also tried out version 1.x which is distributed as a RubyGem, and it worked fine on Ubuntu… and it apparently works fine on Windows too (after you have installed Ruby… see instructions on the GitHub readme).
After you have installed the app or gem, you can either embed a JavaScript on your page, or install browser plugins. I opted for the plugin route, which adds a button to your browser chrome allowing you to enable LiveReload for the current URL. Note that it is necessary to restart the browser completely for it to work. This means on Firefox you will need to restart twice: once to install the plugin, and again to get it working. Another issue I saw with Firefox was that the button didn't show up on the addons toolbar, so I had to 'customize' my toolbars and add the 'LR' button which was now available as an option.
The other thing you need to do is tell LiveReload which folders to watch for changes. On the Mac App, you can use the Finder to select the folders, while with the RubyGem you need to give the folders as command line arguments.. or just type ‘livereload’ in the folder you want to watch, if there is only one.
Another benefit of LiveReload is that you can hitch multiple browsers up at once, to verify that your code behaves consistently across browsers. Great for cross-browser testing while you work.
Furthermore, when you are writing specs or unit tests for your code, you can have the tests running in another browser, and watch as they pass or fail, all without leaving your editor.
Finally, the version 2 has options to auto-compile SASS, Less, CoffeeScript and similar meta-languages which compile into HTML, CSS, or JavaScript. While the Mac App is initially free to use (it is still beta, but I did not experience any problems), I see that the update to the final 2.0 version will require some payment. However, if you just want auto-refreshing on changes, without pre-compilation, then you can continue to use the 1.0 version (or presumably the Beta App ?).
In case you are wondering how LiveReload works, I took a gander at the JavaScript which the plugin embeds, and it uses WebSockets to receive notifications from the LiveReload app, which starts up a socket server.
I think LiveReload is going to make my experience as a web developer much less annoying and more Zen-like :)
ps. I am not being paid by LiveReload to say any of this. If you know of any other solutions to this perennial problem, I would certainly be interested to hear of them!
Here is how I do it …
I recommend using pathogen to manage plugins, as then you can simply clone the plugin repos into your .vim/bundle directory and you’re done. (and run ‘:Helptags’ to install help).
While this is not bad, I personally prefer another Javascript syntax definition file, which provides a more complete set of keywords (eg. browser events such as ‘onmouseover’), and thus highlights them differently. To use it, download and copy it to
~/.vim/syntax/javascript.vim
Now when viewing the same .js file, you will notice a few differences:
Another feature which this syntax file adds is the ability to fold functions. You can fold a function by positioning the cursor within it and typing ‘zc’. Then it will collapse to a single blue line with the function definition highlighted in it, and the number of lines contained within the fold. This can be a handy way to defocus attention from functions you are not interested in. It works in nested functions as well.
I generally want Javascript to have spaces instead of tabs, with 4 spaces for each
indent level. This can be achieved by adding the following content as the file
.vim/after/ftplugin/javascript.vim
" set whitespace for javascript to always use 4 spaces instead of tabs setlocal expandtab setlocal shiftwidth=4 setlocal shiftround setlocal tabstop=4
Vim’s default indent file for javascript merely sets the indentation to cindent… this is not very effective. For best results, go get the web-indent plugin.
Then you can use the builtin indent commands (essentially '=' in command mode, combined with a motion, e.g. 'gg=G' for the whole file, or just select some text in visual mode and hit '=').
I just managed to submit my JS10K entry in time, having spent a lot of time implementing compression hacks like converting the code and data into images using node-canvas. Turns out I may not have needed to worry about that as the competition now lets you submit a .zip file, and only considers the size of the .zip file, not the files within it. Somehow I missed that. Oh well.
Here is a link to one of my favorite lifeforms, the Frothing Puffer:
In order to meet the size restriction of the competition, I had to leave out some functionality that I developed earlier, such as thumbnails of the lifeforms. So here is the uncut version, with thumbnails, and all of the lifeforms in it.
Actually, there are a couple of features in the competition version which are not in this version: The ability to link to a pattern using the URL, and also the ability to edit the seed pattern (albeit in text form). I have thoughts of making a version with a more visual editor, which would let you turn on cells by clicking on them, and perhaps save and share them with others. But I’m not sure if I’ll get around to it… life is short :P
I must give credit to Stephen Silver's Life Lexicon, where I obtained the seed patterns. As always, Da Code is available on GitHub. Note that the competition version is in the JS10K branch.
Just wanted to take note of how I did this, in case it helps anyone, or if I forget :)
First I installed Homebrew, as I’m kind of sick of MacPorts and Fink (specifically how slow and out of date they tend to be). This was a matter of clicking on the ‘install homebrew today’ button, which linked me to their github page… where I found the following instructions.
/usr/bin/ruby -e "$(curl -fsSL)"
So as it turns out this did not work, and instead I got an error message about CA certs. Fortunately it did mention that I could add the -k option to turn off strict cert checking, so I did this instead:
/usr/bin/ruby -e "$(curl -fsSLk)"
And it worked.
So I went over to the MongoDB site and specifically their Quickstart OS X page. Since I have homebrew now I did,
brew update
brew install mongodb
Note that I did not need to use 'sudo' before either of these commands, which is nice. After installing, it instructed me to issue the following commands:
mkdir -p ~/Library/LaunchAgents
cp /usr/local/Cellar/mongodb/2.0.0-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
Which I dutifully did. Note that the 'launchctl load …' command actually starts the 'mongod' database daemon, so there's no need to start it manually. Simply typing 'mongo' starts the MongoDB shell:
>: mongo
MongoDB shell version: 2.0.0
connecting to: test
Note that >: is my prompt, in case you’ve seen Lost you may get the joke :) Now I can create a ‘collection’ by saving an item to one, like so:
> db.foo.save({a:1})
And I can query all the items of ‘foo’ like so:
> db.foo.find()
{ "_id" : ObjectId("4e6ffd8928d02c8f55a09dbb"), "a" : 1 }
I’ve just got started with MongoDB, but I have to say it looks like it will be really nice to work with on personal projects. No more fussing around with database schemas! Yay!
Oh, yeah, to quit mongo, you just type:
> quit()
I just found out that the videos from both the JSConf 2011 and NodeConf conferences held earlier this year, are available online.
First off, the conference has an Oregon Trail theme, which I did not understand until I saw this video by author Sloane Crosley, which explains the game’s appeal delightfully. (Skip to about the 2 minute mark for the actual reading)
Here are the ones I have checked out so far:
Bytes and Blobs (David Flanagan)
A really good explanation of the new Binary APIs for JavaScript. Nice to see the author of the great Rhino book in person (at least virtually). Kind of amusing to hear all the talk of BLOBs (Binary Large OBjects). This opens up a number of interesting possibilities, such as getting access to files from the user's filesystem (with their permission, after they have selected it) without uploading them. Also image processing, and even audio processing and generation. However, browser support is still incomplete… Let's hope IE supports these APIs in its next version.
The Future of Javascript (Rebecca Murphey)
This is a call to action to ensure that Javascript evolves properly, to prevent some of the quagmires we currently experience as developers. In a nutshell, we need to agree on common standards and protocols to prevent duplication of effort. Some of this work is underway, eg CommonJS, promisesA. Also some interesting facts about the history of the automobile and the road system which we take for granted.
The Future of JS and You (Alex Russel and Peter Hallman)
A look behind the scenes of the Traceur Javascript compiler, which lets you write futuristic Harmony flavored Javascript, and generates present-day Javascript. This reminds me a bit of CoffeeScript, though here you’re still writing Javascript, albeit a version which does not yet exist. The suggestion is that you can invent your own Javascript.next features and hack the compiler to support them. If you’re interested in writing a compiler in Javascript, this would be a good place to start.
Freestyle RPC for Node (James Halliday)
An entertaining presentation on James's dnode node module, which allows seamless remoting between client and server code. Perhaps not the most elucidating presentation (there is about 10 minutes of random rapping towards the end), but DNode does indeed look awesome.
It seems like every single session from these conferences is available here. The only confusing thing is that the videos do not always have the same titles as the sessions at the conferences, so you may need to cross-reference with the jsconf programme
Ah, I am so nostalgic for the old NodeJS logo :/
Jasmine is described on its site as a ‘behaviour driven development framework for testing your JavaScript code.’ I thought I would give it a try, using it while developing a new game with my Canvasteroids framework.
First off, though, I should address the different terminology here: what is 'behaviour driven development' anyways? Why isn't it called a Unit Testing framework? To understand why, you should read this article. The idea that the names matter really rings true for me, and it always seemed weird to me to try to test something which does not yet exist. Thinking of 'specifications' rather than 'tests' makes a lot more sense to me. Also, thinking about testing 'units' of code is more likely to make your tests closely coupled to your implementation, leading to maintenance headaches and unnecessary, verbose tests.
Installing Jasmine is ridiculously easy: I just downloaded the standalone project, and extracted it as the ‘specs’ folder in the Canvasteroids project. Inside it is a SpecRunner.html file.. opening up this in a browser will run the dummy tests which are there by way of example. Then I removed them so that I could run my own specs. I also installed some Jasmine Snippets for Vim to make writing specs even easier.
I created a new spec file called BreakoutSpec.js in the specs/ folder and added the following:
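The spec probably looked something like this (the class name breakout.Breakout comes from the description that follows; the rest is a reconstruction, not the exact original):

```javascript
describe('Breakout', function () {
    var game;

    beforeEach(function () {
        game = new breakout.Breakout();
    });

    it('should create a game instance', function () {
        expect(game).toBeDefined();
    });
});
```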
The describe() function defines a Suite of Specs. The function given to it as its second argument contains the specs themselves. Each spec is defined by an it() function call, which similarly takes a function as its second argument. The first argument to both describe() and it() is a descriptive string which will be shown in reports. Within the it() function there are calls to expect() which will check specific things to determine whether the Spec has been fulfilled. See the list of them here. In this case we are simply checking that the Breakout game instance was created successfully. Obviously, beforeEach() is run before each spec. We need to include the spec in the SpecRunner.html file, which now looks like this:
Refreshing the SpecRunner.html file, we see that this Spec fails. There are two failures: one is the result of a run-time error while executing the spec (because the namespace 'breakout' in 'breakout.Breakout' is undefined), and the other is because the expectation was not met ('game' was not defined).
So, I made a new folder called ‘breakout’ in the games folder, and copied the index.html file from Asteroids into it, renaming the reference to the Asteroids game class to be Breakout. I also created the new game class in a file alongside the index.html file, called Breakout.js, with the following Ext 4 class definition in it:
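A minimal guess at that definition (the real file may well have had more configuration):

```javascript
Ext.define('breakout.Breakout', {
    constructor: function () {
        // game setup will go here
    }
});
```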
So we have a Breakout class now, but how do we load it in our test runner? Why not use the Ext JS 4 Class loader, just as we do in the game itself? This is what the updated SpecRunner looks like (we also need to load the ext-foundation.js file).
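Roughly like this (the file paths here are guesses, and the ext-foundation.js location will vary with your setup):

```html
<script src="../lib/ext-4.0/ext-foundation.js"></script>
<script>
Ext.Loader.setConfig({
    enabled: true,
    paths: {
        breakout: '../games/breakout'
    }
});
Ext.require('breakout.Breakout');
Ext.onReady(function () {
    // Jasmine bootstrapping runs only after the required classes have loaded
    jasmine.getEnv().execute();
});
</script>
```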
Note that we made the call to Ext.Loader.setConfig(), defining the required namespace ‘breakout’ to be the relative path of the directory containing the class file. And we wrapped the Jasmine code in a call to Ext.onReady(), which will only execute when the classes required have been loaded. This is handy, because we will not have to worry about adding them to the SpecRunner, which can be a chore with other Javascript Testing frameworks I’ve used. We will however need to update the paths of the namespaces used by any classes in the application, so that any namespaces are correctly resolved. And when we run the SpecRunner again … we get the green light! The spec succeeded.
So I admit this is a trivial spec… but it should be enough to get you started with Jasmine and Ext JS 4.
I’ve been working with Ext JS a lot at work over the last year, and have grown to like it. Last year I went to the Sencha Conference in San Francisco, and was very impressed by the features of the upcoming release of Ext JS 4. One of the big changes is the new Class system, which makes defining classes easy and also loads required classes dynamically when required. The build tools which Sencha (The company which makes Ext JS) provides make it easy to produce a single compressed .js file of your app for deployment. All in all it is a great framework.
Ext JS 4 is well modularized, and you can use as little or as much of it as you like. For the Canvasteroids game framework that I’m working on, I decided to use the ‘ext-foundation.js’ file which is the essential core of the library, which defines the Class management system, as well as a useful collection of utility methods for the core Javascript types. However no changes are made to the native object prototypes, so there will not be complications when combining Ext 4 with other frameworks.
The Ext JS 4 class system lets you define a class as follows:
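For example (foo.Bar and foo.Base are placeholder names):

```javascript
Ext.define('foo.Bar', {
    extend: 'foo.Base',
    constructor: function (config) {
        // call the parent class's constructor
        this.callParent(arguments);
    }
});
```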
Obviously, the ‘constructor’ function is what becomes the class constructor. The ‘extend’ property will cause this class to inherit the given class. Note also the use of namespaces, eg ‘foo.Bar’ in the names of classes. In Ext JS 3 you had to explicitly declare all your namespaces, but in v4 this is handled automagically.
Additionally, you can have ‘mixins’ which add in methods (or properties) of the given class. This is very helpful as it enables another way of re-using code besides inheritance. This is how you declare mixins (an example from Canvasteroids):
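A sketch of what that looks like (this is not the actual Canvasteroids source, just an illustration of the shape):

```javascript
Ext.define('canvasutils.Drawable', {
    initDrawable: function (ctx) { this.ctx = ctx; },
    beforeDraw: function () { this.ctx.save(); },
    draw: function () { /* default drawing */ },
    afterDraw: function () { this.ctx.restore(); }
});

Ext.define('asteroids.Ship', {
    mixins: {
        drawable: 'canvasutils.Drawable'
    },
    // overrides the mixin's draw() with a custom implementation
    draw: function () {
        // ... draw the ship ...
    }
});
```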
The initDrawable(), beforeDraw(), draw(), and afterDraw() methods all come from the Drawable mixin class (which is defined in the same way as a regular class except it does not have a constructor). Drawing is a behaviour, and this is a good use case for using a Mixin. As with inheritance, you can override methods or properties from a Mixin class. In this case I overrode the 'draw' method with a custom implementation. In this way Mixins can act like Abstract Classes, but you are free to use more than one at a time.
Also note the call to callParent() : this is equivalent to super() in ActionScript.. it calls the same method from which it is called on the parent class. So in a constructor it will call the super-class’s constructor. Note that the arguments must be passed as an array.
If you’ve worked on a large scale Javascript application, you know it can become a pain to maintain a large number of script tags, and resolve dependencies. Every time you add a new class to the system, you have to add it to the list of scripts includes in your HTML page, and you have to figure out where in the list of includes it has to go in order to have all of its dependencies satisfied and to satisfy its dependents. This is usually done by trial and error and can be quite annoying. Ext.Loader to the rescue! Once properly configured, you will never need to add another script tag again! Here’s how to set up dynamic loading in your main HTML file:
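For example (the library path and namespace mappings are placeholders; adjust them to your project layout):

```html
<script src="../../lib/ext-4.0/ext-foundation.js"></script>
<script>
Ext.Loader.setConfig({
    enabled: true,
    paths: {
        canvasutils: '../../lib/canvasutils'
    }
});
Ext.require('Asteroids');
Ext.onReady(function () {
    new Asteroids();
});
</script>
```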
The ‘enabled:true’ attribute of the object passed to Ext.Loader.setConfig() is what turns on the dynamic loading of classes. When this is turned on, Ext JS will dynamically add a ‘script’ tag when it notices that it needs to load a class. By default, it will look for a file with the same name as the class (replacing ‘.’ with ‘/’) relative to the HTML file. If you wish to use a different path you will need to add a mapping to the ‘paths’ hash object. By default, the ‘Ext’ namespace is mapped to the ‘src’ directory of the library so it will load other classes from the framework if you are using them.
Then you load your main app using a call to Ext.require(). In this case I required the main Asteroids game class, which was in the same directory as the HTML file this code is in. The 'Asteroids' class itself has multiple dependent classes, and the Loader will recursively parse them until they are all satisfied. For example, the 'canvasutils.Context2D' class will be resolved to '../../lib/canvasutils/Context2D.js'. NB: make sure you name your class correctly, or errors will occur in the loading process. In addition to any classes referenced in an 'extends' or 'mixins' definition, there is a 'requires' property which can be used to explicitly declare a class's dependencies. For example in the Asteroids class, I have defined the requires as follows:
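Something like this (only canvasutils.Context2D is mentioned above; the other entries are invented placeholders):

```javascript
Ext.define('Asteroids', {
    requires: [
        'canvasutils.Context2D',
        'asteroids.Ship'
    ]
    // ... rest of the class ...
});
```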
When the Loader is enabled, Ext JS will wait until all the dependent classes have been loaded before executing Ext.onReady(). At this point you can simply instantiate the class normally with ‘new’ as I have done. Alternatively, you can use Ext.create() with the class name as a string. This has the advantage that if for some reason the class was not loaded, the Loader will load the class synchronously. However, this is non-optimal for performance, and you will be warned in the console about this. Since I want to avoid this, and for ease of debugging, I tend to use the old fashioned form of instantiation with ‘new Xxx()’, and remember to add the class to the list of ‘requires’.
Note that the order of the classes in the ‘requires’ array does not matter.
While loading .js files dynamically is nice for debugging, it is not optimal for deployment, due to all the additional HTTP requests. It is recommended to combine all the required Javascript files into one, and compress it by removing comments, whitespace, etc. If you have used the Ext.Loader as I describe above, this will be very easy. First you will need to install the free Sencha tools from here (see the links for the different platforms tools near the bottom of the page). Then use a terminal to go to the same directory as you HTML file and type the following command:
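If I remember the Ext JS 4 SDK Tools correctly, the command was along these lines (don't quote me on the exact flags):

```
sencha create jsb -a index.html -p app.jsb3
```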
This will create a .jsb3 (JSBuilder, Sencha’s JavaScript compression tool) configuration file for the application. If that worked then do this:
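Again from memory (treat the exact flags as unverified):

```
sencha build -p app.jsb3 -d .
```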
And you should have an app-all.js file which will be the combined and compressed version of your app. You should create another HTML file for deployment which references this file, rather than the Ext.Loader config we used for development. That's it! Your Ext JS 4 app is ready to go.
This is a brief overview of the Class system, for more info see the excellent docs:
Class System docs
Ext JS 4 Getting Started
I was pleasantly shocked the other day to see a pull request in my GitHub account for the Canvasteroids game framework I recently created. There were the following features added to the Asteroids game:
Some kind folks from the Chico State Open Source Team had picked my project to flex their programming skills. I was quite impressed with the quality of their contributions, and that they were able to dive into the project which I admit is not really documented at all.
This has inspired me to continue my efforts on Canvasteroids, as well as documenting it better, to enable further collaboration. I've since posted on my blog about one of the patterns I use in the framework: 'State and Transition Oriented Programming'. I've also started on a second game, which I will hopefully be launching soon.
You may be wondering about the title of this post… well, it isn’t the advice my grandfather gave me but I never heeded :) STOP is a reference to the concept of State and Transition Oriented Programming. I found out about this concept from Troy Gardner at the 360Flex conference a couple of years ago. The video of this presentation is well worth watching.
To summarize, in case you’re not done watching it yet.. States are a very important aspect of programming interactive systems, but they are often neglected and as a result can lead to nasty bugs. By dealing with states carefully and consciously it becomes a lot easier to develop more complex systems. Troy developed a framework in ActionScript to provide readymade support for state management and transitions.
It would be possible to port the framework to JavaScript, but the basic idea is so simple that it does not require a framework to implement and benefit from using it. This post is my attempt to describe how to do that.
Usually, from what I’ve seen, states are defined as constants, or enums or something like that, and assigned to some global variable…
Then later on, there will be forks in the code to do different things depending on the state:
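Typically it ends up looking something like this (a made-up game example):

```javascript
// States as string constants, assigned to a global variable
var STATE_MENU = 'menu';
var STATE_PLAYING = 'playing';
var STATE_PAUSED = 'paused';
var state = STATE_MENU;

function onKeyDown(key) {
    // every handler has to re-check which State is active...
    if (state === STATE_PLAYING) {
        if (key === 'p') { state = STATE_PAUSED; }
    } else if (state === STATE_PAUSED) {
        if (key === 'p') { state = STATE_PLAYING; }
    } else if (state === STATE_MENU) {
        if (key === ' ') { state = STATE_PLAYING; }
    }
}
```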
While this works fine most of the time, you can quickly end up with a mess of duplicated conditional logic all over the place, as every time you need to do something, you have to check which State is active to decide if and how you're going to do it.
It's worth considering the difference between lowercase 'state' and uppercase State. The former is simply the values of variables that your application contains. This is vague and generalized and not what I'm referring to here. The uppercase State refers to a global condition which usually corresponds to a phase of activity of the application. For example a video player component might be 'loading' or 'playing' or 'paused'. The State of the application fundamentally changes its behaviour, and how it will respond to events. Having said this, I'm not going to consistently capitalize 'State', but I'm always referring to this more specific meaning of the word.
Events are things that happen in your application, whether it is user input or a result of processing or time passing. A collision between two objects is an example of a typical event in a game for example. Depending on which State the application is in, events will be handled differently. When events occur, we can send ‘messages’ or ‘signals’ to the currently active State, which can decide how to respond.
The idea that I've borrowed/stolen from Troy Gardner is to implement States as functions. Since they are essentially global constants, they are capitalized. This also indicates their special significance, and differentiates them from regular functions. Here is how the two states of a light switch might be implemented (if we visualize the switch as having two buttons, 'off' and 'on'):
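A sketch (the message names follow the description below):

```javascript
var state; // holds the currently active State function

function OFF(msg) {
    if (msg === 'turn_on') {
        state = ON; // 'turn_off' is simply ignored
    }
}

function ON(msg) {
    if (msg === 'turn_off') {
        state = OFF; // 'turn_on' is simply ignored
    }
}

state = OFF;

// event handlers just forward messages to whichever State is active
function onButtonPress(button) {
    state('turn_' + button);
}
```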
Notice how the event handlers did not have to know about the different States of the application, but simply sent a message to the currently active State function, which is dynamically assigned when the state is set. Assigning the State function to the ‘state’ variable effectively sets the State, and alters the behaviour of the system… Note how the OFF State does not respond to the ‘turn_off’ message, and the ON State does not respond to the ‘turn_on’ message. There is no switching on the state value - rather the State function will handle the messages it is sent, ignoring any it is not interested in.
It is often necessary to perform actions when changing from one State to another. But it can be easily done by using ‘enter’ and ‘exit’ messages which are passed to every State when they are made active or deactivated. The way to make this happen is to use a ‘changeState()’ function to change from one state to another:
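For example:

```javascript
var state = function () {}; // no-op initial State

function changeState(newState) {
    state('exit');  // let the old State clean up
    state = newState;
    state('enter'); // let the new State initialize
}
```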
Then you can manage your transitions by listening for these messages in your State functions:
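For example, a hypothetical PLAYING State:

```javascript
var running = false;

function PLAYING(msg) {
    if (msg === 'enter') {
        running = true;  // e.g. start the game loop, add listeners...
    } else if (msg === 'exit') {
        running = false; // ...and tear them down again on the way out
    }
}
```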
If you want to nest States, eg. LUNCH within DAY, this can be done by passing on any unknown messages to the super-state:
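Sketch:

```javascript
var handled = [];

function DAY(msg) {
    if (msg === 'go_home') {
        handled.push('DAY handled ' + msg);
    }
}

function LUNCH(msg) {
    if (msg === 'eat') {
        handled.push('LUNCH handled ' + msg);
    } else {
        // delegate anything LUNCH doesn't understand to its super-state
        DAY(msg);
    }
}
```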
If you have events which need to be handled at any time, they can be put in a BASE state, and other states can pass on messages to it. For example, if the user resizes the browser window, there could be a ‘resize’ message which would be handled by the BASE State:
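Sketch:

```javascript
var events = [];

function BASE(msg) {
    if (msg === 'resize') {
        events.push('layout updated'); // handled no matter which State is active
    }
}

function PLAYING(msg) {
    if (msg === 'collision') {
        events.push('exploded');
    }
    BASE(msg); // pass ALL messages on to the base state as well
}
```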
As you can see by comparing these two examples, you have the option of passing all messages to the super-state, or only passing on unknown messages.
This method of handling events by sending messages to State functions is very flexible and dynamic, and makes adding new States easy. It adds some abstraction between events and the response to them, which reduces coupling within your application. It also reduces duplication and conditional branching, and I think it also makes the code easier to read, as behaviours are grouped by State and message, which should be self-descriptive.
Finally, STOP programming is not opposed in any way to OOP, but is orthogonal to it. I have found it to be particularly useful in developing games, which tend to have many different states with different behaviours.
For a more complex example of this pattern in action, see the Asteroids game.
A while back I decided to have a go at making a Javascript + Canvas version of Asteroids. I was able to get a basic version of the game working in a weekend, but I wasn’t happy about the code being in one huge file, so I set about refactoring it using ExtJS 4 for OO support (but not the whole library). This turned out to be quite addictive and after a while I had a small library of code on my hands. The initial title of the game was ‘CanvAsteroids’, but as I shifted to working on the supporting library I started to think of it as ‘CanvaSteroids’ : Canvas on Steroids. And so I have named the library this awkward name and renamed the game itself to just be Asteroids. The plan is to work my way through the alphabet, making a game for each letter. Next is Breakout. It’s good to have goals ;)
I want to write some more posts about some of the techniques I used to get the game working, especially the collision detection. I found a way to do this using the Canvas ‘isPointInPath’ method, which seems to perform quite well. The game uses the keyboard for controls (left/right arrows for turning, spacebar to fire) and I also added some mouse/touch control inspired by Seb Lee-Delisle’s JSTouchController. I don’t think the game is really well suited to touch, but at least I learned how to handle the touch events. If anyone has an iPad I’d be interested to know how it works… unfortunately I heard the performance of Canvas is not great on iOS. However I’m really not into targeting one platform, and I’d like the games to be playable on any desktop or mobile device which has a decent browser (i.e. supports Canvas). For the IE < 9 users, I added Chrome Frame. For sound I used SoundManager2. The actual sounds were recorded off my Atari 2600 during gameplay, and edited in Audacity.
Anyways, enough nerdy banter… here’s the game (warning: LOUD sound effects!).
I used GitHub’s nifty new pages feature (plus a domain) to host a site here:
canvasteroids.com
And for all your source-y needs, the GitHub repo.
I’ve recently discovered some neat tools which help make Javascript development easier. Everyone is excited about HTML5, which Brendan Eich (the inventor of Javascript) describes as ‘The unicorn which craps skittles from its rear end’. The Javascript language awaits a much needed update with ES5 Harmony, but in the meantime it has access to some cool new APIs in the form of HTML5 features such as Canvas, Local Storage, etc.
But what about outside of the browser? There have been some attempts to use Javascript on the server, but they don’t seem to have gained much widespread adoption. Then came NodeJS. If you haven’t heard about NodeJS I recommend you check out this video by Ryan Dahl, the creator of NodeJS:
NodeJS is revolutionary in that it gives Javascript access to the filesystem and network, with a performance-oriented, event-based approach that solves concurrency elegantly without the pain of multithreading. In itself this might only be interesting to hardcore server geeks, but NodeJS is being used as the infrastructure for a lot of other projects. Some of them are distributed via NPM, the Node Package Manager, which (surprise!) is a package management system for NodeJS applications. So once you have installed NodeJS and NPM, you can easily install other libraries on the command line. And you can create your own projects, using the CommonJS package format. This is made easy by the ‘npm init’ command, which creates a package.json configuration file for you automatically (I will give an example later).
But what if you’re more interested in client-side applications which run in a web-browser? No problem! Teleport comes to the rescue. This is itself a NodeJS package which you can install using NPM. Once you have created a basic NPM / CommonJS app, you just include the teleport library as a dependency. Then you can write your Javascript just as you usually would for a web-app. To test the app in a browser you run ‘teleport activate’ and Teleport will spawn a NodeJS webserver with your app running in it.
Once you have an awesome app ready, you can publish it to the NPM repository library with a single command (npm publish), so that other people can install it. No fussing about with web servers or downloading code. I’m really excited about the possibilities this creates for sharing client-server applications where the server is your own machine. Suddenly Javascript becomes a viable language for serious desktop applications with access to the filesystem and the network, as well as all the new HTML5 GUI goodness. Because the app runs on your own system, you can run it offline, and performance is going to be way better than over the internet.
In order to demonstrate just how easy this all is I will break it down for you, assuming you have nothing installed. Here we go:
Prerequisites:
sudo npm install teleport
Create a NPM app (I called mine frappuccino) using the command line:
mkdir frappucino
cd frappucino
npm init
At this point, you will go through a wizard to generate the package.json file. It is just to provide metadata for your app; don’t worry too much about the answers, most of them are fine as they are. You can always edit it afterwards. For the ‘module entry point’ question I left it as default, and for the ‘where do your modules live’ I said ‘lib’ (this is standard, but not default). It actually guessed I was using a Git repo which I have in my home directory, which is not the repo I want to use, so I had to remove that reference afterwards. If you run it in a git repo, it will figure it out and use the info from that for the repository info. So if you want to do that, you should create the git repo first. After removing the incorrect repo info, my package.json file looks like this:
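As a rough illustration, a freshly generated file at this point might look something like the following (every field value below is an assumption for illustration, not the author's actual file):

```json
{
  "name": "frappucino",
  "version": "0.0.1",
  "description": "A test app",
  "main": "index.js",
  "directories": {
    "lib": "./lib"
  }
}
```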
The other change I made was to add the “dependencies” config, which will ensure that Teleport is installed as a dependency. OK, there are now a couple of other files you need to create:
The index.html page. Mine looks like this:
Note the included teleport file (which npm will supply), and the application js file (which we will create next). Now create the /lib folder in the frappucino folder, and in it create a file called app.js. In it I have the following, as a test.
alert("Frappucino is the best!");
Ok, now you should have the following file structure:
frappucino/
    index.html
    package.json
    lib/
        app.js
Finally, to get npm to supply the dependencies, and register the app locally, type this within the ‘frappucino’ directory:
sudo npm link
I get some nice pink and green output with ‘npm ok’ at the end to let me know it worked.
Now, to run the app, do:
teleport activate
If all goes well, it should say something like:
Teleport is activated:
Now you can open another terminal window and type:
open
And you will see your app in all its glory (or lack thereof, as in this case… just an alert window. But it’s a start :).
My entry to the 10KApart Javascript competition is a lightweight version of the Turtle Graphics explorer I’ve been working on. Aside from leaving out the procedures browser, and doing some compression (I used YUICompressor), I did not have to change the code, and I didn’t remove any of the turtle’s abilities. In fact there are a couple of new features, notably the showing of the turtle itself on screen.
So check it out, and rate it! It turns out that community voting is no longer considered for the Community Prize, but it will make me feel better :)
I posted the source code on GitHub also.
One of my earliest experiences of computer programming was when my school acquired a computer lab, and we did some Turtle Graphics programming, probably in Logo. The school even had a robotic turtle which was used to draw lines on paper on the floor. I think one project we did was to draw the letters of the alphabet which made up the name of the school, or some other message. In any case I remember making the turtle go round drawing lines.
Earlier this year I read Mindstorms by Seymour Papert, which describes how computers can be used by children to learn about Mathematics and programming, and how the Turtle Graphics system is a better vehicle for exploratory learning than the way Math is conventionally taught. One of the reasons for this is that the child is able to identify with the turtle, and solve problems by walking out the path of the turtle and performing the instructions of the program. There is a fascinating description of how younger children were actually better at figuring out how to get the turtle to draw a circle, by going forward a bit and turning a bit repeatedly, than were older children who had already learned more objective facts about circles, such as that they have a radius and a center. Even older children who had learned the algebraic formula for a circle were actually incapable of solving the problem.
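The circle strategy described above (“forward a bit, turn a bit, repeatedly”) can be expressed directly as turtle-style code (a sketch in plain Javascript; the function names are mine, not Logo’s):

```javascript
var x = 0, y = 0, heading = 0; // turtle position and heading (degrees)
var steps = 0;

function forward(dist) {
  var rad = heading * Math.PI / 180;
  x += dist * Math.cos(rad);
  y += dist * Math.sin(rad);
  steps++; // here we would also draw a line segment
}

function right(angle) { heading += angle; }

// The Logo idiom REPEAT 360 [FORWARD 1 RIGHT 1]:
for (var i = 0; i < 360; i++) {
  forward(1);
  right(1);
}
// ...the turtle has walked out a (360-sided) circle, ending back
// where it started.
```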
Recently I discovered the Holy Grail of the field, the definitive Turtle Geometry by Andrea diSessa and Harold Abelson (Abelson also co-wrote the Structure and Interpretation of Computer Programs). I’ve only just read the first chapter, but I was inspired to try implementing a Turtle Graphics environment in Javascript, using Canvas as a rendering surface.
I started out typing my functions into the Javascript Console of Chrome, but I had a vision of a web-based interactive Turtle Graphics environment, where users could run their own code and see the results instantly. So this weekend I hacked together the Turtle Graphics Explorer. It’s not going to work in any released version of IE (I’ll add Chrome Frame soon), but that will change when IE9 ships. A new version of Chrome, Safari, Firefox, or Opera should be fine, though I admit I’ve not tested thoroughly. Hopefully it is easy to understand: load an example, or look at the procedures to see how they work. There is only very rudimentary error detection, so if it does not seem to be working, you may need to re-load a working example, hit ‘clear screen’, or just refresh the browser.
Here is a screenshot of the application… click on it to open it up, and start exploring ;)
All the code is available at the GitHub project.
Last year Seb Lee-Delisle posted about how it would be nice to be able to draw in 3D with the same ease as you can draw in 2D, using the flash drawing API. It inspired me to try something similar, but with a different approach. I was looking into WebGL and OpenGL late last year, and though I got a bit bogged down with my lack of C knowledge, I found I liked the procedural API. The entire scene is actually redrawn every frame. This year I’ve also been thinking about the Turtle graphics system I once used in school, and how it works in a similar way (albeit in 2D). So I took a stab at creating a simple procedural 3D drawing API, along the lines of OpenGL and Logo. First, the demo:
It is currently implemented as a base class which exposes the following methods:
rotate(angleX, angleY, angleZ)
draw(x, y, z)
move(x, y, z)
pushMatrix()
popMatrix()
The x, y, z values here are relative to the current position. The current position (and rotation) is maintained by the modelViewMatrix. By calling pushMatrix() you can save the current state and return to it later with popMatrix(). If you have subroutines, it is generally a good idea if they return the modelViewMatrix to the same state. For example, the drawSquare() method does this:
private function drawSquare() {
    draw(100, 0, 0);
    rotate(0, 0, 90);
    draw(100, 0, 0);
    rotate(0, 0, 90);
    draw(100, 0, 0);
    rotate(0, 0, 90);
    draw(100, 0, 0);
    rotate(0, 0, 90); // return to starting orientation
}
So this simply draws 4 lines of length 100, rotating 90 degrees around the Z axis. The final rotation isn’t necessary for the drawing, but it ensures that the state is not changed after the routine is called. To draw a cube, we call the drawSquare routine while rotating and moving in 3D.
private function drawCube() {
    // back
    drawSquare();
    // left
    pushMatrix();
    rotate(0, 90, 0);
    drawSquare();
    popMatrix();
    // top
    pushMatrix();
    rotate(-90, 0, 0);
    drawSquare();
    popMatrix();
    // right
    pushMatrix();
    move(100, 0, 0);
    rotate(0, 90, 0);
    drawSquare();
    popMatrix();
    // bottom
    pushMatrix();
    move(0, 100, 0);
    rotate(-90, 0, 0);
    drawSquare();
    popMatrix();
    // front
    pushMatrix();
    move(0, 0, -100);
    drawSquare();
    popMatrix();
}
Finally, to draw the cubes, we call the drawCube() routine in between calls to pushMatrix()/popMatrix() and any changes in position we wish to make.
pushMatrix();
drawCube();
popMatrix();

pushMatrix();
move(150, 0, 0);
drawCube();
popMatrix();

pushMatrix();
move(-150, 0, 0);
drawCube();
popMatrix();
Once you get the hang of this approach, it is quite intuitive, and reduces the need for calculating a lot of 3D coordinates. This is done behind the scenes in the draw() method, which adds the real 3D coordinates to a list of vertices, which ultimately get projected to 2D coordinates. It’s easy to rotate the objects in the scene by calling rotate() before drawing the objects in the scene.
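The projection step described above might be sketched like this (a simple perspective divide; the focal length and the exact form are my assumptions, not the post's actual code):

```javascript
var focalLength = 500; // assumed viewing distance

// Map a 3D vertex to 2D screen coordinates: points further away
// (larger z) are scaled down more.
function project(v) {
  var scale = focalLength / (focalLength + v.z);
  return { x: v.x * scale, y: v.y * scale };
}

var p = project({ x: 100, y: 50, z: 500 });
// at z = focalLength the point appears at half size: {x: 50, y: 25}
```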
I’ve used Haxe here because of the faster compile time. There is also an earlier ActionScript version which I’ll link to below.
DrawCubes.hx
Draw3D.hx
MatrixStack.as (the AS3 prototype)
GitHub
For Valentine’s Day I’ve dusted off an idea I initially submitted last year as part of the now defunct 25 Lines ActionScript contest. By tweaking the formula which generates the coordinates of the sphere, I was able to get something that more or less looked like a heart. I was able to get the heart modeled and lit within the 25 lines, but could not quite squeeze in the code to animate it. Now, without the limitation of 25 lines, I’ve done that, as well as optimized the performance using the normal map + PixelBender normal map shader that I used in my normal map tutorial. Here is the result - use the sliders to change the direction of the lighting.
The mechanics of this are essentially the same as the lighting of a sphere technique described in the normal map tutorial. However, since the shape of the heart is not the same as a sphere, I could not use the sphere formula to generate the normal map. So I just used the same formula to generate the normal map as I did to generate the geometry. For each point on the texture map, I determined the lat/lon of that point, and plugged it into the formula to get its 3D position. I did the same for the pixel above and to the right. Since we now have three points, we have a triangle, and we can get the normal of the triangle by the cross-product of two of the sides (considered as vectors). This normal gets encoded as a color in the resulting normal map. This is a fairly intensive operation, but since it is only required once at the start, it doesn’t affect the running performance. I tried to optimize this process by using a PixelBender kernel to generate the normal map, but got some odd results, so returned to more familiar ActionScript territory. Using a shader to generate the normal map will become necessary if the geometry changes a lot.
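The per-pixel step described above can be sketched as follows (my own minimal version, not the post's ActionScript; the encoding maps each component of the unit normal from [-1, 1] into a 0..255 colour channel):

```javascript
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}

// p is the 3D position for this texel; pRight and pUp are the
// positions computed for the neighbouring texels to the right and above.
function normalToColor(p, pRight, pUp) {
  var n = cross(sub(pRight, p), sub(pUp, p));
  var len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
  return n.map(function (c) { return Math.round((c / len + 1) * 127.5); });
}

// A flat patch facing +z encodes as the familiar normal-map blue:
var rgb = normalToColor([0, 0, 0], [1, 0, 0], [0, 1, 0]);
// rgb is [128, 128, 255]
```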
I also refactored the code a lot so that it’s not all in one class. I’ve separated the model, shader and renderer into separate classes, and used a Scene to coordinate them. This is the beginnings of a primitive 3D engine, which I’ve dubbed DeepSee, so if that goes anywhere I may break it out as a separate project on GitHub.
Heart2
DeepScene
Heart.as
IModel.as
NormalMapShader
NormalMapShader.pbk
Augmented Reality (AR) was one of the big things to appear on the flash scene last year. AR has been around for a while, but it is new in Flash… The FLARToolkit is an ActionScript port (by Saqoosha, whom I got to see at FITC last year) of a Java AR library (NyARToolkit) IIRC, which was itself derived from the original C version (ARToolkit).
As an interesting aside: ARToolkit isn’t the only AR library implemented in C, though it was the first… there is also ARTag which “uses more complex image processing and digital symbol processing to achieve a higher reliability and immunity to lighting”. For the limited processing power of Flash, ARToolkit was probably a good start, but perhaps some smart people may be able to enhance its recognition by improving the algorithms.
When I tried to use FLARToolkit, I’ll admit I had some trouble, as it seemed rather complex and not very well documented (at least not in English). But fortunately I found FLARManager which makes setting up an AR project in Flash a lot simpler. It also provides some optimizations to the recognition process.
However I feel it could be made simpler still, and I’m working on that… although the code still needs some refactoring, I have it basically working with a simple animated butterfly, which is simply a couple of bitmaps with Y rotations applied to them. The trick is to get the matrix that FLARManager gives you back when a marker is recognized, and apply that to your Sprite with 3D content. To use the app, you need to:
make sure you have a webcam attached to your computer and turned on. Most macs have them built-in by default. Don’t use any other application which uses the camera at the same time or it won’t work… thus you can also only run one AR enabled website at a time.
print out an AR Marker (see here) – right click and download any of the .png files or the .pdf, if you want a whole bunch at once, and print on white paper. At a pinch, you can use a black rectangle drawn on paper, as I did in the video below.
Go here for the AR Butterfly (but finish reading instructions first ;)
When the flash app starts, it will prompt you to allow access to the camera… click the ‘allow’ button. Now I have found this settings thingy to be very glitchy (it can also be accessed by right clicking on a flash movie and selecting ‘Settings…’). On my MacBook, I found it took a long time for the dialog to go away after clicking the ‘allow button’. I’ve also noticed similar problems on Linux. If you’re having trouble with it you can enable the site permission to access the camera by going here: flash player global settings Then you’ll need to find this site in the long list, select it, and click ‘always allow access’ for the camera. That way you won’t be bothered by the buggy dialog anymore (at least on this site).
Once you can see the output of the camera in the app, make sure you have decent lighting, hold up the marker to the camera, and adjust position so that there is enough contrast, but also no glare, as FLARToolkit does not handle that very well. Also make sure nothing is in front of the marker, breaking the outlines, or it won’t detect (this is something that ARTag can apparently handle). You should also not move too fast, or have shaky hands… If you’re lucky, you’ll see something like this:
I noticed that the z-sorting on the butterfly wings is inverted in this video… will have to sort that out. I’ll post a link to the code later, once it is a bit more polished. I have several ideas for AR applications, which I’ll hopefully get a chance to work on soon…
NAME
pow, powf, powl - power functions
SYNOPSIS
#include <math.h>

double pow(double x, double y);
float powf(float x, float y);
long double powl(long double x, long double y);

Link with -lm.
DESCRIPTION
The pow() function returns the value of x raised to the power of y.
ERRORS
The pow() function can return the following error:

EDOM   The argument x is negative and y is not an integral value. This would result in a complex number.
CONFORMING TO
SVr4, 4.3BSD, C89. The float and long double variants are C99 requirements.
SEE ALSO
cbrt(3), cpow(3), sqrt(3)
COLOPHON
This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. 2002-07-27 POW(3) | http://manpages.ubuntu.com/manpages/hardy/man3/pow.3.html | CC-MAIN-2015-27 | refinedweb | 135 | 76.01 |
Windows 10 Exploit Development Setup - Vulnserver Walkthrough Part 1
Intro
Lately I have been getting more into exploit development as I needed a bit of a break from the more typical Red Team skills. Exploitation experience would help me bring more to the red teams that I perform, so I wanted to start learning.
For this series of blog posts I aim to exploit the various functions within Vulnserver using a variety of Windows exploitation techniques. Typically this is done on an old box like a Windows 7 or XP, 32-bit machine. I like this approach in general for learning to do things as a beginner, but I always felt it caused a bit of a barrier moving exploits into modern Windows environments. Due to this, I will be doing them all (I plan to anyway) on a Windows 10 x64 machine.
At the start all exploit protections will be turned off for vulnserver on the Windows 10 machine. This is so that I can learn the basics without having to bypass a bunch of stuff Windows 10 does to protect binaries. After I have exploited the vulnserver mechanisms, I aim to turn on the various Windows 10 protections one by one and see what they do, and see how they can be bypassed (if I can figure it out).
Currently I have no idea if this is going to be do-able at my knowledge level, but I wanted to learn more about Windows 10 protections and finding bypasses and clear information on how they work can be difficult.
Setting Up the OS
I am going to be using Commando-VM. This project is super handy for installing a bunch of tools, making windows more lightweight in general and disabling AV. This is optional and do at your own risk, installations can take forever and installing it can be a pain. If you do it, then I recommend changing the install config so that it is only getting the tools you want (WinDBG, Ghidra, Metasploit, VSCode, Unix tools, Git, Python2, Python3, ncat).
Whether you use commando or not, you will want to be disabling AV. I typically just add exceptions to folders such as my home folder, since AV will turn back on and you don't want it wiping metasploit or something and ruining your flow.
Setting Up VulnServer
VulnServer is a piece of vulnerable software developed in C for Windows. The purpose of this software is to be owned in various ways. More about it can be found here.
You can download it by using
git clone.
git clone
You can then run it using the following.
cd vulnserver
.\vulnserver.exe <port>
This will start it on port 9999 if no port number is provided.
Disabling Protections
In Windows 10 you can open the Windows settings panel and search for Exploit protection. This will bring up a control panel item for managing exploit protection.
Go into the Program Settings tab and hit the Add program to customise button.
Click on Choose exact file path and then select the vulnserver.exe that you downloaded.
With this selected, turn off all of the protections for it.
This will allow us to exploit the vulnserver without having to worry about ASLR, DEP, CFG etc at the start. We will come to that later ;)
WinDBG
If you are using commando you can just use choco install windbg.fireeye, windbg.pykd.flare (you can find the package names here). With WinDBG and PyKd installed, to finish the mona setup you only need to download the relevant python scripts (mona.py and windbglib.py).
WinDBG can be downloaded with the following steps:
- Download the Windows 10 SDK from (It might be a good idea not to install the very latest version. You can get an older version from - for instance version 10.0.17763.0)
- Launch the installer with administrator privileges (right-click on the file and choose ‘Run as administrator’)
- During installation, only select “Debugging tools for Windows”. Deselect the other options
- Install in the default path. (C:\Program Files (x86)\Windows Kits\10\Debuggers…)
- Create a new system environment variable called _NT_SYMBOL_PATH
- Set the value of this new variable to srv*c:\symbols* (Make sure there are no spaces before & after)
The Lord and Saviour Mona
Mona is an exploitation framework that is hugely helpful and does a lot of heavy lifting for us. It will be invaluable. It was originally designed for immunity, but there is a WinDBG port which can be downloaded with the following steps:
- Download pykd.zip from to a temporary location on your computer
- Extract the archive. You should get 2 files: pykd.pyd and vcredist_x86.exe
- Check the properties of both files and “Unblock” the files if necessary.
- Run vcredist_x86.exe with administrator privileges and accept the default values.
- Copy pykd.pyd to C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\winext
- Open a command prompt with administrator privileges and run the following commands:
cd "C:\Program Files (x86)\Common Files\Microsoft Shared\VC" regsvr32 msdia90.dll
(You should get a messagebox indicating that the dll was registered successfully)
- Download windbglib.py from
- Save the file under C:\Program Files (x86)\Windows Kits\10\Debuggers\x86 (“Unblock” the file if necessary)
- Download mona.py from
- Save the file under C:\Program Files (x86)\Windows Kits\10\Debuggers\x86 (“Unblock” the file if necessary)
NOTE: The place where you save the python files is one directory above where you save pykd.pyd.
Python
For Mona to work you will need to have a 32 bit of Python 2.7.14 (or higher) on your system and in your path. To do this follow these steps:
- Download the latest 32bit version of python 2.7.x from (I got 2.7.18) and install it.
- Make sure to use the default installation folder (C:\Python27) and verify once again that you are installing the 32bit version.
For scripting up my exploits I will be using Python 3 because tool development is dwindling on Python 2 and it makes packages a pain. To do this I downloaded a Python 3 installer from the python link above and installed, making sure to tick the box that includes it in my path.
You should now be able to open a prompt and type python --version and have it show python 2.7.18, and py --version to show python 3.
BooFuzz
For fuzzing I will be using the BooFuzz framework. I downloaded this using pip3 (Wheel is not mandatory but I was hitting issues on the install method it does without wheel).
pip3 install wheel
pip3 install boofuzz
You should now be able to import boofuzz in python3 without errors.
Telnet / Netcat
For interaction with vulnserver you will need something like netcat or Telnet. Telnet is nice to install for windows, but I do get issues when interacting with vulnserver using telnet, for unknown reasons. If you got commando, then ncat can be installed with choco install ncat.flare.
If you are not running commando then you can download ncat by downloading Nmap (which will also package ncat in for you) windows installers from. After this is downloaded and installed you should be able to run it like below:
ncat 127.0.0.1 9999
Telnet works well for windows in general and won’t hit av issues or anything. On Windows Telnet is no longer installed by default, but it is there, it just needs to be enabled. That can be done with the following command:
dism /online /Enable-Feature /FeatureName:TelnetClient
After this you should be able to run Telnet and be taken to a telnet prompt. For connecting with vulnserver an example would be:
telnet 127.0.0.1 9999
IDE
You will need an IDE to write your exploit scripts. For me, I prefer Visual Studio Code, which can be downloaded from. It is up to you though; any text editor you can write code in will be fine.
If you do go with VSCode, I recommend installing some Python3 extensions and making Python3 your interpreter so that when you run code in VSCode it is Python3 and not Python2 (which we only have for mona to run).
Metasploit
We will need metasploit for making our shellcode (well, not need, but it makes life a lot easier). This can be downloaded for Windows 10 64bit.
Before doing this make sure all anti-virus is turned off!
If you have commando this can all be done with choco install metasploit.flare.
Check that you can run MSFvenom to ensure you have what you need.
msfvenom.exe -h
Making Quality of Life Changes for WinDBG
Now that you have all the tools you need, open WinDBG x86 and attach it to any process (for example run vulnserver.exe and then go into File > Attach > vulnserver.exe).
Once attached enter the following in the command box within WinDBG (in the bottom left).
.load pykd.pyd
!py mona
With all the changes we have done, this should work. If it doesn’t make sure you have followed the mona and python installation steps carefully.
Now lets change the mona working directory, as usually it puts logs next to the windbg.exe which makes it a pain in the arse to find and navigate too.
!py mona config -set workingfolder C:\monalogs\%p_%i
This will save all log files to C:\monalogs\processname_processpid, so for example C:\monalogs\vulnserver_1337.
Then close WinDBG, right click on the icon (on desktop for me) and hit properties. In the target field add -c ".load pykd.pyd". This makes it run automatically when you load WinDBG, so you don't need to remember that first step, which is nice. Within properties, the target should now look like:
"C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\windbg.exe" -c ".load pykd.pyd"
Summary
Hopefully you have followed the steps and have it all working.
You should be able to launch vulnserver, attach WinDBG to it, run mona, write python code for exploits, import boofuzz for fuzzing and use MSFVenom to generate shellcode.
In the next post I will start covering the first exploit.
Issues?
If you have issues with this setup, please look at the following resources that helped me: | https://philkeeble.com/exploitation/windows/Vulnserver-Walkthrough-Part-1/ | CC-MAIN-2021-21 | refinedweb | 1,712 | 71.34 |
Java Exercises: Take the last three characters from a given string and add these characters at front and back of the string
Java Basic: Exercise-84 with Solution
Write a Java program to take the last three characters from a given string and add the three characters at both the front and back of the string. String length must be three or more.
Test data: "Python" will be "honPythonhon"
Sample Solution:
Java Code:
import java.util.*;
import java.io.*;

public class Exercise84 {
    public static void main(String[] args) {
        String string1 = "Python";
        int slength = 3;
        if (slength > string1.length()) {
            slength = string1.length();
        }
        String subpart = string1.substring(string1.length() - slength);
        System.out.println(subpart + string1 + subpart);
    }
}
Sample Output:
honPythonhon
Bringing Java into Perl
In this article, I will show how to bring Java code into a Perl program with Inline::Java. I won't probe the internals of Inline or Inline::Java, but I will tell you what you need to make a Java class available in a program or module. The program/module distinction is important only in one small piece of syntax, which I will point out.
The article starts with the Java code to be glued into Perl, then shows several approaches for doing so. First, the code is placed directly into a Perl program. Second, the code is placed into a module used by a program. Finally, the code is accessed via a small Perl proxy in the module.
Consider the following Java class:
public class Hi {
    String greeting;

    public Hi(String greeting) {
        this.greeting = greeting;
    }

    public void setGreeting(String newGreeting) {
        greeting = newGreeting;
    }

    public String getGreeting() {
        return greeting;
    }
}
This class is for demonstration only. Each of its objects is nothing but
a wrapper for the string passed to the constructor. The only operations
are accessors for that one string. Yet with this, we will learn most of
what we need to know to use Java from Perl. Later, we will add a few
features, to show how arrays are handled. That's not as interesting as
it sounds, since
Inline::Java almost always does all of the work without help.
Since we're talking about Perl, there is more than one way to incorporate our trivial Java class into a Perl program. (Vocabulary Note: Some people call Perl programs "scripts." I try not to.) Here, I'll show the most direct approach. Subsequent sections move to more and more indirect approaches, which are more often useful in practice.
Not surprisingly, the most direct approach is the simplest to understand. See if you can follow this:
#!/usr/bin/perl
use strict;
use warnings;

use Inline Java => <<'EOJ';
public class Hi {
    // The class body is shown in the Java Code above
}
EOJ

my $greeter = Hi->new("howdy");
print $greeter->getGreeting(), "\n";
The Java class is the one above, so I have omitted all but the class
declaration. The Perl code just wraps it, so it is tiny.
To use
Inline::Java, say
use Inline Java => code where
code tells
Inline where to look for the code. In this case, the code follows
inline (clever naming, huh?). Note that single-quote context is safest here.
There are other ways to include the code; we'll see my favorite way later.
The overly curious are welcome to consult the perldoc for all of the others.
Once
Inline::Java has worked its magic -- and it is highly magical --
we can use the Java
Hi class as if it was a Perl package.
Inline::Java
provides several ways to construct Java objects. I usually use the
one shown here; namely, I pretend the Java constructor is called
new,
just like many Perl constructors are. In honor of Java, you might rather say
my $greeter = new Hi("howdy");, but I usually avoid this indirect
object form. You can even call the constructor by the class name as in
my $greeter = Hi->Hi("howdy"); (or, you could even say the pathological
my $greeter = Hi Hi("howdy");). Class methods are accessed just
like the constructor, except that their names are the Java method names.
Instance methods are called through an object reference, as if the reference
were a Perl object.
Note that
Inline::Java performs type conversions for us, so we can
pass and receive Java primitive types in the appropriate Perl variables.
This carries over to arrays, etc. When you think about what must be
going on under the hood, you'll realize what a truly magical module
this is.
I often say that most Perl code begins life in a program. As time
goes by, the good parts of that code, the ones that can be reused, are
factored out into modules. Suppose our greeter is really popular, so
many programs want to use it. We don't want to have to include the Java
code in each one (and possibly require each program to compile its own
copy of the class file). Hence, we want a module. My module looks a
lot like my earlier program, except for two features. First, I changed
the way
Inline looks for the code, which has nothing to do with whether
the code is in a program or a module. Second, reaching class methods from
any package other than
main requires careful -- though not particularly
difficult -- qualification of the method name.
package Hi;
use strict;
use warnings;

use Inline Java => "DATA";

sub new {
    my $class    = shift;
    my $greeting = shift;
    return Hi::Hi->new($greeting);
}

1;

__DATA__
__Java__
public class Hi {
    // The class body is shown in The Java Code above
}
The package starts like all good packages, by using
strict and
warnings.
The
use Inline statement is almost like the previous one, but the code
lives in the
__DATA__ segment instead of actually being inline. Note that
when you put the code in the
__DATA__ segment, you must include a marker
for your language so that
Inline can find it. There are usually several
choices for each language's marker; I chose
__Java__. This allows
Inline to glue from multiple languages into one source file.
The constructor is needed so that the caller does not need to know they are
interfacing with
Inline::Java. They call the constructor with
Hi->new("greeting") as they would for a typical package called
Hi.
Yet, the module's constructor must do a bit of work to get the right object
for the caller. It starts by retrieving the arguments, then returns the
result of the unusual call
Hi::Hi->new(...). The first
Hi is for
the Perl package and the second is for the Java class; both are required.
Just as in the program from the last section, there are multiple ways
to call the constructor. I chose the direct method with the name
new.
You could use the indirect object form and/or call the method
by the class name. The returned object can be used as normal, so I just
pass it back to the caller. All instance methods are passed directly through
Inline::Java without help from
Hi.pm. If there were class methods
(declared with the
static keyword in Java), I would either have to
provide a wrapper, or the caller would have to qualify the names. Neither
solution is particularly difficult, but I favor the wrapper, to keep the
caller's effort to a minimum. This is my typical laziness at work. Since
there will likely be several callers, and I will have to write them, I want
to push any difficult parts into the module.
If you need to adapt the behavior of the Java object for your Perl audience,
you may insert routines in
Hi.pm to do that. For instance, perhaps
you want a more typical Perl accessor, instead of the
get/
set pair used
in the Java code. In this case, you must make your own genuine Perl object
and proxy through it to the Java class. That might look something like this:
package Hi2;
use strict;
use warnings;

use Inline Java => "DATA";

sub new {
    my $class    = shift;
    my $greeting = shift;
    bless { OBJECT => Hi2::Hi->new($greeting) }, $class;
}

sub greeting {
    my $self      = shift;
    my $new_value = shift;
    if (defined $new_value) {
        $self->{OBJECT}->setGreeting($new_value);
    }
    return $self->{OBJECT}->getGreeting();
}

1;

__DATA__
__Java__
public class Hi {
    // Body omitted again
}
Here, the object returned from
Inline::Java, which I'll call the Java
object for short, is stored in the
OBJECT key of a hash-based
Hi2 object that is returned to the caller. The distinction between the Perl
package and the Java class is clear in this constructor call. The Perl
package comes first, then the Java class, then the class method to call.
The
greeting method shifts in the
$new_value, which the caller supplies
if she wants to change the value. If
$new_value is defined,
greeting
passes the
set message to the Java object. In either case, it returns the
current value to the caller, as Perl accessors usually do.
In the last section, we saw how to make a Perl module access Java code. We also saw how to make the Perl module adapt between the caller's expectation of Perl objects and the underlying Java objects. Here, we will see how to access Java classes that can't be included in the Perl code.
There are a lot of Java libraries. These are usually distributed in compiled
form in so-called .jar (java archive) files. This is good design on the part
of the Java community, just as using modules is good design on the part of
the Perl community. Just as we wanted to make the
Hi Java class available
to lots of programs -- and thus placed it in a module -- so the Java people
put reusable code in .jars. (Yes, Java people share the bad pun heritage of
the Unix people, which brought us names like
yacc,
bison,
more, and
less.)
Suppose that our humble greeter is so popular that it has been greatly expanded and .jarred for worldwide use. Unless we provide an adapter like the one shown earlier, the caller must use the .jarred code from Perl in a Java-like way. So I will now show three pieces of code: 1) an expanded greeter, 2) a Perl driver that uses it, and 3) a mildly adapting Perl module the driver can use.
Here's the expanded greeter; the two Perl pieces follow later:
import java.util.Random;

public class Higher {
    private static Random myRand = new Random();
    private String[] greetings;

    public Higher(String[] greetings) {
        this.greetings = greetings;
    }

    public void setGreetings(String[] newGreetings) {
        greetings = newGreetings;
    }

    public String[] getGreetings() {
        return greetings;
    }

    public void setGreeting(int index, String newGreeting) {
        greetings[index] = newGreeting;
    }

    public String getGreeting() {
        float randRet = myRand.nextFloat();
        int index = (int) (randRet * greetings.length);
        return greetings[index];
    }
}
Now there are multiple greetings, so the constructor takes an array of
Strings. There are
get/
set pairs for the whole list of greetings
and for single greetings. The single
get accessor returns one greeting
at random. The single
set accessor takes the index of the greeting
to replace and its new value.
Note that Java arrays are fixed-size;
don't let
Inline::Java fool you into thinking otherwise. It is very
good at making you think Java works just like Perl, even though this
is not the case. Calling
setGreeting with an out-of-bounds index will
be fatal unless trapped. Yes, you can trap Java exceptions with
eval
and the
$@ variable.
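Here is a sketch of what that trapping looks like, using the Hi3 wrapper module developed below (the two-element greeting list makes index 5 out of bounds):

```perl
my $greeter = Hi3->new(["Hello", "Bonjour"]);

eval {
    $greeter->setGreeting(5, "Howdy");   # out of bounds for a 2-element Java array
};
if ($@) {
    print "Caught Java exception: $@\n"; # $@ holds the propagated exception
}
```

Without the eval, the propagated Java exception would terminate the Perl program.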
This driver uses the newly expanded greeter through
Hi3.pm:
#!/usr/bin/perl
use strict;
use warnings;

use Hi3;

my $greeter = Hi3->new(["Hello", "Bonjour", "Hey Y'all", "G'Day"]);
print $greeter->getGreeting(), "\n";

$greeter->setGreeting(0, "Howdy");
print $greeter->getGreeting(), "\n";
The
Hi3 module (directly below) provides access to the Java code. I
called the constructor with an anonymous array. An array reference
also works, but a simple list does not. The constructor returns a Java
object (at least, it looks that way to us); the other calls just provide
additional examples. Note, in particular, that
setGreeting expects
an
int and a
String.
Inline::Java examines the arguments and
coerces them into the best types it can. This nearly always works as expected.
When it doesn't, you need to look in the documentation for "CASTING."
Finally, this is
Hi3.pm (behold the power of Perl and the work of the
Inline developers):
package Hi3;
use strict;
use warnings;

BEGIN {
    $ENV{CLASSPATH} .= ":/home/phil/jar_home/higher.jar";
}

use Inline Java => 'STUDY', STUDY => ['Higher'];

sub new {
    my $class = shift;
    return Hi3::Higher->new(@_);
}

1;
To use a class hidden in a .jar I need to do three things:

1. Include the .jar in the CLASSPATH before using Inline. A well-placed BEGIN block makes this happen.
2. Use 'STUDY' in place of Java source code.
3. Add a STUDY directive to the use Inline statement. This tells Inline::Java to look for the named classes. In this case, the list has only one element: Higher. Names in this list must be fully qualified if the corresponding class has a Java package.
The constructor just calls the
Higher constructor through
Inline::Java, as we have seen before.
Yes, this is the whole module, all 15 lines of it.
If you need an adapter between your caller and the Java library, you can
put it in either Perl or Java code. I prefer to code such adapters in Perl
when possible, following the plan we saw in the previous section. Yet
occasionally, that is too painful, and I resort to Java. For example,
the glue module
Java::Build::JVM uses both a Java and a Perl adapter
to ease communication with the genuine
javac compiler. Look at
the
Java::Build distribution from CPAN for details.
So what is
Inline::Java doing for us? When it finds our Java code,
it makes a copy in the .java file of the proper name (
javac is adamant
that class names and file names match). Then it uses our Java compiler to
build a compiled version of the program. It puts that version in a
directory, using an MD5 sum to ensure that recompiling happens when and only
when the code changes.
You can cruise through the directories looking at what it did. If something
goes wrong, it will even give you hints about where to look. Here's a tour
of some of those directories. First, there is a base directory. If you
don't do anything special, it will be called _Inline, under
the working directory from which you launched the program.
If you have a .Inline directory in your home directory, all
Inline
modules will use it. If you use the
DIRECTORY directive in your
use Inline statement, its value will be used instead. For ease of
discussion, I'll call the directory _Inline.
Under _Inline is a config file that describes the various
Inline
languages available to you. More importantly, there are two subdirectories:
build and lib. If your code compiles, the build directory will
be cleaned. (That's the default behavior; you can include directives in your
use Inline statement to control this.) If not, the build directory
has a subdirectory for your program, with part of the MD5 sum in its name.
That directory will hold the code in its .java file and the error output
from javac in cmd.out.
Code that successfully compiles ends up in lib/auto. The actual .class
files end up in a subdirectory, which is again named by class and
MD5 sum. Typically, there will be three files there. The .class file
is as normal. The other files describe the class. The .inl file
has an
Inline description of the class. It contains the full MD5 sum,
so code does not need to be recompiled unless it changes. It also says
when the code was compiled, along with a lot of other information about
the
Inline::Java currently installed. The .jdat file is specific
to
Inline::Java. It lists the signatures of the methods available
in the class.
Inline::Java finds these using Java's reflection system
(reflection is the Java term for symbolic references).
For more information on
Inline and
Inline::Java and the other
inline modules, see their perldoc. If you want to join in, sign up
for the inline@perl.org mailing list, which is archived at
nntp.x.perl.org/group/perl.inline.
Thanks to Brian Ingerson for
Inline and Patrick LeBoutillier for
Inline::Java. These excellent modules have saved me much time and
heartache. In fact, I doubt I would have had the courage to use Java
in Perl without them. Double thanks to Patrick LeBoutillier, since
he took the time to read this article and correct some errors (including
my failure to put "Bonjour" in the greetings list).
Perl.com Compilation Copyright © 1998-2006 O'Reilly Media, Inc. | http://www.perl.com/lpt/a/792 | crawl-002 | refinedweb | 2,671 | 72.97 |
In the previous lesson on basic exception handling, we explained how throw, try, and catch work together to enable exception handling. This lesson is dedicated to showing more examples of exception handling at work in various cases.
Exceptions within functions
In the examples so far, throw statements have been placed directly inside a try block. A try block, however, also catches exceptions raised by functions called from within it, because exceptions propagate up to the caller when thrown. This allows us to use exception handling in a much more modular fashion. We’ll demonstrate this by rewriting the square root program from the previous lesson to use a modular function.
#include <cmath>    // for the sqrt() function
#include <iostream>
using namespace std;

// A modular square root function
double MySqrt(double dX)
{
    // If the user entered a negative number, this is an error condition
    if (dX < 0.0)
        throw "Can not take sqrt of negative number"; // throw exception of type const char*

    return sqrt(dX);
}

int main()
{
    cout << "Enter a number: ";
    double dX;
    cin >> dX;

    try // Look for exceptions that occur within try block and route to attached catch block(s)
    {
        cout << "The sqrt of " << dX << " is " << MySqrt(dX) << endl;
    }
    catch (const char* strException) // catch exceptions of type const char*
    {
        cerr << "Error: " << strException << endl;
    }
}
The most interesting part of this program is the MySqrt() function, which potentially raises an exception. However, you will note that this exception is not inside of a try block! This essentially means MySqrt is willing to say, “Hey, there’s a problem!”, but is unwilling to handle the problem itself. It is, in essence, delegating that responsibility to its caller (the equivalent of how using a return code passes the responsibility of handling an error back to a function’s caller).
Let’s revisit for a moment what happens when an exception is raised. First, the program looks to see if the exception can be handled immediately (which means it was thrown inside a try block). If not, it immediately terminates the current function and checks to see if the caller will handle the exception. If not, it terminates the caller and checks the caller’s caller. Each function is terminated in sequence until a handler for the exception is found, or until main() terminates. This process is called unwinding the stack (see the lesson on the stack and the heap if you need a refresher on what the call stack is).
Now, let’s take a detailed look at how that applies to this program when MySqrt(-4) is called and an exception is raised.
First, the program checks to see if we’re immediately inside a try block within the function. In this case, we are not. Then, the stack begins to unwind. First, MySqrt() terminates, and control returns to main(). The program now checks to see if we’re inside a try block. We are, and there’s a char* handler, so the exception is handled by the try block within main(). To summarize, MySqrt() raised the exception, but the try/catch block in main() was the one who captured and handled the exception.
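A sample run of the program above with a negative input looks like this:

```
Enter a number: -4
Error: Can not take sqrt of negative number
```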
At this point, some of you are probably wondering why it’s a good idea to pass errors back to the caller. Why not just make MySqrt() handle its own error? The main problem is that different applications may want to handle the error in different ways, and the function itself has no way of knowing what the caller wants, so passing the error back leaves that decision to the code best positioned to make it.
#include <iostream>
using namespace std;

void Last() // called by Third()
{
    cout << "Start Last" << endl;
    cout << "Last throwing int exception" << endl;
    throw -1;
    cout << "End Last" << endl;
}

void Third() // called by Second()
{
    cout << "Start Third" << endl;
    Last();
    cout << "End Third" << endl;
}

void Second() // called by First()
{
    cout << "Start Second" << endl;
    try
    {
        Third();
    }
    catch (double)
    {
        cerr << "Second caught double exception" << endl;
    }
    cout << "End Second" << endl;
}

void First() // called by main()
{
    cout << "Start First" << endl;
    try
    {
        Second();
    }
    catch (int)
    {
        cerr << "First caught int exception" << endl;
    }
    catch (double)
    {
        cerr << "First caught double exception" << endl;
    }
    cout << "End First" << endl;
}

int main()
{
    cout << "Start main" << endl;
    try
    {
        First();
    }
    catch (int)
    {
        cerr << "main caught int exception" << endl;
    }
    cout << "End main" << endl;
    return 0;
}

When run, this program prints its "Start" messages in order as main() calls First(), which calls Second(), then Third(), then Last(). Last() prints "Last throwing int exception" and then throws an int exception. This is where things start to get interesting.
Because Last() doesn’t handle the exception itself, the stack begins to unwind. Last() terminates immediately and control returns to the caller, which is Third().
Third() doesn’t handle any exceptions either, so it terminates immediately and control returns to Second().
Second() has a try block, and the call to Third() is within it, so the program attempts to match the exception with an appropriate catch block. However, there is no handler for exceptions of type int here, so Second() terminates immediately and control returns to First(). First() also has a try block enclosing the call to Second(), and it does include a catch block for type int, so the exception is finally handled here: First() prints "First caught int exception", then "End First", and control returns to main(), which finishes normally and prints "End main".
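Tracing the whole run, the program produces output along these lines (cout and cerr shown interleaved in program order):

```
Start main
Start First
Start Second
Start Third
Start Last
Last throwing int exception
First caught int exception
End First
End main
```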
Out of the box, React allows you to style components directly with the
style property. It accepts an object of style properties, and for most use cases it’s more than sufficient. As a single property, there’s no way to specify more granular defaults, and support for
!important is effectively non-existent with the
style property. Luckily, a bit of
emotion will go a long way!
👩🎤 emotion is a flexible and highly performant CSS-in-JS library. It accepts strings and objects, supports defaulting and extending variables, and with an additional Babel plugin even supports inline child selectors!
Getting Started
To kick things off, we will need to install our dependencies, emotion and react-emotion, via npm:

$ npm install --save emotion react-emotion
or via
yarn:
$ yarn add emotion react-emotion
Be sure to include
react-emotion in your component’s source code:
import styled, { css } from "react-emotion";
Usage
With our dependencies installed, let’s talk about the different ways that you can leverage
emotion to style your components.
CSS
The quickest way to get up and running with
emotion is by passing
css to an element’s or component’s
className property.
css accepts styles as a string, a tagged template literal, an object or an array.
Here are a couple of examples of
css with a string and with an object:
<div className={css`background: #eee;`}>
  <div className={css({ padding: 10 })}>
    Hooray, styles!
  </div>
</div>
Styled
In addition to
css you can also use
styled to create an element and style it.
Similar to
css,
styled can be used with a string, a tagged template literal, an object of an array.
When you use
styled to create an element you can then create new elements with properties that can then be utilized in your styles. This opens the door for easy customization and reuse:
const Heading = styled("h1")`
  background-color: ${props => props.bg};
  color: ${props => props.fg};
`;
Which creates a
Heading component that accepts
bg and
fg properties that will set the background and text colors:
<Heading bg="#008f68" fg="#fae042">
  Heading with yellow text and a green background!
</Heading>
Taking things a step further, we can take our
Heading component and extend it, bringing the background and foreground color properties along with it:
const Subheading = Heading.withComponent("h2");
The properties themselves are not mandatory, so you include / omit them as you see fit:
<Subheading>
  Subheading with default colors!
</Subheading>

<Subheading fg="#6db65b">
  Subheading with light green text!
</Subheading>

<Subheading bg="#6db65b">
  Subheading with light green background!
</Subheading>
Just like
css, you can specify your styles as an object instead of as a string:
const Quote = styled("blockquote")(props => ({
  fontSize: props.size,
}));
And even include an object of default styles:
const Cite = styled("cite")(
  { fontWeight: 100 },
  props => ({ fontWeight: props.weight })
);
That can be optionally set when using the component:
<Cite>
  Citation with light text!
</Cite>

<Cite weight={700}>
  Citation with heavy text!
</Cite>
As mentioned before, with
emotion you can specify
!important styles with ease:
const Footer = styled("footer")`
  margin-top: 50px !important;
`;
Putting It All Together
Now that we’ve went through a bunch of disparate use cases, let’s go crazy and put them together into a more cohesive example:
import React from "react";
import { render } from "react-dom";
import styled, { css } from "react-emotion";

const Heading = styled("h1")`
  background-color: ${props => props.bg};
  color: ${props => props.fg};
`;

const Subheading = Heading.withComponent("h2");

const Quote = styled("blockquote")(props => ({ fontSize: props.size }));

const Cite = styled("cite")(
  { fontWeight: 100 },
  props => ({ fontWeight: props.weight })
);

const Footer = styled("footer")`
  border-top: 1px solid #ccc;
  color: #ccc;
  margin-top: 50px !important;
  padding-top: 20px;
`;

function App() {
  return (
    <div className={css`background: #ddd;`}>
      <div className={css({ padding: 10 })}>
        <Heading bg="#008f68" fg="#fae042">
          Gator Lyrics
        </Heading>
        <Subheading fg="#6db65b">
          Lyrics from songs that contain the word "alligator"
        </Subheading>
        <Quote size={28}>
          See you later, alligator. After a while, crocodile.
        </Quote>
        <Cite weight={700}>Bill Haley</Cite>
        <Footer>EOF</Footer>
      </div>
    </div>
  );
}

const container = document.createElement("div");
document.body.appendChild(container);
render(<App />, container);
That’s how you can style with
emotion in your React app!
To see a live example of the code above, you can check out this CodeSandbox.
Enjoy! 💥 | https://alligator.io/react/react-emotion/ | CC-MAIN-2019-35 | refinedweb | 713 | 54.42 |
Controllers in Practice
Now that you understand the general function of controllers in ASP.NET, let's take a look at the process for creating one in Visual Studio.
The Controllers folder of a Visual Studio project contains the controller classes used to manage responses and input. Remember that the “Controller” suffix is required.
Begin by creating a file for a controller such as “SuperController.cs.” Next, design code to specify views.
namespace MySoftware.Controllers
{
    public class CarController : Controller
    {
        // Get all the cars in the garage
        public ActionResult Index(int id)
        {
            // Point to the view with content
            return View(cars);
        }
    }
}
Note that Visual Studio allows code snippets to be saved and automatically generated; for example, on entering a term, a code template for a controller will be created.
ASP.NET MVC maps URLs to action methods within controller classes. The Views folder contains the information that defines the action result, which we will examine later. Note that controllers can contain as many methods as necessary to complete a task.
WRITING ACTION METHODS
Action methods usually map one-to-one with user actions, such as entering a URL, clicking Submit, or clicking a link. They can return virtually any type of data, and though methods are diverse, they must conform to certain rules:
- Methods are public.
- Methods are not static.
- Methods are not extension methods.
- Methods are not constructors, getters, or setters.
- Methods do not contain open generic types.
- Methods are not methods of the controller base class.
- Methods do not contain ref or out parameters.
Review an example of an action method below:
public ActionResult Index()
{
    var SelectedIndex = _rnd.Next(_cars.Count);
    ViewData["Cars"] = _cars[SelectedIndex];
    return View();
}
PARAMETERS
By default, action method parameter values are taken from the data collected with the request, including form values, query string values, and cookie values. If a value is missing or cannot be converted to the parameter's type, the framework passes null or throws an exception. The framework also automatically maps URL values to action method parameters by checking the values of the HTTP request on any incoming request.
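As a quick sketch of that mapping (the Details action and _repository field here are hypothetical, not part of this tutorial), a query-string value can flow straight into a typed parameter:

```csharp
// GET /Car/Details?id=5 (the framework binds "id" from the request automatically)
public ActionResult Details(int id)
{
    var car = _repository.Find(id); // hypothetical data-access call
    return View(car);
}
```

If the request carries no usable value for id, binding falls back to the null-or-exception behavior just described.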
There are many ways to access these values. Two means of access include the Request and Response objects of the Controller class. The Request object gathers values passed by the client to the server during an HTTP request. The Response object sends output to the client from the server. Review their syntax, and examples of their use below.
Correct syntax for Request follows:
Request[.collection|property|method](variable)
Correct syntax for Response follows:
Response.collection|property|method
Review examples of Request and Response code below:
strEmployeeID = Request.Form("EmployeeID")

Response.Clear
Response.Write "The number of strTheAmount is " & strTheAmount
Response.End
Conquering Instagram with PHP and the Instagram API.
API Calls and Limits
There are two types of API calls you can perform with the Instagram API: Unauthenticated and Authenticated. Unauthenticated API calls only need the client ID, and authenticated API calls use OAuth, specifically OAuth 2.0. If you don’t know what OAuth is, check out this Introduction to OAuth 2 article on DigitalOcean.
Before we move on, it’s important to understand that there are limits to this API. At the moment of writing of this article, you can only perform 5000 authenticated calls per token and 5000 unauthenticated calls to the API. This is regardless of the endpoint that you use, though there are specific endpoints each of which has its own limits. You can check out the endpoint-specific rate limits section in the limits page if you want to learn more.
Registering an Application
Needless to say, you need to have your own Instagram account in order to work with the Instagram API. Then, sign up as a developer.
Next we have to register an application. We can do that by going to the Instagram Developer page and clicking on the ‘Register Your Application’ button. If you’ve never created an app before, you’ll be redirected to the app registration page which looks like this:
If you have already created an app previously, then it leads you to the app management page which lists all your existing apps. From there, all you have to do is to click on the ‘Register a New Client’ button which would lead you to the same page as above.
Once you’re on the app registration page, fill in all the fields. The website URL is the URL to your app’s website. The redirect URI is the URL where the user will be redirected after giving access permission to your app. This has to be an HTTPS URL. If you do not have an HTTPS server, you can use Apache and Ngrok. Download the ngrok version for your operating system, extract it and then execute the following command in your preferred install directory. Replace
80 with the port where your server is running:
ngrok http 80
What this does is assign an HTTP and HTTPS URL to your Apache server running on localhost. You can then use the HTTPS URL for testing. Use this URL for the Website URL and Redirect URI fields in the app registration page.
Once that’s done, just click on the ‘Register’ button to finish the registration. If all went well, you will be greeted with a client ID and client secret. We will be using these later on to perform requests to the API.
API Console
You can use the API console to play around the requests which you can make with the Instagram API. To use the API console, expand the API method selection menu on the left side of the console. From there, you can select a method you want to use for your request. Most methods require authentication so you have to select OAuth 2 from the Authentication drop-down and then sign in with your existing Instagram account. Do note that any requests you perform are performed on your behalf. This means that any action you do, such as liking a photo or following a user will be performed by your account.
The API Console is pretty self-explanatory. You can select what type of HTTP request (GET, POST, DELETE, PUT) you want to use, enter the URL where the request will be submitted, and enter required query parameters. You can see the actual request and response that have been made after you click the ‘Send’ button.
Making API Calls with PHP
Now we’re ready to interact with the API using PHP. We can do that with Guzzle, an HTTP client for PHP. Install it with Composer:
composer require guzzlehttp/guzzle:~5.0
Optionally, you can install the Slim PHP framework if you want to follow along with the demo project. Note that we’re using version 5 of Guzzle because version 6 is based on PSR-7 and thus lacks many of the practical features of past iterations.
composer require slim/slim
If you want to use Slim, you need to install Twig, as well as Slim Views, so you can use Twig from within Slim.
composer require twig/twig composer require slim/views
Once that’s done, create a new PHP file and add the following code:
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client();
Next, add the client ID, client secret and redirect URL of your Instagram app.
define("CLIENT_ID", "YOUR CLIENT ID");
define("CLIENT_SECRET", "YOUR CLIENT SECRET");
define("REDIRECT_URL", "YOUR REDIRECT URL");
Set up Slim to make use of Twig for handling views. Also, enable error reporting and set the directory for caching the views:
app = new \Slim\Slim(array( 'view' => new \Slim\Views\Twig() //use twig for handling views )); $view = $app->view(); $view->parserOptions = array( 'debug' => true, //enable error reporting in the view 'cache' => dirname(__FILE__) . '/cache' //set directory for caching views );
Getting the Access Token
To get an access token, we first need to construct the login URL. The login URL points to the page that asks the user to grant permission to the app. The base login URL is:. And then we need to pass in the
client_id,
redirect_uri,
scope and
response_type as query parameters.{$client_id}&redirect_uri={$redirect_url}&scope=basic&response_type=code
You already know what the
client_id and
redirect_url are, so let’s talk about the
scope and the
response_type.
scope– this is where you specify what your app can do. Currently, the scopes available are
basic,
relationships, and
likes.
basicis provided by default. This gives you read access to all of the API endpoints. The other 3, however, require your app to be submitted for review, because they allow your app to like, comment, follow or unfollow a specific user.
response_type– the type of response we will get once the user grants permission to the app. On the server-side, this should be
codeand on the client-side this should be
token. We’re primarily working on the server so this should be
code. This means that an authorization code is returned after permission has been granted.
Once the user has granted permission to the app, he will be redirected to the redirect URL that was specified. The authorization code is passed along with this URL as a query parameter. Next we need to make a
POST request to the
/oauth/access_token endpoint, additionally passing in the client ID, client secret, grant type, redirect URL and the code. The grant type is how the access token will be acquired after the user has granted permission to your app. In this case, we’re using
authorization_code. This is the code that is passed as a query parameter in the redirect URL. Once the request is made, we convert the response from JSON to an array by calling the
json method on the response. Finally, we render the view.
$app->get('/login', function () use ($app, $client) { $data = array(); $login_url = ''; if($app->request->get('code')){ $code = $app->request->get('code'); $response = $client->post('', array('body' => array( 'client_id' => CLIENT_ID, 'client_secret' => CLIENT_SECRET, 'grant_type' => 'authorization_code', 'redirect_uri' => REDIRECT_URL, 'code' => $code ))); $data = $response->json(); }else{ $login_url = "{$client_id}&redirect_uri={$redirect_url}&scope=basic&response_type=code"; } $app->render('home.php', array('data' => $data, 'login_url' => $login_url)); });
Views in Slim are stored in the
templates directory by default. Here are the contents of the
home.php view.
{% if login_url %} <a href="{{ login_url }}">login with instagram</a> {% else %} <div> <img src="{{ data.user.profile_picture }}" alt="{{ data.user.username }}"> </div> <ul> <li>username: {{ data.user.username }}</li> <li>bio: {{ data.user.bio }}</li> <li>website: {{ data.user.website }}</li> <li>id: {{ data.user.id }}</li> <li>access token: {{ data.access_token }}</li> </ul> {% endif %}
At this point you can now extract the access token and store it somewhere safe. Instagram didn’t mention how long an access token will last. All the documentation says is that it will expire at a time in the future. Therefore, we need to handle the event where the access token expires. You can do that by checking the
error_type under the
meta item in the response. If the value is
OAuthAccessTokenError, then it means your access token has expired. You will only need to check for this item if the
code in the
meta item has a value other than 200. 200 means OK, just like the HTTP status code. 400 means error.
Tags Search
Now we can make authenticated calls to the API. First, let’s try searching for recent photos taken in Niagara Falls via tag searching. Remember that tags don’t have spaces in them so we have to stick with camelCase:
$app->get('/tags/search', function() use($app, $client, $access_token) { $tag = 'niagaraFalls'; $response = $client->get("{$tag}/media/recent?access_token={$access_token}"); $results = $response->json(); $app->render('images.php', array('results' => $results)); });
The
images.php view just loops through all the results that are returned and extracts the low resolution image URL. We then use that as a source for the image tag.
{% for row in results.data %} <img src="{{ row.images.low_resolution.url }}"> {% endfor %}
By default, Instagram returns a maximum of 20 photos per request. You can, however, specify the
count as one of the query parameters to increase or limit the number of photos returned.
If you’re not sure if the tag that you are using exists, you can first perform a tag search and then use the first result that comes out:
$app->get('/tags/search-with-tagvalidation', function() use($app, $client, $access_token) { $query = 'Niagara Falls'; $response = $client->get("{$access_token}&q={$query}"); $result = $response->json(); if(!empty($result['data'])){ $tag = $result['data'][0]['name']; $response = $client->get("{$tag}/media/recent?access_token={$access_token}"); $results = $response->json(); $app->render('images.php', array('results' => $results)); }else{ echo 'no results'; } });
User Feed
The user feed can be accessed by submitting a GET request to the
/users/self/feed endpoint:
$app->get('/user/feed', function() use($app, $client, $access_token) { $response = $client->get("{$access_token}"); $results = $response->json(); });
Here’s a screenshot of a sample user feed response:
Searching for Users
Let’s try searching for users who have ‘Ash Ketchum’ as their name:
$app->get('/user/search', function() use($app, $client, $access_token) { $query = 'Ash Ketchum'; $response = $client->get("{$query}&access_token={$access_token}"); $results = $response->json(); });
The call above returns the
username,
id,
profile_picture, and
full_name of the user. Not all of the results are exact matches though.
Here’s the screenshot of the response that I got:
Searching for Photos in a Specific Place
You can also search for photos or videos uploaded in a specific place by using the Google Geocoding API. We use the Google Geocoding API to convert our query to coordinates (latitude and longitude) which the Instagram API requires. Here’s an example:
$app->get('/geo/search', function() use($app, $client, $access_token) { $query = 'banaue rice terraces'; //make a request to the Google Geocoding API $place_response = $client->get("{$query}&sensor=false"); $place_result = $place_response->json(); if($place_result['status'] == 'OK'){ //extract the lat and lng values $lat = $place_result['results'][0]['geometry']['location']['lat']; $lng = $place_result['results'][0]['geometry']['location']['lng']; //make a request to the Instagram API using the lat and lng $response = $client->get("{$access_token}&lat={$lat}&lng={$lng}"); $results = $response->json(); if(!empty($results['data'])){ $app->render('images.php', array('results' => $results)); }else{ echo 'no photos found'; } }else{ echo 'place not found'; } });
Note that you can also specify the
distance,
min_timestamp, and
max_timestamp to this endpoint to further filter the results. The default distance is 1km and you can specify up to 5km.
min_timestamp and
max_timestamp are unix timestamps for limiting results to photos that were taken within a specific time period. You can use Carbon to easily generate timestamps based on user input such as ‘yesterday’, ‘5 days ago’, ‘1 week ago’.
Pagination
You may have noticed that the Instagram API already makes our life easy with pagination. If the results of a specific API call have a next page on it, you can just use the value of
next_url under the
pagination item as the URL to be used on the next request. This allows you to access the next page easily. Though do keep in mind that you need to store the
id of the first item on the current page so that you can still access that page after you have navigated to the next page.
PHP Client
If you want to make your life even easier when working with the Instagram API, there’s a PHP library called Instagram-PHP-API which provides convenience methods. To install it, execute
composer require cosenary/instagram.
Once that’s done, you can use it by adding the following code:
use MetzWeb\Instagram\Instagram; $instagram = new Instagram(array( 'apiKey' => CLIENT_ID, 'apiSecret' => CLIENT_SECRET, 'apiCallback' => REDIRECT_URL ));
Here are a few examples.
Getting the Login URL
$instagram->getLoginUrl(array('basic', 'relationships'));
The array argument is optional. It contains the scopes that you want to use.
Getting the Access Token
Pretty much the same as what we did earlier using Guzzle, only this time, we’re calling methods and the data that we need becomes readily available.
$app->get('/login2', function () use ($app, $instagram) { $login_url = $instagram->getLoginUrl(array('basic', 'likes')); if(!empty($_GET['code'])){ $code = $_GET['code']; $data = $instagram->getOAuthToken($code); //get access token using the authorization code $instagram->setAccessToken($data); $access_token = $instagram->getAccessToken(); //do anything you want with the access token }else{ $app->render('login.php', array('login_url' => $login_url)); } });
Getting User Info
You can get the user info by calling the
getUser method.
$user_id can be omitted if you only want to get the user info of the currently logged in user.
$user_data = $instagram->getUser($user_id);
Laravel
If you use Laravel, someone has also created a Laravel Wrapper which uses this library. You can check it out here.
Conclusion
In this tutorial, we learned how to work with the Instagram API using the Guzzle HTTP Client and an Instagram client for PHP. The Instagram API is a really nice way to interact with an Instagram users’ data. With it, you can build some really interesting applications.
Have you built anything with the API? Do you prefer Guzzle or the Instagram PHP library? Why? Let us know in the comments! | https://www.sitepoint.com/conquering-instagram-with-php-and-the-instagram-api/ | CC-MAIN-2019-30 | refinedweb | 2,395 | 54.83 |
Logger injection in JBoss 6 Final ?Sakari Isoniemi Jan 1, 2011 12:01 PM
Seam Logger don't inject with Seam 2.2.1.CR2, JBoss 6 Final.
Application is generated by seam-gen and works fine.
When loggging is added comes always NPE no matter how logging is instantiated.
import org.jboss.seam.annotations.Logger;
@Logger public Log log;
log.debug(”test seam logging”); -- NPE
@Logger private Log log;
log.debug(”test seam logging”); -- NPE
@Logger Log log;
log.debug(”test seam logging”); -- NPE
In JBoss AS 6 commmon lib is log4.jar and server start advertises this lib. I copied this jar to ../default/lib.
Result is the same. Maybe jar must be in applications libraries in this AS version ?
1. Re: Logger injection in JBoss 6 Final ?Sakari Isoniemi Jan 5, 2011 1:33 AM (in response to Sakari Isoniemi)
Hai
This problem is not Seam version or platfrom spesific.
Probably the cause is that appropriate class where query is started is not injected by Seam. This is bad architecture for new Seam users and I suggest that seam-gen is changed so, that server methods are called from UI so, that @Logger works.
Why is the class not Seam injected although there is annotation @Name ?
This requires that methods are instantiated from UI with #{bean.method} notation that causes, that Seam catches the call and injects Logger etc.
Am I right ?
2. Re: Logger injection in JBoss 6 Final ?Stefano Travelli Jan 5, 2011 3:19 AM (in response to Sakari Isoniemi)
Since Seam components are managed components you have to let Seam instantiate them.
Referencing the component in EL is the common way. In some cases you can use Component.getInstance().
3. Re: Logger injection in JBoss 6 Final ?Sakari Isoniemi Jan 5, 2011 3:46 AM (in response to Sakari Isoniemi)
OK
I suppose that in Seam documentantion this is not emphasized
enough.
Sadly seam-gen produces architecture where @Logger cannot be used everywhere,
so some clarifying is needed. | https://developer.jboss.org/thread/193472 | CC-MAIN-2018-39 | refinedweb | 335 | 68.67 |
hi all,
I am trying to create a most simple 1 linear layer network to fit a linear regression. Just to help myself better understand how Pytorch works. However, I encountered a strange issue with the model training.
in my model’s init() method, I have to add a manual initialization step(shown below) to have the model quickly converge to my regression function. (the weight value 2, 3 are random number, I could put any value here and the model will still converge)
self.layer1.weight = torch.nn.Parameter(torch.Tensor([2, 3]))
Without this line, the model never converge, the training loss just randomly oscillates in the range of hundreds of thousands. With this line, it quickly decreases to near 1.
I have postulated that it is because default initial weight parameters were too small if I do not initialize them to be far away from zero. Then I changed the initial values and found out the convergence always work as long as I have this line, the exact value I set does not matter. Could someone explain what is going on behind the scene here? Thanks.
My entire script:
import torch import numpy as np class Net(torch.nn.Module): def __init__(self, input_dim, output_dim): super(Net, self).__init__() self.layer1 = torch.nn.Linear(input_dim, output_dim, bias=False) self.layer1.weight = torch.nn.Parameter(torch.Tensor([2, 3])) def forward(self, x): x = self.layer1(x) x.squeeze() return x # generate data using the linear regression setup y = 5 * x1 + 3 * x2 sample_size = 10000 input_dim = 2 output_dim = 1 epoch = 30 bs = 100 data = np.random.randn(sample_size, 3) data[:, :2] = data[:, :2] * 100 # add a normal noise term data[:, 2] = 5 * data[:, 0] + 3 * data[:, 1] + np.random.randn(sample_size) data = torch.Tensor(data) train_x = data[:, :input_dim] train_y = data[:, input_dim] net = Net(input_dim, output_dim) net.zero_grad() criterion = torch.nn.MSELoss() optimizer = torch.optim.RMSprop(net.parameters(), lr=.01) for i in range(epoch): batch = 0 while batch * bs < train_x.shape[0]: batch_x = train_x[batch * bs : (batch + 1) * bs, :] batch_y = train_y[batch * bs : (batch + 1) * bs] pred_y = net.forward(batch_x) loss = criterion(pred_y, batch_y) optimizer.zero_grad() loss.backward() optimizer.step() if batch % 100 == 0: #print(f"{i} {batch} {loss}") print(net.layer1.weight) batch += 1 | https://discuss.pytorch.org/t/why-the-model-fail-to-converge-without-a-manul-weight-initialization/50431 | CC-MAIN-2019-30 | refinedweb | 377 | 50.33 |
in reply to
Is Perl the right language for a large web application?
walikngthecow: this is a good question, that I (and many others) have been wrestling with, from time to time.
In my opinion, in comparison to Python, Perl is more expressive, more powerful, and much neater in its language architecture. I think the Perl sigil methodology is brilliant (namely, that scalars start with a '$', arrays with a '@', hashes with a '%' and so on); that the operator determines the expression context, instead of the variable type; these and several other pillars of the Perl language make it really distinct.
On the other hand, many areas of the Python language seem like a kludge (for example, its regex implementation, and its implementation of array slicing).
If you know Perl well - if you have mastered Perl - then you have in your possession a considerably more powerful tool than Python.
Some additional considerations: 1. On a sizable web project, you need to use a tool - I understand that "Catalyst" is a popular one - but I've never used it. I have been using wxPerl.2. It seems to me that the user base of Python is much, much larger than that of Perl (maybe because of the rumors that Google is using Python internally?) and that the Perl user base is slowly shrinking. A couple of years ago, there was a resurgence in Perl, since the biology departments, in universities, have discovered that it's convenient (and powerful) to use Perl for manipulating genome sequences. But since then, it seems that Perl usage is flat and decreasing.
Helen
Lots of companies/organisations use Perl extensively on sizeable projects - the BBC, LoveFilm and GlaxoSmithKline are 3 I have personal experience of.
One thing that puzzles me is that PHP seems to be the standard web development language, even at enterprise levels. I can appreciate it is perhaps easier to program in, but the fact that its performance seems to be some way behind Perl leaves me mystified as to why it is a choice for large scale projects.
I develop in Perl and PHP, finding that the former jobs are less available but more demanding and more rewarding, whilst the latter keep my bank balance afloat till the next Perl job.... :-)
One of the main reasons PHP is easier for a lot of newbies is that their web hosting company has already set things up to make things as easy as possible. Just rename one of your HTML pages ".php" and you can start sprinkling in some code.
Perl, if your web host supports it at all, is probably running via CGI (fine for small scripts, but not the most efficient method for large scale projects); modules that could make your life easier, like Plack, may not be pre-installed, and installing them without root access may be non-obvious; for people who have never touched a Unix command-line before, even trivial things like chmod +x represent a major barrier to entry.
This is fixable but would require co-ordinated effort from the Perl community. For example: a web-based control panel to make it easy to manage local::lib directories, and which would also provide one-click glue between Apache and a PSGI application. (Plus evangelism, evangelism, evangelism to get this widely pre-installed by major hosting providers.)
While PHP is easy to get started, it also has drawbacks that make PHP development harder in the long term for bigger projects. Until very recently, namespace management was poor, meaning that every part of your code had to be careful not to tread on every other part's function names. All functions lived in a single flat namespace. (Class methods being an exception.) Namespaces were introduced in PHP 5.3, but are not yet widely used. None of the many, many, many built-in PHP functions are namespaced. Most large PHP projects (e.g. Drupal) still use a flat namespace.
PHP's OO model is less flexible than Perl's too. With the recent addition of traits, it compares favourably to languages like Java and Python, but it's still not up there with the mighty Moose. With fewer options for recombining different bits of code into different objects, DRY can suffer.
There are other things, but I won't bore you all.
Incidentally Dave, we appear to live around the corner from each other. (On a planetary scale at least.)
Dark
Milk
White
Alcoholic
Nutty
Fruity
Biscuity
Gingery
Spreadable
Drinkable
Moussed
Alternative e.g. Carob
None
Other kind
Results (779 votes),
past polls | http://www.perlmonks.org/index.pl?node_id=1029467 | CC-MAIN-2015-27 | refinedweb | 762 | 61.56 |
Things are easy when you have to use a C++ library in a Python project. Just you can use Boost.
First of all here is a list of components you need:
Let's start with a small C++ file. Our C++ project has only one method which returns some string "This is the first try". Call it CppProject.cpp
char const *firstMethod() { return "This is the first try."; } BOOST_PYTHON_MODULE(CppProject) { boost::python::def("getTryString", firstMethod); // boost::python is the namespace }
Have a CMakeLists.txt file a below:
cmake_minimum_required(VERSION 2.8.3) FIND_PACKAGE(PythonInterp) FIND_PACKAGE(PythonLibs) FIND_PACKAGE(Boost COMPONENTS python) INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS} ${PYTHON_INCLUDE_DIRS}) PYTHON_ADD_MODULE(NativeLib CppProject) FILE(COPY MyProject.py DESTINATION .) # See the whole tutorial to understand this line
By this part of the tutorial everything is so easy. you can import the library and call method in your python project. Call your python project MyProject.py.
import NativeLib print (NativeLib.getTryString)
In order to run your project follow the instructions below:
cmake -DCMAKE_BUILD_TYPE=Release ..
make
python MyProject.py. Now, you have to see the string which the method in your C++ project returns. | https://riptutorial.com/boost/example/22112/introductory-example-on-boost-python | CC-MAIN-2021-43 | refinedweb | 184 | 61.22 |
Nanoparticle¶
ASE provides a module,
ase.cluster, to set up
metal nanoparticles with common crystal forms.
Please have a quick look at the documentation.
Build and optimise nanoparticle¶
Consider
ase.cluster.Octahedron(). Aside from generating
strictly octahedral nanoparticles, it also offers a
cutoff
keyword to cut the corners of the
octahedron. This produces “truncated octahedra”, a well-known structural motif
in nanoparticles. Also, the lattice will be consistent with the bulk
FCC structure of silver.
Exercise
Play around with
ase.cluster.Octahedron() to produce truncated
octahedra. Set up a cuboctahedral
silver nanoparticle with 55 atoms. As always, verify with the ASE GUI that
it is beautiful.
ASE provides a forcefield code based on effective medium theory,
ase.calculators.emt.EMT, which works for the FCC metals (Cu, Ag, Au,
Pt, and friends). This is much faster than DFT so let’s use it to
optimise our cuboctahedron.
Exercise
Optimise the structure of our Ag55 cuboctahedron
using the
ase.calculators.emt.EMT
calculator.
Ground state¶
One of the most interesting questions of metal nanoparticles is how their electronic structure and other properties depend on size. A small nanoparticle is like a molecule with just a few discrete energy levels. A large nanoparticle is like a bulk material with a continuous density of states. Let’s calculate the Kohn–Sham spectrum (and density of states) of our nanoparticle.
As usual, we set a few parameters to save time since this is not a real production calculation. We want a smaller basis set and also a PAW dataset with fewer electrons than normal. We also want to use Fermi smearing since there could be multiple electronic states near the Fermi level:
from gpaw import GPAW, FermiDirac calc = GPAW(mode='lcao', basis='sz(dzp)', setups={'Ag': '11'}, occupations=FermiDirac(0,1))
These are GPAW-specific keywords — with another code, those variables would have other names.
Exercise
Run a single-point calculation of the optimised Ag55 structure with GPAW.
After the calculation, dump the ground state to a file:
calc.write('groundstate.gpw')
Density of states¶
Once we have saved the
.gpw file, we can write a new script
which loads it and gets the DOS:
import matplotlib.pyplot as plt from gpaw import GPAW calc = GPAW('groundstate.gpw') energies, dos = calc.get_dos(npts=500, width=0.1) efermi = calc.get_fermi_level()
In this example, we sample the DOS using Gaussians of width 0.1 eV.
You will want to mark the Fermi level in the plot. A good way
is to draw a vertical line:
plt.axvline(efermi).
Exercise
Use matplotlib to plot the DOS as a function of energy, marking also the Fermi level.
Exercise
Looking at the plot, is this spectrum best understood as continuous or discrete?
The graph should show us that already with 55 atoms, the plentiful d electrons are well on their way to forming a continuous band (recall we are using 0.1 eV Gaussian smearing). Meanwhile the energies of the few s electrons split over a wider range, and we clearly see isolated peaks: The s states are still clearly quantized and have significant gaps. What characterises the the noble metals Cu, Ag, and Au, is that their d band is fully occupied so that the Fermi level lies among these s states. Clusters with a different number of electrons might have higher or lower Fermi level, strongly affecting their reactivity. We can conjecture that at 55 atoms, the properties of free-standing Ag nanoparticles are probably strongly size dependent.
The above analysis is speculative. To verify the analysis we would want to calculate s, p, and d-projected DOS to see if our assumptions were correct. In case we want to go on doing this, the GPAW documentation will be of help, see: GPAW DOS.
Solutions¶
Optimise cuboctahedron:
from ase.cluster import Octahedron from ase.calculators.emt import EMT from ase.optimize import BFGS atoms = Octahedron('Ag', 5, cutoff=2) atoms.calc = EMT() opt = BFGS(atoms, trajectory='opt.traj') opt.run(fmax=0.01)
Calculate ground state:
from gpaw import GPAW, FermiDirac from ase.io import read atoms = read('opt.traj') calc = GPAW(mode='lcao', basis='sz(dzp)', txt='gpaw.txt', occupations=FermiDirac(0.1), setups={'Ag': '11'}) atoms.calc = calc atoms.center(vacuum=4.0) atoms.get_potential_energy() atoms.calc.write('groundstate.gpw')
Plot DOS:
import matplotlib.pyplot as plt from gpaw import GPAW from ase.dft.dos import DOS calc = GPAW('groundstate.gpw') dos = DOS(calc, npts=800, width=0.1) energies = dos.get_energies() weights = dos.get_dos() ax = plt.gca() ax.plot(energies, weights) ax.set_xlabel(r'$E - E_{\mathrm{Fermi}}$ [eV]') ax.set_ylabel('DOS [1/eV]') plt.savefig('dos.png') plt.show() | https://wiki.fysik.dtu.dk/ase/gettingstarted/cluster/cluster.html | CC-MAIN-2020-16 | refinedweb | 777 | 52.15 |
Python alternatives for PHP functions
open(filename).read(1000) # ALWAYS specify a max size (in bytes). See [url][/url]
# Python 2
import urllib2
urllib2.urlopen(url).read(1000)
# Python 3
import urllib.request
urllib.request.urlopen(url).read(1000)
import urllib2
def file_get_contents(filename, use_include_path = 0, context = None, offset = -1, maxlen = -1):
if (filename.find('://') > 0):
ret = urllib2.urlopen(filename).read()
if (offset > 0):
ret = ret[offset:]
if (maxlen > 0):
ret = ret[:maxlen]
return ret
else:
fp = open(filename,'rb')
try:
if (offset > 0):
fp.seek(offset)
ret = fp.read(maxlen)
return ret
finally:
fp.close( )
(PHP 4 >= 4.3.0, PHP 5)
file_get_contents — Reads entire file into a().
Name of the file to read..
A valid context resource created with
stream_context_create(). If you don't need to use a
custom context, you can skip this parameter by NULL.
The offset where the reading starts.
Maximum length of data read.
The function returns the read data or FALSE on failure.. | http://www.php2python.com/wiki/function.file-get-contents/ | CC-MAIN-2019-51 | refinedweb | 160 | 63.76 |
Let's Take a Look at CSS in JS with React in 2019 - CSS & Inline Styling
CSS in JS isn't unique to React; however, I'm a little React fanboy, and it happens to be one of my favorite JS libraries for writing front-end applications, so I'm going to be talking about CSS in JS solutions specifically with React and how I feel about them!
Introduction with regular CSS
Before we dive into anything I think we should take a look at what we can accomplish with some good ol' CSS in the context of a React application.
```js
// Button.js
import React from 'react'
import './Button.css'

const Button = () => {
  return (
    <button className="button button-green">
      I think I'm green
    </button>
  )
}

export default Button
```
```css
/* Button.css */
.button {
  border-style: solid;
  border-width: 2px;
  border-radius: 2rem;
}

.button-green {
  background-color: green;
  border-color: white;
  color: white;
}
```
So this looks fairly normal, right? It looks like some regular HTML and CSS besides the `className`. If you're not familiar with React, `class === className` because `class` is a reserved word in JS, and since JSX is JavaScript with embedded HTML-like syntax, this is a no-no.
Issues I have run into using CSS with React
Before we start here I need to state that I am definitely not an expert or guru of CSS. I can kick my feet around with it and make responsive rules that look alright. I can't name any crazy CSS tricks or create an animated Pikachu in pure CSS.
Because of this, I'm not going to even pretend to talk about all the pitfalls with CSS or any new features with CSS that aim to fix these pitfalls, so I'm going to drop this excellent resource from 2014. I'll let you decide if it still holds up! ;)
Global Namespaces ❌
Are you saying I'll be able to use BEM?
When the time comes, you won't have to.
If you took a look at that link I put up above, you'll notice that global namespaces are the first issue it covers with CSS. However, we've all felt this tremor, which is why naming conventions like BEM exist.
`.button` and `.button-green` from our CSS example are already 2 global namespaces we introduced.
As an application grows, I've found that the amount of CSS written also grows continuously, creating unnecessary duplicate styles with small tweaks and unused styles bulking up the application. While there are configs out there to make sure unused CSS isn't included in your bundle, those styles don't vanish from your codebase, and that sucks.
Loose Coupling ❌
While you can structure your React app so that your component styles exist in the same directory as your component, there's nothing in your component file strictly tying the two together. Instead, you're referencing the rules you've specified for your selectors. Despite the file structure, those styles can be referenced elsewhere; it's just another thing to think about.
Clunky Controls ❌
When using just CSS, you're more or less stuck controlling all your styling changes by changing the class of an element. While this maybe seemed more natural with something like vanilla JavaScript or jQuery, it always felt hacky to me when using React. You have this direct access to your view, all in separated components, yet we're writing all this logic just to reference different combinations of CSS classes.
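To make that concrete, here's a minimal sketch of the kind of class-combination logic that tends to accumulate (the class names are hypothetical, extending the `Button.css` example above):

```javascript
// Hypothetical helper: build a className string from component state.
// This is the glue logic that ends up scattered through a React app
// when CSS classes are the only styling control available.
function buttonClassName({ colour, disabled }) {
  const classes = ['button'];
  if (colour === 'green') classes.push('button-green');
  if (disabled) classes.push('button-disabled');
  return classes.join(' ');
}

// In JSX this would show up as:
//   <button className={buttonClassName({ colour: 'green' })}>...</button>
console.log(buttonClassName({ colour: 'green' })); // → "button button-green"
```

Every new visual state means another branch in helpers like this, spread across the codebase.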
I know Inline Styling
Another way to style your React app without any modules is inline styling. Now, pay no attention to the person in the back yelling about how bad inline styling can be, because that's actually me in disguise. We were all taught that inline styles in HTML are bad and class-based styles were rad. However, this is JSX.
```js
// Button.js
import React from 'react'

const Button = () => {
  const buttonGreen = {
    backgroundColor: "green",
    border: "2px solid white",
    borderRadius: "2rem",
    color: "white"
  };

  return (
    <button style={buttonGreen}>
      I think I'm green
    </button>
  )
}

export default Button
```
This doesn't look so bad, right?
You may notice that the CSS rules here don't quite look the same. Instead of kebab-case, we are using camelCase for our rules. They are mapped out in a JavaScript object with their values wrapped in quotations as strings.
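The mapping itself is mechanical: drop the hyphen and uppercase the following letter. Purely as an illustration of that rule (in practice you just write the camelCase keys directly):

```javascript
// Illustration only: the kebab-case → camelCase rule that the style
// object syntax implies, e.g. background-color → backgroundColor.
function toCamelCase(cssProperty) {
  return cssProperty.replace(/-([a-z])/g, (_, letter) => letter.toUpperCase());
}

console.log(toCamelCase('background-color')); // → "backgroundColor"
console.log(toCamelCase('border-radius'));    // → "borderRadius"
```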
Global Namespaces ✅
In our inline styling example, `buttonGreen` is local to that file, so we can have as many `buttonGreen` vars as we want across the application without running into any conflicts or following any specific naming conventions!
Loose Coupling ✅
With these styles being locally defined, you can't use them unless you go through the effort of exporting and importing them; ideally, there are enough steps to stop the bad things from happening.
I think it also promotes developers to use React in a more intended way to create generic components that can be reused.
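If you do want reuse with inline styles, it has to be deliberate: composing plain objects rather than stacking selectors. A hypothetical sketch:

```javascript
// Hypothetical shared base style: reuse happens through explicit
// composition (object spread), not through selector collisions.
const buttonBase = {
  border: '2px solid white',
  borderRadius: '2rem',
};

// Variants extend the base by copying it, so nothing is shared by accident:
const buttonGreen = { ...buttonBase, backgroundColor: 'green', color: 'white' };
const buttonBlue = { ...buttonBase, backgroundColor: 'blue', color: 'white' };

console.log(buttonGreen.border); // → "2px solid white"
```

Anyone reading the component can trace exactly where each style object came from.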
Clunky Controls ✅
```js
// Button.js
import React from 'react'

const Button = ({ backgroundColour, colour, children }) => {
  const buttonStyles = {
    backgroundColor: backgroundColour,
    color: colour,
    border: "2px solid white",
    borderRadius: "2rem"
  };

  return (
    <button style={buttonStyles}>
      {children}
    </button>
  )
}

export default Button
```
```js
// SomePage.js
import React from 'react';
import Button from './Button';

const SomePage = () => (
  <Button backgroundColour="blue" colour="white">I'm going to be blue</Button>
)

export default SomePage
```
Now, this is a super simple example, but we can see that we have given our `Button` specific control over the colours through some props that are passed into the component. I like this approach because it's self-documenting and it keeps all the logic in the component; I know exactly what each prop in the component is controlling, and the usage of the component makes it very clear that it's setting the `backgroundColour` to blue and the `colour` to white.
What I don't like about Inline Styling
Using objects for styles
I'm not a fan of this syntax for styling. It's enough of a departure to cause tedious work converting CSS into JavaScript objects: replacing ; with ,, wrapping values in "", and converting the keys to camelCase. It's also scary enough for your designers that this approach can get shut down at the sight of it.
Inline styling is still inline
At the end of the day, this is still inline styling and your HTML will still end up with those ugly style attributes.
Would I personally use Inline Styling in React?
Nope.
But that's not the end for dear old CSS in JS! There are some pretty cool packages out there that try to solve all kinds of issues regarding readability, scalability, and performance with CSS in JS.
In this series, I'm going to try and look at a diverse pool of these solutions whether they are highly rated or buried underneath.
As of writing this, I have used Styled Components, Emotion, and JSS. If you think one is really cool feel free to drop a comment about it!
I'm also on Twitter if you want to follow me or chat on there. I sometimes post things there!
Posted by: Phil Tietjen
I'm a Senior Developer and Co-host of Friday Night Deploys Podcast. I'm also a dad that likes to play video games and lift, always failing to keep it real with the kidz.
Discussion
I'm interested to see the rest of the series 😉
I actually really like inline styles for some things... it's nice to not have to switch files and pollute the global space with styles just for one little thing... but it can also become a bad habit.
And I actually really didn't like most css-in-js that I ran across, but after being forced to use it for a while, it wasn't so bad either; so I'm interested to hear your other posts!
Glad you're liking the series so far!
I initially bounced off it the second I first saw it, then gradually got invested until it's all I do for styling in React in my personal projects.
There are still a few I've seen that I think mess up a component, but I won't spoil my series entries here ;).
I'm pretty excited to go over ones I know, revisit ones I bounced off, and find new ones!
I'm listening. *clicks follow* :D
By the way, personally what I'm doing now is that I have scss files that I import around, and I only use inline styles where I want to do parameter expansion from JS into the styles. Like when there is a color picker for a field in the CMS. Or when I want to do some responsive dynamics.
Super appreciate the follow :D. Now I have obligations to continue the series!
I've used SCSS with react too though I was soured because those stylesheets were poorly written. I enjoyed the nesting rules though!
Glad to hear it's working for you though!
You can put your style object outside the render function, or make it a static property of the function/class; otherwise you're going to recreate the same style each time.
Thanks for commenting, you're totally right Pasquale! Ideally you can store your style variables anywhere you want and import them into your components.
However, I did want to show a simple example that can also show props affecting styles.
We can also use useMemo to avoid recreating the same style on every render, but I figured it was just a little out of scope for what I wanted to show :)
The code examples can be found here. Do what you will with it, but as always, no warranty for you!
Sweet sweet abstraction. It's a pretty powerful thing, right? From a client's perspective everything is nice and tidy, while all of the nasty, not-so-pretty details are tucked away under the covers. And as we all know, in statically typed languages, the king of abstraction is the mighty mighty interface. But it might surprise you that with the ASP.Net Web API, you cannot use interfaces as parameters to your action methods (say what!!!!!). To be fair, if you think about it, it actually does make sense. After all, how can model binders and formatters possibly know what concrete type you want when they encounter an IFoo interface at run-time? In fact, if you try it, the formatters won't even try, resulting in a lovely null reference exception somewhere down the stack. So if you wanted to do something similar to the code snip below, you are out of luck my friend.
using System.Web.Http;
using WebApiInterface.Models;

namespace WebApiInterface.Controllers
{
    public class FooController : ApiController
    {
        public void Post(IFoo foo)
        {
            // Do some stuff to foo
        }
    }
}
So why doesn't this work? Again, because the JSON formatter cannot determine the desired run-time type of IFoo you want, it simply skips it. And as a result, you will be left with null. Like I said, it's very sad, but it makes sense.
So, what can we do to support this?
How About IoC?
At first glance it may seem like simply plugging in some good ol’ inversion of control might be the answer. After all IoC fixes everything, right? For those of you not familiar with some of the extension points of the Web API, let me explain how leveraging IoC might help. In the Web API’s Global Configuration there is a DependencyResolver property exposed.
GlobalConfiguration.Configuration.DependencyResolver
By setting this configuration option to an IoC-enabled dependency resolver, we can control how the Web API resolves dependencies. So, for example, when the Web API creates our controllers and comes across an interface parameter, we can tell it to look in our IoC container for the run-time type bound to the specified interface. This is very useful for injecting a UnitOfWork, a Service, or any other object we need in our controller. While this is a great first step, it does not directly help us with our desire to have interface types as action parameters. But it is a very important first step, so let's take a look at how we plug IoC into the ASP.Net Web API.
Setting Up IoC
OK, let me first say that I know most of the cool kids hate on Unity. But hey, I don't wear skinny jeans, I still have a PC, and I have grown used to the judgmental glares I get at my local coffee shop as I proudly play with my Windows phone. That said, for simple no-frills IoC support I still think it's a really nice option. So, with some help from the NuGet package, we are going to set up our Web API to use Unity as our dependency resolver. First, install the NuGet package. Once Unity is set up, we need to configure our container like so.
namespace WebApiInterface
{
    public static class UnityConfig
    {
        public static void RegisterComponents()
        {
            var container = new UnityContainer();
            container.RegisterType<IFoo, Foo>();
            GlobalConfiguration.Configuration.DependencyResolver =
                new UnityDependencyResolver(container);
        }
    }
}
Nothing too fancy, just a simple registration stating that when a controller constructor asks for an IFoo, it should get an instance of type Foo. Additionally, you can see that I am setting the DependencyResolver configuration option I mentioned above to a new instance of the UnityDependencyResolver supplied by the Unity.WebApi NuGet package. This is what does the actual work of resolving the object instances from the container. With these changes our IoC configuration is ready to go. And god willing, when our controllers are created, any dependencies they declare will be resolved by way of our Unity container.
And there it is.
But What About Our Action Methods
While setting up IoC this way works great for resolving interface types when our constructors ask for them, it does nothing to help us in our action methods. The image below shows an HTTP post to our Foo controller. As you can see, we end up with a null, not a Foo.
Custom Creation Converter
OK, so how can we leverage our IoC container setup to resolve interface types in our action methods? As it turns out, for that we need to look for extension points within our configured formatter. While the Web API's Dependency Resolver will help us with constructor injection, it does nothing to help us with our action methods. For that we need something a little different.
For those of you who want a wonderful explanation of model binding and formatters: read this, it's a great article.
Looking at our problem it turns out that JSON.Net has one very slick extension option that can help us. And that is the Custom Creation Converter. Much like our dependency resolver gives the Web API the ability to do constructor injection, Custom Creation Converters give JSON.Net specific instructions on how to create object instances during the deserialization process. So when JSON.Net is walking down our object graph, as it finds interface types, it will look for a configured Custom Creation Converter to delegate the object creation process to. This allows us to explicitly influence how objects are created when specific interfaces are found during the model binding process.
The Custom Converter
Here is the code for our custom creation converter.
public class FooCustomConverter : CustomCreationConverter<IFoo>
{
    public override IFoo Create(Type objectType)
    {
        return new Foo();
    }

    public override object ReadJson(JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
    {
        if (reader.TokenType == JsonToken.Null)
        {
            return null;
        }

        IFoo obj = Create(objectType);
        serializer.Populate(reader, obj);
        return obj;
    }
}
And in order to tell JSON.Net to use this converter we also need to register it. Again we can use the global configuration objects for this.
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings.Converters.Add(new FooCustomConverter());
As you can see there is not much to this at all. First we simply derive from the CustomCreationConverter<T> given to us by the JSON.Net framework. This gives us two methods we need to override. First, the Create method. This method creates a new instance of our Foo object. Second, the ReadJson method. This method is called for us by JSON.Net as it reads each JSON element. There are a few things to note here. We want to return null when the token type is null. This happens when no value is supplied for the JSON element. If the token type is not null, we call the Create method to create the new Foo object. Then we use the supplied serializer instance to map the JSON bits into the new Foo object by calling the populate method. And last we return the newly created and mapped object so JSON.Net can add it to the object graph that is being created.
All in all this is pretty simple, but this design has some very serious flaws. For one, it would require a CustomCreationConverter for every interface type you want to support, boo. Secondly, since our converters are not using IoC, if we want to support multiple derived types, that logic needs to be put into the converters somehow, booooooooooo. So, let's tighten up our implementation so both of those problems go away.
Enter The IoC Custom Creation Converter
Since we have IoC already wired up, it is possible to create a single CustomCreationConverter to service all of our needs. Notice in the code below that in our override of the ReadJson method, JSON.Net is kind enough to supply us with the type information for each object it is trying to deserialize. This information is precisely what we need to delegate object creation to our Unity container. Let's take a look at what this might look like.
public class IocCustomCreationConverter<T> : CustomCreationConverter<T>
{
    public override T Create(Type objectType)
    {
        return (T)ServiceLocator.Current.GetInstance(objectType);
    }

    public override object ReadJson(JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
    {
        if (reader.TokenType == JsonToken.Null)
        {
            return null;
        }

        var obj = Create(objectType);
        serializer.Populate(reader, obj);
        return obj;
    }
}
As you can see, we did not have to change the code very much. In fact, the only major change is in the Create method. Here we ask the container (via the service locator) to create the object instance for us, as opposed to manually 'newing' up the object. We are using the ServiceLocator pattern so we do not need direct access to the container in our JSON serializer. If you are not familiar with the ServiceLocator pattern, here is a quick primer on it. Now that we have our IocCustomCreationConverter ready to go, we need to register it with JSON.Net with the simple one-liner below.
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings.Converters.Add(new IocCustomCreationConverter<IActionParameter>());
One thing you might notice here is that I am registering the custom creation converter to only work with types that derive from IActionParameter. This is not required, but there are some things that we may not want to use our Custom Creation Converter for. For primitive types such as an int or a bool, it does not make any sense to ask JSON.Net to use a custom converter. So by using this simple marker interface, we can control which types are actually resolved using our new converter.
Wrapping Up
So there you have it! By configuring IoC and adding a single JSON.Net Custom Creation Converter, we are now able to abstract the parameters to our action methods. This is great for testability, as well as creating APIs that may need to serve multiple clients with varying data structures. And that is a pretty cool thing!
BDN | https://brettedotnet.wordpress.com/category/asp-net-web-api/ | CC-MAIN-2021-10 | refinedweb | 1,688 | 56.66 |
Marble::GeoDataObject
#include <GeoDataObject.h>
Detailed Description
A base class for all geodata objects.
GeoDataObject is the base class for all geodata classes. It is never instantiated by itself, but is always used as part of a derived object.
The Geodata objects are all modeled after the Google KML files as defined in.
A GeoDataObject contains 2 properties, both corresponding directly to tags in the KML files: the id, which is a unique identifier of the object, and a targetId which is used to reference other objects that have already been loaded.
The id property must only be set if the Update mechanism of KML is used, which is currently not supported by Marble.
Definition at line 48 of file GeoDataObject.h.
Member Function Documentation
Compares the value of id and targetId of the two objects.
- Returns
- true if these values are equal, or false otherwise
Definition at line 131 of file GeoDataObject.cpp.
Get the id of the object.
Definition at line 80 of file GeoDataObject.cpp.
Reimplemented from Serializable.
Definition at line 119 of file GeoDataObject.cpp.
Provides the parent of the object in GeoDataContainers.
Definition at line 65 of file GeoDataObject.cpp.
Set the id of the object.
- Parameters
-
Definition at line 85 of file GeoDataObject.cpp.
Sets the parent of the object.
Definition at line 75 of file GeoDataObject.cpp.
set a new targetId of this object
- Parameters
-
Definition at line 95 of file GeoDataObject.cpp.
Get the targetId of the object to be replaced.
Definition at line 90 of file GeoDataObject.cpp.
Reimplemented from Serializable.
Definition at line 125 of file GeoDataObject.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sun May 24 2020 22:38:28 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
This is the source code of an amended version of Vcredist_x86.exe, the redistributable distributed by Microsoft. Microsoft forgot to include the required Windows Installer 3.1 in the redistributable. The above file comes with a bootstrapper that detects if any prerequisites are missing and installs the required runtimes.
This article is also a compilation of the various forum/usenet/message board postings and findings on the topic of C++ deployment in Visual C++ 2005.
This section mostly applies to programs built from the command line (with cl or nmake). Although, you can reproduce the same from the IDE, it requires changing numerous project settings, and if you're doing this, I assume you already know what you're doing!
You've set up the Visual Studio environment and compiler, you've successfully built your project, and now you're ready to test your VC++ 2005 app. However, before it is able to start up, you get an error:
And you want to know what's going on.
Programs compiled with Microsoft Visual C++ that dynamically link to the C runtimes (/MD or /MDd) have to bundle with them a copy of the C-runtime DLLs (usually called MSVCRT.DLL or MSVCRxx.DLL where xx represents the version of Visual C++). If you just copy the .EXEs but forget to copy MSVCRxx.DLL along with it, you'll get the above error.
Okay, you note that MSVCR80.dll isn't located in System32. It is located in another directory (C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.42_x-ww_0de06acd). You copy it from there to System32 (or if you're a veteran from the DLL hell days, you'll know better and copy the DLL to your app directory instead) and you try to run it again:
This error occurs whether you copy it to your application directory or System32. The release build gives you a slightly different error message, but still mostly the same.
Visual Studio 2005 made a number of changes to the way the C-runtime library is linked in. The first change is that the single threaded libraries are now gone. If you need the performance boost that the single threaded libraries provided and are willing to sacrifice thread safety, you should make use of the nolock variants of the CRT library functions. The second change is that projects created with VC2005 IDE now dynamically link to the C-runtime libraries by default. In VC2003, only MFC and managed C++ apps dynamically linked to the CRTs by default.
Finally, the C and C++ runtimes are now implemented as Side-by-Side DLLs. It's no longer enough to copy MSVCR80.DLL/MSVCP80.DLL/MSVCM80.DLL (from now on called the CRT DLLs) into the System32 directory. You must now load the CRT DLLs through a manifest. If you attempt to load the CRT DLLs without using a manifest, the system will detect this, raise an R6034 assertion, and abort(). That's why the CRT DLLs are now located in WinSXS and not in the System32 directory.
If you go back to the build directory, you will notice that there is a new manifest file called <appname>.exe.manifest (if it's not there, then remove the /MANIFEST:NO switch from the linker command line). You can either copy this manifest to the local directory, or you can embed this as a resource of your executable:
#include <winuser.h>
CREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST helloconsole.exe.manifest
Compile with the rc.exe tool and embed using the linker. Alternatively, you can use the mt.exe tool to create and embed the manifest to the executable:
mt.exe /manifest helloconsole.exe.manifest /outputresource:helloconsole.exe
Once you have embedded a manifest, your build now resembles apps made from the IDE.
(This part applies to builds made in the VS IDE and command line builds that followed the previous section). You've built your app, and now you attempt to run it on your machine. At last your program runs properly:
However, when you try to run on another machine (a machine that does not have Visual C++ installed), you receive this message:
A bit of troubleshooting (Dependency Walker) tells you that it has something to do with the CRT DLLs, but you cannot figure out where to put those darn DLLs. You've copied msvcr80.dll to the System32 directory, the application directory, even the WinSXS directory. It still doesn't work. What's going on?
Now that your app is configured to load the CRTs through a manifest, the CRT DLLs must now be placed in a directory recognized by the side-by-side technology. You can either load the CRT DLLs via a shared side-by-side assembly, or through applocal assemblies. Applocal assemblies are covered in a later section (see below). First, we will cover how to install the CRTs as a shared side-by-side assembly.
If you choose to install a shared side-by-side assembly, they will need to be installed with a Windows Installer setup project (copying doesn't work). The setup will create policies, manifests, and some registry keys to the HKLM registry (the manifests and policies are important for the side-by-side engine to recognize the isolated library).
You cannot use a third party setup to install a shared side-by-side assembly. That part of the setup must be performed by a Windows Installer package. It's bad news if you're using Express Edition (which doesn't have the ability to create setup projects).
For older Windows (Win2K and below), there is no such concept as a side-by-side DLL, and the CRT DLLs will need to be installed either in the System32 directory or the same folder as the application. All this means that you have to create a complicated installer that behaves differently on different operating systems (even the service pack level of the OS can alter the behaviour of your installer).
Fortunately, Microsoft has already written a merge module that does all this for you (including registering the side-by-side DLL). Look in the "\Program Files\Common Files\Merge Modules" to find a set of MSM files (ignore the properties that say it's Beta 2. It actually installs the final release). The files that have "CRT" in their names are the ones you need to redistribute (if you use MFC/ATL, you'll also need the other MSMs). By including these files in your setup project, your setup will properly install the runtime DLLs in the correct location.
If you're not using Windows Installer, you can use the executable located in "\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bootstrapper\Packages\vcredist_x86\vcredist_x86.exe". All you have to do is execute this redistributable in your third party setup.
However, there is a major bug with the current version of the merge modules. Before you install the runtime redistributables, you have to ensure the following components are installed first:

- The latest service pack for your version of Windows
- Windows Installer 3.1
For example, if you're trying to run your app on a Win2K computer, you have to download the latest Windows 2000 service pack (SP4), install that, reboot, then download Windows Installer 3.1, install that, and reboot. Then, finally, you can copy over the vcredist_x86 file and install that. Such a sequence of steps (which significantly alters the operating system, requires many reboots, and probably breaks existing apps) is very inconvenient for an end user. If any one of these components is not present, the merge module will fail with a nondescript error at the end user's expense (I got an error 1723 when testing). This complex install process was a design decision made by Microsoft.
Although there are changes coming for SP1, I couldn't wait for that to be released, so I came up with my own installer that included all the necessary updates in it. And this is the topic of this article.
The code is for a bootstrapper that detects the presence of C-runtimes and if not present, installs them. If either Internet Explorer or Windows Installer needs to be updated, the user is presented with a dialog asking to install them as well. The main components are launched in an interactive mode with the /norestart switch set to prevent them from rebooting the computer. If you want to change this, you'll need to alter the InstallRequiredApps() function in my source. You'll also need to download each component separately:
Place all these packages in the same directory as the solution file.
Detecting Internet Explorer is done by checking the value of the "Version" string in HKLM\Software\Microsoft\Internet Explorer. The version of Windows Installer is checked by calling the DllGetVersion export from MSI.DLL. The Windows version is checked through the GetVersionEx() API. In order to detect if the CRTs are installed, I have created a simple DLL that dynamically links to the C runtimes. If a call to LoadLibrary() fails for this DLL, we know that the C runtimes are not installed and should be installed.
The bootstrapper's binaries will be compressed with IExpress (the package and deploy wizard bundled with Windows). The IExpress packager works in all 32-bit versions of Windows, checks for binary integrity (e.g. bad downloads), and automatically determines a writable temporary directory (for LUAs). Although we could implement these features ourselves (that's what earlier editions of this article did), I'd rather make use of externally tested code than attempt to reinvent the wheel.
When you make a bootstrapper application (that needs to run on every OS from NT3.x to Windows Vista), you have to be very careful about which APIs you invoke, and how you run your app (static link only, no shlwapi, no MFC, no internet, no ATL, no .NET, no debug helpers, etc.). Unfortunately, Visual C++ 2005 makes code that is inherently unrunnable in older Windows (it needs GetLongPathNameW() and IsDebuggerPresent() to be present in Kernel32.dll). If you want my bootstrapper to run on older Windows OSes (even if it's just a message box telling the user the program cannot run on this OS), you have to use an older compiler, such as VC6.
And that should be it. The return numbers returned by my bootstrapper are:
If you dynamically link to the MFC DLLs and make use of its data access classes, you'll have to include a whole bunch of redistributables (such as the updated common controls, the MDAC, and DCOM update). And if you use its internet related classes, you'll have to update the version of Internet Explorer. Note that the requirement of IE is only for MFC. If you don't use MFC, you don't need to redistribute Internet Explorer (indeed, if you #define EXCLUDE_IE60 with my bootstrap, then you can compile a smaller bootstrap that removes the IE-related stuff).
The MFC runtimes themselves will be installed by vcredist_x86.exe, which will also install the ATL and OpenMP runtimes. If you are using the Unicode release of the MFC DLLs, note that the Microsoft provided DLLs will not work on Win9x (since Win9x does not implement Unicode). Please follow the instructions located in Michael Kaplan's blog to see how to rebuild the MFC DLLs to use the MSLU.
If you're running Visual C++ Express, you can still use my bootstrapper application even though you lack the capability of creating a setup project (and you lack the redist folder and vcredist_x86). Outlined in Nikola Dudar's blog are the steps to compile a CRT MSI from those merge modules, which you can redistribute to your end users. The first step is to download and extract the WiX binaries to a folder (actually, the first step is to download and install the Platform SDK, but you've already done that, haven't you?).
Once you have installed WiX (I'll assume you installed it to "C:\WiX"), you need to open up a SDK command prompt. Run the following command twice:
uuidgen.exe
This should give you two GUIDs (one for each time you run uuidgen) which will be useful later on. Memorize those two GUIDs. Now create an XML file called "C:\WiX\vccrt.wxi".
The contents of C:\WiX\vccrt.wxi should look something like this:
<Include>
<?define PRODUCT_ID=00000000-0000-0000-0000-000000000000?>
<?define PACKAGE_ID=00000000-0000-0000-0000-000000000000?>
</Include>
Don't save the file now, replace those 0s with the two GUIDs you memorized earlier. Once you have replaced those GUIDs you can save the file (and you can also forget those GUIDs you memorized). Now create a file called C:\WiX\vccrt.wxs and give it the contents:
[Editor comment: Line breaks used to avoid scrolling.]
<?xml version="1.0" encoding="utf-8"?>
<?include 'vccrt.wxi' ?>
<Wix xmlns="">
<Product Id="$(var.PRODUCT_ID)"
Name="Visual C++ 8.0 Runtime Setup Package"
Language="1033"
Version="1.0.0.0"
Manufacturer="Your Company">
<Package Id="$(var.PACKAGE_ID)"
Description="MSI to redistribute my app"
Manufacturer="Your Company"
InstallerVersion="300"
Compressed="yes" />
<Media Id="1"
Cabinet="VCCRT.cab"
EmbedCab="yes" />
<Directory Id="TARGETDIR" Name="SourceDir">
<Merge Id="CRT"
Language="0"
src="C:\Program Files\Common Files\Merge
Modules\Microsoft_VC80_CRT_x86.msm"
DiskId="1" />
<Merge Id="CRT Policy"
Language="0"
src=>
Now, with these two files saved, open up a command prompt at C:\WiX and type in the following commands:
candle.exe vccrt.wxs -out vccrt.wixobj
light.exe vccrt.wixobj -out vccrt.msi
In the end, you should have a file called vccrt.msi. If you're getting any errors, you should check out the troubleshooting section of Nikola's blog. With the vccrt.msi created, run this file on all the machines you intend your app to work on.
If you want my bootstrapper application to invoke vccrt.msi instead of vcredist_x86.exe, you need to alter my code so that the MSI is executed instead (instructions are given in install.txt).
Microsoft offers a number of alternatives for deploying the C runtimes without using a setup project. All of these alternatives allow your VC2005 app to run on another computer.
Assuming your application is called helloconsole.exe, and is installed in C:\test\, you can copy the CRT DLLs to the application folder.
Once you have done that, open up your \Program Files\Microsoft Visual Studio 8\VC\Redist\x86\Microsoft.VC80.CRT folder and look for the file called Microsoft.VC80.CRT.manifest. If you run C++ Express or don't have the redist folder, this is what the Microsoft.VC80.CRT.manifest looks like:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- Copyright © 1981-2001 Microsoft Corporation -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
    <noInheritable/>
    <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
        version="8.0.50727.42" processorArchitecture="x86"
        publicKeyToken="1fc8b3b9a1e18e3b"/>
    <file name="msvcr80.dll"/>
    <file name="msvcp80.dll"/>
    <file name="msvcm80.dll"/>
</assembly>
Save this to a UTF-8 XML file called Microsoft.VC80.CRT.manifest (and make sure the copyright sign comes out correctly).
Once you have the manifest, copy it to the C:\Test\ directory, along with the CRT DLLs and then your application should successfully run. Your directory should now look like:
If you're making use of MFC/ATL/OpenMP, you will also need to copy the corresponding runtimes and their manifests as well (see your redist folder). Now your app should run.
Exercise: what happens when the user (or more accurately, the administrator) subsequently installs the vcredist_x86.exe package after you have copied these "app-local" DLLs? Which version will be loaded?
In earlier versions of this article, I assumed that the app-local DLLs would override the WinSXS DLLs (not desirable if the shared assemblies are a later version of the CRT). To make certain, however, let's confirm this by performing some PSAPI tests:
#include <vector>
#include <iostream>
#include <windows.h>
#include <psapi.h>
#include "stlunicode.h"
#pragma comment (lib, "psapi")
static const int MAX_MODULES=1024;
int main(void)
{
std::vector<HMODULE> names(MAX_MODULES);
DWORD lpcbNeeded = 0;
if(EnumProcessModules(GetCurrentProcess(),
&names[0], names.size() * sizeof(HMODULE), &lpcbNeeded))
{
names.resize(lpcbNeeded / sizeof(HMODULE));
for each(HMODULE item in names)
{
std::vector<TCHAR> module_name(FILENAME_MAX);
DWORD charsCopied = GetModuleFileName(item,
&module_name[0], FILENAME_MAX);
std::tcout << TEXT("*") << &module_name[0]
<< TEXT("\n");
}
}
return 0;
}
On an XP machine with both applocal and WinSXS DLLs installed, the output looked something like:
*C:\Test\helloconsole.exe
*C:\WINDOWS\system32\ntdll.dll
*C:\WINDOWS\system32\kernel32.dll
*C:\WINDOWS\WinSxS\x86_Microsoft.VC80.
CRT_1fc8b3b9a1e18e3b_8.0.50727.
42_x-ww_0de06acd\MSVCP80.dll
*C:\WINDOWS\WinSxS\x86_Microsoft.VC80.
CRT_1fc8b3b9a1e18e3b_8.0.50727.
42_x-ww_0de06acd\MSVCR80.dll
*C:\WINDOWS\system32\msvcrt.dll
*C:\WINDOWS\system32\PSAPI.DLL
This shows that my earlier assumption was wrong. The shared assemblies are indeed loaded in preference to the app-local DLLs. Until the administrator is coerced into installing the shared CRT DLLs, your app can make use of the app-local DLLs. Once the shared CRTs are installed, your app will start using the shared CRT DLLs, and your app-local DLLs become unused files. This method has two advantages going for it (no setup required, and it runs on unpatched Windows 2000/XP/2003 systems), and since I cannot find anything else wrong with this approach, I feel it offers an attractive solution for making your app run on another computer. Microsoft fully supports this method of deployment, albeit with the CRT DLLs arranged slightly differently.
If you look at your redist folder, you will notice that the files are located in strangely named subdirectories, like Microsoft.VC80.CRT or Microsoft.VC80.MFC. These directories are purposefully named: when you install your application, Microsoft recommends that you copy these folders as-is to your application directory.
According to Microsoft your setup should look like this for WinXP and above:
For Win2K and below, it should look like:
(Exercise: how would you arrange the DLLs so that the same setup works for both WinXP and Win2K?)
Not only will you need redundant copies of the CRT DLLs; Microsoft admits that this approach only works if your application is an executable (EXE). It does not work for DLLs. For DLLs, Microsoft mandates a setup similar to the one I've described. But if my method works for both EXEs and DLLs, why not always arrange the CRT DLLs the way I've described?
According to Microsoft, installing a private side-by-side assembly is not supported on Windows 2000 and below (they prefer the CRT DLLs to be installed in the System32 directory), but it seems to work fine. In my opinion, this sounds more like scare tactics from Microsoft: they say it may or may not work, but they're not going to help you if you do it.
If you use any part of the CLR (say, you're writing a C++/CLI app), then you have to install the .NET Framework. If this is the case, then you can kill two birds with one stone by just redistributing the .NET Framework. Note that the .NET Framework is a large redistributable that will refuse to install if you don't have Internet Explorer 6.0 or the latest service pack for Windows 2000 (or XP or 2003 or Vista). You will also require Windows Installer 3.0. However, with the .NET Framework you at least get a friendlier error message that indicates the missing component (no need for my bootstrapper).
Overall, the end user may have to download 360 MB of patches before they can install the .NET Framework. However, once the .NET Framework is installed, the user will have nearly everything they need to run the app (including the CRTs, MSIE, the latest service pack, and MSI 3.0). Everything, that is, apart from the MFC/ATL/OpenMP libraries: they still need to be redistributed if you use them.
If you have the source code of the C runtimes (only available with a full install, and not on the Express Edition), then you can rebuild the entire CRT to behave just the way you want (in this case, to load from your own directory). However, there is a legal issue with distributing an application that links to a hacked CRT (make sure you review the VS EULA with your legal team). Most importantly, these rebuilt DLLs must not be named MSVCR80.DLL (or MSVCP80.DLL or MSVCM80.DLL). You might have other reasons to use a custom CRT (e.g. to enable MSLU support for the CRTs); if that is the case, you should choose this solution. Steps to rebuild the CRT are given on the MSDN site.
Unlike Microsoft's CRT, this private CRT is not supposed to be placed in the WinSXS directory or even the System32 directory - it should be placed in the same directory as your app. The primary difference between your private CRT and Microsoft's CRT is that _CRT_NOFORCE_MANIFEST is defined in your build. That means the code that checks whether the DLL was loaded via the manifest (which also happens to be the code that prevents your app from running under Windows NT) gets preprocessed out. Thus you can deploy this DLL locally with your app, without needing to include a manifest with it (Windows NT compatibility is an added bonus).
To use the modified CRT, you need to add a /NODEFAULTLIB switch to your linker command line (Project -> Project Properties -> Configuration Properties -> Linker -> Input -> Ignore All Default Libraries, set to Yes). Then, you need to add the full path to your rebuilt CRT in the Additional Dependencies field (as well as any Win32 libs that got excluded). This needs to be repeated for all your projects.
Also note that you will get serious problems if you use a rebuilt CRT and call into a DLL that expects STL parameters (or a FILE* or any other CRT-specific construct) when that DLL does not use your modified CRT. Such problems include heap corruption, unreleased locks, debug asserts and other crashes (i.e. all those crashes that only occur on the end users' machines but never on yours).
The major disadvantage with this approach is that if you're using a buggy function in the CRT DLLs and Microsoft issues a fix for that function, your application will not be updated. The only way you can fix this bug is to patch your version of Visual Studio and rebuild the CRTs again.
The final option is to statically link your executable to the CRT (by the way, this is how the bootstrapper is able to run without the CRT DLLs). To statically link, go to Project -> Project Properties -> Configuration Properties -> C/C++ -> Code Generation -> Runtime Library and change "Multi-Threaded DLL" to "Multi-Threaded".
If you're careful enough, you can even get the application to run on Windows NT! The pieces of the CRT that require InterlockedCompareExchange and GetLongPathNameW are discarded from your final executable, and thus your app can run on Windows NT. For this to happen, you have to be careful about which C functions you call (no iostream, no locale, no algorithm), and you'll probably have to upgrade to the latest service pack for Windows NT (plus all post-service-pack patches).
Using this method is not much better than rebuilding the CRTs. Once again, if your application consists of several DLLs where each DLL expects STL/CRT parameters, you will get serious bugs until you rebuild everything to dynamically link the CRT.
And if Microsoft issues a fix for their routines and you use this method, you become solely responsible for fixing both your bugs and Microsoft's.
If you have Windows 2000, please consider yourself a tester for my app. I have not tested my bootstrapper on this OS, and some of my most complex code runs specifically on this operating system. If you are on Win2K, please, let me know how it works.
There is a lot of hardcoding going on in my bootstrapper. It's not as configurable as this installer. It does not support the installation of .NET (although it could with a few minor modifications). And currently it supports only one command line option (/silent), and even this switch doesn't work as well as it should.
My bootstrapper behaves rather unreliably if the user chooses to install Microsoft's components but cancels in the middle of the install. If you're still receiving errors even after you have deployed your application, then...
If you're getting errors related to MSVCR80D.DLL/MSVCP80D.DLL/MSVCM80D.DLL (note the D), then recompile your app as a release build (go to Build -> Configuration Manager and, where it says Active Solution Configuration, select "Release"). If you're the end user, ask the application developer to rebuild the program as a release build.
If you're getting some other error related to missing DLLs, then try opening your application in Dependency Walker (it's located somewhere under the Platform SDK folder - search for depends.exe in the folder where you installed the Platform SDK; if it's not there, download it from here). Dependency Walker should tell you how each DLL is loaded and whether any DLL is missing or outdated. Missing DLLs are noted with question marks and outdated DLLs are flagged red.
If you are receiving an R6034 error, then make sure your app contains a manifest. Then make sure your folder includes a second manifest called Microsoft.VC80.CRT.manifest (see above). Note that if you're running Windows 95/NT or below, it's not possible to solve this error unless you upgrade your operating system.
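For reference, a minimal Microsoft.VC80.CRT.manifest looks roughly like the sketch below. The version and publicKeyToken values are taken from the WinSxS directory names shown earlier in this article; treat the exact file list as an assumption and match it to the CRT DLLs you actually ship:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <!-- Identity must match what the application's manifest depends on -->
  <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
                    version="8.0.50727.42" processorArchitecture="x86"
                    publicKeyToken="1fc8b3b9a1e18e3b"/>
  <!-- One entry per CRT DLL placed beside this manifest -->
  <file name="msvcr80.dll"/>
  <file name="msvcp80.dll"/>
  <file name="msvcm80.dll"/>
</assembly>
```

The manifest sits in the same folder as the CRT DLLs it describes, next to your application.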
If you are building a DLL and that DLL is having trouble loading, then make sure that DLL has an embedded manifest. The manifest for a DLL should be created with the following command:
mt.exe /manifest helloconsole.dll.manifest /outputresource:helloconsole.dll;#2
DLLs need to have their manifest embedded as a resource with the resource ID equal to 2 (according to MSDN). External manifests do not work for DLLs; they only work for applications. So unless you are able to alter the application's manifest, you should embed the mt.exe-generated manifest into your DLL.
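The corresponding command for an executable differs only in the resource ID; per MSDN, applications use resource ID 1 while DLLs use resource ID 2 (the file names here are just the article's example names):

```
mt.exe /manifest helloconsole.exe.manifest /outputresource:helloconsole.exe;#1
```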
One trick that seems to have solved some problems is to enable the "Use FAT-32 Workaround" setting in project properties. Although Microsoft hasn't revealed what this setting actually does, some users have reported that it solved whatever problems they had.
If you are not receiving an R6034, Dependency Walker shows nothing wrong, and you're running a fully patched, supported version of Windows, then perhaps it is a problem in the DLL itself. Try the links below to see if they are of any help.
Since this article was published, Microsoft has paid attention to the complaints, and in Service Pack 1 for Visual Studio 2005 they have fixed some of the most pressing concerns with their software. vcredist_x86.exe no longer requires MSI 3.0 to run; it can now run on machines with Windows Installer 2.0. This means the vcredist install sequence becomes relatively simple:
You no longer need to check the windows version or service pack level (the table above is no longer relevant). What's more, vcredist_x86.exe will guarantee that the CRT DLLs are installed into the correct locations. This same procedure works for all supported versions of Windows. The fix is available as a QFE if you can't wait for service pack 1 to be released.
Thanks. | http://www.codeproject.com/Articles/12482/Bootstrapper-for-the-VC-2005-Redists-with-MSI-3-1?fid=244216&df=10000&mpp=10&sort=Position&spc=None&tid=2039574 | CC-MAIN-2014-35 | refinedweb | 4,597 | 63.19 |
Sometimes one of the enterprise web services does not provide all the information necessary, and it turns out that there is another web service which can return the missing fields. On the other hand, many of the calling tools cannot (or do not want to) call multiple web services one by one to get all of the information, but just want to use one composite web service which returns everything. Can PI handle such integration flows, and if so, how? There are a couple of ways of doing that in PI, but in order for them to work efficiently we need to make some assumptions:
a) we cannot use ccBPM – if we want our composite web service call to be quick, we cannot use ccBPM to orchestrate it
b) we should only be using ICO objects – we don't want to use the ABAP stack at all, as it can slow down each call, and synchronous web service calls in particular can suffer from that
c) we need to make sure all logging for sync calls is turned off (it is disabled by default in PI), remembering that sync call logging on ICO is only possible as of PI 7.31 (EhP1 for SAP PI 7.3)
How, then, do we create a composite web service call in PI itself?
As I mentioned, there are multiple ways to build a composite web service. We could create a Java proxy which would simply call all of the necessary web services, but would that allow us to use any of the PI tools? Not necessarily - we wouldn't even be able to provide communication channel details (URL, passwords, etc.). Taking that into account, we'd like to prepare a composite web service call which is able to use PI's communication channels (not only is this nicer to administer, it can also save us a lot of programming - for example, special authorization types are available in standard with SOAP-AXIS communication channels). In order to use PI's communication channels we need to either use a standard PI flow (like an ICO) or the lookup API. As it is not possible to call two synchronous web services from one call, we can use the lookup API: the idea is to call the first web service in the normal way (using the ICO object) and then call all of the remaining web services using the lookup API.
Imagine that system 1 is calling a WS on systems 2, 3 and 4. The ICO object has a standard configuration to call just system 2, and systems 3 and 4 are called using the lookup API, as per the screenshot below. The lookup API for a web service can be used as shown in Bhavesh's blog: Webservice Calls From a User Defined Function.
Was that simple? Is that all? It turns out we may have a second issue to solve with this approach - passing parameters. How can we call the rest of the web services if the first one does not return the object key? We can have two situations:
a) the composite web service call returns multiple outputs for multiple object IDs (like details for multiple master data objects - materials). In this case the return messages from all web service calls need to contain the object IDs, which can then be used for calling the additional web services. There is no problem here.
b) the composite web service call returns information about only one object (like Purchase Order details). In this case the response may not contain the Purchase Order number, so we need to find a way to call the rest of the web services with the same object ID.
How can we deal with the issue in the second scenario, given that we cannot use variables from the request message in the response mapping?
We can easily do it using Adapter-Specific Message Attributes (ASMA). In the request mapping, put the object ID into some ASMA parameter; in the response mapping, you can query the ASMA data to get the same object ID back. This way it does not have to be in the response of the first web service call at all.
How do you feel about this approach?
now you’ve stubbed a knife in my heart 🙂
a sync BPM with multiple sync send steps inside?
I guess this would no longer be a sync call but an overnight call hehe 🙂
Regards,
Michal Krawczyk
But on a serious note, I have one of the most complex BPMs I have ever put my hands on running in production, all happy and fine. It involves numerous WS calls, and the end result is an E2E processing time under 5-6 sec.
What I love is the way we can control the exceptions, thus making it a meaningful business process on its own terms!
Hi Michal,
What if it is not an ICO - will it still work? Will the parameter be available in the response? To be honest, the context of ASMA attributes is not defined after all...
And I am thinking - what if the attribute is set directly in the request (e.g. the System namespace, FileName parameter)? Can it be used directly in the response, e.g. when it comes to writing the file?
Thanks & best regards,
Lalo | https://blogs.sap.com/2011/12/26/michals-pi-tips-composite-web-services-on-pi/ | CC-MAIN-2017-39 | refinedweb | 908 | 63.22 |
This is the small tale of a turtle living on the bleeding edge. I'm not the first guy to talk about Protocols in Clojure, and I certainly won't be the last either; here we go...
Everybody who's over the age of 25 has probably seen 1 or 2 turtle graphics implementations in his day - this was what most of us grew up with before penumbra came along. In this small post I'll show one way of doing some simple turtle graphics, with a surprising result.
So to get the ball rolling I'll show you how we would usually make the little guy:
(def turtle-map {:x 50 :y 50 :dir 0})
That's an ordinary hash-map, nothing fancy about it. The turtle carries around a position on the board as well as a direction (degrees). To work the map we would then make up a few helper functions like move/turn/run etc. But to make sure I got it just right in the first take, I consulted with Chris Houser, co-author of this book, and asked him what he had to say about a protocol-wielding turtle.
In the olden days we used multimethods for any kind of dispatching, and those truly were the good ol' days. But because of multimethods' lack of speed, Rich has recently implemented protocols. They allow us to program to abstractions which provide a type of polymorphism, but this time with a higher potential for speed, thus paving the way for clojure-in-clojure. To get a good grip on the motivation behind protocols and the implementation, I highly recommend that you view this video. It's 27 minutes long and Stuart Halloway does a fantastic job of explaining the basics!
For the turtle, life is simpler: First we define the Turtle itself:
user> (defrecord Turtle [x y dir canvas panel])
user.Turtle
user> (Turtle. 5 5 0 nil nil)
#:user.Turtle{:x 5, :y 5, :dir 0, :canvas nil, :panel nil}
As you can see, defrecord makes a class in your current namespace called Turtle. I can construct an instance of this Turtle in the exact same manner as I would a regular Java class. In case you're wondering about the extra 2 params, that's just for drawing the trail and updating the window. The second order of business is to define a protocol which acts as an interface to this Turtle, exposing methods and fields but not handling any implementation code:
(defprotocol PTurtle
  (move [this dist])
  (turn [this deg]))
That simply defines an interface which provides 2 methods, both taking a mandatory first argument 'this' and then whatever we like. The wonderful thing about protocols is that I can now extend that protocol to handle any type of data I want which uses those 2 method names. If I wanted to challenge myself, I'd implement something like the Mars Rover, which moves and then signals back home where it's at, etc., but since I already titled this blogpost ProtoTurtle, let's stick with that:
(extend-protocol PTurtle
  Turtle
  (move [{:keys [x y dir canvas panel]} dist]
    (let [dx (+ x (* dist (Math/cos (Math/toRadians dir))))
          dy (+ y (* dist (Math/sin (Math/toRadians dir))))]
      (.drawLine canvas x y dx dy)
      (.repaint panel)
      (Turtle. dx dy dir canvas panel)))
  (turn [{:keys [x y dir canvas panel]} deg]
    (Turtle. x y (+ dir deg) canvas panel)))
I'm extending the protocol PTurtle to handle the case where it gets passed a Turtle, and I'm providing the implementation of move and turn. Neither of them should hold any surprises if you've been following this blog for a while. What happened to the mandatory 'this' first argument, you ask? It's still there, but it's destructured in place into its keys - this isn't as fast as (:key map), but it's more concise and you get the point anyway.
So now that we have our shiny new turtle, lets see how it works:
user> (def turtle (Turtle. 5 5 90 nil nil))
#'user/turtle
user> (move turtle 2)
#:user.Turtle{:x 5.0, :y 7.0, :dir 90, :canvas nil, :panel nil}
So just as you would expect: I start the turtle off at a 90 degree angle (i.e. looking upwards) and then I move it 2 units in its current direction, changing y from 5 to 7. But where we differ substantially from the OO mindset is in the fact that good ol' Turtle is still the same:
user> turtle
#:user.Turtle{:x 5, :y 5, :dir 90, :canvas nil, :panel nil}
He's still at (5,5) looking upwards, because he's immutable and always returns new instances of himself. That means if we want to do some fancy dance moves, it'll look something like:
(-> turtle
    (move 30) (turn -90)
    (move 10) (turn -90)
    (move 20) (turn 90)
    (move 10) (turn 90)
    (move 20) (turn -90))
But like any other Lisper I'm not too fond of writing move/turn constantly, so we need a way of abstracting that away. It would be tempting to write a macro which you just feed pairs of [steps angles] and which then expands into the code you see above:
(defmacro turtle-motion [turtle reps & motions]
  `(-> ~turtle
       ~@(interleave (for [step (map first motions)] `(move ~step))
                     (for [angle (map last motions)] `(turn ~angle)))))

user> (macroexpand-1 '(turtle-motion "turtle" 2 [5 6] [7 8] [9 0]))
(-> "turtle" (move 5) (turn 6) (move 7) (turn 8) (move 9) (turn 0))
But while that works, it's not well suited for iterative patterns where you want to keep working on the latest coordinate, so a way to accumulate the coordinates could be:
(defmacro turtle-motion [turtle reps & motions]
  (reduce (fn [turtle _]
            `(-> ~turtle
                 ~@(interleave (for [step (map first motions)] `(move ~step))
                               (for [angle (map last motions)] `(turn ~angle)))))
          turtle
          (range reps)))
But while that works, we must remember that the first rule of macros is: don't write macros. So after chatting a little with my good friend Christophe Grand, he proposed an iterative version, which I mangled into:
(defn turtle-motion [turtle reps motions]
  (-> (iterate #(reduce (fn [turtle [step angle]]
                          (-> turtle (move step) (turn angle)))
                        % motions)
               turtle)
      (nth reps)))
As always, Christophe was cheering for a full-blown Turtle DSL with monads and all the trimmings, but if he wants that he will have to write it himself :) The above works fine, creating an infinite stream of patterns and then returning the reps-th item of that stream.
So now that our ProtoTurtle is all fired up and ready to go, try some simple patterns:
(turtle-motion turtle 1 (for [i (range 120)] [i 45]))
Which then becomes something like the Dharma logo
Or try something like:
(turtle-motion turtle 1 (for [i (range 1 w 5)] [(- w i) -90]))
Which then just becomes a little bit weird:
Oh, and then there are the iterative patterns. Here's a formula I saw somewhere on the interweb, [[90 -a] [30 -a] [60 a] [30 a] [60 -a]], which when you run all the angles (a) from 1 to 180 looks like this:
Now please don't think that I'm in any way endorsing Nazism or anything remotely similar; I just found it interesting to see 2 well-known symbols emerge from that simple formula.
So protocols have landed, and they seem extremely cool while filling a much-needed gap in Clojure-land. If you can think of a fun use case for extending the above Turtle, please send it my way :)
If you're exploring Clojure for fun, I hope you'll poke around some more on my blog; there should be something for newcomers as well as experts. On the other hand, if you're a professional looking to improve your game, you should really check out Conj Labs.
Code is here: link (clone it, run 'lein deps' to get bleeding edge jars) | http://www.bestinclass.dk/index.clj/2010/04/prototurtle-the-tale-of-the-bleeding-turtle.html | CC-MAIN-2014-10 | refinedweb | 1,318 | 59.37 |
Calculate MD5 checksum for a file
Question.
Accepted Answer
It's very simple using System.Security.Cryptography.MD5:
using (var md5 = MD5.Create())
{
    using (var stream = File.OpenRead(filename))
    {
        return md5.ComputeHash(stream);
    }
}
(I believe that actually the MD5 implementation used doesn't need to be disposed, but I'd probably still do so anyway.)
How you compare the results afterwards is up to you; you can convert the byte array to base64 for example, or compare the bytes directly. (Just be aware that arrays don't override Equals. Using base64 is simpler to get right, but slightly less efficient if you're really only interested in comparing the hashes.)
If you need to represent the hash as a string, you could convert it to hex using BitConverter:
static string CalculateMD5(string filename)
{
    using (var md5 = MD5.Create())
    {
        using (var stream = File.OpenRead(filename))
        {
            var hash = md5.ComputeHash(stream);
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }
}
This is how I do it:
using System.IO;
using System.Security.Cryptography;
using System.Text;

public string checkMD5(string filename)
{
    using (var md5 = MD5.Create())
    {
        using (var stream = File.OpenRead(filename))
        {
            // Note: decoding raw hash bytes as text is lossy; prefer hex or Base64
            return Encoding.Default.GetString(md5.ComputeHash(stream));
        }
    }
}
I know this question was already answered, but this is what I use:
using (FileStream fStream = File.OpenRead(filename))
{
    return GetHash<MD5>(fStream);
}
Where GetHash:
public static String GetHash<T>(Stream stream) where T : HashAlgorithm
{
    StringBuilder sb = new StringBuilder();
    MethodInfo create = typeof(T).GetMethod("Create", new Type[] {});
    using (T crypt = (T) create.Invoke(null, null))
    {
        byte[] hashBytes = crypt.ComputeHash(stream);
        foreach (byte bt in hashBytes)
        {
            sb.Append(bt.ToString("x2"));
        }
    }
    return sb.ToString();
}
Probably not the best way, but it can be handy.
I know that I am late to the party, but I performed a test before actually implementing the solution.

I tested the inbuilt MD5 class against md5sum.exe. In my case the inbuilt class took 13 seconds whereas md5sum.exe took around 16-18 seconds in every run.
DateTime current = DateTime.Now;
string file = @"C:\text.iso"; // It's a 2.5 GB file
string output;
using (var md5 = MD5.Create())
{
    using (var stream = File.OpenRead(file))
    {
        byte[] checksum = md5.ComputeHash(stream);
        output = BitConverter.ToString(checksum).Replace("-", String.Empty).ToLower();
        Console.WriteLine("Total seconds : " + (DateTime.Now - current).TotalSeconds.ToString() + " " + output);
    }
}
Here is a slightly simpler version that I found. It reads the entire file in one go and only requires a single using directive.
byte[] ComputeHash(string filePath)
{
    using (var md5 = MD5.Create())
    {
        return md5.ComputeHash(File.ReadAllBytes(filePath));
    }
}
And if you need to calculate the MD5 to see whether it matches the MD5 of an Azure blob, then this SO question and answer might be helpful: MD5 hash of blob uploaded on Azure doesnt match with same file on local machine | https://ask4knowledgebase.com/questions/10520048/calculate-md5-checksum-for-a-file | CC-MAIN-2021-10 | refinedweb | 458 | 53.88 |
Hi Guys,
I have written a small example where I have a datagrid populated with random numbers; if a number is positive the text colour is green, otherwise red (based on using a label as a custom renderer). This is all fine up to the point where I want to change the background of the label to green if the value is incremented and red if the value is decremented (using the Flex timer). I've tried to create a variable called "oldValue" on the custom label class to store the original value so that I can use it as part of the condition that determines the background colour. But as itemrenderers are reusable, it seems that the old value does not refer to the same instance it was created with. Does anyone know a solution for this?
Thanks
Do you have a model of some kind? If you don't, I would have something like this:
[Bindable]
public class StockQuote extends EventDispatcher implements IUID
....
public var currentPrice:Number;
pubilc var previousPrice:Number;
...
In your itemrenderer I would do something like this.
override public function set data( value:Object ):void
{
    super.data = value;
    if( value != null )
    {
        var stockQuote:StockQuote = value as StockQuote;
        if( stockQuote.currentPrice > stockQuote.previousPrice )
        {
            //run function to make background green
        }
        else if( stockQuote.currentPrice < stockQuote.previousPrice )
        {
            //run function to make background red
        }
    }
}
Don't put a custom property on the label, because whether or not the price has changed belongs in the model. That way, it doesn't matter how the prices and renderers are used/reused; the data used to calculate what color it should be stays in sync with itself. You may have to add a listener to the stock quote and do some other stuff to get it to work correctly on price changes.
Thank you for your answer Ubuntu. Where would be the correct place to set the previous price? I'm trying to set it in the creationComplete method for the itemRenderer, but it still seems to mismatch values from other renderers. For example, if I have 2 cells in one column, let's say 0 and 7, and I increment both by one and then Alert out the previous and current values of stockQuote, I get the following.
I'm guessing this is because of the recycling of the itemrenderers?
etcd/clientv3 is the official Go etcd client for v3.
go get github.com/coreos/etcd/clientv3
Create a client using clientv3.New:
cli, err := clientv3.New(clientv3.Config{
    Endpoints:   []string{"localhost:2379", "localhost:22379", "localhost:32379"},
    DialTimeout: 5 * time.Second,
})
if err != nil {
    // handle error!
}
defer cli.Close()
etcd v3 uses gRPC for remote procedure calls, and clientv3 uses grpc-go to connect to etcd. Make sure to close the client after using it. If the client is not closed, the connection will have leaky goroutines. To specify a client request timeout, pass context.WithTimeout to APIs:
ctx, cancel := context.WithTimeout(context.Background(), timeout)
resp, err := cli.Put(ctx, "sample_key", "sample_value")
cancel()
if err != nil {
    // handle error!
}
// use the response
etcd uses the cmd/vendor directory to store external dependencies, which are compiled into etcd release binaries. The client can be imported without vendoring. For full compatibility, it is recommended to vendor builds using etcd's vendored packages, using tools like godep, as in vendor directories. For more detail, please read the Go vendor design.
The etcd client returns 2 types of errors: context errors (canceled or deadline exceeded) and gRPC errors (see rpctypes for the server-side error codes).
Here is the example code to handle client errors:
resp, err := cli.Put(ctx, "", "")
if err != nil {
    switch err {
    case context.Canceled:
        log.Fatalf("ctx is canceled by another routine: %v", err)
    case context.DeadlineExceeded:
        log.Fatalf("ctx is attached with a deadline is exceeded: %v", err)
    case rpctypes.ErrEmptyKey:
        log.Fatalf("client-side error: %v", err)
    default:
        log.Fatalf("bad cluster endpoints, which are not etcd servers: %v", err)
    }
}
The etcd client optionally exposes RPC metrics through go-grpc-prometheus. See the examples.
The namespace package provides
clientv3 interface wrappers to transparently isolate client requests to a user-defined prefix.
More code examples can be found at GoDoc. | https://apache.googlesource.com/cloudstack-kubernetes-provider/+/b13b4a31891ea31a105db83bf019224b9407aa9e/vendor/github.com/coreos/etcd/clientv3 | CC-MAIN-2020-50 | refinedweb | 289 | 54.59 |
Tracking Upgrade Default Tcl/Tk to 8.6
Build failures with Tcl/Tk packages from experimental
The packages listed here are those which build depend on Tcl/Tk and FTBFS when using Tcl/Tk packages from experimental. The possible reasons for FTBFS may be:
- Bumping tcltk-defaults to 8.6
- Multiarchifying Tcl/Tk
- Dropping alternatives for /usr/bin/tclsh and /usr/bin/wish
- Reasons not related to the changes in Tcl/Tk
Here follows the list with build logs.
aolserver4 log (doesn't survive -fvisibility=hidden flag taken from the Tcl build flags, uses deprecated interp->errorLine field, uses unqualified calls to [namespace] inside ::oo ns, see 724879)
blt log (manually processes tcl8.5 and tk8.5 shlibs, which are gone in favor of symbols, see 724882)
bookview log (build-depends on tk8.4 and searches for wish, see 724975)
db log (unrelated to changes in Tcl/Tk)
db5.3 log (unrelated to changes in Tcl/Tk)
db6.0 log (unrelated to changes in Tcl/Tk)
dns-browse log (build-depends on tk8.5 and searches for wish, see 724979)
eggdrop log (custom configure script can't find libtcl8.5.so in multiarch location, see 724986)
elmerfem log (inconclusive, looks unrelated to changes in Tcl/Tk)
fossil log (build-depends on tcl8.5 and uses /usr/bin/tclsh to run tests, see 724987)
ftools-fv log (build depends on tcl8.5 and calls tclsh, uses tclPort.h, see 725939)
git log (build-depends on tcl8.5 and searches for tclsh, see 725961)
ibutils log (custom configure script can't find libtcl.so in multiarch location, see 724998)
isdnutils log (uses deprecated interp->errorLine and interp->result, see 725000)
llvm-toolchain-3.2 log (can't find tclsh, see 725952)
llvm-toolchain-3.3 log (can't find tclsh, see 725953)
llvm-toolchain-snapshot log (can't find tclsh, see 725954)
modules log (needs /usr/bin/tclsh, see 725010)
mozart log (doesn't work on amd64 at all)
nam log (uses deprecated interp->result, see 725015)
namazu2 log (build-depends on tk8.4 and searches for wish, see 725027)
netexpect log (build depends on tcl-dev but passes /usr/lib/tcl8.5 to configure, see 725072)
ns2 log (custom configure can't find Tcl in multiarch location, use of deprecated interp->result and (char *) return type for Tcl_?GetHashKey, see 725079)
openmsx log (unrelated to changes in Tcl/Tk)
otcl log (uses interp->result and interp->errorLine, has to add -I.../tcl-private/unix for tclUnixPort.h, see 725086)
plplot log (inconclusive, can't satisfy build dependencies)
radiance log (build depends on tk8.4 and needs wish, see 725088)
ruby1.8 log (configure can't find multiarchified Tcl/Tk, see 725096) removed from jessie
ruby1.9.1 log (configure can't find multiarchified Tcl/Tk, see 725097)
saods9 log (uses interp->result, also uses ?TclGetLong and ?TclSetStartupScriptFileName, porting to 8.6 is non-trivial, see 726758)
scid log (custom configure can't find multiarchified libtcl8.5.so and libtk8.5.so, see 725084)
scsitools log (build depends on tk8.4 and needs wish, see 725080)
sqlite log (inconclusive, looks unrelated to changes in Tcl/Tk)
taglog log (build depends on tcl8.5 and uses tclsh, see 725070)
tclcl log (uses deprecated interp->result, see 725016)
tcl-signal log (bug in tcl8.6-dev dependencies, fixed in tcl8.6 8.6.1-3)
tclvfs log (incompatible change in tcl8.6 internals, tclvfs adapted in 1.3-20080503-4)
timidity log (build depends on tcl8.4 and uses tclsh, also uses interp->result, see 725040)
ttt log (caused by BLT dropping tcl8.4, see 724135)
volview log (searches for /usr/lib/libtcl8.5.so, which isn't compatible with multiarch, see 724875)
vtk log (searches for /usr/lib/libtcl8.5.so, which isn't compatible with multiarch, see 724831)
xapian-bindings log (/usr/bin/tclsh-default is no longer provided, tclStubsPtr was excluded from libtcl8.6.so and tkStubsPtr from libtk8.6.so, see 724830)
xcircuit log (xcircuit uses custom way to find libtcl8.5.so, incompatible with multiarch, see 724826)
xotcl log (tclStubsPtr was excluded from libtcl8.6.so and tkStubsPtr from libtk8.6.so, see 724816) | https://wiki.debian.org/Teams/DebianTclTk/UpgradeDefaultTclTkTo86?action=diff | CC-MAIN-2015-48 | refinedweb | 695 | 59.09 |
A whole mini-series on Symfony's Dependency Injection Container? Yes! Do you want to really understand how Symfony works - and also Drupal 8? Then you're in the right place.
That's because Symfony is 2 parts. The first is the request/routing/controller/response and event listener flow we talked about in the first Symfony Journey part. The second half is all about the container. Understand it, and you'll unlock everything.
Symfony normally gives you the built container. Instead of that, let's do a DIY project and create it by hand. Actually, let's get out of Symfony entirely. Inside the directory of our project, create a new folder called dino_container. We're going to create a PHP file in here where we can mess around - how about roar.php.
This file is all alone - it has nothing to do with the framework or our project at all. We're flying solo.
I'll add a namespace Dino\Play - but only because it makes PhpStorm auto-complete my use statements nicely.
Let's require the project's autoloader - so go up one directory, then get vendor/autoload.php:
Great, now we can access Symfony's DependencyInjection classes and a few other libraries we'll use, like Monolog.
In fact, forget about containers and Symfony and all that weird stuff. Let's just use the Monolog library to log some stuff. That's simple, just $logger = new Logger(). The first argument is the channel - it's like a category - you'll see that word main in the logs. Now log something: $logger->info(), then ROOOAR:
Ok, let's see if we can get this script to yell back at us. Run it with:
php dino_container/roar.php
Fantastic! If you don't do anything, Monolog spits your messages out into stderr.
To pretend like this little file is an application, I'll create a runApp() function that does the yelling. Pass it a $logger argument and move our info() call inside:
I'm just doing this to separate my setup code - the part where I configure objects like the Logger - from my real application, which in this case, roars at us. It still works like before.
Now, on to the container. First, the basics:
A service is just a fancy name a computer science major made up to describe a useful object. A logger is a useful object, so it's a service. A mailer object, a database connection object and an object that talks to your coffee maker's API: all useful objects, all services.
A container is an object, but it's really just an associative array that holds all your service objects. You ask it for a service by some nickname, and it gives you back that object. And it has some other super-powers that we'll see later.
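That "associative array with super-powers" idea can be sketched in a few lines. (Python here purely for illustration; Symfony's real ContainerBuilder does far more, like lazy instantiation and configuration.)

```python
class Container:
    """A minimal service container: an associative array of named services."""

    def __init__(self):
        self._services = {}

    def set(self, service_id, service):
        # Put a useful object into the container under a nickname.
        self._services[service_id] = service

    def get(self, service_id):
        # Fetch it back later; fail loudly for unknown ids.
        if service_id not in self._services:
            raise KeyError(f"Unknown service: {service_id}")
        return self._services[service_id]


container = Container()
container.set("logger", print)      # any object can be a "service"
container.get("logger")("ROOOAR")   # prints ROOOAR
```

The point is just set/get symmetry: whatever goes in under a key comes back out under that key, no magic.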
Got it? Great, create a $container variable and set it to a new ContainerBuilder object.
Hello Mr Container! Later, we'll see why Mr Container is called a builder.
Working with it is simple: use set to put a service into it, and get to fetch that back later. Call set and pass it the string logger. That's the key for the service - it's like a nickname, and we could use anything we want.
TIP The standard is to use lowercase characters, numbers, underscores and periods. Some other characters are illegal and while service ids are case insensitive, using lower-cased characters is faster. Want details? See github.com/knpuniversity/symfony-journey/issues/5.
Then pass the $logger object:
Now, pass $container to runApp instead of the logger and update its argument. To fetch the logger from the container, I'll say $container->get() then the key - logger:
The logger service goes into the container with set, and it comes back out with get. No magic.
Test it out:
php dino_container/roar.php
Yep, still roaring.
A real project will have a lot of services - maybe hundreds. Let's add a second one. When you log something, monolog passes that to handlers, and they actually do the work, like adding it to a log file or a database.
Create a new StreamHandler object - we can use it to save things to a file. We'll stream logs into a dino.log file:
Next, pass an array as the second argument to our Logger with this inside:
Cool, so try it out. Oh, no more message! It's OK. As soon as you pass at least one handler, Monolog uses that instead of dumping things out to the terminal. But we do now have a dino.log.
With things working, let's also put the stream handler into the container. So, $container->set() - and here we can make up any name, so how about logger.stream_handler. Then pass it the $streamHandler variable:
Down in the $logger, just fetch it out with $container->get('logger.stream_handler'):
PhpStorm is highlighting that line - don't let it boss you around. It gets a little confused when I create a Container from scratch inside a Symfony project.
Try it out:
php dino_container/roar.php
tail dino_container/dino.log
Good, no errors, and when we tail the log, 2 messages - awesome!
Up to now, the container isn't much more than a simple associative array. We put 2 things in, we get 2 things out. But we're not really exercising the true power of the container, yet. | https://symfonycasts.com/screencast/symfony-journey-di/container-in-the-wild | CC-MAIN-2020-29 | refinedweb | 936 | 75.3 |
WinJS contains several useful classes which are unfortunately hidden (their names start with '_'). In this post we will look at one of them, WinJS._Signal.
A common usage of this class is to manage (complete, cancel, or fail) a promise. The promise itself is a simple concept, but sometimes it's awkward to complete it.
Let’s see the example how it’s possible to create and manage a promise:
As you can see, a promise can be created when it's necessary:
- to wrap an asynchronous operation and complete/error on behalf of it, as shown in Example #1
- to use a promise as a synchronization construct (e.g. a promise completed based on some event), as Example #2 shows.
Usage #2 is very awkward. That's the reason the WinJS._Signal class exists.
Synchronization problem
It’s quite common to synchronize two code paths and one of it depends on an event. It could be possible to write the code into the event handler and do the stuff there. But the problem is that the logic is split all over the code base. Wouldn’t it be better to keep the code on one place?
Let’s see how it’s possible to do it with WinJS._Signal:
We could enhance the previous sample and create a similar signal/event pair for when the internet connection is established, join the "loaded" and "connection" promises, and then call xhr(uri). The main advantage is that you have a single construct (_Signal) through which it's possible to complete/cancel/fail the promise.
I hope a future WinJS release will expose it as a public class.
TCSS 342, Winter 2005 Lecture Notes
Stacks and Queues (Weiss Ch. 6; Weiss Ch. 16)
Review: List method runtimes

Operation                  Array list   Linked list
add to start of list       O(n)         O(1)
add to end of list         O(1)         O(1)
add at given index         O(n)         O(n)
clear                      O(1)         O(1)
get                        O(1)         O(n)
find index of an object    O(n)         O(n)
remove first element       O(n)         O(1)
remove last element        O(1)         O(1)
remove at given index      O(n)         O(n)
set                        O(1)         O(n)
size                       O(1)         O(1)
toString                   O(n)         O(n)
What operations should we use?

neither list is fast for adding or removing at arbitrary indexes
linked list can add/remove from either end quickly
linked list is bad at getting / setting element values at arbitrary indexes
neither list is fast for searching (indexOf, contains)
How do we use lists?

in many cases, we want to use a list, but we only want a limited subset of its operations
example: Use a list to store a waiting line of customers for a bookstore. As each customer arrives, place him/her at the end of the line. Serve customers in the order that they arrived.
Which list methods do we need here, and which do we not need?
Common idiom: "FIFO"

many times, we will use a list in a way where we always add to the end, and always remove from the front (like the previous example)
the first element put into the list will be the first element we take out of the list
First-In, First-Out ("FIFO")
therefore, let's create a new type of collection which is a limited version of List, tailored to this type of usage: a Queue
Abstract data type: Queue

queue: a more restricted List with the following constraints:
elements are stored in order of insertion, from front to back
items can only be added to the back of the queue
only the front element can be accessed or removed
goal: every operation on a queue should be O(1)
Operations on a queue

enqueue: add an element to the back
dequeue: remove and return the element at the front
peek: return (but not remove) the front element
dequeue or peek on an empty queue causes an exception
other operations: isEmpty, size
Queue features

ORDERING: maintains the order elements were added (new elements are added to the end by default)
DUPLICATES: yes (allowed)
OPERATIONS: add element to end of list (enqueue), remove element from beginning of list (dequeue), examine element at beginning of list (peek), clear all elements, is empty, get size; all of these operations are efficient! O(1)
Our Queue interface

public interface Queue {
    public void enqueue(Object o);
    public Object dequeue();
    public Object peek();
    public boolean isEmpty();
    public int size();
}

Java has no actual Queue interface or class (until v1.5), so we must write our own or simulate it using a normal list; we'll assume an instructor-provided LinkedQueue.
Queue programming example

double the contents of a Linked List (named list), using a LinkedQueue as an auxiliary data structure, e.g. ["hi", "abc", "bye"] --> ["hi", "hi", "abc", "abc", "bye", "bye"], in O(n)

Queue q = new LinkedQueue();
while (!list.isEmpty())
    q.enqueue(list.removeFirst());
while (!q.isEmpty()) {
    Object element = q.dequeue();
    list.add(element);
    list.add(element);
}
Queue programming example

double the contents of a Queue (named queue), using no auxiliary data structures, e.g. ["hi", "abc", "bye"] --> ["hi", "hi", "abc", "abc", "bye", "bye"], in O(n)

int size = queue.size();
for (int i = 0; i < size; i++) {
    Object element = queue.dequeue();
    queue.enqueue(element);
    queue.enqueue(element);
}
More queue programming

The Sieve of Eratosthenes is an algorithm for finding prime numbers up to some max n.
store all numbers in [2, n] in a queue
numbers: [2, 3, 4, ..., 23, 24, 25]
now process the queue, removing the first element each time (it will be prime) and eliminating all the remaining numbers that it divides evenly
numbers: [3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25]   primes: [2]
numbers: [5, 7, 11, 13, 17, 19, 23, 25]                 primes: [2, 3]
More queue programming

numbers: [7, 11, 13, 17, 19, 23]   primes: [2, 3, 5]
numbers: [11, 13, 17, 19, 23]      primes: [2, 3, 5, 7]
... (when can the algorithm stop?)
primes: [2, 3, 5, 7, 11, 13, 17, 19, 23]

public static void sieve(int max) {
    // let's write it ...
}
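One way to fill in that sieve stub, using a queue exactly as the slides describe (Python with collections.deque for brevity; the Java version with a LinkedQueue is analogous):

```python
from collections import deque

def sieve(max_n):
    """Queue-based Sieve of Eratosthenes: returns the primes up to max_n."""
    numbers = deque(range(2, max_n + 1))  # [2, 3, ..., max_n]
    primes = []
    while numbers:
        p = numbers.popleft()             # front of the queue is always prime
        primes.append(p)
        # keep only the remaining numbers that p does not divide evenly
        numbers = deque(n for n in numbers if n % p != 0)
    return primes

print(sieve(25))  # [2, 3, 5, 7, 11, 13, 17, 19, 23]
```

As the slide's question hints, a smarter version can stop sieving once the front of the queue exceeds the square root of n and dump the rest straight into the primes list.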
Queue implementations: array

array queue: a queue implemented using an array or array list; a queue of size n occupies slots 0 to n-1 in the array

for (int i = 0; i < 80; i += 10)
    q.enqueue(new Integer(i));
[ 0][10][20][30][40][50][60][70][  ][  ]
q.dequeue();
[10][20][30][40][50][60][70][  ][  ][  ]

problem: expensive dequeue (must slide) = O(?)
More queue implementations

circular array queue: front element may not be in slot 0; elements can wrap around
+ cheaper dequeue (no sliding) = O(?)
disadvantages: harder to implement; resizing? how many elements can a circular array hold?

for (int i = 0; i <= 40; i += 10)
    q.enqueue(new Integer(i));
[ 0][10][20][30][40][  ]    front = 0, back = 4
for (int i = 0; i < 3; i++) { q.dequeue(); }
[  ][  ][  ][30][40][  ]    front = 3, back = 4
for (int i = 50; i <= 70; i += 10)
    q.enqueue(new Integer(i));
[60][70][  ][30][40][50]    front = 3, back = 1
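The front/back index bookkeeping above can be made concrete. A minimal fixed-capacity circular queue (a sketch in Python; resizing is left out, since the slide's question hints that it is the tricky part):

```python
class CircularQueue:
    """Fixed-capacity queue whose elements wrap around the backing array."""

    def __init__(self, capacity):
        self.data = [None] * capacity
        self.front = 0   # index of the front element
        self.size = 0    # number of stored elements

    def enqueue(self, value):
        if self.size == len(self.data):
            raise OverflowError("queue full (a real one would resize)")
        back = (self.front + self.size) % len(self.data)  # wrap around
        self.data[back] = value
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("dequeue from empty queue")
        value = self.data[self.front]
        self.front = (self.front + 1) % len(self.data)    # O(1), no sliding
        self.size -= 1
        return value
```

Both operations are O(1) because nothing ever slides; only the two indexes move, modulo the capacity.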
More queue implementations
linked queue: a queue that uses linked nodes or a linked list to hold its elements one of enqueue/dequeue will be expensive unless we have a myFront and myBack reference, a circularly linked list, etc.
Another idiom: "LIFO"

there are also many times where it is useful to use a list in a way where we always add to the end, and also always remove from the end
example: Write code to match brackets in a code file
the last element put into the list will be the first element we take out of the list
Last-In, First-Out ("LIFO")
therefore, let's create another new type of collection which is a limited version of List, tailored to this type of usage: a Stack
Abstract data type: Stack

stack: a more restricted List with the following constraints:
elements are stored in order of insertion, from "bottom" to "top"
items are added to the top
only the last element added onto the stack (the top element) can be accessed or removed
goal: every operation on a stack should be O(1)
stacks are straightforward to implement in several different ways, and are very useful
Operations on a stack

push: add an element to the top
pop: remove and return the element at the top
peek: return (but not remove) the top element
pop or peek on an empty stack causes an exception
other operations: isEmpty, size
example: push(a), push(b), pop()
Stack features

ORDERING: maintains the order elements were added (new elements are added to the end by default)
DUPLICATES: yes (allowed)
OPERATIONS: add element to end of list (push), remove element from end of list (pop), examine element at end of list (peek), clear all elements, is empty, get size; all of these operations are efficient! O(1)
Stacks in computer science

the lowly stack is one of the most important data structures in all of computer science
function/method calls are placed onto a stack
compilers use stacks to evaluate expressions
stacks are great for reversing things, matching up related pairs of things, and backtracking algorithms
stack programming problems:
reverse letters in a string, reverse words in a line, or reverse a list of numbers
find out whether a string is a palindrome
examine a file to see if its braces { } and other operators match
convert infix expressions to postfix or prefix
inbox and outbox at work are stacks
connect-four columns are stacks of chips
Stacks in computer science

calculators: postfix or reverse Polish notation
method calls: each call pushes a stack frame (activation record) holding the function's arguments, local variables, and return address; returning pops it — e.g. func1() calls func2() and func3(), and func2() in turn calls func4()
Java's Stack class

A possible Stack interface:

public interface Stack {
    public void push(Object o);
    public Object pop();
    public Object peek();
    public boolean isEmpty();
    public int size();
}

Java does have a java.util.Stack class with the above methods, so we can use it. Java's Stack extends Vector (which is basically the same as an ArrayList) -- is this good or bad? Why?
Stack programming example

mirror the contents of a queue (named queue), e.g. ["hi", "abc", "bye"] --> ["hi", "abc", "bye", "bye", "abc", "hi"], in O(n)

Stack s = new Stack();
int queueSize = queue.size();
for (int i = 0; i < queueSize; i++) {
    Object element = queue.dequeue();
    s.push(element);
    queue.enqueue(element);
}
while (!s.isEmpty()) {
    Object element = s.pop();
    queue.enqueue(element);
}
More stack programming
Write a method bracketsMatched that takes a String and returns true if the { }, [ ], and ( ) match up in nesting and in number. Write a method equalElements that takes as parameters two stacks and that returns true if the two stacks store the same elements in the same order. Your method will examine the two stacks but should not destroy them; it must return them to their original state before returning. Use one stack as auxiliary storage to solve this problem.
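A sketch of the first exercise, bracketsMatched, in Python: the stack holds unmatched openers, and each closer must match the most recently pushed one.

```python
def brackets_matched(text):
    """True if the (), [], {} in text are balanced and properly nested."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in text:
        if ch in '([{':
            stack.append(ch)           # remember the opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False           # wrong closer, or closer with no opener
    return not stack                   # leftover openers mean a mismatch
```

The Java version is the same algorithm with java.util.Stack and a switch over the bracket characters.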
Even more stack programming
Write a method splitStack that takes a stack containing a list of integers and that splits it into negatives and nonnegatives. The numbers in the stack should be rearranged so that all the negatives appear on the bottom of the stack and all the nonnegatives appear on the top of the stack. In other words, if after this method is called you were to pop numbers off the stack, you would first get all the nonnegative numbers (at the top) and then get all the negative numbers (at the bottom). It does not matter what order the numbers appear in as long as all the negatives appear lower in the stack than all the nonnegatives. Use a single queue as auxiliary storage to solve this problem.
Stack implementations: array

array stack: a stack implemented using an array or array list; a stack of size n occupies slots 0 to n-1 in the array

for (int i = 0; i < 80; i += 10)
    s.push(new Integer(i));
[ 0][10][20][30][40][50][60][70][  ][  ]
s.pop();
[ 0][10][20][30][40][50][60][  ][  ][  ]

notice that an array stack doesn't have the efficiency problems an array queue does
Stack implementations

linked stack: a stack implemented using linked nodes or a linked list; the front element of the list is the top of the stack
push: insert at front
pop: remove and return front element
peek: return front element
clear: set myFront to null
Another collection: Deque

deque: a double-ended queue
can add and remove only from either end
useful to represent a line where an element can "cut in" at the front if needed
can be implemented with a linked list with head and tail references (for O(1) add and remove), or an array with sliding front and back indexes
we will not use deque in this course's programming
Stack / queue runtimes

Operation                        Stack   Queue
add (push, enqueue)              O(1)    O(1)
remove (pop, dequeue)            O(1)    O(1)
get particular element (peek)    O(1)    O(1)
clear                            O(1)    O(1)
size                             O(1)    O(1)
Dive into OpenShift v3
This article has originally been written for SysAdvent 2015
Over the last year there has been a lot of buzz around open-source Platform-as-a-Service (PaaS) tools. This blog will give you an overview of this topic and some deeper insight into Red Hat OpenShift 3. It will talk about the kind of PaaS tools you install on your own infrastructure - be it physical, virtual or anywhere in the cloud. It does not cover installation atop Heroku or similar services, which are hosted PaaS solutions.
But what exactly is meant by PaaS?
When we talk about PaaS, we mean a collection of services and functions which serve to orchestrate the processes involved with building software up to running it in production. All software tasks are completely automated: building, testing, deploying, running, monitoring, and more. Software will be deployed by a mechanism similar to "git push" to a remote source code repository which triggers all the automatic processes around it. While every PaaS platform behaves a bit differently, in the end they all do the same function of running applications.
Regarding running an application within a PaaS, the application should optimally be designed with some best practices in mind. A good guideline which incorporates these practices is commonly known as the The Twelve-Factor App.
Short overview of Open Source PaaS platforms
There are a lot of open-source PaaS tools in this space as of late:
- Dokku:
A simple, small PaaS running on one host. Uses Docker and Nginx as the most important building blocks. It is written in Bash and uses Buildpacks to build an application specific Docker container.
- Deis:
The big sister of Dokku. Building blocks used in Deis include CoreOS, Ceph, Docker, and a pluggable scheduler (Fleet by default, however Kubernetes will be available in the future). Buildpacks are used for creating the runtime containers. At least three servers are needed to effectively run Deis.
- Tsuru:
Similar to Deis, but says it also supports non-12-Factor apps. There is even a possibility to manage VMs, not only containers. This, too, uses Docker as building block. Other components like scheduler are coming from the Tsuru project.
- Flynn:
Also similar to the two tools above. It also uses Docker as backend, but uses many project specific helper services. At the moment, only PostgreSQL is supported as a datastore for applications.
- Apache Stratos:
This is more of a framework than just a "simple" platform. It is highly multi-tenant enabled and provides a lot of customization features. The architecture is very complex and has a lot of moving parts. Supports Docker and Kubernetes.
- Cloud Foundry:
One of the biggest players besides OpenShift in this market. It provides a platform for running applications and is widely used in the industry. Has a steep learning curve and is not easily installed, configured, or operated.
- OpenShift:
Started in 2011 as a new PaaS platform using its own project-specific technologies; it has been completely rewritten in v3 using Docker and Kubernetes as the underlying building blocks.
OpenShift
There are some compelling reasons to look at OpenShift:
- Usage of existing technology
Kubernetes and Docker are supported by big communities and are good choices to serve as the central components upon which OpenShift is built. OpenShift "just" adds the functionality to make the whole platform production-ready. I also like Red Hat's approach of completely refactoring OpenShift v2 into v3, taking into account what they learned from older versions rather than simply improving the old code base on top of Kubernetes and Docker.
- Open development
Development happens publicly on GitHub: OpenShift Origin is "the upstream open source version of Red Hat's distributed application system, OpenShift" per Red Hat.
- Enterprise support available
Many enterprises want to or need to have support contracts available for the software which they run their business upon. This is completely possible using the OpenShift Enterprise subscription which gets you commercial support from Red Hat.
- Excellent documentation
The documentation is very well structured, allowing for rapid identification of the topic you're seeking.
- My favorite functions
There are some functions which I like the most on OpenShift:
- Application templates: Define all components and variables to run your application and then instantiate it very quickly multiple times with different parameters.
- CLI: The CLI tool "oc" is very well structured and you quickly get the hang of working with it. It also has very good help instructions integrated, including some good examples.
- Scaling: Scaling an application just takes one command to start new instances and automatically add them to the load balancer.
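Application templates boil down to parameter substitution: one definition, instantiated many times with different values. A toy illustration of the mechanism in Python (this is conceptual, not OpenShift's actual implementation; the parameter names mirror the Django template used later in this article):

```python
from string import Template

# A stripped-down "template": values may contain ${PARAM} placeholders.
app_template = {
    "name": "${APP_NAME}",
    "source": "${SOURCE_REPOSITORY_URL}",
}

def instantiate(template, params):
    """Return a copy of the template with every placeholder filled in."""
    return {key: Template(value).substitute(params)
            for key, value in template.items()}

shop = instantiate(app_template, {
    "APP_NAME": "shop",
    "SOURCE_REPOSITORY_URL": "https://example.com/shop.git",
})
```

Running instantiate twice with different parameter sets gives two independent application definitions from the same template, which is exactly the workflow the web console exposes.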
So why not choose Cloud Foundry? At the end of the day, everyone has their favourite tool for their own reasons. I personally found the learning curve for Cloud Foundry too steep: I didn't manage to get an installation up and running successfully, and I also had a lot of trouble understanding the pieces around BOSH, a Cloud Foundry-specific configuration management implementation.
Quick Insight into OpenShift 3 - What does it look like?
OpenShift consists of these main components:
- Master
The API and Kubernetes Master / Scheduler
- Nodes
Runs pods, including the workload
- Registry
Hosts docker images
- Services / Router
Takes client requests and routes them to backend application containers. In a default configuration it's an HAProxy load balancer, automatically managed by OpenShift.
For a deeper insight, consult the OpenShift Architecture documentation.
OpenShift's core is Kubernetes, with additional functionality for application building and deployment made available to users and operators. So it's very important to understand the concepts and the architecture of Kubernetes; consult the official Kubernetes documentation to learn more.
Communication between clients and the OpenShift control plane happens over REST APIs. The oc application, available via the OpenShift command-line client, gives access to all the frequently-used actions like deploying applications, creating projects, viewing statuses, etc.
Every API call must be authenticated. This authentication is also used to check if you're authorized to execute the action. This authentication component allows for OpenShift to support multi-tenancy. Every OpenShift project has it's own access rules. Projects are separate from each other. On the network side, they can be strictly isolated from each other via the ovs-multitenant network plugin. This means many users can share a single OpenShift platform without interfering each other.
Administrative tasks are done using oadm within the OpenShift command-line client. Example tasks involve operations such as deploying a router or a registry.
There is a helpful web interface which communicates to the master via the API and provides a graphical visualization of the cluster's state.
Most of the tasks from the CLI can also be accomplished via the GUI.
To better understand OpenShift and its core-component Kubernetes, it's important to understand some key terms:
- Pod
In Kubernetes, all containers run inside pods. A pod "can host a single container, or multiple cooperating containers". Roughly translated, this means that the containers share the same IP address and the same Docker volumes, and will always run on the same host.
- Service
A pod can offer services which can be consumed by other pods. Services are addressed by their name. For example, a pod may provide an HTTP service on port 80 under the name "backend"; another pod can then access this HTTP service simply by addressing the name "backend". Services can be exposed externally from OpenShift using an OpenShift router.
- Replication Controller
A replication controller takes care of starting up a pod and keeping it running in the event of a node failure or any other disruptive event which could take the pod down. It is also responsible for creating replica pods to horizontally scale the pod.
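The replication controller is essentially a reconcile loop: compare the actual number of running pods with the desired replica count and correct the difference. A conceptual sketch (illustrative Python, not Kubernetes source; the pod names are made up):

```python
def reconcile(desired, running):
    """Return the running pod list adjusted to the desired replica count."""
    running = list(running)
    while len(running) < desired:        # a pod died or we scaled up: start one
        running.append(f"pod-{len(running)}")
    while len(running) > desired:        # scaled down: stop the extras
        running.pop()
    return running

# A node failure takes one pod down; the controller restores the count.
pods = reconcile(3, ["pod-0", "pod-2"])  # back to 3 pods
```

Scaling in OpenShift just changes the desired count; the same loop then converges the cluster to it.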
Quickstart 3-node Origin Cluster on CentOS
There are a number of good instructions how to install OpenShift. This section will just give a very quick introduction to installing a 3-node (1 master, 2 nodes) OpenShift Origin cluster on CentOS 7.
If you want more details, the OpenShift documentation at Installation and Configuration is quite helpful. The whole installation process is automated using Ansible; all playbooks are provided by the OpenShift project on GitHub. You also have the option to run an OpenShift master inside a Docker container, as a single binary, or to install it from source. However, the Ansible playbook is quite helpful for getting started.
Pre-requisites for this setup
- Three CentOS 7 64-bit VMs prepared, each having 4+GB RAM, 2 vCPUs, and 2 disks attached (one for the OS, one for Docker storage). The master VM should have a third disk attached for the shared storage (exported by NFS in this example), mounted under /srv/data.
- Make sure you can access all the nodes from the master using SSH without a password. This is typically accomplished using SSH keys. The above user also needs sudo rights, typically via the NOPASSWD option in typical Ansible fashion.
- A wildcard DNS entry pointing to the IP address of the master. It is at this master where routers will run to allow for external clients to request application resources running within OpenShift.
- Read the following page carefully to make sure your VMs fit the needs of OpenShift: Installation Pre-requisites. Pay special attention to the Host Preparation section.
- Ensure you have pyOpenSSL installed on the master node as it was a missing dependency during the time of writing this article. You can install it via
yum -y install pyOpenSSL.
- Run all the following steps as your login user as opposed to root on the master node.
Setup OpenShift using Ansible
Put the following lines into
/etc/ansible/hosts. Update the values to fit your environment (hostnames):
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=myuser
ansible_sudo=true
deployment_type=origin
osm_default_subdomain=apps.example.com
osm_default_node_selector='region=primary'
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]

[masters]
originmaster.example.com

[nodes]
originmaster.example.com openshift_node_labels="{'region': 'infra', 'zone': 'dc1'}" openshift_schedulable=true
originnode1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'dc1'}" openshift_schedulable=true
originnode2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'dc1'}" openshift_schedulable=true
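As a quick sanity check before running the playbook, you can list which hosts ended up in which group. Ansible inventories are not strict INI (host lines carry key=value options after the hostname), so a tiny hand-rolled parser is sketched here; this is just a convenience, not part of the official tooling:

```python
def parse_inventory(text):
    """Map each [section] to the first token of every non-variable line."""
    sections, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('[') and line.endswith(']'):
            current = line[1:-1]
            sections.setdefault(current, [])
        elif current and '=' not in line.split()[0]:
            # host lines may carry key=value options after the hostname;
            # pure variable lines (key=value) are skipped entirely
            sections[current].append(line.split()[0])
    return sections

# e.g. parse_inventory(open('/etc/ansible/hosts').read())['nodes']
```

Note that group references under [OSEv3:children] also show up as "hosts" of that section, which is good enough for an eyeball check.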
- Run Ansible:
ansible-playbook playbooks/byo/config.yml
- Check that everything went well via the OpenShift client. Your output should show 3 nodes with STATUS ready:
oc get nodes
- Add a user to OpenShift:
sudo htpasswd /etc/origin/htpasswd myuser
- Deploy the registry (additional details at Deploy Registry):
sudo htpasswd /etc/origin/htpasswd reguser
oadm policy add-role-to-user system:registry reguser
sudo oadm registry --mount-host=/srv/data/registry --credentials=/etc/origin/master/openshift-registry.kubeconfig --service-account=registry
- Deploy the router (additional details at Deploy Router):
sudo oadm router --credentials=/etc/origin/master/openshift-router.kubeconfig --service-account=router
Add persistent storage
After these steps, one important piece is missing: Persistent storage. Running any application which stores application data will lose its data after migration to another OpenShift node after restarting, redeployment, and so on. Therefore we should add shared NFS storage. In our example, we will make this available by the master:
- Add the following line to /etc/exports:
/srv/data/pv001 *(rw,sync,root_squash)
- Export the directory:
sudo exportfs -a
- On all cluster nodes including the master, alter the SELinux policy in order to enable the usage of NFS:
sudo setsebool -P virt_use_nfs 1
- Create a file called pv001.yaml with the following content and create the object with oc create -f pv001.yaml:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "pv001"
spec:
  capacity:
    storage: "30Gi"
  accessModes:
    - "ReadWriteOnce"
  nfs:
    path: "/srv/data/pv001"
    server: "originmaster.example.com"
  persistentVolumeReclaimPolicy: "Recycle"
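If you later need a series of such volumes (pv002, pv003, ...), generating the manifests beats copy-pasting. A small sketch that emits JSON, which oc create -f accepts just like YAML (the path and server values follow this example setup):

```python
import json

def make_pv(name, size_gi, path, server):
    """Build a PersistentVolume manifest matching the pv001.yaml above."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": name},
        "spec": {
            "capacity": {"storage": f"{size_gi}Gi"},
            "accessModes": ["ReadWriteOnce"],
            "nfs": {"path": path, "server": server},
            "persistentVolumeReclaimPolicy": "Recycle",
        },
    }

manifest = json.dumps(
    make_pv("pv002", 30, "/srv/data/pv002", "originmaster.example.com"),
    indent=2)
# write manifest to pv002.json, then: oc create -f pv002.json
```

Remember that each generated volume also needs a matching NFS export on the server.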
You now have a 3-node OpenShift Origin cluster including persistent storage, ready for your applications! You can check the definition with:
oc get pv
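Applications don't mount a PersistentVolume directly; they request storage through a PersistentVolumeClaim, which OpenShift then binds to a matching volume such as pv001. A hedged sketch of such a claim (the claim name myapp-data is illustrative, not part of the original setup):

```yaml
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "myapp-data"        # illustrative name
spec:
  accessModes:
    - "ReadWriteOnce"       # must match an access mode offered by the PV
  resources:
    requests:
      storage: "30Gi"       # at most the 30Gi provided by pv001
```

Create it with oc create -f pvc.yaml and check the binding with oc get pvc.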
Please note: This is a very minimal setup. There are several things to do before running in production. For example, you could add constraints to make sure that the router and registry only run on the master and all applications on the nodes. Other production deployment considerations include storage capacity, network segmentation, security, and accounts. Many of these topics are discussed in the official OpenShift documentation.
Deploy applications
Now that the 3-node cluster is up and running, we want to deploy apps! Of course, that's the reason we created it in the first place, right? OpenShift comes with a bunch of application templates which can be used right away. Let's start with a very simple Django with PostgreSQL example.
Before you begin, fork the original GitHub project of the Django example application to your personal GitHub account so that you're able to make changes and trigger a rebuild. This lets you configure webhooks on your fork so that code updates trigger builds within OpenShift. Once done, you can proceed to create the project within OpenShift.
- Log in with the user created in step 4 above.
- Click on "New Project" and fill in all the fields. Name it myproject. After clicking on “Create” you're directed to the overview of available templates.
- Create a new app by choosing the django-psql-example template
- Insert the repository URL of your forked Django example application into the field
SOURCE_REPOSITORY_URL. All other fields can use their default value. By clicking on "Create" all required processes are started in the background automatically.
- The next page gives you some important hints. To automate the delivery of new code without human intervention, configure the displayed webhook URL in your GitHub project. Then, every time you push code to your git remote on GitHub, OpenShift gets notified and will rebuild and redeploy your app without any manual effort. Please note that you probably need to disable SSL verification, as by default a self-signed certificate is used which would fail verification.
- Watch your new app being built and deployed on the overview page. As soon as this is finished, the app is reachable at
- To get the feeling of the full automation: Change some code in the forked project and push it to GitHub. After a few seconds, reload the page and you should see your changes active.
Scaling
One of the most exciting features in OpenShift is scaling. Let's say the above Django application is an online shop and you create some advertisement which will lead into more page views. Now you want to make sure that your online shop is available and responsive during this time. Simple as that:
oc get rc
oc scale rc django-psql-example-2 --replicas=4
oc get rc
The replication controller (rc) takes care of creating more Django backends on your behalf. Other components make sure that these new backends are added to the router as load-balancing backends. The replication controller also makes sure that there are always 4 replicas running, even if a host fails, as long as there are enough hosts available on the cluster to run the workload.
Scaling down is just as easy as scaling up, just adjust the
--replicas= parameter accordingly.
Another application
Now that we've deployed and scaled a default app, we want to deploy a more customized app. Let's use Drupal 8 as an example. Drupal is a PHP application which uses MySQL by default, so we need to use the matching environment for this. Set it up via the following command:
oc new-project drupal8
oc new-app php~<git-repository-URL>#openshift \
  mysql --group=php+mysql -e MYSQL_USER=drupal -e \
  MYSQL_PASSWORD=drupalPW -e MYSQL_DATABASE=drupal
This long command needs a bit of explanation:
- oc: Name of the command line OpenShift client
- new-app: Subcommand to create a new application in OpenShift
- php~: Specifies the build container to use (if not provided the S2I process tries to find out the correct one to use)
- Path to the git repository with the app source to clone and use
- #openshift: Git reference to use. In this case the openshift branch
- mysql: Another application to use, in this case a MySQL container
- --group=php+mysql: Group the two applications together in a Pod
- -e: Environment variables to use during container runtime
The new-app command uses some special sauce to determine what build strategy it should use. If there is a Dockerfile present in the root of the cloned repository, it will build a Docker image accordingly. If there is no Dockerfile available, it tries to find out the project's language and uses the Source-to-Image (S2I) process to build a matching Docker image containing both the application runtime and the application code.
In the example above we specify to use a PHP application container and a MySQL container to group them together in a pod. To successfully execute the MySQL container a few environment variables are needed.
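Conceptually, the grouped result is a single pod definition with two containers, roughly like the following hand-written sketch (the pod name, container names, and image names are illustrative assumptions; oc new-app generates the real definition for you):

```yaml
apiVersion: "v1"
kind: "Pod"
metadata:
  name: "drupal-openshift"                     # illustrative name
spec:
  containers:
    - name: "php"
      image: "drupal8/drupal-openshift:latest" # assumed S2I build output
      ports:
        - containerPort: 8080
    - name: "mysql"
      image: "openshift/mysql-55-centos7"      # assumed MySQL image
      env:
        - name: "MYSQL_USER"
          value: "drupal"
        - name: "MYSQL_PASSWORD"
          value: "drupalPW"
        - name: "MYSQL_DATABASE"
          value: "drupal"
```

Grouping both containers in one pod is what makes the database reachable from PHP at 127.0.0.1, since containers in a pod share a network namespace.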
After the application has been built, the service can be exposed to the world:
oc expose service drupal-openshift --hostname='drupal8.apps.example.com'
As the application repository is not perfect and the PHP image is not configured correctly for Drupal (perhaps we hit issue #73!), we need to run a small command inside the pod to complete the Drupal installation (substitute ID with the ID of your running pod, which is visible via oc get pods):
oc exec drupal-openshift-1-ID php composer.phar install
Now navigate to the URL
drupal8.apps.example.com and run the installation wizard. You'll want to set the DB host to "127.0.0.1" as “localhost” doesn't work as you might expect.
At the end there is still a lot to do to get a "perfect" production-grade Drupal instance up and running. For example, it is probably not a good idea to run the application and database in the same pod, because it makes scaling difficult. Scaling a pod up scales all Docker images contained within the pod. This means that the database image also needs to know how to scale, which is not the case by default. At its heart, scaling just means spinning up another instance of a Docker image, but that does not mean that user-generated data is automatically available to the additional running instances. This needs some more effort put in.
The best approach here would be to have different pods for the different software types: one pod for the Drupal instance and one pod for the database. That way they can be scaled independently, and the task of replicating user-generated data can be tailored to each software's needs (which of course differ between a web server and a database server).
Persistent storage is also missing in this example. Every uploaded file and any other working files will be lost when the pod is restarted. If you want more information on how to add persistent storage most effectively, there is a very good blog post describing it in detail: OpenShift v3: Unlocking the Power of Persistent Storage.
Helpful commands
When working with OpenShift, it's useful to have some commands ready that help you get information about what is going on.
Commands to run on the OpenShift server
The OpenShift processes are managed by systemd. All logs are written to the systemd journal, so the easiest way to get information about what is going on is to query the system journal:
sudo journalctl -f -l -u docker
System logs of Docker
sudo journalctl -f -l -u origin-master
System logs of Origin Master
sudo journalctl -f -l -u origin-node
System logs of Origin Node
Some more interesting details
Router
The default router runs HAProxy, which is dynamically reconfigured whenever new routes are requested through the OpenShift client. It also exposes its statistics on the router's IP address over TCP port 1936. The password to retrieve them is shown during the router's deployment, or it can be retrieved by running
oc exec router-1-<ID> cat /var/lib/haproxy/conf/haproxy.config | less and looking for a line beginning with "stats auth".
Note: for some reason an iptables rule was missing on my master node, preventing me from getting at the router statistics. I added one manually to overcome this:
sudo iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 1936 -j ACCEPT
Logging
OpenShift brings a modified ELK (Elasticsearch-Logstash-Kibana) stack called EFK: Elasticsearch-FluentD-Kibana. Deployment is done using some templates and the OpenShift workflows. Detailed instructions can be found at Aggregating Container Logs in the OpenShift documentation. When correctly installed and configured as described under the above link, it integrates nicely with the web interface. These steps are not part of the Ansible playbooks and need to be carried out manually.
Metrics
Kubernetes Kubelets gather metrics from running pods. To collect all these metrics, the metrics component needs to be deployed via the OpenShift workflows, similar to the logging component. This is also very well documented at Enabling Cluster Metrics in the OpenShift documentation. It integrates into the web interface fairly seamlessly.
Failover IPs
Creating a highly available service usually involves making IP addresses highly available. If you want an HA router, OpenShift provides the concept of IP failover: you configure an IP address as a failover address and attach it to a service. Under the hood, keepalived keeps track of this IP and makes it highly available using the VRRP protocol.
IPv6
I couldn't find much information about the state of IPv6 in OpenShift, but it seems problematic right now, as far as I can see. Docker supports IPv6, but Kubernetes seems to lack functionality: Kubernetes issue #1443
Lessons learned so far
During the last few months while making myself familiar with OpenShift I've learned that the following points are very important in understanding OpenShift:
- Learn Kubernetes
This is the main building block in OpenShift, so a good knowledge is necessary to get around the concepts of OpenShift.
- Learn Docker
As this is the second main building block of OpenShift, it's also important to know how it works and what concepts it follows.
- Learn Twelve-Factor
To get the most out of a PaaS, deployed applications should closely follow the Twelve-Factor document.
Some other points I learned during experimenting with OpenShift:
- The Ansible playbook is a fast-moving target. Sadly, most of the time the paths written in the documentation don't match the Ansible code. Also, some things didn't work well, for example upgrading from Origin 1.0.8 to Origin 1.1.0.
- OpenShift heavily depends on some features, such as SELinux, which are by default only present on Red Hat-based Linux distributions. This makes it hard to go about getting OpenShift working on Ubuntu-based Linux distributions without things quickly becoming a yak shaving exercise. In theory it should be possible to run OpenShift on all distributions supporting Docker and Kubernetes, but as always the devil lies in the details.
- Proper preparation is the key to success. As OpenShift is a complex system, preparation helps to get to a working system. This means not only preparing VMs and IP addresses, but also preparing knowledge: learning how everything works and trying it out in a test system.
Some more technical learnings:
- When redeploying the registry the master needs to be restarted, as the master caches the registry service IP. This is possible via:
sudo systemctl restart origin-master
- Before running applications on OpenShift it makes sense to run the diagnostics tool using: sudo openshift ex diagnostics
Automate all the things!
The friendly people at Puzzle are working on a Puppet module to make the OpenShift installation and configuration as automated as possible. Internally it calls Ansible to do all the lifting. While it doesn't make sense to re-implement everything in Puppet, the module helps with manual tasks that have to be carried out after having OpenShift installed by Ansible. For an already existing Puppet environment, this is great to get OpenShift integrated.
You can find the source on GitHub and help to make it even better: puzzle/puppet-openshift3.
Conclusion and Thanks
This blog post only touches the tip of the iceberg regarding OpenShift and its capabilities as an on-premise PaaS. It would take many books to cover all of this very exciting technology. For me it's one of the most thrilling pieces of technology in many years, and I'm sure it will have a bright future!
If you are interested in running OpenShift in production, I suggest doing a proof-of-concept fairly early. Be prepared to read and plan a lot. Many important topics such as monitoring, security, and backup were not covered here and are important for a production-ready PaaS.
For those wanting additional reading material on OpenShift, plenty of information is available online; the official OpenShift documentation is a good place to start.
I want to thank everyone who helped me with this blog, especially my workmates at VSHN (pronounced like vision / vĭzh'ən) and our friends at Puzzle ITC. Another special thank-you goes to Nick Silkey (@filler) for being the editor of this article.
If you have any comments or suggestions, or if you want to discuss more about OpenShift and the PaaS topic, you can find me on Twitter @tobruzh or on my personal techblog, tobrunet.ch.
vim - how to stop this bug: insert single char func - put vs p
I wanted to be able to easily insert a single char from normal mode, and clean up some of the mappings that were being added for that purpose while I was at it. I have this in my vimrc now:
"insert single char from normal mode function! InsertSingleChar() "jump into insert mode and place character after the cursor let l:char = getchar() if l:char != 0 silent! exec "normal a" . nr2char(l:char) endif endfunction
But the only way to input a p is to wait for the mapping to time out and then type it; otherwise it performs a put instead.
I call this using the following map:
nnoremap <leader>j :call InsertSingleChar()<cr>
How can I fix it? Appreciated.
I had a mapping and forgot about it, when I searched to be sure I mistyped and missed it. Sorry everyone! Thanks for the help
Answers
If I understand you right, you only have a problem with the insertion of p; other letters work fine.
This sounds like you have a jp mapping that is preferred by Vim, so you have to type j (timeout) p to apply your mapping with a value of p, and not trigger the jp mapping instead.
You can check via
:verbose nmap jp
and then probably would have to remap that one to use a second non-printable character, e.g. j<C-P> instead.
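For instance, if the conflicting mapping turned out to be something like the purely hypothetical `nnoremap jp :w<CR>`, it could be remapped along those lines:

```vim
" hypothetical conflicting mapping, found via :verbose nmap jp
nunmap jp
" remap it with a non-printable second character so a plain p works again
nnoremap j<C-p> :w<CR>
```

After the remap, typing p following <leader>j is no longer ambiguous, so Vim inserts the character immediately instead of waiting for the timeout.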
• The program starts with a series of #include statements that use several pre-defined libraries that come with the SimpleIDE software. These libraries make C language coding on the PAB much easier.

Important! In this version of the PropBot, “front” and “back” have been swapped. The leaf switches described in earlier parts of this series defined the “front” of the robot. We now want the Ping sensor to be in the front. To accomplish this, the motor control routines are exchanged so that the servos work in the reverse fashion of what they did in earlier articles.

• Next are some variable definitions for I/O pins.

• The main program contains the main block of executable code. It begins — as demonstrated in earlier installments — with a three-tone song that signals to us things have loaded up and are about to begin. The while(1) statement block that follows forms an endless loop that repeats indefinitely. In this loop, the if (doPingSimple() < 5) statement fires the Ping and waits for a response. If the result is four or less, the robot is too close to an object in front of it. If so, the robot backs up, turns, and heads in a new direction.

• The series of motor control statements are simple functions that encapsulate the commands for turning the robot's drive servos in the proper direction.

• Two user-defined functions at the end — doPing and doPingSimple — fire the Ping sensor and return the result. The functions differ in the control of the turret servo.

Enhancing the Navigating Program

With variations in code, you can have your PropBot do all sorts of nifty things. Listing 5: PropBot_NavigateAndPingWander lets the PropBot seek out things to investigate while checking for obstacles that are too close.

Listing 5: PropBot_NavigateAndPingWander.

#include "simpletools.h"   // Include simpletools lib
#include "servo.h"         // Include servo lib
#include "ping.h"          // Include ping lib

void watcher();            // Forward declaration for
                           // background timer

int turretPin = 14;        // Standard servo connected to P14
int pingPin = 15;          // Ping connected to P15
int piezo = 4;             // Piezo connected to P4
int minDistance = 5;       // Minimum distance (inches) allowed
int watchDog;              // Re-used variable for cog ID

int main() {               // Main function
  // Sound at startup
  freqout(piezo, 200, 500);
  freqout(piezo, 200, 1000);
  freqout(piezo, 200, 2000);

  // Turret positioning values
  int turretCenter = 900;
  int turretRight = 400;
  int turretLeft = 1400;

  print("Started\n");      // Send text to Terminal
  watchDog = cog_run(&watcher, 10);  // Run background timer in another cog

To try this program, select position 1 on the PAB's main switch. This turns the Propeller on, but doesn't power the servos.
hey guys, thanks for looking at this. I'm new to java and need help with this one problem. What i need to do is,
3) Write a program that calculates and displays a person's body mass index (BMI). The BMI is calculated as follows:

BMI = weight * 703 / height^2

Where weight is measured in pounds and height is measured in inches. The program should display a message indicating whether the person has optimal weight, is underweight, or is overweight.
so far my code looks like this and i am just stuck and dont know where to go from here ( or if im even on the right track)
any comments or help would be great.
I know this code is not complete at all but i just dont know where to really go from here. Im not positive on how i can put this part of the problem in.
// This program is to determine someone's bmi
import javax.swing.JOptionPane;

public class body_mass_index {
    public static void main(String[] args) {
        String input;
        int bmi;
        int weight;
        int height;

        input = JOptionPane.showInputDialog("Please enter your weight");
        weight = Integer.parseInt(input);

        input = JOptionPane.showInputDialog("Please enter your height");
        height = Integer.parseInt(input);

        if (bmi > 18.5)
            JOptionPane.showMessageDialog(null, "Your weight is considered Optimal");
    }
}
Use if statements following the logic of the statements you have in bold letters.
Do you know how to write a multipart condition test in an if statement.
For example: if(cond1 oper cond2), where cond1 and cond2 are boolean expressions (like: a < 4) and oper is a boolean operator like && or ||
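A hedged sketch of where that logic could lead (this is not the poster's final answer, just one way to finish the if/else chain; the cutoffs 18.5 and 25 are the commonly used BMI bounds, so adjust them to the assignment's exact wording):

```java
public class BmiDemo {
    // BMI = weight * 703 / height^2 (pounds and inches, per the assignment)
    public static double bmi(int weightLb, int heightIn) {
        return weightLb * 703.0 / (heightIn * heightIn);
    }

    public static String classify(int weightLb, int heightIn) {
        double b = bmi(weightLb, heightIn);
        if (b < 18.5) {
            return "underweight";
        } else if (b <= 25) {   // multipart form: b >= 18.5 && b <= 25
            return "optimal";
        } else {
            return "overweight";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(100, 70)); // underweight
        System.out.println(classify(150, 65)); // optimal
        System.out.println(classify(200, 65)); // overweight
    }
}
```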
yes. I somewhat know how
/*************************************************
* This module contains the external function pcre_version(), which returns a
* string that identifies the PCRE version that is in use. */

#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include "pcre_internal.h"


/*************************************************
*          Return version string                 *
*************************************************/

/* These macros are the standard way of turning unquoted text into C strings.
They allow macros like PCRE_MAJOR to be defined without quotes, which is
convenient for user programs that want to test its value. */

#define STRING(a)  # a
#define XSTRING(s) STRING(s)

/* A problem turned up with PCRE_PRERELEASE, which is defined empty for
production releases. Originally, it was used naively in this code:

  return XSTRING(PCRE_MAJOR) "." XSTRING(PCRE_MINOR)
         XSTRING(PCRE_PRERELEASE) " " XSTRING(PCRE_DATE);

However, when PCRE_PRERELEASE is empty, this leads to an attempted expansion
of STRING(). The C standard states: "If (before argument substitution) any
argument consists of no preprocessing tokens, the behavior is undefined." It
turns out the gcc treats this case as a single empty string - which is what we
really want - but Visual C grumbles about the lack of an argument for the
macro. Unfortunately, both are within their rights. To cope with both ways of
handling this, I had resort to some messy hackery that does a test at run
time. I could find no way of detecting that a macro is defined as an empty
string at pre-processor time. This hack uses a standard trick for avoiding
calling the STRING macro with an empty argument when doing the test. */

PCRE_EXP_DEFN const char * PCRE_CALL_CONVENTION
pcre_version(void)
{
return (XSTRING(Z PCRE_PRERELEASE)[1] == 0)?
  XSTRING(PCRE_MAJOR.PCRE_MINOR PCRE_DATE) :
  XSTRING(PCRE_MAJOR.PCRE_MINOR) XSTRING(PCRE_PRERELEASE PCRE_DATE);
}

/* End of pcre_version.c */
Routine work is all around us every day, whether you like it or not. For a teacher of computing subjects, grading assignments can be such work. Certain computing assignments aim at practicing operating skills rather than creativity, especially in elementary courses. Grading this kind of assignment is time-consuming and repetitive, if not tedious.
In a business information system course that I taught, one lesson was about writing web pages. As the course was the first computing subject for the students, we used Nvu, a WYSIWYG web page editor, rather than coding the HTML. One class assignment required writing three or more inter-linked web pages containing a list of HTML elements.
Write three or more web pages having the following:
- Italicized text (2 points)
- Bolded text (2 points)
- Three different colors of text (5 points)
- Three different sizes of text (5 points)
- Linked graphics with border (5 points)
- Linked graphics without border (5 points)
- Non-linked graphics with border (3 points)
- Non-linked graphics without border (2 points)
- Three external links (5 points)
- One horizontal line--not full width of page (5 points)
- Three internal links to other pages (10 points)
- Two tables (10 points)
- One bulleted list (5 points)
- One numerical list (5 points)
- Non-default text color (5 points)
- Non-default link color (2 points)
- Non-default active link color (2 points)
- Non-default visited link color (2 points)
- Non-default background color (5 points)
- A background image (5 points)
- Pleasant appearance in the pages (10 points)
Beginning to grade the students' work, I found it monotonous and error-prone. Because the HTML elements could be in any of the pages, I had to jump to every page and count the HTML elements in question. I also needed to do it for each element in the requirement. While some occurrences were easy to spot in the rendered pages in a browser, others required close examination of the HTML code. For example, a student wrote a horizontal line (
<hr> element) extending 98 percent of the width of the window, which was difficult to differentiate visually from a full-width horizontal line. Some other students just liked to use black and dark gray as two different colors in different parts of the pages. In addition to locating the elements, awarding and totaling marks were also error-prone.
I felt a little regret about the flexibility in the requirement. If I had fixed the file names of the pages and assigned the HTML elements to individual pages, grading could have been easier. Rather than continuing the work with regret, I wrote a Perl program to grade the assignments. The program essentially parses the web pages, awards marks according to the requirements, writes basic comments, and calculates the total score.
Processing HTML with Perl
Perl's regular expressions have excellent text processing capability and there are handy modules for parsing web pages. The module
HTML::TreeBuilder provides a HTML parser that builds a tree structure of the elements in a web page. It is easy to create a tree and build its content from a HTML file:
$tree = HTML::TreeBuilder->new;
$tree->parse_file($file_name);
Nodes in the tree are
HTML::Element objects. There are plenty of methods with which to access and manipulate elements in the tree. When you finish using the tree, destroy it and free the memory it occupied:
$tree->delete;
The module
HTML::Element represents HTML elements in tree structures created by
HTML::TreeBuilder. It has a huge number of methods for accessing and manipulating the element and searching for descendants down the tree or ancestors up the tree. The method
find() retrieves all descending elements with one or more specified tag names. For example:
@elements = $element->find('a', 'img');
stores all
<a> and
<img> elements at or under
$element to the array
@elements. The method
look_down() is a more powerful version of
find(). It selects descending elements by three kinds of criteria: exactly specifying an attribute's value or a tag name, matching an attribute's value or tag name by a regular expression, and applying a subroutine that returns true on examining desired elements. Here are some examples:
@anchors = $element->look_down('_tag' => 'a');
retrieves all
<a> elements at or under
$element and stores them to the array
@anchors.
@colors = $element->look_down('style' => qr/color/);
selects all elements at or under
$element having a
style attribute value that contains
color.
@largeimages = $element->look_down(
    sub {
        $_[0]->tag() eq 'img'
          and ( $_[0]->attr('width') > 100 or $_[0]->attr('height') > 100 );
    }
);
locates at or under
$element all images (
<img> elements) with widths or heights larger than 100 pixels. Note that this code will produce a warning message on encountering an
<img> element that has no
width or
height attribute.
You can also mix the three kinds of criteria into one invocation of
look_down. The last example could also be:
@largeimages = $element->look_down(
    '_tag'   => 'img',
    'width'  => qr//,
    'height' => qr//,
    sub { $_[0]->attr('width') > 100 or $_[0]->attr('height') > 100 }
);
This code also caters for any missing
width or
height attribute in an
<img> element. The parameters
'width' => qr// and
'height' => qr// guarantee selection of only those
<img> elements that have both width and height attributes. The code block then checks the attribute values when invoked.
The method
look_up() looks for ancestors from an element by the same kinds of criteria of
look_down().
Processing Multiple Files
These methods provide great HTML parsing capability to grade the web page assignments. The grading program first builds the tree structures from the HTML files and stores them in an array
@trees:
my @trees;
foreach (@files) {
    print " building tree for $_ ...\n" if $options{v};
    my $tree = HTML::TreeBuilder->new;
    $tree->parse_file($_);
    push( @trees, $tree );
}
The subroutine
doitem() iterates through the array of trees, applying a pass-in code block to look for particular HTML elements in each tree and accumulating the results of calling the code block. To provide detailed information and facilitate debugging during development, it calls the convenience subroutine
printd() to display the HTML elements found with their corresponding file name when the verbose command line switch (
-v) is set. Essentially, the code invokes this subroutine once for each kind of element in the requirement.
sub doitem {
    my $func = shift;
    my $num  = 0;
    foreach my $i ( 0 .. $#files ) {
        my @elements = $func->( $files[$i], $trees[$i] );
        printd $files[$i], @elements;
        $num += @elements;
    }
    return $num;
}
The code block passed into
doitem is a subroutine that takes two parameters, a file name and its corresponding HTML tree, and returns an array of selected elements in the tree. The following code block retrieves all HTML elements in italic, including the
<i> elements (for example,
<i>text</i>) and elements with a
font-style of
italic (for example,
<span STYLE="font-style: italic">text</span>).
$n = doitem sub {
    my ( $file, $tree ) = @_;
    return (
        $tree->find("i"),
        $tree->look_down( "style" => qr/font-style *: *italic/ )
    );
};
marking "Italicized text (2 points): "
  . ( ( $n > 0 ) ? "good. 2" : "no italic text. 0" );
Two points are available for any italic text in the pages. The
marking subroutine records grading in a string. At the end of the program, examining the string helps to calculate the total points.
Other requirements are marked in the same manner, though some selection code is more involved. A regular expression helps to select elements with non-default colors.
my $pattern = qr/(^|[^-])color *: *rgb\( *[0-9]*, *[0-9]*, *[0-9]*\)/;
return $tree->look_down(
    "style" => $pattern,
    sub { $_[0]->as_trimmed_text ne "" }
);
Nvu applies colors to text by the
color style in the form of
rgb(R,G,B) (for example,
<span STYLE="color: rgb(0, 0, 255);">text</span>). The above code is slightly stricter than the italic code, as it also requires an element to contain some text. The method
as_trimmed_text() of
HTML::Element returns the textual content of an element with any leading and trailing spaces removed.
Nested invocations of
look_down() locate linked graphics with a border. This selects any link (an
<a> element) that encloses an image (an
<img> element) that has a border.
return $tree->look_down( "_tag" => "a", sub { $_[0]->look_down( "_tag" => "img", sub { hasBorder( $_[0] ) } ); } );
Finding non-linked graphics is more interesting, as it involves both the methods
look_down() and
look_up(). It should only find images (
<img> elements) that do not have a parent link (a
<a> element) up the tree.
return $tree->look_down( "_tag" => "img", sub { !$_[0]->look_up( "_tag" => "a" ) and hasBorder( $_[0] ); } );
Checking valid internal links requires passing
look_down() a code block that excludes common external links by checking the
href value against protocol names, and verifies the existence of the file linked in the web page.
use File::Basename;
$n = doitem sub {
    my ( $file, $tree ) = @_;
    return $tree->look_down(
        "_tag" => "a",
        "href" => qr//,
        sub {
            !( $_[0]->attr("href") =~ /^ *(http:|https:|ftp:|mailto:)/ )
              and -e dirname($file) . "/" . decodeURL( $_[0]->attr("href") );
        }
    );
};
Nvu changes a page's text color by specifying the color components in the style of the
body tag, like
<body style="color: rgb(0, 0, 255);">. A regular expression matches the style pattern and retrieves the three color components. Any non-zero color component denotes a non-default text color in a page.
my $pattern = qr/(?:^|[^-])color *: *rgb\(( *[0-9]*),( *[0-9]*),( *[0-9]*)\)/;
return $tree->look_down(
    "_tag"  => "body",
    "style" => qr//,
    sub {
        $_[0]->attr("style") =~ $pattern
          and ( $1 != 0 or $2 != 0 or $3 != 0 );
    }
);
With proper use of the methods
look_down(),
look_up(), and
as_trimmed_text(), the code can locate and mark the existence of various required elements and any broken elements (images, internal links, or background images).
Finishing Up
The final requirement of the assignment is a pleasant look of the rendered pages. Unfortunately,
HTML::TreeBuilder and its related modules do not analyze and quantify the visual appearance of a web page. Neither does any module that I know of. OK, I would award marks for the appearance myself, but I still wanted Perl to help in the process: the program sets a default score and comment, and allows overriding them in a flexible way. By using alternative regular expressions, I can accept the default, override the score only, or override both the score and comment.
my $input = ""; do { print "$str1 [$str2]: "; $input = <STDIN>; $input =~ s/(^\s+|\s+$)//g; } until ( $input =~ /(.*\.\s+\d+$|^\s*$|^\d+$)/ ); $input = $str2 if $input eq ""; if ( $input =~ /^\d+$/ ) { $n = $input; if ( $n == 10 ) { $input = "good looking, nice content. $n"; } else { ( $input = $str2 ) =~ s/(\.\s*)\d+\s*$/$1$n/; } } marking "$str1 $input";
Finally, the code examines the marking text string containing comments and scores for each requirement to calculate the total score of the assignment. Each line in that string is in a fixed format (for example,
"Italicized text (2 points): good. 0"). Again, regular expressions retrieve and accumulate the maximum and awarded points.
my ( $total, $score ) = ( 0, 0 ); while ( $marktext =~ /.*?\((\d+)\s+points\).*?\.\s+(\d+)/g ) { $total += $1; $score += $2; } marking "Total ($total points): $score";
Depending on the command-line switches, the program may start a browser to show the first page so that I can look at the pages' appearance. It can also optionally write the grading comments and score to a text file which can be feedback for the student.
I can simply run the program in the directory containing the HTML files, or specify the set of HTML files in the command-line arguments. In the best case, I just let it grade the requirements and press
Enter to accept the default marking for the appearance, and then jot down the total score and email the grading text file to the student.
Conclusion
I did not evaluate the time saved by the program against its developing effort. Anyway, the program makes the grading process more accurate and less prone to error, and it is more fun to spend time writing a Perl program and getting familiar with useful modules.
In fact, there are many other modules that could have been used in the program to provide even more automation. Had I read Wasserman's article "Automating Windows Applications with Win32::OLE," the program would record the final score to an Excel file automatically. In addition, networking modules such as
Mail::Internet,
Mail::Mailer, and
Mail::Folder could retrieve the assignment files from emails and send the feedback files to the students directly from the program. | http://www.perl.com/pub/2006/01/19/analyzing_html.html | CC-MAIN-2016-30 | refinedweb | 2,053 | 58.82 |
Hello Programmers,
This is my first post on my quest to self study Java more in depth since my recent graduation for my BSIT/Software Engineering Degree. I'm stuck on this exercise. I'm having trouble understanding why my method will not recognize that I am returning a double. Here is the requirements. And my code is below. I am using the acm libraries solely at this point and shouldn't use the Math libraries as this project is basically about reinventing the wheel. Finally, all my method projects to this point have been with int variable only. Thanks in advance.
Write a method raiseRealToPower that takes a floating-point value x and an integer
k and returns x^k. Implement your method so that it can correctly calculate the result
when k is negative, using the relationship
x^-k = 1 / x^k
Use your method to display a table of values of πk for all values of k from –4 to 4.
import acm.program.*; public class raiseRealToPower extends ConsoleProgram{ double PI = 3.1417; int p; double temp = 1.0; public void run(){ for (int i = -4; i < 5; i++){ double result = power(PI, i); println("The result of PI to the power of" + i + " is : " + result); } } private double power(double x, int k){ double num = x; if (k<0) { p = -k; } if (k == 0) { return 1.0; } for (int i = 1; i < p + 1; i++){ temp = num * temp; } if (k < 0) return (1/temp); if (k > 0) return temp; } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/10022-help-method-returning-double.html | CC-MAIN-2014-52 | refinedweb | 252 | 71.34 |
The client should never ever decide how the server should response.
A few days ago, I was faced by the options to choose whether our API design should compromise its universality by changing the response in a non-standard format or stick with the original design. The need to change the API to the non-standard format came from one of its clients because they have some virtual restriction on how to process the response.
Let me explain that a little bit.
Recently, we have a project with one of telco companies here and we need to be able to open the API to talk to the SMS Gateway as well as IVR. We have defined a set of rules that should never be changed unless the end of the world is near. One of the API is a resource which can store the data into the database based on user’s SMS.
Let’s say that a user type these into their phone and send it as a text message to our server:
REGISTER John Doe#12345678#10-04-2002#Jakarta
and the user, John, will get a lot of information from the server like this:.
The backend will look something like this:
John Doe send a text message to a short number, 1234, and the phone will send it through MSC and SMSC. SMSC forward the packet to SMS Gateway and the SMS Gateway will forward the information to the server. The server’s response will be accepted by SMS Gateway and it will forward to the SMSC. SMSC to the MSC and finally, the user get the response from MSC. (Fiuh, what a confusing world we lived in).
So, why is it all related to the universality of APIs?
We discuss further on how to make a long response fit into user’s phone without even a single word being broken. As we’re already know, SMS is limited to 160 characters and we have the response exceeds this limitation, it will gonna break into two or more text messages. It’s fine. No problem with that.
The actual problem is that we don’t want to break even one word to be two pieces which became no meaning at all. Take as an example: welcome become wel come,communication become commun ication, and so on. This is not good.
So there comes the idea to break the long response into pieces and every pieces is no longer than 160 chars.
But how?
One of our team suggest that the server response would be something like this:
{'status':'OK','code':'200','data':{'response':'register_ok','response_txt':. } }
(If you noticed that I’m putting the pipe ‘|’ there, yes, you’re right. That is the separator for every 160 chars. Even though I didn’t put it exactly at the 160th char. This is just for illustration.)
This suggestion sounds very good and fix the problem. But be aware that this problem also break the API’s rule, universality (or in other term generality), and potentially create problems.
So what’s the problem?
So with the new API, the SMS Gateway is able to split the long response into pieces, each pieces has no longer than 160 chars. Everyone happy. The SMS Gateway is happy. The user is also happy (or maybe don’t care at all). But what about another third party app (such as Android or iOS client) that decided to use our API?
Yes, they have to split the response also. Or the proper way to do that is to strip the separator ‘|’ char from the response. But it’s not a proper way either. And suddenly, all third party clients questioning and complaining our API why in the world we always putting pipe in the middle of our response?
The Solution
The client should never ever decide how the server should response. We rolled back the API to the way it used to be –long response without the ugly pipe– and we suggest this simple function written in Python to solve the SMS Gateway problem:
def split_to_sms(txt): results =[] txt_temp = txt while len(txt_temp)>160: cut_pos =string.rfind(txt_temp,' ',0,160) results.append(txt_temp[0: cut_pos]) txt_temp = txt_temp[cut_pos +1:]if len(txt_temp)>0: results.append(txt_temp)return results
We put the code in the client (SMS Gateway) and everybody is now really happy (the user still don’t care though, the response is much the same as before.)
Problem solved without compromising the API’s design principle.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/universality-apis | CC-MAIN-2016-44 | refinedweb | 764 | 71.65 |
One of the reasons I switched to Python from R is because Python’s phylogenetic capabilities are very well developed, but R is catching up. I’m moving into phylogenetic community ecology, which requires a lot of tree manipulation and calculation of metrics and not so much actual tree construction. Python is excellent at these things and has an excellent module called ETE2. R has a few excellent packages as well, including ape and picante.
I can’t compare and contrast all of the features of R and Python’s phylogenetic capabilities. But since I like making pretty pictures, I thought I’d demonstrate how to plot in both R and Python. I’ll say that making a basic plot is pretty simple in both languages. More complex plots are.. well, more complex. I find that the language of ETE2 is more full featured and better, but it had a pretty steep learning curve. Once you get the hang of it, though, there is nothing you can’t do. More or less.
R’s phylogenetic plotting capabilities are good, but limited when it comes to displaying quantitative data along side it. For example, it’s relatively easy to make a phylogeny where native and introduced species have different colors:
require(picante) SERCphylo <- read.tree('/Users/Nate/Documents/FIU/Research/SERC_Phylo/SERC_Nov1-2013.newick.tre') # species cover fullSpData <- read.csv("~/Documents/FIU/Research/Invasion_TraitPhylo/Data/master_sp_data.csv") # phylogeny SERCphylo <- read.tree('/Users/Nate/Documents/FIU/Research/SERC_Phylo/SERC_Nov1-2013.newick.tre') # traits plantTraits <- read.csv("~/Documents/FIU/Research/Invasion_TraitPhylo/Data/plantTraits.csv") # Put an underscore in the species names to match with the phylogeny plantTraits$species <- gsub(' ', '_', plantTraits$species) #Isolate complete cases of traits traits <- subset(plantTraits, select = c('species', 'woody', 'introduced', 'SLA', 'seedMass', 'toughness')) traits <- traits[complete.cases(traits), ] # Make a phylogeny of species for which traits are present drops <- SERCphylo$tip.label[!(SERCphylo$tip.label %in% traits$species)] cleanPhylo <- drop.tip(SERCphylo, drops) # merge the species with the traits, in the order that they appear in the phylogeny plotTips <- data.frame('species' = cleanPhylo$tip.label) plotCols <- merge(plotTips, traits[,c(1,3,4,6)], sort=F) # make a black/red container tCols <- c('black', 'red') # plot the phylogeny, coloring the label black for natives, red for introduced pT <- plot(cleanPhylo, show.tip.label = T, cex = 1, no.margin = T, tip.color = tCols[plotCols$introduced + 1], label.offset = 2) # put a circle at the tip of each leaf tiplabels(cex = 0.1, pie = plotCols$introduced, piecol = c('red', 'black'))
It’s also relatively easy to display trait data alongside it, using another two other packages, but then you lose the ability to color species differently and, in all honesty, to customize the phylogeny in any way.
require(adephylo) require(phylobase) sercDat <- phylo4d(cleanPhylo, plotCols) table.phylo4d(sercDat)
Python, on the other hand, can do this all in the ETE2 module. The learning curve is a bit steeper, but in all honesty, once you get it down it’s easy and flexible. For example, here’s how to make the first graph above:
import ete2 as ete import pandas as pd # load data traits = pd.read_csv('/Users/Nate/Documents/FIU/Research/Invasion_TraitPhylo/Data/plantTraits.csv') SERCphylo = ete.Tree('/Users/Nate/Documents/FIU/Research/SERC_Phylo/SERC_Nov1-2013.newick.tre') #### TRAIT CLEANUP #### # put an underscore in trait species traits['species'] = traits['species'].map(lambda x: x.replace(' ', '_')) # pull out the relevant traits and only keep complete cases traits = traits[['species', 'introduced', 'woody', 'SLA', 'seedMass', 'toughness']] traits = traits.dropna() # next, prune down the traits data traitsPrune = traits[traits['species'].isin(SERCphylo.get_leaf_names())] # prune the phylogeny so only species with traits are kept SERCphylo.prune(traitsPrune['species'], preserve_branch_length = True) # basic phylogenetic plot SERCphylo.show()
You can use dictionaries to make a couple of guides that retain the trait info for each species
# guide for color cols = [['black', 'red'][x] for x in traitsPrune['introduced']] colorGuide = dict(zip(traitsPrune['species'], cols)) # weights (scaled to 1) slaGuide = dict(zip(traitsPrune['species'], traitsPrune['SLA']/traitsPrune['SLA'].max())) toughGuide = dict(zip(traitsPrune['species'], traitsPrune['toughness']/traitsPrune['toughness'].max())) seedGuide = dict(zip(traitsPrune['species'], traitsPrune['seedMass']/traitsPrune['seedMass'].max()))
Next, you can use node styles to set the basic tree appearance. For example, ETE2 uses thin lines and puts a circle at every node (i.e. split) by default. We can use the traverse function, which just goes through every single node, and set every node to the same style:
# set the base style of the phylogeny with thick lines for n in SERCphylo.traverse(): style = ete.NodeStyle() style['hz_line_width'] = 2 style['vt_line_width'] = 2 style['size'] = 0 n.set_style(style)
This code just says “go through every node, make a default style, but change the line width to 2 and the circle size to 0″. The result is that every node has thicker lines and we’ve removed the circle.
We can go through only the final nodes (the leaves) and tell it to strip out the underscore of the species name, paste in on the end of the branch in italic font, and make the font the color specified in the dictionary above (red if introduced, black if native)
def mylayout(node): # If node is a leaf, split the name and paste it back together to remove the underscore if node.is_leaf(): temp = node.name.split('_') sp = temp[0] + ' ' + temp[1] temp2 = ete.faces.TextFace(sp, fgcolor = colorGuide[node.name], fsize = 18, fstyle = 'italic')
Then, use the treestyle to make a couple of stylistic changes, telling it to apply the layout function, add in some extra spacing between the tips so the phylogeny is readable, and save
ts = ete.TreeStyle() ts.mode = 'r' ts.show_leaf_name = False ts.layout_fn = mylayout ts.branch_vertical_margin = 4 #ts.force_topology = True ts.show_scale = False SERCphylo.render("Python_base.png", w = 1500, units="px", tree_style = ts)
It took a bit more work than R to get this far, but now is the awesome part. We’ve already got a function telling Python to paste a red species name at the end of the branches. We can add in more features, like.. say.. a circle that’s scaled by a trait value by simply adding that to the function. Most of the work is already done. We change the function to:
def mylayout(node): # If node is a leaf, split the name and paste it back together to remove the underscore if node.is_leaf(): # species name temp = node.name.split('_') sp = temp[0] + ' ' + temp[1] temp2 = ete.faces.TextFace(sp, fgcolor = colorGuide[node.name], fsize = 18, fstyle = 'italic') ete.faces.add_face_to_node(temp2, node, column=0) # make a circle for SLA, weighted by SLA values sla = ete.CircleFace(radius = slaGuide[node.name]*15, color = colorGuide[node.name], style = 'circle') sla.margin_left = 10 sla.hz_align = 1 ete.faces.add_face_to_node(sla, node, column = 0, position = 'aligned') # same with toughness toughness = ete.CircleFace(radius = toughGuide[node.name]*15, color = colorGuide[node.name], style = 'circle') toughness.margin_left = 40 toughness.hz_align = 1 ete.faces.add_face_to_node(toughness, node, column = 1, position = 'aligned')
The confusing part is that you first have to make a ‘face’ (ete.CircleFace), giving it a radius proportional to the species trait value and color based on its introduced status. Then, we use the margin property (sla.margin_left) to give it some space away from the other objects. Next, use the align property to make it centered (sla.hz_align = 1). The final call is just telling it to actually add the ‘face’, which column to put it in, and where to put it (see the ETE2 tutorial for a guide). Aligned tells it to put it offset from the branch tip so that all circles are in the same spot (rather than being directly at the end of the branch, which could vary). Column just tells it where to put it, once it’s in the aligned position. So now there’s a phylogeny with quantitative trait data, still colored properly. And this is a simple example. The graphs can get much better, depending on what you want to do.
Took me several hours to get this far, because the language is pretty hard to wrap your head around at first. But once you get it, it sets off all kinds of... | https://www.r-bloggers.com/phylogenies-in-r-and-python/ | CC-MAIN-2016-44 | refinedweb | 1,374 | 59.09 |
You can subscribe to this list here.
Showing
4
results of 4
Hi,
I'am a very beginner with swig and I have a complex (for me) data.
The target language is java.
The data type is define like this in C
typedef struct Factory _Factory;
typedef struct Conv _Conv;
And two methods :
extern Err createFactory( _Factory **factory);
extern Err openFromFactory(_Factory *factory, _Conv **conv);
The **factory are null and initializze in the createFactory method, and i
must get the result (OUTPUT).
Same thing for _Conv **conv.
swig generate SWIGTYPE_p_Conv, SWIGTYPE_p_Factory, SWIGTYPE_p_p_Conv and
SWIGTYPE_p_p_Factory class
but I didn't now how to useit.
I look at the documentation for typemap and OUTPUT but i don't know how to
begin.
Thanks a lot,
Igor Devor
On Tue, 08 Nov 2005 06:02:26 +0100, Thomas.Weinmaier@...
<Thomas.Weinmaier@...> wrote:
>);
>
> I am stuck on this problem for several days now and I wonder whether my
> perl-call is wrong or I could use some typemap to solve this problem.
>
> Every suggestion is appreciated.
>
> Thomas
I think this happens because your DerivedFromC* would be casted implicitly
to an AbstractClassC* in C++. However, with SWIG matters are different I
think. You need to tell SWIG explicitly that there is an implicit cast.
Take a look at the implicit.i file somewhere in the swig library. I hope
it will work for you.
-Matthias
--
Using Opera's revolutionary e-mail client:
yes, it seems you have installed one version and compiling the examples
with another. Please check the Makefile and change something like
SWIG = ...../swig
to
SWIG = ..../preinst-swig
or just sync the swig version you have installed.
Marcelo
Jeffery D. Collins wrote:
> I must be doing something systematically wrong, because I have tried
> this with various combinations of python and swig versions. Swigging
> std::vector eventually results in a compiler error involving
> SWIG_IndexError. Here is a simple example taken from the swig
> documentation (swig-1.3.27, but similar problems with swig-1.3.24;
> python-2.4.1):
>
> example.i:
>
> %module example
> %include "std_vector.i"
>
> namespace std {
> %template(vectori) vector<int>;
> %template(vectord) vector<double>;
> };
>
>
> swig -c++ -python -I/opt/local/include/python2.4/ example.i
> g++ --shared -I/opt/local/include/python2.4/ -o _example
> example_wrap.cxx
>
> example_wrap.cxx: In function `PyObject* _wrap_vectori_pop(PyObject*,
> PyObject*)':
> example_wrap.cxx:3124: error: `SWIG_IndexError' undeclared (first use
> this function)
> example_wrap.cxx:3124: error: (Each undeclared identifier is reported
> only once foreach function it appears in.)
> example_wrap.cxx:3124: error: `SWIG_exception' undeclared (first use
> this function)
> example_wrap.cxx: In function `PyObject*
> _wrap_vectori___getslice__(PyObject*, PyObject*)':
> example_wrap.cxx:3161: error: `SWIG_exception' undeclared (first use
> this function)
> example_wrap.cxx: In function `PyObject*
> _wrap_vectori___setslice__(PyObject*, PyObject*)':
>
> <rest of the stream snipped>
>
> The SWIG_IndexError macro is defined in exception.i, but including
> doesn't change the results. %include "exception.i" doesn't change
> matters. However, if I copy the entire contents of exception.i into
> example.i, it works fine.
>
> What am I missing? I've googled for solutions to this problem without
> success.
>
> Thanks!
>
> --
> Jeffery Collins
>
>
>
>
>
> -------------------------------------------------------
> SF.Net email is sponsored by:
> Tame your development challenges with Apache's Geronimo App Server.
> Download
> it for free - -and be entered to win a 42" plasma tv or your very own
> Sony(tm)PSP. Click here to play:
> _______________________________________________
> Swig-user mailing list
> Swig-user@...
>);
the error is the same. I checked the _wrap.cxx file and the matching
wrapper
function does exist, but it seems not to recognize the second parameter
as a
pointer.
When I change the constructor to expect a real object of DerivedFromC
ClassA (ClassB, DerivedFromC);
and afterwards pass a reference to DerivedFromC this part works and
perl´s
print also tells me that $a is an object of ClassA
$a = mead::ClassA=HASH(0x82bf708)
but I can´t access its attributes and I get a segmentation fault later
on.
I am stuck on this problem for several days now and I wonder whether my
perl-call is wrong or I could use some typemap to solve this problem.
Every suggestion is appreciated.
Thomas | http://sourceforge.net/p/swig/mailman/swig-user/?viewmonth=200511&viewday=8 | CC-MAIN-2014-23 | refinedweb | 676 | 60.01 |
I'm working on a fighting game, using two 360 controllers for 2 players. To get this to work, I've added xInputPureDotNet to my project. Works perfectly, only minor bug seems to be it detects controller 1 as 2 and vice versa, not a big problem to work around. So, In the sample unity project xInputTest which is included, there is a line accessing vibrate for each motor based on trigger input. Since I don't really fully understand how xinput works, I've been trying to use that example to add force feedback to my own project. The line for vibration is :
Gamepad.SetVibration(playerIndex, state.Triggers.left,state.Triggers.right);
By exposing the state.Triggers.left,state.Triggers.right in the inspector, they appear as normal axis input, maxing out at 1.
However, if I plug in a 1 through code, such as :
var vib1 = 1; var vib2 = 1;
Gamepad.SetVibration(playerIndex, vib1,vib2)
(the format is SetVibration(XInputDotNetPure.PlayerIndex,float,float))
Doesn't seem to matter what number I use, all I get is a faint murmur on one controller.
The xinput test only seems to take input from one controller (controller 2), so what I want to figure out is:
A: how can I set up the xinputdotnetpure.PlayerIndex for each controller, since the example only seems to work for one?
Note: the test detects both controllers, as evidenced in the debug log. Only seems to take input from one though.
All I need is to be able to refer to each controller as a PlayerIndex.
B: since the two latter arguments are floats, can't I just set them manually? Why doesn't this work?
In the test scene, (using the state.Trigger.right etc.) the vibrate fully works, at least on the one controller. But if I replace with my own floats, just a faint murmur.
If there's anyone out there who knows how to do this, an example would be extremely helpful..
Thanks in advance for any ideas!
Update:figured out the PlayerIndex, it's PlayerIndex.One, PlayerIndex.Two, etc.
so now I've got the vibrate basically working, just not as strong as I'd like it to be.
Here's the script I'm using, in case anyone's interested:
using UnityEngine;
using XInputDotNetPure;
public class XInputVibrateTest : MonoBehaviour
{
public float testA;
public float testB;
public float testC;
public float testD;
void Update()
{
GamePad.SetVibration(PlayerIndex.Two,testA,testB);
GamePad.SetVibration(PlayerIndex.One,testC, testD);
}
}
then I just set the floats when I want vibration (again, doesn't seem to matter much WHAT I set them to, that's the issue now.. I think it goes 0 to 1, but I can't get the full vibration as in the demo project no matter what I try.)
Answer by Seth-Bergman
·
Jan 06, 2013 at 03:34 AM
In response to this more recent post, I have just revisited this issue. It seems the problem was the part "PlayerIndex.One" "PlayerIndex.Two"... instead I believe simply replacing "0" and "1" would do it, as in:
GamePad.SetVibration(0,testA,testB);
GamePad.SetVibration(1,testC, testD);
(for controller 3 it would be "2", and for controller 4 it would be "3")
(well, I don't know if that was the issue, doesn't really make sense, but at any rate, all seems to work fine now..)
for a javascript example, see my answer in the link :)
Answer by JtheSpaceC
·
Dec 14, 2015 at 02:47 PM
The accepted answer didn't help me. I've failed to get this done about a dozen times until today. You need to download a 'using' namespace to get access to the GamePad function.
The download and instructions (scroll down) can be found here. Enjoy!
Thanks a bunch!
Answer by A0101A
·
Feb 20, 2014 at 02:48 PM
I also have enabled the RUMBLE function, for my FPS game dev, by using the XInputDotNetPure system and it works ok for game pad vibration, but - at the times - i get the permanent vibration that does not stop, after some heavy objects hit the Player, or his camera ... the player has a rigidbody, so, could that be related to the problem, anyone ? Let's shed some light on this, i would not like to think it's a unity bug - i don't think so,Input: vibration on more than 4 360 controller?
0
Answers
rigidbody vibration
0
Answers
Handheld.Vibrate() not working
1
Answer
Network Vibration Problem
1
Answer
Vibrating/Blur Character
1
Answer | https://answers.unity.com/questions/218084/xinput-how-do-i-access-vibration-on-360-controller.html?childToView=375451 | CC-MAIN-2020-05 | refinedweb | 758 | 71.85 |
On 01/09/2011 12:18 PM, Nick Coghlan wrote:
On Mon, Jan 10, 2011 at 3:56 AM, Ron Adamrrr@ronadam.com wrote:
On 01/09/2011 12:39 AM, Nick Coghlan wrote:).
Yes, __builtins__ is a virtual module.
No, it's a real module, just like all the others.
As George pointed out it's "builtins". But you knew what I was referring to. ;-)
I wasn't saying it's not a real module, but there are differences. Mainly builtins (and other c modules) don't have a file reference after it's imported like modules written in python.
import dis dis
<module 'dis' from '/usr/local/lib/python3.2/dis.py'> dis.__file__ '/usr/local/lib/python3.2/dis.py'
import builtins builtins
<module 'builtins' (built-in)> builtins.__file__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute '__file__'
So they appear as if they don't have a source. There is probably a better term for this than virtual. I was thinking it fits well for modules constructed in memory rather than ones built up by executing python code directly.
Hmmm...
Should modules written in other languages have a __file__ attribute?
Would that help introspection or in other ways?
It's this loading part that can be improved.
I don't understand the point of this tangent. The practice of how objects are merged into modules is already established: you use "import " or some other form of import statement. I want to make that work properly*, not invent a new way to do it.
Sorry, I was looking for ways to avoid changing __module__.
All of the above ways, will still have the __module__ attribute on objects set to the module they came from. Which again is fine, because that is what you want most of the time. Just not in the case of partial.
Setting __module__ manually is easy enough in that case.
Cheers, Nick.
I think I'm more likely to side track you at this point. I am starting to get familiar with the c code, but I still have a ways to go before I understand all the different parts. Getting there though. :-).
Cheers, Ron | https://mail.python.org/archives/list/python-ideas@python.org/message/2XVHKJRXHIE62A3GXVXORUPKYKSZUSSZ/ | CC-MAIN-2021-04 | refinedweb | 371 | 75.71 |
/*- * [] = "@(#)strtoq.c 8.1 (Berkeley) 6/4/93"; #endif /* LIBC_SCCS and not lint */ #include <sys/cdefs.h> __FBSDID("$FreeBSD: src/lib/libc/stdlib/strtoll.c,v 1.19 2002/09/06 11:23:59 tjr Exp $"); #include <limits.h> #include <errno.h> #include <ctype.h> #include <stdlib.h> /* * Convert a string to a long long integer. * * Assumes that the upper and lower case * alphabets and digits are each contiguous. */ long long strtoll(const char * __restrict nptr, char ** __restrict endptr, int base) { const char *s; unsigned long long acc; char c; unsigned long; /* * quads is * [-9223372036854775808..9223372036854775807] and the input base * is 10, cutoff will be set to 922337203685477580 and cutlim to * either 7 (neg==0) or 8 (neg==1), meaning that if we have * accumulated a value > 922337203685477580, or equal but the * next digit is > 7 (or 8), the number is too big, and we will * return a range error. * * Set 'any' if any `digits' consumed; make it negative to indicate * overflow. */ cutoff = neg ? (unsigned long long)-(LLONG_MIN + LLONG_MAX) + LLONG_MAX : LLONG ? LLONG_MIN : LLONG_MAX; errno = ERANGE; } else if (!any) { noconv: errno = EINVAL; } else if (neg) acc = -acc; if (endptr != NULL) *endptr = (char *)(any ? s - 1 : nptr); return (acc); } | http://opensource.apple.com/source/Libc/Libc-594.9.4/stdlib/FreeBSD/strtoll.c | CC-MAIN-2016-07 | refinedweb | 197 | 69.18 |
# Summary
Which common pain points and questions do I tackle in this video and article?
- What’s up with all these Angular versions? (0:31)
- Do you really need the Angular CLI? (4:33)
- Which Visual Studio Code Extensions can I recommend? (7:45)
- How can you debug Angular apps? (8:37)
- How can you pass data from A to B (e.g. between components) in Angular apps? (14:44)
- “Can I use Angular with PHP/Node/…”? (16:53)
- “Can I use Angular with Redux”? (18:29)
- How to prevent state loss after page refreshes? (19:51)
- “Can I host my Angular app on Heroku”? (22:46)
- How to fix broken routing after deployment (24:24)
- Can everyone see your code? (26:14)
- How to integrate 3rd party CSS and JS packages (27:05)
# What’s up with all these Angular versions?
When starting off with Angular, it can be really confusing. There are a lot of different versions. So what’s up with
- Angular 1
- Angular 2
- Angular 3
- Angular 4
- …?
It can be confusing, but it’s actually quite simple: Angular 1 was released in 2010 and it is a completely different framework than Angular 2+. The + is important - Angular 2 was a complete re-write of Angular 1, and Angular 4 and 5 are simply the latest versions of Angular 2 (Angular 3 actually never came out due to versioning conflicts).
Yes, this sounds strange.
The reason for this strange versioning is that Angular adopted semantic versioning at the end of 2016 (see this and this blog post).
So whilst Angular 5 sounds like a totally different framework, it actually isn’t. The syntax hasn’t changed since then (and is not about to change over the next releases either). There have been some changes, but most of them happened behind the scenes.
Important: We nowadays refer to Angular 2+ as just Angular whereas Angular 1 is now called AngularJS. So if you read “Angular”, people typically mean Angular 2 or later.
So if you learn Angular 5/6/7 now, you’ll be prepared for Angular 6/7/8, too.
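Semantic versioning itself is easy to read: a version string has the shape MAJOR.MINOR.PATCH, and only a major bump may contain breaking changes - minor versions add features, patches fix bugs. Here's a quick, framework-independent sketch (the function names are just for illustration):

```typescript
// Semantic versioning: MAJOR.MINOR.PATCH.
// Under semver, only a MAJOR bump may contain breaking changes;
// MINOR adds features, PATCH fixes bugs.
interface SemVer {
  major: number;
  minor: number;
  patch: number;
}

function parseVersion(version: string): SemVer {
  const [major, minor, patch] = version.split(".").map(Number);
  return { major, minor, patch };
}

const v = parseVersion("5.2.1");
console.log(v.major); // 5 -> the "Angular 5" part of the name
console.log(v.minor); // 2
console.log(v.patch); // 1
```

That major number is all the “Angular 5” in a headline refers to - a scheduled semver bump, not a new framework.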
# Do you really need the Angular CLI?
This is another question I see a lot: “Do I really need the Angular CLI?“.
The answer is: Yes! You really should use it, because Angular requires a more complex build workflow.
Why is such a build workflow required? Because Angular uses TypeScript, which needs to be compiled to JavaScript. Because Angular should compile its template code before it gets shipped to production to improve performance. And because Angular apps should be optimized (e.g. unnecessary code should be removed) before the app gets shipped.
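To illustrate the first point: TypeScript adds syntax (like type annotations) that browsers can’t execute, so every Angular app has to be compiled to plain JavaScript first. A minimal, Angular-independent example:

```typescript
// The type annotations below are TypeScript-only syntax; a browser
// cannot run this file as-is. The build step compiles the types away,
// emitting roughly:
//   function greet(name) { return "Hello, " + name + "!"; }
function greet(name: string): string {
  return `Hello, ${name}!`;
}

console.log(greet("Angular")); // Hello, Angular!
```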
All these things are taken care of by the CLI and that’s why you should really use it. It makes your life as a developer easier and your apps better. And if you ever need full access to the underlying Webpack config, you can always eject from the CLI-managed setup via `ng eject`.
Connected to that, I sometimes get the questions why you need to install Node.js to developer Angular apps. It’s a valid question - we’re not writing any Node.js code after all! But Node.js is required for two reasons:
- It ships with npm - the Node Package Manager. This tool is the de-facto standard to manage dependencies of frontend and backend projects. Dependencies include production dependencies like the Angular package itself (which you therefore don’t have to download or include from a CDN manually) but also packages like Webpack - a tool which is used by the CLI to bundle all your code together and optimize it
- It powers the build workflow, i.e. it executes a bunch of scripts that compile and optimize your code.
# Which Visual Studio Code Extensions can I recommend?
Visual Studio Code has become a really great IDE for developing Angular apps, I can only recommend using it, especially since it’s free.
One of the major advantages of VSC is its extensibility. You can add dozens of extensions to tailor the editor to your needs. But which extensions do you really need?
The good thing is: For Angular development, you really need only one extension! Angular Essentials by John Papa! It bundles a lot of other useful extensions together, so you actually get more than one extension.
# How can you debug Angular apps?
Debugging Angular apps is something a lot of people struggle with. Let me share the most important tools and tricks to debug your app effectively!
Read the error messages carefully
app-productselector you’re using somewhere. This is a good starting point to dive deeper and explore what could be the error. Some common error sources would be:
- You have a typo in the selector in
@Component()
- You didn’t add the component to the
declarations[]array in your
AppModule
Even if you don’t immediately have these two error sources in mind, you should be able to eventually find them. After all the error is that Angular doesn’t know this selector, so you should look at all places where this selector and component plays an important role.
Use
console.log() to get quick insights into your values
This is kind of a “dirty” trick but it can be really useful to put some
console.log() statements into your code. Why? Because this allows you to quickly look at the value of some variable or property in some execution step in your app. It’s done quickly and can give you the hint you just needed to fix some problem.
Use the browser developer tools
console.log()is alright for some quick and dirty debugging but a better way is to use the browser developer tools every major browser ships with. Here’s a detailed guide to the Chrome Developer Tools for example.
The good thing is: The CLI setup gives you sourcemaps - little translation helpers that allow the browser to map the compiled JS files back to the original source files. Sounds like magic? It’s really useful! It allows you to actually access your original TypeScript source code in the developer tools so that you can place breakpoints and analyze your values directly in that TypeScript code.
In Chrome, you should look for a webpack:// folder in the Sources tab of your dev tools. You should find the TypeScript code there!
Use Augury
Augury is an extension for your developer tools that was built to make the debugging of Angular apps easier. You install it as a Chrome extension and can then use it from inside the Chrome dev tools.
With Augury, you can have a look at your component tree (as it’s currently rendered to the screen), the state of all your components in that tree. injected values and much more. It’s an extremely useful addition that allows you to get even deeper insights into Angular.
If you’re using NgRx: Use the Redux Devtools!
If you’re using NgRx in your Angular app, you should also use the Redux Devtools Extension. Together with the NgRx Devtools, you get detailed insights into the current state of your Redux/ NgRx store, the actions you dispatched and much more.
Extremely useful!
# How can you pass data from A to B (e.g. between components)?
Passing data between components, components and services and components and directives is a common problem in Angular apps. We got a couple of different tools to make sure every piece of our Angular app knows what it needs to know but which tools are this exactly? How and when do you use which “tool”?
Tool 1: Property and Event Binding
Property and event binding are core concepts of Angular apps! Property binding simply means that you pass data into some other element - that could be a native HTML/DOM element or your custom component. So you can set a property of that HTML/DOM element or your component from outside.
Here’s an example:
@Component({...}) export class LoadedProductComponent { @Input() loadedProduct: Product; } @Component({ template: `<app-loaded-product [loadedProduct]="products[0]"></app-loaded-product>` }) export class ProductsComponent { products: Product[] = [{name: 'Milk', amount: 2}] }
In this snippet, the
LoadedProductComponent has a bindable property
loadedProduct. It is bindable (that means: settable from outside) because it’s annotated with the
@Input() decorator.
ProductsComponent uses this feature and passes data into the
LoadedProductComponent: The first element (index
0) of its (non-bindable!)
products array.
Event binding works exactly the other way around. It allows you to emit your own events, which of course gives you a way of information the parent component of another component about some change and optionally pass data with the event. Just as the native
click event also gives you an object with informations about the event (e.g. the
event.target) by default.
It works like this:
@Component({...}) export class LoadedProductComponent { @Output() productOrdered = new EventEmitter(this.orderedProduct); productClicked(selectedProduct) { this.productOrdered.emit(selectedProduct); } } @Component({ template: `<app-loaded-product (productOrdered)="doSomething($event)"></app-loaded-product>` }) export class ProductsComponent { doSomething(orderedProduct) { ... } }
Here, the
LoadedProductComponent emits a custom event
productOrdered - created with EventEmitter, which ships with Angular - some
selectedProduct whenever
productClicked is executed (that could be happening because maybe a listener to the native
click event was added to some DOM element).
The event is listenable from outside (i.e. from the parent component -
ProductsComponent in this case) because
@Output() was added as a decorator to
productOrdered.
In the
ProductsComponent, a listener for
productOrdered is added to
<app-loaded-product> and the
doSomething() method is executed whenever the event occurs. The
$event variable which is passed as an argument is a special variable exposed by Angular that simply carries the event payload (i.e. the emitted product -
selectedProduct).
Tool 2: Services
Property and event binding is extremely useful but you can only pass data from parent to child component and the other way around. What if you need to pass data from a grandchild to some other component? So what do you do if no direct connection between two components exists?
You can follow two routes:
- You can build a long chain of
@Input()s and
@Output()s to get data from A to B. This can be a good solution to build highly re-useable components that can be dumped into the template anywhere and only require certain input/ offer certain output.
- But in many cases, you might want to look at Services
Services are simply normal TypeScript classes that can be injected by Angular. Injection means that Angular instantiates the class and provides that instance to a component via its selectors. You can learn more about Services and Dependency Injection here.
Since services can be injected into any component (and also into other services or directives), you can use them to centralize data and access it from anywhere in your application. Really convenient!
To pass data around efficiently, you typically use RxJS subjects as event emitters like this:
// in data.service.ts import { Subject } from 'rxjs/Subject'; export class DataService { dataAdded = new Subject<any>(); addData(newData) { this.dataAdded.next(newData); } } // in data-receiving.component.ts import { Subscription } from 'rxjs/Subscription'; import { DataService } from './path/to/data.service' @Component({...}) export class DataReceivingComponent implements OnInit, OnDestroy { dataSub: Subscription; constructor (private dataService: DataService) {} ngOnInit() { this.dataSub = this.dataService.dataAdded.subscribe(data => { // do something with the data }); } ngOnDestroy() { if (this.dataSub) { this.dataSub.unsubscribe(); // required to prevent memory leaks } } }
Tool 3: NgRx (Redux)
For bigger apps, you may still end up with a complex construct of services that get injected into different places and have challenging relations. A way to introduce a clear pattern of data flow into your app is to use Redux.
Redux simply is a package that offers you a certain collection of tools and enforces a clear pattern of using these tools to make state management simply. Put simply, you have a central store where you have all your app state (e.g. some selected product) and you can access that store from your components etc. You don’t have dozens of services communicating with each other then.
For Angular, you typically don’t use the Redux package itself (though you could do that, it’s not limited to be used in React apps!) but you use NgRx - an Angular-specific Redux implementation that follows the same logic.
# “Can I use Angular with PHP/Node/…?”
I often see the question whether you can use Angular with PHP, Node.js or some other server-side language.
And the answer is: Yes, absolutely!
Angular doesn’t care about your backend and your server-side language, it’s a client-side framework, it runs in the browser!
With Angular, you build Single-Page-Applications (SPAs) and these aren’t rendered on the server. They only communicate with servers through Ajax requests (via Angular’s built-in HttpClient), hence your backend needs to provide a (RESTful) API to which Angular can send its requests. And that’s all!
One exception is important though: If you’re using Angular Universal, you’ll still not use Angular to write server-side code (i.e. to access databases, work with file storage or anything like that) but you can pre-render your Angular apps on the server. That only works with Node.js as of now though, so if that’s important to you, you should go with Node.js
# “Can I use Angular with Redux?”
Yes! This question was already kind of answered in the Pass data from A to B point. You can use the Angular-specific Redux implementation NgRx to implement a central store and other redux features into your Angular app.
# Angular as Angular (or other frameworks and libraries) creates them, you don’t use sessions though. The reason is simple: Your app is decoupled from your backend. You only have one single page in the end and your (RESTful) API to which you talk occasionally doesn’t care about your app - it’s stateless.
ngOnInitlifecycle method in your
AppComponentsince Angular Angular doesn’t use any server-side language! It’s - after your ran
ng build --prod just a bunch of JavaScript and CSS files as well as the
index.html file. You don’t need a Node server for that!
Therefore, for Angular apps, static website hosts like AWS S3 or Firebase Hosting are better choices. They’re typically cheaper, super-easy to setup and don’t require you to add overhead code (like a dummy Node.js server) to just ship your files to your users.
# How to fix broken Routes after Deployment
After deploying an Angular app to a real server, I sometimes hear that the routing stops working. At least if users directly visit any other page than the main page - say or if users refresh the browser whilst being on such a “sub-page”.
index.htmlfile - that’s about its only job!
Therefore, your server can’t do anything with an incoming request pointing at a
/products route - it’s totally unknown to it! Due to the way the web works, requests reach the server though and your client (i.e. the Angular Angular app gets loaded and gets a chance to handle the request.
If you still want to render a 404 error page for routes that are really unknown, i.e. that are also not configured in your Angular app, you’ll ne to add a catch-all route like this:
// Configure all other routes first or they'll not be considered! { path: '**', component: Custom404PageComponent }
# Can Everyone see your Code?
I explained how you can debug your Angular app.
If you can look into your JS (and even TypeScript) code - can’t everyone else do the same?
The answer is: Yes!
>>IMAGE.
# How to integrate 3rd Party CSS and JS Libraries
“Can I use jQuery in my Angular app” and “how can I use Bootstrap in my Angular app” are questions I see a lot.
And it makes sense - writing Angular apps looks like you’re working with a totally different language! It can be hard to understand that you’re still writing a frontend, JavaScript-based app in the end!
But since you’re doing that, integrating third-party libraries is actually rather easy!
Integrating 3rd Party JavaScript Libraries
Let’s have a look at third-party JavaScript libraries like jQuery first as this is a bit more “special”.
Libraries with
.d.ts Files
We first of all have to differentiate between libraries that ship with type definition files (
.d.ts files) even though the library itself might be written in JavaScript. Quite a lot of libraries do that these days. TypeScript can understand such libraries once you start importing objects, methods etc. from them.
Let’s have a look at an example: The Firebase SDK for JavaScript.
You can add it to your project via
npm install --save firebase
Once you added it, you can use it in your files like this:
import * as firebase from 'firebase'
We’re essentially importing from that library as we import from other (custom created) files. We’ll get autocompletion (if supported by the IDE) and TypeScript support because Firebase actually ships a
.d.ts file with the SDK.
What do these
.d.ts files do then?
They act as “translation helpers” - they allow TypeScript to understand the structure and types of the JavaScript library, even though it’s not written with TypeScript.
Libraries with ES6 exports but without
.d.ts Files
We also have libraries that do use ES6 exports (i.e. we can import with the ES6/ TypeScript
import { something } from 'library' statements) but don’t ship a
.d.ts files.
Whilst we won’t get TypeScript support in this case, we should still be able to import by using the above mentioned syntax.
Libraries without ES6 exports and
.d.ts Files
We also have libraries that are not offering any ES6 exports or
.d.ts files. Often, these are JS packages which you include via a CDN or simply download - so you didn’t use
npm install to fetch them.
In such cases, have three options of including them:
- Add a
<script src="path/to/library">tag in the
index.htmlfile
- Add the path (relative from the
src/directory!) to the
scripts[]array in the
.angular-cli.jsonfile
- Add an import to the
main.tsfile in your Angular project:
import 'path/to/file
The latter two options will lead to the library file being included in your final bundle and hence you can access anything the library offers in your TypeScript files.
Here’s how you could install + add Lodash:
npm install --save lodash
import 'lodash'; @Component({...}) export class MyComponent {...}
One important note though: TypeScript will not be aware of the things your library offers - so you have to tell it. Do that with one of the following two options:
- Add the following entry to
src/typings.d.ts:
declare module 'name-of-library'(create that file if it doesn’t exist)
- Add the following entry in any file where you use something from the library:
declare var whatIUse: any;. Make sure to add it right below the import statements, before you create your classes or execute the other code.
Here’s an example for the latter approach:
import 'lodash'; declare var _: any; @Component({...}) export class MyComponent {...}
Integrating 3rd Party CSS Libraries
Integrating third-party CSS libraries like Bootstrap is also simple. You can either install the libraries via
npm install or download the files (or use a CDN).
The libraries can then be included via one of the following three ways:
- Add a
<link href="path/to/file.css">entry to your
index.htmlfile
- Add the path (relative from the
src/folder) to
styles[]in your your
.angular-cli.jsonfile
- Add an import to a TypeScript file (e.g.
main.ts):
import 'path/to/file.css'
The last option certainly looks strange but it works! You can import
.css files into TypeScript files. They’ll not really get imported but Webpack, which bundles everything behind the scenes, will take the file and ensure that it gets loaded in the
index.html file in the end.
Important: For option two (add path to
styles[]), you must use a path relative from
src/, not from the
.angular-cli.json file.
Here’s the example for Bootstrap:
"styles": [ "../node_modules/bootstrap/dist/css/bootstrap.min.css" ]
# Share your Questions!
Do you have more questions? Please share them - either by leaving a feedback for this article or by sending a mail to feedback@academind.com. Make sure to subscribe to our newsletter to be informed when new interesting content is available! | https://academind.com/learn/angular/angular-q-a/ | CC-MAIN-2019-18 | refinedweb | 3,442 | 65.12 |
»
Databases
»
JDBC and Relational Databases
Author
Access database from a class file
Emma Aziz
Greenhorn
Joined: Apr 27, 2009
Posts: 23
posted
May 10, 2009 22:54:26
0
Hello All,
I am currently building a mini programme which connects to Oracle database. I have tried several ways and they are all successful. The ways are:
1) Connection to database and query to database in a single
jsp
file with action in a different html file
2) Connection to database and query to database in a single class file with action in a jsp file
Now, I would like to have action in one html file, query to database in one jsp file and a connection to database in another jsp / class file. How do I do that?
Search.html -> qrySearch.jsp -> Connection.jsp
Note that I am not familiar at all in doGet / doPost method. If someone can point out how is the step-by-step to connect to database which coded in different jsp file.
Thanks in advance.
Jan Cumps
Bartender
Joined: Dec 20, 2006
Posts: 2532
10
I like...
posted
May 11, 2009 04:21:14
0
Hi Emma,
We do not advise that you put database code in your JSP. I am sure someone will be able to point you to a clean way to access data in your web application.
However, examples of what you describe in your question can be found by googling for "jsp
jdbc
tutorial".
OCUP UML fundamental and ITIL foundation
youtube channel
Balu Sadhasivam
Ranch Hand
Joined: Jan 01, 2009
Posts: 874
I like...
posted
May 11, 2009 12:00:36
0
Emma,
As Jan pointed out , its not advisable to use scriptlets in JSP. As a first try you could define generic DB class and get connections in
Servlet
and pass the values to JSP to display it.
Emma Aziz
Greenhorn
Joined: Apr 27, 2009
Posts: 23
posted
May 11, 2009 19:58:47
0
Thanks for all the reply.
I want to access the database which all the information are stored in a class file. Maybe I stated
them worngly previously.
I have this ConnectOracle.java, compiled to a class, which the code looks like this:
package Connect; import java.sql.DriverManager; import java.sql.Connection; public class ConnectOracle{ public static final String url = "jdbc:oracle:thin:@server:1521:sid"; public static final String driver = "oracle.jdbc.driver.OracleDriver"; public static final String userName = "user"; public static final String password = "password"; public static void main(String[] args) throws Exception{ System.out.println("Oracle connection example."); Connection conn = null; try{ Class.forName(driver).newInstance(); conn = DriverManager.getConnection(url,userName,password); } finally{ if(conn!=null){ conn.close(); } } } }
and I have a jsp/html file which will do the query. This page will look for the ConnectOracle.class.
How do I call the class file from my jsp file.
I am having problem compiling the ConnectOracle package. I put "package Connect" at tge first line
but DJ
Java
decompiler said some problems occured while compiling. Without the "package Connect",
it can be compiled successfully.
Thanks in advance.
Balu Sadhasivam
Ranch Hand
Joined: Jan 01, 2009
Posts: 874
I like...
posted
May 11, 2009 22:21:05
0
Emma,
I guess you are confused with .class file and .java file. To make things clearer , A java file is just a normal file which when compiled using javac (JVM provided in JDK) converts to class file. So whenever you say to call/access the java file or class file it typically means you call "class"( not class file , not java file ) , but the "class" created on compiling java file.
It always better to refer them as "classes".
How do I call the class file from my jsp file.
If you still want to use scriptlets in JSP for learning purpose , you can proceed.So how do you import classes in servlet ? similarly import them to JSP using <@%page import="" %> atttribute.
# public static void main(String[] args) throws Exception{ # System.out.println("Oracle connection example."); # Connection conn = null; # try{ # Class.forName(driver).newInstance(); # conn = DriverManager.getConnection(url,userName,password); # }
This is main method , which can be run standalone. When you run from JSP , you dont require it be standalone. JSP Container takes care of calling the required method.
Also , you can see Connection object is a local variable which created inside main , and once main method ends , Connection object is lost. ( no reference to it). So how do you use connection object to make query ?
You got to change your ConnectOracle class to be useful inside JSP/servlet or any other means.
1) Define static Connection conn object outside the method.
static Connection conn = null;
2) Change the main method to take no args ,( otherwise you got to pass dummy useless
String
array whenever you call main()) ,and change the return type to return conn object.
public static Connection main () { }
2) initialise the conn inside the main like you did. ( dont close the connection object) .check if the conn is already there and return , else initialise
if(conn == null ) // initialise conn else return conn;
3) Define a new method to close the connection object.
conn.close(); conn = null; //nullify conn.
Emma Aziz
Greenhorn
Joined: Apr 27, 2009
Posts: 23
posted
May 19, 2009 03:10:22
0
Thanks for all the replies.
I already set up my class files for Database and Query.
The database code is as below:
import java.io.*; import javax.servlet.*; import javax.servlet.http.*; import javax.sql.*; import java.sql.*; import java.util.*; public class ConnectToDB extends HttpServlet{ public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException{ req.setAttribute("conn", conn); RequestDispatcher dispatcher = getServletContext().getRequestDispatcher("Search.class"); dispatcher.forward(req,res); } public void init(ServletConfig config) throws ServletException{ super.init(config); try{ Class.forName("oracle.jdbc.driver.OracleDriver"); conn = DriverManager.getConnection("jdbc:oracle:thin:@server:port:SID", "user", "password"); }catch(ClassNotFoundException e){ throw new UnavailableException(this, "Cannot load OracleDriver"); }catch(SQLException e){ throw new UnavailableException(this, "Cannot get db connection"); } } }
and the query class is as below:
import java.io.*; import javax.servlet.*; import javax.servlet.http.*; import javax.sql.*; import java.sql.*; import java.util.*; public class Search extends HttpServlet{ public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException{ Connection conn = null; System.out.println("Con = " + conn); try { res.setContentType("text/html"); PrintWriter out = res.getWriter(); String sql = req.getParameter("sql"); Statement st; ArrayList al=null; ArrayList emp_list =new ArrayList(); if (sql.equalsIgnoreCase("view")){ String query = "select * from POS_DEF_PROD_DEF";); }else{ String query = "select * from POS_DEF_PROD_DEF where prod_code='"+sql+"'";); } String nextJSP = "/viewSearch.jsp"; RequestDispatcher dispatcher = getServletContext().getRequestDispatcher(nextJSP); dispatcher.forward(req,res); conn.close(); }catch (Exception e) { e.printStackTrace(); } } public void doPost(HttpServletRequest request,HttpServletResponse response) throws ServletException, IOException{ doGet(request, response); } }
The thing is, how do I call the "conn" so that it can be recognized by the 2nd servlet? Whenever I try to compile it will say cannot resolve symbol at the conn. part.
Thanks for all the help.
I agree. Here's the link:
subject: Access database from a class file
Similar Threads
Fetch data from different database schema tables
What Servlet/JSP Architecture
Need Help on DAO
SQL queries
about struts
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/444836/JDBC/databases/Access-database-class-file | CC-MAIN-2015-14 | refinedweb | 1,214 | 57.98 |
Python is not exactly the best option for asynchronous programming. Node.js is pure JavaScript, so its basics remain simple for the developers to learn. What are the benefits of developing in Node.js vs Python?
When it comes to back-end website development, a developer will easily find a handful of refined programming languages. Objective-C, C++, PHP, Python, and Java are languages that play a crucial part in the development of websites. Having said that, there are web development companies and developers who initiate the creation of websites and applications without any proper planning or a definite structure. Here, they can seek out the assistance of frameworks. PHP powers some exceptional frameworks like Laravel, Symfony, cake, Yii and so on to help programmers at this point. In short, the choices available for developers to create websites and applications are a lot many.
Node.js is a powerful run-time environment used by developers and many a Node.js development company for creating web solutions. It is pure JavaScript and is easy to learn. Python is a clean server-side scripting language having a lot of admirers addicted to it. Programmers who come from the Java background may find shifting to PHP as a terrible thing. But developers will feel more comfortable in shifting to Python from Java. Some expert developers use both Node.js and Python in tandem. To utilize the complete merits of Node.js and Python in the development of web solutions, developers must have a clear-cut idea about where it should be used? And where it should not be used? They must have a thorough knowledge of advantages, disadvantages, functionality, and the working of both these platforms.
Where Node.js Excels?
Node.js is pure JavaScript and the basics remain simple for the developers to learn. The learning curve is very low. On multiple occasions, it runs faster than Python. Python tends to be slow at the initial stage. Perhaps, Node.js is the best platform available right now to deal with real-time web applications. The ones that deal with queued inputs, data-streaming, and proxy. Nodes.js performs at its best when used to develop chat applications r functional things such as a live stock-exchange.
Where Node.js fall Behind?
Node.js lacks the clean coding standards. Node.js cannot be recommended for larger projects unless you maintain a team of experts who work in a disciplined manner. Every developer who works in the project must stick to the Promise library or Bluebird and every developer must maintain a strict style guideline to avoid the breaking of the project in the middle.
Debugging, and the inclusion of new features while implementing bigger projects using Node.js may cause pain in the nerve for many programmers. When making use of a dynamically typed language, programmers may fall short of many valuable functions in the IDE. Call-backs, error-handling, and overall maintainability of Node.js may cause issues when used with massive projects. It suits or works quickly while used in small projects for enabling functionality which requires less scripting.
Where Python Excels?
The great advantage of using Python is that you will have to write fewer lines of code and is a clean platform. The learning curve of this platform is not that simple, but learners can easily overcome it once things start going to the long run. This platform has a great maintainability, and errors can be solved within less time. The compact syntax is really simple to work with. It is a language which keeps valuable standards, and is easy to debug and fix errors.
Python comprises of a functional library which is better than PHP. The importing exceptions and namespaces really work well without any issues. Simply, Python can do anything that can be done using PHP codes, and all those can be accomplished at a greater speed(even more). Hence, developers may not face any major issues if they use Python for developing larger projects.
Where Python fall Behind?
Python’s performance is not as fast as Java in a run-time environment. It is not the best for activities that are memory intensive. The language is interpreted causing an initial performance drop down in comparison to java or C/C++.
To put it in simple words, it is not a language suited for developing a high-end 3D game involving graphics and a lot more CPU activity. Python continues to be in a developing state, and the documentation happens to be poor for the newly included functionality. The tutorials, as well as resources detailing the functions of Python, is much less if we compare it with PHP, Java, or C.
Bringing it Altogether
Node.js utilizes a V8 JavaScript interpreter having a built-in Just-In-Time compiler to increase the speed of web applications. Python also has a built-in interpreter, namely PyPy. Still, it does not back the Python version 3.5.1. As a final word, we can stay glued to the old saying, no programming language is good or bad. What adds life to a website or an application is the brain, eyes, and hands that put the merits of a language rightly into it at the time of development. The Complete Node.js Developer Course (3rd Edition)
☞ Angular & NodeJS - The MEAN Stack Guide
☞ NodeJS - The Complete Guide (incl. MVC, REST APIs, GraphQL)
☞ Node.js: The Complete Guide to Build RESTful APIs (2018): | https://morioh.com/p/fb616bd633dc | CC-MAIN-2019-47 | refinedweb | 911 | 66.54 |
I read my notes and I found out that I had that problem.
You need to set the SPI clock to 8Mhz!
insert the spi clock speed at line 374 in lib_nrf24.py
After that your python code should work!
Code: Select all
def begin(self, csn_pin, ce_pin=0): # csn & ce are RF24 terminology. csn = SPI's CE! # Initialize SPI bus.. # ce_pin is for the rx=listen or tx=trigger pin on RF24 (they call that ce !!!) # CE optional (at least in some circumstances, eg fixed PTX PRX roles, no powerdown) # CE seems to hold itself as (sufficiently) HIGH, but tie HIGH is safer! self.spidev.open(0, csn_pin) self.spidev.max_speed_hz=8000000 self.ce_pin = ce_pin | https://www.raspberrypi.org/forums/viewtopic.php?f=45&t=17061&start=250 | CC-MAIN-2019-43 | refinedweb | 116 | 86.4 |
- Table of contents
- Editing Code
- Overview
- Coding Conventions
- Using and Creating a Test Release
- Adding Packages to the Test Release
- Removing Packages from the Test Release
- Compiling Packages in the Test Release
- Checking code into the repository
- Updating a test release
- Editing code remotely with Emacs and Tramp
Editing Code¶
Overview¶
The following is an outline of how to edit the code. Step 3 is usually done only once; i.e., the first time one wants to start editing code.
1. Source the setup file to set the SRT-related variables.
2. cd to a working directory.
3. Create a test release.
4. cd into the test release directory and work on the code.
Whether you are using the Fermilab installation or have installed LArSoft on a local machine, the steps to editing the code are the same. The main benefit of SRT becomes apparent in this process, namely that it allows users to create test releases that are distinct from the local base release. In this system, multiple users can work on several packages from the base release at once without interfering with each other.
Please be sure to follow the coding conventions enumerated below.
Please also sign yourself up to receive email notification of code commits from svn. This will allow you to monitor your package in case others have to commit changes to it. To sign up, follow these instructions with the list name LARSOFTCOMMIT.
Coding Conventions¶
There are only a few conventions to keep in mind when writing code for LArSoft:
- Namespaces must be explicit; no "using namespace XXX" statements are allowed.
- All packages are a namespace; the names of the namespaces should be kept to 5 letters or less and should indicate the package, e.g. evd:: for the EventDisplay package.
- Data members of an object should have variable names that begin with "f", e.g. fADC.
- Static variables defined in an object should have names that begin with "k", e.g. kConstantValue.
- Variable names should be reasonably descriptive for the scope in which they are used: i is fine in a small for loop, but not in one spanning more than 20 lines.
- Typedefs for predefined types are discouraged, e.g. typedef int Int_t or typedef std::vector<double> dubvec. Typedefs should be reserved for legitimate new types, e.g. Origin_t in SimulationBase/MCTruth.h.
- Comments are mandatory: each new object should have a description of its purpose in both the header and implementation file.
- Comments should be of a format that enables doxygen to interpret them.
- Use the message service for output to the screen.
- Module types and file names should be consistent, e.g. if a module is of type MyModule, the file names should be MyModule.h, MyModule.cxx and MyModule_module.cc.
Using and Creating a Test Release¶
A test release should contain only those packages that the user plans to alter. The remaining packages will be linked from the stable base release. To create a test release cd to a directory other than $SRT_DIST or its subdirectories and do
% newrel -t <base> test/release/directory
where <base> is the name of the base release, i.e. development, and test/release/directory is the name of the test release directory, e.g. lartest.
Only one test release needs to be created. It can hold all the packages that the user wishes to modify. However, in certain circumstances it can be useful to set up multiple test releases, such as having a test release for development code and one for a frozen point release.
Adding Packages to the Test Release¶
Once the test release is made, the user should cd into the directory and add packages to edit by doing
% addpkg_svn -h package
where the -h indicates that the package should be checked out from the head of the repository and package is the name of the package. If you have trouble with this step, you may need to be added to the list of developers for larsoft so please contact the larsoft managers. Packages that come out of nusoft use cvs and not svn, so if addpkg_svn fails, try
% addpkg -h nusoft_package
If you have a test release based on a frozen release, SYYYY.MM.DD, then do
% addpkg_svn package <tag>
where <tag> is the svn tag for the version of that package you wish to checkout. See the $SRT_PUBLIC_CONTEXT/setup/packages-SYYYY.MM.DD file to determine the tag for the package.
If you wish to create a package in your test release, simply do
% mkdir mypackage
where mypackage is the name of your new package. You will need to create a GNUmakefile inside the directory; you can copy an example from existing packages. You will also want to create a link from your new package to the include directory of your test release in order to compile your package,
% cd include
% ln -sf ../mypackage .
Removing Packages from the Test Release¶
DO NOT under ANY circumstance do a rm -r package. SRT creates hidden links, not seen by the user, which if left in place will mysteriously cause the test release to break. Instead, one must always do:
% rmpkg <package-name>
This removes the package and all the links that SRT has made, including the .so files in one's lib folder.
Compiling Packages in the Test Release¶
Compiling packages under the SRT environment is fairly straightforward. First, one needs to run the command
% srt_setup -a
This command sets the $SRT_PRIVATE_CONTEXT variable to the current directory and also sets paths for the compilation. To compile a package, do
% gmake package.all
If a clean compilation is needed, the user should do
% gmake package.clean
before compiling. When compiling code, replace package in the above examples with the name of the package; e.g., RecoBase.
If you have several packages in your release that you want to compile at the same time do
% lar_build -t
and that will do a clean build of all the packages in the proper dependency order.
Checking code into the repository¶
These are instructions for checking code into a package that already exists in the repository. Before checking any code into the repository, compile it to be sure it builds. Then cd into the directory and follow these instructions.
Previously Checked Out Code¶
svn diff
This command will print the differences between your edited code and the version in the repository. Check to be sure that the differences are expected. Then do
svn commit -m'an informative message describing the changes'
The "-m" indicates that what follows is a message that will be saved along with the changes in the repository.
Adding New Code to an Existing Package¶
If the code to be committed does not yet exist in the repository it needs to be added with
svn add xxx.yyy
where xxx is the file name and .yyy is the file type, e.g. .cxx, .h, or .cc.
Then do
svn commit -m'adding new code to do ...'
where the commit message indicates the purpose of the newly added code.
Adding a new Package to the Repository¶
Please follow these instructions to add a new package to the repository.
Updating a test release¶
It is useful to bring a test release in line with the head of the release it is based on from time to time. This is especially true if one has several packages in a test release and has not been diligent about updating them. There is a utility to automate this update, source:SRT_LAR/scripts/lar_update_testrel, which is used as follows:
% lar_update_testrel -rel xxxx
where xxxx is the name of the base release the test release is built on, e.g. development. If the test release is built on a frozen release, then xxxx is the name of that frozen release.
The script will compare all the packages in the test release that are also in the base release. If there is a package in a test release that is not in the $SRT_PUBLIC_CONTEXT/setup/packages-xxxx file for the base release, it will not be updated.
Editing code remotely with Emacs and Tramp¶
Emacs has a handy facility called Tramp that will scp a file to your local machine, allowing you to edit it without network latency, and then scp the saved changes back to the original location. To use Tramp, add the following to your .emacs file:

(define-abbrev-table 'my-tramp-abbrev-table
  '(("aliasname" "/ssh:somegpvm.fnal.gov:/uboone/app/users/my_user_name/")))
where "aliasname" is the name you want to use for an alias, "somegpvm.fnal.gov" is the name of the machine you wish to access remotely, and "my_user_name" is your username on that machine.
Restart Emacs, and then attempt to open a file with "ctrl-x ctrl-f". Replace the "~/" in the file open dialog window with "aliasname" and press tab to access the remote directory. | https://cdcvs.fnal.gov/redmine/projects/larsoft/wiki/Editing_Code | CC-MAIN-2019-47 | refinedweb | 1,466 | 59.13 |
It is what it is: largely harmless confabulation on commercial software development, economics, and the relentless desire to see more of the world. Posts by Sebastian Good.

ArcIMS 9.3 on Vista or Vista 64: Internal Error 2738

… Thanks to Swapna at ESRI support services for resolving this issue in just a couple of hours on an otherwise quiet Wednesday afternoon.

…, who broke Georgia?

The US has been "at war" for so long that I suspect most Americans don't think much of the current tussle over Georgia. While the stress of our wars (against drugs, against the Taliban, and against Iraqi militias) has stretched the military to its limits and hurt our economy, the fact is that most people are still not directly affected by them. Why would the threat of a war over Georgia worry us? The number of casualties has not impressed itself into our national consciousness yet. How could it? With approximately 30,000 wounded in a population of 300,000,000 (one hundredth of a percent), the Iraq war cannot affect our conscience the same way the Vietnam war (330,000 casualties in a population of 200,000,000, a fraction 15 times bigger) or World War II (875,000 casualties in a population of 130,000,000, a fraction 70 times as large) did. But shouldn't the thought of war with Russia just scare the pants off everyone? Even a limited nuclear exchange would make the numbers above seem like the good old days. Why does no one talk about it?

An increasingly common thread is the one nicely summed up by Thomas Friedman at the NYT wondering why we would "cram NATO expansion down the Russians' throats". Why was Georgia on a track for NATO membership? Why is Poland in NATO? Against the Soviet Union's stated goals of world domination, promising to defend a divided Germany or a weakened United Kingdom with nuclear weapons seemed sensible enough. Against a modern Russia more keen to misspend its oil wealth and struggle with its shrinking population, why do we need to commit American troops to defend Poland, a land with no natural borders? Or Georgia, a charming yet tiny republic of limited geopolitical consequence? (Yes, they have an oil pipeline. There's only so often Russia can threaten to not sell their oil. It's all they've got as a source of national wealth.)

So I've been wondering this for some time. Why do we need to expand NATO to Russia's doorstep? So we look like fools when the question of "will you defend peripheral NATO members with nuclear force" comes up? If Russia invaded some separatist parts of Poland, would we really pull a half million troops from Iraq, steam the Navy into the North Sea, and target our nukes at Moscow? Really? It seems unimaginable.

Reading this article made me remember I've been wondering this a long time. Through the miracles of the Internet and, coincidentally, the Israeli embassy in the US, I found a transcript of me asking this question to then Secretary of State Madeleine Albright over ten years ago. The full transcript is here.

(My question) It is clear that NATO's role must change in the post-Cold War era, but why does the administration think that expanding NATO's unilateral defense agreements into the former Soviet Bloc is a good idea? Russia's recent pressure on Belarus and some warmongering by its Generals indicate that even an unsteady Russia is not keen on the idea. In this century, Western Europe has offered defense of nations of Eastern Europe before and then reneged on those promises. Why emphasize military inclusion now rather than concentrate on economic inclusion and/or aid?

Her answer, emphasis mine:

This is a very important question and it is clearly among the highest priorities that President Clinton has. He has stated that an undivided and stable Europe is very important to the United States.

I think that we all know as students of history that Central and Eastern Europe have, in fact, been the breeding ground of two World Wars. An instability in that region is something that concerns us all. We have made the decision that it is important to expand NATO to cover that region. *We, however, also know that it is very important that the Russians do not feel that an expanded NATO is a threat to them or an adversarial move.*

*The purpose of an expanded NATO is, in fact, to create, or help to create, stability and deal with problems within that gray zone, that gray area in Central and Eastern Europe.* We think that that is not only to our advantage but, frankly, also to the advantage of Russia. Because we are concerned about Russia and not letting that great country have a sense that it is being left out, we are also in the process of negotiating a charter between NATO and Russia which would, in fact, have the Russians understand that NATO itself is not an adversary.

How silly does that statement seem now with Russia making nuclear threats against Poland's new US-supplied missile defense system? Back in 1997, oil was about $15/barrel, Russia was divided and confused, and we were pressing our advantage in the cold war. With oil now at $115/barrel, and US power at a nadir thanks to poorly chosen wars at our periphery, how smart does it look to set up nuclear outposts in the bear's back yard? Why, we might imagine how we'd feel if the Soviet Union had tried to place nuclear missiles 90 miles from Miami. Fences can indeed make good neighbors, and we ought to be happy to let Georgia be a fence.

Russia won't be strong forever. If we feel threatened by them, let's concentrate on worrying about what gives them strength: the high price of oil. We can wait out their inevitable population decline and foster responsible economic development. If we're going to make a multi-generational geopolitical bet, let's bet on demographics, work on energy independence, and reap the dividends of peace. Let's not bet with our youth that we can push Russia as far as we want, and that all wars are easily fightable on credit. We've already pushed them enough. A war with Russia can't be put on the nation's credit card. I think we knew this 10 years ago when we were uneasy with NATO expansion into the former Warsaw pact. Let's not lose sight of it again.

ArcGIS Explorer Build 480: Threading!

ArcGIS Explorer (AGX to its friends, apparently) has a new release out, with a great number of promised new features. I'd watched previous builds of AGX through an HTTP proxy to see what all it got up to while fetching images from the web. Investigation showed that it was brutally single-threaded and not terribly bright about what order to retrieve tiles in.

Build 480 changes those things a great deal for the better. Even when looking at a single layer, such as the default satellite/aerial imagery provided, you can see AGX running 2 to 4 simultaneous connections to fetch tiles. Their documentation promises multi-threaded downloads only on a dual-core machine, which I've got, so your mileage may vary. I would have thought that even a single-core machine would benefit from having several HTTP requests in flight, as they involve so much waiting around, but one suspects ESRI did some performance testing.

When connecting to a very slow service, such as most of the WMS servers out there, I was able to see AGX with as many as 10 connections in flight at once. This is good! There are still some starvation issues (the crappy WMS service kept the fairly snappy ESRI service from showing up for a long time), but this is a great improvement.

And like before, shutting down AGX while it has several open HTTP connections is not pretty: it waits until they are timed out to shut down completely (even continuing to fetch new tiles while it tries). These flaws mean that you still need to be careful using any service which is slow or broken -- other imagery will get stuck behind it.

More news as investigations continue. So far, so encouraging!

… for new oil in the USA

So the debate over drilling within the US has opened up again. There will be lots of debate about environmental impact that I don't feel sufficiently informed on to comment about. But given the success Bush has had redefining US foreign policy as a contest to see who can be the biggest asshole (read: tough on terrorists), I suspect the debate will revolve once again around that old canard: "energy independence". The idea is that if we drill more at home, we'll be less dependent on foreign oil.

Let me present a slightly different viewpoint I don't hear discussed much in the media. Allow me a couple of asides:

First, the notion that oil drilled in the United States will be consumed in the United States is of course not entirely true. Oil produced in Alaska may be much more cheaply consumed in Japan than in New York. That's just geometry. The market will make that decision based on shipping distances, quality of crude, availability of refining, and a thousand other factors. What domestic drilling may do is increase the world's supply of oil a bit (or at least slightly arrest the decline of US production), and thereby place downward pressure on the price of oil. This seems like a laudable goal.

Second, you will hear a few people say that it's not worth drilling for more oil because the amount is tiny and would not be felt for many years. Just because something won't have an effect for 5 years doesn't mean it shouldn't be done. But it's not often appreciated how much oil we consume and how much impact US exploration might have. Of course the numbers are always open to debate, but take the two extremely massive oil fields recently discovered off the coast of Brazil. These are game-changing fields, some of the largest on the planet. They total roughly 10-20 billion barrels of oil equivalent. That's a lot, right? What if we found those hiding in the Gulf of Mexico? Even though we're pretty certain there's nothing that big out there, even so... These fields, if drained completely dry, would only serve the US's oil consumption for less than three years. Three years. The numbers are staggering. Drilling activists like to suggest that there is a ton of oil in the ground that the tree-huggers are just hiding from us. That's true: but the amount of oil is not particularly significant if you look at it from a multi-decade point of view.

Which brings me to the point I was really trying to make. The point is actually very simple.

There's not a lot of oil left at reasonable prices given current consumption and growth trends. It's very hard to tell whether this price spike is the beginning of the end, or just a head fake, but the end is coming in our lifetimes. As any petroleum geologist will tell you, "the end" will arrive with lots of oil left in the ground, just too expensive for our means. When that end comes, when oil is $1000 per barrel, gasoline is rationed for national security reasons, when poorer countries without access to alternative energy technology are going to war to secure the oil they need to fertilize their crops so they can eat and drink, what situation do you want the US to be in? With major reserves already tapped to secure a few extra years of $4/gallon gasoline? Or with major reserves available within our borders to provide the fuel the army and navy require to secure peace in this dangerous world? Do you want the strategic oil reserve to have been run down to keep the cost of Summer road trips low? Or in place to ensure the smooth functioning of the military when Mexico, Venezuela, and Nigeria won't have any oil left to export? Do you want to have burned all the oil to light up Starbucks signs at night, or have some left over to maintain the crop yields which allow our nation to produce something the rest of the world thinks is worth buying?

Oil is running out. Aside from the obvious implication that we should be working on alternative sources of energy (and oil at $140/bbl is doing that much better than any Congressional plan would), it's not obvious to very many people that we should keep what's left for ourselves. Every nation should be looking towards energy security, just as they look for food security. Why should we pawn our future for a few years of fun in the present?

Tenth Rule Redux: Vista is Terrible

I just got a new laptop with Vista 64 on it. In lieu of flowers, please send donations to the ACLU. But I digress.

It turns out you can't install Visual Studio 2005 from a mount point in Vista. You know, I thought it would be convenient to move on from the bad-idea-when-it-started-14-years-ago of always agreeing to map the same drive letter on each of my computers to the same actual share I maintain of software, music, etc. I thought, hey, Vista supports symbolic links. I'll map c:/users/public/software to \\myserver\public\software. And I'll install from there. I'm afraid not. The VS 2005 install just fails. Why? How can it even possibly tell the difference?

Here's the kicker. Even when I mapped my F: drive like we used to do in the gay nineties, the install failed. The amazing part? The install log still complained about not being able to read from c:/users/public/software. Deep down in the system, it knew that F: was also mounted to that point on my C: drive! How could it be? Neither F: nor c:\users\public\software are the 'real' location of that share. Good lord. How does Microsoft wait 20 years to implement links and then get them wrong?

It reminds me of Greenspun's tenth rule of programming:

    Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified bug-ridden slow implementation of half of Common Lisp.

To which I should add: Any sufficiently mature operating system contains an ad-hoc informally-specified bug-ridden implementation of half of Unix.
And in the great spirit of Raymond Chen, pre-emptive snarky comment: yes, that probably applies to most Unix implementations.

SDE 9.2's ST_GEOMETRY: Part Two, The Empire Strikes Back

I've been investigating ST_GEOMETRY: disappointment at slow performance on large datasets, followed by theorizing about the cost of out-of-process calls.

Well, in these things it's not the journey, it's the destination. I thought I had better compare to Oracle Spatial's SDO_GEOMETRY to make sure I was comparing apples to apples, as it were. The results are interesting.

…

For Oracle:

…

For SDE:

select count(*) from zip zi join county co
  on (st_intersects(zi.shape, co.shape)=1)
  where co.objectid between 1200 and 1500;
select sum(dbms_lob.getlength(sde.st_asbinary(shape))) from zip;

…

The somewhat surprising results are as follows (all times in seconds).

Operation   SDE/SDO   SDE/ST   ST_GEOMETRY   SDO_GEOMETRY
Join        25        25       6.0           ?
Scan        15        5        35            ∞

…

The lessons are certainly mixed.

Joins. …

Scans. …

SDE 9.2's ST_GEOMETRY: Part One, perhaps Part Last

Last time, I played a little with SDE 9.2's new ST_GEOMETRY support and was glad I waited for SP5 (!). More experimentation revealed that writing spatial SQL was easy. I've waited a long time to use ESRI-sanctioned methods for asking things like:

select * from zip zi join county co
  on (st_intersects(zi.shape, co.shape)=1)
  where co.population > 1000000;

And it does work. The spatial indexing gets used intelligently, and you can throw (variants of the above) into ArcMap and render spatial queries on the fly. I even started experimenting with doing on-the-fly geoprocessing with things like shape intersections.

I soon noticed that doing anything with more than a few hundred shapes was slow. Of course at first I blamed my queries, the Oracle optimizer, the spatial indexes, anything. But this ain't my first rodeo. That stuff was all fine. No, I was noticing that no matter what I did, ST_GEOMETRY couldn't deal with more than about 1000 shapes per second. Fancy spatial indexes don't do much good when the final geometry-to-geometry filter maxes out on so few shapes.

Why oh why?

Part of the problem seems to be because all ST_GEOMETRY functionality is implemented with an external C DLL, st_shapelib. This is how Oracle wants you to do it. Every call to an ST_GEOMETRY function is made out of process from the Oracle process serving your connection (oracle.exe on Windows) to a spawned executable (extproc.exe on Windows) via pipes or sockets. When intercepting a CREATE TABLE command and jumping in to create an index, these two process switches are no big deal. When determining whether one polygon overlaps another during a join operation over tens of thousands of individual polygons, it is murder. Quantifying the cost of a context switch is tricky, but at the very least it is thousands of cycles. Figure four thousand cycles, plus a few thousand for copying the data back (and forth), plus the (perhaps greater) cost of killing cache locality and dumping all your registers, and it's not a pretty picture.

Compare that to the cost of doing the few dozen or hundred operations needed for determining overlap of my typical polygons, and it's likely this out-of-process trick is killing performance by a factor of ten or more.

To make sure this wasn't just a fancy theory, I pulled up trusty perfmon.exe and had it count context switches. Here it is ticking along for a while on my idle machine, then executing a very simple query for a couple of seconds.

[screenshot: perfmon counting context switches per second]

Can you spot the query? The average number of context switches per second hovers around 850, then spikes to 20,000 during the query, then back.

For completeness, I wrote a test program which simply created two threads which did nothing but relinquish control and loop, trying to get an upper bound on the number of switches I could do. Trusty F# with its concision was useful here.

open System.Threading;;
let rec cswitch () = Thread.Sleep(0); cswitch();;
let makethread () = (new Thread(cswitch)).Start();;
makethread();;
makethread();;

And I was off to about a million context switches per second. That's an unrealistically high number, since the above code is just an infinite loop with no overhead whatsoever. But I would guess that 1 million no-overhead process switches translates to roughly the ~100,000 order of magnitude real-world process switches I was seeing with the st_shapelib calls. I also suspect it means st_shapelib is doing more than a trivial amount of work, which might mean there's room for performance enhancements, too. Just don't know.

But the context switching seems to be a fatal flaw. Oracle's EXTPROC isn't meant for this kind of fine-grained work.
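To put rough numbers on the in-process versus out-of-process gap, here is an illustrative sketch of my own (not from the original post; spawning a whole interpreter per call is a far heavier round trip than an EXTPROC pipe call, so treat the ratio as an upper bound):

```python
# Illustrative sketch: compare the per-call cost of an in-process function
# call with a per-call process launch. This is NOT the original experiment;
# spawning an interpreter is much heavier than an EXTPROC round trip, but it
# shows why fine-grained out-of-process calls are ruinous.
import subprocess
import sys
import time


def square(x):
    # Stand-in for a small geometry computation done in-process.
    return x * x


def per_call_in_process(n=100_000):
    start = time.perf_counter()
    for i in range(n):
        square(i)
    return (time.perf_counter() - start) / n


def per_call_out_of_process(n=3):
    # Each iteration pays for process creation, pipe setup, and teardown.
    start = time.perf_counter()
    for _ in range(n):
        subprocess.run([sys.executable, "-c", "print(2 * 2)"],
                       capture_output=True, check=True)
    return (time.perf_counter() - start) / n


if __name__ == "__main__":
    print(f"in-process:     {per_call_in_process() * 1e9:12.1f} ns/call")
    print(f"out-of-process: {per_call_out_of_process() * 1e9:12.1f} ns/call")
```

On any machine the second number is several orders of magnitude larger, the same shape of penalty the perfmon trace shows per ST_GEOMETRY call, just exaggerated.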
The next investigations involve checking out whether Oracle can be sweet talked into running these external processes in-process, but I'm willing to guess the answer is no. So, it was promising, but there's no way this architectural decision gives anyone the kind of speed they're looking for.</p> <p>It appears you can't write code in C in Oracle without being sent off to EXTPROC, so perhaps Oracle Spatial will suffer from the same problems. That'll be the next set of investigations.</p> <p><em>(This post was updated a few hours after being published with a few more thoughts on Oracle extensions.)</em></p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good 9.2's ST_GEOMETRY: Part Zero<p>It was over a year ago I sipped the <a href="">ESRI ST_GEOMETRY Kool-Aid</a>. Fiteen months later, things finally worked out where I could try out some small examples. This should be the first in a series of posts. We'll see. In brief, I'm glad I waited a year to try, and more time yet may be in order, depending on what you're trying to do.</p> <p>Installing SDE 9.2 is easy enough, and creating tables which use the ST_GEOMETRY type is easy, too. You just, you know, do it.</p> <code>CREATE TABLE THINGS (X INT, SHAPE ST_GEOMETRY)</code> <p>The critical documentation is called <a href="">Working with a Geodatabase using SQL</a>. It's easy to find the basic operations.</p> <p>What's not as easy is a few administrative details. First, setting up spatial references is no picnic. If you load data via ArcGIS, it magically gets taken care of for you. If you're loading it manually, you need to create your own spatial references, including resolution, offsets, and the whole nine yards. I haven't done that yet, and have been happy to let utilities create them. Unfortunately, these utilities create a custom SRID for each dataset I load, even though they have the same coordinate system. Each SRID has its own offsets and grid spacings. 
I'll have to figure out how to force data to be loaded into existing SRIDs, probably by labeling the column in SDE's metadata. I loaded data using shp2sde, which is certainly a cop out.</p> <p>Then comes querying data. If you just run and try "select st_astext(shape) from zip", you may get an interesting listener error: ORA-28575: unable to open RPC connection to external procedure agent. It turns out all the ESRI ST_GEOMETRY functions are implemented using an out-of-process procedure call, which the Oracle listener has to be configured to support. It comes this way with normal installs, but somewhere along the line I'd lost mine. Never mind, ESRI has a good <a href="">article explaining what to do</a>. <p>Now comes the part where I'm glad I waited a year. Doing any of the most trivial operations, such as the aforementioned "select st_astext(shape)..." resulted in a host of clear nastiness, the most common of which was ORA-28579: network error during callback from external procedure agent. A lot of hunting finally revealed that these bugs were only fixed in SDE SP5. (My money says they weren't all fixed; we'll see.) It turns out SP5 for Oracle 10g R2 was withdrawn a few days ago because of a nasty regression. Now this regression was unrelated to ST_GEOMETRY support and Friday afternoons of playing around with technology don't come around very often, so I was very relieved to find that <a href="">James Fee</a> had a copy of it <a href="">still available</a>. That fixed the bugs so I could start doing some real testing.</p> <p>More details in the next post, but first impressions are that doing some casual spatial joins (e.g. setting a where clause in ArcMap to only show cities which intersect Harris county) seems pleasantly fast. <em>(Update, 2008-May-5: <a href="">Further investigations</a> reveal this was only fast on small datasets. Spatial joins are slow, too.)</em> Selecting the data as WKB is intolerably and comically slow. 
This seems to be because the call to functions like st_asbinary is made out of process to Oracle's extproc.exe. I am currently testing on an old single core laptop, so it may be that a dual processor machine would handle more process switches per second, but it probably goes without saying that calling an out-of-process function to convert a few kilobytes of binary data from one form to another is extravagant. If this is the case, then while the spatial indexing and intersection type functions will be immensely useful, there appears to be no future at all (on Oracle, at least) in writing tools which actually go against the WKB representation made possible by SDE.ST_GEOMETRY. Unless you're happy to query less than 1000 rows per second.</p> <p>SDE, of course, goes "under the covers" when rendering data from ST_GEOMETRY columns, and so it's plenty fast. Something to ponder.</p> <p>Until next time.</p>Sebastian Good and C++/CLI<p>Just for future Googlers, it is possible to get a compile error from the C++/CLI compiler (that is, Visual Studio 2005 compiling C++ code with the /clr switch) which looks like</p> <code>C2860: 'void' cannot be an argument type, except for '(void)'</code> <p>This happens at absolutely random places where you reference .NET objects that it should already know about. (The error will further read, if you look at the detailed compiler output, something like "This diagnostic occurred while importing type 'Fred::Ethel' from assembly 'cppclitestingfs, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null'.")</p> <p>Muh? It turns out that in this case the class I was using referenced, in its definition, a class which came from another assembly (FSharp.Core, as it turns out, but it could have been anyone.) Instead of saying something useful like 'Fred::Ethel references type Option<A> which is not defined', it said ''void' cannot be an argument type'. Obviously.
</p> <p>My completely uncalled-for guess? Someone on the C++/CLI compiler team substituted the word 'void', perhaps by accident, when an imported type had an unresolved external reference. Another compiler pass downstream complained quite rightly about this, but lacked context to give a more meaningful message. Is there a place one submits bugs in Microsoft compilers that is not informationally equivalent to a black hole?</p>Sebastian Good was Easy! Canceling a credit card<p>It must be a sign that I don't spend enough money. I called American Express today to cancel a credit card. I hadn't used it in a few years, so I didn't expect them to fight much. Imagine my surprise when "cancel a card" was a second-level item on their automated telephony system! Most companies make it impossible to find their phone number, never mind make it easy to find out how to cancel your relationship with them. I was further flabbergasted when I found out that my cancellation merely required me pressing a number on my phone; no high-pressure tactics from call center salesmen! It almost made me feel bad to have canceled when the company provided such good customer service! But never mind, it makes me want to keep the existing Amex charge card I have even more: it's too damn hard to find good customer service these days.</p>Sebastian Good References between Debug and Release Builds in Visual Studio 2005<p>What if you want to reference different DLLs in Debug and Release builds of your favorite .NET project? Perhaps you've got some unmanaged DLL spat out by a system that you don't control, but that produces different builds? Or, as in my case, you've got an F# assembly you want to reference from a C# assembly, but F# doesn't yet support "project references", so you have to link directly to the DLL.
I'm not convinced what I did will always work, but it seems to right now.</p> <p>The references in your project (in this case a C# project) are held in a tag called <ItemGroup>. It turns out you can add restrictions to that tag just like you can the other tags in your project, e.g. giving your assembly different names in debug and release modes. So you can end up with something like this.</p> <code> <ItemGroup> <Reference Include="System" /> <Reference Include="System.configuration" /> </ItemGroup> <ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' "> <Reference Include="my favorite debug dll"> <SpecificVersion>False</SpecificVersion> <HintPath>..\..\Debug\so-and-so.dll</HintPath> </Reference> </ItemGroup> <ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' "> <Reference Include="my favorite debug dll"> <SpecificVersion>False</SpecificVersion> <HintPath>..\..\Release\so-and-so.dll</HintPath> </Reference> </ItemGroup> </code>Sebastian Good smoothly oiled Starbucks PR Machine<p>Starbucks closed recently to <a href="">retrain their baristas</a>. Why was this necessary? A massive overexpansion, poor training program, sluggish growth, focus on sandwiches and music instead of coffee, and competition from Dunkin Donuts and McDonalds? It seems their US president was <a href="">shown the door</a> today. Not pretty.</p> <p>But the PR machine is operating perfectly. The cover story was that Starbucks was going to "revive the intimate, friendly feel of a neighborhood coffee shop". You mean the ones they try to put out of business? Still, the entire planet was notified of the planned three-hour closing. It was considered national news in the United States, complete with radio and TV interviews of Starbucks staff. Tens of millions of dollars of free advertising in return for shutting down during three slow late evening hours.
I thought that was smart.</p> <p>But what if you were thinking of working at Starbucks and read about the training? Apparently it was pretty medieval. Who wants to take a job where you have to be trained to be a snob? Seriously, and at a part-time fast food job? Forget it.</p> <p>No problem. Starbucks staff get health benefits, and no one should forget that. What if there was a health-related story about just how amazingly cool Starbucks staff were? What if the story were so irresistibly sweet that it would have to make the national papers, just like the shut down? This might heal any potential damage the shut down did, right? <em>That</em> would be smart!</p> <p>Shoot on over to today's <a href="">human interest piece</a> in the New York Times about, get this, a Starbucks barista who <em>donated her kidney to a customer</em>. Not to take away from the impressive generosity this woman showed, but why didn't this story make the rounds last fall when it happened? Right in time for the Christmas season perhaps?</p> <p>Nah, probably better that it showed up today to either remind workers that Starbucks is great, even though they train the hell out of you, or to totally bury the news that their US president got the sack today.</p> <p>Say what you like about their coffee, their business practices, or their organ donors, but their PR department kicks all ass. It almost makes you feel sorry for the schmucks at the New York Times who swallowed this hook, line and sinker instead of reporting on, oh, the massive cost of the war or something useful. Almost.</p> <p>Postscript: There is apparently some evidence that Starbucks's ubiquity leads to an expansion of demand for coffee so large that many neighborhood coffee shops have increased sales after a Starbucks moves in across the street...
Anyone wonder who planted that story?</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good Newman Rocks<p><a href="">Randy Newman</a> is brilliant, but has anyone else noticed that the intro to his main <a href="">Monsters Inc.</a> theme is the same as the intro to the famous ragtime hit Temptation Rag? The arrangement is even identical to the Benny Goodman version, with clarinet and xylophone melody.</p> <p>How about that?</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good. Mono Just Works.<p>So I continue work on a project which I want to write in .NET but needs to run on Red Hat Enterprise Linux (an ancient version, I'm sure) and Vista. And my home computer is a Mac. Surely this is a recipe for <a href="">Mono</a>.</p> <p>I started gingerly, programming in Visual Studio 2005 and building comprehensive unit tests. I run them with Test Runner or NUnit directly in our automated build. Fine.</p> <p>Then I got a little braver. I installed Mono on our Linux machines. (Long story, actually, and not very amusing. It had to be compiled from source and one of the source files edited because this version of Linux is so old some of the thread functions now have different signatures. But never mind.) Not wanting to jump in whole hog, I simply copied the compiled DLLs from the Windows machine to the Linux machine. Mono comes with a version of NUnit, so I started it up.</p> <code> linux-machine:/glb/home/myuserid: bin/nunit-console2 MyProject.4.21.47 Mono Version: 2.0.50727.42 ...... Tests run: 6 (all pass), Not run: 0, Time: 3.859981 seconds Tests run: 6, Failures: 0, Not run: 0, Time: 3.859981 seconds </code> <p>Wow. Just like that. This code isn't exactly running a nuclear reactor, but it is out there multithreading, doing file I/O (with locks), XML serialization, and all manner of fun. 
Mono even gets Linux's different file naming conventions correct, and thanks to defensive coding of file-handling functions, no change needed on my part. (Hint: learn to love <a href="">System.IO.Path</a>).</p> <p>Well, if it works on Very Old Red Hat Linux, surely it'll just work on Mac OS X, right? Yes, why not. I did "svn up" and got the code and binaries down to my little Mac called, er, LittleMac. nunit-console2 reported the same happy results.</p> <p>One binary, three runtimes.</p> <p>But why stop there? Why not try the Mono compilers, too? Other than masochism, no particular reason, but I was intrigued by the idea of being able to debug in Linux. After all, the programmers this is supplied to will probably want to occasionally debug the code, and they'll want to do it in Linux as well as Windows. Why not try <a href="">MonoDevelop</a>? Heck, it's included with Mono downloads. Turns out it's file format compatible with the .sln and .csproj files Visual Studio 2005 uses. I just opened it on the source code I'd downloaded from Windows. I pressed Build, then ran nunit-console2 again. Same results. That was on the Mac.</p> <p>It just worked. I'm sure my honeymoon will be over soon as I'm just scratching the surface, but I'm very encouraged. Will the auto-generated C++ wrappers for my C# code work just as pleasantly on all these platforms? Stay tuned.</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good Begins Self Construction<p>Yep, Google is now constrained by the very infrastructure of the Internet. From the <a href="">Official Google Blog</a>.</p> <blockquote>... One of the biggest challenges we face is staying ahead of our broadband capacity needs, especially across Asia ... Collectively we just signed an agreement to build a new high-bandwidth subsea cable system linking the U.S. 
and Japan.</blockquote> <p>I think there's a couple of interesting points here.</p> <ul> <li>Google's demand on bandwidth is greater than any one company can provide. This seems to give it a great deal of leverage, as it probably makes a higher margin converting that bandwidth to advertising than the telcos make converting boats and electricity into bandwidth. Google's incessant preaching about how it just wants an open Internet to compete on is probably true. The bigger the Internet gets, the more money they make. But clearly they have switched from roughly passive beneficiaries of Internet advancement to aggressively self-interested builders of the Internet.</li> <li>While Google may not be getting into the telco business (they emphatically deny it in this release) their leverage and reach make them natural competitors to the telephone companies. Everyone would love to see the telcos suffer for their crappy service and monopoly behavior. But the telcos were granted their monopoly by the government and so still have to (theoretically) abide by serious regulations. Google earned their monopoly all by themselves. Who are they beholden to?</li> <li>The speed of light is still a hard limit. Why isn't Google building more data centers in Japan? Why a big cable from the US to Japan? Is there that much youtube content generated in one country and then viewed in another? Perhaps, but it makes me wonder whether Google's <em>internal</em> traffic needs are on a par with or greater than its <em>external</em> traffic needs. That's pure speculation.</li> <li>Where I came up, a "couple" can mean more than two things.</li> </ul> <p>When will Skynet become sentient? I console myself that any soul who comes into being through reading the Internet will probably be so confused as to be harmless. Will it be addicted to porn? Unable to rectify the political rants from right to left? Get into Wikipedia editing contests with itself?
Let's just hope the launch codes for the nukes aren't online anywhere.</p>Sebastian Good Automation: Man the Message Pumps!<p>The <a href="">PUG</a> is coming up, so it seems an opportune time to serve a little <i>amuse bouche</i> to get people in the mood. We write a desktop application which wants very much to be an ArcMap plugin. It creates layers, maps, and generally talks to people's real live ArcMap sessions. But we don't want to be an ArcMap plugin, because frankly the world's a lot bigger than ArcMap. We do other stuff, too. So? We automate ArcMap.</p> <p>We tickle it remotely. You want to create IFeatureLayers in a map? You can do it cross-process, thanks to the beauty of DCOM. We do this in C# where it doesn't blow our minds, and it works. It's actually pretty clever.</p> <p>ESRI's main applications (like ArcMap) expose an IObjectFactory interface, through which you can create remote objects. We abstract that so that our functions can run either on our server (creating local map documents on the fly) or on a desktop (tickling ArcMap remotely). Like so:</p> <code> public static object Create(Type type, IObjectFactory factory) { string progId = GetProgId(type); return factory == null ? Activator.CreateInstance(Type.GetTypeFromProgID(progId)) : factory.Create(progId); } </code> <p>Cool. So you can use this factory to create layers remotely; it looks like the code you'd write in a plugin, but you don't have to be a plugin.
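<p>The dual-mode idea in that Create function generalizes beyond COM: if a remote factory is present, delegate creation to it; otherwise activate locally. Here's a minimal Python sketch of the same pattern — all the class and ProgID names below are illustrative stand-ins, not ESRI's API.</p>

```python
# Local-or-remote creation, mirroring the C# Create(): a null factory
# means local activation; otherwise the remote process instantiates
# the object by ProgID. Names here are hypothetical.
from typing import Callable, Optional

class LocalLayer:
    def __init__(self):
        self.where = "local"

class RemoteFactory:
    """Stand-in for a remote IObjectFactory: creates objects in another process."""
    def create(self, prog_id: str):
        obj = LocalLayer()
        obj.where = f"remote:{prog_id}"
        return obj

def create(prog_id: str, factory: Optional[RemoteFactory], local_ctor: Callable):
    # factory is None -> create in this process; else ask the remote side.
    return local_ctor() if factory is None else factory.create(prog_id)

layer = create("esriCarto.FeatureLayer", None, LocalLayer)
assert layer.where == "local"
layer = create("esriCarto.FeatureLayer", RemoteFactory(), LocalLayer)
assert layer.where == "remote:esriCarto.FeatureLayer"
```

<p>The calling code never needs to know whether it's driving a live ArcMap session or building documents server-side — which is the whole point of the abstraction.</p>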
For instance</p> <code> public static IGeoFeatureLayer CreateFeatureLayer( bool visible, string layerName, string layerDescription, IFeatureClass featureClass, IFeatureRenderer renderer, bool legendVisible, string legendLabel, string labelExpression, bool isLabelled, string displayField, bool showTips, IObjectFactory factory) { IGeoFeatureLayer layer = (IGeoFeatureLayer)InteropUtil.Create(typeof(FeatureLayer), factory); layer.Visible = visible; layer.Name = layerName; if (layerDescription != null) ((ILayerGeneralProperties)layer).LayerDescription = layerDescription; layer.Renderer = renderer; SetLayerLegendProperties(layer, legendVisible, legendLabel); SetFeatureLayerLabel(layer, labelExpression, isLabelled, factory); if (displayField != null) layer.DisplayField = displayField; layer.ShowTips = showTips; layer.FeatureClass = featureClass; return layer; } </code> <p>Brilliant. You're in, but you're not a prisoner. The exact same approach can be used for automating MS Office. It's built this way on purpose, and it's cool. Well there's a twist. You can write this code, and you can call it, but it usually doesn't run. It takes minutes or hours to complete, if it completes at all.</p> <p>You scratch your head. You ponder. Then, absolutely randomly, you notice that when you move your mouse repeatedly over the ArcMap window, it finishes faster. Surely this can't be, you say. I'm being fooled by randomness. But this persists.</p> <p>Finally you break out the DCOM scriptures and note that DCOM messages between processes arrive as windows messages, just like Alt-Tab requests, minimize requests, and yes, mouse movements! Could the DCOM messages your app is sending be stalled upon arrival to ArcMap?</p> <p>It turns out the answer is yes. ArcMap does not properly receive these DCOM messages without intervention. You have to force them to pump their Windows message pump. 
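<p>What forcing that pump can look like, in outline: a background thread that keeps nudging the target window for as long as the blocking call is in flight. This is a portable Python sketch of the structure only — on Windows the nudge would be something along the lines of posting a benign window message to the HWND (e.g. via ctypes and user32's PostMessage); here the nudge is an injected callable so the shape is visible and testable.</p>

```python
# Sketch of the "PAY ATTENTION!" pinger: while a blocking cross-process
# call is pending, a background thread periodically nudges the target's
# message pump. The nudge callable is a hypothetical stand-in for a
# Win32 PostMessage to the target HWND.
import threading

class RemoteObjectPump:
    """Usage: with RemoteObjectPump(hwnd, nudge): make_blocking_dcom_call()"""
    def __init__(self, hwnd, nudge, interval=0.05):
        self._stop = threading.Event()
        self._thread = None
        if hwnd == 0:
            return  # no window handle: do nothing (mirrors the original class)
        self._interval = interval
        self._thread = threading.Thread(target=self._run, args=(nudge,), daemon=True)
        self._thread.start()

    def _run(self, nudge):
        # wait() returns False on timeout, True once stop is set
        while not self._stop.wait(self._interval):
            nudge()  # force the target to service its message queue

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._stop.set()
        if self._thread:
            self._thread.join()

calls = []
with RemoteObjectPump(hwnd=42, nudge=lambda: calls.append(1), interval=0.01):
    import time
    time.sleep(0.1)  # pretend this is the blocking DCOM call
assert len(calls) >= 1  # the pump fired while we "waited"
```
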
Moving the mouse does that; it's just a nice benefit that while responding to the mouse move messages, they also process the DCOM calls you're making to create layers.</p> <p>Telling users to scribble over the ArcMap process with their mouse while we were talking to it seemed like it'd get a lot of laughs in our training courses, and not the good kind. So what else could we do? We wrote a function which, while the DCOM call is pending, pelts ArcMap's message pump with messages, forcing it to work. It's like screaming at the top of your lungs "PAY ATTENTION PAY ATTENTION PAY ATTENTION TO ME!!!!". Hey, who are we to argue with a <i>prima donna</i>?</p> <p>Thusly:</p> <code> /// example: using(new RemoteArcObjectsPump(app.HWND)) { ... } public class RemoteArcObjectsPump : IDisposable { /// Constructs and automatically begins pumping. If HWND is zero, no timer is created. public RemoteArcObjectsPump(int hWnd) { if (hWnd == 0) return; _hWnd = hWnd; _thread = new Thread(new ThreadStart(Ping)); _thread.ApartmentState = ApartmentState.MTA; _thread. // ... [snippet truncated in the original post] </code>Sebastian Good are thou Topology? (A Plague On Both Your Houses!)<p>Not a lot of ESRI posts here recently for a mostly happy reason: there's been little ESRI programming in my universe the past few months. I've been working on a fun little wrapper generator for managed code called GIWS (no, not about the Chosen Tribe, it's SWIG backwards. Get it?) Anyway, that's a post for another time.</p> <p><b>Today I come not to praise ESRI Topologies but to bury them</b>. One of our products has a feature where you download a little personal geodatabase from our web application to your desktop to do detailed editing in ArcMap, then you send it back to us and we unpack it. We're thinking of scrapping the whole approach as it turns out to make sense to programmers, but not to end users where it is deployed.
But that's yet another post for another time.</p> <p>Anyway, we take care to create the feature classes in this personal geodatabase in a topology, complete with rules about overlaps, to help users create clean datasets. Several of our feature classes are essentially coverages: the polygons need to be non-overlapping. We thought that by creating a topology automatically we'd be doing our users a favor. They're tedious to put together, and somewhat intricate.</p> <p>Well, two years later we're getting rid of the topologies. I thought it might be worth sharing our reasoning, as I'm curious whether anyone else has found similar problems. (FWIW, we're using 9.0. Doing work for a big company means you have the awesome upside of knowing your work matters on a large scale. The downside is usually being 12 months behind the technology curve. That's okay.)<p> <ul> <li><b>Topology algorithms find uncorrectable problems</b>. We had many instances of topology scans finding, for example, polygon overlaps, which turned out to be degenerate lines, points, or even invisible artifacts. These seemed to be associated with datasets where original work had been done in a projected coordinate system, then projected into the WGS1984 we use internally. I understand that projection might cause points to snap differently, creating errors. We all get it. But the problems detected would turn out to be invisible or uncorrectable. That aggravated people.</li> <li><b>Topology algorithms are different than geoprocessor algorithms</b>. The feature classes users edit were inputs to a series of geoprocessing algorithms. Nothing exciting, mostly intersections followed by some algebra on the attributes. But quite often a topology check would claim no overlaps, while an intersect would show overlaps. (It forced us to do a sanity check before geoprocessing of doing a self-intersect on each layer and asserting that the number of input polygons equalled the number of output polygons.) 
We did not spend time to figure out who was right -- and given well-known <a href="">robustness issues</a> in spatial algorithms, it may well be that both are correct. But since our results are created by the geoprocessor, we decided to use it.</li> <li><b>Topological Editing is a very advanced skill—people don't like learning it</b>. We had trouble convincing our users to learn the topological editing tools. Heck, normal editing in ArcMap is hard enough. I couldn't really blame them.</li> <li><b>Topologies add awesome bloat to geodatabases</b>. We were seeing geodatabases with a dozen simple feature classes bloating to 700MB after editing. Compacting them would take them back to 2MB. Ouch. We know databases need to be compacted now and again, but this was a little much for us.</li> <li><b>Topologies would cause obscure COM errors in our geoprocessor</b>. This one may be sort of our fault. We are using the 9.0 geoprocessor in-process on the main STA thread of our desktop application. It doesn't seem to like that, and we're contemplating running it as an out-of-process python script on demand. Nonetheless, the stability of our tool has increased since we stopped including topologies. Given the above reasons not to use topologies, it wasn't worth debugging this one.</li> </ul> <p>We really wanted topologies to work. They make sense, they reflect how people really think, they ought to be the bee's knees. But we were ultimately disappointed. Perhaps they're better in 9.2 or 9.3, but I doubt we'll try them again. We coded our own overlap detection and repair tool for people who can't use the ones already out there (e.g. <a href="">ET Geowizards</a>).</p>Sebastian Good a Neogeographer Today<p>Having had the delightful opportunity to recently hear "neogeographer" bandied around (in person!) like a swear word, I've been keen on the recent discussion involving it in the GIS press, e.g.
on the <a href="">All Points Blog</a> or <a href="">The Memory Leak</a>.</p> <p>It reminded me of how much trouble I got into for being such a sourpuss in my <a href="">analysis of the 2007 ESRI UC keynote session</a> and ESRI's strategy in general. Six months ago, I said </p> <blockquote>True "professionals" will make "authoritative" data and "publish" it to these "free, consumer" services.</blockquote> <p>I'm not saying I said it first. But that schism is opening wider and wider.</p> <p>And I'll say it again: GIS departments need to go the way of the VARCHAR department. There will always be an important place for the surveyor and geodesist who really can tell me if I'm sinking a well within 10 meters of the exact seismic trace I had in mind. But the cult of the GIS dinosaurs simply needs to get out of the way of all the millions of people whose creativity and knowledge can be unleashed by "neo"geography tools. The VARCHAR department will still have well-paid experts who can beautifully typeset new hardcovers and textbooks, but it's just not necessary to use desktop publishing software when a TEXTAREA or Twitter post will do.</p> <p>Hey GIS programmers: barbarians are at the gates! Do us a favor and let a few in!</p>Sebastian Good's VMWare Image: Sometimes a ZIP is not a ZIP<p>So it finally came down to it: I wanted to run Linux in order to test some Mono interoperability code. We are writing libraries that will be used in .NET/Windows environments as well as C++/Linux environments, and getting the pointer buggery correct is important. We've got a C++ program (with a 25-year pedigree) that needs to start using components we'd just as soon write using .NET.</p> <p>This is a case where I am not comfortable just testing some things in Mono on OS X because the issue is not managed code, but specifically unmanaged code.
OS X has enough of its own skeletons; I don't want to sweat the wrong details. So I've gone off to the <a href="">Mono site</a> and downloaded their most recent <a href="">VMWare image</a>. It downloads in 20 minutes or so and unzips in another five. In a strange twist, I found that if I unzipped it using OS X's command line <strong>unzip</strong> command, the command complained about corrupted zip entries. VMWare started the unzipped mess properly, but nothing quite worked correctly in the resultant image. I used the GUI unzip (whatever happens when you double-click the ZIP file in Finder) and it worked like a charm. Humph.</p> <p>So I guess my Linux world for the next few weeks or months will be openSUSE 10.2, Novell's pet distribution. We'll see how it goes. I love that every few years as I play with Linux I have to learn a new package management library.</p>Sebastian Good Can a RESTaurant Teach You About REST? (Whataburger)<p>We're talking about <a href="">RESTaurants</a>. Today: one of Texas's great institutions, <a href="">Whataburger</a>. ("What a burger!")</p> <p><em>Mmm... burgers!</em></p> <p>Stay focused. We're talking about State Transfer (the ST in reST). So when I'm driving between offices, I often stop by a Whataburger to get some lunch. When I arrive, I need to know how to order my food.</p> <code> GET /order/ HTTP/1.1 Host: whataburger42.example.com</code> <p>And Whataburger #42 kindly responds with two links I can follow to make that choice: Drive Through or Dining Room?</p> <code> HTTP/1.1 200 OK Date: Mon, 23 May 2005 22:38:34 GMT Server: Rosalinda Content-Type: text/html; charset=UTF-8 <ul> <li><a href="/order/drivethru/">Drive Thru. Cars in line: 9</a></li> <li><a href="/order/diningroom/">Dining Room.
Cars in parking lot: 14</a></li> </ul> </code> <p>It's very likely the cache headers on that response would only be for 30 seconds or so—it's a busy Whataburger at lunch time. So this is classic REST: we're using HTTP to retrieve hyperlinks that navigate us through application state. The hungry client (me!) can choose between two options, and the exact method for specifying which one is simple: I follow a link.</p> <p><em>You're a Houstonian. Surely you always take the drive through?</em></p> <p>Surely. Well, actually in the above case I'd almost certainly go into the dining room. And therein lies the interesting lesson of the Whataburger concurrency dilemma. The two order pipelines handle state transfer entirely differently, because of concurrency and correctness issues. I won't keep posting silly HTTP transcripts, but you can play along as if I did. So whether I'm in the dining room or the drive through, I need to order my food. We might imagine that a GET of the resources presented above (e.g. /order/drivethru/) returns a <form> which I can POST to in order to create an order. This works in both places, right?</p> <p>Actually, sort of. And here we get back to the concept of <strong>getting it right</strong>. If I'm ordering food during the incredibly busy lunch hour, my order goes through a <em>canonicalization</em>. If it's 3:30 in the afternoon, the B-team is on staff and just takes my order and gives me a number. Why? Let's imagine I'm in the dining room and order (POST) something like "Um, I'll take a #1 meal, with onion rings instead of fries, and a drink. Oh, no pickles." At lunch time they can't afford to screw up their pipeline of burgers, so they'll ask for some clarification. "Do you want cheese with that?"</p> <p><em>Of course, everyone loves cheese.</em></p> <p>What does that look like in a REST universe? The restaurant doesn't even need to issue me an order number before they come back with their upsell/correction.
So I imagine they'd simply return yet another <form> which I'd need to fill out. Perhaps a <form> to fill out with restricted options: the order you gave me, or the order you gave me with cheese. If I'm in the drivethrough, they always canonicalize the order in a certain form to absolutely minimize confusion. My order becomes a "#1 w/cheese, no pickles, onion rings, diet coke". That's another representation of the same resource (my order), but the server is insisting on canonicalization because at lunch time <strong>getting it wrong</strong> is too expensive.</p> <p><em>Got it. I'll be ready the very moment cars come standard with an HTTP client.</em></p> <p>Don't be a smart ass. This simple transaction (and we're not even done yet!) already elucidates one example of choices for managing state. We POSTed to a URL which gave us another form to interact with. In the real world, the state of that particular application involves Rosalinda—our ever-cheerful cashier—and me remembering what we're talking about. If we came back in an hour, we'd have to start at the "Um". But in our REST example, the state of the conversation is <em>entirely contained</em> in the form I got back asking me if I wanted cheese with that. Rosalinda doesn't need to give me an order number or alter her databases.</p> <p><em>When does this become relevant to my day job?</em></p> <p>Lots of transactions need canonicalization. <a href="">Geocoding</a> is a great example. We've all typed "800 8<sup>th</sup> St" into a mapping program and had it answer not with the map we expected, but a form or list of hyperlinks asking whether we meant to ask for that address in Port Arthur, Hempstead or Port Neches. (It could even be an "HTTP 300: Multiple Choices" response, but something tells me that fine a reading of the HTTP spec is some years away.) Those links contain the entire correct <em>canonicalized</em> address, and the server doesn't need to remember it was talking to you.
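<p>That geocoding canonicalization can be sketched in a few lines. This is a toy Python model, not any real geocoder: the street data and zip codes are made up, and the "300 Multiple Choices" response is just a tagged return value. The point it demonstrates is that every candidate link carries the <em>entire</em> canonical address, so the server keeps no per-client state.</p>

```python
# Toy canonicalizer: an ambiguous address yields several fully canonicalized
# choices (think HTTP 300 with hyperlinks). Addresses and zips are invented.
STREETS = {
    ("800 8th St", "Port Arthur"): "800 8th St, Port Arthur, TX 77640",
    ("800 8th St", "Hempstead"):   "800 8th St, Hempstead, TX 77445",
    ("800 8th St", "Port Neches"): "800 8th St, Port Neches, TX 77651",
}

def geocode(query: str):
    """Return one canonical address, or a list of candidate 'links',
    each containing the whole canonicalized address (no session needed)."""
    matches = [canon for (street, _), canon in STREETS.items()
               if street == query or canon == query]
    if len(matches) == 1:
        return ("200 OK", matches[0])
    return ("300 Multiple Choices", sorted(matches))

status, choices = geocode("800 8th St")
assert status == "300 Multiple Choices" and len(choices) == 3

# "Following a link" is just re-asking with the canonical form:
status, result = geocode("800 8th St, Port Arthur, TX 77640")
assert status == "200 OK" and "Port Arthur" in result
```
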
My request for 800 8<sup>th</sup> St Port Arthur is indistinguishable from the less knuckleheaded person's request who asked for it correctly in the first place. I arrived at the same application state. Yes, I wanted cheese.</p> <p><em>That's a good point, you said the drive through and the dining room were different, but all we've talked about is identical canonicalization processes.</em></p> <p>Well, both order channels have guided me through their state identically so far. In the dining room, my final POST of a canonical order results in the creation of an order resource: I get a little orange plastic order number: 23. In the REST world, I am told my order now exists at /order/diningroom/23. I can GET the status of that resource as often as I like. Is my burger ready yet? Is my burger ready yet?</p> <p>Rosalinda's co-worker Randall is happy to tell me as often as I ask that my order is or isn't ready yet. But he tells me immediately. And he's also answering my fellow hungry diners' queries as well: #21, #22, and #24 are all asking. In the dining room, food is served asynchronously. When it's ready, it's ready. It would not be unusual to get my order before #21 if he also ordered a milkshake and biscuits. One of these times, my GET will result in a beautiful (digitally signed!) cheeseburger. (And this is why you choose the dining room over the drive through when the drive through is long. You can get your food in the average time it takes to prepare it under load, not the sum of times it takes the people ahead of you to fill their orders.) After my digitally signed burger is eaten, if I GET at the same resource again, I will probably be told "HTTP 410: Gone" or "HTTP 404: Not found". (Yes, I'm totally ignoring security and the possibility someone will steal my burger by guessing my order number. There are many orthogonal ways to handle that.)</p> <p>Back in the drive through, the state transfer is totally different.
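<p>The dining-room lifecycle just described — POST creates an order resource, GET polls it as often as you like, and once the burger is handed over the resource is Gone — fits in a tiny model. This is an illustrative Python sketch (class and URL names are mine, not a real API); the key property is that the client carries the state (its order number) and the server forgets as much as it can.</p>

```python
# Minimal model of the dining-room flow: create, poll, pick up, Gone.
import itertools

class DiningRoom:
    def __init__(self):
        self._orders = {}                      # order number -> status
        self._numbers = itertools.count(21)    # little orange plastic numbers

    def post_order(self, canonical_order: str) -> str:
        number = next(self._numbers)
        self._orders[number] = "preparing"
        return f"/order/diningroom/{number}"   # 201 Created, in effect

    def mark_ready(self, url: str):
        self._orders[int(url.rsplit("/", 1)[1])] = "ready"

    def get(self, url: str) -> str:
        number = int(url.rsplit("/", 1)[1])
        if number not in self._orders:
            return "410 Gone"
        if self._orders[number] == "ready":
            del self._orders[number]           # burger handed over
            return "200 OK: one cheeseburger"
        return "200 OK: not ready yet"

room = DiningRoom()
url = room.post_order("#1 w/cheese, no pickles, onion rings, diet coke")
assert room.get(url) == "200 OK: not ready yet"   # poll as often as you like
room.mark_ready(url)
assert room.get(url).endswith("cheeseburger")
assert room.get(url) == "410 Gone"                # eaten; the resource is gone
```

<p>Notice how cheap the polling is for the server: answering "not ready yet" touches one dictionary entry, which is exactly why Randall can field dozens of diners at once.</p>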
A simple order number and asynchronous handling is not enough. Cars must be served in order. I have to wait in line. My POST to create a new resource won't return a nice URL I can poll on. It will probably block until it returns the digitally signed cheeseburger. I get on a busy web server and had to wait behind other requests. Where was the state? It was all in the server: shuffling connections, building & servicing queues, etc. As far as I was concerned, the application was stateless.</p> <p>But at what cost?! The server (the drive through) had to maintain an open connection with me that whole time. And remember what my order was. Heavy duty, man. And it's not very scalable. In the dining room, Randall could easily handle dozens of diners asking him where is order was. The diners held onto their own state—he hardly had to remember anything! But in the drive through, cars are waiting in line and waiting in line and my dreams of a fast lunch are shattered when I see the car in front of me ordering 12 burgers for her office lunch.</p> <p>In return for the simplicity of simply POSTing a blocking call (easier to program—you can leave the air conditioning on), the server takes on a heavy burden. The Whataburger near my office chooses an alternative to asynchronicity in an attempt to scale: they have two drive-through lanes. When I POST my order, I am probably getting an "HTTP 302: Found" or "HTTP 303: See Other" telling me which drive through URL to make my blocking post to (e.g. /order/drivethrough/1).</p> <p><em>Now I'm hungry</em></p> <p>Not me. I got today's burger at the <a href="">neighborhood beer joint</a> a few hours ago. They handle their scalability and state transfer issues like Whataburger's dining room: my number today was 8.</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good Can a RESTaurant Teach You About REST? (part 1)<p>I'm buying a house soon. That's the sort of transaction you have to get right. 
No matter what the <a href="">cool kids</a> are saying about <a href="">transactions being dead</a>, I'll fight <a href="">Brewer's conjecture</a> all the way the bank. I want my house, and the seller wants his money. It's definitely not okay for one of us to end up with both. Why? Because it's too hard to correct the error[*]. Therefore we pay the not inconsiderable overhead of title companies, escrow agents, loan officers, wire transfer fees, etc. to <strong>get it right</strong>. It's way cheaper than going to court.</p> <p>But in the rarefied world of blogosphere REST pundits, we evangelize the webby way for lots of things. Fire and forget. Assume your communication channel is going to fail a lot. Assume statelessness. This would totally suck for buying a house. Imagine having to bring all 300 pages of documentation required to every meeting you attended—tax returns, site surveys, credit reports. And how could anyone be sure you were being consistent? Boy I would <em>love</em> to have brought a different set of papers to the loan officer as I did to the IRS. (Hello, sub-prime mortgage crisis! But I digress.)</p> <p><em>But aren't we talking about RESTaurants? I thought it was a clever pun.</em></p> <p>Sorry, yes we were. But the point was to introduce the cost of <strong>getting it right</strong>. And besides, it's hard to concentrate on anything else when you're buying a house, so indulge me.</p> <p>What, honestly, is the cost to you if a restaurant fails to <strong>get your order right</strong>? Mistakes are made all the time by waiters, customers, busboys, managers, and cooks. Yet, unlike closing on a house, you do not sign contracts or involve lawyers when you order food at a restaurant. In fact, you don't even do a credit check; centuries of social convention have blessed us with a system where it is assumed you can pay for dinner and only have to prove it at the end.</p> <p>Or has it? 
I once booked an anniversary party for my company at a tony restaurant in Las Vegas. I had to put down a credit card to hold the table. Why there and not at, say, my <a href="">neighborhood pizza joint</a>? The answer is fairly obvious: if I flake on my big party at said tony restaurant, they're out a private room and four figures. If I flake on dinner with my wife at said pizza joint, they'll fill the table anyway most nights, and if they don't, they're out 20 bucks, tops.</p> <p><em>Okay, now you're bugging me. REST pundits are supposed to wax poetically about URL design and resource representations. You know, <a href="">eBay transactions are really resources</a>, what is <a href="">the URL of a pixel</a>[**]?—that sort of thing. This is a rambling diatribe about transactions, not REST. You can't fool me! Though you are at last talking about restaurants.</em></p> <p>Well, REpresentations are only half of REST. (As measured by the letters they get in the acronym.) State Transfer is pretty damn important in a stateless protocol. Transactions are only one kind of state transfer. And what I'm warming up to talk about (warming up, get it? restaurants? anyone? is this thing on?) is state transfer. How does that latte get to you at Starbucks? My jalapeño sausage at the local barbecue joint? My #1 Meal, cheese, no pickles, onion rings and a Diet Coke at Whataburger? My Kansas City Strip at Delmonico? Getting there involves many state transfers, and each of these restaurants has chosen a different system.</p> <p><em>But now you've gone and spent all your time on silly jokes.</em></p> <p>So I have. See you tomorrow.</p> <p>[*] Oh yes, funny story. When I bought my first house, the combination of my naiveté and an under-trained clerk at the title office led to me bringing a personal check for the down payment on my house. It was an average house, but a 20% down payment still made the check five figures. 
The title company protested that they couldn't be expected to float that kind of money waiting for my check to clear. And besides, we had already signed so much paperwork and gotten everyone in the same room, that it seemed unlikely we would want to do this again in 5 days. I said, "The check won't bounce, and if it does, what's the problem? You know where I live, right?" Cursory examination of the 300 pages of paper signed that afternoon indicated more than a few copies of my old and new addresses. Some uneasy laughter ensued, and that was the end of that. I got lucky. That actually would have been a very expensive error on the title company's part.</p> <p>[**] Turns out, <a href="">whole pictures</a> can have URLs. That's an approach I'm pretty sure <a href="">wouldn't work</a> for satellite imagery. But again, I digress.</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good Mac is Back<p>Tiny post, for anyone else wondering about high CPU utilization and slow or crashing network services on their Mac while running Parallels Desktop for the Mac 3.0. I've been a happy Parallels-er, running a few critical apps on a virtual PC while doing most everything else on my Mini Mac. (Not a speed-demon, but quiet, cool, and dual core.) After the upgrade to 3.0, I found Parallels was routinely sitting on 40% of my CPU, and causing network connectivity, especially in Safari, to totally suck. I had to Force Quit Safari daily. That didn't make sense. A few other people seemed to suggest that file sharing might be the problem. I've disabled file sharing of the Windows folders into the Mac. This is a nice feature, but in practice I don't use it (or the reverse) terribly often. All my permanent files are on a central networked drive which serves both Macs and PCs in my house. Disabled the sharing, and all was better. 
Perhaps they'll fix this in their next patch and let me use more than 1 CPU in my virtual machine, or I'll have to join the hordes defecting to the <a href="">newly rich</a> <a href="">VMWare</a>.</p> <p><a href=""><img style="cursor:pointer; cursor:hand;" src="" border="0" alt="" /></a></p> <p><em>Added 19 Aug 2007:</em> It appears USB support is no picnic for Parallels either. By disabling USB devices when I'm not using them in the VM, CPU usage has dropped quite a bit also. It still sits at 13% when idle, which I find disturbing. Experiments with VMWare are ongoing.</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good for Rasters: a RANGE of options<p>Hey, I know a <a href="">dogpile</a> when I see one. I'll jump in. (Seriously, follow <a href="">that link </a>first, then read the response.)</p> <p>Sean's spot on. Resources can have subcomponents. I even—while addressing a different but related point—<a href="">have opined</a> about file-based formats not forming a sensible basis of REST rasters. Dividing rasters into interesting slices as Sean has suggested is the right big idea. As far as suggestions, I've been meaning to write about the following because we've been working on them on one of my projects.</p> <p><strong>RANGEd (partial) GETs</strong>. HTTP GET supports ranged requests. In other words, I can ask for specific bytes of a file. In this case, a smart raster client could indeed notice that a resource was a GIF, and only ask for the first few hundred bytes, read critical header information, and use random access to obtain the actual pixel values. A naive approach would end up being awfully chatty, and it leaves the resource a bit more opaque than you might like (after all, the byte range is not part of the URL) but it is a practical solution.</p> <p><strong>Standard Raster Accessors</strong>. Libraries like GDAL or ESRI make very very similar assumptions when dealing with raster data. 
Everyone agrees that rasters have bands of data, might hold data of different types (ints, floats, bits, etc.), are conveniently arranged in tiles, etc. It is probably not too hard to agree on a standard header for raster information to be returned to a HEAD request. And then a fragment/anchor identifier language could be used to agree on raster chunks to be returned in an appropriate binary format. (That's the part after the '#' in the resource.)</p> <ul> <li>. The whole shootin' match. Ask for a HEAD or do smart partial GET requests.</li> <li>*.0,0.100,100. Gets all bands (*) and all pixels between (0,0) and (100,100) pixel coordinates.</li> </ul> <p>The actual syntax above is obviously only a stab in the dark, and perhaps there are already better examples in things like the <a href="">OGC Web Coverage Service</a>, but the fact is indexing into even very large rasters doesn't take a lot of data, and it'd fit nicely in the fragment/anchor. Which would be very RESTy, very URL chic.</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good: Let the Sun Shine!<p>Every once in a while someone spends the weekend finishing something that should have just been taken care of years ago. A few weekends ago, <a href="">Howard Butler</a> and <a href="">Cristopher Schmidt</a> did that with <a href="">spatialreference.org</a>. Someone needed to take the EPSG spatial reference database and web-enable it. OGC's solution was to demand a solution so obscured by me-too XML-ism, it could only be built over many many years by their favorite XML promulgators, <a href="">Galdos</a>. And in fact, it's still not there. Howard & Christopher realized what <a href="">Steve Jones</a> pointed out: in 2007, <a href="">CRUD applications should be considered "dull, boring and uninteresting"</a>. Yes, it should be possible to CRUD-ify the EPSG database in a matter of hours and get on to the interesting stuff. 
Thank goodness they did.</p> <p>Yes, they've taken the latest EPSG codes and made them available for download and upload. But hey, here's the extra cool stuff they can do now this CRUD's out of the way and trivially accessible via an easy to program REST service.</p> <ul> <li>Get spatial references in a variety of formats! Yes, it's all well and good you & I agree we are talking about my favorite coordinate system, <a href="">Aratu / UTM Zone 24S</a>. But I love the fact that by just asking for the same URL postfixed by "ESRIWKT" I can serve my corporate overlords by downloading a PRJ file so I can do the math with the ESRI projection engine. Or by postfixing with "proj4", I can work instead with everyone's favorite open source engine. That's value add.</li> <li>Those usage rectangles hiding in the EPSG database can be used to tell me where that projection is preferred or valid! No sooner did I predict to some colleagues that these crafty Python programmers would find it easy to make the usage areas available (Tuesday) than those crafty Python programmers just did it (Thursday!), complete with nifty map showing valid areas! Yep, Aratu is used <a href="">off the coast of Brazil</a>.</li> <li>I can <a href="">search for different coordinate systems</a> without having a copy of MS Access on my machine.</li> </ul> <p>And because they're programming simple REST in Python, they can churn out features in hours instead of months. Let's hear it for spatialreference.org.</p> <p>I am sure that spatialreference.org will be seen as a dangerous influence. I have already seen stiff resistance to the concept. As a GIS programmer stitching lots of systems together, I've been waiting years for a standard authority that's actually user editable and supports multiple formats. So please forgive the following rant. Here's what the geodetic elites will say: <i>We can't have <strong>amateurs</strong> doing this, can we? 
How can we be sure these crazy Python people aren't secretly corrupting the EPSG database they're supplied with by, um, EPSG? How can we be sure they're even keeping it up to date and avoiding errors? (GDAL's enormous world of contributers and commercial support apparently not withstanding. Sorry, Frank, I guess.) We recommend <strong>not</strong> using it; you must wait for the OGC/Galdos officially sanctioned SOAP monster from hell this September!</i>. What if all the EPSG database needed was a better technical solution from people with technical common sense, instead of years of hand-wringing by otherwise very smart people? Look, these standards improve because they're widely used. Locking up EPSG codes in an Access database begs them to be copied offline, abused and misused. Opening them up to the harsh daylight of global interoperability will get them cleaned up in a jiffy. <a href="">Sunlight is the best disinfectant</a>.</p> <p>Things I'd love to help patch into this beauty:</p> <ul> <li><strong>Units of measure</strong>. The EPSG database defines a wealth of linear and angular units of measure, but a nice user-defined database of other units would sure be handy. Degrees Celsius per millisecond, anyone?</li> <li><strong>Coordinate Transformations</strong>. These are a bug-bear for everyone. Hyperlinked lists of transforms appropriate for different projections and areas of the world are sorely needed.</li> <li><strong>Secure Endorsements & Selections</strong>. Companies often invent new reference systems and transforms. They should be able to upload them and sign them cryptographically so that on download they can be sure they have not been altered. EPSG might even sign their own with a public certificate as part of a formal review. Also, many users may wish to browse the database in a way that limits their view to only common systems in use in their area. 
A Brazilian oil company is going to be endlessly fascinated by Aratu and the Illustrious South American Datum 1969, but annoyed if they have to wade through four screens worth of Xian 1980 / 3-degree Gauss-Kruger zones. Why not let that oil company create their own lists of preferred systems (and transforms, don't forget the transforms!) and sign those lists? Coolness all around.</li> <li><strong>More Authority Translations</strong>. What is the Blue Marble factory code for Aratu? Mentor? These take a bit of input from the companies themselves and the geodetically minded. spatialreference.org merely provides the clearing house and brings the discussion out into the open where it can be vetted.</li> <li><strong>Warnings of Incomplete Translations</strong>. Aratu is one of my favorite examples because it often highlights differences in projection engines. ESRI knows that the <a href="">Aratu</a> datum is based on the <a href="">International 1924 ellipsoid</a>. But it's a datum, not a pure ellipsoid. PROJ4 doesn't know Aratu, it just calls it "+intl". The PROJ4 codes given by spatialreference.org (by way of GDAL, if I understand correctly) for "Aratu" and "Unknown datum based upon the International 1924 ellipsoid" are the same ("+proj=longlat +ellps=intl +no_defs"). Yes, the latter is marked clearly "Not recommended", but the Aratu page's PROJ4 version merrily drops the Aratu-ness right out of Aratu. Icky. Again, this isn't an issue the Python boys can merrily solve; it's a thorny data issue. But by providing a framework for that to be annotated, good things can happen.</li> </ul> <p>Let the Sun Shine!</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good to US Government: Open The Airwaves Or We'll Buy Them. 
All of Them.<p>Google <a href="">demanded today</a> that the US Government agree that use of the public wireless spectrum be subject to the following four rules, meant to require the airwaves be "open".</p> <ul><li> <strong>Open applications</strong>: consumers should be able to download and utilize any software applications, content, or services they desire; </li><li> <strong>Open devices</strong>: consumers should be able to utilize their handheld communications device with whatever wireless network they prefer; </li><li> <strong>Open services</strong>: third parties (resellers) should be able to acquire wireless services from a 700 MHz licensee on a wholesale basis, based on reasonably nondiscriminatory commercial terms; and </li><li> <strong>Open networks</strong>: third parties (like Internet service providers) should be able to interconnect at any technically feasible point in a 700 MHz licensee's wireless network.</li></ul> <p>Fair enough. Google isn't a phone company, is sad that the phone and cable companies want to charge for their networks, so they want Uncle Sam to demand that Google be allowed to play with the big boys. Since Uncle Sam granted the monopolies the phone and cable companies are abusing, I say this is all good.</p> <p>But the kicker, and it's an amazing kicker, is that Google is willing to pay the government <em>$4.6 billion</em> if they agree to its terms.</p> <blockquote><p>That's why our CEO Eric Schmidt today sent <a href="">a letter</a> to FCC Chairman Kevin Martin, saying that, should the FCC adopt all four license conditions requested above, Google intends to commit at least $4.6 billion to bidding for spectrum in the upcoming 700 Mhz auction.</p> <p.</p> </blockquote> <p>I think the obvious question is as follows: <strong>If Google is willing to spend $4.6b to encourage the government to adopt its standards, what is it willing to spend to buy the whole spectrum and force its way if the government disagrees? 
And will it—can it—outspend the phone companies to get its way? And if they do get their way, will they still "do no evil?"</strong></p> <p>It's a bold move. That's a lot of money, even for Google. But it doesn't seem quite subtle enough to work. Would the commissioners FCC rather do the "right" thing under such naked, public threats by such an arrogant company, or slide a deal in the backdoor while receiving some nice <a href="">golfing trips</a>, fine meals, <a href="">cash in a freezer</a>, and <a href="">evening entertainment</a>?</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Sebastian Good | http://feeds.feedburner.com/palladiumconsulting/sebastian | crawl-002 | refinedweb | 13,660 | 65.12 |
Working with environment variables is a great way to configure different aspects of your Node.js application. Many cloud hosts (Heroku, Azure, AWS, now.sh, etc.) and Node.js modules use environment variables. Hosts, for example, will set a
PORT variable that specifies on which port the server should listen to properly work. Modules might have different behaviors (like logging) depending on the value of
NODE_ENV variable.
Here are some of my tricks and tools when working with environment variables in Node.js.
The Basics
Accessing environment variables in Node.js is supported right out of the box. When your Node.js process boots up it will automatically provide access to all existing environment variables by creating an
env object as property of the
process global object. If you want to take a peek at the object run the the Node.js REPL with
node in your command-line and type:
console.log(process.env);
This code should output all environment variables that this Node.js process is aware of. To access one specific variable, access it like any property of an object:
console.log('The value of PORT is:', process.env.PORT);:
const app = require('http').createServer((req, res) => res.send('Ahoy!')); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log(`Server is listening on port ${PORT}`); });
The highlighted line will take the value of the
PORT if it’s available or default to
3000 as a fallback port to listen on. Try running the code by saving the it in a file like
server.js and run:
node server.js
The output should be a message saying
Server is listening on port 3000. Stop the server with
Ctrl+C and restart it with the following command:
PORT=9999 node server.js
The message should now say
Server is listening on port 9999 since the
PORT variable has been temporarily set for this execution by the
PORT=9999 in front of
node.
Since
process.env is simply a normal object we can set/override the values very easily:
process.env.MY_VARIABLE = 'ahoy';
The code above will set or override the value of
MY_VARIABLE. However, keep in mind that this value is only set during the execution of this Node.js process and is only available in child processes spawned by this process. Overall you should avoid overriding environment variables as much as possible though and rather initialize a config variable as shown in the
PORT example.
Explicitly loading variables from
.env files
If you develop on multiple different Node.js projects on one computer, you might find you have overlapping environment variable names. For example, different messaging apps might need different Twilio Messaging Service SIDs, but both would be called
TWILIO_MESSAGE_SERVICE_SID. A great way to achieve project specific configuration is by using
.env files. These files allow you to specify a variety of different environment variables and their values.
Typically you don’t want to check these files into source control so when you create one you should add
.env to your your
.gitignore. You will see in a lot of Twilio demo applications
.env.example files that you can then copy to
.env and set the values yourself. Having an
.env.example or similar file is a common practice if you want to share a template file with other people in the project.
How do we load the values from this file? The easiest way is by using an npm module called
dotenv. Simply install the module via npm:
npm install dotenv --save
Afterwards add the following line to the very top of your entry file:
require('dotenv').config();
This code will automatically load the
.env file in the root of your project and initialize the values. It will skip any variables that already have been set. You should not use
.env files in your production environment though and rather set the values directly on the respective host. Therefore, you might want to wrap your load statement in an if-statement:
if (process.env.NODE_ENV !== 'production') { require('dotenv').config(); }
With this code we will only load the
.env file if the server isn’t started in production mode.
Let’s see this in action. Install
dotenv in a directory as shown above. Create an
dotenv-example.js file in the same directory and place the following lines into it:
console.log('No value for FOO yet:', process.env.FOO); if (process.env.NODE_ENV !== 'production') { require('dotenv').config(); } console.log('Now the value for FOO is:', process.env.FOO);
Afterwards, create a file called
.env in the same directory with this content:
FOO=bar
Run the script:
node dotenv-example.js
The output should look like:
No value for FOO yet: undefined Now the value for FOO is: bar
As you can see the value was loaded and defined using
dotenv. If you re-run the same command with
NODE_ENV set to
production you will see that it will stay
undefined.
NODE_ENV=production node dotenv-example.js
Now the output should be:
No value for FOO yet: undefined Now the value for FOO is: undefined
If you don’t want to modify your actual code you can also use Node’s
-r argument to load
dotenv when executing the script. Change your
dotenv-example.js file:
console.log('The value for FOO is:', process.env.FOO);
Now execute the file first normally:
node dotenv-example.js
The script should print that the current value for
FOO is
undefined. Now execute it with the appropriate flag to require
dotenv:
node -r dotenv/config dotenv-example.js
The result is that
FOO is now set to
bar since the
.env file has been loaded.
If you want to learn more about
dotenv make sure to check out its documentation.
An alternative way to load
.env files
Now
dotenv is great but there was one particular thing that bothered me personally during my development process. It does not override existing env variables and you can’t force it to do so.
As a result of these frustrations, I decided to write my own module based on
dotenv to fix this problem and make loading environment variables more convenient things. The result is
node-env-run or
nodenv. It’s a command-line tool that will load a
.env file, initialize the values using
dotenv and then execute your script.
You can install it globally but I recommend to only use it for development purposes and locally to the project. Install it by running:
npm install node-env-run --save-dev
Afterwards create a file
nodenv-example.js and place this code in it:
console.log('The value for FOO is:', process.env.FOO);
As you can see, we don’t need to require anything here. It’s just the application logic. First try running it using
node:
node nodenv-example.js
This executed code should output
The value for FOO is: undefined. Now try using
node-env-run by running:
node_modules/.bin/nodenv nodenv-example.js
The result should be
The value for FOO is: bar since it loaded the
.env file.
node-env-run can override existing values. To see it in action first run the following command:
FOO=foo node_modules/.bin/nodenv nodenv-example.js
The command-line output should say
The value for FOO is: foo. If we now enable force mode we can override existing values:
FOO=foo node_modules/.bin/nodenv --force nodenv.js
As a result we should be back to the original
The value for FOO is: bar.
If you want to use this tool regularly, I recommend that you wrap it into an npm script. By adding it in the
package.json like so:
{ "name": "twilio-blog", "version": "1.0.0", "description": "", "main": "nodenv-example.js", "scripts": { "start": "node .", "start:dev": "nodenv -f ." }, "author": "", "license": "ISC", "devDependencies": { "node-env-run": "^2.0.1" } }
This way you can simply run:
npm run start:dev
If you want to learn more about
node-env-run make sure to check out its documentation.
Environment variables && npm scripts
There are scenarios where it’s useful to check the value of an environment variable before entering the Node.js application in npm scripts. For example if you want to use
node-env-run when you are in a development environment but
node when you are in
production mode. A tool that makes this very easy is
if-env. Install it by running:
npm install if-env --save
Make sure to not install it as a “dev dependency” since we will require this in production as well.
Now simply modify your npm scripts in your
package.json:
"scripts": { "start": "if-env NODE_ENV=production ?? npm run start:prod || npm run start:dev", "start:dev": "nodenv -f .", "start:prod": "node ." }
This script will now execute
npm run start:prod and subsequently
node . if
NODE_ENV has the value
production and otherwise it will execute
npm run start:dev and subsequently
nodenv -f .. You can do this technique with any other environment variable as well.
Try it out by running:
# should output "The value of FOO is: bar" npm start # should output "The value of FOO is: undefined" NODE_ENV=production npm start
If you want to learn more about
if-env make sure to check out its documentation.
Debugging
We all know the moment where things are just not working the way you want it to and some module is not doing what it’s supposed to do. That’s the moment it’s time for debugging. One strategy that helped me a lot is using the
DEBUG environment variable to receive more verbose logs for a bunch of modules. If you for example take a basic
express server like this:
const app = require('express')(); const bodyParser = require('body-parser'); app.use(bodyParser.json()); // for parsing application/json app.use(bodyParser.urlencoded({ extended: true })); // for parsing application/x-www-form-urlencoded app.post('/', (req, res, next) => { console.log(req); }); app.listen(3000);
And start it up with the
DEBUG variable set to
*:
DEBUG=* node server.js
You’ll receive a bunch of extensive logging that looks something like this:
The “magic” behind it is a lightweight module called
debug and its usage is super easy. When you want to use it all you have to do is to initialize it with a “namespace”. Afterwards you can log to that namespace. If someone wants to see the output all they have to do is enable the namespace in the
DEBUG variable.
express in this case uses a bunch of sub-namespaces. If you would want everything from the
express router, all you have to do is set
DEBUG with the appropriate wildcard:
DEBUG=express:router* node server.js
If you want to use
debug in your own module all you have to do is to first install it:
npm install debug --save
And afterwards consume it the following way:
const debug = require('debug')('myApp:someComponent'); debug('Here is a pretty object %o', { someObject: true });
If you want to learn more about
debug make sure to check out its documentation.
Now use all the environment variables!
These are by far not all the things you can do with environment variables and all the tools that exist but rather just the ones that I use most often. If you have any other great tools that you regularly use, I would love to hear about them!
Now that you’ve learned a bit more about Environment variables in Node.js, try out Twilio’s Node.js quickstarts! | https://www.twilio.com/blog/2017/08/working-with-environment-variables-in-node-js.html | CC-MAIN-2021-17 | refinedweb | 1,920 | 67.25 |
Opened 17 years ago
Closed 17 years ago
Last modified 17 years ago
#1244 closed defect (fixed)
[Performance] PointObj cause performance lost
Description
In bug 1224, I found that a performance loss is observed since we added the Z parameter to the point object. Just because the pointObj is bigger, it takes longer to access the parameters of the pointObj. A line like:

x1 = shape->line[i].point[j].x;
y1 = shape->line[i].point[j].y;

takes between 15% and 50% more time. Daniel and I talked about it and he proposed to put all access to the m and z parameters inside a #ifdef USE_SHAPE_Z_M ... #endif. By default we could make the m and z options disabled since most users don't use them. I made some tests and it seems that if we put the m and z inside the ifdef, we gain with this change (50 calls to the gmap mapfile):

With the M and Z parameters in the point object
4.4: 7.56
4.0: 6.18

With all M and Z parameters inside a disabled ifdef
4.4: 7.10
4.0: 6.23

I propose to commit that in 4.5, but this would require changes to the core and to MapScript (all flavors). So I want to inform you and maybe get some comments before.
Change History (21)
comment:1 Changed 17 years ago by
comment:2 Changed 17 years ago by
comment:3 Changed 17 years ago by
I would think the compiler optimization should deal with "shape->line[i].point[j]" as a semi-invariant within the loop, but maybe it is not. You should try some manual optimization in case the expression is too complicated for the compiler to optimize correctly. Like:

pobj = shape->line[i].point[j];
x1 = pobj.x;
y1 = pobj.y;
comment:4 Changed 17 years ago by
Steve, I know we already have a lot of compile options, but how can we fix that if not by a compilation option? We could move the m and z parameters alone into a new object (PointDimensionObj) and add the new object to the shape object. This new object could contain a pointer to the x and y values. It should take care of the performance loss without adding the compilation option. However, I'm not sure how easy and how clean it is to do that. Any other ideas? Stephen Woodbridge: I'm pretty sure the compiler takes care of this issue. Others may confirm or contradict that.
comment:5 Changed 17 years ago by
Steve, since I did not receive any response, I will integrate the compile option. However, if we want, we may change that later. Aiming for 4.6.
comment:6 Changed 17 years ago by
Sounds ok I guess. (I was gone on vacation last week which is why you didn't hear from me...)
comment:7 Changed 17 years ago by
I committed my changes to remove the Z and M parameters from the pointObj by default. Most of the work was pretty straightforward, but I did not test with SWIG MapScript or on Windows. The new configure flag is --enable-shape-z-m and the define is USE_SHAPE_Z_M. This gives around a 7 to 10% performance gain overall. Marking as FIXED.
comment:8 Changed 17 years ago by
Thanks Julien, that's a significant enough gain to warrant the change then... Steve
comment:9 Changed 17 years ago by
I've made a few changes to mapscript and the Python tests pass without the --enable-shape-z-m option. I haven't run the tests with Z and M enabled. Making this a compile option effectively doubles the amount of testing I have to do now, did you realize? It was enough to stay on top of one MapServer, but now we have two different programs and I have to stay on top of two configurations and builds. Sort of a pain. My last two cents is that the option and macro should be changed to --enable-point-z-m and USE_POINT_Z_M because these attributes are possessed by pointObj, not shapeObj.
comment:10 Changed 17 years ago by
Tests pass with --enable-shape-z-m
comment:11 Changed 17 years ago by
I agree it would make sense to change to --enable-point-z-m and USE_POINT_Z_M. Could you please make that change, Julien?
comment:12 Changed 17 years ago by
I'll change shape-z-m to point-z-m. Reopening.
comment:13 Changed 17 years ago by
I committed my changes to use USE_POINT_Z_M and --enable-point-z-m. Marking as FIXED.
comment:14 Changed 17 years ago by
Hi folks, I will check the code for maporaclespatial.c. A user reports that, using the latest CVS version, points from Oracle Spatial don't appear on the map. Thanks.
comment:15 Changed 17 years ago by
Hi folks, I did the tests with an Oracle Spatial connection, but the points only appear with the --enable flag. When I don't use this flag, or use --disable, the features (points, polygons, and lines) don't appear. After this I checked maporaclespatial.c and I didn't find a problem with the last changes. In my tests it only works with --enable-point-z-m. Is it working with other connections (Shape, PostGIS)? Any tests? Thanks.
comment:16 Changed 17 years ago by
Shape, Tab, and OVF work. I guess PostGIS works too since there are multiple users of it. I have little Oracle data here. I will make some tests.
comment:17 Changed 17 years ago by
Hi, Problem solved. The SDOPointObj "needs" the z value. It's an internal point object for Oracle (only used by Oracle). So, I removed the ifdef for this typedef and it worked without the --enable flag. I will do more tests to check it, but I believe it's finished. I saw in the code that only Oracle Spatial reads and sets the real z values for points, is that right? PostGIS, OGR, and PROJ set the value to 0, right? Thanks.
comment:18 Changed 17 years ago by
I'm setting up to test this issue with Oracle for Julien (although I don't have the skill to go as deep as you guys).
comment:19 Changed 17 years ago by
Jeff, Fernando has found and fixed the issue. There is probably no need to duplicate the testing effort.
comment:20 Changed 17 years ago by
Sorry, I should have added myself to this bug earlier.
comment:21 Changed 17 years ago by
Updated the new version of maporaclespatial.c
Note: See TracTickets for help on using tickets.
DID NOT OUR HEART BURN WITHIN US WHILE HE TALKED WITH US BY THE WAY, AND WHILE HE OPENED TO US THE SCRIPTURES? Luke 24:32
Volume 2 Issue 1
March 2007
Kairos-Women to Women
Women Positioned for the Kingdom

WE HAVE HEARD... I cannot express how humbled we are by the response of those who have had the opportunity to read the newsletter. It is truly God’s uncommon favor. It seems the newsletter has been a spiritual catalyst for many individuals. David said, "Let the words of my mouth and the meditation of my heart be acceptable in Your sight, O Lord, my rock and my redeemer" - Psalm 19:14. The newsletter was intended for women, but it is touching the lives of men also. God’s word does not return void. Isaiah 55:11 (King James Version): "So shall my word be that goeth forth out of my mouth: it shall not return unto me void, but it shall accomplish that which I please, and it shall prosper in the thing whereto I sent it." (The New Living Translation reads:) "It is the same with my word. I send it out, and it always produces fruit. It will accomplish all I want it to, and it will prosper everywhere I send it." People have thanked me for obedience to the Lord because they have been blessed by reading a word here. It grieves my spirit to think of how much time I have lost. I think of the parable of the talents in Matthew 25:14-30, and how the Lord gave talents to individuals, and when asked "what have you done?", there was one who had buried his talent. The parable notes that the Master took his talent away, gave it to another, and he was punished. Let’s remember Esther in Esther 4:14: if she had not dared to step out and step up, then surely God would have used someone else, but think of what Esther would have missed. There is Saul, whose kingdom was taken and given to David. What blessings will I miss if I do not obey the Lord in this? Even if one soul comes to the Lord because of this newsletter, then all will be worth the cost. So it is that when I received the Word from the Lord, I ran with it. There is no time to lose. There is a sense of urgency. In addition, this newsletter will take you on journeys traveled by others. These experiences serve to remind you and me that God is right there in the thick of things with us. (1 Thessalonians 5:11, NASB): “Therefore encourage one another and build up one another, just as you also are doing.” We share these experiences of God’s grace and faithfulness, and he can do it for you, if you reach out to him. You should know that this issue of the newsletter was delayed time after time. This issue had a different heading and graphics. I heard the whisper in my being: “we have heard”. Then people started coming to me, saying, "Oh, we heard this," or "we heard that." Let us take time to hear that inner voice, which is the Holy Spirit. Share your faith, so that others will approach you and say, “We have heard what God is doing.” .....Alma Barela
Inside This Issue
Front Page: LETTER FROM THE EDITOR .... 1
MY OWN ALABASTER BOX .... 2, 3, 4
BREAKING THE ALABASTER BOX .... 5
CAELUM'S STORY .... 6, 7
INTERCESSION, FASTING AND THE HEART OF GOD .... 8, 9
PHOEBE'S HEART: UNION RESCUE MISSION .... 10, 11
TRIBUTE TO SPC. JAMES KIEHL .... 12
"FAITH IN THE FOXHOLE" .... 13
CARRY EACH OTHER'S BURDENS .... 14
DEVOTIONALS FOR PET LOVERS: LIFE WITH KIKI / PROVERBS 31 WOMAN .... 15
BEAUTY FOR ASHES: MY JOURNEY, MY TESTIMONY .... 16, 17
LAYING CLAIM TO OUR HERITAGE .... 18, 19
Back Page: STATEMENT OF FAITH - BRETHREN MARKETPLACE - PHOEBE'S HEART 2 - BARTER CORNER - KAIROS HUMOR SPOT .... 20
PAGE 2
MY OWN ALABASTER BOX
There once was a girl who had been given talents extended by God's grace. She never took the credit for that which God had provided. She was wholesome and happy with her life. She didn’t care for status or title, and was known only as Rose. She served her family and friends with joy. She enjoyed her work very, very much. Everything she did she strived to do her best. Her life was an open book. Alas, Rose had a transparent heart. Life was good as she knew it. She didn’t have romantic love, but it would come to her transparent heart, and then surely it would pulsate with joy and she would feel the crimson shades of love in her heart. She longed to find love and knew that it would come one day. She didn’t believe you had to go out and look for it, because if it was meant to be, then love would find her. She had always been told that she was selfless. Her full name, Alma Rosa, could be translated into pure soul, innocent soul. Surely she would be blessed by love. Then came the day when everything changed. She met a man. He was clothed as Prince Appeal. He was older, and had lived a full life, and told her he had been on great adventures. Her heart was swept away. She gave her heart to this man. She felt her transparent heart pulsate with crimson love.
Her transparent heart turned out to be easily manipulated, and weak. She came to find that Prince Appeal was really Baron Malevolence himself. There was no crimson love in his heart, only gray famine, for he lived without hope and without Grace itself. The day came when her crimson heart shattered, and the transparency in her heart was so, that she could not have foretold that Baron Malevolence would hurt her so, because she trusted him blindly. Baron Malevolence had a heart so black that he could not even fathom that she would take hold of her roots and self, and rise and call them by name, and break through the fortress he had built in her mind, and his strongholds. The once crimson heart, woven with transparency, shattered into a million pieces. Although the heart of Rose was broken, and her exterior-self had fissures that would seemingly spew great waves of sorrow, much less be able to contain her heart, she bravely picked up the shattered pieces of crimson and transparency which had formed her heart, put them in her box of alabaster and held them close, cooing and rocking back and forth, trying to reassure her spirit and broken heart, that somehow morning would come and the sun would shine for her, for she had read that Joy cometh in the morning.
She grappled through the dark tunnel and down a hollow shaft and made it to daybreak. She saw a marker on the road, which had the address of Romans 8, paths 38 and 39, which indicated that "neither death, nor life, nor angels, nor principalities, nor powers, nor things present, nor things to come, nor height, nor depth, nor any other creature, shall be able to separate us from the love of God, which is in Christ Jesus our Lord". Just then, the sun pierced through the darkness she had come to know and she was temporarily blinded by the intensity of the sun and the hot tears streaming down her cheeks. She was in emotional chaos, as the sun’s warmth shocked her senses, as she realized that she had lived in cold forbidding shadows far too long. She had compromised everything she knew, and she realized that any compromise at any level had only resulted in further compromise, until there was no real self left. As she leaned against the marker on the road, waiting for a carriage to take her from the God-forsaken village behind her, she found her foot on the ROCK OF AGES, and she remembered the words which had been ingrained into her self and heart. They had been hidden in her heart, and just as she had learned in times of trouble, she was not alone. She remembered the words of a faithful servant Paul who served the Lord of the FORTRESS of the LIVING WORD. The words seemed to resonate in her head and entire self, and shook her to her core. Yes, she would go there, for she also knew HIM by name, but wasn’t sure if HE still wanted to call her by name. There in the middle of the lowest moment of her life, bereft of all she had thought to be true, Rose humbly recognized GRACE, and in the middle of her pain, reached out and cried for GRACE, and instead of crying why me, why this, her trembling lips said, THANK YOU, thank you for saving me from this wretched existence. I don’t understand now, and I don’t know how I am going to make it through this, but I still know you, and I ask you to let me recognize that your GRACE is sufficient. With BLESSED ASSURANCE by her side, she made it to the citadel, which was at the foot of the FORTRESS of the LIVING WORD, where she found sanctuary for her broken heart, tired spirit and defeated body.
The address read: Jeremiah 31:3: Because I love you with an everlasting love. Rose’s pain was great and she would often go down the lane, to the garden of Deuteronomy 4:29: If you seek me with all your heart, you will find me… She would hold her heart and look at it and be struck that she would never be able to put it together again. She tortured her mind questioning her reasoning and how she had come down this downtrodden path and circumstances. As deep as her sorrow was, she could not let herself wallow in her own sorrow. Who was she to say that this was the end for her? She didn’t dare be so presumptive as to decide that there was no more for her. After all, wasn’t she related to the kingdom? Was she not destined with a purpose? As she kept her crimson and transparency in her box of alabaster, some of the shattered pieces took on a hue of purple and blue, but there was really no form to her heart. So off she went to the Potter’s House; he lived in the fork in the road of Genesis 1:27, in the Village of Psalms 139: Building 13 and 14. Rose asked the Potter to somehow give her heart its shape. “For thou hast possessed my reins: thou hast covered me in my mother's womb. I will praise thee; for I am fearfully and wonderfully made: marvelous are thy works; and that my soul knoweth right well.” As the months passed on into years, with the help of her family and beloved pet, some of the pieces took on a tinge of yellow, orange and pink, but always with a slight tinge of gray. As the days and months passed the edges of the pieces took on a little black and silver, which served to bond the broken pieces. As she struggled to regain her sense of self, and barged on, facing new beginnings, she set out to find Jeremiah who lived in building 29, apartment 11, and he said, my plan for your future has always been filled with hope. Upon meeting him, some of the pieces of her heart took on a hue of green, as she gladly made friends with HOPE. Before she knew it time had passed, and as she looked around she realized that although tiny tears sometimes leaked through the fissures, she realized that she had been saved by GRACE who resided in the land of John 3:16. (CONTINUED ON PAGE 4)
PAGE 4
MY OWN ALABASTER BOX (CONTINUED-PAGE 4) She rode on the wings of Angels, with those she had met on the way, Charity, Simplicity, and Grace.
There were areas where you were not sure what hue it was and the colors formed prisms and sunburst of color.
She learned how to call GRACE by name, and as GRACE stood by her, she dared to open her box of alabaster and look at the transparent heart.
Rose gasped as she looked at the once shattered pieces, for they had melded back into the shape of a heart, and as she held it up toward the sun, the light pierced through the tiny fissures and illuminated her self.
At these times, she would often put on her garment of Psalm 34:18, when you are brokenhearted, I am close to you. He had given her a garment of praise for the spirit of heaviness, he had taken all the burnt ruins (BEAUTY FOR ASHES) and given her the oil of JOY FOR MOURNING provided through Grace in the mountain top of ISAIAH 61:3. She poured this oil of joy in her alabaster box. Her heart seemed so fragile, as she lifted it up for GRACE to take a look, and then GRACE took her hand and held her arms out toward the sun, not letting her falter. GRACE washed the bonded fragments with the oil of anointing. The fragments, with a chip here and there of course, had allianced themselves into a splendid tapestry with hues of crimson, gold and purple, royal colors. There was pink, orange and yellow and green too, and yes there was a bit of transparency for good measure.
As Rose tapped on the heart gently, she learned that it was quite resilient, the shattered pieces had been through the fire and fashioned into a tiffany-like tapestry with ribbons of light. She knew that this heart had been hard to come by and she had paid a great price. She held it up to the sun and toward GRACE, and she paid particular attention to every shade and blush of color in her heart, her tiffany heart. She was in awe of the reflections of GRACE. Her perspective was directed by the WORD. Grace told her to appreciate the brilliance and beauty of the tiffany heart, and she knew that the coming together of these prisms of light had cost her, and she knew each tint and hue had been forged, and she knew that she did not have any uncertainties, because having known someone like Baron Malevolence and then going to the opposite extreme, had
Images from:
shown her the bounty of kindness, generosity and compassion not only in her self but also in those she loved. Rose would take the box of alabaster and her tiffany heart and place them in a prominent place and give the heart its due, and allow the sunlight and GRACE to shine through to bless others. She had come to live under God’s wing and placed her life, her hopes, dreams and future in him. Her spirit sang joyfully, while her heart smiled. Her soul had been healed. She had forgiven those who had hurt her, because her joy was full indeed. She was indeed destined with a purpose. She came to live in the house of Jeremiah 29:11: "'For I know the plans I have for you, 'plans to prosper you and not to harm you, plans to give you hope and a future.'" She knew that prosperity is not measured in wealth or monetary gain, but in loved ones and things not palpable. Written by Alma Barela
VOLUME 2 ISSUE 1 BREAKING THE ALABASTER BOX The story in pages 2 ,3,and 4 are part of my story. If you are going through or have gone through anything similar, I would say hold onto God, but that is not enough, rather cleave to God because he is the only one who can keep the waters from taking us under. Trust in God’s word as you scale the mountain, use his promises as a foothold or the crevice in the rock to get your bearings. Cleave to God, as your life depends on it, because it does, eternal life. Maybe the darkest moment, or that mountain that seems to high to climb, or the bitter tears or anxiety, maybe these things are the very things that will bring you into your victory, and into new levels of faith. We must break our alabaster boxes before the oil of anointing can flow. Let’s also consider forgiveness, because it was part of the lesson. I don’t want you to think that it was easy or that I did it to be holy. In the beginning, it was the only way I could preserve my own sense of self-worth. You know not forgiving others, can lead to hate, and that is a cancer that will eat away at you. It later became about forgiving, because we will not be forgiven for our wrongdoings. It took time but the Lord brought me to a place where I could pray for that person who had hurt me. God’s word says that we must bless those that curse us, pray for those that mistreat you (Luke 6:28 –New American Standard Bible); also (Matthew 5:44 – King James version) - But I say unto you, Love
PAGE 5 your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you. In seeking forgiveness from God and seeking to forgive others, I clung to Psalm 32. I encourage you to read this beautiful Psalm. In the midst of your journey, let God be your “hiding place”. An excerpt from Psalms, 5,6,7-King James Version reads as follows: “I acknowledge my sin unto thee, and mine iniquity have I not hid. I said, I will confess my transgressions unto the LORD; and thou forgiv want to tell you that in my period of disobedience and living with one foot in Church and the other “testing” the waters, you can lose yourself as I did, and not realize it. I was not standing on level ground. You compromise once, twice, and soon right and wrong do not trouble your conscience. God’s grace is amazing because he can take the mess we make, and use it for his glory.
selves in a similar situation. In order to reflect God’s grace and mercy you have to share how “real” God has been in your “real” situation. I knew in writing this story that I might be judged, but I remembered that King David chose the mercy of God, rather than that of man, when he failed God. Our goals and our perspectives need to be set by the Word of God, rather than that of the world. If we study the life of David, we will see God’s master plan for our lives. He is the direct connection to Christ. I hope that this will help you move forward through the darkness. Don’t miss what the Lord has planned and promised to be rightfully yours. You know if you refer to the meaning of numbers in the Bible, I believe it is called Numerology in Theology, the number “7” equals rest, completion, finishing, the year of restoration. When completing this story of the Alabaster box, it struck me that this is the seventh year of my restoration journey. I do not consider that a coincidence. I believe I was meant to share this story for such a time as this, it was destined with a purpose. Written by A. Barela
Perhaps the thought has crossed your mind, and you have said to yourself, how embarrassing, how can she share something like this? I will tell you why, how else can I reach someone who finds them
PAGE 6
CAELUM'S COURAGE
I received a very touching email from our faithful Heart-to-Heart member, Rosa, on January 11, 2007. She received it as a request for prayer from a good friend of hers who works at the same company as Michael Miller, the father of this precious child, Caelum. It was a request for prayer to go around the globe. This article is written and printed with permission of Mr. Miller. It was sent out Tuesday, January 9, 2007, 7:31 PM. It reads: My Dear Friends, I am finally finding the ability to write this email. In a way it is a form of therapy but also so I don’t have to recount the events again when I return (to work). Having 4 teenagers and 2 young ones gives one the ability to compare in extreme detail the development of children. Without question, Caelum, the youngest, 17 months old, is the brightest. He is our first fearless child, pulling a chair between our recliner and couch to make a sort of bridge. He would then walk his bridge, launch his body over the arm of the couch and in the process totally lose himself in a tumble that would inevitably bring him crashing to the ground. You would see his body tense waiting for impact and hear him say "oh no" or "woh" until he hit the ground. After impact he would say "ow" and start all over again. He is the most inquisitive, with an insatiable appetite for the new. The first to test boundaries of what he can and cannot get away with, the first to talk. Knowing me you might wonder: 17 months old, Mike, aren't you and Annemarie getting up there, why would you want another child at "your age"? I must say that it was a bit of a surprise to both of us when we found out Annemarie was pregnant with Caelum. If you've ever read the disclaimer on the directions of birth control pills you'll notice that 99.5% of the time they are effective. Lucky us, we're the 0.5%, and really lucky us, we wouldn't change a thing.
Caelum has a birthmark on the back of his neck in the shape of a heart and if you know this little boy you would see how much he has to offer. We nicknamed him our love baby; I know it’s sappy, but that's what we feel. As it turns out his name is fitting as well. Caelum is not a typical name, one that you may never have heard of. Caelum is the name of a constellation. His middle name is Dante, translation "lasting". Both really fit this little boy, a bright and lasting star. On December 31 at around 3:30 in the afternoon our children, Shawn, Ryan, Adam, Devyn, Jacob and Caelum, were getting prepared for our traditional New Year's Eve dinner at a Japanese Teppan restaurant in LaVerne. Our bright, curious little boy got onto the back patio, figured out how to open the gate, and fell into our pool. I found him after what we estimate to be 10 minutes lying face down, floating in the pool. The image is permanently embedded in my vision; when I close my eyes, drive my car or am alone, it is a nightmare. By the time his little heart was started again 40 minutes had passed. The damage to his precious body is grave. Deprivation of oxygen to his brain has placed him in a vegetative state. Jacob (6 years old) constantly asks God to fix his baby brother's brain to make it better like his old brain. As strange as it may seem to some, we truly believe that he loved my wife and I and his brothers and sister so much that he didn't want to leave, that he found a way back so we wouldn't have to remember him in the pool or the subsequent lifeless look on his face when he was pulled from it. The days are passing slowly and both my wife and I cannot remember what day it is. With each passing day we have been blessed with another moment of Caelum's life, of his angelic face. To not see his beautiful face again would have been more painful than anything we could possibly imagine. The crying has subsided somewhat but it comes at you when you least expect it.
Seeing a toy, a book, a shirt, a favorite food, hearing a word he'd repeat, writing this email, or simply because you think of him, the tears come. Hard to control. We are looking for little miracles in each second of each and every day. Attached is a link where you can track the progress of this little boy, of our Caelum. We've taken advantage of this way of updating our friends and family so we don't have to recap events in person. If you believe in God, please say a prayer for Caelum and his brothers and sister and my wife; if you don't, I ask that you please say one anyway.
CAELUM'S COURAGE (CONTINUED)
PAGE 7
Go to , open the "Patients and Family" section, then look to the right and open "Patient CarePages", scroll down to the bottom of the page and open the "Create a CarePage", then open "Visit a CarePage"; in this section you'll need to sign up to visit a care page. Our page is MichaelandAnnemarieMiller, which you'll need, all one word, no spaces. The children are coping as best they can. Again please offer a prayer for Caelum. - Michael Miller

As we have prayed for this child, from time to time we visit his page to note the progress he has achieved. God is good. While reading the last excerpt posted by his mother, Annemarie, at the above noted site, we see how God's grace is moving. Please take time to read about this family and pray for ongoing strength for the dark moments in their lives, and praise God for the miracle that this child is. A fundraiser was held for this family and for the costs that they are bearing. Let God move in your heart, and may he prompt you to offer more than a prayer. Below is an excerpt from the last entry on "Caelum's Page" by his mother.

"Caelum continues to teach us about "time". It has taken on new meaning for our family & for so many people who continue to follow our story of hope and love and support us. Michael & I are forever grateful for the time you all take to read the updates, think about us, pray for us & contribute to Caelum's recovery. It is having a profound effect on our journey of healing & I am certain one of the main reasons Caelum continues & will continue to recover. I have said from the very first week that Caelum's personality has been "intact". I could see it in his strength to struggle his way back to us during those early days. Now we continue to see his strength & determination in his intense effort to lift his head, or keep his head still while sitting, reach out his hand to try & touch a favorite toy, or to try to roll over onto his tummy. Herculean efforts from such a tiny little boy."
We know without doubt that these things will come to Caelum in time. He is becoming more aware of his surroundings and his "people" - nurses, therapists & assistants. The therapists feel that he is now able to focus on their faces when he looks at them, rather than looking through them - exciting progress, to say the least.” “His favorite place is outside & we have a new stroller now to take him out. It is amazing to see how calm he gets as soon as the fresh air hits his face. The sunshine on his beautiful hair feels so good to him as I take him out of the stroller to sit him in my lap. He leans against me to take in the surroundings - sticking his tongue out as if to taste the fresh air, turning his head all around and then fixing his gaze on one of his favorites - the big camphor tree on the grounds,& I am sure he can hear all those birdies singing for him. We talk, sing & just sit to listen. Time again seems to disappear into nothingness. I could sit with him there forever. We talk about Jacob - sweet big brother that talks to Caelum a couple of times a day on the telephone. Caelum's eyes seem to search for Jacobs voice and he becomes very calm while Jacob talks to him. A deep connection between these two is another amazing thing to witness. Just last night, Jacob began to ask some of the more difficult questions to answer; too hard for me to list here. I will tell you that after Jacob and I talked about Caelum's recovery and progress, he had some profound things to say. Jacob said that he wanted only one thing - "love from Caelum". Then he said, "No really it's three things I want God to do first, give me love from Caelum, second to fix his brain and third to fix his body - I have faith in Caelum, mommy". The perfect prayer from a newly turned 7 year old - Oh my Jacob how you continue to teach us too. I love you sweet boy and my darling baby Caelum, & my Michael.” - Trust & Believe ALWAYS, Mommy
So rest and relax and grow stronger
let go and let God share your load
Your work is not finished or ended
you’ve just come to a bend in the road

From “Bend in the Road” by Helen Steiner Rice
PAGE 8
FASTING, INTERCESSION & THE HEART OF GOD
As the year started, my sister and I were determined to know God more intimately. We agreed to fast much of the month of January. The concept of fasting and interceding intrigued me, because the time spent before the Lord left us yearning for more of God. I began to not only marvel about intercession and fasting, but also the Heart of God. As I studied, I came across the following faith contract. Jesus said, "If two of you on earth agree about anything you ask for, it will be done for you by my Father in heaven." Matthew 18:19. During the time we fasted, we found that the more we sought his word and presence, we could deny ourselves physically for longer periods of time, and yet feel refreshed. I attribute that to the verse that says: "Those who hope in the Lord will renew their strength. They will soar on wings like eagles; they will run and not grow weary, they will walk and not be faint." Isaiah 40:31. In our daily walk with the Lord, as we continued earnestly seeking his will and presenting specific people and situations to the Lord, the Holy Spirit would impress upon us yet another need, and another need, until I began to wonder if some things were too trivial to bring before the Lord. Well, as I questioned this, I came across the following verse: “Don't worry about anything; instead, pray about everything. Tell God what you need, and thank him for
all he has done.” Philippians 4:6 It occurs to me that as we come before the Lord, and commit our day to him, and strip away our physical needs, selfish strongholds, and intercede for the needs of others, that subsequently our supplications, turn into praise. It cannot be helped, when you read a verse such as: "The Lord will perfect that which concerns me" Psalm 138:8. I like to temporarily replace the “me” for whoever I am praying for. There have been times during this period of fasting that I feel the urgency to seek God, but am not sure what the exact needs is, at times such as these we can bring to mind Romans 8:26 to 28, which. Make no mistake, fasting is difficult because we our separating ourselves, which can effect us in body, in mind and in spirit, and makes us greater targets for the enemy. In addition, sometimes you can actually experience the pain and turmoil that soul is going through. I believe this is so that we can “walk in that person’s
shoes” and know exactly what to bring before the Lord, and how to intercede like spiritual warriors before the throne of God. The word of God says in Philippians 2:4 (I like the way it reads in the New American Standard Bible version): “do not merely look out for your own personal interests, but also for the interests of others.” As I wanted to know more about fasting, and the importance it plays in our relationship with God, I began to see that we go to people to fix things broken in our everyday lives. We take our car to the mechanic, call the repairman to fix our appliances, and even refer to our warranties. Why then don’t we use our warranty that was purchased at a great price on our behalf? Why is it that we don’t call the Restoration Advisor, use the maintenance guide, and claim our warranty when our lives, dreams and futures are broken? This is how I reasoned that fasting is a tool in our spiritual toolbox. I refer to these verses: 1 Pet. 5:8 - Be sober, be vigilant; because your adversary the devil, as a roaring lion, walketh about, seeking whom he may devour.
FASTING, INTERCESSION & THE HEART OF GOD (CONTINUED)

I believe that intercession and fasting prepares us for a purpose, which is to be open to be used by God. I have found that prayers together with praise can move the Heart of God. This is what the Bible says about the Heart of God: Acts 13:22 - And when he had removed him, he raised up unto them David to be their king; to whom also he gave their testimony, and said, I have found David the son of Jesse, a man after mine own heart, which shall fulfill all my will. Isaiah 40:11 - As a shepherd carries a lamb, I have carried you close to my heart. Jeremiah 3:15 - "I will appoint over you shepherds after my own heart, who will shepherd you wisely and prudently." 1 John 4:16 - I am not distant and angry, but am the complete expression of love. Jeremiah 31:3 - Because I love you with an everlasting love. Jeremiah 32:41 - Yea, I will rejoice over them to do them good, and I will plant them in this land assuredly with my whole heart and with my whole soul.

Considering the Heart of God and his love for us culminates in a time of worship and praise. Can you imagine standing with the angels, who are warriors themselves, before God? Revelation 5:11 reads - Then I looked and heard the voice of many angels, numbering thousands upon thousands, and ten thousand times ten thousand. They encircled the throne and the living creatures and the elders. Hebrews 1:14 reads - Are not all angels ministering spirits sent to serve those who will inherit salvation?

So it is that I invite you to experience new levels of faith in 2007. Let’s take that journey together. Intercession and fasting can reveal new areas in our walk with God. I leave you with these words - Philippians 4:8 - Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things. By Alma Barela
VOLUME 2, ISSUE 1
SPOTLIGHT-PHOEBE’S HEART
In Volume 1, Issue 1, we made reference to Romans 16:1, where Paul refers to Phoebe as a minister of the church in Cenchrea, and we noted that she was a servant or helper (KJV). Our vision is to have or be a place where women help women in need. We mentioned being a help to abused or battered women, who are women needing a temporary helping hand, just that little bit extra to get them back on their feet. We’ve all gone through difficult times. We have all been women in some sort of transition needing a loving hand. The following is an excerpt taken from the UNION RESCUE MISSION, a Christian-based organization, with permission. The Women's Ministry Programs for the Crisis Center and Family Together Programs help point "the way home" for women and families in crisis.
Through the Women's and Family Ministry Programs, the following services are provided:
• Spiritual Encouragement
• Substance Abuse Recovery
• School Enrollment (required for all children)
• "Mommy and Me" Program
• Discovery Lab
• Individual Counseling
• Education & Vocational Training
• Personal and Family Relationships
• Medical, Dental, and Mental Health Care
• Legal Aid
• Recreation & Fitness
• Work Therapy
• Financial Planning
• Community Church Involvement
Donations of New and Used Items

Your gifts of clothing, furniture, and other needed items go directly to the homeless and desperately poor of Los Angeles. At Union Rescue Mission, every "guest" or resident is treated with all the respect a child of God deserves. That means we never give an individual an item of clothing or pair of shoes that we would not be willing to wear ourselves. We ask that all donations of used clothing be clean, of good quality, and ready to wear (no repairs needed). We ask that all donations of used clothing be sorted, boxed, and labeled according to gender (men, women, or children) and marked as "used." We ask that all donations of new clothing be sorted, boxed, and labeled according to gender (men, women, or children) and marked as "new."
SPOTLIGHT-PHOEBE’S HEART (CONTINUED)

• We are in need of quality clothing and shoes of all sizes for men, women, and children. Maternity clothing and specialty sizes (extra large and tall) are especially needed.
• Business attire for men and women (suits, shirts, ties, scarves, etc.) is greatly appreciated.
• Donations of coats, hats, and gloves are very important during the winter months.
• Donations of new clothing are always appreciated, especially in the following categories: undergarments for men, women, and children; socks; panty hose; and baby clothes.
• Baby Items: Donations of baby items such as cribs, strollers, high chairs, etc. can mean so much to a desperately poor family. We ask that all baby items be clean, safe, and in excellent condition.
• Furniture: Due to seriously limited space in our warehouse, donations of furniture are accepted according to immediate need. We ask that donated furniture be clean, in excellent condition, and ready to place in an apartment or home.
• Housewares: Items essential to setting up a home, such as pots and pans, lamps, bedspreads, etc. are accepted according to immediate need. We ask that donated housewares be clean, in excellent condition, and boxed with contents clearly marked.

Also needed:
• Bibles (large print, Spanish, and teen)
• Phone cards
• Educational games and toys

To make a donation of clothing, furniture, or other needed items (property, product, or vehicle donations), please contact:
Gifts in Kind Department
Camille Hernandez, GIK Department Director
(213) 347-6300, GIK Dept. or Camille Hernandez
They will let you know what items are needed and provide instructions on how to deliver or ship your donation to the Mission.

Visit their website and go to the “donate” tab or “get involved”, or browse their site, and let God lead you. Sign up for URM’s monthly E-News. So again, join the vision, let the Lord lead you and open your heart and hands. As I said before, I know the Lord will designate the receivers and the givers. Let us be a “blessing in the valley”. We will feature “Phoebe’s Heart” on our back page in every issue.
Tribute to Spc. James M. Kiehl - A Fallen Hero

I received this email in January 2007 from Rosa, a precious Heart-to-Heart member. It was an email about the "Tribute to Specialist James Kiehl". Well, I was not comfortable about using the story without permission, and had looked into every avenue, so I basically gave up. I told the Lord, "Well Lord, if you want me to use this special story, you will find a way to let me know I am to use it." Well, as usual the Lord woke me up with new things for the newsletter, and these words were impressed on my mind, "Read it again". So I got up and went to the computer and reread the original mail, and sure enough, there was the name of the aunt, Vickie Pierce, and her recounting of her precious nephew's journey home. Well, this led me to the original story, and the website. I contacted the webmaster at that site, and he in turn graciously forwarded my letter to Vickie Pierce. She answered and allowed us to feature her nephew's tribute. He was born in Comfort, Texas on December 22, 1980. James was assigned to the
… young children. The military presence (at least two generals, a fistful of colonels, and representatives from every branch of the service, plus the color guard who attended James, and some who served with him) was very impressive and respectful, but the love and pride from this community who had lost one of their own was the most amazing thing I've ever been privileged to witness. (Please visit the website and see his full story and pictures.) Let us pray for our men and women in the armed forces who are away from home and in the path of danger for you and me. I would like to extend a heartfelt thank you to Vickie Pierce for sharing these precious moments. Also thank you to Anthony W. Pahl OAM, Webmaster at the International War Veterans Poetry Archives. And a special thank you to Rosa, who has a heart for special stories, and to her friend that brought this to her attention.
“FAITH IN THE FOXHOLE”
Help Give Our Troops an Anchor for the Soul

"It's hard to come by a good Bible here." - U.S. Soldier serving in Iraq

As U.S. troops deploy to Iraq, Afghanistan and other dangerous locations around the world, you can help ensure that they go into harm’s way with the Word of God as their shield when you help Military Ministry provide Bibles for our troops. More than 1.3 million Military Ministry Rapid Deployment Kits (RDKs) have been handed out at the request of Soldiers, Sailors, Airmen and Marines since 9/11. Only God's Word can supply the anchor for the soul that military men and women are seeking. While military training has equipped them physically and mentally, they can also be equipped spiritually to withstand whatever challenges they meet. Each pocket-size, waterproof RDK includes a special camouflage-cover American Bible Society New Testament, a daily devotional from RBC Ministries, and a copy of Do You Want to Know God Personally? from Campus Crusade for Christ. Requests for RDKs come in from chaplains at the rate of nearly 20,000 per month. Please help us continue to meet this demand as chaplains desire to spiritually outfit every military member. Rapid Deployment Kits cost only $3.25 each to distribute.

Thank you so much for your prayers and financial support on behalf of our military men and women and their families. To help us provide more RDKs, send a check made out to Military Ministry or give by credit card online or by phone. May God bless you! Military Ministry — A division of Campus Crusade for Christ, PO BOX 120124, Newport News, VA 23612-0124. Call 800-444-6006 or visit their website.
Special thanks to: Mike McCandless, Director of Development for Military Ministry, a Division of Campus Crusade for Christ International.
They are strengthening and encouraging the faith of Christian soldiers and leading many to Jesus. One chaplain in Iraq shares, "We see lives changed for
Christ on a daily basis." Your partnership in praying for and helping to place the message of Christ into the hands of our men and women in uniform is greatly appreciated. For more information on this vital outreach, please visit the Military Ministry website:
Do you have a testimony to share with us? You never know who will be touched by one word from the experiences you share. You may be planting the seed of God’s word into someone’s heart. Email us: kairoswomen@yahoo.com
CARRY EACH OTHER’S BURDENS

In prayer for our troops, another website was brought to my attention, and I contacted Jeff Johnson, Mission: Welcome Home Coordinator for the Wisconsin Department of Veterans Affairs. He gave permission to list the following information, which may fit into your ministry, women’s group or club. Families might use this as a resource to connect with their kids and assist them in doing the Lord’s work, and learning Christian principles. Are we not to “Carry each other’s burdens and in this way you will fulfill the law of Christ”? Galatians 6:2. Ask the Lord to lead you, and if you feel led to send a care package or a letter, please tuck in one of these newsletters, which you can print out. We sometimes think that we have nothing to contribute or nothing to give, but we can give our time and our encouragement. Use your talents and abilities as an extension of his grace. God bless you as you minister to the needs of others.

Links to Support Our Troops:
• AAFES Gift Certificates - The Army and Air Force Exchange Services is where most service men and women do their shopping. You can purchase gift certificates for those in Iraq and those hospitalized.
• Adopt a Platoon - has several ongoing projects to ensure that every soldier overseas does not walk away from mail call empty-handed.
• Operation Shoe Box - Send packages to troops.
• Give 2 The Troops - Send letters and messages to troops.
• Operation Paperback - Recycled reading for our troops.
• Cell Phones for Soldiers - Donated cell phones are recycled and turned into cash. The cash is used to purchase calling cards for soldiers in Iraq.
• Books For Soldiers - Help the troops escape boredom by donating some books. You can also donate DVDs and CDs requested by soldiers.
• Defend America - Thank any service member stationed throughout the U.S. and the world with an e-mail.
• Letters From Home - Support the troops by sending letters so that they receive mail.
• Operation Call Home - Operation Call Home's mission is to provide each platoon with its own satellite phone.
• Operation Hero Miles - You can donate your unused frequent flier miles to help soldiers travel on emergency leave. They are also used to help families fly to hospitalized soldiers.
• Operation Iraqi Children - Many soldiers are rebuilding schools in Iraq and scrounging around for school supplies. Help by donating a school supplies kit.
• Treats for Troops - Treats for Troops helps you provide packages to your loved ones overseas. If you don't know anyone, the Foster-A-Soldier Program matches you with a registered soldier by branch of service, home state, gender, or birthday - or you can choose to sponsor a group of soldiers.
• Special Operations Warrior Foundation - Helping the children of fallen Special Operations warriors.
• Hugs4SmilesUSA - Hugs4SmilesUSA will assign you a deployed soldier and/or family to send care packages and correspondence.
• American Red Cross Military Members and Families Services - Support the Red Cross's services for military members and their families.

To offer assistance, or for additional information, contact: Jeff Johnson, WDVA "Mission: Welcome Home" Coordinator, 1-800-WIS-VETS (947-8387). Click to email: Jeff Johnson
Please take a minute to visit these sites. You will be blessed.
LIFE WITH KIKI - DEVOTIONALS FROM A PET LOVER

If you read the first newsletter you read the story titled “Tweet, Tweet said the Lord.” Well, the Lord has continued to bless me with awesome truths from his word as I walk our pet poodle Kiki. That’s Kiki to the right. This will be a regular feature in our newsletter. As I was walking with Kiki, the Lord brought it to mind that a cell phone has functions that are very much like the Holy Spirit. You see, cell phones have internal functions such as leave a message, page or instant message, text message, etc. Have you ever tried to ignore your cell phone when a call comes in, even though you know the call is for you? It is the same when the Holy Spirit is impressing something upon you: unless you answer it, or check the number, it will keep alerting you to the fact that there is a call or message waiting to be answered. Has it happened to you that you are in a sticky situation and a verse comes to mind? Well, that is the Holy Spirit text messaging you. Likewise, just as you can access the directory of important numbers saved to the cell phone’s memory any time you want, so it is with the scriptures stored in your heart and mind. John 14:26 - But the Counselor, the Holy Spirit, whom the Father will send in my name, will teach you all things and will remind you of everything I have said to you. Is that your cell phone or mine ringing?
"Whenever the rainbow appears in the clouds, I will see it and remember the everlasting covenant between God and all living creatures of every kind on the earth." - Genesis 9:16
PROVERBS 31

If we study the Proverbs 31 woman we see that she was a wise, resourceful, Godly woman dedicated to her household with a positive attitude to glorify God. So this section is dedicated to the household.

******** RECIPE ********

Liz I., our Heart-to-Heart Group Leader, has a quick recipe suggestion. She says: “I am not sure if this is a common thing or not, but my friend Debbie came over one night and I did not have anything to offer her with her coffee. I decided to make some quick and easy donuts out of biscuits like my mother used to make for us when we were kids. All you need is a can of biscuits, sugar, cinnamon and oil. Take out the biscuits from the can, poke a hole (like a donut hole), and let stand for a minute while you pour oil in a pan and place on high/medium heat. While the oil is getting hot for deep frying, pour some sugar and cinnamon in a plate or bowl. Place biscuits in oil for approximately 30 seconds, turn over for another 20 seconds, remove from oil, and sprinkle with sugar and cinnamon. I served these donuts with a humble heart but they were very well received. My guests seemed to enjoy them. They were hot, homemade donuts. Hope you’ll try.”

******** CRAFT IDEA ********

A Sweet Fragrance unto the Lord and Our Home - Save those orange rinds and apple peels, take them and add a broken-up stick of cinnamon and/or cloves and place aside. Then take a brown paper bag and cut a circle the size of a saucer. Place the peels, rinds, and spices in the center of the cut-out circle. Gather up the sides, and bunch into a little pouch. Secure the packet with twine or other organic-type cord. Place these little packets in a basket or container near the fireplace, and pop one into the fire when you light it up on these cold days.
BEAUTY FOR ASHES—TESTIMONIAL CORNER
My Journey, My Testimony - Part II

As part of a Christian family I knew firsthand the power in prayer. I recall being taken to the hospital because I had severe pain in my left side. There wasn’t much the doctors could do for me, except prescribe anti-inflammatory medication or corticosteroid drugs that helped treat internal changes caused by lupus. I remember being in the hospital with severe pain and being wheeled into the room after having had a liver ultrasound. As I waited for the doctor to come in and give the results of the ultrasound, I remember repeating Psalm 31:1-2 over and over: In thee, O Lord, do I put my trust; let me never be ashamed: deliver me in thy righteousness. Bow down thine ear to me; deliver me speedily: be thou my strong rock, for a house of defense to save me. The doctor returned with discouraging results. He said there was a suspicious spot in my liver and I should follow up with my private physician. The following day, with the ultrasound results in hand, I visited my private doctor. He sent me out for another ultrasound, and a third. All three results came back with the same impression. He immediately sent me out for a biopsy to be performed in the hospital. I could no longer stand from the pain that had
stricken my body. I was totally bent forward from the excruciating pain and swelling. The doctor mentioned that if the biopsy was positive, I should consider a liver transplant. When I heard those words, “liver transplant”, I felt as if my world came crashing down. What was I to do if this were true? I had three young boys and a husband that depended on me. I cried out to my Lord, who is my defense, my rock, my strong tower, my deliverer. I knew that he was going to carry me out of the fire. He is the God that healed and had touched my body so many times before. I went home and cried out to him to do his will and use me as an instrument for his purpose, because I knew I had favor with the gatekeeper. God never failed me. I knew the truth regarding power in prayer, and the results are nothing less than miraculous. I asked my family and relatives to join me in fasting and prayer the day before I went in to have the biopsy performed. It was amazing how many people were praying for me. There were prayer warriors all over the nation. My petition went out over Christian radio. I waited on him for his healing power. I went to Him with all my weaknesses: physical, emotional and spiritual. I knew if I rested in His presence nothing was impossible for Him and he was able to do far beyond all that I asked or imagined.
Anxiety attempted to wedge its ugly head into my thoughts. I reminded myself to leave it in the hands of my Creator. Rather than trying to maintain control over my life, I abandoned myself to His will. Though this may have felt scary and dangerous, I knew that the safest place was to be in His will. The morning came and I was ready to go have the biopsy done. I remember the sun rays peeking through my bedroom window and brightening my room with much splendor. That is when I felt his promise that He was always with me, holding me by my right hand. I knew and felt the reassurance of His Presence and glorious hope of heaven despite the outcome. I drew strength from my faith in God’s love for me. I arrived at the hospital with my sister, Alma. I felt comfort knowing she would accompany me in my time of trouble. She managed to push me in the wheelchair through the glass doors of the hospital.
BEAUTY FOR ASHES—TESTIMONIAL CORNER (CONTINUED)
My Journey, My Testimony (Continued)

I was tired and had so much pain on my right side. As she pushed me along the corridors of the hospital it seemed dark and cold, but here the Lord’s light was to shine in transcendent splendor and give His angels charge over me.
I was wheeled into a room away from everybody I knew, but I kept thinking in my mind on my present journey, and enjoying His presence in the midst of the storm, recalling to walk by faith, not by sight, and trusting Him to open up the way before me.

When I awoke from having the procedure, I felt no pain and wondered what they had done. Soon a nurse came in and prompted me into a sitting position. I could not believe how well I felt and looked, as I was told. Hours passed and my sister wheeled me across the hospital to the doctor’s offices.

There was the doctor waiting to give me my results. I waited anxiously. As I entered the room, he sat behind his desk with my chart in front of him. He was serious and shaking his head as I reached his desk. I was confused. No words were spoken from either of us. He raised his hands in the air and blurted out, “There is no medical explanation. If I were you, I would go home and get on my knees and thank and praise your God.”

He is God, He is faithful. Faith will hold you and sustain you when you cannot go on; faith will stand in the gap and bring forth hope. Through this supernatural experience I learned that a thankful attitude opens windows of heaven. As you look up with a grateful heart, you get glimpses of his glory through those windows. Through my experiences, I already know the ultimate destination of my journey is my entrance into heaven. A life of praise and thankfulness becomes a life filled with miracles!

This is just one of the many miracles he has completed in my life. Please look for “My Journey, My Testimony” in each newsletter. By Mary Lou Barela Acedo
PRAYER CLOSET

“Is there no balm in Gilead; is there no physician there? Why then is not the health of the daughter of my people recovered?” - Jeremiah 8:22

This is in reference to Israel and its spiritual health, but today it is valid in so many ways. We all have prayer requests for healing, salvation, etc.
PRAYER CLOSET (CONTINUED)

In our first issue, we asked prayer for those who are homeless and less fortunate, that God would supply their needs, but most of all that he be manifested in their lives and that they come to call him Lord. We asked for prayer for family members who were ill and needed God’s healing touch. We also offered my cousin Topi up in prayer, and we are believing for a miracle and complete healing. Well, BREAKING NEWS: My cousin was released from the hospital, and where did she go Sunday, February 25th? CHURCH! You know man has counted her out, but God has her here. By man’s standard she should be at home or in the hospital, but I know God still has a plan for her. I believe there is an incredible outpouring of his Spirit before us, so let us come before the Lord with a fragrance of praise for all his benefits, and in so doing, also ask for his continued touch on those we love. Please pray for:
• Yolanda, diagnosed with Lymphoma Cancer
• Topi McMinn, with Multiple Myeloma and complications thereof
• Little Caelum and his family
• Our troops
• The President of the United States
LAYING CLAIM TO OUR HERITAGE - MY AUNT’S TESTIMONY

I had the privilege of interviewing my Aunt Esperanza. As I translated this story, I could not keep the tears from flowing. Tears of gratitude. Tears because this is the woman who believed for all of us in my family. My family being her children, her grandchildren, the children of her brother, and the children of her sisters, aunts and uncles, cousins, nieces, nephews, and so forth. Those in my family (meaning all of the above, including in-laws to all of the above) are the result of the promise given to a woman who dared to believe in God’s promise, and fifty-four years later, I have the great honor to share this story with you.

Alma: Tia (means Aunt in Spanish), tell us in what year you came to know the Lord and how you came to know him?

Tia Esperanza: It was in 1953, the 20th of November, a Sunday night, that the Lord saved me. That day was the most glorious of all my life. I did not want anything to do with Christians. I hated Christians. But one day, I woke up with this feeling of desperation and I felt that way all day. Well, the next day, I still had that feeling of desperation, and I felt as if I wanted to go outside and run. Later that night, I went to my bedroom to pray, and in that bedroom, I had some statues of saints. As I entered my bedroom, I heard a voice say: “Go to the Temple of Refuge”, which was a Christian church. Well, I thought I was hearing this due to the great desperation I felt. I stayed quiet for awhile, and then again the voice said to me, clearer this time, “Go to the Temple of Refuge.” Well, I could not disobey, and I got ready and went to see a neighbor friend and asked her to take me to the church. “You, Esperanza?”, she questioned. “Listen,” I said, “don’t ask questions, just please take me.” Well, she took me to the church, and I sat in the furthest seat in the back of the church. While seated there, I asked myself within, “What are you doing here? Why did you come to this place?”
“I don’t like it, I don’t like the church, and I don’t even like these people.”
While there, I was being careful that no one touched my young children who were seated with me, three of them on the same bench and one cradled in my arms. “What was I thinking?”, I asked myself. Well, that is the last thought I had, at about 9:00 p.m. that night, and when I opened my eyes I was at the foot of the altar, and it was midnight. There were rays of light surrounding me, pink in color; in fact they covered me, they covered all of me, all of my body. I could hear a choir of angels from on high, and I did not want to rise up from that place. When I finally got to my feet, I felt as if I were not the same woman. I felt something so beautiful inside me. I looked at the brothers and sisters in the Lord who surrounded me, and their faces were as those of porcelain dolls, whereas before when I looked at them, I would see frogs and toads. They were beautiful now. I wanted to hug each one, and they were all hugging and crying because this rebellious woman had been saved. The next day, the trees seemed greener, the sun brighter, and everything looked so beautiful to me. I had gotten up early, and went to the fence outside my house, and I stood there. At that time, I knew little English, almost nothing, but I stood there. At that time my neighborhood was composed of many races, and as people passed, I would not hesitate, and I would tell them, “I happy”, “I happy”! The people would just look at me and they surely thought I was a mental case, but I could not contain this joy inside of me. To everyone that passed, I said, “I happy”, “I happy”, but they did not understand. I want to tell you that from that glorious day, I have lived a new life, and I can tell you that now I am an older person, but I still feel that same joy I felt that day.

Alma: Tia, what promise did God give you for the salvation of your family?
Tia Esperanza: There is a verse in the Book of Acts, Acts 16:31 - "Believe in the Lord Jesus, and you will be saved, you and your household." Well, I clung to that verse, and still do to this day, because I knew and know that what HE promises, that HE will do. The Bible says in Numbers 23:19 - “God is not a man, that he should lie, nor a son of man, that he should change his mind. Does he speak and then not act? Does he promise and not fulfill?”
LAYING CLAIM TO OUR HERITAGE - TESTIMONY (CONTINUED)

When three months had passed since I accepted the Lord, I was going through some big difficulties, and it was about two or three in the morning, and I did not yet know much about God’s word, and as I was weeping, I heard a voice say, “Do not fear, for I am with you”.

Alma: Tia, what did you feel when you saw and continue to see God’s promise being fulfilled in your family, in your sister, my mother, and all that have come since then?

Tia Esperanza: For me, it has signified what he promised, and I believed in that verse in the Book of Acts, for my household. I know the Lord never fails. I know the promises he has given me, and up to this moment, and in every day that passes, I am closer to the Lord, and he has done great things in my life. (I, Alma, take the liberty to insert this verse here: Psalm 37:25 - I was young and now I am old, yet I have never seen the righteous forsaken or their children begging bread). Amen.

Tia Esperanza continues: What he has promised, sooner or later, comes to pass. Those words that I heard, the voice that said, “Do not fear, for I am with you”, I will not forget them. I knew I had the joy, and I had the peace, and I had something beautiful in my heart. He has been real in my life, and to this moment, he has been so good to me, so merciful, and I know that he shall continue to be.

Without my Aunt, this newsletter would not have a voice. The promise was fulfilled, and her reward for her faithfulness continues today. As mentioned elsewhere in this newsletter, Isaiah 55:11 (King James Version) - So shall my word be that goeth forth out of my mouth: it shall not return unto me void, but it shall accomplish that which I please, and it shall prosper in the thing whereto I sent it.
I can remember that she always asked, "What is the condition of your heart?" For 54 years, she believed for us, a light and inspiration. I can remember admiring her because she did not sway in her beliefs. I have witnessed years of faithfulness and grace. That was so striking. How many sorrows, how many put-downs has she had to endure? Countless. I can remember her being ostracized from the family, more than once. It has not been easy, but pursuing a vision never is. She has lived her life with a purpose: to one day see God, and for those she loves to see God also. Please realize that our todays are the harvest of what someone sowed yesterday. Our todays can determine someone else's tomorrows. Let this inspire you to believe for yours, as she continues to believe for all of hers to be joint heirs in Christ. She has lived her life with an eternal perspective. Her promise is also for her children, her grandchildren, and the children of the grandchildren. She continues her ministry. I pray that if the enemy has taken anything that God has faithfully promised her, we "take it back" for her and ourselves. One last thought: her name is Esperanza, meaning hope. Clearly God has called her by name in more ways than one.
Kairos-Women to Women
©2007 Kairos-Women to Women
WOMEN POSITIONED FOR THE KINGDOM
Women Positioned for the Kingdom
EMAIL: kairoswomen@yahoo.com WEBSITE: coming soon
STATEMENT OF FAITH
The purpose of this newsletter is to encourage and reach out to others and to ourselves in our Christian walk, to be prayer buddies to those in need, but most of all to spread the GOOD NEWS that Jesus LIVES. We believe that there is one God in three persons: the Father, the Son, and the Holy Spirit. We believe that the Bible is God's inspired word. We believe that if we confess with our mouth and believe in our heart that Jesus died on the cross to save us from our sins, and if we receive him as Lord and Saviour of our life, he will forgive us.
Let's Pray: Heavenly Father, I ask you to come into my life and forgive me of my sins so that I am born again. I believe you died on the cross and were raised from the dead, and I believe you forgive me of my sins.
PHOEBE’S HEART 2 Romans 16:1: Paul refers to Phoebe as a minister of the
church in Cenchrea. Translations show her as a servant, helper or deaconess.
If you know anyone who can use any of these items, please contact us at kairoswomen@yahoo.com or any of the women from HEART-TO-HEART, who will put us in contact with you. Information will be kept confidential. We also accept donations of any kind. You can contact us for drop-off information. We presently have:
• Gently used clothing in very good condition (dressy and casual); women's sizes 16, 18, large, x-large, 1X & 2X
• Clothing accessories
• Small house-ware accessories
BRETHREN’S MARKETPLACE Share your faith with your customers TranscriptionPlus Consultants (626) 284—5469 Back to the Garden of Eden (323) 265—1900
BARTER PLACE
KAIROS HUMOR SPOT
Trading our God-given talents or objects we have been blessed with, without exchange of any money:
• Full-size headboard, honey colored; no rails
Interested? Email us so we can forward your email.
Would you like to be added to our mailing list, or do you know anyone who you would like to receive our newsletter? Just send us an email with their name and their email address to: kairoswomen@yahoo.com
Have any ideas, or want to see something featured here? Let us know. Send emails to: kairoswomen@yahoo.com
Kairos - Woman To Women Magazine Article | https://issuu.com/aussiebard/docs/keihljm
Failing to plot data from Pandas Datafeed(*Please help QAQ stuck for 2 days)
- SeanytheCodie last edited by
Hi guys,
I'm fairly new to backtrader, and I was given this project to build a backtester for a simple strategy. I have already cropped out some data from my main data-sample set (it's about size 100, but I'll only post part of it here so it doesn't take too much space), and I have tried to search online as much as possible for this problem. The trouble starts when I try to execute the code below (I've also pasted my pandas dataframe):
My code:
from __future__ import (absolute_import, division, print_function,
                        unicode_literals)
import argparse
import backtrader as bt
import backtrader.feeds as btfeeds
import datetime
import pandas
class PandasData(bt.feeds.PandasData):
    params = (
        ('datetime', None),
        ('open', 'open'),
        ('high', 'high'),
        ('low', 'low'),
        ('close', 'close'),
        ('volume', None),
        ('openinterest', None),
    )
cerebro = bt.Cerebro()
file_path = 'tester_v1.csv'
btc_5min = pandas.read_csv(file_path,parse_dates=['date'],index_col=0)
print(btc_5min)
class quickStrategy(bt.Strategy):
    def __init__(self):
        self.dataclose = self.datas[0].close

    def next(self):
        if self.dataclose[0] < self.dataclose[-1]:
            if self.dataclose[-1] < self.dataclose[-2]:
                self.buy()
cerebro.addstrategy(quickStrategy)
data = btfeeds.PandasData(dataname = btc_5min,timeframe = bt.TimeFrame.Minutes)
cerebro.broker.set_cash(10000)
cerebro.adddata(data)
print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
cerebro.run()
cerebro.plot(volume=False)
print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
My pandas Dataframe:
date, open, high, low, close
2021-03-18 00:06:00,59427.64,59561.54,59159.92,59422.37
2021-03-18 00:01:00,59200.26,59267.62,58767.93,59191.35
2021-03-17 23:56:00,58734.82,58835.85,58626.63,58819.72
2021-03-17 23:51:00,58678.56,58734.0,58596.68,58688.36
2021-03-17 23:46:00,58585.78,58608.71,58468.66,58603.43
2021-03-17 23:41:00,58535.1,58592.0,58459.66,58481.24
2021-03-17 23:36:00,58459.41,58549.51,58313.17,58461.21
2021-03-17 23:31:00,58562.45,58612.8,58490.08,58540.19
I get this error below:
"Locator attempting to generate 9504000003 ticks ([81647999999.0, ..., 91152000001.0]), which exceeds Locator.MAXTICKS (1000). Killed: 9"
I have no idea what's going on. I would really appreciate any help or ideas on how to resolve it.
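One more thing I noticed while writing this up (not sure yet if it's related): my CSV rows run newest-first, and I believe time-series feeds generally expect oldest-first. With pandas this would just be `btc_5min.sort_index()`; here is the re-ordering I mean, sketched with only the standard library:

```python
import csv
import io
from datetime import datetime

# A small slice of the data above, newest-first as in my file.
sample = """date,open,high,low,close
2021-03-18 00:06:00,59427.64,59561.54,59159.92,59422.37
2021-03-18 00:01:00,59200.26,59267.62,58767.93,59191.35
2021-03-17 23:56:00,58734.82,58835.85,58626.63,58819.72
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Sort bars into ascending (oldest-first) datetime order.
rows.sort(key=lambda r: datetime.strptime(r["date"], "%Y-%m-%d %H:%M:%S"))

print(rows[0]["date"])  # 2021-03-17 23:56:00
```

If anyone can confirm whether the ordering matters for the plot, that would help too.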
Thanks a lot! | https://community.backtrader.com/topic/3633/failing-to-plot-data-from-pandas-datafeed-please-help-qaq-stuck-for-2-days/1 | CC-MAIN-2022-05 | refinedweb | 397 | 54.08 |
Armin Gayl 7 August 2018Beginner, Caché, Databases
Hello,
I would like to schedule the Database Compact and Freespace methods as legacy tasks.
Has anyone implemented this yet?
Is this even possible?
The reason for the request: we have 3 interfaces in a namespace whose messages are deleted after 7 days, while all other messages in this namespace should be kept for one year.
This leads to a certain fragmentation. Furthermore, the messages to be deleted are relatively large (MDM^T02 > 32MB), which in turn leads to a fast growth of the database size.
How would you solve this problem?
With kind regards
Armin Gayl | https://community.intersystems.com/post/database-freespace-compact-task | CC-MAIN-2019-43 | refinedweb | 108 | 76.32 |
25 August 2008
Event handling in ActionScript 3.0 depends heavily on the EventDispatcher class. Although this class isn't entirely new to ActionScript, it is the first time it has been included as a core part of the ActionScript language. You may also be familiar with EventDispatcher from JavaScript or ActionScript 2.0 when using V2 components. With V2 components, an external version of the EventDispatcher class was used to handle component events. This version is slightly different from the version of EventDispatcher used internally by ActionScript 3.0.
For those not familiar with EventDispatcher, the basic concept is this: First you create functions, or event handlers, to react to various events. Then you associate those functions with the events by using the
addEventListener() method, which is called from the object that will receive the event. This is similar to the normal, core process in ActionScript 2.0 (not using EventDispatcher). The difference is that in ActionScript 2.0, you define the event handler within the object receiving the event—giving the function the name of the event being received. For example, to react to an "onPress" event for a button named
submitButton in ActionScript 2.0, you would use:
submitButton.onPress = function() { ... }
Using EventDispatcher, the same elements are at play; an object receiving an event, an event name, and a function that reacts to an event—only the process is slightly different. The code using EventDispatcher looks like this:
function pressHandler(){ ... }
submitButton.addEventListener("onPress", pressHandler);
This process adds what appears to be an extra step, but it allows for more flexibility. Since you are using a function to add event handlers instead of defining them directly on the target object itself, you can now add as many handlers as you like to "listen" to a single event.
Removing events in ActionScript 2.0 just meant deleting the handler:
delete submitButton.onPress;
Using EventDispatcher, you use
removeEventListener(). This method removes an event listener that matches the same definition used in addEventListener (up to the third parameter).
submitButton.removeEventListener("onPress", pressHandler);
You may have noticed that the code snippets above do not explicitly reference EventDispatcher. In fact, it's rare that you would ever use EventDispatcher directly in your code. EventDispatcher, in ActionScript 3.0, is actually a base class, which other classes extend in order to be able to have access to
addEventListener and other EventDispatcher methods. In ActionScript 2.0, EventDispatcher was a mixin class.
This meant in order for it to give these methods to other objects,
EventDispatcher.initialize() was used to copy them from EventDispatcher into the desired object. Now classes just inherit the methods by extending EventDispatcher. Luckily most classes that need to use EventDispatcher in ActionScript 3.0, like MovieClip and other DisplayObjects already extend EventDispatcher making it accessible and easy to use (though if necessary, advanced users can also include EventDispatcher functionality through composition).
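Because EventDispatcher is now an ordinary base class, any class can join the event model simply by extending it. A minimal sketch (the Counter class and its use of Event.CHANGE are invented for illustration):

```actionscript
package {
    import flash.events.Event;
    import flash.events.EventDispatcher;

    // Hypothetical model class: dispatches Event.CHANGE whenever it changes.
    public class Counter extends EventDispatcher {
        private var _count:int = 0;

        public function get count():int {
            return _count;
        }

        public function increment():void {
            _count++;
            // Inherited from EventDispatcher; no mixin or initialize() call needed.
            dispatchEvent(new Event(Event.CHANGE));
        }
    }
}
```

A listener is then attached exactly as with display objects: counter.addEventListener(Event.CHANGE, changeHandler);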
Here is a summary of the methods in EventDispatcher for ActionScript 3.0. Many of these methods are similar to the methods in the ActionScript 2.0 version:
addEventListener(type:String, listener:Function, useCapture:Boolean = false, priority:int = 0, useWeakReference:Boolean = false):void
removeEventListener(type:String, listener:Function, useCapture:Boolean = false):void
dispatchEvent(event:Event):Boolean
hasEventListener(type:String):Boolean
willTrigger(type:String):Boolean
If you are familiar with ActionScript 2.0, you'll notice that there are two new methods,
hasEventListener and
willTrigger. Additionally,
addEventListener for ActionScript 3.0 now only allows functions as listeners, not objects (objects could be used as listeners in the older version). Since methods are now bound to their instances in ActionScript 3.0, there is essentially no need to use objects for listeners anymore. This means that the
this keyword in a function will always correctly reference the instance to which it was obtained. It also eliminates the need for the ActionScript 2.0 Delegate class.
addEventListener: registers a listener function to be called when the given type of event occurs on this object.
removeEventListener: removes a listener previously registered with addEventListener. The same first 3 arguments used in addEventListener must be used in removeEventListener to remove the correct handler.
dispatchEvent: dispatches the given event object, invoking the listeners registered for its type.
hasEventListener: checks whether this object has any listeners registered for a specific type of event.
willTrigger: similar to hasEventListener, but this method checks the current object as well as all objects that might be affected from the propagation of the event.
These methods, as well as any other function or method in ActionScript 3.0 language, can also be found in the ActionScript 3.0 Language Reference.
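The difference between the two new methods can be seen with a nested display object. In this sketch (the instance names are arbitrary), only the parent has a listener, yet a click on the child would still trigger it through propagation:

```actionscript
import flash.display.Sprite;
import flash.events.MouseEvent;

var parentSprite:Sprite = new Sprite();
var childSprite:Sprite = new Sprite();
parentSprite.addChild(childSprite);
addChild(parentSprite);

function parentClick(event:MouseEvent):void {
    trace("parent click");
}
parentSprite.addEventListener(MouseEvent.CLICK, parentClick);

trace(childSprite.hasEventListener(MouseEvent.CLICK)); // false: no listener on the child itself
trace(childSprite.willTrigger(MouseEvent.CLICK));      // true: the parent's listener would fire
```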
As a simple example, consider clicking on a square instance named "box" on the screen (see Figure 1). The goal of this example is to handle that event so that the text "click" is traced in the Output panel when the box is clicked with the mouse.
To create this test movie, draw a square on the stage, convert it to a movie clip symbol, and give the instance the name box in the Property inspector. Then place the following code on the main timeline (Flex users can use the MXML document that follows instead):
function clickHandler(event:MouseEvent):void {
    trace("click");
}
box.addEventListener(MouseEvent.CLICK, clickHandler);
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" applicationComplete="initApp()">
    <mx:Script>
        <![CDATA[
            public function clickHandler(event:MouseEvent):void {
                trace("click");
            }
            public function initApp():void {
                box.addEventListener(MouseEvent.CLICK, clickHandler);
            }
        ]]>
    </mx:Script>
    <!-- Canvas attribute values are representative; the originals were lost in extraction -->
    <mx:Canvas id="box" x="20" y="20" width="100" height="100" backgroundColor="#FF0000">
    </mx:Canvas>
</mx:Application>
Let's take a look at the script. The first step is to define the event handler (listener function). As with all events, this accepts in its parameter list a single event instance that's automatically passed into the function when called from an event dispatcher. After that, the function is set as a listener to the box instance—the event dispatcher—listening for a
MouseEvent.CLICK event using a basic
addEventListener call (this occurs in the
initApp method in Flex). Since the box is an instance of MovieClip (Canvas in Flex) it inherits from EventDispatcher and has access to all the EventDispatcher methods, including
addEventListener.
MouseEvent.CLICK is a constant variable defined in the MouseEvent class. It simply provides the string of the event; for
MouseEvent.CLICK that's "click." Other event types are also stored in similar constants in the MouseEvent class, as well as other event-related classes. Note that many of these have changed compared to their own ActionScript 2.0 counterparts. For example, rather than using the onPress event in ActionScript 2.0, you would use MouseEvent.MOUSE_DOWN (or "mouseDown") in ActionScript 3.0. You can find more of these distinctions within the ActionScript 3.0 Language Reference in the events package and in the different Event classes that exist within that package.
For this example you could just as easily have used "click" instead of
MouseEvent.CLICK, but using these constants helps you detect typos in your code. Mistyping the string "click," for example, would not result in a compile-time error since, as a string, Flash has no way of knowing whether or not its contents are accurate. If you misspell
MouseEvent.CLICK, however, Flash will be able to recognize the error and can throw a compile-time error. Most event classes have these constants that relate to their event type strings. It is highly recommended that you use them instead of the actual string itself.
Testing the movie will display a clickable box that, when clicked, traces the word "click".
Note: Flex users should be sure to test the movie using Debug so that the trace output can be captured by Flex Builder.
Although you might not have realized it in the previous example, the event that took place as a result of the box being clicked actually affects many different objects, not just the object being clicked. The big, new feature in ActionScript 3.0 event handling is the support for event propagation—the transference of a single event applying to multiple objects. Each of those objects receives the event, instead of just the object in which the event originated.
With ActionScript 2.0, there was no such thing as event propagation. In fact, you couldn't even have objects with certain event handlers associated with them inside another object that had its own event handler. For example, in ActionScript 2.0, if you were to assign an
onPress event handler to a window object that had a button within it, any
onPress (or similar) event handlers assigned to the button would not function and receive events. With ActionScript 3.0, you no longer have that issue. Instead, you have events that propagate through instances and their parents.
With event propagation you're dealing with three "phases" of an event (see Figure 2). Each phase represents a path or the location of an event as it works itself through the display objects in Flash that relate to that event. The three phases of an event are capturing, at target, and bubbling:
Not all propagated events (and not all events propagate) go through each phase, however. If the Stage object, for example, receives an event, there will only be an at target phase since there are no objects beyond the stage for the capturing or bubbling phases to take place.
Note: The hierarchy in Flex is a little different because it contains the Application instance between root and the box Canvas instance (not shown).
As the event makes its way though each phase and each object within those phases, it calls all of the listener functions that were added for that event. This means that clicking on the box doesn't limit the event to the box; the stage also receives the event. The stage receives the event twice, once in the capturing phase and once in the bubbling phase (see Figure 3).
You can see how this all works more clearly by adding more listeners to our example.
To see how a single mouse click propagates through many objects within the display list hierarchy, you can add additional listeners to receive the event for each of those objects it affects. For this example, however, we'll want to have listeners for all phases of the event. For this we need to make use of the third parameter in
addEventListener, the
useCapture parameter.
The
useCapture parameter in
addEventListener lets you specify whether or not a listener should be listening in the capture phase. If not, it will be listening for the event in the at target or bubbling phases. The default value of false sets a listener to listen to the at target and bubbling phases. By passing in a true value, you can listen to events in the capture phase. If you want an event to listen for an event in all phases, you simply use
addEventListener twice, once with
useCapture set to false (or omitted) and once with
useCapture set to true.
In this example (see Figure 4) we will add listeners for
stage,
root, and
box. For
stage and
root we will be adding event listeners that alternately use and don't use the
useCapture parameter.
function boxClick(event:MouseEvent):void {
    trace("box click");
}
function rootClick(event:MouseEvent):void {
    trace("root click");
}
function stageClick(event:MouseEvent):void {
    trace("stage click");
}
box.addEventListener(MouseEvent.CLICK, boxClick);
root.addEventListener(MouseEvent.CLICK, rootClick);
root.addEventListener(MouseEvent.CLICK, rootClick, true);
stage.addEventListener(MouseEvent.CLICK, stageClick);
stage.addEventListener(MouseEvent.CLICK, stageClick, true);
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" applicationComplete="initApp()">
    <mx:Script>
        <![CDATA[
            public function boxClick(event:MouseEvent):void {
                trace("box click");
            }
            public function rootClick(event:MouseEvent):void {
                trace("root click");
            }
            public function stageClick(event:MouseEvent):void {
                trace("stage click");
            }
            public function initApp():void {
                box.addEventListener(MouseEvent.CLICK, boxClick);
                root.addEventListener(MouseEvent.CLICK, rootClick);
                root.addEventListener(MouseEvent.CLICK, rootClick, true);
                stage.addEventListener(MouseEvent.CLICK, stageClick);
                stage.addEventListener(MouseEvent.CLICK, stageClick, true);
            }
        ]]>
    </mx:Script>
    <!-- Canvas attribute values are representative; the originals were lost in extraction -->
    <mx:Canvas id="box" x="20" y="20" width="100" height="100" backgroundColor="#FF0000">
    </mx:Canvas>
</mx:Application>
Note: Be sure to use the
applicationComplete event in Flex to assure access to
stage and
root within the Application script. The
stage and
root will not be accessible in the
creationComplete event.
Test this movie and click around to see the results. Clicking on the box should give you the following output:
stage click root click box click root click stage click
The output shows how both the stage and root objects received the event twice—one time each in the capturing and bubbling phases of the event. In contrast, the target of the event, the box, received the event only once in the at target phase. Try clicking anywhere off of the box (on the stage) and, if you are using Flash, you get an output of:
stage click
Because the stage is at the top of the hierarchy in the Flash movie, the only phase of a stage-based event is at target.
In Flex, when clicking the stage, you will get an output that looks more like this:
stage click root click root click stage click
This is a result of the intermediate application instance (
Application.application) containing the gradient background encompassing the entire area of the stage. By clicking the stage you're actually clicking this instance—which has no listeners—instead of the stage itself. If the intermediate application instance is not present, only the stage listener will be called.
Though event propagation is most prevalent in mouse events, it also occurs in other events such as keyboard events. Event propagation is also used in the added and removed events in DisplayObjectContainer instances, where child objects are added or removed from their display lists.
All events, like mouse clicks, start off in Flash with the process of event targeting. This is the process by which Flash determines which object is the target of the event (where the event originates). In the previous examples we've seen how Flash was able to determine whether or not you clicked on the box or the stage. For every mouse event, the event references one object—the object of highest arrangement capable of receiving the event (see Figure 5).
This behavior is almost the same as that seen in ActionScript 2.0. For example, in ActionScript 2.0, when clicking once on two buttons that are placed directly on top one another (assuming they both have event handlers assigned to them), only the topmost button will receive the event. This remains to be the case with ActionScript 3.0. Though both buttons are technically below the mouse during the click, only the topmost button receives the event because that is the object which is targeted for the event.
There is one difference with ActionScript 3.0: All display objects are, by default, enabled to receive mouse events. This means that even if no event handlers have been assigned to a particular display object, it will still be targeted for an event when clicked, preventing anything below it from receiving events. This is not the case with ActionScript 2.0. With ActionScript 2.0, movie clip instances are enabled to receive events only when an event handler like
onPress is assigned to them. Without such a handler, the instances are ignored by event targeting and those beneath can be targeted instead.
To get this behavior in ActionScript 3.0, use the
InteractiveObject mouseEnabled property. Setting it to
false will disable an interactive object instance from receiving mouse events and allows other instances below it to be targeted for mouse events:
myInteractiveObject.mouseEnabled = false;
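For example, with two overlapping clips (the instance names topBox and bottomBox are assumed), disabling the top one lets clicks fall through to the one beneath:

```actionscript
function topClick(event:MouseEvent):void {
    trace("top box clicked");
}
function bottomClick(event:MouseEvent):void {
    trace("bottom box clicked");
}
topBox.addEventListener(MouseEvent.CLICK, topClick);
bottomBox.addEventListener(MouseEvent.CLICK, bottomClick);

// With mouseEnabled set to false, topBox is ignored by event targeting,
// so clicking the overlap area now targets bottomBox instead.
topBox.mouseEnabled = false;
```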
Event objects are the objects listener functions receive as an argument when called during the occurrence of an event. Event objects in ActionScript 3.0 are similar to those used in the ActionScript 2.0 version of the EventDispatcher class. The main difference is that now they are a little more structured by having their own class (the Event class) and have additional properties to describe the event being handled.
The event objects received by listener functions are always of the type Event but can also be a subclass of Event rather than specifically being an Event instance. Common subclasses include MouseEvent for events associated with the mouse and KeyboardEvent for events associated with the keyboard. Each class also contains the event type constants used for listening to related events, e.g.
MouseEvent.CLICK.
The stage is a rather new concept in ActionScript 3.0, at least in terms of the stage being an object that can be referenced in code. The stage represents the topmost container of all display objects within a Flash movie, almost an equivalent of
_root in ActionScript 2.0. But with ActionScript 3.0, the
root (accessed via the DisplayObject root property, without an underscore) now exists within the stage object.
Additionally, stage targeting for mouse events does not depend on stage contents as is the case with other objects. With the box example, you have the basic hierarchy of stage > root > box (in Flex the hierarchy is stage > root > application > box). To click on the box and have Flash target the box for the click event, you need to click on the shape that makes up the box. Similarly, to click on the root object, you need to click on its contents, or the box instance. Clicking anywhere else will not be clicking on root since root consists only of the box. For the stage, however, you can click on the stage by clicking anywhere on the movie because the stage exists everywhere as a persistent background for the movie, even if there is no other content on the stage. In some cases (with mouse up and mouse move events for instance), the stage can even detect interaction outside of the movie. This behavior will be important when trying to achieve some tasks in ActionScript 3.0, as we will see.
Event objects carry a number of useful properties describing the event being handled:
type - the type of the event; this is the same string used when registering a listener with addEventListener. Ex: MouseEvent.CLICK.
target - the object targeted for the event, i.e. the object from which the event originated.
currentTarget - the object to which the currently executing listener was added; this changes as the event propagates through different objects.
eventPhase - the current phase of the event: EventPhase.CAPTURING_PHASE, EventPhase.AT_TARGET, and EventPhase.BUBBLING_PHASE, depending on which phase the listener is being called.
bubbles - indicates whether or not the event participates in a bubbling phase.
cancelable - indicates whether or not the event's default behavior can be canceled. For example, typing into an input text field normally makes the typed text appear; a listener for the text input (TextEvent.TEXT_INPUT) event can recognize when this happens and cancel this default behavior. In such a case, the cancelable property would have a value of true.
These properties can be useful in determining specific actions that need to be taken for various events.
One such use is detecting when the mouse is released outside of the object that was pressed (the behavior of the ActionScript 2.0 onReleaseOutside event). When the box is pressed, a mouse up listener is added to the stage, which receives mouse up events no matter where they occur. When the mouse is released, that listener checks whether the box is the target property of the event object. If so, the mouse was released over the original target. If not, the mouse was released outside of that object.

function boxDown(event:MouseEvent):void {
    trace("box down");
    stage.addEventListener(MouseEvent.MOUSE_UP, boxUpOutside);
}
function boxUpOutside(event:MouseEvent):void {
    if (event.target != box) {
        trace("box up outside");
    }
    stage.removeEventListener(MouseEvent.MOUSE_UP, boxUpOutside);
}
box.addEventListener(MouseEvent.MOUSE_DOWN, boxDown);
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" applicationComplete="initApp()">
    <mx:Script>
        <![CDATA[
            public function boxDown(event:MouseEvent):void {
                trace("box down");
                stage.addEventListener(MouseEvent.MOUSE_UP, boxUpOutside);
            }
            public function boxUpOutside(event:MouseEvent):void {
                if (event.target != box) {
                    trace("box up outside");
                }
                stage.removeEventListener(MouseEvent.MOUSE_UP, boxUpOutside);
            }
            public function initApp():void {
                box.addEventListener(MouseEvent.MOUSE_DOWN, boxDown);
            }
        ]]>
    </mx:Script>
    <!-- Canvas attribute values are representative; the originals were lost in extraction -->
    <mx:Canvas id="box" x="20" y="20" width="100" height="100" backgroundColor="#FF0000">
    </mx:Canvas>
</mx:Application>
There are also a few useful methods associated with Event objects. They include but are not limited to:
stopPropagation()
stopImmediatePropagation()
preventDefault()
isDefaultPrevented()
As with the properties, Event subclasses will often contain additional methods. Both KeyboardEvent and MouseEvent instances, for example, also have an
updateAfterEvent() method. This method allows you to redraw the screen after the event completes.
Here is a description of what some of these event methods can do:
stopPropagation: prevents the event from continuing on to any objects beyond the current one; any remaining listeners on the current object are still called.
stopImmediatePropagation: similar to stopPropagation, except that stopPropagation will not prevent additional listeners in the current object from being called (if there is more than one listener listening for the same event in the same object), while stopImmediatePropagation prevents those as well.
preventDefault: cancels the event's default behavior, assuming the event is cancelable.
isDefaultPrevented: returns true or false depending on whether or not preventDefault has been called for the current event (either in the current listener or any previous listener that has also been called in response to this event).
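As a sketch of preventDefault and isDefaultPrevented working together (the input TextField instance name field is assumed):

```actionscript
import flash.events.TextEvent;

function inputHandler(event:TextEvent):void {
    // TextEvent.TEXT_INPUT is cancelable: preventing the default behavior
    // stops the typed character from ever appearing in the field.
    if (event.text == "x") {
        event.preventDefault();
        trace("blocked: " + event.isDefaultPrevented()); // blocked: true
    }
}
field.addEventListener(TextEvent.TEXT_INPUT, inputHandler);
```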
As I mentioned earlier, mouse events are inherently enabled for all interactive objects in ActionScript 3.0. You can disable mouse interaction by setting their
mouseEnabled property to
false. Additionally, there's a similar property for display object containers that allow you to disable mouse events for all children of that object,
mouseChildren. By setting
mouseChildren to
false, you can effectively prevent the mouse from being enabled for all instances within any display object container.
However, if you only want to disable certain mouse events for a collection of objects within a container, you'll need to take an alternate approach. In this situation, you'll use an event listener in the target parent instance listening for the event to be disabled and have that listener stop propagation of that event. This prevents the listeners working for objects within that container from being called.
In our next example we'll create two buttons in a container window. We will set their click actions enabled or disabled from another button outside of the container. The third external button sets a property that determines whether or not the container will stop propagation for the click event when detected in the capture phase of the click event.
var clickEnabled:Boolean = true;

function clickHandler(event:MouseEvent):void {
    trace("click: " + event.currentTarget.name);
}
function toggleEnabled(event:MouseEvent):void {
    clickEnabled = !clickEnabled;
}
function disableClickHandler(event:MouseEvent):void {
    if (clickEnabled == false) {
        event.stopPropagation();
    }
}

window.button1.addEventListener(MouseEvent.CLICK, clickHandler);
window.button2.addEventListener(MouseEvent.CLICK, clickHandler);
window.addEventListener(MouseEvent.CLICK, disableClickHandler, true);
enabler.addEventListener(MouseEvent.CLICK, toggleEnabled);
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" applicationComplete="initApp()">
    <mx:Script>
        <![CDATA[
            public var clickEnabled:Boolean = true;
            public function clickHandler(event:MouseEvent):void {
                trace("click: " + event.currentTarget.name);
            }
            public function toggleEnabled(event:MouseEvent):void {
                clickEnabled = !clickEnabled;
            }
            public function disableClickHandler(event:MouseEvent):void {
                if (clickEnabled == false) {
                    event.stopPropagation();
                }
            }
            public function initApp():void {
                button1.addEventListener(MouseEvent.CLICK, clickHandler);
                button2.addEventListener(MouseEvent.CLICK, clickHandler);
                container.addEventListener(MouseEvent.CLICK, disableClickHandler, true);
                enabler.addEventListener(MouseEvent.CLICK, toggleEnabled);
            }
        ]]>
    </mx:Script>
    <!-- Component attribute values are representative; the originals were lost in extraction -->
    <mx:Panel id="container" title="Window" x="20" y="20">
        <mx:Button id="button1" label="Button 1"/>
        <mx:Button id="button2" label="Button 2"/>
    </mx:Panel>
    <mx:Button id="enabler" label="Toggle Clicks" x="20" y="240"/>
</mx:Application>
Test the movie to see how the enabler button uses the
clickEnabled variable to disable or enable the buttons within the container from receiving the click event. This technique allows mouse events like roll over to continue to function for the buttons within the window, because the
clickEnabled variable is only disabling the click events. The roll over mouse event won't work if the
mouseChildren property is set to false for the container.
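By contrast, when all mouse interaction for a container's children should be suspended at once, roll overs included, mouseChildren is the simpler tool:

```actionscript
// Disables every mouse event for the window's children in one step,
// unlike the stopPropagation technique, which can target a single event type.
window.mouseChildren = false;

// ...and later, to restore interaction:
window.mouseChildren = true;
```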
You can capture events with the EventDispatcher in ActionScript 3.0. You can also use it to create your own events. This encompasses dispatching new or existing events, creating new types of events, and defining new event classes (based on the Event class) whose instances are to be passed to event handlers listening for that event.
To dispatch events manually, you use the
dispatchEvent method of EventDispatcher. When calling
dispatchEvent you pass an event object that describes the event being dispatched. This event then makes its way through all valid targets (multiple if propagated) causing any event handlers assigned as listeners to those targets to be called (if they are listening for that particular type of event). When the handlers are called, they each receive the event object passed to dispatchEvent:
target.dispatchEvent(new Event("type"));
New event instances are created with a type parameter and optional bubbles and cancelable parameters. By default, at least in the Event class, both bubbles and cancelable are false if not explicitly passed in as true. Subclasses of the Event class like MouseEvent accept even more parameters, and in the case of the MouseEvent class, the default setting for bubbles is true. For more information on using MouseEvent and other Event classes, see the ActionScript 3.0 Language Reference.
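A minimal round trip with dispatchEvent might look like this (Event.COMPLETE is used purely as an example type):

```actionscript
import flash.events.Event;
import flash.events.EventDispatcher;

var dispatcher:EventDispatcher = new EventDispatcher();

function completeHandler(event:Event):void {
    trace("received: " + event.type); // received: complete
}
dispatcher.addEventListener(Event.COMPLETE, completeHandler);

// dispatchEvent returns false only if a listener called preventDefault
// on a cancelable event; here it returns true.
var notCanceled:Boolean = dispatcher.dispatchEvent(new Event(Event.COMPLETE));
```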
You can create your own event classes by extending the core Event class. These custom subclasses can be used in dispatching your own custom events and have properties of your own choosing. In extending the Event class, however, you will want to be sure to override the default method and implement your own
clone() method. Although it is not necessary in all situations, the clone method is sometimes internally used by Flash to copy event instances. If the clone method does not create an accurate copy of the event instance being cloned, an error will occur.
Custom events are useful for indicating events which are not inherently recognized within Flash player. By making your own Event classes, you can provide handlers for those events by specifying additional information relating to your custom event. This example will use a custom BounceEvent class that will be used with a bounce event to indicate when a box on the screen has bounced off the edge of the screen.
The BounceEvent class, as with ActionScript 2.0 classes, will be defined in an external ActionScript file. In addition to your normal event properties, which are all automatically inherited by extending the Event class, the BounceEvent class includes an additional side property allowing the user to determine which side of the screen the bouncing object originated from.
In addition to custom properties, it is a best practice to provide type constants (e.g.
MouseEvent.MOUSE_DOWN) in your custom event classes that correspond to the different types used with the event. The BounceEvent class will use one type of event named "bounce" which will be stored in the BOUNCE constant. Here is the class definition:
package { import flash.events.Event; public class BounceEvent extends Event { public static const BOUNCE:String = "bounce"; private var _side:String = "none"; public function get side():String { return _side; } public function BounceEvent(type:String, side:String){ super(type, true); _side = side; } public override function clone():Event { return new BounceEvent(type, _side); } } }
Notice that a clone function was also included in the BounceEvent definition. Although the returned value is a BounceEvent instance and not specifically an Event instance, it's acceptable since BounceEvent extends Event, making it of the type Event. (Event is required to be the return type of the clone function since overrides must have the same function signature as the original methods they override).
This class can now be used to create event instances to be dispatched when a box bounces. Event handlers listening to the bounce event can receive the BounceEvents relating to that event.
stage.scaleMode = StageScaleMode.NO_SCALE; stage.align = StageAlign.TOP_LEFT; var velocityX:Number = 5; var velocityY:Number = 7; var padding:Number = 10; function bounceHandler(event:BounceEvent):void { trace("Bounce on " + event.side + " side"); }")); } } box.addEventListener(Event.ENTER_FRAME, moveBox); addEventListener(BounceEvent.BOUNCE, bounceHandler);
<?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <mx:Script> <![CDATA[ public var velocityX:Number = 5; public var velocityY:Number = 7; public var padding:Number = 10; public function bounceHandler(event:BounceEvent):void { trace("Bounce on " + event.side + " side"); } public")); } } public function initApp():void { box.addEventListener(Event.ENTER_FRAME, moveBox); addEventListener(BounceEvent.BOUNCE, bounceHandler); } ]]> </mx:Script> <mx:Canvas </mx:Canvas> </mx:Application>
Test the movie. As the box bounces off of the sides of the stage, you should see the following output:
Bounce on bottom side Bounce on right side Bounce on top side Bounce on left side Bounce on bottom side ...
This is a result of the
bounceHandler event handler function being called on each bounce event. The bounce event is being dispatched within the
moveBox function (which is an event handler for the enter frame event) each time the box instance is given a new direction using the
dispatchEvent method. For example, let's review this line of code:
box.dispatchEvent(new BounceEvent(BounceEvent.BOUNCE, "left"));
This dispatches a new bounce event from the box instance to all listeners which are able to determine that the side of the bounce was "left" using the custom side property from the event object it was passed.
You may notice that even though the event was dispatched from box, the
bounceHandler (which was not a listener of the box instance) was still able to receive the event. This occurs because in the BounceEvent definition, the call to
super() in the constructor (which runs the Event constructor), was given a second argument of true—indicating that the event bubbles. That allowed propagation to pass the event to all parents of the box including the one in which
bounceHandler was listening (root in Flash, application in Flex).
If you're new to ActionScript 3.0, or very accustomed to ActionScript 2.0 events, you may find it difficult to get used to the new behaviors of the new event system. In time it will become easier and you'll be able to fully appreciate the level of control available. Here are a few additional tips and precautions to keep in mind as you work with events in ActionScript 3.0:
eventPhaseproperty of the Event object, you can get helpful feedback and prevent unwanted event propagation. If the
eventPhaseequals
EventPhase.AT_TARGETthen you can determine the event's target.
MouseEvent.ROLL_OVERand
MouseEvent.ROLL_OUT, and
MouseEvent.MOUSE_OVERand
MouseEvent.MOUSE_OUT. The difference is that the roll events won't bubble. This means you won't have confusion between mouse over and out events being propagated from an object's children. For other events, you can use the
mouseChildrenproperty to help prevent propagation confliction for mouse events.
mouseEnabledproperty of that object to
false.
stageproperty is only accessible from display objects when they are within an active display list or rather, a display list that is attached to the stage and visible on the screen. If a display object is not contained within such a display list, its
stageproperty is null. The same behavior applies to the
rootproperty as well.
Event.ADDEDand
Event.REMOVED) bubble just like mouse events, but remember that capturing and bubbling only goes through parents, not children. Children of an instance will not receive a parent's events. This means there is no easy way for children of a container to know if that container (or any of its parents) has been added to an active display list, thereby granting them to the
stageand
rootproperties of DisplayObject. The solution for this is using the
Event.ADDED_TO_STAGEand
Event.REMOVED_FROM_STAGEevents added in Flash Player version 9,0,28,0 (released on November 14, 2006). These events let any display object know when it has been added or removed to or from an active display list, thereby allowing it to know if
stageand
rootare accessible.
MouseEvent.MOUSE_MOVE) in ActionScript 3.0 is only invoked when the mouse is over the object receiving the event. If you are using this event to move or drag objects on the screen, you might find that the event is lost if the mouse is inadvertently moved off the area of the object. To assure a consistent mouse move event, add listeners to the stage.
stopImmediatePropagation()), you can use the fourth parameter in
addEventListenerwhich lets you set priority for the listener. Listeners with the highest priority are always called first.
When you learn any new development strategies, it is useful to build small test files—like the examples described in this tutorial. Isolate the functionality and try passing different values into the parameters to get a better understanding for how events are controlled and captured in ActionScript 3.0. Try investigating different scenarios, such as placing objects on top of each other, to see how the new event handling differs from ActionScript 2.0. Tracing the results to the Output panel will give you immediate feedback regarding the events received.
Once you become familiar with how the new EventDispatcher behaves, you can begin extending the Event class to create your own custom classes by defining them in an external ActionScript file. Experiment with creating custom subclasses to dispatch your own custom events with unique properties. By making your own Event classes, you can use the default parameters and provide new handlers for a custom event not currently handled by Flash Player. This strategy is very useful for determining the location of an object in relation to the screen and for enabling advanced user interactivity. Once you master using the new event handling features in ActionScript 3.0, you will find that there are many new development possibilities and increased control of input tracking.
Be sure to download and explore the sample files provided at the beginning of this tutorial to review the concepts discussed here. As always, it is a good idea to become familiar with the event-related classes and event handling information available in the ActionScript 3.0 Language Reference. | https://www.adobe.com/devnet/actionscript/articles/event_handling_as3.html | CC-MAIN-2015-48 | refinedweb | 5,176 | 55.24 |
Open share dialog
Can I have a script generate a PIL Image, then open the standard iOS Share menu so the image can be opened in another app?
You can use the Open in… menu like this:
import Image import console img = Image.new('RGB', (100, 100), 'green') img.save('image.png') console.open_in('image.png')
Hmm.... This crashes pythonista for me. Running on iPad mini 2
It's actually the save line that causes the crash.
It's a problem with PNGs. JPEGs work fine.
PNG images work fine for me in the latest beta. Perhaps a 1.5 bug fixed in 1.6?
@Gerzer maybe.
@omz can I display a dialog that is the share dialog? The one that includes mail, message, save image, and open in? At a higher level than "open in" | https://forum.omz-software.com/topic/1871/open-share-dialog | CC-MAIN-2022-05 | refinedweb | 136 | 78.75 |
SDM-IO ultrasonic.
Hardware Installation
A short ultrasonic pulse is transmitted at the time T1, reflected by an object. The senor receives this signal and converts it to an electric signal. The next pulse can be transmitted when the echo is faded away. This time period is called cycle period. The recommend cycle period should be no less than 10ms. If a 10μs width trigger pulse is sent to the signal pin, the Ultrasonic module will output some 40kHz ultrasonic signal and detect the echo back, each back pulse width is 150us so there is different to HC-SR04 module.The measured distance we can’t use the echo pulse width to calculate by the formula, but can be calculated by
Formula: distance = (T2 – T1 – 250Us) * (High speed of sound(340M/S)) / 2 // 250us is circuit delay’s time.
The arduino demo you can reference to HC-SR04, but you should note that transmitted signal is LOW to HIGH trigger and you should not use pulseIn() function for the pulse width, however you need start a Timer for count T1 and T2.
The following is a demo for 51:
#include "reg51.h" #include "sio.h" sbit TRIG = P2^7; sbit ECHO = P2^6; #define XTAL 19660800L #define PERIOD12MS (12L * XTAL / 12L / 256L / 1000L) void delay(unsigned int t) { while(t--) ; } void main (void) { EA = 0; TMOD &= ~0x0F; // clear timer 0 mode bits TMOD |= 0x01; // put timer 0 into MODE 1, 16bit com_initialize (); /* initialize interrupt driven serial I/O */ com_baudrate (14400); /* setup for 14400 baud */ EA = 1; // Enable Interrupts while (1) { START: TR0 = 0; TH0 = 0; TL0 = 0; TRIG = 0; //Sends a negative pulse, delay(100); TRIG = 1; //start detect TR0 = 1; //start timer0 while (ECHO) //listen ECHO signal { if (TH0 >= PERIOD12MS) //The cycle period timeout goto START; } TR0 = 0; //stop timer0 com_putchar(TH0); //printf com_putchar(TL0); TR0 = 1; while (TH0 < PERIOD12MS) ; //keep 12ms cycle period } }
Download C51 Demo from here.
Join the discussion 4 Comments
more information? and tutorial for arduino.
[Reply]
dany Reply:
July 16th, 2013 at 11:30 am
Thank you very much for your attention, and we are trying our best to add tutorial for many products researched by Elecfreaks, especially some a little bit difficult ones, could you tell us currently which tutorials you need most? We will take your idea into consideration.
[Reply]
I can’t seem to properly browse this post from my droid!!!!
[Reply]
robi Reply:
May 15th, 2011 at 11:21 am
Maybe incompatible about browser,We suggest using Firefox to try again.
[Reply] | http://www.elecfreaks.com/264.html | CC-MAIN-2018-17 | refinedweb | 422 | 64.54 |
honey the codewitch wrote: a fair amount of confusion
honey the codewitch wrote:I don't care what pronouns you use for me. Use whatever makes the most sense to you.
my gender is bees.
Quote: "Hey honey, take a walk on the wild side"
honey the codewitch wrote:my gender is bees.
def printGlobal():
print(str(extra))
extra = 35
printGlobal() # prints 35
extra = "Python are stupid."
class Arsinine:
def __init__(self):
print(extra)
a = Arsinine() # prints Python are stupid.
extra
raddevus wrote:843 People Upvoted this Comment
ZurdoDev wrote:844 now.
raddevus wrote:I'm putting you in a special box.
ZurdoDev wrote:But in seriousness, there are times when globals make sense.
0x01AA wrote:No
0x01AA wrote:but also injection is a kind of global behavior in the broadest sense
Greg Utas wrote:Global constants, yes.
Global variables, no.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Lounge.aspx?fid=1159&df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=5726080&fr=15775 | CC-MAIN-2020-50 | refinedweb | 171 | 67.35 |
Yet Another GMAIL client, using AsyncIO
Project description
aioyagmail -- Yet Another GMAIL/SMTP client, using AsyncIO
The goal here is to make it as simple and painless as possible to send emails using asyncio.
In the end, your code will look something like this:
import asyncio from aioyagmail import AIOSMTP loop = asyncio.get_event_loop() async def send_single(): # walks you through oauth2 process if no file at this location async with AIOSMTP(oauth2_file="~/oauth2_gmail.json") as yag: await yag.send(to="someone@gmail.com", subject="hi") async def send_multi(): async with AIOSMTP(oauth2_file="~/oauth2_gmail.json") as yag: # Runs asynchronously! await asyncio.gather(yag.send(subject="1"), yag.send(subject="2"), yag.send(subject="3")) loop.run_until_complete(send_single()) loop.run_until_complete(send_multi())
Username and password
It is possible like in
yagmail to use username and password, but this is not actively encouraged anymore.
See how to do it.
For more information
Have a look at
yagmail. Any issue NOT related to async should be posted there (or found out about).
Word of caution
Watch out that gmail does not block you for spamming. Using async you could potentially be sending emails too rapidly.
Donate
If you like
aioyagmail, feel free (no pun intended) to donate any amount you'd like :-)
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/aioyagmail/ | CC-MAIN-2021-43 | refinedweb | 235 | 56.66 |
Prev
C++ VC ATL STL XML Experts Index
Headers
Your browser does not support iframes.
Re: Linux programming, is there any C++?
From:
James Kanze <james.kanze@gmail.com>
Newsgroups:
comp.lang.c++
Date:
Thu, 21 Feb 2008 03:14:16 -0800 (PST)
Message-ID:
<01ff81a1-06d5-4000-bb48-a9ae4dad8978@72g2000hsu.googlegroups.com>
On Feb 20, 11:36 pm, Jeff Schwab <j...@schwabcenter.com> wrote:
James Kanze wrote:
You might want to take a look at OSE
(). IMHO, a lot better designed and
easier to use than the STL. Above all, a different approach.
It tend to use STL mainly as the low level tools, over which I
build the library I actually use. OSE is usable directly.
Seems like it has special support for Python. Speaking of stuff that's
"in the air," it sure seems like Python is rapidly becoming the de facto
standard scripting/dynamic language for interfacing to programs written
in C++. Now I just have to convince the clients that they don't really
want all that legacy code they've written in a half dozen other
scripting languages, and that it's time to learn yet another...
I've heard a lot of good things about Python. On the other
hand, I learned scripting back before even perl existed. Since
scripting is not an important enough part of my activity to
justify effort to continuously learn new things, and what I know
suffices for what I do, I still use mainly grep, awk and sed.
(Even before templates were added to the language, people
were simulating them with macros.)
I don't know about "most people," but there was a relatively advanced
technice that I have used in C called XInclude:
#define ELEM_T int
# include "list.h"
#endif
s/endif/undef ELEM_T/ (my bad)
It's a far cry from what C++ templates give you. Googling
XInclude just turns up something related to XML-related
processing. Googling XInclude -XML also fails to turn up
the XInclude pattern. It's still not an especially
well-known practice.
Try googling for <generic.h>:-).
Lots of <generic.h>s, but none that look like precursors to templates.
I was thinking of headers that defined a bunch of type-specific
structures and functions by being included multiple times, each time
with a set of macros representing a different static type. They mention
it briefly here:
Is it that long ago, that no one still has explinations of how
to use it. Basically, <generic.h> (part of the standard library
which came with CFront) defined macros which allowed you to
write things like:
#define MyClassdeclare(T) \
...
#define MyClassdefine(T) \
...
The user then wrote:
declare( MyClass, T )
and got the declaration for MyClass for type T, and
define( MyClass, T )
to get the implementation. (It may have been implement, rather
than define. It's been awhile.)
If I were going to write something with a GUI, I'd probably give
wxWidgets a trial. On the other hand, GUI's are something that
Java actually does quite well. (More because Swing is well
designed, that because of anything in the language itself.)
I like Swing, too, although the handful of GUI experts I know
still seem wary of it. I haven't used wxWidgets, but I hear
mixed reviews.
I've only taken a quick glance, and didn't particularly like
what I saw, but it wasn't enough to fairly judge. The fact
remains that in practice, it and Qt are the only widely used
libraries, and Qt requires a pre-processor.
The only other "iterators" I was using at the time were
hateful little C-style things that were intended to work like
this:
some_lib_iter* iter = some_lib_create_iter(some_lib_some_contain=
er);
while (!some_lib_iter_done(iter)) {
some_item* item = (some_item)some_lib_iter_next(iter);
// ...
}
By the way, I'm currently using a recently written,
professional, industry-specific C++ library that supports
almost the same idiom, and I still don't like it.
It's very close to the USL idiom:-). And the Java one. And
yes, combining advancing and accessing in a single function is
NOT a good idea.
Do you use istream_iterator?
At times. Most of the time, however, my input requires somewhat
more complex parsing than you can get from an istream_iterator.
[...]
So how to you write a function which returns a range,
I don't think I've ever needed to.
It would seem to occur naturally fairly often as a result of
functional decomposition. I was using the GoF iterator pattern
long before I'd heard of the STL, with filtering iterators and
functions returning custom iterators as part of the package.
If I did, I'd probably follow the STL approach of returning a
std::pair (like std::equal_range).
Which can't be used directly as an argument for the next
function, so you can't chain.
and use the return value of that function as the argument to
a function which takes a range? Or how do you use the
decorator pattern on an iterator, to provide a filtering
iterator?
That, I've done, and with some success. I didn't come across
any particular problems (or if I did, they're so subtle I
still don't see them). You have the outer, decorating
iterator, and the inner iterator whose type is a template
parameter. Intercept all increment/dereference/etc. calls,
and provide whatever delegation and decoration are necessary.
No fuss, no muss. Clean, simple client code.
Except that the incrementation operator will typically want to
increment the decorated iterator more than once, and needs to
know the end, to avoid real problems.
Try writing an iterator which will iterate over the odd values
in a container of it, for example. Or one that will iterator
over the values outside a given range in a container of double.
In general, a filtering iterator must contain both the current
and the end iterators of what it's iterating over.
I guess you're not a big fan of STL-style iterators, but I
still love them.
I guess if I'd never known anything else, they wouldn't seem so
bad.
[...]
Seriously, the problem with std::string is that it is sort of a
bastard---it's too close to an STL container of charT to be an
effective abstraction of text, and it adds a bit too much which
is text oriented to be truly an STL container.
I don't see those as contradictory goals. Any representation
of text is effectively a container of characters.
And there is no basic type in C++ which represents a character.
Text is hard. Very hard, since it was designed by and for
humans, not machines. And humans are a lot more flexible than
machines. (There's also the fact that text is in two
dimensions, rather than one, and that it is graphical. I'm not
sure to what degree a string class should take that into
account, however.)
In its defense: even today, I'm not sure what a good abstraction
of text should support.
Right, there still does not seem to be any widespread agreement on that.
It's probably a good idea to keep the C++ standard string class
interface minimal, until C++ developers know what they really want.
Agreed. My real complaint about std::string is that it is too
heavy, not that it is missing features. I'd rather see it as
"just" an STL container. But then, what separates it from
std::vector<char>? Suppose we provide an overloaded operator+=,
operator+ and a replace function for vector, and all of the rest
of the functionality as external functions. (If I consider my
pre-standard string class, only two functions---other than
construction, assignment and destruction, of course---weren't
implemented in terms of other functions. Everything I did with
the string was defined in terms of replace or extract.)
Case-insensitive compare is covered in plenty of
introductory C++ texts, because it's one of the easiest
things to show people.
Case insensitive compare is one of the most difficult
problems I know. Just specifying it is extremely difficult.
Depends what you mean by it. What most new C++ developers
mean by it is a pretty simple idea, and a FAQ. Plenty of
string classes that have alleged case-insensitive comparison
functions actually provide only the toupper-each-char
implementation.
The problem is that toupper-each-char isn't implementable. At
least for any usable definition of toupper. What's toupper('=DF')
supposed to return?
The real problem with case insensitive comparison, of course, is
that it isn't defined. You can't write a function to implement
it, because you don't know what that function really should do.
(And of course, what it should do depends on the locale. In
France, '=E4' compares equal to 'A', in Germany, it should collate
as "AE". Except, of course, that in France, it would compare
greater than 'A' if the two strings were otherwise equal. And
in Germany, there are actually several different standards for
ordering.)
If you're talking about an industrial-strength, portable
implementation, then of course it gets complicated, as do all
natural-language related issues.
As you say: natural-language related issues. That's the
problem.
If you have a copy of Effective STL handy: The simple case is
covered by Item 35, and the complicated case is Appendix A,
which is the Matt Austern article from the May 2000 C++
This article is getting a little long in the tooth; has
anything really changed? The only new info I've seen is
library-specific documentation (ICU and Qt).
Well, Matt does seem to ignore the fact that toupper and tolower
not only aren't bijections, but they aren't one to one. As I
said, in German, tolower( '=DF' ) must return a two character
sequence. It also ignores the fact that many characters require
two units to be represented---even in char32_t (32 bit Unicode).
And that frequently, a single character will have several
possible representations, using different numbers of units: in
Unicode, "\u00D4" and "\u006F\u0302" must compare equal. (Both
represent a capital O with a circumflex accent.)
[...]
(Note that you're certainly
not alone in this. The toupper and tolower functions in C and
in C++ all suppose a one to one mapping, which doesn't
correspond to the real world, and every time I integrated my
pre-standard string class into a project, I had to add a
non-const []---although the class supported an lvalue substring
replace:
s.substring( 3, 5 ) = "abcd" ;
was the equivalent of
s = s.replace( 3, 5, "abcd" ) ;
.)
Whether toupper and tolower are correct is a completely
orthogonal issue to whether it makes sense for the string
class to have array-style character indexing.
The question is: when could you use a non-const [] on a string,
if even for case conversions, it's wrong? Is there ever a case
where you can guarantee that replacing a single char with
another single char is correct. (There may be a few, e.g.
replacing the characters in a password---required to be US
ASCII---with '*'s. But they're very few.)
(And of course, the [] operator of std::string gives you
access to the underlying bytes, not the characters.)
But that makes sense for that particular abstraction, because
std::string is a typedef meant to represent the common case of
characters that fit within bytes.
It's such a common case that it doesn't exist in the real world.
If the idea of a character is too complex to be represented by
a char or wchar_t, then it merits its own, dedicated type,
with support for conversions, normalization, etc.
You said it above: it's a natural-language related issue. Thus,
by definition, extremely difficult and complicated.
That's a sometimes-true but fundamentally misleading
statement. If you have a character type that serves better
than char or wchar_t, you're free to instantiate basic_string
with it, specialize char_traits for it, and generally define
your own character type.
Are you kidding. Have you ever tried this?
Yes, and it seemed to work well. It never got released in
production code though, because there just wasn't any need for
it.
You mean you redefined everything necessary, all of the facets
in locale, etc., and everything necessary for iostream to work?
But that's not the problem. I usually use UTF-8, which fits
nicely in a char. But [] won't return a character.
What do you mean? std::basic_string::operator[] returns a
reference-to-character, as defined by the character and traits types
with which basic_string was instantiated.
No. basic_string::operator[] returns a reference to charT.
(With the requirement that charT be either char, wchar_t or a
user defined POD type.) A character is something more
complicated than that.
The lack of a real Unicode character type in the standard
library is a valid weakness, but not a fundamental limitation
of the std::basic_string.
Even char32_t will sometimes require two char32_t for a single
character: say a q with a hacek.
I'll take your word for that example. :) Characters just
aren't all the same size anymore.
(Someone else who's only scratched the surface of the
problem:-). You might want to look at the technical reports at
the Unicode site, or get Haralambous' book.)
And by the way, I was relating my own experience. At the time
I first used std::string, the characters I needed to represent
fit very comfortably into bytes, and the [] operator did
provide correct access to them.
Take a look at my .sig. I should be obvious that this is not
the case for me.
Your sig looks fine to me, accented characters and all. It's
actually a nice proof of concept, since it includes three
different (Western) languages.
Except that in Unicode, some of the characters in it have
several different representations, some of which require a
sequence of code points. (I actually refered to it simply as an
indication that I do have to deal with multiple languages and
non-ASCII characters, on a daily basis.)
But even in English, if you're dealing with text, how often do
you replace a single letter, rather than a word?
Admittedly, not often. It's just not something that comes up
a lot. If I'm accessing an individual character, chances are
good that I'm actually iterating over the characters in a
string. This kind of code is usually just buried in low-level
library functions. If a library is going to support strings
and substrings, then some code somewhere has to work at this
level. There's no getting around it.
Even if the standard library provided lots of Unicode-friendly
string support, indexed character access would still be
important.
Note that I'm not against it for read-only access. You often
have to scan, code point by code point, to find something. But
it's almost always a mistake to replace single code points,
without the provision for changing the number of code points.
[...]
The more you learn, the more C++ rewards you. I remember
someone I used to work with, who had a morbid fear of C++,
taking one look at a typical C++ reference book and laughing
derisively (yes, derisively, just like an arrogant Bond
villain). "How do they expect anybody to learn all that?" he
asked. The answer is that you don't have to learn it all
before you can use it.
But there's no real point in using it otherwise.
Huh? Do you really think you know every nook and cranny of
the standard off the top of your head, including the standard
libraries?
Not every nook and cranny. But I do expect anyone using C++ to
have at least an awareness of what it can do.
In my experience, most C++ developers have no idea what the
language can do. They use it as a sort of "C with classes,"
replacing function-pointers with virtual functions, but
otherwise writing glorified C code.
In which case, they'd probably be better off in Java.
I've encountered developers like that, but I've also worked in
shops that insisted on quality code.
My point is just that if your goal is to just learn a
minimum, and start hacking code, C++ probably isn't the
language for you.
Oh, I think it is. Suppose you start with <insert
language-of-the-month here>. "Wow," you say, "this is really
neat! LotM lets me print 'hello world' with just a single
line!" Or (this one is in vogue now): "Look how much stuff I
can do with Excel macros! I'm going to implement all my
business logic using them. Instead of writing applications,
I'll give everybody macro-heavy spreadsheets to fill in."
Sooner or later, that person needs to write a real,
non-trivial program, at which point the knowledge they gleaned
from "Learn Language X in 24 Seconds" becomes worse than
useless. It becomes baggage. Writing very small programs in
C++ is harder than writing them in some other languages, but
the point of newbie hacking isn't just to get something
working, but to lay the groundwork for harder tasks that lie
ahead.
The problem is that C++ has enough gotcha's that code written
without some basic undertstanding will contain subtle errors.
Note that my personal opinion is that programming is a complex
profession, that you can't learn in a week or two.
Independently of the language. I don't consider the effort
needed to learn the "necessary minimum" in C++ excessive.
Although it's probably more than is needed for the necessary
minimum in Java (for example), the fact is that in both cases,
it's only a small percentage of everything you need to know in
order to write correct programs.
[...]
I would have liked to see a more Smalltalk-heavy industry.
All modern dynamic languages seem to me like convoluted
imitations of Smalltalk. I'm not a Smalltalk expert, and it
doesn't seem have much of a fan-base anymore (like the Lisp
cult), but the syntax was so clean, and you could port it to a
new bare-hardware platform in a Summer. What happened? Was
it the licensing? Why is Java the server-side "safe bet,"
rather than Smalltalk?
Smalltalk got a bad reputation for performance. And of course,
static type checking (a la C++ or Java) does improve program
reliability, by couple of orders of magnitude.
--
."
-- Greg Felton,
Israel: A monument to anti-Semitism | https://preciseinfo.org/Convert/Articles_CPP/XML_Experts/C++-VC-ATL-STL-XML-Experts-080221131416.html | CC-MAIN-2022-05 | refinedweb | 3,111 | 65.32 |
Scale Dask In the Cloud…In Minutes
Coiled helps data scientists use Python for ambitious problems, scale to the cloud for computing power, ease, and speed—all tuned for the needs of teams and enterprises.
Join Thousands of Users
As soon as you sign up, you’ll have access to your Coiled dashboard. You can dive right in and start spinning up Dask clusters at scale in minutes!
“Quite literally ‘burst to the cloud from your laptop’ — everything I’ve been dreaming of since grad school.”
Eric Ma
Principal Data Scientist, Moderna
Coiled users benefit from faster cluster startup times, savings on cloud costs, and running their Python workloads faster
Enterprise-Ready Dask Deployments
…In Minutes
Dask on a
laptop
Simple
Secure
Scalable
- Running Dask on your laptop is super easy.
- Just install and go.
- But what if you want to scale up your computations?
- That’s when things can get messy…
Dask self-managed
on the cloud
Simple
Secure
Scalable
- Dask lets you run at scale but can be complex to set up securely.
- You need to orchestrate many different technologies like cloud VPCs, subnets, docker registries, and secure user credentials.
- Dask on a cluster is powerful but time-consuming to get right.
Dask with
Coiled
Simple
Secure
Scalable
- Coiled manages cloud infrastructure for you, providing the simple experience of a laptop with the scalability of the cloud.
- This way you can focus on bigger problems while Coiled handles Dask DevOps.
01
Launch Dask Clusters
With ONE line of code:
Once they’re created, they’ll persist and scale according to your needs.
import coiled

cluster = coiled.Cluster(n_workers=20)
Coiled lets you scale Python to the cloud using tools you’re familiar with like NumPy, pandas, scikit-learn, and Jupyter Notebooks.
02
Seamless Integration With Your Favorite Cloud Tools
Read data from multiple data stores and use a Coiled cluster to run machine learning and advanced analytics. Enterprises can perform machine learning and data engineering workloads on their data, wherever it is stored, using Coiled’s seamless integrations. Run Dask on your cloud environment. Get your preferred software packages on the cluster, with ease!
03
Use Dask in production, the way you want to.
Ok, where does Coiled actually run? You can use either our cloud or your own. We built Coiled to help you unleash the power of computationally intensive Python in the cloud. We believe you should have the time and space to solve bigger problems, not infrastructure issues.
100% of Coiled data science users get to go back to their real jobs.
“Pangeo emerged from the Xarray development group, so Dask was a natural choice. We needed a parallel computing engine that does not strongly constrain the type of computations that can be performed nor require the user to engage with the details of parallelization.”
“Coiled becomes an exceedingly useful tool when working in teams with a wide variety of expertise and seniority. Using Coiled allows all of our developers and engineers to focus on the things that they already do really well, and to expand beyond the limits of their local machines, without too much concern for what’s happening “under the hood” on an infrastructural level.”
“Coiled is amazing technology. We were able to get an initial reduction across all of our pipelines — data curation to automated experiments. We saw our processing time drop from 66 hours to 35 minutes and, with additional tuning, down to 15 minutes. We gained 64 hours and 45 minutes using Coiled and Dask.”
Ready to try Coiled?
What can you do with Coiled?
You’re a Data Scientist
You have terabytes of parquet data. What if you could ask questions of your data as easily as you do with Pandas?
You’re a Data Engineer
You need to modernize your Spark ETL pipelines. What if you could write everything in Python and still work at scale?
You’re DevOps
You want to empower your Python teams to scale. What if you could make them happy and secure at the same time?
In all of these scenarios, Coiled is the answer. Whether you’re a data scientist, data engineer, or DevOps engineer, Coiled will solve your problems so you can get back to doing what’s important.
Scale Python Without the Deployment Headaches
Join Thousands of Users
The best answers to the question “How to import other Python files?” in the category Dev.
QUESTION:
How can I import other Python files? What do I add to the
import statement to just get gap from extra.py?
ANSWER:
importlib was added to Python 3 to programmatically import a module.
import importlib

moduleName = input('Enter module name:')
importlib.import_module(moduleName)
The .py extension should be removed from moduleName. The function also defines a package argument for relative imports.
In python 2.x:
- Just import file without the .py extension
- A folder can be marked as a package by adding an empty __init__.py file
- You can use the __import__ function, which takes the module name (without extension) as a string

pmName = input('Enter module name:')
pm = __import__(pmName)
print(dir(pm))

Type help(__import__) for more details.
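As a quick, runnable illustration, importlib.import_module can be pointed at a standard-library module whose name is only known at runtime (json is used here purely as an example):

```python
import importlib

module_name = "json"  # could come from input() or a config file
mod = importlib.import_module(module_name)
print(mod.dumps({"ok": True}))  # prints {"ok": true}
```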
ANSWER:

Make a new directory /home/el/foo5/herp/derp. Under derp, make another __init__.py file.
ANSWER:
First case

You want to import file A.py in file B.py; these two files are in the same folder, like this:

.
├── A.py
└── B.py

You can do this in file B.py:

import A

or

from A import *

or

from A import THINGS_YOU_WANT_TO_IMPORT_IN_A

Then you will be able to use all the functions of file A.py in file B.py.
Second case

You want to import file folder/A.py in file B.py; these two files are not in the same folder, like this:

.
├── B.py
└── folder
    └── A.py

You can do this in file B.py:

import folder.A

or

from folder.A import *

or

from folder.A import THINGS_YOU_WANT_TO_IMPORT_IN_A

Then you will be able to use all the functions of file A.py in file B.py.
Summary

- In the first case, file A.py is a module that you import in file B.py; you used the syntax import module_name.
- In the second case, folder is the package that contains the module A.py; you used the syntax import package_name.module_name.
For more info on packages and modules, consult this link.
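The second-case layout above can be reproduced end to end in a runnable sketch; the module contents (a greet function) are invented for illustration:

```python
import importlib
import os
import sys
import tempfile

# Build the layout:  root/folder/__init__.py  and  root/folder/A.py
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "folder"))
open(os.path.join(root, "folder", "__init__.py"), "w").close()  # mark 'folder' as a package
with open(os.path.join(root, "folder", "A.py"), "w") as f:
    f.write("def greet():\n    return 'hello from A'\n")

sys.path.insert(0, root)                  # make the layout importable
A = importlib.import_module("folder.A")   # same effect as 'import folder.A'
print(A.greet())  # prints hello from A
```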
ANSWER:
To import a specific Python file at ‘runtime’ with a known name:
import os
import sys

…

scriptpath = "../Test/"

# Add the directory containing your module to the Python path (wants absolute paths)
sys.path.append(os.path.abspath(scriptpath))

# Do the import
import MyModule
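The same idea in runnable form, writing a throwaway module into a temporary directory first (the name MyModule is kept from the answer; its single ANSWER constant is invented for illustration):

```python
import os
import sys
import tempfile

scriptpath = tempfile.mkdtemp()  # stands in for "../Test/"
with open(os.path.join(scriptpath, "MyModule.py"), "w") as f:
    f.write("ANSWER = 42\n")

# Add the directory containing the module to the Python path (wants absolute paths)
sys.path.append(os.path.abspath(scriptpath))

import MyModule
print(MyModule.ANSWER)  # prints 42
```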
RSA_private_encrypt.3ossl - Man Page
low-level signature operations
Synopsis
#include <openssl/rsa.h>
The following functions have been deprecated since OpenSSL 3.0, and can be hidden entirely by defining OPENSSL_API_COMPAT with a suitable version value, see openssl_user_macros(7):
int RSA_private_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding); int RSA_public_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);
Description
Both of the functions described on this page are deprecated. Applications should instead use EVP_PKEY_sign_init_ex(3), EVP_PKEY_sign(3), EVP_PKEY_verify_recover_init(3), and EVP_PKEY_verify_recover(3).

See Also

EVP_PKEY_sign(3), EVP_PKEY_verify_recover(3)
History
Both of these functions were deprecated in OpenSSL 3.0.
Licensed under the Apache License 2.0 (the “License”). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <>.
Referenced By
RSA_sign.3ossl(3).
The man page RSA_public_decrypt.3ossl(3) is an alias of RSA_private_encrypt.3ossl(3).
Posted by Adrian Holovaty on July 15, 2007
Jeff Croft July 15, 2007 at 4:57 p.m.
Congrats to Adrian, Simon, Jacob, Wilson, and everyone else involved in the project. It's been wonderful watching it grow.
I have to say, I'm constantly amazed at how many blog posts and articles refer to Django as "new." Is a two-year old really the new kid on the block in web development these days? If so, our industry is moving slower than I thought!
Benjamin Schwarze July 15, 2007 at 5:12 p.m.
Thank you, guys, for this nearly perfect web framework.
Adrian, I'm curious about the third anniversary announcement, and its "One year ago, ..." list. I will track the upcoming development and may be some new amazing features.
Sebastjan Trepča July 15, 2007 at 5:53 p.m.
Congrats and much kudos to you for building this amazing web framework!
Bert Heymans July 15, 2007 at 6:08 p.m.
Congratulations! Keep it up guys, every day I find something new in Django, amazing framework, great documentation.
Paul Bx July 15, 2007 at 6:38 p.m.
I remember those early days. Django came along at just the right time for me. I agree with you that the stability of the design is something to be proud of. Many projects are well into their "ground-up rewrite" stage by this point (a stage from which they often never recover).
Ross Poulton July 15, 2007 at 6:56 p.m.
Congratulations to the entire time for creating this product then being a part of the fantastic community that surrounds it.
Django is the base for many of my personal & paid projects, and it's definitely made life easier.
If you're ever in Melbourne I think I owe a beer or three (plus, there are a few nice Jazz clubs around here)
Ross
guotie July 15, 2007 at 7:41 p.m.
when does 1.0 out?
Clint Ecker July 15, 2007 at 7:43 p.m.
I probably drunkenly blathered about this to you and Wilson enough last week, but Django was one of the best things that could've happened to my career! Thanks everyone!
Empty July 15, 2007 at 9:10 p.m.
Congrats. Everything you've done is quite impressive.
Jay States July 15, 2007 at 9:22 p.m.
Congrats.... Django made me learn Python which is a great language... and Django is the best framework on the Internet. Thanks
Mike Cantelon July 15, 2007 at 9:37 p.m.
Congrats and thanks for all the Django team's hard work!
Mike Kramlich July 15, 2007 at 11:24 p.m.
yeah thanks for making Django the way it is, and lettings others use it too. Great stuff. Very elegant, aggressively simple and sensible. DRY and KISS principles are obviously understood by the designers. Things "just work". And stellar documentation. Best webapp framework (and possibly the best rapid development system) I've ever used. Good job guys!
Paul Smith July 15, 2007 at 11:30 p.m.
Excellent work, congratulations.
Jogenchen July 16, 2007 at 12:02 a.m.
Congrats.... django is wonderful, but it's book is less than rails!
David, biologeek July 16, 2007 at 2:50 a.m.
Congratulations! :-)
igwe July 16, 2007 at 3:55 a.m.
"I have to say, I'm constantly amazed at how many blog posts and articles refer to Django as "new." Is a two-year old really the new kid on the block in web development these days? If so, our industry is moving slower than I thought!"
That's because Django is a 2 year old but not yet a 1.0
Rob B July 16, 2007 at 4:57 a.m.
I'm new to the framework but have loved everything I have seen so far. Thanks so much for your hard work.
Congrats :) July 16, 2007 at 6:40 a.m.
Now I really can't wait to see 1.0 :)
mrben July 16, 2007 at 7:10 a.m.
Thanks for all your hard work - truly we are standing on the shoulders of giants.
Peter July 16, 2007 at 7:58 a.m.
And... if that was the two-year-old model, what does its equivalent today look like?
Adrian Holovaty July 16, 2007 at 10:34 a.m.
Jogenchen: You're correct -- our book costs less than the Rails book.
Kyle Robertson July 16, 2007 at 10:58 a.m.
Wee! Congratulations to all who have contributed to this grand project, and to the primary developers that initiated it. Your hard work is most appreciated =)
Adam Endicott July 16, 2007 at 1:16 p.m.
Congratulations! I've been using Django since soon after this, and it's been remarkable (and fun!) watching it change. It's a great testament to the team that it's been so easy to keep up with all the great stuff added over the last two years. Thank you to everyone involved for all your hard work.
Peter July 16, 2007 at 4:16 p.m.
OK I think I got the equivalent code for now:
from django.db import models

class Package(models.Model):
    label = models.CharField('label', maxlength=20, primary_key=True)
    name = models.CharField('name', maxlength=30, unique=True)

    def __unicode__(self):
        return self.name

    class Meta:
        db_table = 'packages'
        ordering = ['name']
Jeff Croft July 16, 2007 at 6:09 p.m.
igwe: You're totally right. The lack of a 1.0 version number definitely gives people the perception that Django is new, unstable, and not yet ready for primetime. It's unfortunate, really -- but also totally understandable.
Bryan Veloso July 17, 2007 at 3:56 a.m.
Congratulations to everybody involved! Honestly, if it wasn't for this framework, I would have never associated programming with "fun". I could never express my thanks. :) Here's to another two years and 1.0! :)
Primski July 18, 2007 at 2:43 a.m.
Brilliant product.
Philip Lindborg July 18, 2007 at 11:49 a.m.
I've just started using Django and I must say I'm hooked. I learn new stuff every day and it still keeps amazing me. Thanks for your great framework, Django guys!
Rajesh Dhawan July 18, 2007 at 3:42 p.m.
Congratulations!
Here's to many more anniversaries....
Kele July 19, 2007 at 3:41 a.m.
I like django and reading django book ,it is funny and let me dip into it.thanks everyone contribute to it
Dipu July 25, 2007 at 12:33 p.m.
First off all congratulations to all who have contributed to the project!
Infrequently (and inactive) I've been following the development in the Django feel thought and philosophy since the beginning of the public release two years ago.
The most precious notable thing in what I saw in the development of Django is how the end-user was satisfied in it's ease of being able to develop what he wants.
Compliments for doing the excellent job on keeping the framework consistent whilst adding the appropriate features needed.
Douglas Jarquin July 29, 2007 at 5 p.m.
I knew that Django was a cancer; that's why we get along so well.
Here's to the next two years.
To prevent spam, comments are no longer allowed after sixty days.
Sean Stoops July 15, 2007 at 4:07 p.m.
Congrats to everyone! I've only been using Django since last summer but have fell completely in love with it. I still discover new features every time I read through the docs. I can't wait to see what another two years brings.. hopefully into the post 1.0 world.
HOUSTON (ICIS)--Here is Thursday's end of day Americas markets summary:
CRUDE: Apr WTI: $99.43/bbl, down 94 cents; May Brent: $106.45/bbl, up 60 cents
NYMEX WTI crude futures fell on length liquidation as the April contract expired at the end of the session. Rising crude oil inventories revealed by the US Energy Information Administration (EIA) stats earlier in the week and a stronger dollar encouraged the selling.
RBOB: Apr $2.8955/gal, up 2.67 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures fluctuated throughout the day, but settled higher on support from heating oil and an encouraging jobs report.
NATURAL GAS: Apr $4.369/MMBtu, down 11.5 cents
The front month contract on the NYMEX natural gas futures market closed down nearly 3% on bearish sentiment related to milder near term weather forecasts and the release of the EIA’s latest weekly gas storage report showing a lower than expected withdrawal from US inventories over the week ended 14 March.
ETHANE: lower at 28.50 cents/gal
Ethane spot prices were lower in a quiet market on Thursday.
AROMATICS: toluene flat at $3.55-3.68/gal, mixed xylenes flat at $3.60-3.65/gal
Activity was thin in the US aromatics market for toluene and mixed xylenes (MX) during the day, sources said. There were no fresh trades heard during the day. As a result, toluene and mixed xylene (MX) spot prices held steady from the previous session.
OLEFINS: ethylene done higher at 51.75 cents/lb, PGP lower at 66.0-68.5 cents/lb
US March ethylene traded higher on Thursday at 51.75 cents/lb compared with a trade the previous day at 51.50 cents/lb. US March PGP bid/offer levels were heard at 66.00-68.50 cents/lb, lower than a trade the previous day at 68.75 cents/lb.
For more pricing intelligence please visit
Basic Elements of Oracle SQL, 9 of 10
This section provides:
The following rules apply when naming schema objects:
If your database character set contains multibyte characters, Oracle recommends that each name for a user or a role contain at least one single-byte character.
Depending on the Oracle product you plan to use to access a database object, names might be further restricted by other product-specific reserved words. For a list of a product's reserved words, see the manual for the specific product, such as PL/SQL User's Guide and Reference.
Some other words (such as DIMENSION, SEGMENT, ALLOCATE, DISABLE, and so forth) are not reserved. However, Oracle uses them internally. Therefore, if you use these words as names for objects and object parts, your SQL statements may be more difficult to read and may lead to unpredictable results.
In particular, do not use words beginning with "SYS_" as schema object names, and do not use the names of SQL built-in functions for the names of schema objects or user-defined functions.
The following figure shows the namespaces for schema objects. Each box is a namespace. Tables and views are in the same namespace. Therefore, a table and a view in the same schema cannot have the same name.
The following figure shows the namespaces for nonschema objects. Because the objects in these namespaces are not contained in schemas, these namespaces span the entire database.
If you give a schema object a name enclosed in double quotation marks, you must use double quotation marks whenever you refer to the object.
Enclosing a name in double quotes allows it to:
By enclosing names in double quotation marks, you can give the following names to different objects in the same namespace:
emp "emp" "Emp" "EMP "
Note that Oracle interprets the following names the same, so they cannot be used for different objects in the same namespace:
emp EMP "EMP"
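For instance, this sketch (table and column names invented) shows how the quoted and unquoted spellings behave:

```sql
CREATE TABLE emp   (deptno NUMBER);   -- name stored in uppercase as EMP
CREATE TABLE "emp" (deptno NUMBER);   -- a distinct object; case is preserved
SELECT * FROM emp;     -- resolves to EMP
SELECT * FROM "emp";   -- resolves to the quoted, lowercase table
```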
If you give a user or password a quoted name, the name cannot contain lowercase letters.
Database link names cannot be quoted.
The following examples are valid schema object names:
ename horse scott.hiredate "EVEN THIS & THAT!" a_very_long_and_valid_name
Although column aliases, table aliases, usernames, and passwords are not objects or parts of objects, they must also follow these naming rules with these exceptions:
Here are several helpful guidelines for naming objects and their parts: use the same name to describe the same entity or attribute across tables. For example, the columns that hold department numbers in the sample EMP and DEPT tables are both named DEPTNO.
What is a function
A function is a section of a program that performs some specific task. Let's take an example:

int add(int x, int y) {
    return x + y;
}

I declared a function add which takes two arguments, x and y. It returns the
addition of x and y.
What's the use of a function
Suppose you are writing a program in which you want to perform the same task ten times. Then you have two options:
a. Either you write the same piece of code again and again.
b. Or you make a function and call the function whenever you want to perform the
action.
The second advantage of a function is that if you want to make some changes, you need to do it in one place only and the change will reflect in all other places.
// Example of function

#include <stdio.h>

int add(int x, int y); // Declare add function with two arguments

int main()
{
    int a, b, c;
    printf("Enter two numbers");
    scanf("%d%d", &a, &b);
    c = add(a, b);
    printf("The addition of two numbers is %d", c);
    return 0;
}

int add(int x, int y)
{
    return x + y; // Returns the addition of two numbers
}
Blokkal::AccountManager Class ReferenceGlobal account manager class. More...
#include <blokkalaccountmanager.h>
Detailed DescriptionGlobal account manager class.
This is the global account manager class. It maintains information on all accounts and saves the configuration of registered accounts.
Definition at line 49 of file blokkalaccountmanager.h.
Member Function Documentation
Returns the account id if such an account has been registered or 0 if no such account exists.
- Parameters:
-
- Returns:
- the desired account or 0 if it does not exist
Definition at line 182 of file blokkalaccountmanager.cpp.
Returns the node of the account with name id if it exists. If it does not exist, a new node for this account will be created, but not inserted in the tree. You should check whether the protocol attribute is set and set if if necessary. This method is used by the AccountConfig constructor.
- Parameters:
-
- Returns:
- the configuration node
Definition at line 191 of file blokkalaccountmanager.cpp.
This signal is emitted whenever an account is registered.
- Parameters:
-
Returns a list of available accounts.
- Returns:
- registered accounts
Definition at line 138 of file blokkalaccountmanager.cpp.
This signal is emitted whenever an account has been unregistered. Usually this means, that the account is about to be deleted, so you should not store the pointer.
- Parameters:
-
Loads the account information and creates the accounts.
Definition at line 85 of file blokkalaccountmanager.cpp.
Checks that account does not already exists and registers it. If it tries to take over an already registered account, account will be deleted.
- Parameters:
-
- Returns:
- account or 0 if it was not registered
Definition at line 144 of file blokkalaccountmanager.cpp.
Saves the account information to the account file.
Definition at line 125 of file blokkalaccountmanager.cpp.
Returns a pointer to the global account manager. If no account manager exists yet, one will be created.
- Returns:
- global account manager
Definition at line 133 of file blokkalaccountmanager.cpp.
Unregisters account
- Parameters:
-
Definition at line 171 of file blokkalaccountmanager.cpp.
The documentation for this class was generated from the following files:
ztrie(3)
CZMQ Manual - CZMQ/3.0.2
Name
ztrie - Class for simple trie for tokenizable strings
Synopsis
// This is a draft class, and may change without notice. It is disabled in
// stable builds by default. If you use this in applications, please ask
// for it to be pushed to stable state. Use --enable-drafts to enable.
#ifdef CZMQ_BUILD_DRAFT_API
// Callback function for ztrie_node to destroy node data.
typedef void (ztrie_destroy_data_fn) (
    void **data);

// *** Draft method, for development use, may change without warning ***
// Creates a new ztrie.
CZMQ_EXPORT ztrie_t *
    ztrie_new (char delimiter);

// *** Draft method, for development use, may change without warning ***
// Destroy the ztrie.
CZMQ_EXPORT void
    ztrie_destroy (ztrie_t **self_p);

// *** Draft method, for development use, may change without warning ***
// Inserts a new route into the tree and attaches the data. Returns -1
// if the route already exists, otherwise 0. This method takes ownership of
// the provided data if a destroy_data_fn is provided.
CZMQ_EXPORT int
    ztrie_insert_route (ztrie_t *self, const char *path, void *data, ztrie_destroy_data_fn destroy_data_fn);

// *** Draft method, for development use, may change without warning ***
// Removes a route from the trie and destroys its data. Returns -1 if the
// route does not exists, otherwise 0.
// the start of the list call zlist_first (). Advances the cursor.
CZMQ_EXPORT int
    ztrie_remove_route (ztrie_t *self, const char *path);

// *** Draft method, for development use, may change without warning ***
// Returns true if the path matches a route in the tree, otherwise false.
CZMQ_EXPORT bool
    ztrie_matches (ztrie_t *self, const char *path);

// *** Draft method, for development use, may change without warning ***
// Returns the data of a matched route from last ztrie_matches. If the path
// did not match, returns NULL. Do not delete the data as it's owned by
// ztrie.
CZMQ_EXPORT void *
    ztrie_hit_data (ztrie_t *self);

// *** Draft method, for development use, may change without warning ***
// Returns the count of parameters that a matched route has.
CZMQ_EXPORT size_t
    ztrie_hit_parameter_count (ztrie_t *self);

// *** Draft method, for development use, may change without warning ***
// Returns the parameters of a matched route with named regexes from last
// ztrie_matches. If the path did not match or the route did not contain any
// named regexes, returns NULL.
CZMQ_EXPORT zhashx_t *
    ztrie_hit_parameters (ztrie_t *self);

// *** Draft method, for development use, may change without warning ***
// Returns the asterisk matched part of a route, if there has been no match
// or no asterisk match, returns NULL.
CZMQ_EXPORT const char *
    ztrie_hit_asterisk_match (ztrie_t *self);

// *** Draft method, for development use, may change without warning ***
// Print the trie
CZMQ_EXPORT void
    ztrie_print (ztrie_t *self);

// *** Draft method, for development use, may change without warning ***
// Self test of this class.
CZMQ_EXPORT void
    ztrie_test (bool verbose);
#endif // CZMQ_BUILD_DRAFT_API

Please add '@interface' section in './../src/ztrie.c'.
Description
This is a variant of a trie or prefix tree where all the descendants of a node have a common prefix of the string associated with that node. This implementation is specialized for strings that can be tokenized by a delimiter like a URL, URI or URN. Routes in the tree can be matched by regular expressions and by using capturing groups parts of a matched route can be easily obtained.
Note that the performance for pure string based matching is okay but on short strings zhash and zhashx are 3-4 times faster.
Example
From ztrie_test method
// Create a new trie for matching strings that can be tokenized by a slash
// (e.g. URLs minus the protocol, address and port).
ztrie_t *self = ztrie_new ('/');
assert (self);

int ret = 0;

// Let's start by inserting a couple of routes into the trie.
// This one is for the route '/foo/bar' the slash at the beginning of the
// route is important because everything before the first delimiter will be
// discarded. A slash at the end of a route is optional though. The data
// associated with this node is passed without destroy function which means
// it must be destroyed by the caller.
int foo_bar_data = 10;
ret = ztrie_insert_route (self, "/foo/bar", &foo_bar_data, NULL);
assert (ret == 0);

// Now suppose we like to match all routes with two tokens that start with
// '/foo/' but aren't '/foo/bar'. This is possible by using regular
// expressions which are enclosed in an opening and closing curly bracket.
// Tokens that contain regular expressions are always match after string
// based tokens.
// Note: There is no order in which regular expressions are sorted thus
// if you enter multiple expressions for a route you will have to make
// sure they don't have overlapping results. For example '/foo/{[^/]+}'
// and '/foo/{\d+} having could turn out badly.
int foo_other_data = 100;
ret = ztrie_insert_route (self, "/foo/{[^/]+}", &foo_other_data, NULL);
assert (ret == 0);

// Regular expression are only matched against tokens of the same level.
// This allows us to append to are route with a regular expression as if
// it were a string.
ret = ztrie_insert_route (self, "/foo/{[^/]+}/gulp", NULL, NULL);
assert (ret == 0);

// Routes are identified by their endpoint, which is the last token of the route.
// It is possible to insert routes for a node that already exists but isn't an
// endpoint yet. The delimiter at the end of a route is optional and has no effect.
ret = ztrie_insert_route (self, "/foo/", NULL, NULL);
assert (ret == 0);

// If you try to insert a route which already exists the method will return -1.
ret = ztrie_insert_route (self, "/foo", NULL, NULL);
assert (ret == -1);

// It is not allowed to insert routes with empty tokens.
ret = ztrie_insert_route (self, "//foo", NULL, NULL);
assert (ret == -1);

// Everything before the first delimiter is ignored so 'foo/bar/baz' is equivalent
// to '/bar/baz'.
ret = ztrie_insert_route (self, "foo/bar/baz", NULL, NULL);
assert (ret == 0);
ret = ztrie_insert_route (self, "/bar/baz", NULL, NULL);
assert (ret == -1);

// Of course you are allowed to remove routes, in case there is data associated with a
// route and a destroy data function has been supplied that data will be destroyed.
ret = ztrie_remove_route (self, "/foo");
assert (ret == 0);

// Removing a non existent route will as well return -1.
ret = ztrie_remove_route (self, "/foo");
assert (ret == -1);

// Removing a route with a regular expression must exactly match the entered one.
ret = ztrie_remove_route (self, "/foo/{[^/]+}");
assert (ret == 0);

// Next we like to match a path by regular expressions and also extract matched
// parts of a route. This can be done by naming the regular expression. The name of a
// regular expression is entered at the beginning of the curly brackets and separated
// by a colon from the regular expression. The first one in this examples is named
// 'name' and names the expression '[^/]'. If there is no capturing group defined in
// the expression the whole matched string will be associated with this parameter. In
// case you don't like the get the whole matched string use a capturing group, like
// it has been done for the 'id' parameter. This is nice but you can even match as
// many parameter for a token as you like. Therefore simply put the parameter names
// separated by colons in front of the regular expression and make sure to add a
// capturing group for each parameter. The first parameter will be associated with
// the first capturing and so on.
char *data = (char *) malloc (80);
sprintf (data, "%s", "Hello World!");
ret = ztrie_insert_route (self, "/baz/{name:[^/]+}/{id:--(\\d+)}/{street:nr:(\\a+)(\\d+)}", data, NULL);
assert (ret == 0);

// There is a lot you can do with regular expression but matching routes
// of arbitrary length wont work. Therefore we make use of the asterisk
// operator. Just place it at the end of your route, e.g. '/config/bar/*'.
ret = ztrie_insert_route (self, "/config/bar/*", NULL, NULL);
assert (ret == 0);

// Appending to an asterisk as you would to with a regular expression
// isn't valid.
ret = ztrie_insert_route (self, "/config/bar/*/bar", NULL, NULL);
assert (ret == -1);

// The asterisk operator will only work as a leaf in the tree. If you
// enter an asterisk in the middle of your route it will simply be
// interpreted as a string.
ret = ztrie_insert_route (self, "/test/*/bar", NULL, NULL);
assert (ret == 0);

// If a parent has an asterisk as child it is not allowed to have
// other siblings.
ret = ztrie_insert_route (self, "/config/bar/foo/glup", NULL, NULL);
assert (ret != 0);

// Test matches
bool hasMatch = false;

// The route '/bar/foo' will fail to match as this route has never been inserted.
hasMatch = ztrie_matches (self, "/bar/foo");
assert (!hasMatch);

// The route '/foo/bar' will match and we can obtain the data associated with it.
hasMatch = ztrie_matches (self, "/foo/bar");
assert (hasMatch);
int foo_bar_hit_data = *((int *) ztrie_hit_data (self));
assert (foo_bar_data == foo_bar_hit_data);

// This route is part of another but is no endpoint itself thus the matches will fail.
hasMatch = ztrie_matches (self, "/baz/blub");
assert (!hasMatch);

// This route will match our named regular expressions route. Thus we can extract data
// from the route by their names.
hasMatch = ztrie_matches (self, "/baz/blub/--11/abc23");
assert (hasMatch);
char *match_data = (char *) ztrie_hit_data (self);
assert (streq ("Hello World!", match_data));
zhashx_t *parameters = ztrie_hit_parameters (self);
assert (zhashx_size (parameters) == 4);
assert (streq ("blub", (char *) zhashx_lookup (parameters, "name")));
assert (streq ("11", (char *) zhashx_lookup (parameters, "id")));
assert (streq ("abc", (char *) zhashx_lookup (parameters, "street")));
assert (streq ("23", (char *) zhashx_lookup (parameters, "nr")));
zhashx_destroy (&parameters);

// This will match our asterisk route '/config/bar/*'. As the result we
// can obtain the asterisk matched part of the route.
hasMatch = ztrie_matches (self, "/config/bar/foo/bar");
assert (hasMatch);
assert (streq (ztrie_hit_asterisk_match (self), "foo/bar"));

zstr_free (&data);
ztrie_destroy (&self);
The following is an example of a recursive function. It takes a string as an input parameter and returns the string in backwards order. Recursive functions must always have a test that stops the recursion. In this case, the recursion terminates when the starting position is zero, i.e., when there are no more characters left in the string.
function rev(str, start)
{
    if (start == 0)
        return ""
    return (substr(str, start, 1) rev(str, start - 1))
}
If this function is in a file named rev.awk, it can be tested this way:
$ echo "Don't Panic!" |
> gawk --source '{ print rev($0, length($0)) }' -f rev.awk
-| !cinaP t'noD
The C ctime() function takes a timestamp and returns it as a string, formatted in a well-known fashion. The following example uses the built-in strftime() and systime() functions to create an awk version of ctime():

function ctime(ts,    format)
{
    format = "%a %b %e %H:%M:%S %Z %Y"
    if (ts == 0)
        ts = systime()       # use current time as default
    return strftime(format, ts)
}
2D array: Find an array of characters
Scotty Young
Greenhorn
Joined: Oct 26, 2006
Posts: 10
posted
Apr 14, 2011 14:30:07
0
I'm working on a problem to help try and improve my Java skills before I sit SCJA:
I have an array, "word", with three characters "D" "O" and "G".
I'm trying to search through a larger 2D array and then print out all three characters when they are matched in a row.
I have worked this out on a whiteboard with someone before, but I can't remember how I solved it... derp.
This is the code I currently have:
class FindWord {

    public static void main(String args[]) {
        char[][] square = {
            { 'A', 'B', 'S', 'C', 'D', 'E' },
            { 'F', 'G', 'S', 'G', 'S', 'G' },
            { 'A', 'D', 'O', 'H', 'E', 'F' },
            { 'D', 'D', 'I', 'G', 'H', 'H' },
            { 'E', 'H', 'D', 'O', 'G', 'E' },
            { 'X', 'V', 'H', 'O', 'G', 'P' }
        };
        char[] word = { 'D', 'O', 'G' };

        for (int x = 0; x < square.length; x++) {
            for (int y = 0; y < square.length; y++) {
                for (int i = 0; i < word.length; i++) {
                    if (word[i] == square[x][y + i]) {
                        //if (word[i] == square[y][i]) {
                        System.out.print(square[x][y + i]);
                        continue;
                    } else {
                        break;
                    }
                }
            }
        }
    }
}
I feel like I'm close to solving it.
Greg Brannon
Bartender
Joined: Oct 24, 2010
Posts: 555
posted
Apr 14, 2011 15:23:16
0
How close? What's your program doing now? What should it be doing?
Learning Java using Eclipse on OpenSUSE 11.2
Linux user#: 501795
Scotty Young
Greenhorn
Joined: Oct 26, 2006
Posts: 10
posted
Apr 14, 2011 15:32:21
0
Well, I'm trying to get it to output DOG. At the minute it's outputting DDODDDOG.
I thought I knew what I was doing but now I'm a bit lost and confused. I want it to compare every single character in square to the characters in word, and output the characters that match, but only if it finds them in the sequence D-O-G.
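The loop above needs two fixes: a bounds check so `y + i` never runs past the end of a row, and deferring the printing until the whole word has matched. A sketch of that repair (the class and method names here are mine, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;

public class FindWordFixed {

    // Returns {row, col} for every horizontal occurrence of word in square.
    static List<int[]> findHorizontal(char[][] square, char[] word) {
        List<int[]> hits = new ArrayList<int[]>();
        for (int x = 0; x < square.length; x++) {
            // Stop early enough that the word always fits in the row.
            for (int y = 0; y + word.length <= square[x].length; y++) {
                boolean match = true;
                for (int i = 0; i < word.length; i++) {
                    if (word[i] != square[x][y + i]) {
                        match = false;   // one mismatch kills this start position
                        break;
                    }
                }
                if (match)
                    hits.add(new int[] { x, y });
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        char[][] square = {
            { 'A', 'B', 'S', 'C', 'D', 'E' },
            { 'F', 'G', 'S', 'G', 'S', 'G' },
            { 'A', 'D', 'O', 'H', 'E', 'F' },
            { 'D', 'D', 'I', 'G', 'H', 'H' },
            { 'E', 'H', 'D', 'O', 'G', 'E' },
            { 'X', 'V', 'H', 'O', 'G', 'P' }
        };
        char[] word = { 'D', 'O', 'G' };
        for (int[] hit : findHorizontal(square, word))
            System.out.println("DOG found at row " + hit[0] + ", col " + hit[1]);
        // prints: DOG found at row 4, col 2
    }
}
```

Collecting hits instead of printing mid-comparison is what prevents the partial "DDODDD" output: nothing is emitted until all three characters have matched in a row.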
Carey Brown
Ranch Hand
Joined: Nov 19, 2001
Posts: 167
I like...
posted
Apr 15, 2011 17:24:50
0
I know we're not supposed to give out "answers" but I couldn't resist giving this a try. Works for any size 'square' and 'word'. Prints out list of square coordinates where 'd-o-g' was found.
import java.util.ArrayList;
import java.util.List;

public class PuzzleFindWord {

    // Eight possible directions: horizontally, vertically, and diagonally
    private static final XY[] directions = {
        new XY(  0,  1 ), new XY(  0, -1 ),
        new XY(  1,  0 ), new XY( -1,  0 ),
        new XY(  1,  1 ), new XY(  1, -1 ),
        new XY( -1,  1 ), new XY( -1, -1 )
    };

    private static final char[][] square = {
        { 'A', 'B', 'S', 'C', 'D', 'E' },
        { 'F', 'G', 'S', 'G', 'S', 'G' },
        { 'A', 'D', 'O', 'H', 'E', 'F' },
        { 'D', 'D', 'I', 'G', 'H', 'H' },
        { 'E', 'H', 'D', 'O', 'G', 'E' },
        { 'X', 'V', 'H', 'O', 'G', 'P' }
    };

    private static final char[] word = { 'D', 'O', 'G' };

    public static void main( String[] args ) {
        List<XY> list;
        for( int x = 0; x < square[ 0 ].length; x++ ) {
            for( int y = 0; y < square.length; y++ ) {
                for( XY direction : directions ) {
                    list = find( x, y, direction );
                    if( list != null )
                        System.out.println( list );
                }
            }
        }
    }

    private static ArrayList<XY> find( int x, int y, XY direction ) {
        ArrayList<XY> list = new ArrayList<XY>();
        int wordIndex = 0;
        for(;;) {
            if( x < 0 || x >= square[ 0 ].length
                || y < 0 || y >= square.length
                || wordIndex >= word.length
                || square[ y ][ x ] != word[ wordIndex ] ) {
                return null;
            }
            list.add( new XY( x, y ) );
            if( wordIndex == word.length - 1 )
                return list;
            x += direction.x;
            y += direction.y;
            wordIndex++;
        }
    }

    private static class XY {
        int x, y;

        public XY( int x, int y ) {
            this.x = x;
            this.y = y;
        }

        @Override
        public String toString() {
            return "(" + x + "," + y + ")";
        }
    }
}
It is sorta covered in the JavaRanch Style Guide.
"John Morris" <jrmorrisnc at gmail.com> wrote > So this seems like it will make scope/namespaces a bit > interesting... namespaces in python are literally that, they are spaces where *names* are visible. Objects are something else entirely and assignment only pins a name to an object. So in Python namespaces contriol where you can use a name, not where you can use an object. def f(y): return y+1 x = 66 print f(x) y is a name that is only visible inside f. 66 is a number object associated with x and passed into f where it is associated with y. The number object inside f is the same object outside f only the name has changed.The return value is a new number object (67 in this case). HTH, -- Alan Gauld Author of the Learn to Program web site | https://mail.python.org/pipermail/tutor/2008-January/059748.html | CC-MAIN-2017-30 | refinedweb | 142 | 74.59 |
So I was looking for a way to block input if a button was pressed and naturally came upon the EventSystem.current.IsPointerOverGameObject() function. This helps with what I want but unfortunately it causes issues in other places. If I call this function it doesn't distinguish between certain things. I'll set up an example scene:
EventSystem.current.IsPointerOverGameObject()
Let's say I want to set the color of cubes that I click on in the scene, and I have a button in the UI that changes the color that I'm setting them to. Now, when I click this button, I don't want the cube behind it to set its color as well, so I use the function mentioned above and, if it returns false, I cast a ray to see if a cube is hit. The problem comes in when I have an image in the UI that I do want clicks to go through (this image could have alpha transparency or just be something I want to ignore). The function returns true if the click happens on the image as well, so I can't decide which things actually catch the input.
In other UI solutions these UI elements would just have colliders on them if I wanted them to catch input, but this is not the case in uGUI. How do I solve this situation?
Answer by spiceboy9994
·
Jan 19, 2015 at 06:50 PM
What about disabling the button once it has been clicked? If it's the case that you want to disable the same button that you just clicked, you could try to add a click trigger and then add a reference to the button:
using UnityEngine;
using UnityEngine.UI;

public class MyButtonBehavior : MonoBehaviour
{
    private Button myButton;

    void Start() {
        myButton = transform.GetComponent<Button>();
    }

    // Hook this method up to the Button's OnClick event.
    void myButtonClick() {
        myButton.interactable = false;
    }
}
This isn't the problem I'm having, I want to be able to click through some of my UI controls and not through other UI controls. Hence, the whole EventSystem.current.IsPointerOverGameObject().
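For the pass-through case described above, one commonly used approach (a sketch, not from this thread; the component name is mine) is to stop the element from blocking raycasts altogether, so that both click handling and IsPointerOverGameObject() ignore it while other UI controls keep blocking input:

```csharp
using UnityEngine;

// Attach to any UI element that clicks should pass straight through.
public class ClickThrough : MonoBehaviour
{
    void Awake()
    {
        // A CanvasGroup with blocksRaycasts = false keeps the element
        // visible but invisible to the event system's raycasts.
        var group = gameObject.AddComponent<CanvasGroup>();
        group.blocksRaycasts = false;
    }
}
```

With that in place, IsPointerOverGameObject() returns false over the pass-through image, while buttons without the component still block clicks as before.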