How to get if a run batch job had errors? Olivier RENAULT Jan 21, 2014 10:09 AM

Hi! I'm using jython to retrieve some information from BladeLogic. I've written some code to launch a job and find out whether it succeeded. For a Deploy Job everything is fine: I can find the result (failed or passed) for the job run. But for a Batch Job, I don't know how to get this information. This is an extract of my code:

    def BatchGetDBKeyByGroupAndName(self, groupName, jobName):
        cmd = ['BatchJob', 'getDBKeyByGroupAndName', groupName, jobName]
        handle = self.jli.run(cmd)
        if handle.success():
            returnvalue = handle.returnValue
        else:
            print "Failed to retrieve the DBKey for batch job %s,%s." % (groupName, jobName)
            returnvalue = 0
        return returnvalue

    def BatchExecuteJobAndWait(self, jobKey):
        cmd = ['BatchJob', 'executeJobAndWait', jobKey]
        handle = self.jli.run(cmd)
        if handle.success():
            returnvalue = handle.returnValue
            print "DEBUG: Launch BatchJob %s ..." % returnvalue
        else:
            print "Failed to launch the batch job."
            returnvalue = 0
        return returnvalue

    def jobRunKeyToJobRunId(self, jobRunKey):
        cmd = ['JobRun', 'jobRunKeyToJobRunId', jobRunKey]
        handle = self.jli.run(cmd)
        if handle.success():
            returnvalue = handle.returnValue
            print "DEBUG: jobRunKeyToJobRunId %s" % returnvalue
        else:
            print "ERROR, can't retrieve the job run ID from a job run key."
            returnvalue = -1
        return returnvalue

The main call is:

    BatchJobKey = self.BatchGetDBKeyByGroupAndName(blGroupName, blJobName)
    if BatchJobKey != 0:
        BatchJobRunKey = self.BatchExecuteJobAndWait(BatchJobKey)
        if BatchJobRunKey != 0:
            BatchJobRunId = self.jobRunKeyToJobRunId(BatchJobRunKey)
            if BatchJobRunId != 0:
                batchJobHadErrors = ????  # I want to use getHadErrors here...

Thanks for your help. Olivier

1. Re: How to get if a run batch job had errors? Bill Robinson Jan 21, 2014 11:02 AM (in response to Olivier RENAULT)
Once you have the jobRunId you should be able to use the commands in the JobRun namespace.

2. Re: How to get if a run batch job had errors? Olivier RENAULT Jan 21, 2014 12:59 PM (in response to Bill Robinson)
Yes, for sure, but how? getHadErrors does not take any parameters... I may have to store the values with a kind of blcli_storeenv, but I don't know how to call this in my bljython code... Regards, O.

3. Re: How to get if a run batch job had errors? Bill Robinson Jan 21, 2014 1:06 PM (in response to Olivier RENAULT)
You load the job run object into memory and then get hadErrors. There should also be a 'JobRun.getJobRunHadErrors' command that takes the jobRunKey.

4. Re: How to get if a run batch job had errors? Olivier RENAULT Jan 21, 2014 1:13 PM (in response to Bill Robinson)
Can you give me a short code sample, to be sure I understand correctly? For a Deploy Job I can retrieve the result with 'JobRun.getJobRunHadErrors', but it's different with a Batch Job, and I'm not sure how to load the object into memory the right way. Thanks for your help!

5. Re: How to get if a run batch job had errors? Bill Robinson Jan 21, 2014 1:16 PM (in response to Olivier RENAULT) 1 of 1 people found this helpful
You have the job run id from the batch job, right? So feed that job run id to 'JobRun.getJobRunHadErrors'.

6. Re: How to get if a run batch job had errors? Olivier RENAULT Jan 22, 2014 10:02 AM (in response to Bill Robinson)
Hi Bill, your workaround is working. I'm using the JobRun.getJobRunHadErrors method, as I had already done for testing the result of a Deploy Job. Since this is a Batch Job and I have to use the dedicated "BatchJob" and "BatchJobRun" namespaces to run it, and there's a getHadErrors method in BatchJobRun, I thought I had to use that one. But JobRun.getJobRunHadErrors works on the batch job run too (with the batch job run key). It's not the most elegant way, but it works, so we can close this. Thanks a lot.
https://communities.bmc.com/thread/98571
CC-MAIN-2017-51
refinedweb
660
67.96
QML Internationalization

Strings in QML can be marked for translation using the qsTr(), qsTranslate(), QT_TR_NOOP(), and QT_TRANSLATE_NOOP() functions. For example:

    Text { text: qsTr("Pictures") }

These functions are standard QtScript functions; for more details see QScriptEngine::installTranslatorFunctions(). QML relies on the core internationalization capabilities provided by Qt, which are described more fully in the Qt internationalization documentation. You can test a translation with the QML Viewer using the -translation option.

Example

First we create a simple QML file with text to be translated. The string that needs to be translated is enclosed in a call to qsTr().

hello.qml:

    import QtQuick 1.0

    Rectangle {
        width: 200; height: 200
        Text { text: qsTr("Hello"); anchors.centerIn: parent }
    }

Next we create a translation source file using lupdate:

    lupdate hello.qml -ts hello.ts

Then we open hello.ts in Linguist, provide a translation, and create the release file hello.qm.

Finally, we can test the translation:

    qmlviewer -translation hello.qm hello.qml

You can see a complete example and source code in the QML Internationalization example.
http://doc.qt.io/qt-4.8/qdeclarativei18n.html
CC-MAIN-2016-07
refinedweb
168
51.75
Ads by The Lounge Old news from Joel on Software that I finally got around to playing with: It's a fixed width font that looks good at small font sizes, slashes zeros, etc.: Back in the days when I did Mac development (System 6) the biggest monitors available for the Mac were maybe 9", and the only way to see a reasonable amount of code on screen was to use a tiny font. Now that I have two 18". [Joel on Software] More info here: Well, I've been using it as my default font in notepad and Visual Studio for a week and I'm still happy with it. The TTF version seems to work best for me. Microsoft has a developer special on their Empower Program. 5 MSDN Universal subscriptions for only $375. You'll get copies of most of their software with it. [Techbargains.com] Info / Enrollment ISV Requirements on Microsoft website (shows old price - $750). I've stumbled across Doxygen documentation on Rotor (a.k.a. SSCLI) on several searches for obscure .NET FCL methods, and it's really interesting. Sure it's different than the Windows .NET Framework redistributable, but I'd bet it's pretty close in most cases, and even if not it illustrates what's functionally going on since this is coded to the same spec. Looking at code that implements the .NET CLI classes tells me much more about them than documentation usually does. I like the Doxygen format pretty well, too - cross-referenced via hyperlinks, class / namespace / file views, etc. For starters, check these out: Probably the best top level entry points are the main Shared Source CLI page or the Namespace index. Oh, and all these links are to 'System.Uri.NonPathPart' is inaccessible due to its protection level Argh! Why? Parsing URL's is a pain, the System.Uri object's got some nice utility functions that show the right values in a watch window, and I can't use them. I hate duplicating this kind of parsing, since that includes duplicating the QA effort as well. 
Reminds me of trying to call DirectX functions from VB that weren't exported or exposed via the COM interface back in the day. I can see the candy, but I cannot eat the candy. Wish: System.Uri would be less protective of its utility properties. No rocket science here, but this query has saved me a lot of time. I've got it saved as a tql file in "Microsoft SQL Server\80\Tools\Templates\SQL Query Analyzer\" so I can hit 'Alt+F N' and pick it from the templates. Scenario: I've generated data objects with CodeSmith that have a property for each table column. I still need to build edit screens that map textboxes (for the single edit case) or datagrid columns to the objects. Enter the INFORMATION_SCHEMA view, which returns a row for each column in a given database. I switch Query Analyzer to text output mode (Ctrl-T), run the queries, copy the HTML into the ASPX (HTML view), switch to Design view (so the IDE sees the new controls and throws references to them in the ASPX.CS). Then I pop into the code behind and paste the appropriate "fields to properties" and "properties to fields" code and I'm off... to correct the compilation errors. I'm not mapping the datatypes here, and I'm generating edit fields for ID fields that shouldn't be edited. But, it's a good start on an edit page.

    declare @table varchar(100)
    declare @database varchar(100)
    set @table = 'PortfolioPerformance'
    set @database = 'ResearchReports'

    --**SINGLE ITEM EDIT**
    --Generate single item edit textboxes
    select '<TR> <TD>' + column_name + '</TD> <TD> <asp:TextBox></asp:TextBox></TD> </TR>'
    from information_schema.columns
    where table_name = @table and table_catalog = @database

    --Fill text fields from class
    select 'txt' + column_name + '.Text = _' + @database + '.' + column_name + '.ToString();'
    from information_schema.columns
    where table_name = @table and table_catalog = @database

    --Set class properties from text boxes
    select '_' + @database + '.' + column_name + ' = decimal.Parse(txt' + column_name + '.Text);'
    from information_schema.columns
    where table_name = @table and table_catalog = @database

    --**DATAGRID EDIT**
    --Generate datagrid template column
    select '<asp:TemplateColumn> <ItemTemplate> <asp:Label></asp:Label> </ItemTemplate> <FooterTemplate> <nobr> <asp:TextBox></asp:TextBox> <asp:RequiredFieldValidator><nobr></nobr></asp:RequiredFieldValidator> </nobr> </FooterTemplate> <EditItemTemplate> <asp:TextBox></asp:TextBox> <asp:RequiredFieldValidator><nobr></nobr></asp:RequiredFieldValidator> </EditItemTemplate> </asp:TemplateColumn>'
    from information_schema.columns
    where table_name = @table and table_catalog = @database

    --Set class properties from datagrid
    select '_' + @table + '.' + column_name + ' = ((TextBox)e.Item.FindControl("txt' + column_name + '")).Text;'
    from information_schema.columns
    where table_name = @table and table_catalog = @database

Could this be done start to finish with a CodeSmith template? Sure. Um... that's not what I did (this one goes to eleven). Sun's Project Looking Glass (video) is in the news - the story made the front page on Google News, and it's on the front page of the Sun site. The video's got quite a bit of eye-candy, but doesn't really seem to offer much in terms of usability. Flipping windows around and typing notes on the back might be nice, I guess. The rest looked great in a demo, but that's it. A comment in the Seattle PI article indicated that MS Research talked about something like this 4 years ago - TaskGallery (video). It's an ugly site, and comparing the two videos shows the huge advances video cards have made in 4 1/2 years, but if you can look past that you can compare the actual usability aspects (i.e. what you'd want to use day after day vs. what you'd want to show off); the TaskGallery approach goes quite a bit further (you really have to watch the video to get the idea). Both are interesting, but I think I'll save the 3-D for Gorilla at Large.
A funny sidenote - the Sun demo repeatedly points out that a 3-D desktop is the kind of thing that can happen when the software community as a whole approaches a problem as opposed to a single (unspoken “eeeevil”) company. Funny that MSFT was working on this 4 years ago... Good thing Sun's on an innovation rampage with the Java Desktop... Saw a link to this handy tool today on the ieHttpHeaders page: FullSource is an IE addin which displays the HTML after client side javascript gets done with it. Simple install, adds right click “Full Source” option: And if you haven't checked out ieHttpHeaders, it's pretty cool too. Really helpful for debugging redirect issues, forms auth flow, cookies, etc: MS is advertising a demo of FrontPage on Slashdot banners (which seems brave, but I'd be more than a little surprised if it causes *nix zealots to dash off to store.microsoft.com). Anyhow, the demo is using RunAware, and it's a pretty cool way to run a demo - you actually get to use the app without downloading anything (well, except for a java client applet which hosts a remote MetaFrame session). They've also got Visio and Project 2002 up there. I think it's a great way to try out software - you can use the whole, uncrippled application, and you don't have a big download or demo app to uninstall. I think they should throw VisualStudio up there as well. Update: Phil Weber let me know that VisualStudio (and Windows Server 2003, SQL Server 2000, Office 2003, the ASP.NET Starter Kits, etc.) is available via Terminal Services. You get to play around on a fully loaded server:
http://weblogs.asp.net/jgalloway/archive/2003/12.aspx
crawl-002
refinedweb
1,254
62.88
Hi all. My test question says: If it is NOT a universally held belief that objects must have both data & methods, argue why you might want to create a class that had only methods... or a class that had only data. Any input? Thank you!

I could see a class with only methods if you inherited another class with data from it, perhaps.

You write an interface class to enhance data hiding. Write the class that contains methods to access the members of an implementation class, define those methods as friends of the implementation class, and offer only the interface class to a user. Technically this isn't needed, as you could define the interface to be public and the implementation methods to be private, called by the public methods. However, it could be a way of truly hiding data from another programmer: only allow them use of the interface class and keep the implementation class unknown, used only by the interface itself, not the programmer. Or: you derive a class from a base class and intend to use all of the members and methods defined in the base class with one or two extra methods, but no new data members are required. -Prelude

Aside from access, if you don't need instantiation, you can also use namespaces.

I have created such a class... I created a string helper class to perform things such as:

    void GetFileNameFromDirS( CString strFullPath )
    void GetDirFromPath( CString strFullPath )
    BOOL VerifyFileExtension( CString strFilename, CString strExtension )

Anyway, my point is that the CString class is not intended to be inherited from: no virtual destructor. You can do this through composition, but that would require more work.

I have also created similar classes to handle certain routines such as exceptions. Consider this:

    try {
        // ...code...
    }
    catch( CSomeException* e )
    {
        UtilException util;
        util.DisplayExceptionMessage( e );
        // ...handler code...
        e->Delete();
    }

instead of:

    catch( CSomeException* e )
    {
        mystring msg;
        mystring s;
        for( int i = 0; i < e->GetErrorCount(); i++ )
        {
            s.Format( "Error Code %d,\nHelp Context: %d", e->m_nError, e->m_nContext );
            msg += s;
            msg += e->m_strDescription;
            msg += "\n";
        }
        DisplayMsg( msg );
    }

Which implementation would you prefer?
http://cboard.cprogramming.com/cplusplus-programming/9911-data-vs-method-how-do-you-argue-qtion-printable-thread.html
CC-MAIN-2016-30
refinedweb
399
62.78
I/O streaming

• A stream is an abstraction that either produces or consumes information. A stream is linked to a physical device by the Java I/O system.
• Java 2 defines two types of streams: byte and character.
• Byte streams provide a convenient means for handling input and output of bytes. Byte streams are used, for example, when reading or writing binary data. The two abstract classes are InputStream and OutputStream.
• Character streams provide a convenient means for handling input and output of characters. The two abstract classes are Reader and Writer. These abstract classes handle Unicode character streams.
• The package needed is java.io

Byte Streams
• The byte stream classes provide a rich environment for handling byte-oriented I/O.
• A byte stream can be used with any type of object, including binary data. This versatility makes byte streams important to many types of programs.

Input Streams
• Input streams read the bytes of data.
• Java's basic input class is java.io.InputStream

Methods
• public abstract int read( ) throws IOException
• public int read(byte[] input) throws IOException
• public int read(byte[] input, int offset, int length) throws IOException
• public long skip(long n) throws IOException
• public int available( ) throws IOException
• public void close( ) throws IOException

Output Streams
• Output streams write the bytes of data.
• Java's basic output class is java.io.OutputStream

Methods
• public abstract void write(int b) throws IOException
• public void write(byte[] data) throws IOException
• public void write(byte[] data, int offset, int length) throws IOException
• public void flush( ) throws IOException
• public void close( ) throws IOException

FileInputStream
• The FileInputStream class creates an InputStream that you can use to read bytes from a file.
• Its two constructors are
    FileInputStream (String filepath)
    FileInputStream (File fileObj)
Either can throw a FileNotFoundException.
The filepath is the full path name of a file, and fileObj is a File object that describes the file.

Example

    import java.io.*;
    class FISDemo {
        public static void main(String args[]) throws Exception {
            int size;
            InputStream f = new FileInputStream("input.txt");
            System.out.println("Total Available Bytes: " + (size = f.available()));
            int n = size;
            for (int i = 0; i < n; i++) {
                System.out.print((char) f.read());
            }
        }
    }

FileOutputStream
• FileOutputStream creates an OutputStream that you can use to write bytes to a file.
• Its constructors are
    FileOutputStream (String filePath)
    FileOutputStream (File fileObj)
Creation of a FileOutputStream is not dependent on the file already existing. FileOutputStream will create the file before opening it for output when you create the object. If you attempt to open a read-only file, an IOException will be thrown.

Example

    import java.io.*;
    class FOSDemo {
        public static void main(String args[]) throws Exception {
            String source = "This is the program to demonstrate file output stream";
            byte buf[] = source.getBytes();
            OutputStream f0 = new FileOutputStream("file1.txt");
            for (int i = 0; i < buf.length; i += 2) {
                f0.write(buf[i]);
            }
            f0.close();
            OutputStream f1 = new FileOutputStream("file2.txt");
            f1.write(buf);
            f1.close();
        }
    }

ByteArrayInputStream
• ByteArrayInputStream is an implementation of an input stream that uses a byte array as the source.
• This class has two constructors, each of which requires a byte array to provide the data source:
    ByteArrayInputStream(byte array[ ])
    ByteArrayInputStream(byte array[ ], int start, int numBytes)

Example

    import java.io.*;
    class BAISDemo {
        public static void main(String args[]) throws IOException {
            String tmp = "abc";
            byte b[] = tmp.getBytes();
            ByteArrayInputStream in = new ByteArrayInputStream(b);
            for (int i = 0; i < 2; i++) {
                int c;
                while ((c = in.read()) != -1) {
                    if (i == 0) {
                        System.out.print((char) c);
                    } else {
                        System.out.print(Character.toUpperCase((char) c));
                    }
                }
                System.out.println();
                in.reset();
            }
        }
    }

• This example first reads each character from the stream and prints it as is, in lowercase. It then resets the stream and begins reading again, this time converting each character to uppercase before printing.
The output is:

    abc
    ABC

ByteArrayOutputStream
• ByteArrayOutputStream is an implementation of an output stream that uses a byte array as the destination.
• ByteArrayOutputStream has two constructors:
    ByteArrayOutputStream( )
    ByteArrayOutputStream(int numBytes)
• In the first constructor, a buffer of 32 bytes is created. In the second, a buffer is created with a size equal to that specified by numBytes.

Example

    import java.io.*;
    class ByteArrayOutputStreamDemo {
        public static void main(String args[]) throws IOException {
            ByteArrayOutputStream f = new ByteArrayOutputStream();
            String s = "This should end up in the array";
            byte buf[] = s.getBytes();
            f.write(buf);
            System.out.println("Buffer as a string");
            System.out.println(f.toString());
            System.out.println("Into array");
            byte b[] = f.toByteArray();
            for (int i = 0; i < b.length; i++) {
                System.out.print((char) b[i]);
            }
            System.out.println("\nTo an OutputStream()");
            OutputStream f2 = new FileOutputStream("test.txt");
            f.writeTo(f2);
            f2.close();
            System.out.println("Doing a reset");
            f.reset();
            for (int i = 0; i < 3; i++)
                f.write('X');
            System.out.println(f.toString());
        }
    }

When you run the program, you will create the following output. Notice how after the call to reset( ), the three X's end up at the beginning.

    Buffer as a string
    This should end up in the array
    Into array
    This should end up in the array
    To an OutputStream ()
    Doing a reset
    XXX

Buffered Byte Streams
• For the byte-oriented streams, a buffered stream extends a filtered stream class by attaching a memory buffer to the I/O streams.
• This buffer allows Java to do I/O operations on more than a byte at a time, hence increasing performance. Because the buffer is available, skipping, marking, and resetting of the stream become possible.
• The buffered byte stream classes are BufferedInputStream and BufferedOutputStream. PushbackInputStream also implements a buffered stream.
Character Streams
• While the byte stream classes provide sufficient functionality to handle any type of I/O operation, they cannot work directly with Unicode characters.
• Since one of the main purposes of Java is to support the "write once, run anywhere" philosophy, it was necessary to include direct I/O support for characters.

Reader
• Reader is an abstract class that defines Java's model of streaming character input.

Methods
• abstract void close( ) Closes the input source. Further read attempts will generate an IOException.
• void mark(int numChars) Places a mark at the current point in the input stream that will remain valid until numChars characters are read.
• boolean markSupported( ) Returns true if mark( )/reset( ) are supported on this stream.
• int read( ) Returns an integer representation of the next available character from the invoking input stream. -1 is returned when the end of the file is encountered.
• int read(char buffer[ ]) Attempts to read up to buffer.length characters into buffer and returns the actual number of characters that were successfully read. -1 is returned when the end of the file is encountered.
• abstract int read(char buffer[ ], int offset, int numChars) Attempts to read up to numChars characters into buffer starting at buffer[offset], returning the number of characters successfully read. -1 is returned when the end of the file is encountered.
• boolean ready( ) Returns true if the next input request will not wait. Otherwise, it returns false.
• void reset( ) Resets the input pointer to the previously set mark.
• long skip(long numChars) Skips over numChars characters of input, returning the number of characters actually skipped.

Writer
• Writer is an abstract class that defines streaming character output.

Methods
• abstract void close ( ) Closes the output stream. Further write attempts will generate an IOException.
• abstract void flush ( ) Finalizes the output state so that any buffers are cleared.
That is, it flushes the output buffers.
• void write(int ch) Writes a single character to the invoking output stream. Note that the parameter is an int, which allows you to call write with expressions without having to cast them back to char.
• void write(char buffer[ ]) Writes a complete array of characters to the invoking output stream.
• abstract void write(char buffer[ ], int offset, int numChars) Writes a subrange of numChars characters from the array buffer, beginning at buffer[offset], to the invoking output stream.
• void write(String str) Writes str to the invoking output stream.
• void write(String str, int offset, int numChars) Writes a subrange of numChars characters from the string str, beginning at the specified offset.

FileReader
• The FileReader class creates a Reader that you can use to read the contents of a file.
• Its constructors are
    FileReader (String filePath)
    FileReader (File fileObj)
• Either can throw a FileNotFoundException. Here, filePath is the full path name of a file, and fileObj is a File object that describes the file.

FileWriter
• FileWriter creates a Writer that you can use to write to a file. Its constructors are
    FileWriter (String filePath)
    FileWriter (String filePath, boolean append)
    FileWriter (File fileObj)
    FileWriter (File fileObj, boolean append)
• They can throw an IOException. Here, filePath is the full path name of a file, and fileObj is a File object that describes the file. If append is true, then output is appended to the end of the file.

Example

    import java.io.*;
    class FileWriterDemo {
        public static void main(String args[]) throws Exception {
            String source = "Now is the time for all good men\n"
                + " to come to the aid of their country\n"
                + " and pay their due taxes.";
            char buffer[] = new char[source.length()];
            source.getChars(0, source.length(), buffer, 0);
            FileWriter f0 = new FileWriter("file1.txt");
            FileWriter f1 = new FileWriter("file2.txt");
            FileWriter f2 = new FileWriter("file3.txt");
            // write every other character to file1.txt
            for (int i = 0; i < buffer.length; i += 2) {
                f0.write(buffer[i]);
            }
            f0.close();
            // write the entire buffer to file2.txt
            f1.write(buffer);
            f1.close();
            // write the last quarter of the buffer to file3.txt
            f2.write(buffer, buffer.length - buffer.length/4, buffer.length/4);
            f2.close();
        }
    }

• This example creates a sample buffer of characters by first making a String and then using the getChars( ) method to extract the character array equivalent. It then creates three files.
• The first, file1.txt, will contain every other character from the sample. The second, file2.txt, will contain the entire set of characters. Finally, the third, file3.txt, will contain only the last quarter.
CharArrayReader
• CharArrayReader is an implementation of an input stream that uses a character array as the source.
• This class has two constructors, each of which requires a character array to provide the data source:
    CharArrayReader (char array [ ])
    CharArrayReader (char array [ ], int start, int numChars)
Here, array is the input source. The second constructor creates a Reader from a subset of your character array that begins with the character at the index specified by start and is numChars long.

Example

    import java.io.*;
    public class CharArrayReaderDemo {
        public static void main(String args[]) throws IOException {
            String tmp = "abcdefghijklmnopqrstuvwxyz";
            int length = tmp.length();
            char c[] = new char[length];
            tmp.getChars(0, length, c, 0);
            CharArrayReader input1 = new CharArrayReader(c);
            CharArrayReader input2 = new CharArrayReader(c, 0, 5);
            int i;
            System.out.println("input1 is:");
            while ((i = input1.read()) != -1) {
                System.out.print((char) i);
            }
            System.out.println();
            System.out.println("input2 is:");
            while ((i = input2.read()) != -1) {
                System.out.print((char) i);
            }
            System.out.println();
        }
    }

• The input1 object is constructed using the entire lowercase alphabet, while input2 contains only the first five letters. Here is the output:

    input1 is:
    abcdefghijklmnopqrstuvwxyz
    input2 is:
    abcde

CharArrayWriter
• CharArrayWriter is an implementation of an output stream that uses an array as the destination.
• CharArrayWriter has two constructors:
    CharArrayWriter ( )
    CharArrayWriter (int numChars)
• In the first form, a buffer with a default size is created. In the second, a buffer is created with a size equal to that specified by numChars. The buffer is held in the buf field of CharArrayWriter. The buffer size will be increased automatically, if needed.
Example

    import java.io.*;
    class CharArrayWriterDemo {
        public static void main(String args[]) throws IOException {
            CharArrayWriter f = new CharArrayWriter();
            String s = "This should end up in the array";
            char buf[] = new char[s.length()];
            s.getChars(0, s.length(), buf, 0);
            f.write(buf);
            System.out.println("Buffer as a string");
            System.out.println(f.toString());
            System.out.println("Into array");
            char c[] = f.toCharArray();
            for (int i = 0; i < c.length; i++) {
                System.out.print(c[i]);
            }
            System.out.println("\nTo a FileWriter ()");
            FileWriter f2 = new FileWriter("test.txt");
            f.writeTo(f2);
            f2.close();
            System.out.println("Doing a reset");
            f.reset();
            for (int i = 0; i < 3; i++)
                f.write('X');
            System.out.println(f.toString());
        }
    }

Copyright © 2018-2023 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai.
https://www.brainkart.com/article/Java---I-O-streaming_10332/
CC-MAIN-2022-40
refinedweb
1,877
58.38
Calling Native Functions from Managed Code

The common language runtime provides Platform Invocation Services, or PInvoke, that enables managed code to call C-style functions in native dynamic-link libraries (DLLs). The same data marshaling is used as for COM interoperability with the runtime and for the "It Just Works," or IJW, mechanism. For more information, see Using C++ Interop (Implicit PInvoke).

With IJW, you include the native header, link against the import library, and do any marshaling explicitly:

    #include <stdio.h>
    using namespace System;
    using namespace System::Runtime::InteropServices;

    int main() {
        String ^ pStr = "Hello World!";
        // explicit Unicode-to-ANSI marshaling, with its own allocation
        char* pChars = (char*) Marshal::StringToHGlobalAnsi(pStr).ToPointer();
        puts(pChars);
        Marshal::FreeHGlobal((IntPtr) pChars);
    }

Advantages of IJW

There is no need to write DLLImport attribute declarations for the unmanaged APIs the program uses. Just include the header file and link with the import library. The IJW mechanism is slightly faster (for example, the IJW stubs do not need to check for the need to pin or copy data items, because that is done explicitly by the developer). It clearly illustrates performance issues: in this case, you are translating from a Unicode string to an ANSI string, with an attendant memory allocation and deallocation. A developer writing this code using IJW would realize that calling _putws and using PtrToStringChars would be better for performance. If you call many unmanaged APIs using the same data, marshaling it once and passing the marshaled copy is much more efficient than re-marshaling every time.

Disadvantages of IJW

Marshaling must be specified explicitly in code instead of by attributes (which often have appropriate defaults). The marshaling code is inline, where it is more invasive in the flow of the application logic. Because the explicit marshaling APIs return IntPtr types for 32-bit to 64-bit portability, you must use extra ToPointer calls. The specific method exposed by C++ is the more efficient, explicit method, at the cost of some additional complexity. If the application uses mainly unmanaged data types or if it calls more unmanaged APIs than .NET Framework APIs, we recommend that you use the IJW feature. To call an occasional unmanaged API in a mostly managed application, the choice is more subtle.
PInvoke lets you declare the native function in managed code with a DllImport attribute and call it directly:

    using namespace System;
    using namespace System::Runtime::InteropServices;

    [DllImport("user32", CharSet=CharSet::Ansi)]
    extern "C" int MessageBox(IntPtr hWnd, String ^ pText, String ^ pCaption, unsigned int uType);

    int main() {
        String ^ pText = "Hello World!";
        String ^ pCaption = "PInvoke Test";
        MessageBox((IntPtr)0, pText, pCaption, 0);
    }

The output is a message box that has the title PInvoke Test and contains the text Hello World!. The marshaling information is also used by PInvoke to look up functions in the DLL. In user32.dll there is in fact no MessageBox function, but CharSet=CharSet::Ansi enables PInvoke to use MessageBoxA, the ANSI version, instead of MessageBoxW, which is the Unicode version. In general, we recommend that you use Unicode versions of unmanaged APIs, because that eliminates the translation overhead from the native Unicode format of .NET Framework string objects to ANSI.

Using PInvoke is not appropriate for all C-style functions in DLLs. For example, suppose there is a function MakeSpecial in mylib.dll declared as follows:

    char * MakeSpecial(char * pszString);

If we use PInvoke in a Visual C++ application, we might write something similar to the following:

    [DllImport("mylib")]
    extern "C" String * MakeSpecial([MarshalAs(UnmanagedType::LPStr)] String ^);

The difficulty here is that we cannot delete the memory for the unmanaged string returned by MakeSpecial. Other functions called through PInvoke return a pointer to an internal buffer that does not have to be deallocated by the user. In this case, using the IJW feature is the obvious choice.

With PInvoke, no marshaling is needed between managed and C++ native primitive types with the same form. For example, no marshaling is required between Int32 and int, or between Double and double. However, you must marshal types that do not have the same form. This includes char, string, and struct types. The following table shows the mappings used by the marshaler for various types. The marshaler automatically pins memory allocated on the runtime heap if its address is passed to an unmanaged function. Pinning prevents the garbage collector from moving the allocated block of memory during compaction.
In the example shown earlier in this topic, the CharSet parameter of DllImport specifies how managed Strings should be marshaled; in this case, they should be marshaled to ANSI strings for the native side. You can specify marshaling information for individual arguments of a native function by using the MarshalAs attribute. There are several choices for marshaling a String * argument: BStr, ANSIBStr, TBStr, LPStr, LPWStr, and LPTStr. The default is LPStr.

    using namespace System;
    using namespace System::Runtime::InteropServices;

    [DllImport("msvcrt", CharSet=CharSet::Ansi)]
    extern "C" int puts([MarshalAs(UnmanagedType::LPWStr)] String ^);

    int main() {
        String ^ pStr = "Hello World!";
        puts(pStr);
    }

In this example, the string is marshaled as a double-byte Unicode character string, LPWStr. The output is the first letter of Hello World!, because the second byte of the marshaled string is null, and puts interprets this as the end-of-string marker.

The MarshalAs attribute is in the System::Runtime::InteropServices namespace. The attribute can be used with other data types such as arrays. As mentioned earlier in the topic, the marshaling library provides a new, optimized method of marshaling data between native and managed environments. For more information, see Overview of Marshaling in C++.
https://msdn.microsoft.com/en-us/library/ms235282(v=vs.90).aspx
#include "xfeatures2d.hpp"

LATCH: Class for computing the LATCH descriptor. If you find this code useful, please add a reference to the following paper in your work: Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015. LATCH is a binary descriptor based on learned comparisons of triplets of image patches. bytes is the size of the descriptor - can be 64, 32, 16, 8, 4, 2 or 1. rotationInvariance - whether or not the descriptor should compensate for orientation changes. half_ssd_size - half of the mini-patch size. For example, if we would like to compare triplets of patches of size 7x7, then the half_ssd_size should be (7-1)/2 = 3. Note: the descriptor can be coupled with any keypoint extractor. The only requirement is that if you set rotationInvariance = True, then you will have to use an extractor which estimates the patch orientation (in degrees). Examples of such extractors are ORB and SIFT. Note: a complete example can be found under /samples/cpp/tutorial_code/xfeatures2D/latch_match.cpp
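The half_ssd_size relation above is just arithmetic on the patch side length; a small sketch (plain Python, with a made-up helper name, only illustrating the formula from the docs):

```python
def half_ssd_size_for(patch_size):
    """Half of the mini-patch size for a patch_size x patch_size patch.

    The side length must be odd so the patch has a centre pixel.
    """
    if patch_size % 2 == 0:
        raise ValueError("patch size must be odd")
    return (patch_size - 1) // 2

print(half_ssd_size_for(7))  # -> 3, matching the 7x7 example above
```

So for 5x5 mini-patches you would pass half_ssd_size = 2, and for 9x9 patches half_ssd_size = 4.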
https://docs.opencv.org/3.2.0/d6/d36/classcv_1_1xfeatures2d_1_1LATCH.html
Hi, how can we simplify Groovy code for several SQL requests that are not the same? I'm querying different tables for different data, so I have at least 50 SQL queries, and for each query I set a property with the data retrieved from the table. My problem is: I have 50 SQL queries. Is there a way to simplify this code? I cannot use a loop because all the tables are different.

Example of a query:

StringBuilder builder = new StringBuilder()
def tableValues = sql.eachRow("select type from order where id = 'abc123'") { row ->
    builder.append("${row.type}")
}
// Set properties
String myvalue = builder.toString()
testRunner.testCase.setPropertyValue("type", myvalue)

StringBuilder builder2 = new StringBuilder()
def tableValues2 = sql.eachRow("select name from product where desc = 'yzx'") { row ->
    builder2.append("${row.name}")
}
// Set properties
String myvalue2 = builder2.toString()
testRunner.testCase.setPropertyValue("name", myvalue2)

Solved! Go to Solution.

I would probably just put the results into the test run context.

context.type = sql.firstRow('select type from order where id = ?', ["abc123"]).type
context.name = sql.firstRow('select name from product where desc = ?', ["yzx"]).name

That form uses prepared statements, which you should be doing anyway, and it is not much more difficult.

To get the values in a request's body:

${type}
${name}

To get the values in another Groovy script:

context.type
context.name

View solution in original post

Thank you so much JHunt. In the case where I have this SQL request:

select type from order where number = '789' and id = 'abc123';

I tried this:

context.type = sql.firstRow('select type from order where number ="789" and id = ?', ["abc123"]).type

but it returns an error.
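The accepted answer's one-line-per-query style can also be made table-driven even when every query hits a different table. The sketch below shows the same idea in Python with sqlite3 (for illustration only; the thread itself is Groovy/SoapUI, and table/column names are adjusted to avoid SQL keywords): each entry holds a property name, a parameterized query, and its bind values, so a 51st query is one more row of data rather than another code block.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table orders (id text, type text)")
conn.execute("create table product (descr text, name text)")
conn.execute("insert into orders values ('abc123', 'express')")
conn.execute("insert into product values ('yzx', 'widget')")

# One entry per query: (property name, SQL with placeholders, bind values).
queries = [
    ("type", "select type from orders where id = ?", ("abc123",)),
    ("name", "select name from product where descr = ?", ("yzx",)),
]

properties = {}  # stand-in for testRunner.testCase.setPropertyValue
for prop, sql, params in queries:
    row = conn.execute(sql, params).fetchone()  # prepared statement
    properties[prop] = row[0]

print(properties)  # -> {'type': 'express', 'name': 'widget'}
```

The loop body never changes; only the data table grows, and every query stays parameterized.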
https://community.smartbear.com/t5/SoapUI-Open-Source/How-to-simplify-groovy-code-for-several-sql-request-in-different/m-p/189838
SkyDrive and Windows 8.1

This blog post is part of a series of guest posts we're publishing this week from different people in groups across Microsoft who helped us build Windows 8.1. - Brandon

My name is Adam Czeisler and I am the Development Manager for one half of the SkyDrive team in Windows Services. Our team, called SkyDrive Devices & Roaming, is responsible for building the client software that is the expression of SkyDrive on all devices, including Windows tablets and PCs. The other half of the team, called SkyDrive Cloud Services, is responsible for building the web service that powers these experiences. They also build the SkyDrive website.

In the past two years leading up to our work in Windows 8.1, our team built and released several SkyDrive apps on all major platforms. We released apps for Windows Phone, Xbox, iOS, and Android, and a Store app for Windows 8. We also released sync clients that perform two-way synchronization of files with the SkyDrive service on Windows 8/7/Vista as well as on Mac OS X. This was a great learning experience for our team. Over the course of several updates to these apps we have learned how people want to access their files from their devices, what features matter most on each device, and of course the unique set of development challenges and opportunities these different platforms offer. All of this work prepared us for this past year, when we set out, together with experts on the Windows team, to bring the best of all SkyDrive experiences to Windows.

Before Windows 8.1

Before Windows 8.1, our engineering designs fell into two major categories: apps and sync clients. The apps communicate with the SkyDrive service using a stateless JSON API in a design pattern similar to a website. In fact, the SkyDrive apps browse the hierarchy of files and folders in SkyDrive via the same JSON API used by the SkyDrive website. This design is helpful because we strive to make the feature set consistent across all endpoints.
The image below shows the SkyDrive app on Xbox 360, built on the JSON API (watch the video). In contrast, the sync clients used a proprietary sync protocol directly against a lower layer of the SkyDrive service stack. This sync protocol is generally limited to supporting "CRUD" operations on files (Create, Read, Update and Delete).

There are pros and cons to each engineering approach. The design used by the apps is good because it allows them to have a small footprint on the device. The apps are able to browse very large data sets relatively quickly because the JSON API lets them ask for any view of the tree independently. There is no need to download the full body of the files in order to browse the collection of files. Caching items like thumbnails, tree structure, etc. can be done in a dynamic way, and the cache can be managed independently of the core logic. Because of this design choice, these apps are well suited to small storage devices such as phones or tablets. On the other hand, this approach has two major drawbacks: 1) offline operations can't be performed on the files, and 2) since most operations are performed directly against the service, performance is limited by the speed of the network rather than the speed of the local storage on the device.

The sync clients have the opposite characteristics. On the positive side, they naturally have full CRUD operations available both offline and online. And when the apps are interacting with the files, they are doing so against the local storage on the device, so the speed is often one or two orders of magnitude higher than when accessing this content from the network. But the sync approach has a major drawback: in order to browse or interact with any of the files on SkyDrive, they all have to be fully downloaded onto the device. This solution just isn't acceptable on a device with limited storage, because the slice of data that would fit on the device is typically too small to be useful.
Also, the raw time to download this data, and sometimes even the cost of downloading it, can be very large.

Best of Both Worlds: Sharing Expertise

Our team has deep expertise in file sync technology. Several members have released multiple file sync products over the years; some have even worked on similar technology in Microsoft Research. But we didn't have deep experience with the Windows Shell architecture, codebase or APIs. Clearly we would need this in order to deliver the best possible integration of SkyDrive into Windows. Luckily, the team in Windows with the most expertise in this area was formed to help us design and implement the integration of SkyDrive into Windows. This team, called Cloud Experience, was our main partner in Windows during the year. After discussing the problem above, they came up with a design for a new platform in Windows called "smart files". That sounded very promising, so this was the design we chose. Now we had a solution that reduced the space needed on the device to access the full SkyDrive namespace while still allowing full CRUD operations to be performed both online and offline via Explorer on the Desktop. But we also needed the SkyDrive modern app to be able to view and modify these files. So we rebuilt the SkyDrive app on top of the Windows Shell APIs, which are built on top of the core file system with additional behavior, such as that mentioned above for smart files. This meant that the SkyDrive app could now have offline access as well as a spectrum of performance from network to local speeds, depending on whether or not the files were downloaded to the local machine. Awesome. Now we just had to build it!

Journey to Shipping

Working directly in Windows this past year to integrate SkyDrive deeply into the operating system was a fantastic journey for our team.
We learned firsthand about the tremendous breadth and depth of scenarios that the Windows Shell supports by integrating both as a provider in the sync engine and as a consumer in the modern app. Through the process we gained a deep respect for the people who built that system over the past decades in such a layered and extensible way that it could be augmented with a service-backed file system in such a short time with relatively few architectural changes. For example, the image below shows the existing file copy dialog in Explorer effortlessly lighting up a scenario where the sync engine is automatically invoked to download several images as they are being copied to the Desktop from a folder in SkyDrive. Stats like percent complete, throughput and remaining bytes are shown to the user. You can even pause or abort the operation through this user interface, and because of the deep integration with the Windows Shell, the sync engine will respond to these commands directly.

Another highlight was working closely with the group in Windows dedicated to performance analysis and improvements. With their help and guidance, we made several key changes to minimize the impact of the sync engine on the system. For example, for more efficient monitoring of changes in the file system, we integrated with a built-in Windows component called the Change Tracker. This allowed us to eliminate nearly all direct scans of the local files in SkyDrive, which in the standalone version of the sync client were necessary to ensure we didn't miss any changes while the process wasn't running. With help from the Cloud Experience team, we cut the memory footprint in half by leveraging the search indexer to store file transfer status. We also did work to protect the CPU idle state by creating a special dormant mode in the sync engine so that when there was no outstanding work to be done we could "go to sleep" and have nearly zero impact on the rest of the system.
No Dead Ends

Switching gears to the modern SkyDrive app, I want to highlight another great partnership we had this year with the team that builds the apps and controls related to photo and image processing in Windows, called the Apps for Creative Expression team. In Windows 8 they built the modern Photos app, which aggregated SkyDrive photos as well as photos from your local device. The app had a very nice interface for viewing photos, but both of our teams were frustrated that customers could hit "dead ends" in both the SkyDrive app and the Photos app when interacting with images on SkyDrive. For example, you couldn't move, rename or edit photos in SkyDrive from the Photos app, and in the SkyDrive app the viewer was not nearly as rich as the one in the Photos app. We really wanted to solve these problems together in Windows 8.1. The way we did this was to build both the Photos app and the SkyDrive app using nearly all of the same source code and binaries, with some small runtime differences in default views and theming. In addition to working directly in the core apps, the Apps for Creative Expression team designed and built the beautiful photo viewer/editor control that is used in both apps.

For a great example of the types of scenarios that are enabled by the integration of the SkyDrive sync engine into Windows and the sharing of code between the Photos and SkyDrive apps, try browsing full-screen through some photos in SkyDrive using either the modern Photos or SkyDrive app, swipe to bring up the app bar, and then choose 'Edit'. The full file will be automatically downloaded from the SkyDrive service, and the photo editor control will present you with a variety of useful and powerful touch-enabled tools to enhance your photos. If you save the changes, they will be automatically uploaded to SkyDrive via the sync engine.
No matter whether you interact with your SkyDrive photos via the Photos app or the SkyDrive app, you will always have the full set of appropriate features available to you.

Clients as Extensions of the Service

One of our philosophies as a team has been that clients are an extension of the service. We think of them as simply another endpoint that expresses the service to the customer, much like a browser. This means that all clients should have roughly the same feature set and capabilities. They will of course be tailored to the device, but the customer experience should be familiar in every expression of the service. Besides making sure the experiences are consistent and familiar on every endpoint, we also need to ensure that the operations on all of our endpoints are reliable and robust. From an engineering point of view, in order for any service to achieve the kind of reliability that our customers expect, three capabilities are required. Engineers need to be able to 1) measure reliability, 2) diagnose issues and 3) deploy fixes. This process must be repeatable in a tight iterative loop. As the cycle continues, quality rises.

When we designed our original standalone sync clients, we realized we had gaps in all three areas above that we needed to fill. So over the years we have iterated on solutions to increase our capabilities in measurement, diagnostics and deployment. We created a telemetry system that measures sync convergence across all of our clients on a daily basis. We built a pipeline to gather anonymous data that lets the team diagnose issues that are flagged in the telemetry system. And finally, we built an updating mechanism for the Windows sync client through which we could deploy fixes to all endpoints with no user interaction. When we integrated the SkyDrive sync engine into Windows 8.1, we were able to keep all of these capabilities by either bringing them into the operating system or using other pre-existing mechanisms.
Because of this, we are able to continue our work to monitor and improve the experience of our customers using the product each day.

Releasing GA to the World

I am extremely excited for the GA release of Windows 8.1. Knowing that millions of customers will be using SkyDrive in more useful ways than they have ever been able to before, with many of them using a cloud file service for the first time ever, is humbling, exciting and inspiring all at the same time. As a team we will face new challenges as we scale up to meet the demands of the millions of new customers we hope to gain. We will learn and we will improve continuously. Our goal is to delight every person who uses SkyDrive, every minute and every day they use it, so that they are customers for life.

And by the way, there are tons of other great features in Windows 8.1 besides the SkyDrive integration. One of my favorites is the new small-size tiles on the Start screen. I can get all of my most important apps on one screen now, which is always alive with color and activity. Here is my own Start screen running Windows 8.1. I hope you enjoy it as much as I do!

(This is Adam's Start screen in Windows 8.1!)

Adam Czeisler
Development Manager, SkyDrive

Updated November 7, 2014 6:53 pm
http://blogs.windows.com/windowsexperience/2013/10/15/skydrive-and-windows-8-1/
Mark Brown Tuition
Physics, Mathematics and Computer Science Tuition & Resources

Introduction to Python - Lesson Two

Posted on 27-05-19 in Turtle Exercises

This post is part of the last class's material:
- Manipulation of variables
- Introduction to conditionals

Review

In the last class we learnt how to:
- use the forward, left and right commands in Turtle
- use for _ in range(X) for looping
- use variables in place of numbers

Review Exercises

- Draw the following shapes

Hint! If you're stuck, review the material covered in the previous class!

Variables in more detail

In the last class we began using variables. These are useful for holding information we need in multiple places in our code, or which we wish to alter! For example, we can draw a spiral like this:

import turtle as tl

step = 50
angle = 90

for _ in range(16):
    tl.forward(step)
    tl.left(angle)
    step = step + 10

tl.exitonclick()

which produces:

So what's happening here? First we have created a variable and given it a value of 50, step = 50. In programming we use the equals symbol to mean assignment. Everything on the right side is run first, and the result is stored in a variable with the name on the left. Here are some examples of valid variables:

side = 100
angle_in_degrees = 45
height_of_student = 150

Note how we use the underscore (_) character to write longer variable names. In Python it is good practice to make your variable names understandable. Don't use single-letter variables unless it makes sense!

x, y = 5, 10  # this is fine as x,y mean a position
i = 0  # i, short for index, is great for counting, do this.
a = 45  # this is bad! We have no idea what a is!

We can alter variables as we do in the spiral example. Here we have the line step = step + 10. The right-hand side is run first. If step contains the number 50, we then add 10. The result, 60, is stored as the new value of step. This is why we get a spiral!
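To see exactly how step grows, here is a minimal sketch (no turtle window needed) that records the value step takes on each of the 16 turns of the spiral:

```python
step = 50
steps_taken = []

for _ in range(16):
    steps_taken.append(step)  # the distance the turtle moves this turn
    step = step + 10          # grow the side before the next turn

print(steps_taken[0], steps_taken[-1])  # -> 50 200
```

The turtle walks 50 units on the first turn and 200 on the sixteenth, which is why each arm of the spiral is longer than the last.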
We can do many mathematical operations in Python:

- side = side - 5 will subtract 5 from side
- edge = edge / 2 will divide edge by 2
- triangle_edge = triangle_edge * 2 will double triangle_edge

Let's introduce a new method called width:

pen_width = 5
tl.width(pen_width)

This will set the pen to be 5 units wide!

Exercise Set 1

Alter the above spiral code using width to match the following! Match the operations used to the spirals we see!

What if? Introducing Conditionals

To make anything more complicated, we will need to introduce conditionals. These are tests we can perform. If a test is True, we can run that particular bit of code! Let's consider a case where we want a width of 1 if the spiral side is less than or equal to 100, and 10 otherwise. In code we write this as:

import turtle as tl

step = 30
angle = 90

for _ in range(16):
    if step <= 100:
        pen_width = 1
    else:
        pen_width = 10
    tl.width(pen_width)
    tl.forward(step)
    tl.left(angle)
    step = step + 10

tl.exitonclick()

This produces:

The key bit of code here is:

if step <= 100:
    pen_width = 1
else:
    pen_width = 10

The first line is of the form if <TEST>: where <TEST> is, in our cases so far, just an inequality. In later classes we will look at many more tests. We can see that, like in the loop, we must indent our code. We have two indented blocks here. The first runs if the test is True; the second runs if the test is False.

Exercise Set 2

Question! var = 5; is var < 6?
Answer! This would be True, as 5 is less than 6.

Determine if the following tests are True or False.
- edge = 2; edge >= -1
- var = 0; var >= -2
- edge = -2; edge < 4
- edge = 7; edge >= -5
- side = 0; side <= -4
- edge = 7; edge > 7
- square_length = -1; square_length <= 4
- edge = 7; edge <= -1
- square_length = 2; square_length < 9
- var = 1; var <= -4
- var = 0; var > 4
- side = 1; side >= -1
- var = 4; var < -2
- var = 4; var >= 4
- square_length = 1; square_length < -5
- square_length = 9; square_length <= 3
- var = -2; var <= 9
- square_length = -1; square_length > 9
- var = 3; var >= 8
- edge = 1; edge <= 4

- Alter the conditional spiral code to produce the following images

Conclusion

Today we have learnt the width method, looked at more complex uses of variables and concluded with an introduction to conditionals.
https://markbrowntuition.co.uk/turtle-exercises/2019/05/27/introduction-to-python-lesson-two/
FreeRTOS 10.0.1 With NXP S32 Design Studio 2018.R1

Need help updating your FreeRTOS? Here's how to do it with the NXP S32 Design Studio 2018.R1.

NXP not only sells general-purpose microcontrollers, but also a portfolio of automotive devices, and its S32 Design Studio ships with a component for FreeRTOS. But that component in S32DS 2018.R1 is an old V8.2.1 FreeRTOS:

FreeRTOS 8.2.1 in S32DS 2018.R1

So, what do I do if I want to use the latest FreeRTOS (currently 10.0.1) with all the bells and whistles? This article describes how to upgrade it to the latest and greatest FreeRTOS V10.0.1:

FreeRTOS 10.0.1 in S32DS 2018.R1

Outline

The latest FreeRTOS V10.0.1 has many benefits: it is under a more permissive license, and it comes with all the latest features, like static memory allocation and direct task notification. Because it is not possible to directly update the FreeRTOS component in S32DS, I'm using the McuOnEclipse FreeRTOS component for S32DS. That component is always up to date with the latest FreeRTOS version, supports multiple IDEs (classic CodeWarrior, CodeWarrior for MCU 10.x, Kinetis Design Studio, MCUXpresso IDE and now S32DS) and a broad range of microcontrollers (S08, S12, DSC, ColdFire, Kinetis and now S32). Additionally, it seamlessly integrates SEGGER SystemView/RTT and Percepio FreeRTOS Tracealyzer. At the time of this article, not all McuOnEclipse components have been ported to S32DS. More components will be released in the future. This article describes how to add and use the FreeRTOS 10.0.1 McuOnEclipse components in S32DS, followed by a tutorial on how to create a FreeRTOS project for the S32K144EVB board. Example projects like the one discussed here are available on GitHub.

Creating the Project

In a first step, we create a basic project we can debug on the board.
In S32DS, use the menu File > New > S32DS Application Project:

New Application Project

Provide a name for the project and select the device to be used:

Create a S32DS Project

Press Next. Click on the browse button to select the SDK:

Choose SDK

Select the S32K144_SDK_gcc SDK:

S32K144 SDK

Press OK. Now the SDK is selected:

Press Finish to create the project. In Eclipse, I now have the project created:

Basic Project

Now would be a good time to build (menu Project > Build Project) and debug (menu Run > Debug) the project to verify everything is working so far. When starting the debug session, it asks which configuration I want to use. I select the Debug one:

Debug Configuration

Depending on the debug connection, I can set it to OpenSDA:

OpenSDA

With this, I should be able to debug the project:

Debugging the Initial Project

Congratulations! You can now terminate the debug session with the red 'stop' button and switch back to the C/C++ perspective.

Blinky LED

In a next step, we mux the RGB LED pins on the S32K144EVB board. For this, double-click on the PinSettings component to open the Component Inspector for it:

Pin Muxing Settings

The RGB LEDs are on PTD0, PTD15, and PTD16.
Route them in the Inspector as shown below:

Routed Pins

Then, generate code with the button in the Components view:

Generating Code

Next, add the following code into main():

CLOCK_SYS_Init(g_clockManConfigsArr, CLOCK_MANAGER_CONFIG_CNT, g_clockManCallbacksArr, CLOCK_MANAGER_CALLBACK_CNT);
CLOCK_SYS_UpdateConfiguration(0U, CLOCK_MANAGER_POLICY_FORCIBLE);
PINS_DRV_Init(NUM_OF_CONFIGURED_PINS, g_pin_mux_InitConfigArr);
PINS_DRV_SetPinsDirection(PTD, (1<<0U) | (1<<15U) | (1<<16U)); /* set as output */
PINS_DRV_SetPins(PTD, (1<<0U) | (1<<15U) | (1<<16U)); /* all LEDs off */
PINS_DRV_ClearPins(PTD, (1<<15U)); /* RED pin low ==> ON */
PINS_DRV_TogglePins(PTD, (1<<15U)); /* RED pin high => off */

Added Code to Main

This initializes the clock and GPIO pin drivers, then turns all LEDs off and the red one on and off again. Build and debug it on the board to verify everything is working as expected.

McuOnEclipse Component Installation

You need the 1-July-2018 release or later. Download the latest zip file from SourceForge and unzip it. Use the menu Processor Expert > Import Component(s):

Processor Expert Import Components

Select *both* *.PEupd files and press Open:

Open .PEupd Files

Specify/select the component repository into which to import the components:

Component Repository

If that repository does not exist yet, add a new one. I'm using below the McuOnEclipse folder inside the S32DS installation folder. Create that folder first if it does not exist yet.

Add Repository

Because the Processor Expert in S32DS does not include all needed Processor Expert include files, another manual step is required. These extra files are present inside the package you have downloaded from SourceForge:

Adding FreeRTOS

Next, add the FreeRTOS component from the McuOnEclipse repository to the project:

Adding FreeRTOS Component to Project

This will bring a few other components into the project.
Open the Inspector view for the McuLibConfig component:

Configure it to use the S32K SDK:

In the FreeRTOS settings, verify that the ARM core matches your board:

This completes the settings. Generate code:

Initializing Component Drivers

In other IDEs (Kinetis Design Studio, CodeWarrior, ...), Processor Expert initializes the component drivers. Because this is not implemented in the S32DS version, I have to call the Init functions separately. For this, add the following template to main.c:

static void Components_Init(void) {
  #define CPU_INIT_MCUONECLIPSE_DRIVERS /* IMPORTANT: copy the content from Cpu.c! */
  /*------------------------------------------------------------------*/
  /* copy-paste code from Cpu.c below: */
  /*------------------------------------------------------------------*/
}

You find the code to copy at the end of Generated_Code\Cpu.c:

Initialization Code in Cpu.c

Copy that code and place it inside Components_Init() in main.c:

Unfortunately, this is a manual process. Whenever you add or remove a component, make sure you update the Components_Init() function.

Events

The next manual step concerns Processor Expert events. In other IDEs, Processor Expert creates proper event modules. In S32DS, it only adds the events to Events.c, which is not complete. To solve this, first exclude the file Events.c from the build. Use the properties and turn on 'Exclude resource from build' (see "Exclude Source Files from Build in Eclipse").
Then, include the header file and source file into main.c:

#include "Events.h"
#include "Events.c"

This uses the preprocessor to place the event code into main.c:

FreeRTOS Task

Add the following code for a FreeRTOS task to main.c, which blinks the green LED:

static void AppTask(void *param) {
  (void)param; /* not used */
  for(;;) {
    PINS_DRV_TogglePins(PTD, (1<<16U)); /* blink green LED */
    vTaskDelay(pdMS_TO_TICKS(1000)); /* wait 1 second */
  } /* for */
}

Next, create the task in main() and start the scheduler:

if (xTaskCreate(AppTask, "App", 500/sizeof(StackType_t), NULL, tskIDLE_PRIORITY+1, NULL) != pdPASS) {
  for(;;){} /* error! probably out of memory */
}
vTaskStartScheduler();

Now, build and debug.

Debugging FreeRTOS Application in S32DS

And enjoy the blinking green LED:

Summary

It is great to see that Processor Expert continues to exist at least in the automotive part of NXP with the S32 Design Studio. However, that Processor Expert has been reduced to the S32K SDK, and automatic component initialization and event handling need a manual setup. Other than that, the first components work great in S32DS. And, for everyone using S32DS, the McuOnEclipse components offer the latest FreeRTOS and extra features like tickless idle mode, Segger RTT, Segger SystemView and Percepio Tracealyzer.

Happy updating!

Published at DZone with permission of Erich Styger, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/freertos-1001-with-nxp-s32-design-studio-2018r1
Threading in python is used to run multiple threads (tasks, function calls) at the same time. Note that this does not mean that they are executed on different CPUs. Python threads will NOT make your program faster if it already uses 100% CPU time. In that case, you probably want to look into parallel programming. If you are interested in parallel programming with python, please see here.

Python threads are used in cases where the execution of a task involves some waiting. One example would be interaction with a service hosted on another computer, such as a webserver. Threading allows python to execute other code while waiting; this is easily simulated with the sleep function.

Examples

A Minimal Example with Function Call

Make a thread that prints the numbers from 1 to 10, waiting 1 second between each:

import threading
import time

def loop1_10():
    for i in range(1, 11):
        time.sleep(1)
        print(i)

threading.Thread(target=loop1_10).start()

A Minimal Example with Object

#!/usr/bin/env python
from __future__ import print_function  # must come before any other code

import threading
import time

class MyThread(threading.Thread):
    def run(self):
        print("{} started!".format(self.getName()))   # "Thread-x started!"
        time.sleep(1)                                 # Pretend to work for a second
        print("{} finished!".format(self.getName()))  # "Thread-x finished!"

if __name__ == '__main__':
    for x in range(4):  # Four times...
        mythread = MyThread(name="Thread-{}".format(x + 1))  # ...Instantiate a thread and pass a unique ID to it
        mythread.start()  # ...Start the thread
        time.sleep(.9)    # ...Wait 0.9 seconds before starting another

This should output:

Thread-1 started!
Thread-2 started!
Thread-1 finished!
Thread-3 started!
Thread-2 finished!
Thread-4 started!
Thread-3 finished!
Thread-4 finished!

Note: this example appears to crash IDLE in Windows XP (it seems to work in IDLE 1.2.4 in Windows XP, though). There also seems to be a problem with this if you replace sleep(1) with sleep(2) and change range(4) to range(10):
"Thread-2 finished!" appears as the first line, before that thread has even started. In Wing IDE, NetBeans and Eclipse it is fine.
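When the main thread should wait for all worker threads to complete before continuing, call join() on each one. A minimal sketch (not part of the original wikibook text) that also guards a shared list with a lock:

```python
import threading

results = []
lock = threading.Lock()

def worker(n):
    # Compute something trivial, then record it thread-safely.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # block until this thread has finished

print(sorted(results))  # -> [0, 1, 4, 9]
```

Without the join() calls, the print could run before the workers finish; without the lock, concurrent appends to the shared list would rely on implementation details of the interpreter.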
http://en.m.wikibooks.org/wiki/Python_Programming/Threading
Patent application title: METHOD AND SYSTEM FOR PREVENTING DNS CACHE POISONING

Inventors: Antony Martin (Nozay, FR); Serge Papillon (Nozay, FR)

IPC8 Class: AG06F2100FI
USPC Class: 726 22
Class name: Information security monitoring or scanning of software or data including attack prevention
Publication date: 2012-11-22
Patent application number: 20120297478

Abstract: A method for preventing the poisoning of at least one DNS cache (5-i) within a computer network (B) including several DNS caches (5-1, 5-i, 5-n), this method comprising a step of comparing at least two DNS responses to a DNS query, returned by two different DNS caches.

Claims:

1. A method for preventing a poisoning of at least one DNS cache within a computer network including several DNS caches, the method comprising the steps of: comparing at least two DNS responses to a DNS query returned by two different DNS caches, and analyzing the DNS query to identify a service with which said DNS query is associated before querying the DNS caches.

2. The method according to claim 1, wherein a number of DNS caches queried depends upon the service with which the DNS query is associated.

3. The method according to claim 1, wherein the step of comparing the at least two DNS responses further comprises a step of reversing a resolution of the DNS query by at least one DNS cache.

4. The method according to claim 1, wherein the step of comparing the at least two DNS responses further comprises a step of calculating ratios of the at least two DNS responses.

5. The method according to claim 4, wherein the DNS response with the highest ratio among a set of DNS responses returned by the DNS caches is the DNS response to the DNS query.

6.
The method according to claim 1, wherein an inconsistency among the at least two DNS responses returned by the DNS caches triggers at least one of the following actions: notifying a source of the DNS query of a security problem; notifying a administrator of a computer network of a potential poisoning attack on at least one DNS cache; storing at least one of the at least two DNS responses in a database. 7. The method according to claim 4, wherein a Time To Live of a DNS cache returning a DNS response with a low ratio among the responses returned by the set of DNS caches is reduced to zero. 8. The method according to claim 1, wherein the at least two DNS responses comprise a DNS response returned by a DNS root server. 9. A system for preventing a poisoning of at least one DNS cache in a computer network including several DNS caches, the system comprising: an analyzer of at least two DNS responses to a DNS query returned by two different DNS caches, and a DNS query analyzer, equipped with a database of information on DNS queries, configured to identify a service associated with the DNS query. 10. The system according to claim 9, wherein the at least two DNS responses comprise a DNS response returned by a DNS root server. Description: [0001] This invention pertains to security techniques for domain name systems. [0002] Hereinafter, `domain name system` or `DNS server` (for Domain Name System) shall mean any system making it possible to establish a match between a domain name (or host name) and an IP address or, more generally, to find information using a domain name or an IP address. [0003] Additionally, `DNS query` shall mean a message requesting the resolution of a domain name or IP address. The response to a DNS query shall be called a `DNS response` here. In particular, a DNS response may comprise a domain name, an IP address, an error message, or an error code. 
It should be noted that the resolution of a DNS query concerns any application using the DNS protocol through a computer network such as, for example, Web browsing, e-mail, or a VPN connection.

[0004] Because of the large number of domain names (or, equivalently, IP addresses), a DNS server, in reality, can only represent a limited set of data. Therefore, it cannot resolve all domain names. To do so, a distributed system of DNS servers is typically distinguished, in which each DNS server, when it receives a DNS query to which it has no response,

[0005] relays this query to one or more other DNS servers in order to provide it with a response in return (recursive method); or

[0006] designates another DNS server, which will then be solicited to respond to this DNS query (iterative method).

[0007] In order to optimize the response time for future DNS queries, as well as to prevent the overload of a specific DNS server in the distributed system, most DNS servers also act as DNS caches. In other words, a DNS server holds the response obtained for a DNS query in memory, for a TTL (Time To Live) predefined by the DNS server administrator, so as not to carry out this process again later.

[0008] However, this DNS cache is vulnerable to an attack commonly known as DNS cache poisoning (DNS 2008 and the new (old) nature of critical infrastructure, Dan Kaminsky, Mexico, 2009). This attack aims to create a match between a valid (real) domain name of a public machine and false information (an invalid IP address or false DNS response, for example) that will be stored in the DNS cache.

[0009] Once a false DNS response to a DNS query concerning a certain domain is stored in the DNS cache, it will then automatically be the response, for TTL, to later DNS queries concerning the same domain. Therefore, all users of this DNS cache are vulnerable.
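For illustration only (this sketch is not part of the patent text), the recursive scheme of [0005] and the cache-population behavior of [0007] can be modeled with plain dictionaries; all names and addresses below are made up:

```python
# Toy model of recursive resolution: each cache is a dict; on a miss the
# resolver falls through to the next cache and finally asks the "root",
# storing the answer in every cache on the way back.
root = {"example.org": "93.184.216.34"}  # stand-in for a DNS root server

def resolve(name, caches):
    for cache in caches:
        if name in cache:              # cache hit: answer immediately
            return cache[name]
    answer = root.get(name)            # last resort: ask the root
    if answer is not None:
        for cache in caches:           # populate caches for later queries
            cache[name] = answer
    return answer

caches = [{}, {}]
print(resolve("example.org", caches))  # miss everywhere -> answered by root
print("example.org" in caches[0])      # -> True: now cached locally
```

A real implementation would also track the TTL per entry and expire stale answers, which the model above deliberately omits.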
[0010] In particular, DNS cache poisoning makes it possible to redirect a user to a site whose content may have malicious intent (virus propagation, phishing to collect personal data, or propaganda by redirecting a site to another competing site or to a nonexistent site, for example).

[0011] One object of the present invention is to remedy the aforementioned drawbacks.

[0012] Another object of this invention is to prevent the poisoning of a DNS cache belonging to a computer network having many DNS caches.

[0013] Another object of this invention is to provide a distributed system of DNS caches with a method for preventing a DNS cache poisoning attack with a minimum amount of modification to the system.

[0014] Another object of this invention is to propose a method and system for preventing poisoning attacks on DNS caches compatible with the DNS protocol used by DNS caches.

[0015] Another object of this invention is to propose an autonomous system for preventing DNS cache poisoning attacks.

[0016] Another object of this invention is to improve the consistency of DNS resolution in an Internet Service Provider network.

[0017] Another object of this invention is to propose a method for preventing DNS cache poisoning attacks compatible with most Internet Service Provider (ISP) networks.

[0018] Another object of this invention is to propose a counter-measure against DNS cache poisoning attacks within a computer network.

[0019] Another object of this invention is to improve the computer security provided to users connected to an Internet Service Provider's network.

[0020] To that end, the invention proposes, according to a first aspect, a method for preventing the poisoning of at least one DNS cache within a computer network including several DNS caches, this method comprising a step of comparing at least two DNS responses to a DNS query, returned by two different DNS caches.
[0021] According to a second aspect, the invention relates to a system for preventing the poisoning of at least one DNS cache in a computer network including several DNS caches, this system comprising an analyzer of at least two DNS responses to a DNS query, returned by two different DNS caches.

[0022] Advantageously, this system also comprises a DNS query analyzer equipped with a database of information on DNS queries making it possible to identify the service with which a DNS query is associated.

[0023] Other characteristics and advantages of the invention will be clearer and more concretely understood after reading the following description of preferred embodiments, where reference is made to FIG. 1, which graphically illustrates the interactions between the modules of one embodiment.

[0024] An ISP network B typically comprises several DNS caches 5_1, 5_2, ..., 5_n (n>1) tasked with responding to DNS queries issued from at least one DNS resolver 1 belonging to a client A connected to the network B. A DNS resolver 1 is typically a client program that formulates DNS queries to be sent to the network B and interprets the DNS responses that are returned to it.

[0025] In the event of an inability to respond to a DNS query based on the information available in the DNS caches 5_1, 5_2, ..., 5_n, the DNS response is solicited from a DNS root server 9 belonging to a name server operator C.

[0026] A DNS response is typically communicated to a DNS responder 10 on the network B tasked with relaying this response to the DNS resolver 1, from which the DNS query originated.

[0027] A DNS cache management system 3 enables the simultaneous or individual control of the DNS caches 5_1, 5_2, ..., 5_n. For example, the management system 3 makes it possible to modify the TTL for each DNS cache or to enable/disable a DNS cache.

[0028] A poisoning attack on the DNS caches 5_1, 5_2, ..., 5_n is prevented by using functional modules that can be adapted to any computer network B comprising several DNS caches, in particular one belonging to an Internet Service Provider (ISP).

[0029] In particular, these modules comprise:

[0030] a DNS query analyzer 2 that decides how to process a DNS query sent from a DNS resolver 1;

[0031] a DNS query deconcentrator 6 serving several DNS caches;

[0032] a comparator 7 of the DNS responses obtained from several DNS caches;

[0033] an analyzer 8 of the DNS responses obtained from several DNS caches; and

[0034] several extensible information databases assisting the poisoning attack prevention system for the DNS caches 5_1, ..., 5_n.

[0035] When the DNS query analyzer 2 receives a DNS query issued from a DNS resolver 1 (link 12 on FIG. 1), the DNS query analyzer 2 decides which processing to carry out to resolve this DNS query. The decision is made based on information retrieved from:

[0036] a database 4 with information on DNS queries such as the service (e.g. browsing, e-mail, streaming, e-commerce, and e-learning) and/or the protocol (e.g. HTTP, HTTPS, POP3, FTP, or SMTP) with which the DNS queries are associated;

[0037] a database 11 of invalid DNS responses; and

[0038] the management system 3 for the DNS caches 5_1, 5_2, ..., 5_n that can be configured by the administrator of the network B.

[0039] The database 4 of information on DNS queries uses the content of a DNS query (in particular, the domain name--for example, ebay.com or google.com--and the transport protocol--for example, HTTP, HTTPS, or SMTP) to identify the service with which this DNS query is associated.
For example, if a DNS query comprises

[0040] the domain name `ebay.com`, the information database 4 identifies this domain name and associates it with an e-commerce service;

[0041] the domain name `home.americanexpress.com`, the information database identifies this domain name and associates it with an e-banking service;

[0042] the SMTP protocol, then the information database associates this query with an e-mail application.

[0043] The content of the information database 4 may be previously established manually and/or automatically enriched (automatic learning) with information contained in the DNS queries received. The information database 4 thus makes it possible to distinguish DNS queries assumed to be critical by the administrator of the network B (e.g. an e-commerce/e-banking service or an e-mail system).

[0044] In one embodiment, the DNS query analyzer 2 labels each DNS query by level of importance (e.g. `critical`, `important`, `average`, or a number between 1 and 10) based on the service identified by the information database 4.

[0045] It should also be noted that the choice of processing to be carried out to resolve a DNS query may be programmed from the DNS cache management system 3 based, for example, on

[0046] the time: peak hours or not;

[0047] availability of the DNS caches: maintenance, overrun;

[0048] the source of the DNS queries: clients with different types of subscriptions;

[0049] the service with which a DNS query is associated, e.g. e-commerce, e-banking, e-mail, or VPN.

[0050] In one embodiment, three possible processing modes can be distinguished to resolve a DNS query:

[0051] the DNS query is sent to a single DNS cache (for example, the DNS cache 5_1 as shown on FIG. 1: link 25);

[0052] the DNS query is sent to a DNS query deconcentrator 6 (link 26 on FIG. 1); or

[0053] the DNS query is sent directly to the DNS root server 9 (link 29 on FIG. 1).
[0054] It should be noted that a DNS response may be obtained from the network B

[0055] recursively: upon receiving a DNS query, a DNS server queries its local DNS cache 5_j (1 <= j <= n) (for example, DNS cache 5_1 as shown on FIG. 1) concerning this query. If it has a response to this query locally, then this response is sent to the DNS response module 10 (link 51 on FIG. 1). Otherwise, the DNS server takes the role of resolver and transmits the DNS query to another DNS server more likely to have the requested information (in other words, a DNS server for which the probability that it has the requested information is sufficiently high). If no DNS server has the response, the query is finally sent to a DNS root server 9 (link 59 on FIG. 1), from which a copy of the DNS response (link 95 on FIG. 1) will be stored for a TTL in the DNS cache; or

[0056] iteratively: if a DNS cache does not have a local response to a DNS query, it asks the DNS resolver 1 to send the query directly to another DNS server more likely to have the requested information. If no DNS server has the response, the query is finally sent to a DNS root server 9 (link 29 on FIG. 1). The DNS response returned by the DNS root server 9 is communicated to the DNS responder 10 (link 91 on FIG. 1).

[0057] In another embodiment, the DNS response is obtained in a consolidated manner from several DNS caches as follows:

[0058] as soon as a DNS query arrives to the DNS query deconcentrator 6, it is sent to a list of DNS caches (link 65 on FIG. 1) according to distribution criteria stored in a database 61.

[0059] The DNS responses obtained by the list of DNS caches are all sent to the DNS response comparator 7 (link 57 on FIG. 1).

[0060] Based on the DNS responses obtained, the comparator 7, assisted by an information database 70,

[0061] either sends a DNS response to the DNS responder 10 (link 71 on FIG. 1)

[0062] or sends the results obtained to a DNS response analyzer 8.
[0063] The DNS response analyzer 8 studies the DNS responses and then sends one single DNS response to the DNS responder 10 (link 81 on FIG. 1).

[0064] It should be noted that the distribution, by the deconcentrator 6, of a DNS query to a list of DNS caches is carried out based on information retrieved from the database 61. The database 61 comprises information on the DNS caches 5_1, 5_2, ..., 5_n on the network B, such as the number, topology, geographic location, IP address, size of the contents, and number of users connected to the DNS caches 5_1, ..., 5_n.

[0065] Advantageously, based on the data available in the database 61, the deconcentrator 6 can relay the DNS query only to the DNS caches deemed to be relevant. Indeed, in one embodiment, the list of DNS caches to which a DNS query will be relayed by the deconcentrator 6 is selected based on:

[0066] information retrieved from the database 61, such as the location of the DNS servers. For example, by assuming that there is less risk of poisoning, by the same invalid data, of two spatially distant DNS caches, then: the further the DNS cache servers are separated, the greater the likelihood that a DNS response returned by the local DNS cache, identical (determined by the comparator 7) to the DNS response returned by the remote DNS cache, is valid (correct). In particular, this depends upon the topology of the computer network B; and/or

[0067] information provided by the DNS query analyzer 2: for example, if the DNS analyzer 2 marks a DNS query as `critical`, then, preferably, a larger number of DNS caches will be queried. In other words, the number of DNS caches to be queried is preferably dependent upon the service with which the DNS query is associated. This also makes it possible to optimize the performance of the DNS response verification process.
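As a toy illustration (not from the patent) of [0067], a fan-out policy keyed on the analyzer's importance label might look like the following; the label-to-count mapping is an assumption:

```python
# Hypothetical policy: map the label assigned by the query analyzer 2 to a
# number of caches to consult; labels follow the examples given in [0044].
FANOUT = {"critical": 5, "important": 3, "average": 1}

def caches_to_query(label, caches):
    k = FANOUT.get(label, 1)   # unknown labels fall back to a single cache
    return caches[:k]          # here: simply take the first k caches

caches = ["cache-%d" % i for i in range(5)]
print(len(caches_to_query("critical", caches)))  # -> 5
print(len(caches_to_query("average", caches)))   # -> 1
```

A real deconcentrator would also weigh the topology information of [0066] (preferring spatially distant caches) rather than taking the first k.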
[0068] Then the DNS response comparator 7 makes it possible to centralize and compare all the DNS responses obtained from the list of DNS caches queried (which is to say, designated by the DNS query deconcentrator 6).

[0069] If all the DNS responses are identical, then this DNS response will be sent directly to the DNS responder 10 (link 71 on FIG. 1), which will then send it to the DNS resolver 1, or directly sent to the DNS resolver 1 (without going through the DNS responder 10).

[0070] It should be noted that some domains have more than one IP address (or the inverse, which is to say one IP address that matches more than one domain name). In this case, having access to the IP prefixes stored in a database 70 (already allocated to companies, e.g. ebay®, Microsoft®, HSBC®, or YouTube®), the comparator 7 is capable of comparing the IP addresses of sub-networks to distinguish identical domain names. If a DNS response comprises an IP address that is not identified in the database 70, it may then be a potentially invalid DNS response.

[0071] It is also possible to use reverse DNS resolution to compare two DNS responses returned by two different DNS caches: requiring, through a DNS cache (5_1, for example), the reverse resolution of a domain name associated with an IP address returned by another DNS cache (5_2, for example). A difference between the two DNS responses proves the poisoning of at least one of the two DNS caches.

[0072] If the DNS responses are different, then they are sent to the DNS response analyzer 8. The DNS response analyzer 8

[0073] calculates the ratios of the DNS responses;

[0074] classifies the DNS responses by their ratios; and

[0075] acts accordingly:

[0076] retaining a DNS response that will be sent to the DNS responder 10 (link 81 on FIG. 1) or directly to the resolver 1 and confirmed by at least the DNS caches queried (link 85 on FIG. 1); or

[0077] in the event that a problem is detected, triggering an action such as: notifying the resolver 1 of a DNS cache poisoning attack, sending an error to the resolver 1, sending nothing to the resolver 1, or triggering an internal alert sent to the administrator of the network B indicating that there is a potential risk of DNS cache poisoning.

[0078] Advantageously, the DNS response analyzer 8 may be configured/set up by an administrator of the network B (threshold ratios, or actions to be triggered if a DNS cache poisoning problem is detected, for example).

[0079] As an example for illustrative purposes, if there are five DNS responses of which four are identical, the DNS response analyzer 8 deduces the presence of a dominant response that will be transmitted to the DNS responder 10 or directly to the resolver 1.

[0080] If there is no consistency among the DNS responses returned by the DNS caches queried--such as if, among five DNS responses, only three DNS responses are identical and the two others are different--then the DNS response analyzer 8 cannot conclude that there is a valid DNS response. In this case, the following actions may be undertaken:

[0081] notify the resolver 1 of a potential security problem. This information may be incorporated into the `TXT` field of a DNS response that comprises descriptive information about the domain;

[0082] store the invalid DNS responses (those with low ratios, for example) in the database of invalid DNS responses 11;

[0083] notify the administrator of the network B of a potential DNS cache poisoning attack.

[0084] In one embodiment, the DNS response having the highest ratio among the set of DNS responses returned by the DNS caches is considered the DNS response to the DNS query.

[0085] If an invalid DNS response is confirmed, it is added to the database of invalid DNS responses 11 (link 82 on FIG. 1). This will make it possible to warn the DNS caches when resolving later DNS queries.
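The ratio classification of [0073]-[0080] amounts to a majority vote over the returned responses. A sketch (outside the patent text; the 0.7 threshold and the addresses are illustrative assumptions):

```python
from collections import Counter

# Count identical responses and keep the dominant one only if it clears a
# configurable threshold ratio, as the analyzer 8 is described as doing.
def analyze(responses, threshold=0.7):
    counts = Counter(responses)
    answer, n = counts.most_common(1)[0]
    if n / len(responses) >= threshold:
        return answer        # dominant response: forward it
    return None              # inconsistent: trigger an alert instead

# Four of five identical -> dominant response, as in [0079].
print(analyze(["1.2.3.4"] * 4 + ["6.6.6.6"]))             # -> 1.2.3.4
# Only three of five identical -> no conclusion, as in [0080].
print(analyze(["1.2.3.4"] * 3 + ["6.6.6.6", "7.7.7.7"]))  # -> None
```

Returning None here stands in for the side-effecting actions of [0081]-[0083] (notify the resolver, log the low-ratio responses, alert the administrator).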
[0086] A communication protocol may be defined in accordance with RFC 5507 from the IETF for sending an error notice from the DNS response analyzer 8 to the DNS responder 10 (or, equivalently, to the DNS resolver 1).

[0087] In the event that a poisoning problem is deduced on one or more DNS caches based on the ratios of the DNS responses, a command to reduce the TTL of the DNS caches in question to 0 can be launched/programmed (for example, reduce the TTL of the DNS caches having returned a DNS response with a low ratio among the set of DNS responses). This command can be immediate: it consists of setting the TTL to 0 immediately. Alternatively, this command may be arithmetic: it may consist of ordering a continuous reduction of the TTL by a predetermined decrement (for example, 1 or 2 seconds). Alternatively, the command may be geometric, and may consist, for example, of ordering the TTL of the DNS caches in question to be divided in half. This command is intended to force the DNS caches to renew their caches. For example, an entry in a DNS cache with a TTL of 3600 seconds can be set to 0 seconds, thus becoming invalid.

[0088] Alternative measures, in the event of deducing a poisoning problem with one or more DNS caches based on the ratios of the DNS responses, are, for example, the expiration of a DNS zone, or the configuration of a persistent DNS entry in the DNS caches affected by the problem. This makes it possible to guarantee that the incriminated DNS caches return a valid value if they are queried later. This measure is temporary and must be deleted later to allow the dynamic constitution of DNS cache databases.

[0089] In one embodiment, the DNS response comparator 7 and the DNS response analyzer 8 are combined into a single functional module.

[0090] Advantageously, the DNS response thus obtained is consolidated through several DNS caches.
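The three TTL-reduction commands of [0087] can be written as small pure functions on a TTL value in seconds; the decrement step and the halving base are illustrative assumptions, not values from the patent:

```python
# Immediate: set the TTL to 0 right away, invalidating the entry.
def ttl_immediate(ttl):
    return 0

# Arithmetic: reduce the TTL by a fixed decrement per step (e.g. 2 s).
def ttl_arithmetic(ttl, step=2):
    return max(0, ttl - step)

# Geometric: divide the TTL in half per step.
def ttl_geometric(ttl):
    return ttl // 2

ttl = 3600                   # the example entry from the text
print(ttl_immediate(ttl))    # -> 0
print(ttl_arithmetic(ttl))   # -> 3598
print(ttl_geometric(ttl))    # -> 1800
```

All three converge on 0 and therefore force the cache to renew the entry; they differ only in how abruptly cached answers are discarded.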
[0091] Advantageously, the method described here makes it possible to prevent a DNS cache poisoning attack through the intelligent use of the DNS cache servers already existing within an ISP network.

[0092] In another embodiment, the DNS query deconcentrator 6 relays a DNS query

[0093] to at least one DNS cache (DNS cache 5_1, for example); and

[0094] to at least one DNS root server 9, to then compare the DNS responses that they return. This makes it possible to have one additional entry for the DNS response comparator 7.

[0095] Another embodiment making it possible to prevent DNS cache poisoning changes the way in which the validity of the DNS cache contents is verified. In other words, instead of exchanging information using the DNS protocol, another DNS cache content verification protocol is developed.

[0096] Advantageously, the embodiments described here use the distributed DNS cache system already in use in most ISP networks.

[0097] It should be noted that the embodiments described here are independent of the operating system used by the client A connected to the network B.

[0098] In another embodiment, the DNS module 10 is optional, and the DNS responses are therefore transmitted directly to the DNS resolver 1.

[0099] In another embodiment, the residential gateways of the ISPs, installed at their customers' homes, are the DNS caches. These residential gateways connected to the operator's network can then combine modules 2, 6, 5_i, and optionally 7, 8, 10, and optionally the databases associated with these modules.
http://www.faqs.org/patents/app/20120297478
pymworks 1.2

Requirements
------------

- Numpy (for np.inf and np.nan)
- (optional) pytables (for conversion to hdf5)

==== Intro ====

MWorks files are made up of events. Each event has

time
    Unsigned integer. Time of event (in microseconds). May be relative to
    system time or server start time.
code
    Unsigned integer. Number assigned to this type of event. Some are
    standard (0 = codec, etc...) others are experiment dependent.
value
    Flexible type. The 'payload' of the event. May be a dict, list, int,
    None, etc...

One special event (name = 'codec', code = 0) is useful for understanding
other events. The codec contains (as a value) a dictionary of codes (as keys)
and names (as values).

Opening files
-------------

An MWorks file can be opened in pymworks using pymworks.open_file. ::

    import pymworks

    fn = 'foo.mwk'
    df = pymworks.open_file(fn)

By default, open_file will index the file (speeding up event fetching). This
index is written to disk as a hidden file ('.' pre-pended). For the above
example (opening foo.mwk) an index file '.foo.mwk' would be created if it did
not already exist.

If you do not want to index the file, set the indexed kwarg to False for
open_file: ::

    df = pymworks.open_file(fn, indexed=False)

The codec for this datafile is accessible as df.codec and, for convenience, a
reversed version (keys=names, values=codes) is available as df.rcodec ::

    df.codec   # dict, keys = codes, values = event names
    df.rcodec  # dict, keys = event names, values = codes

Reading events
--------------

Events can be accessed several ways, the easiest being df.get_events. ::

    evs = df.get_events()    # get all events
    cevs = df.get_events(0)  # get all events with code 0

    # get all events with name 'success'
    sevs = df.get_events('success')

    # get_events also accepts a list of names (or codes)
    toevs = df.get_events(['success', 'failure', 'ignore'])

    # or a timerange (in microseconds)
    eevs = df.get_events(time_range=[0, 60 * 1E6])  # events during first minute

Events (type pymworks.datafile.Event) each contain a time, code and value ::

    e = df.get_events('success')[0]  # get first success event
    e.time   # time of event (in microseconds)
    e.code   # event code
    e.value  # value for event

==== Notes ====

LDOBinary.py and ScarabMarshal.py are originally from the mworks/mw_data_tools
repo. LDOBinary.py was fixed to actually work and not just throw errors.

- Package Index Owner: braingram
- DOAP record: pymworks-1.2.xml
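As a self-contained sketch of working with the event structure described above (no .mwk file or pymworks install needed), the three fields — time, code, value — and the codec can be mimicked with a namedtuple; the codes and names below are invented:

```python
from collections import namedtuple

# Stand-in for pymworks events: the same three fields described above.
Event = namedtuple("Event", ["time", "code", "value"])

# A tiny fake event stream (times in microseconds, as in MWorks files).
events = [
    Event(time=1000000, code=5, value=1),   # 'success'
    Event(time=2500000, code=6, value=1),   # 'failure'
    Event(time=3000000, code=5, value=1),   # 'success'
]

# A fake codec: codes (keys) to names (values), mirroring the real codec event.
codec = {5: "success", 6: "failure"}

# Count events per name, much as repeated df.get_events(name) calls would.
counts = {}
for e in events:
    name = codec[e.code]
    counts[name] = counts.get(name, 0) + 1

print(counts)   # -> {'success': 2, 'failure': 1}
```

The same loop structure applies to real pymworks events, since they expose identical time/code/value attributes.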
https://pypi.python.org/pypi/pymworks/1.2
When I was browsing the Python HMAC module source code today, I found out that it contains a global variable _secret_backdoor_key:

# A unique object passed by HMAC.copy() to the HMAC constructor, in order
# that the latter return very quickly.  HMAC("") in contrast is quite
# expensive.
_secret_backdoor_key = []

class HMAC:
    """RFC 2104 HMAC class.  Also complies with RFC 4231.

    This supports the API for Cryptographic Hash Functions (PEP 247).
    """
    blocksize = 64  # 512-bit HMAC; can be changed in subclasses.

    def __init__(self, key, msg = None, digestmod = None):
        """Create a new HMAC object.

        key:       key for the keyed hash object.
        msg:       Initial input for the hash, if provided.
        digestmod: A module supporting PEP 247.  *OR*
                   A hashlib constructor returning a new hash object.
                   Defaults to hashlib.md5.
        """
        if key is _secret_backdoor_key: # cheap
            return

To create a copy of the HMAC instance, you need to create an empty instance first. The _secret_backdoor_key object is used as a sentinel to exit __init__ early and not run through the rest of the __init__ functionality. The copy method then sets the instance attributes directly:

    def copy(self):
        """Return a separate copy of this hashing object.

        An update to this copy won't affect the original object.
        """
        other = self.__class__(_secret_backdoor_key)
        other.digest_cons = self.digest_cons
        other.digest_size = self.digest_size
        other.inner = self.inner.copy()
        other.outer = self.outer.copy()
        return other

You could get the same effect with self.__class__('') (an empty string), but then HMAC.__init__ does a lot of unnecessary work, as the attributes on the instance created are going to be replaced anyway. Note that using HMAC('') is a valid way to create an instance; you'd not want an instance devoid of any state in that case. By passing in the sentinel, HMAC.copy() can avoid all that extra work.

You could use a different 'flag' value, like False, but it is way too easy to pass that in because of a bug in your own code.
You'd want to be notified of such bugs instead. By using a 'secret' internal sentinel object, you avoid such accidental cases.

Using [] as a unique sentinel object is quite an old practice. These days you'd use object() instead. The idea is that the sentinel is a unique, single object that you test against for identity with is. You can't re-create that object elsewhere; the is test only works if you pass in a reference to the exact same single object.
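The same sentinel pattern, with object() as suggested above, can be shown in a toy class (the Hasher class and its state attribute are made up for illustration; only the sentinel/copy mechanics mirror the HMAC source):

```python
_sentinel = object()   # unique object; cannot be re-created elsewhere

class Hasher:
    def __init__(self, key):
        if key is _sentinel:    # cheap path used only by copy()
            return              # attributes are filled in by the caller
        # ... expensive initialization would happen here ...
        self.state = list(key)

    def copy(self):
        other = self.__class__(_sentinel)  # skips the expensive __init__
        other.state = list(self.state)     # set attributes directly
        return other

h = Hasher("abc")
c = h.copy()
c.state.append("d")     # mutating the copy...
print(h.state)          # -> ['a', 'b', 'c']  (...leaves the original alone)
```

Because _sentinel is module-private and compared with `is`, no caller can trigger the cheap path by accident the way they could with False or ''.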
https://codedump.io/share/D4UsRwJIs90Z/1/what-is-the-reason-for-secretbackdoorkey-variable-in-python-hmac-library-source-code
Scatter plot with histograms

Show the marginal distributions of a scatter as histograms at the sides of the plot.

For a nice alignment of the main axes with the marginals, two options are shown below.

- the axes positions are defined in terms of rectangles in figure coordinates
- the axes positions are defined via a gridspec

An alternative method to produce a similar figure using the axes_grid1 toolkit is shown in the Scatter Histogram (Locatable Axes) example.

Let us first define a function that takes x and y data as input, as well as three axes, the main axes for the scatter, and two marginal axes. It will then create the scatter and histograms inside the provided axes.

import numpy as np
import matplotlib.pyplot as plt

# Fixing random state for reproducibility
np.random.seed(19680801)

# some random data
x = np.random.randn(1000)
y = np.random.randn(1000)


def scatter_hist(x, y, ax, ax_histx, ax_histy):
    # no labels
    ax_histx.tick_params(axis="x", labelbottom=False)
    ax_histy.tick_params(axis="y", labelleft=False)

    # the scatter plot:
    ax.scatter(x, y)

    # now determine nice limits by hand:
    binwidth = 0.25
    xymax = max(np.max(np.abs(x)), np.max(np.abs(y)))
    lim = (int(xymax/binwidth) + 1) * binwidth

    bins = np.arange(-lim, lim + binwidth, binwidth)
    ax_histx.hist(x, bins=bins)
    ax_histy.hist(y, bins=bins, orientation='horizontal')

Axes in figure coordinates

To define the axes positions, Figure.add_axes is provided with a rectangle [left, bottom, width, height] in figure coordinates. The marginal axes share one dimension with the main axes.
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
spacing = 0.005

rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom + height + spacing, width, 0.2]
rect_histy = [left + width + spacing, bottom, 0.2, height]

# start with a square Figure
fig = plt.figure(figsize=(8, 8))

ax = fig.add_axes(rect_scatter)
ax_histx = fig.add_axes(rect_histx, sharex=ax)
ax_histy = fig.add_axes(rect_histy, sharey=ax)

# use the previously defined function
scatter_hist(x, y, ax, ax_histx, ax_histy)

plt.show()

Using a gridspec

We may equally define a gridspec with unequal width- and height-ratios to achieve the desired layout. Also see the Customizing Figure Layouts Using GridSpec and Other Functions tutorial.

# start with a square Figure
fig = plt.figure(figsize=(8, 8))

# Add a gridspec with two rows and two columns and a ratio of 2 to 7 between
# the size of the marginal axes and the main axes in both directions.
# Also adjust the subplot parameters for a square plot.
gs = fig.add_gridspec(2, 2, width_ratios=(7, 2), height_ratios=(2, 7),
                      left=0.1, right=0.9, bottom=0.1, top=0.9,
                      wspace=0.05, hspace=0.05)

ax = fig.add_subplot(gs[1, 0])
ax_histx = fig.add_subplot(gs[0, 0], sharex=ax)
ax_histy = fig.add_subplot(gs[1, 1], sharey=ax)

# use the previously defined function
scatter_hist(x, y, ax, ax_histx, ax_histy)

plt.show()

References

The use of the following functions, methods, classes and modules is shown in this example:

Total running time of the script: ( 0 minutes 1.177 seconds)
https://matplotlib.org/3.4.3/gallery/lines_bars_and_markers/scatter_hist.html
I'm having problems with getting 2 of my functions to loop to each other. So I made 2 functions, and at the end I want them each to go back to the other one, but I keep getting this error:

error: use of undeclared identifier 'b'

I understand what the problem is, but I haven't found a solution to it. Can somebody please help me figure it out?

When C++ builds, you are trying to reference a function that doesn't exist yet. Because C++ is compiled instead of interpreted, it scans through the function before running it. Because of this, you cannot make a recursive function loop in a compiled language. Try doing this in Python.

After realizing that my explanation was difficult to understand:

Ok. Let's try this again. Imagine you were the C++ compiler: you would read the .cpp file line-by-line. Look at this code:

#include <iostream>       // Use iostream

void a() {                // Starts defining a function
    std::cout << "test";  // Print "test"
    b();                  // Calls unknown function! AHH!
}                         // Raise error and stop

If this were Python code, it would be different:

def a():           # Starts defining function
    print("test")  # I don't care, not being run
    b()            # I don't care, not being run

def b():           # Starts defining function
    print("test")  # IDC, not being run
    a()            # IDC, not being run

a()  # Starts function, sees that both are defined, and runs

C++ and Python handle defining functions differently. If this were JS, it would create all the variables (functions are variables, technically) first, and then run the code. I hope that explanation was a little more read-able :P

But of course, since C++ is compiled, all functions are in fact already declared. You just need to do a forward declaration in order to say "hey, this exists!"

int add(int, int);  // hey, I exist!

void addloop(int x, int y) {
    add(x, y);
}

int add(int x, int y) {  // this is how I work!
    int z = x + y;
    addloop(z, y);
}

I guess that's a good point, but still they both parse functions differently. @xxpertHacker
Normally, if an undefined/undeclared indentifier is parsed, the C++ parser stops immediately with a fatal error. (identifiers are variable names, parameter names, function names, or type names) Forward declarations declare it early, allowing code to be parsed correctly, and allowing early type checking. Example: The early defined type is known as a "function prototype," whereas the actual function with it's body is known as the "function declaration." Hopefully it helps. @xxpertHacker yes this did help me thank you so much!
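The Python behaviour described in the thread — names are resolved when the call actually happens, not when the function is defined — can be shown with a small runnable sketch (the function names here are mine, for illustration only):

```python
# Python looks names up at call time, so `countdown_even` can reference
# `countdown_odd` even though the latter is defined further down the file.
def countdown_even(n, trace):
    trace.append(("even", n))
    if n > 0:
        countdown_odd(n - 1, trace)  # not defined yet at this point -- fine

def countdown_odd(n, trace):
    trace.append(("odd", n))
    if n > 0:
        countdown_even(n - 1, trace)

steps = []
countdown_even(3, steps)
print(steps)  # [('even', 3), ('odd', 2), ('even', 1), ('odd', 0)]
```

This is exactly what a C++ forward declaration buys you explicitly: a promise that the name will exist by the time it is used.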
https://replit.com/talk/ask/Im-having-problems-with-getting-2-of-my-functions-too-loop-to-each-other/131059
CC-MAIN-2021-17
refinedweb
439
71.44
Programming LambdaTest

by John Nestor • March 20, 2017 • scala • lambdatest • open source • functional programming • sbt • testing

This post examines an important feature of LambdaTest: its programmability. Traditionally, unit tests are long, boring sequences of simple tests. We believe that testing is a task for programmers (and one of the most important programming activities). The goal should be to make tests as comprehensive as possible and do it in a highly leveraged way (where a small amount of test code has maximum value). In this post, we look at two of LambdaTest's programming mechanisms: ScalaCheck and test generation.

LambdaTest was introduced in a previous blog post, Introducing LambdaTest. LambdaTest is a new small clean library for testing Scala code developed by 47 Degrees. Tests can be run either via SBT or directly. All code is open source with an Apache 2 license. View LambdaTest on GitHub or visit the LambdaTest microsite for more information.

ScalaCheck

ScalaCheck is an awesome library for doing automated property-based testing of Scala code. ScalaCheck can be used directly from LambdaTest (assertSC is just another kind of assertion). Here are a few simple examples of using ScalaCheck inside LambdaTest.

import com.fortysevendeg.lambdatest._
import org.scalacheck.Prop._

class ScalaCheckTest extends LambdaTest {

  def brokenReverse[X](xs: List[X]): List[X] =
    if (xs.length > 4) xs else xs.reverse

  def act = {
    test("String Length") {
      assertSC() {
        forAll { s: String => s.length >= 0 }
      }
    } +
    test("Abs") {
      assertSC() {
        forAll { x: Int => Math.abs(x) >= 0 }
      }
    } +
    test("List") {
      assertSC() {
        forAll { (xs: List[Int]) =>
          xs.length > 0 ==> (xs.last == brokenReverse(xs).head)
        }
      }
    }
  }
}

And here is the output from those tests.

***** running ScalaCheck Test
Test: String Length
Ok: Passed 100 tests (ScalaCheckTest.scala Line 10)
Test: Abs
Fail: Falsified after 8 passed tests.
> ARG_0: -2147483648 (ScalaCheckTest.scala Line 18)
Test: List
Fail: Falsified after 4 passed tests.
> ARG_0: List("0", "0", "0", "0", "1")
> ARG_0_ORIGINAL: List("-1627881204", "-2147483648", "0", "575454471", "1984585744") (ScalaCheckTest.scala Line 25)
***** ScalaCheck Test: 3 tests 2 failed 0.725 seconds

These examples only scratch the surface of the programmability of ScalaCheck. You can see these and other more powerful examples in the following excellent blog post: Practical ScalaCheck

LambdaTest API and Test Generation

LambdaTest has a very simple API that makes test generation very easy. Tests are generated by writing Scala code whose output is a LambdaTest action.

Tests are composed of actions (with type LambdaAct). An action is a function that transforms one testing state to a new testing state. Testing states (with type LambdaState) are immutable. Actions include assertions and grouping actions such as test and label. Actions are composed in two ways. First, two actions can be combined into a single action using the infix + operator. Second, compound actions are actions that can contain other actions. The result is that a test class has an act that is a tree of actions.

Unlike other test systems whose APIs are often a complex tangle of traits and classes, the LambdaTest API is based on the single class LambdaAct. The following two sections contain examples of test generation.

Test List

Here, tests are generated based on the elements of a list:

import com.fortysevendeg.lambdatest._

class TestList extends LambdaTest {

  def checkList(name: String, s: List[Int]): LambdaAct = {
    changeOptions(_.copy(onlyIfFail = true)) {
      s.zipWithIndex.map {
        case (i, j) => {
          test(s"Test list element $name($j)") {
            assertEq(i, j)
          }
        }
      }
    }
  }

  val s1 = List(0, 5, 6, 3)
  val s2 = List(0, 1, 2, 3, 3, 5, 5)

  val act = {
    checkList("s1", s1) +
    checkList("s2", s2)
  }
}

The changeOptions action turns off output for any tests that succeed. The body of changeOptions has type List[LambdaAct].
There is an implicit conversion that converts that to LambdaAct. Here is the output:

***** running Test List
Test: Test list element s1(1)
Fail: [5 != 1] (TestList.scala Line 10)
Test: Test list element s1(2)
Fail: [6 != 2] (TestList.scala Line 10)
Test: Test list element s2(4)
Fail: [3 != 4] (TestList.scala Line 10)
Test: Test list element s2(6)
Fail: [5 != 6] (TestList.scala Line 10)
***** Test List: 11 tests 4 failed 0.018 seconds

Test Tree

Here assertions are generated by recursively walking a tree:

import com.fortysevendeg.lambdatest._

class TestTree extends LambdaTest {

  trait Tree
  case class Inner(left: Tree, right: Tree) extends Tree
  case class Leaf(v: Int) extends Tree

  def checkTree1(t: Tree, pos: String = ""): LambdaAct = {
    t match {
      case Inner(l, r) =>
        checkTree1(l, pos = pos + "L") +
        checkTree1(r, pos = pos + "R")
      case Leaf(v) =>
        assert(v > 0, s"Test leaf $pos = $v")
    }
  }

  def checkTree(name: String, t: Tree): LambdaAct = {
    test(name)(checkTree1(t))
  }

  val t = Inner(Inner(Leaf(2), Leaf(-3)), Leaf(0))

  val act = checkTree("Test Tree", t)
}

Here is the output:

***** running Test Tree
Test: Test Tree
Ok: Test leaf LL = 2 (TestTree.scala Line 15)
Fail: Test leaf LR = -3 (TestTree.scala Line 15)
Fail: Test leaf R = 0 (TestTree.scala Line 15)
***** Test Tree: 1 tests 1 failed 0.008 seconds

To Learn More

See the LambdaTest documentation on GitHub for complete documentation and lots of examples. And keep an eye out for future blog posts covering some of the more advanced features of LambdaTest, and follow @47deg.
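As an aside, the core ScalaCheck idea the post leans on — generate many random inputs and check that a property holds for each — can be sketched in a few lines of plain Python. This is only an illustration of the concept; `for_all` is a made-up helper, not part of LambdaTest or ScalaCheck:

```python
import random

def for_all(gen, prop, tries=100, seed=0):
    """Minimal property check: returns the first failing input, or None."""
    rng = random.Random(seed)
    for _ in range(tries):
        x = gen(rng)
        if not prop(x):
            return x  # counterexample found
    return None

# A property that holds: absolute values are never negative.
ok = for_all(lambda r: r.randint(-10**6, 10**6), lambda x: abs(x) >= 0)

# A property that fails: "every integer is less than 1000".
bad = for_all(lambda r: r.randint(-10**6, 10**6), lambda x: x < 1000)

print(ok is None, bad is not None)
```

Real property-testing libraries add shrinking (reducing a counterexample to a minimal one, as ScalaCheck did with ARG_0_ORIGINAL above) and rich input generators.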
https://www.47deg.com/blog/programming-lambdatest/
CC-MAIN-2017-22
refinedweb
867
66.23
Winter 2015

01/29/2015

9:00 am: Coffee and the realization that presentations are tomorrow
10:00 am: Matplotlib
10:15 am: Work time
11:00 am: Hypothesis testing examples
11:15 am: Work time
12:00 pm: Eat something
1:30 pm: Work time
5:00 pm: Home time

2013_movies.csv (7.6 KB)
w3d3_Interactive_Matplotlib_Plotters_Demo.ipynb (4.5 KB)
Interactive.ipynb (1.7 KB)
plotters.py (5.7 KB)
interactivenamepopper.py (2.7 KB)
plot_budget_vs_gross.py (1.0 KB)

Matplotlib documentation on event handling

To easily make your matplotlib graphs more beautiful, you can use seaborn:

import seaborn as sns

(Of course you will have to install it first:)

pip install seaborn

More on hypothesis testing

Scipy has functions for many hypothesis tests
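For the hypothesis-testing session, here is what such a test computes at its core: a one-sample t-statistic, done below with only the standard library. (scipy.stats.ttest_1samp wraps this up and adds the p-value; the sample data here is made up for illustration.)

```python
import math
import statistics

# Made-up sample; null hypothesis: the true mean is 5.0.
sample = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9]
mu0 = 5.0

mean = statistics.mean(sample)
sd = statistics.stdev(sample)           # sample standard deviation (n - 1)
n = len(sample)
t = (mean - mu0) / (sd / math.sqrt(n))  # one-sample t-statistic

print(round(t, 3))  # about 0.73 -- compare against a t critical value
```

A t-statistic this small (for 7 degrees of freedom) would not reject the null hypothesis at any conventional significance level.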
http://senoni.com/nasdag/metis/metisw3d4.html
CC-MAIN-2019-30
refinedweb
125
56.55
10 Jul 01:53 2010
Re: gtkpod 1.0 beta 2
Leandro Lucarella <luca <at> llucax.com.ar>
2010-07-09 23:53:09 GMT

Leandro Lucarella, on July 9 at 18:42, you wrote:

> I tried to break tm_add_track_to_track_model() using GDB to see how long
> each song took to be scanned, and it seems to be essentially instant, so
> I guess there is something else going on.

I'm doing some profiling and, correct me if I'm wrong, but gtkpod loads all the songs at the beginning of the program, not each time one clicks on the iPod playlist. So if that is correct, an I/O problem involving the device is ruled out.

It looks like it's GTK which somehow is slow. Manipulating the TreeViews seems to be what's taking so long. I've compiled gtkpod setting the macro DEBUG_TIMING and added a couple more prints, and this is the result:

pm_selection_changed_cb enter: 2098.201221 sec
pm_selection_changed_cb before listing: 2098.214338 sec
pm_selection_changed_cb after listing: 2150.997785 sec
pm_selection_changed_cb exit: 2151.172406 sec
st_selection_changed_cb enter (inst: 0): 2151.504683 sec
st_selection_changed_cb after st_init: 2178.739254 sec
st_selection_changed_cb before loading tracks: 2178.739273 sec
st_selection_changed_cb after loading tracks: 2232.149276 sec
st_selection_changed_cb exit: 2232.376281 sec
st_selection_changed_cb enter (inst: 1): 2232.376300 sec
st_selection_changed_cb after st_init: 2260.392100 sec
st_selection_changed_cb before loading tracks: 2260.392119 sec
st_selection_changed_cb after loading tracks: 2315.961767 sec
st_selection_changed_cb exit: 2316.018309 sec
st_selection_changed_cb enter (inst: 1): 2316.116774 sec
st_selection_changed_cb after st_init: 2343.740650 sec
st_selection_changed_cb before loading tracks: 2343.740677 sec
st_selection_changed_cb after loading tracks: 2399.858041 sec
st_selection_changed_cb exit: 2399.920906 sec
st_selection_changed_cb enter (inst: 1): 2399.920933 sec
st_selection_changed_cb after st_init: 2428.241284 sec
st_selection_changed_cb before loading tracks: 2428.241303 sec
st_selection_changed_cb after loading tracks: 2483.904216 sec
st_selection_changed_cb exit: 2483.962219 sec

The after/before listing prints were added like this:

#if DEBUG_TIMING
    g_get_current_time (&time);
    printf ("pm_selection_changed_cb before listing: %ld.%06ld sec\n",
            time.tv_sec % 3600, time.tv_usec);
#endif
    for (gl = new_playlist->members; gl; gl = gl->next)
    {   /* add all tracks to sort tab 0 */
        Track *track = gl->data;
        st_add_track (track, FALSE, TRUE, 0);
    }
#if DEBUG_TIMING
    g_get_current_time (&time);
    printf ("pm_selection_changed_cb after listing: %ld.%06ld sec\n",
            time.tv_sec % 3600, time.tv_usec);
#endif

At display_playlists.c:1518, and:

#if DEBUG_TIMING || DEBUG_CB_INIT
    g_get_current_time (&time);
    printf ("st_selection_changed_cb before loading tracks: %ld.%06ld sec\n",
            time.tv_sec % 3600, time.tv_usec);
#endif
    for (gl = new_entry->members; gl; gl = gl->next)
    {   /* add all member tracks to next instance */
        Track *track = gl->data;
        st_add_track(track, FALSE, TRUE, inst+1);
    }
#if DEBUG_TIMING || DEBUG_CB_INIT
    g_get_current_time (&time);
    printf ("st_selection_changed_cb after loading tracks: %ld.%06ld sec\n",
            time.tv_sec % 3600, time.tv_usec);
#endif

(at display_sorttabs.c:~1930)

The "after st_init" print was added after every st_init() call in st_selection_changed_cb().

I have no idea why this is happening though; all I can say is that I'm using other GTK applications that make heavy use of TreeView, like gmpc, where I can list about 25k songs (much more than I have in the iPod) in a fraction of a second. To make things worse, libgtk-2.0 is the same version in the Ubuntu box where it's slow as in the box where it works fine (which BTW is a Pentium M 1.7GHz, much less processing power than the box where it's incredibly slow).

Any ideas or suggestions are welcome.
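The instrumentation pattern used in the mail — capture a timestamp before and after a suspect block and print the difference — is language-agnostic. A minimal Python sketch of the same idea (the helper name and toy workload are mine):

```python
import time

def timed(label, fn, *args):
    """Run fn(*args) and print how long it took -- the same idea as the
    DEBUG_TIMING printf pairs above, wrapped in a reusable helper."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.6f} sec")
    return result

# Toy stand-in for "add all tracks to the view"
tracks = list(range(25_000))
total = timed("load tracks", sum, tracks)
print(total)  # 312487500
```

Bracketing successively smaller regions this way is a cheap manual profiler when a real one (gprof, perf, cProfile) is awkward to attach.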
--
Leandro Lucarella (AKA luca)
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Y tuve amores, que fue uno sólo
El que me dejó de a pie y me enseñó todo...

_______________________________________________
Gtkpod-questions mailing list
Gtkpod-questions <at> lists.sourceforge.net
http://permalink.gmane.org/gmane.comp.ipod.gtkpod.user/2033
CC-MAIN-2014-52
refinedweb
642
69.28
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

How to update a field value on button click using Odoo v8

Hello,

How to call a method on a button using the v8 coding standard? I have a state selection field on my model, and I have a button; when I click on that button, the state should be changed via the button. What do I need to do? Need help. Thanks.

Hello Prince, Here you go!

**In xml file**

<button name="your_method_name" string="Button label" type="object" class="oe_highlight"/>

**In py file**

from openerp import models, fields, api

class your_class_name(models.Model):
    _name = 'model.model'  # your model name

    # here is your method definition, which you have called from the xml file
    @api.multi
    def your_method_name(self):
        # body of your method
        # here in this method you can directly access any field value
        # for the current record using self
        return True  # return True/False or as per your requirement

Hope this will help
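The answer wires the button to the method but leaves the state change itself to "body of your method". In Odoo v8 that is typically done with the ORM's write(), e.g. self.write({'state': 'done'}). Since Odoo itself can't run standalone here, the sketch below uses a hypothetical FakeRecord class purely to illustrate the pattern — the class, method, and state names are made up:

```python
# Hypothetical stand-in for an Odoo record: write() updates fields,
# and the button method calls it to move the record to a new state.
class FakeRecord:
    def __init__(self):
        self.state = 'draft'

    def write(self, vals):
        for field, value in vals.items():
            setattr(self, field, value)
        return True

    def action_confirm(self):
        # in a real Odoo v8 model this would be: return self.write({'state': 'done'})
        return self.write({'state': 'done'})

rec = FakeRecord()
rec.action_confirm()
print(rec.state)  # done
```

In an actual model, the selection values ('draft', 'done', …) must match those declared on the state field.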
https://www.odoo.com/forum/help-1/question/how-update-field-value-on-button-click-using-odoo-v8-72209
CC-MAIN-2017-04
refinedweb
171
66.84
On 27 September 2000, Pete Shinners said:

i've got a python extension that is simply a wrapping around another C library. (not an uncommon case :])

I take it you mean a C library that's already installed on the target system, not one whose source you ship. I think you're getting at auto-configuration here. There are some initial stabs at general, cross-platform auto-configuration in the Distutils, but they are unfinished and will remain so until after Distutils 1.0. (Bummer.)

my main concern is getting it working with windows (and keeping it crossplatform happy for linux). my best bet in windows seems to be walk around the parent directory tree trying to find directories for the dependency??

I think it's probably best to avoid trying to roll your own auto-configuration code. Well, maybe: if you do go this route, you will probably learn a lot about cross-platform auto-configuration, and maybe you can help out with the auto-configuration support for Distutils 1.1.

The cheap and easy way to do this is: let the user worry about it. As of Distutils 0.9.3, there's a very nice way to let the user worry about it -- include an old-fashioned Setup file! That's right, you can put something like this in your distribution:

# uncomment this line for Unix/Linux (possibly changing library dir)
#foo foo.c -L/usr/lib -lfoo
# or this one for Windows
#foo foo.c -L"C:\Program Files\Foo" -lfoo

(I'm assuming you're compiling foo.c to create the "foo" extension module, linking against /usr/lib/libfoo.{a,so} or "C:\Program Files\Foo\foo.dll".)

Of course, your README will have to tell users to edit the Setup file before building. This is icky, but it beats editing setup.py and will have to do until we have proper auto-configuration. Then, your setup script would do this:

from distutils.extension import read_setup_file
...
extensions = read_setup_file("Setup")
if not extensions:
    sys.exit("you forgot to edit Setup!")

setup(name = "foo",
      version = "1.0",
      ...,
      extensions = extensions)

and away-you-go. Let me know how it works... this is pretty new code, and It Worked For Me (TM). This does tie you to Distutils 0.9.3, which of course ties you to Distutils 0.9.4 since 0.9.3 was broken (oops). Oh well, it keeps everyone in practice, constantly upgrading the Distutils. ;-)

        Greg
https://mail.python.org/archives/list/distutils-sig@python.org/message/G4QFJWAKQND6PT6FNLJHE5LMXVU22KWG/
CC-MAIN-2021-39
refinedweb
412
60.11
ah ok, thanks^^

ah ok, thanks^^ Just practice - it has no other purpose.

ah ok, but with the NUL char you do not mean '\0', since '\0' also exists within every char array at the end and hence also within every initialized string? All right, thanks :)

Hi!

#include <stdio.h>

int stringlen(char* field, int length)
{
    int i = 0;
    while (field[i] != '\0')
    {
...............

Hi! Thanks for the reply - I'm afraid I still do not fully get it.

Hi! I'm not sure why the output of the following code is: 2 15 6 8 10. Code being:

#include <stdio.h>
...

About your hint in regard to recursion - you have basically pinpointed well the general problem I have with recursion: even with the simplest, basic recursion examples used to illustrate things,...

Hi! Printing out a number in reversed order, I was able to do iteratively:

#include <stdio.h>

int digits_reversed_iterative(int num)
{
    int erg, reversed, digit, i;
...

Hey, oh indeed now it works. Thanks anyway for all the input and hints.

Hi! When the user input is 0, it should IMO enter the else cascade and print out " 1 1"? But this does not happen - can anyone see why not?

#include <stdio.h>
#include <stdlib.h>
...

I mean, it's not that I have not been trying to solve the given exercise (number sequences) myself - I have posted my results where I can see what the code does differently from what I want it to...

Somehow this thread does not bring me closer on topic, to be honest - I will try your exercise advice and the last one (pen and paper is anyway my default start), but can someone please concretely post here...

Thanks salem! I will post in a bit my new code with the new adjustment (still cannot excel at your given task before, fully). @christop: Thanks for the explanation - makes sense to me. The...

Hey, I did put the relevant code in text using the CODE tags - is this what you mean? The second piece of code can be ignored.

Hi! Thanks for your initial help. Before I try again to print out an X, I want to understand and finally execute your exercise (printing out the chosen sequence). I post here my first attempt...

Hi! I want to print out an 'X' with as many rows as an input value would have. This picture will hopefully describe better what I want to do: Screenshot - 0781ad21c51043868573c8ecc3b4b448 -...

Thanks for the suggestions - I will take a look and try to understand the source code myself.

Hi! First of all, I know that my code in regard to solving the task (logic) is still wrong, and I will deal with it afterwards. For now, I am stuck at the technical details, since my program is even...

Thanks, helps a lot. I just did not imagine, for an unknown reason, that expressions get executed when they appear within the brackets of a for loop. But in hindsight, of course the expression...

Ah, now I indeed got it - thanks a lot! Also, thanks Salem for your hint.^^

Hey, thanks a lot, your hint helped me out: 15701 The problem is though that I do not understand why we must assign the value directly before the inner loop begins, i.e. directly before we...

Hi! Here is a small piece of code with the output: 15699 I do understand that, given the 2nd initialization (i = 1), the following condition is not true and hence the program leaves the loop...

Hi! As an exercise I would like to turn the following nested for loop into a nested while loop: 15696 But somehow it does not work out the way I want it to, and obviously after all iterations are...
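One of the threads above is about printing a number's digits in reversed order, first iteratively and then recursively. The board is C territory, but the recursive idea itself is language-independent; a sketch in Python (function name and accumulator style are mine):

```python
def digits_reversed(num, acc=0):
    """Reverse the decimal digits of a non-negative integer recursively:
    peel off the last digit and push it onto the accumulator.
    Note: trailing zeros drop (1200 -> 21), as in any int-based version."""
    if num == 0:
        return acc
    return digits_reversed(num // 10, acc * 10 + num % 10)

print(digits_reversed(1234))  # 4321
```

The same shape ports directly to C: the base case is num == 0, and each call handles one digit via % 10 and / 10.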
https://cboard.cprogramming.com/search.php?s=1c567572103f58fe286ab160eeff7c16&searchid=4918499
CC-MAIN-2020-24
refinedweb
617
71.04
jQuery UI Plugin

33% of Grails users

Dependency: compile "org.grails.plugins:jquery-ui:1.10.4"

Summary

Simply supplies jQuery UI resources; depends on the jQuery plugin. Use this plugin to avoid resource duplication and conflicts.

Installation

Most common way:

grails install-plugin jquery-ui

Or, to get the latest version:

- use the link above to download the archive from Github,
- place it in the lib directory of your Grails project,
- rename it to jquery-ui-<version>
- install the plugin: grails install-plugin jquery-ui <version>
- or, if you use the Maven 2 integration for Grails: mvn org.grails:grails-maven-plugin:install-plugin -DpluginName=jquery-ui -DpluginVersion=<version>

Description

Overview

This plugin supplies jQuery UI resources, and depends on the jQuery plugin to include the core jquery libraries. Use this plugin in your own apps and plugins to avoid resource duplication and conflicts. It includes all the standard widgets and effects, so that you don't have to worry about anything being missing - plus there is no workable "module" system for jQuery UI that I know of at this time. When you have done your initial development and are ready for production, you may want to optimize these resources / use some minifying on them. This plugin supports the Grails Resources framework.

Note for developers working with this plugin

This plugin just provides the resources and a tag to include them. It must not include tags that add new functionality / wrap jQuery UI features. You must not change the theme shipped with the plugin, and you must include all the standard jQuery UI widgets. If you need a newer version of jQuery UI than is currently available with this plugin, you can update this plugin.

- Always include jqueryui resources that include all widgets and effects
- Always include the default theme as ui-lightness
- Don't add any functionality tags to this plugin
- Only update the jquery core library dependency when absolutely necessary, i.e. when jquery ui requires it.
Conventions

The version number of this plugin must always follow the version number of the jQuery UI version it bundles, with possible 4th-level point releases for patches/iterations of the plugin with the same jQuery library. E.g. the first release of this plugin is 1.8 - because it ships jQuery UI 1.8. If jQuery UI upgrades to 1.8.2 or similar, this plugin would need to be upgraded to use it and would use the version number 1.8.2. If there was a problem with the Grails plugin release, this might change to 1.8.2.1 or 1.8.2.2 etc. The key part is that any apps/plugins can install or dependsOn "jquery-ui" of a given version e.g. 1.8 to pull in that version of jQuery UI. This plugin must dependsOn the minimum version of jQuery required by the jQuery UI version. E.g. in this release that is jQuery 1.3.2+, which is a separate Grails plugin (1.3.2.1) that also matches this versioning convention.

Using jQuery UI with the Resources framework

This plugin integrates fully with the Grails resources framework. By installing this plugin you automatically get modules declared for jQuery UI and also for jQuery (because of Grails transitive plugin dependencies). The modules available are:

- jquery-ui - The core jQuery UI code, which automatically depends on jquery.
- jquery-theme - The default jQuery "lightness" theme. The jquery-ui module depends on this module automatically. You can override the CSS file of this resource using the Resources framework. The CSS resource's id is "theme". Alternatively you can override the dependsOn of 'jquery-ui' to depend on some other module you define containing your theme.
- jquery-ui-dev - The non-minified version of jQuery for development mode.

Usage with the Resources framework

Simply add the following to your GSP or Sitemesh layout:

<r:require

Overriding the Default Theme

If you are using the jquery-ui plugin with the Resources Framework, you can easily override the jquery-theme module to use your own custom jQuery theme.
See Resources Framework Documentation for more details. Everything else is done for you. You do not need any of the legacy tags or configuration specified in the rest of this documentation.

grails.resources.modules = {
    …
    overrides {
        'jquery-theme' {
            resource id: 'theme', url: '/css/path/to/jquery-ui-1.8.17.custom.css'
        }
    }
}

Legacy Tags

These tags are only for developers not using the Grails Resources framework.

There is one tag - <jqui:resources/> - which pulls in the resources needed. It does not currently pull in the core jQuery resources; you should use the <g:javascript tag for that. You can override the theme used by jQuery UI by specifying the theme attribute with a value of the theme name, and the theme will then be located by pulling it from your app using the path:

<appcontext>/jquery-ui/themes/<themename>/jquery-ui-<ver>.custom.css

or, by using the themeDir attribute:

<appcontext>/<themeDir>/<themename>/jquery-ui-<ver>.custom.css

Alternatively you may want to specify the full URI to the theme CSS so that your custom themes work even if the version of the jquery ui plugin you use changes. To do this, specify themeCss:

<jqui:resources

Configuration

Like any config param, you can make those below environment-specific (e.g. minified=false for DEV, and cdn='googlecode' for production).

minified

You can choose whether the minified version of the .js should be used (or not) with the following config parameter:

jqueryUi.minified = true|false

Default: serve minified.

Google CDN

If you want to load the resources from the Google CDN (which makes a lot of sense for high-traffic sites), you can specify:

jqueryUi.cdn = 'googlecode'

Default: loading from CDN is disabled.
Date Picker example

Here is the code for a gsp view using datePicker (see reference at ):

testDatePicker.gsp

Don't forget to add a def to your controller to test it.

<html>
    <head>
        <title>Simple GSP page</title>
        <g:javascript
        <jqui:resources
        <script type="text/javascript">
            $(document).ready(function() {
                $("#datepicker").datepicker({dateFormat: 'yy/mm/dd'});
            })
        </script>
    </head>
    <body>
        <div>
            <p>
                Between <input type="text" id="datepicker">
            </p>
        </div>
    </body>
</html>

def testDatePicker = { }

css/ui-darkness has been extracted from the zip file to the directory home/xxx/myProject/web-app/jquery-ui/themes/.

N.B.: you may have to rename the file /home/xxx/myProject/web-app/jquery-ui/themes/ui-darkness/jquery-ui-1.8.1.custom.css to match your jquery version (e.g., in this case, jquery-ui-1.8.1.custom.css had to be renamed to jquery-ui-1.8.custom.css).
http://www.grails.org/plugin/jquery-ui
CC-MAIN-2017-26
refinedweb
1,091
54.73
Okay, I see that by "fails", you probably meant "raises this exception" rather than fails the usual way (i.e. raises an AssertionError).

On 22.12.2020 22:38, Alan G. Isaac wrote:
Here, `seq1 == seq2` produces a boolean array (i.e., an array of boolean values).
hth, Alan Isaac

On 12/22/2020 2:28 PM, Ivan Pozdeev via Python-Dev wrote:
You sure about that? For me, bool(np.array) raises an exception:

In [12]: np.__version__
Out[12]: '1.19.4'

In [11]: if [False, False] == np.array([False, False]): print("foo")
<...>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

On 22.12.2020 21:52, Alan G. Isaac wrote:
The following test fails because `seq1 == seq2` returns a (boolean) NumPy array whenever either seq is a NumPy array.

import unittest
import numpy as np

unittest.TestCase().assertSequenceEqual([1., 2., 3.], np.array([1., 2., 3.]))

I expected `unittest` to rely only on features of a `collections.abc.Sequence`, which, I believe, are satisfied by a NumPy array. Specifically, I see no requirement that a sequence implement __eq__ at all, much less in any particular way. In short: a test named `assertSequenceEqual` should, I would think, work for any sequence and therefore (based on the available documentation) should not depend on the class-specific implementation of __eq__. Is that wrong?

Thank you, Alan Isaac
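To see concretely what the thread is about, here is a small runnable sketch (assuming NumPy is installed): comparing a list with an array yields an element-wise boolean array, so any code that truth-tests that comparison raises the ValueError quoted above. np.array_equal or converting with tolist() sidesteps it:

```python
import numpy as np

a = np.array([1., 2., 3.])
b = [1., 2., 3.]

eq = (a == b)  # element-wise comparison, not a single bool
print(type(eq).__name__, eq.tolist())  # ndarray [True, True, True]

# Truth-testing a multi-element boolean array is ambiguous:
try:
    bool(eq)
except ValueError:
    print("ValueError raised")

# Workarounds: reduce explicitly, or compare plain Python lists.
print(np.array_equal(a, b), a.tolist() == b)  # True True
```

This is why assertSequenceEqual trips: internally it truth-tests `seq1 == seq2`, which a NumPy operand turns into an array rather than a bool.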
https://mail.python.org/archives/list/python-dev@python.org/message/MKNN64A4JFKSBG37KZJ4HFYQ5TIE35AX/
CC-MAIN-2022-33
refinedweb
242
70.39
.net, c#, java, sql, OOAD and more mad memory dumps...

Tuesday, November 18, 2008

/// <summary>
/// Add a user to a Sharepoint group
/// </summary>
/// <param name="userLoginName">Login name of the user to add</param>
/// <param name="userGroupName">Group name to add</param>
private void AddUserToAGroup(string userLoginName, string userGroupName)
{
    // Executes this method with Full Control rights even if the user does not otherwise have Full Control
    SPSecurity.RunWithElevatedPrivileges(delegate
    {
        // Don't use context to create the spSite object, since it won't create the object with
        // elevated privileges but with the privileges of the user who executes this code,
        // which may cause an exception
        using (SPSite spSite = new SPSite(Page.Request.Url.ToString()))
        {
            using (SPWeb spWeb = spSite.OpenWeb())
            {
                try
                {
                    // Allow updating of some sharepoint lists (here spUsers, spGroups etc...)
                    spWeb.AllowUnsafeUpdates = true;
                    SPUser spUser = spWeb.EnsureUser(userLoginName);
                    if (spUser != null)
                    {
                        SPGroup spGroup = spWeb.Groups[userGroupName];
                        if (spGroup != null)
                            spGroup.AddUser(spUser);
                    }
                }
                catch (Exception ex)
                {
                    // Error handling logic should go here
                }
                finally
                {
                    spWeb.AllowUnsafeUpdates = false;
                }
            }
        }
    });
}

Here in this method you have to set "spWeb.AllowUnsafeUpdates = true" to allow updating some sharepoint lists.

Tuesday, November 11, 2008

Check these really good articles by James Tsai:

Understand SharePoint Permissions - Part 1. SPBasePermissions in Hex, Decimal and Binary - The Basics
Understand SharePoint Permissions - Part 2. Check SharePoint user/group permissions with Permissions web service and JavaScript

Thanks Suranja for sending me these links.

Tuesday, September 02, 2008

I have a small question for sharepoint experts. :) Can we deploy a Content Type as a feature with some extended settings like "Workflow settings" and "Information management policy settings"? If so, please enlighten me on how to do that.
Monday, September 01, 2008

Type the below in the command line, replacing <MC name or IP> with the machine name or IP you want to connect to. This works with WinXP and Vista.

mstsc -v:<MC name or IP> /f -console

Updated: :D There is an easy way. Just type mstsc /console in the command prompt.

Monday, August 25, 2008

Error handling in T-SQL was always a tedious and tricky job. Most of the time we used not to handle the error at the T-SQL level but handled it in the uppermost layer (data access layer or business layer; handling database errors in the business layer is a totally wrong practice). But now in SQL 2005 and 2008 you have a proper error handling mechanism, just like in modern OO languages. You can use try/catch in stored procedures and functions. Actually I knew that I can use try/catch in stored procedures, but only today I got to know about a more interesting method we can use. After catching an error, what to do next was a question for me. If it's C# or Java, we can log the error and maybe throw a customized exception to the next layer. (There are so many options in handling errors.) Can you remember throw exception in C#? Of course you know it. :) Well… we can do the same in T-SQL using the RAISERROR function. Using RAISERROR is as follows,

BEGIN TRY
    -- RAISERROR with severity 11-19 will cause execution to jump to the CATCH block.
    RAISERROR ('Error raised in TRY block.', -- Message text.
               16, -- Severity.
               1   -- State.
               );
END TRY
BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState INT;

    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();

    -- Re-raise the original error to the caller.
    RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;

Pretty good ha… You can follow this up more in MSDN.

Tuesday, January 29, 2008

Well... After 6 months I'm gonna do it again. I don't know if the people who used to visit my blog are still doing that. Anyway, I will tell ya what happened in the last 6 months later. :D Well, today I was working on deploying a SharePoint site on a 64-bit server. This is the first time I'm putting my hands on x64 machines.
If you are used to doing, or want to do, SharePoint deployments and have never done one on a 64-bit server, just keep the below in mind. There are 2 "Program Files" folders on 64-bit machines: the normal "Program Files" folder for 64-bit programs, and "Program Files (x86)" for 32-bit programs. So when you are using script files or solutions (wsp files) for deployment, be aware of this. More info will follow…. :)

Thursday, July 26, 2007

Google.

Tuesday, July 17, 2007

Visual WebGui is the .NET answer to GWT (Google Web Toolkit). But it seems more powerful than GWT, even though it's not coming from Microsoft, and yet it's open source. I still haven't had time to put my hands on it deeply, but you can get a feel for it by just browsing their web site, checking the features, and comparing those with GWT. Unlike GWT, in Visual WebGui you can use existing Windows controls to create your UI. A major advantage of Visual WebGui over GWT is that we can deploy Visual WebGui applications as web applications or desktop applications. That's a huge step taken by Visual WebGui. And most importantly, Visual WebGui is open source. And with the world's best IDE (Visual Studio 2005) you have the ultimate power of designing your graphical user interfaces. Just check these features out by visiting Visual WebGui.

User-Friendly – Visual WebGui was designed to be the next VB6 for the web. Simple to program, simple to deploy. With a full WinForms API and design-time support you can start developing complex AJAX applications in seconds with no web know-how.

Secured – Visual WebGui was designed to provide for military-grade secured AJAX applications by eliminating client-side service consumption and business logic processing, using an empty-client concept. The browser is used as a looking glass to the server that runs the application.

Productive – With full WinForms API and design-time support, Visual WebGui is almost as productive as R.A.D. platforms without limiting your options.
Debug your application the same way you would debug any .NET application, free of script-debugging nightmares.

Powerful – Visual WebGui was designed to support enterprise-class applications with unlimited complexity, supported by full object-oriented programming. Using our unique AJAX transport, Visual WebGui applications consume 1% of bandwidth compared to any alternative AJAX framework.

Feature-Rich – Visual WebGui contains most of WinForms' components, including non-trivial implementations of controls such as the PropertyGrid that provides a simple way to edit live objects.

Supported – Visual WebGui is supported by its Core Team of developers and a dedicated international community. Through online forums and our support@visualwebgui.com mailbox, support is always close at hand (commercial support will be available soon).

Easily Installed – Visual WebGui comes with a simple installation that will get you started on developing your AJAX application in no time. Visual WebGui's toolbox and templates are integrated into Visual Studio so they are always available.

Localized – Visual WebGui includes full .NET and WinForms multi-language localization support, which allows you to localize your application in the designer the same way you localize a WinForms application.

Open Source – The Visual WebGui SDK is provided free, as open-source software, licensed under a standard LGPL agreement. It allows individuals to do whatever they wish with the application framework, both commercially and non-commercially.

Cutting-Edge – Visual WebGui provides the developer with full object-oriented .NET support, allowing utilization of all the .NET capabilities including reflection, generics and more. This is enabled by a unique architecture that provides an alternative HTTP processing pipeline that does not involve serializing JavaScript.

Extensible – Visual WebGui is provided with many customization and extensibility features, including custom control creation, theme creation and gateways.
Interoperable – As an extension to ASP.NET, Visual WebGui can also interact with standard ASP.NET pages, hosting them within your Visual WebGui application or calling Visual WebGui dialogs and forms from your ASP.NET code.

Visual WebGui's roadmap includes…

Mono deployment – Allowing your Visual WebGui application to run on non-Microsoft servers (Visual WebGui for .NET 1.1 is already compatible with Mono).

Legacy to web – Migrating WinForms or VB6 applications to the web without rewriting your application.

Dual-mode deployment – Deploy your Visual WebGui application as a desktop application or a web application, enjoying the best of both worlds.

Sunday, July 15, 2007 #.

Saturday, July 14, 2007 #

Doing some R&D work on GWT a few weeks back, I was really fascinated by the Google frameworks. The next Google framework I'm gonna put my hands on is Guice: the Google way of implementing Dependency Injection. It isn't a company assignment, but I think it's worth learning and trying. Guice (pronounced "juice") is an ultra-lightweight, next-generation dependency injection container for Java 5 and later. If you have ever tried to achieve loose coupling to the ultimate extent, you really should know about dependency injection. It's not a buzzword anymore. People are using several methods and frameworks to implement DI. A couple of years back, when I was at my good old place (Logical Systems, now known as Kandysoft), I did my first application design on my own. It was an e-commerce web site, and I got instructions to develop the application framework as generic as possible, so we could re-use some of its functionality in other applications. Our focus was on the product catalog and shopping cart. In some applications the product catalog and shopping cart are not separated, but I thought they had to be developed separately. So I did, and it was a very successful design. At that time I had actually heard about DI and its use in achieving loose coupling.
But the only article I had read was Martin Fowler's, and I couldn't understand much of it at the time. And to make things worse, I had a limited time frame to complete and deliver the project, as we always do. Anyway, I gained what I wanted by using interfaces, and now I know I implemented dependency injection by hand even though I didn't know I was doing it. BTW, now there are so many application frameworks that leverage DI and related functionality. I think Spring is at the top. They are hitting both Java and .NET, so it's a great framework for developers like me who focus on both. But if you are new to DI and doing Java, I think Guice is the best to put your hands on first. Like all Google frameworks it's simple and easy to learn. I'll shed some posts later on DI and Guice.

Wednesday, July 11, 2007 #

Check this out. Worth seeing. Thx Senthil for the url.

Sunday, June 24, 2007 #

There is a good audio talk show on .NET Rocks with Mark Pollack, who founded Spring.NET. This show is particularly useful if you are not aware of the potential of Spring.NET. BTW, Spring.NET has announced the release of Spring.NET 1.1 M1 (Milestone 1). New features and improvements in the release are below.

* NUnit Integration: Aids in writing integration tests. Configuration of test cases via dependency injection and automatic transaction rollback
* NHibernate 1.0/1.2 Integration: Simplify use of NHibernate and participation in Spring's declarative transaction management
* ASP.NET AJAX Integration: Export 'plain .NET objects' as web services, configure and apply aspects to them, and then expose them inside client-side JavaScript.
* Transaction and AOP XML namespaces to simplify configuration.
* AOP support for methods with out/ref parameters.
* Sample NHibernate application.
* Numerous bug fixes and improvements.
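The interface-based loose coupling described in the July 14 post can be sketched in a few lines. Here is a minimal constructor-injection example in Python (illustrative only; the class names are invented for this sketch, and this is not Guice or Spring code):

```python
# Constructor injection: the cart depends only on an abstract catalog
# interface, so concrete implementations can be swapped without touching
# the cart code. This is DI "by hand", as described in the post above.
from abc import ABC, abstractmethod


class Catalog(ABC):
    """Abstract dependency: anything that can price a SKU."""
    @abstractmethod
    def price(self, sku):
        ...


class InMemoryCatalog(Catalog):
    """One concrete implementation; a DB-backed one could replace it."""
    def __init__(self, prices):
        self._prices = prices

    def price(self, sku):
        return self._prices[sku]


class ShoppingCart:
    def __init__(self, catalog: Catalog):  # the dependency is injected here
        self._catalog = catalog
        self._items = []

    def add(self, sku):
        self._items.append(sku)

    def total(self):
        return sum(self._catalog.price(s) for s in self._items)


cart = ShoppingCart(InMemoryCatalog({"book": 10, "pen": 2}))
cart.add("book")
cart.add("pen")
print(cart.total())  # 12
```

A DI container like Guice or Spring automates exactly this wiring step (constructing `ShoppingCart` with the right `Catalog`), but the coupling benefit comes from the interface, not from the container.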
http://geekswithblogs.net/madhawa/Default.aspx
MCP23017-RK (community library)

Summary
Particle driver for 16-port I2C GPIO Expander MCP23017

Example Build Testing
Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully.

Library Read Me
This content is provided by the library maintainer and has not been validated or approved.

MCP23017-RK
Particle driver for the MCP23017 16-port I2C GPIO expander

Pinouts

One Side:
- 1 GPB0
- 2 GPB1
- 3 GPB2
- 4 GPB3
- 5 GPB4
- 6 GPB5
- 7 GPB6
- 8 GPB7
- 9 VDD (3.3 or 5V, red)
- 10 VSS (GND, black)
- 11 NC
- 12 SCL (to Photon D1, blue)
- 13 SDA (to Photon D0, green)
- 14 NC

Other Side:
- 15 A0
- 16 A1
- 17 A2
- 18 /RESET
- 19 INTB
- 20 INTA
- 21 GPA0
- 22 GPA1
- 23 GPA2
- 24 GPA3
- 25 GPA4
- 26 GPA5
- 27 GPA6
- 28 GPA7

[image unavailable]

Initialization

Typically you create a global object like this in your source:

MCP23017 gpio(Wire, 0);

begin

void begin();

You must call begin(), typically during setup(), to initialize the Wire interface.

pinMode

void pinMode(uint16_t pin, PinMode mode);

Sets the pin mode of a pin (0-15). Pins GPA0 - GPA7 are 0 - 7, and pins GPB0 - GPB7 are 8 - 15.

digitalWrite

void digitalWrite(uint16_t pin, uint8_t value);

Sets the value of a pin (0-15) to the specified value. Values are typically:
- 0 (or false or LOW)
- 1 (or true or HIGH)

digitalRead

int32_t digitalRead(uint16_t pin);

Reads the value of the pin (0-15). This will be HIGH (true, 1) or LOW (false, 0). If used on an output pin, returns the current output state.

getPinMode

PinMode getPinMode(uint16_t pin);

Returns the pin mode of pin (0-15), which will be one of:
- INPUT (default)
- INPUT_PULLUP
- OUTPUT

pinAvailable

bool pinAvailable(uint16_t pin);

Returns true if 0 <= pin <= 15.
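The pin numbering above (GPA0 - GPA7 as 0 - 7, GPB0 - GPB7 as 8 - 15) boils down to a port-and-bit lookup inside the driver. A small Python sketch of that mapping (illustrative only, not part of the library; the register addresses are the MCP23017 defaults in BANK=0 mode, per the datasheet):

```python
# How a logical pin 0-15 maps to the MCP23017's A/B port registers.
# IODIRA/IODIRB are the direction registers at their BANK=0 addresses.
IODIRA = 0x00  # direction register for port A (GPA0-GPA7, pins 0-7)
IODIRB = 0x01  # direction register for port B (GPB0-GPB7, pins 8-15)


def pin_to_register_bit(pin):
    """Return (direction_register, bit_mask) for a pin number 0-15."""
    if not 0 <= pin <= 15:
        raise ValueError("pin must be 0-15")
    reg = IODIRA if pin < 8 else IODIRB  # low pins -> port A, high -> port B
    return reg, 1 << (pin % 8)           # one bit per pin within the port


print(pin_to_register_bit(8))  # (1, 1): GPB0 is bit 0 of the port B register
```

The same port/bit split applies to the GPIO and pull-up registers; only the base register address changes.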
Example Programs

Simple Example

#include "Particle.h"
#include "MCP23017-RK.h"

MCP23017 gpio(Wire, 0);

void setup() {
    Serial.begin(9600);

    gpio.begin();
    gpio.pinMode(0, OUTPUT);
    gpio.digitalWrite(0, HIGH);
}

void loop() {
}

Other Example

The other example outputs a square wave on pins 6, 7, 8, 9:
- GPA6: 1000 ms period (1 Hz)
- GPA7: 666 ms period
- GPB0: 200 ms period (5 Hz)
- GPB1: 20 ms period (50 Hz)

This should result in the following:

[image unavailable]

You can also connect a jumper from GPB7 to one of those pins. The example echoes the value on the GPB7 input to the blue D7 LED on the Photon, so you can see the different frequencies.

Browse Library Files
https://docs.particle.io/cards/libraries/m/MCP23017-RK/
Selenium Documentation
Release 1.0
Selenium Project
November 08, 2009

Contents:

1 Note to the Reader
2 Introducing Selenium
3 Selenium Basics
4 Selenium-IDE
5 Selenium Commands
  5.1 Verifying Page Elements
  5.2 Locating Elements
  5.3 Matching Text Patterns
  5.4 The "AndWait" Commands
  5.5 The waitFor Commands in AJAX applications
  5.6 Sequence of Evaluation and Flow Control
  5.7 Store Commands and Selenium Variables
  5.8 JavaScript and Selenese Parameters
  5.9 echo - The Selenese Print Command
  5.10 Alerts, Popups, and Multiple Windows
6 Selenium-RC
  6.1 Introduction
  6.2 How Selenium-RC Works
  6.3 Installation
  6.4 From Selenese to a Program
  6.5 Programming Your Test
  6.6 Learning the API
  6.7 Reporting Results
  6.8 Adding Some Spice to Your Tests
  6.9 Server Options
  6.10 Specifying the Path to a Specific Browser
  6.11 Selenium-RC Architecture
  6.12 Handling HTTPS and Security Popups
  6.13 Supporting Additional Browsers and Browser Configurations
  6.14 Troubleshooting Common Problems
7 Test Design Considerations
  7.1 Introducing Test Design
  7.2 What to Test?
  7.3 Verifying Expected Results: Assert vs. Verify? Element vs. Actual Content?
  7.4 Locating UI Elements
  7.5 Location Strategy Tradeoffs
  7.6 Testing Ajax Applications
  7.7 UI Mapping
  7.8 Bitmap Comparison
  7.9 Solving Common Web-App Problems
  7.10 Organizing Your Test Scripts
  7.11 Organizing Your Test Suites
  7.12 Handling Errors
8 Selenium-Grid
9 User-Extensions
  9.1 Introduction
  9.2 Actions
  9.3 Accessors/Assertions
  9.4 Locator Strategies
  9.5 Using User-Extensions With Selenium-IDE
  9.6 Using User-Extensions With Selenium RC
10 .NET client driver configuration
11 Java Client Driver Configuration
  11.1 Configuring Selenium-RC With Eclipse
  11.2 Configuring Selenium-RC With Intellij
12 Python Client Driver Configuration
13 Locating Techniques
  13.1 Useful XPATH patterns
  13.2 Starting to use CSS instead of XPATH

CHAPTER ONE

NOTE TO THE READER

Hello, and welcome to Selenium! The Documentation Team would like to welcome you, and to thank you for being interested in Selenium. We are very excited to promote Selenium and, hopefully, to expand its user community. In short, we really want to "get the word out" about Selenium.

Why? We absolutely believe this is the best tool for web-application testing. We feel its extensibility and flexibility, along with its tight integration with the browser, is unmatched by available proprietary tools. It's quite different from other tools. No doubt, we believe this documentation will truly help to spread the knowledge around. We truly believe you will be similarly excited once you learn how Selenium approaches test automation.

We have worked very, very hard on this document. Whether you are brand-new to Selenium or have been using it for awhile, we have aimed to write so that those completely new to test automation will be able to use this document as a stepping stone. We have also already added some valuable information that more experienced users will appreciate.

Please realize that this document is a work in progress. There are planned areas we haven't written yet. However, we have written the beginning chapters first so newcomers can get started more smoothly. This document will be a 'live' document on the SeleniumHQ website where frequent updates will occur as we complete the additional planned documentation.

Thanks very much for reading.

– the Selenium Documentation Team

CHAPTER TWO

INTRODUCING SELENIUM

2.1 To Automate or Not to Automate? That is the Question!

Is automation always advantageous? When should one decide to automate test cases?

It is not always advantageous to automate test cases. There are times when manual testing may be more appropriate. For instance, if the application's user interface will change considerably in the near future, then any automation would need to be rewritten. Also, sometimes there simply is not enough time to build test automation. For the short term, manual testing may be more effective. If an application has a very tight deadline, there is currently no test automation available, and it's imperative that the testing get done within that time frame, then manual testing is the best solution.

However, automation has specific advantages for improving the long-term efficiency of a software team's testing processes. Test automation means using a tool to run repeatable tests against the target application whenever necessary.

2.2 Test Automation for Web Applications

Many, perhaps most, software applications today are written as web-based applications to be run in an Internet browser. The effectiveness of testing these applications varies widely among companies and organizations. In an era of continuously improving software processes, such as eXtreme programming (XP) and Agile, it can be argued that disciplined testing and quality assurance practices are still underdeveloped in many organizations. Software testing is often conducted manually. At times, this is effective; however, there are alternatives to manual testing that many organizations are unaware of, or lack the skills to perform. Utilizing these alternatives would in most cases greatly improve the efficiency of their software development by adding efficiencies to their testing. Test automation is often the answer.

There are many advantages to test automation. Most are related to the repeatability of the tests and the speed at which the tests can be executed. There are a number of commercial and open source tools available for assisting with the development of test automation. Selenium is possibly the most widely-used open source solution. This user's guide will assist both new and experienced Selenium users in learning effective techniques in building test automation for web applications.

This guide introduces Selenium, teaches its most widely used features, and provides useful advice in best practices accumulated from the Selenium community. Many examples are provided. Also, technical information on the internal structure of Selenium and recommended uses of Selenium are provided as contributed by a consortium of experienced Selenium users. It is our hope that this guide will get additional new users excited about using Selenium for test automation. We hope this guide will assist in "getting the word out" that quality assurance and software testing have many options beyond what is currently practiced. We hope this user's guide and Selenium itself provide a valuable aid to boosting the reader's efficiency in his or her software testing processes.

2.3 Introducing Selenium

Selenium is a robust set of tools that supports rapid development of test automation for web-based applications. Selenium provides a rich set of testing functions specifically geared to the needs of testing of a web application. These operations are highly flexible, allowing many options for locating UI elements and comparing expected test results against actual application behavior.

One of Selenium's key features is the support for executing one's tests on multiple browser platforms.

2.4 Selenium Components

Selenium is composed of three major tools. Each one has a specific role in aiding the development of web application test automation.

2.4.1 Selenium-IDE

Selenium-IDE is the Integrated Development Environment for building Selenium test cases. It operates as a Firefox add-on and provides an easy-to-use interface for developing and running individual test cases or entire test suites. Selenium-IDE has a recording feature, which will keep account of user actions as they are performed and store them as a reusable script to play back. It also has a context menu (right-click) integrated with the Firefox browser, which allows the user to pick from a list of assertions and verifications for the selected location. Selenium-IDE also offers full editing of test cases for more precision and control.

Although Selenium-IDE is a Firefox only add-on, tests created in it can also be run against other browsers by using Selenium-RC and specifying the name of the test suite on the command line.

2.4.2 Selenium-RC (Remote Control)

Selenium-RC allows the test automation developer to use a programming language for maximum flexibility and extensibility in developing test logic. For instance, if the application under test returns a result set, and if the automated test program needs to run tests on each element in the result set, the programming language's iteration support can be used to iterate through the result set, calling Selenium commands to run tests on each item.

Selenium-RC provides an API (Application Programming Interface) and library for each of its supported languages: HTML, Java, C#, Perl, PHP, Python, and Ruby. This ability to use Selenium-RC with a high-level programming language to develop test cases also allows the automated testing to be integrated with a project's automated build environment.

2.4.3 Selenium-Grid

Selenium-Grid allows the Selenium-RC solution to scale for large test suites or test suites that must be run in multiple environments. With Selenium-Grid, multiple instances of Selenium-RC are running on various operating system and browser configurations; each of these, when launching, register with a hub. When tests are sent to the hub they are then redirected to an available Selenium-RC, which will launch the browser and run the test. This allows for running tests in parallel, with the entire test suite theoretically taking only as long to run as the longest individual test.

2.5 Supported Browsers

Browser        | Selenium-IDE                                        | Selenium-RC               | Operating Systems
Firefox 3      | 1.0 Beta-1 & 1.0 Beta-2: Record and playback tests | Start browser, run tests  | Windows, Linux, Mac
Firefox 2      | 1.0 Beta-1: Record and playback tests              | Start browser, run tests  | Windows, Linux, Mac
IE 8           |                                                     | Under development         | Windows
IE 7           |                                                     | Start browser, run tests  | Windows
Safari 3       |                                                     | Start browser, run tests  | Mac
Safari 2       |                                                     | Start browser, run tests  | Mac
Opera 9        |                                                     | Start browser, run tests  | Windows, Linux, Mac
Opera 8        |                                                     | Start browser, run tests  | Windows, Linux, Mac
Google Chrome  |                                                     | Start browser, run tests  | Windows
Others         |                                                     | Partial support possible**| As applicable

* Tests developed on Firefox via Selenium-IDE can be executed on any other supported browser via a simple Selenium-RC command line.
** Selenium-RC server can start any executable, but depending on browser security settings there may be technical limitations that would limit certain features.

2.6 Flexibility and Extensibility

You'll find that Selenium is highly flexible. There are multiple ways in which one can add functionality to Selenium's framework to customize test automation for one's specific testing needs. This is, perhaps, Selenium's strongest characteristic when compared with proprietary test automation tools and other open source solutions. Selenium-RC support for multiple programming and scripting languages allows the test writer to build any logic they need into their automated testing, and to use a preferred programming or scripting language of one's choice.

Selenium-IDE allows for the addition of user-defined "user-extensions" for creating additional commands customized to the user's needs. Also, it is possible to re-configure how the Selenium-IDE generates its Selenium-RC code. This allows users to customize the generated code to fit in with their own test frameworks. Finally, Selenium is an Open Source project where code can be modified and enhancements can be submitted for contribution.

2.7 About this Book

This reference documentation targets both new users of Selenium and those who have been using Selenium and are seeking additional knowledge. It introduces the novice to Selenium test automation. We do not assume the reader has experience in testing beyond the basics.

The experienced Selenium user will also find this reference valuable. It compiles in one place a set of useful Selenium techniques and best practices by drawing from the knowledge of multiple experienced Selenium QA professionals.

The remaining chapters of the reference present:

Selenium Basics Introduces Selenium by describing how to select the Selenium component most appropriate for your testing tasks. Also provides a general description of Selenium commands and syntax. This section allows you to get a general feel for how Selenium approaches test automation and helps you decide where to begin.

Selenium-IDE Teaches how to build test cases using the Selenium Integrated Development Environment. We explain how your test script can be "exported" to the programming language of your choice. Also, this section describes some configurations available for extending and customizing how the Selenium-IDE supports test case development.

Selenium Commands Describes a subset of the most useful Selenium commands in detail. This chapter shows what types of actions, verifications and assertions can be made against a web application.

Selenium-RC Explains how to develop an automated test program using the Selenium-RC API. Many examples are presented in both a programming language and a scripting language. Also, the installation and setup of Selenium-RC is covered here. The various modes, or configurations, that Selenium-RC supports are described, along with their trade-offs and limitations. Architecture diagrams are provided to help illustrate these points. A number of solutions to problems which are often difficult for the new user are described in this chapter. This includes handling Security Certificates, https requests, pop-ups, and the opening of new windows.

Test Design Considerations Presents many useful techniques for using Selenium efficiently. This includes scripting techniques and programming techniques for use with Selenium-RC. We cover examples of source code showing how to report defects in the application under test. We also cover techniques commonly asked about in the user forums, such as how to implement data-driven tests (tests where one can vary the data between different test passes). This chapter also describes useful techniques for making your scripts more readable when interpreting defects caught by your Selenium tests.

User extensions Presents all the information required for easily extending Selenium.

Selenium-Grid This chapter is not yet developed.

2.8 The Documentation Team

2.8.1 The Original Authors

• Dave Hunt
• Paul Grandjean
• Santiago Suarez Ordonez
• Tarun Kumar

The original authors who kickstarted this document are listed in alphabetical order. Each of us contributed significantly by taking a leadership role in specific areas. Each chapter originally had a primary author who kicked off the initial writing, but in the end, each of us made significant contributions to each chapter throughout the project.

2.8.2 Current Authors

• Mary Ann May-Pumphrey
• Peter Newhook

In addition to the original team members who are still involved (May '09), Mary Ann and Peter have recently made major contributions. Their reviewing and editorial contributions proved invaluable. Mary Ann is actively writing new subsections and has provided editorial assistance throughout the document. Peter has provided assistance with restructuring our most difficult chapter and has provided valuable advice on topics to include. Their enthusiasm and dedication has been incredibly helpful. We hope they continue to be involved.

2.8.3 Acknowledgements

A huge special thanks goes to Patrick Lightbody. As an administrator of the SeleniumHQ website, his support has been invaluable. Patrick has helped us understand the Selenium community–our audience. He also set us up with everything we needed on the SeleniumHQ website for developing and releasing this user's guide. His enthusiasm and encouragement definitely helped drive this project. Also thanks goes to Andras Hatvani for his advice on publishing solutions, and to Amit Kumar for participating in our discussions and for assisting with reviewing the document.

And of course, we must recognize the Selenium Developers. They have truly designed an amazing tool. Without the vision of the original designers, and the continued efforts of the current developers, we would not have such a great tool to pass on to you, the reader.
The IDE allows developing and running tests without the need for programming skills as required by Selenium-RC. You can develop your first script in just a few minutes. Selenium-Core is another way of running tests. submitting forms.org) lists all the available commands. However. 11 . You may also run your scripts from the Selenium-IDE. If one has an understanding of how to conduct manual testing of a website they can easily transition to using the Selenium-IDE for both. Selenium-Core also cannot switch between http and https protocols. The Command Reference (available at SeleniumHQ. alerts. selection list options. These commands essentially create a testing language.CHAPTER THREE SELENIUM BASICS 3.2 Introducing Selenium Commands 3. test for specific content. For example.1 Selenium Commands – Selenese Selenium provides a rich set of commands for fully testing your web-app in virtually any way you may imagine. The Selenium-IDE can serve as an excellent way to train junior-level employees in test automation. and table data among other things.1 Getting Started – Choosing Your Selenium Tool Most people get started with Selenium-IDE. event handling. Selenium-IDE is also very easy to install. Some testing tasks are too complex though for the Selenium-IDE. the Selenium community is encouraging the use Selenium-IDE and RC and discouraging the use of Selenium-Core. Support for Selenium-Core is becoming less available and it may even be deprecated in a future release. Release 1. the test will continue execution. They do things like “click this link” and “select that option”. “waitFor” commands wait for some condition to become true (which can be useful for testing Ajax applications). They are also used to automatically generate Assertions. • a text pattern for verifying or asserting expected page content • a text pattern or a selenium variable for entering text in an input field or for selecting an option from an option list. Locators. and ” waitFor”. e. 
Here are a couple more examples: goBackAndWait verifyTextPresent type type Welcome to My Home Page (555) 666-7066 ${myVariableAddress} id=phone id=address1 The command reference describes the parameter requirements for each command. This allows a single “assert” to ensure that the application is on the correct page. When a “verify” fails. selenium variables.Selenium Documentation. • Accessors examine the state of the application and store the results in variables.2. they will fail and halt the test if the condition does not become true within the current timeout setting (see the setTimeout action below). They will succeed immediately if the condition is already true. they consist of the command and two parameters.0 A command is what tells Selenium what to do. If an Action fails. This suffix tells Selenium that the action will cause the browser to make a call to the server. 12 Chapter 3. and that Selenium should wait for a new page to load. and still in others the command may take no parameters at all. e. in others one parameter is required. labels. Examples include “make sure the page title is X” and “verify that this checkbox is checked”. Accessors and Assertions.2 Script Syntax Selenium commands are simple. you can “assertText”. or has an error. “clickAndWait”.g. logging the failure. However. and the commands themselves are described in considerable detail in the section on Selenium Commands. “verify”. etc. however they are typically • a locator for identifying a UI element within a page. • Assertions are like Accessors. It depends on the command. Many Actions can be called with the “AndWait” suffix. text patterns. the execution of the current test is stopped. Selenium commands come in three “flavors”: Actions. When an “assert” fails. “storeTitle”. the test is aborted. For example. • Actions are commands that generally manipulate the state of the application.g. but they verify that the state of the application conforms to what is expected. 
All Selenium Assertions can be used in 3 modes: “assert”. Selenium Basics . 3. followed by a bunch of “verify” assertions to test form field values. “verifyText” and “waitForText”. Parameters vary. In some cases both are required. For example: verifyText //div//a[2] Login The parameters are not always required. the second is a target and the final column contains a value.3 Test Suites A test suite is a collection of tests.3./Login. from the Selenium-IDE. 3. An example tells it all. When using Selenium-IDE. If using an interpreted language like Python with Selenium-RC than some simple programming would be involved in setting up a test suite. The second and third columns may not require values depending on the chosen Selenium command. Nunit could be employed. <html> <head> <title>Test Suite Function Tests . This is done via programming and can be done a number of ways.0 Selenium scripts that will be run from Selenium-IDE may be stored in an HTML text file format. Test Suites 13 . Additionally.Selenium Documentation.Priority 1</title> </head> <body> <table> <tr><td><b>Suite Of Tests</b></td></tr> <tr><td><a href= ".html" >Test Save</a></td></tr> </table> </body> </html> A file similar to this would allow running the tests all at once. Here is an example of a test that opens a page. if C# is the chosen language. Since the whole reason for using Sel-RC is to make use of programming logic for your testing this usually isn’t a problem. Release 1. With a basic knowledge of selenese and Selenium-IDE you can quickly produce and run testcases./SaveValues. but they should be present. An HTML table defines a list of tests where each row defines the filesystem path to each test. Commonly Junit is used to maintain a test suite if one is using Selenium-RC with Java. The first column is used to identify the Selenium command. 
test suites also can be defined using a simple HTML file.html" >Login</a></td></tr> <tr><td><a href= ".html" >Test Searching for Values</a></td></tr> <tr><td><a href= ". Each table row represents a new Selenium command. Test suites can also be maintained when using Selenium-RC. one after another. The syntax again is simple.. Often one will run all the tests in a test suite as one continuous batch-job./SearchValues. This consists of an HTML table with three columns. as defined by its HTML tag. Chapter 3 gets you started and then guides you through all the features of the Selenium-IDE. 14 Chapter 3. in present on the page. waitForPageToLoad pauses execution until an expected new page loads. click/clickAndWait performs a click operation. right-click. verifyElementPresent verifies an expected UI element. verifyText verifies expected text and it’s corresponding HTML tag are present on the page. Selenium Basics . We recommend beginning with the Selenium IDE and its context-sensitive.Selenium Documentation.4 Commonly Used Selenium Commands To conclude our introduction of Selenium. verifyTable verifies a table’s expected contents. as defined by it’s HTML tag.0 3. open opens a page using a URL. 3. waitForElementPresent pauses execution until an expected UI element. and you can have a simple script done in just a minute or two. Called automatically when clickAndWait is used. These are probably the most commonly used commands for building test. menu. verifyTitle/assertTitle verifies an expected page title.5 Summary Now that you’ve seen an introduction to Selenium. we’ll show you a few typical Selenium commands. is present on the page. Release 1. and optionally waits for a new page to load. verifyTextPresent verifies expected text is somewhere on the page. This will allow you to get familiar with the most common Selenium commands quickly. you’re ready to start writing your first scripts. This chapter is all about the Selenium IDE and how to use it effectively. 15 . 
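Because a test suite file is plain HTML, it is easy to generate one programmatically. The following Python sketch is our own illustration (the helper name and file layout are not part of Selenium); it writes a suite file in the one-row-per-test-case format shown above:

```python
import html

def write_suite(title, test_cases, path="TestSuite.html"):
    """Write a Selenium-IDE test suite file: a one-column HTML
    table in which each row links to one test-case file.

    test_cases is a list of (href, display_name) pairs."""
    rows = "\n".join(
        '  <tr><td><a href="{0}">{1}</a></td></tr>'.format(
            html.escape(case, quote=True), html.escape(name))
        for case, name in test_cases)
    content = (
        "<html>\n<head>\n<title>{0}</title>\n</head>\n<body>\n"
        "<table>\n  <tr><td><b>{0}</b></td></tr>\n{1}\n</table>\n"
        "</body>\n</html>\n").format(html.escape(title), rows)
    with open(path, "w") as f:
        f.write(content)
    return content

suite = write_suite("Function Tests - Priority 1",
                    [("./Login.html", "Login"),
                     ("./SearchValues.html", "Test Searching for Values"),
                     ("./SaveValues.html", "Test Save")])
print(suite)
```

The resulting file can be opened from Selenium-IDE with File=>Open Test Suite.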
CHAPTER FOUR

SELENIUM-IDE

4.1 Introduction

The Selenium-IDE (Integrated Development Environment) is the tool you use to develop your Selenium test cases. It's an easy-to-use Firefox plug-in and is generally the most efficient way to develop test cases. This is not only a time-saver, but also an excellent way of learning Selenium script syntax. This chapter is all about the Selenium IDE and how to use it effectively.

4.2 Installing the IDE

Using Firefox, first download the IDE from the SeleniumHQ downloads page. When downloading from Firefox, you'll be presented with an installation confirmation window. Select Install Now. The Firefox Add-ons window pops up, first showing a progress bar, and then, when the download is complete, the installed extension. Restart Firefox. After Firefox reboots, you will find the Selenium-IDE listed under the Firefox Tools menu.

4.3 Opening the IDE

To run the Selenium-IDE, simply select it from the Firefox Tools menu. It opens with an empty script-editing window and a menu for loading or creating new test cases.

4.4 IDE Features

4.4.1 Menu Bar

The File menu allows you to create, open, and save test case and test suite files. The Edit menu allows copy, paste, delete, undo, and select all operations for editing the commands in your test case. The Options menu allows the changing of settings: you can set the timeout value for certain commands, add user-defined user extensions to the base set of Selenium commands, and specify the format (language) used when saving your test cases. The Help menu is the standard Firefox Help menu; only one item on this menu, UI-Element Documentation, pertains to Selenium-IDE.

4.4.2 Toolbar

The toolbar contains buttons for controlling the execution of your test cases, including a step feature for debugging your test cases.

Speed Control: controls how fast your test case runs.

Run All: runs the entire test suite when a test suite with multiple test cases is loaded.

Run: runs the currently selected test. When only a single test is loaded, this button and the Run All button have the same effect.

Pause/Resume: allows stopping and re-starting of a running test case.

Step: allows one to "step" through a test case by running it one command at a time. Use for debugging test cases.

TestRunner Mode: allows you to run the test case in a browser loaded with the Selenium-Core TestRunner. The TestRunner is not commonly used now and is likely to be deprecated. This button is for evaluating test cases for backwards compatibility with the TestRunner. Most users will probably not need this button.

Apply Rollup Rules: this advanced feature allows repetitive sequences of Selenium commands to be grouped into a single action. Detailed documentation on rollup rules can be found in the UI-Element Documentation on the Help menu.

Record: records the user's browser actions. The right-most button, the one with the red dot, is the record button.

4.4.3 Test Case Pane

Your script is displayed in the test case pane. It has two tabs: one for displaying the commands and their parameters in a readable "table" format, and a Source tab displaying the test case in the native format in which the file will be stored. By default, this is HTML, although it can be changed to a programming language such as Java or C#, or a scripting language like Python. See the Options menu for details. The Source view also allows one to edit the test case in its raw form, including copy, cut, and paste operations.

The Command, Target, and Value entry fields display the currently selected command along with its parameters. These are entry fields where you can modify the currently selected command. The first parameter specified for a command in the Reference tab of the bottom pane always goes in the Target field. If a second parameter is specified by the Reference tab, it always goes in the Value field. If you start typing in the Command field, a drop-down list will be populated based on the first characters you type; you can then select your desired command from the drop-down.

4.4.4 Log/Reference/UI-Element/Rollup Pane

The bottom pane is used for four different functions (Log, Reference, UI-Element, and Rollup) depending on which tab is selected.

Log

When you run your test case, error messages and information messages showing the progress are displayed in this pane automatically, even if you do not first select the Log tab. These messages are often useful for test case debugging. Notice the Clear button for clearing the Log. Also notice that the Info button is a drop-down allowing selection of different levels of information to display.
Reference

The Reference tab is the default selection whenever you are entering or modifying selenese commands and parameters in Table mode. In Table mode, the Reference pane displays documentation on the current command. When entering or modifying commands, whether from Table or Source mode, it is critically important to ensure that the parameters specified in the Target and Value fields match those specified in the parameter list shown in the Reference pane. The number of parameters provided must match the number specified, the order of parameters provided must match the order specified, and the type of parameters provided must match the type specified. If there is a mismatch in any of these three areas, the command will not run correctly. While the Reference tab is invaluable as a quick reference, it is still often necessary to consult the Selenium Reference document.

UI-Element and Rollup

Detailed information on these two panes (which cover advanced features) can be found in the UI-Element Documentation on the Help menu of Selenium-IDE.

4.5 Building Test Cases

There are three primary methods for developing test cases. Frequently, a test developer will require all three techniques.

4.5.1 Recording

Many first-time users begin by recording a test case from their interactions with a website. When Selenium-IDE is first opened, the record button is ON by default. (Note: this can be set to OFF as a default with an available user extension.) During recording, Selenium-IDE will automatically insert commands into your test case based on your actions. Typically, this will include:

• clicking a link – click or clickAndWait commands
• entering values – type command
• selecting options from a drop-down listbox – select command
• clicking checkboxes or radio buttons – click command

Here are some "gotchas" to be aware of:

• The type command may require clicking on some other area of the web page for it to record.
• Following a link usually records a click command. You will often need to change this to clickAndWait to ensure your test case pauses until the new page is completely loaded. Otherwise, your test case will continue running commands before the page has loaded all its UI elements, which will cause unexpected test case failures.

4.5.2 Adding Verifications and Asserts With the Context Menu

Your test cases will also need to check the properties of a web-page. This requires assert and verify commands. We won't describe the specifics of these commands here; that is covered in the chapter on "Selenese" Selenium Commands. Here we'll simply describe how to add them to your test case.

With Selenium-IDE recording, go to the browser displaying your test application and right-click anywhere on the page. You will see a context menu showing verify and/or assert commands. The first time you use Selenium, there may only be one Selenium command listed. As you use the IDE, however, you will find additional commands will quickly be added to this menu. Selenium-IDE will attempt to predict what command, along with the parameters, you will need for a selected UI element on the current web-page.

Let's see how this works. Open a web-page of your choosing and select a block of text on the page. A paragraph or a heading will work fine. Now right-click the selected text. The context menu should give you a verifyTextPresent command, and the suggested parameter should be the text itself. Try a few more UI elements. Try right-clicking an image, or a user control like a button or a checkbox. You may need to use Show All Available Commands to see options other than verifyTextPresent. Once you select these other options, the more commonly used ones will show up on the primary context menu. For example, selecting verifyElementPresent for an image should later cause that command to be available on the primary context menu the next time you select an image and right-click.

Also, notice the Show All Available Commands menu option. This shows many, many more commands, along with suggested parameters, for testing your currently selected UI element. Again, these commands will be explained in detail in the chapter on Selenium commands. For now though, feel free to use the IDE to record and select commands into a test case and then run it. You can learn a lot about the Selenium commands simply by experimenting with the IDE.

4.5.3 Editing

Insert Command

Table View: Select the point in your test case where you want to insert the command. Right-click and select Insert Command. Now use the command editing text fields to enter your new command and its parameters.

Source View: Select the point in your test case where you want to insert the command, and enter the HTML tags needed to create a 3-column row containing the Command, first parameter (if one is required by the Command), and second parameter (again, if one is required).

Insert Comment

Comments may be added to make your test case more readable. These comments are ignored when the test case is run.

Table View: Select the point in your test case where you want to insert the comment. Right-click and select Insert Comment. Now use the Command field to enter the comment. Your comment will appear in purple font.

Source View: Select the point in your test case where you want to insert the comment. Add an HTML-style comment, i.e. <!-- your comment here -->. In order to add vertical white space (one or more blank lines) in your tests, you must create empty comments; an empty command will cause an error during execution.

Edit a Command or Comment

Table View: Simply select the line to be changed and edit it using the Command, Target, and Value fields.

Source View: Since Source view provides the equivalent of a WYSIWYG editor, simply modify whichever line you wish: command, parameter, or comment. Be sure to save your test before switching back to Table view.
4.5.4 Opening and Saving a Test Case

The File=>Open, Save, and Save As menu commands behave similarly to opening and saving files in most other programs. When you open an existing test case, Selenium-IDE displays its Selenium commands in the test case pane. Note: at the time of this writing, there's a bug: when the IDE is first opened and you then select File=>Open, nothing happens. If you see this, close down the IDE and restart it (you don't need to close the browser itself). This will fix the problem.

4.6 Running Test Cases

The IDE allows many options for running your test case. You can run a test case all at once, stop and start it, run it one line at a time, run a single command you are currently developing, and do a batch run of an entire test suite. Execution of test cases is very flexible in the IDE.

Run a Test Case: Click the Run button to run the currently displayed test case.

Run a Test Suite: Click the Run All button to run all the test cases in the currently loaded test suite.

Stop and Start: The Pause button can be used to stop the test case while it is running. The icon of this button then changes to indicate the Resume button. To continue, click Resume.

Stop in the Middle: You can set a breakpoint in the test case to cause it to stop on a particular command. This is useful for debugging your test case. To set a breakpoint, select a command, right-click, and from the context menu select Toggle Breakpoint.

Start from the Middle: You can tell the IDE to begin running from a specific command in the middle of the test case. This also is used for debugging. To set a startpoint, select a command, right-click, and from the context menu select Set/Clear Start Point.

Run Any Single Command: Double-click any single command to run it by itself. This is useful when writing a single command, when you are not sure if it is correct; it lets you immediately test the command you are constructing and see if it runs correctly. This is also available from the context menu.

4.7 Using Base URL to Run Test Cases in Different Domains

The Base URL field at the top of the Selenium-IDE window is very useful for allowing test cases to be run across different domains. Suppose that a site named http://news.portal.com had an in-house beta site named http://beta.news.portal.com. Any test cases for these sites that begin with an open statement should specify a relative URL as the argument to open, rather than an absolute URL (one starting with a protocol such as http: or https:). Selenium-IDE will then create an absolute URL by appending the open command's argument onto the end of the value of Base URL. For example, a test case beginning with "open /about.html" would be run against http://news.portal.com/about.html:
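The combination of Base URL and a relative open argument behaves like ordinary URL resolution. As a rough sketch (this is not Selenium code, and the domain names are only illustrations), Python's standard library shows the idea:

```python
from urllib.parse import urljoin

base_production = "http://news.portal.com/"
base_beta = "http://beta.news.portal.com/"

# The same relative argument to "open" resolves against whichever
# Base URL is currently set in the IDE, so one test case can target
# either the production site or the beta site unchanged.
print(urljoin(base_production, "/about.html"))  # http://news.portal.com/about.html
print(urljoin(base_beta, "/about.html"))        # http://beta.news.portal.com/about.html
```

This is why keeping open arguments relative is what makes a single test case portable across domains.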
This same test case, with the Base URL changed to http://beta.news.portal.com/, would be run against http://beta.news.portal.com/about.html.

Test suite files can also be opened and saved via the File menu. However, at the time of this writing, the Open, Save, and Save As items are only for test case files.

4.8 Debugging

Debugging means finding and fixing errors in your test case. This is a normal part of test case development. We won't teach debugging here, as most new users of Selenium will already have some basic experience with debugging. If this is new to you, we recommend you ask one of the developers in your organization.

4.8.1 Breakpoints and Startpoints

The Selenium-IDE supports the setting of breakpoints and the ability to start and stop the running of a test case from any point within the test case. That is, one can run up to a specific command in the middle of the test case and inspect how the test case behaves at that point. To do this, set a breakpoint on the command just before the one to be examined. To set a breakpoint, select a command, right-click, and from the context menu select Toggle Breakpoint. Then click the Run button to run your test case from the beginning up to the breakpoint.

It is also sometimes useful to run a test case from somewhere in the middle to the end of the test case, or up to a breakpoint that follows the starting point. For example, suppose your test case first logs into the website and then performs a series of tests, and you are trying to debug one of those tests. You only need to log in once, but you need to keep rerunning your tests as you are developing them. You can log in once, then run your test case from a startpoint placed after the login portion of your test case. That will prevent you from having to manually log out each time you rerun your test case. To set a startpoint, select a command, right-click, and from the context menu select Set/Clear Start Point.
Then click the Run button to execute the test case beginning at that startpoint.

4.8.2 Stepping Through a Testcase

To execute a test case one command at a time ("step through" it), follow these steps:

1. Start the test case running with the Run button from the toolbar.
2. Immediately pause the executing test case with the Pause button.
3. Repeatedly select the Step button.

4.8.3 Find Button

The Find button is used to see which UI element on the currently displayed webpage (in the browser) is used in the currently selected Selenium command. This is useful when building a locator for a command's first parameter (see the section on locators in the Selenium Commands chapter). It can be used with any command that must identify a UI element on a webpage, i.e. click, clickAndWait, type, and certain assert and verify commands, among others.

From Table view, select any command that has a locator parameter. Click the Find button. Now look on the webpage displayed in the Firefox browser. There should be a bright green rectangle enclosing the element specified by the locator parameter.

4.8.4 Page Source for Debugging

Often, when debugging a test case, you simply must look at the page source (the HTML for the webpage you're trying to test) to determine a problem. Firefox makes this easy. Simply right-click the webpage and select Page Source. The HTML opens in a separate window. Use its Search feature (Edit=>Find) to search for a keyword to find the HTML for the UI element you're trying to test.

Alternatively, select just that portion of the webpage for which you want to see the source. Then right-click the webpage and select View Selection Source. In this case, the separate HTML window will contain just a small amount of source, with highlighting on the portion representing your selection.

4.8.5 Locator Assistance

Whenever Selenium-IDE records a locator-type argument, it stores additional information which allows the user to view other possible locator-type arguments that could be used instead. This feature can be very useful for learning more about locators, and is often needed to help one build a different type of locator than the type that was recorded.

This locator assistance is presented on the Selenium-IDE window as a drop-down list accessible at the right end of the Target field (only when the Target field contains a recorded locator-type argument). Below is a snapshot showing the contents of this drop-down for one command. Note that the first column of the drop-down provides alternative locators, whereas the second column indicates the type of each alternative.

4.9 Writing a Test Suite

A test suite is a collection of test cases which is displayed in the leftmost pane in the IDE. The test suite pane can be manually opened or closed by selecting a small dot halfway down the right edge of the pane (which is the left edge of the entire Selenium-IDE window if the pane is closed). The test suite pane will be automatically opened when an existing test suite is opened or when the user selects the New Test Case item from the File menu. In the latter case, the new test case will appear immediately below the previous test case.

Selenium-IDE does not yet support loading pre-existing test cases into a test suite. Users who want to create or modify a test suite by adding pre-existing test cases must manually edit a test suite file. A test suite file is an HTML file containing a one-column table. Each cell of each row in the <tbody> section contains a link to a test case. For example:
<html>
<body>
<table>
<tbody>
<tr><td><a href="./a.html">A Links</a></td></tr>
<tr><td><a href="./b.html">B Links</a></td></tr>
<tr><td><a href="./c.html">C Links</a></td></tr>
<tr><td><a href="./d.html">D Links</a></td></tr>
</tbody>
</table>
</body>
</html>

Note: Test case files should not have to be co-located with the test suite file that invokes them. However, at the time of this writing, a bug prevents Windows users from being able to place the test cases elsewhere than with the test suite that invokes them, and on Mac OS and Linux systems that is indeed the case.

4.10 User Extensions

User extensions are JavaScript files that allow one to create his or her own customizations and features to add additional functionality. Often this is in the form of customized commands, although this extensibility is not limited to additional commands. There are a number of useful extensions created by users. Perhaps the most popular of all Selenium-IDE extensions is one which provides flow control in the form of while loops and primitive conditionals. This extension is goto_sel_ide.js. For an example of how to use the functionality provided by this extension, look at the page created by its author.

To install this extension, put the pathname to its location on your computer in the Selenium Core extensions field of Selenium-IDE's Options=>Options=>General tab. After selecting the OK button, you must close and reopen Selenium-IDE in order for the extensions file to be read. Any change you make to an extension will also require you to close and reopen Selenium-IDE. Information on writing your own extensions can be found near the bottom of the Selenium Reference document.
4.11 Format

Format, under the Options menu, allows you to select a language for saving and displaying the test case. The default is HTML. If you will be using Selenium-RC to run your test cases, this feature is used to translate your test case into a programming language. Select the language, e.g. Java or PHP, that you will be using with Selenium-RC for developing your test programs. Then simply save the test case using File=>Save. Your test case will be translated into a series of functions in the language you choose. Essentially, program code supporting your test is generated for you by Selenium-IDE.

Also, note that if the generated code does not suit your needs, you can alter it by editing a configuration file which defines the generation process. Each supported language has configuration settings which are editable. This is under the Options=>Options=>Format tab. Note: at the time of this writing, this feature is not yet supported by the Selenium developers. However, the author has altered the C# format in a limited manner and it has worked well.

4.12 Executing Selenium-IDE Tests on Different Browsers

While Selenium-IDE can only run tests against Firefox, tests developed with Selenium-IDE can be run against other browsers using a simple command-line interface that invokes the Selenium-RC server. The -htmlSuite command-line option is the particular feature of interest. This topic is covered in the Run Selenese tests section of the Selenium-RC chapter.

4.13 Troubleshooting

Below is a list of image/explanation pairs which describe frequent sources of problems with Selenium-IDE:
i. Whenever your attempt to use variable substitution fails as is the case for the open command above.e. the two parameters for the store command have been erroneously placed in the reverse order of what is required. and the second required parameter (if one exists) must go in the Value field. For any Selenese command.13. This is sometimes due to putting the variable in the Value field when it should be in the Target field or vice versa. it indicates that you haven’t actually created the variable whose value you’re trying to access. Your extension file’s contents have not been read by Selenium-IDE. 32 Chapter 4. Selenium-IDE is very space-sensitive! An extra space before or after a command will cause it to be unrecognizable. Make sure that the test case is indeed located where the test suite indicates it is located. make sure that your actual test case files have the . Also.html extension both in their filenames. Be sure you have specified the proper pathname to the extensions file via Options=>Options=>General in the Selenium Core extensions field. Selenium-IDE must be restarted after any change to either an extensions file or to the contents of the Selenium Core extensions field.0 One of the test cases in your test suite cannot be found. Release 1.Selenium Documentation. Also. Selenium-IDE . and in the test suite file where they are referenced. Thus. which is confusing. note that the parameter for verifyTitle has two spaces between the words “System” and “Division. Selenium-IDE is correct to generate an error.Selenium Documentation. In the example above.0 This type of error message makes it appear that Selenium-IDE has generated a failure where there is none.” The page’s actual title has only one space between these words. Selenium-IDE is correct that the actual value does not match the value specified in such test cases. 4. However. Release 1.13. Troubleshooting 33 . 
The problem is that the log file error messages collapse a series of two or more spaces into a single space. Selenium-IDE . Release 1.Selenium Documentation.0 34 Chapter 4. you’ll probably want to abort your test case so that you can investigate the cause and fix the issue(s) promptly..CHAPTER FIVE SELENESE SELENIUM COMMANDS Selenium commands.1. Selenese allows multiple ways of checking for UI elements. There’s very little point checking that the first paragraph on the page is the correct one if your test has already failed when checking that the browser is displaying the expected page. will you test that. often called selenese. are the set of commands that run your tests. An example follows: 35 . the text and its position at the top of the page are probably relevant for your test. On the other hand. If..1 Verifying Page Elements Verifying UI elements on a web page is probably the most common feature of your automated tests. 5. A sequence of these commands is a test script. 1. For example. It is important that you understand these different methods because these methods define what you are actually testing. 5. and we present the many choices you have in testing your web application when using Selenium. an element is present somewhere on the page? 2. then you only want to test that an image (as opposed to the specific image file) exists somewhere on the page. you may want to check many attributes of a page without aborting the test case on the first failure as this will allow you to review all failures on the page and take the appropriate action. Effectively an assert will fail the test and abort the current test case. and the web designers frequently change the specific image file along with its position on the page. specific text is at a specific location on the page? For example. you are testing for the existence of an image on the home page.1 Assertion or Verification? Choosing between assert and verify comes down to convenience and management of failures. 
specific text is somewhere on the page? 3. The best use of this feature is to logically group your test commands. Here we explain those commands in detail. if you are testing a text heading. however. If you’re not on the correct page. and start each group with an assert followed by one or more verify test commands. whereas a verify will fail the test and continue to run the test case. and only if this passed will the remaining cells in that row be verified. 5. For example: verifyTextPresent Marketing Analysis This would cause Selenium to search for.1 1. divisions <div>. one can verify that specific text appears at a specific location on the page relative to other UI components on the page. and that it follows a <div> tag and a <p> tag.3 verifyElementPresent Use this command when you must test for the presence of a specific UI element.2.2. verifyElementPresent verifyElementPresent verifyElementPresent verifyElementPresent verifyElementPresent verifyElementPresent //div/p //div/a id=Login link=Go to Marketing Research //a[2] //head/title These examples illustrate the variety of ways a UI element may be tested. One common use is to check for the presence of an image. Only if this passes will the following command run and verify that the text is present in the expected location. Do not use this when you also need to test where the text occurs on the page. Here are a few more examples. 5. The test case then asserts the first column in the second row of the first table contains the expected value. Use verifyTextPresent when you are interested in only the text itself being present on the page. It takes a single argument–the text pattern to be verified.1. verifyText must use a locator. Locators are explained in the next section. rather then its content. 2008 1. 36 Chapter 5. paragraphs.0 open assertTitle verifyText assertTable verifyTable verifyTable /download/ Downloads //h2 1.Selenium Documentation. Selenese Selenium Commands .1. 
5.1.3 verifyElementPresent

Use this command when you must test for the presence of a specific UI element, rather than its content. This verification does not check the text, only the HTML tag. One common use is to check for the presence of an image.

    verifyElementPresent  //div/p/img

This command verifies that an image, specified by the existence of an <img> HTML tag, is present on the page, and that it follows a <div> tag and a <p> tag. The first (and only) parameter is a locator for telling the Selenese command how to find the element. Locators are explained in the next section.

verifyElementPresent can be used to check the existence of any HTML tag within the page. One can check the existence of links, paragraphs, divisions <div>, etc. Here are a few more examples:

    verifyElementPresent  //div/p
    verifyElementPresent  //div/a
    verifyElementPresent  id=Login
    verifyElementPresent  link=Go to Marketing Research
    verifyElementPresent  //a[2]
    verifyElementPresent  //head/title

These examples illustrate the variety of ways a UI element may be tested.

5.1.4 verifyText

Use verifyText when both the text and its UI element must be tested. verifyText must use a locator. If one chooses an XPath or DOM locator, one can verify that specific text appears at a specific location on the page relative to other UI components on the page.

    verifyText  //table/tr/td/div/p  This is my text and it occurs right after the div inside the table.

5.2 Locating Elements

For many Selenium commands, a target is required. This target identifies an element in the content of the web application, and consists of the location strategy followed by the location in the format locatorType=location. The locator type can be omitted in many cases. The various locator types are explained below with examples for each.

5.2.1 Default Locators

You can choose to omit the locator type in the following situations:

• Locators starting with "document" will use the DOM locator strategy. See Locating by DOM.
• Locators starting with "//" will use the XPath locator strategy. See Locating by XPath.
• Locators that start with anything other than the above or a valid locator type will default to using the identifier locator strategy. See Locating by Identifier.
5.2.2 Locating by Identifier

This is probably the most common method of locating elements and is the catch-all default when no recognised locator type is used. With this strategy, the first element with the id attribute value matching the location will be used. If no element has a matching id attribute, then the first element with a name attribute matching the location will be used.

For instance, your page source could have id and name attributes as follows:

    1   <html>
    2    <body>
    3     <form id="loginForm">
    4      <input name="username" type="text" />
    5      <input name="password" type="password" />
    6      <input name="continue" type="submit" value="Login" />
    7      <input name="continue" type="button" value="Clear" />
    8     </form>
    9    </body>
    10  </html>

The following locator strategies would return the elements from the HTML snippet above indicated by line number:

• identifier=loginForm (3)
• identifier=username (4)
• identifier=continue (6)
• continue (6)

Since the identifier type of locator is the default, the identifier= in the first three examples above is not necessary.

5.2.3 Locating by Id

This type of locator is more limited than the identifier locator type, but also more explicit. Use this when you know an element's id attribute.

• id=loginForm (3)

5.2.4 Locating by Name

The name locator type will locate the first element with a matching name attribute. If multiple elements have the same value for a name attribute, then you can use filters to further refine your location strategy. The default filter type is value (matching the value attribute).

• name=username (4)
• name=continue value=Clear (7)
• name=continue type=button (7)

Unlike some types of XPath and DOM locators, the three types of locators above allow Selenium to test a UI element independent of its location on the page. So if the page structure and organization is altered, the test will still pass. One may or may not want to also test whether the page structure changes. In the case where web designers frequently alter the page, but its functionality must be regression tested, testing via id and name attributes, or really via any HTML property, becomes very important.
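The default-locator rules above can be summarized in a few lines of code. The following Python sketch is an illustration of those rules only; it is not Selenium's actual implementation, and the set of known locator-type prefixes is assumed from the strategies described in this section:

```python
def locator_strategy(locator):
    # Mirrors the default-locator rules described above (a sketch,
    # not Selenium's code). Known prefixes assumed from this chapter.
    known = {"identifier", "id", "name", "xpath", "dom", "link", "css"}
    if locator.startswith("document"):
        return "dom"                    # DOM locator strategy
    if locator.startswith("//"):
        return "xpath"                  # XPath locator strategy
    prefix, sep, _ = locator.partition("=")
    if sep and prefix in known:
        return prefix                   # explicit locatorType=location
    return "identifier"                 # catch-all default
```

For example, `locator_strategy("loginForm")` falls through to the identifier strategy, while `locator_strategy("//form[1]")` resolves to XPath without any label.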
5.2.5 Locating by XPath

XPath is the language used for locating nodes in an XML document. As HTML can be an implementation of XML (XHTML), Selenium users can leverage this powerful language to target elements in their web applications. XPath extends beyond (as well as supporting) the simple methods of locating by id or name attributes, and opens up all sorts of new possibilities such as locating the third checkbox on the page.

One of the main reasons for using XPath is when you don't have a suitable id or name attribute for the element you wish to locate. You can use XPath to either locate the element in absolute terms (not advised), or relative to an element that does have an id or name attribute. Absolute XPaths contain the location of all elements from the root (html) and as a result are likely to fail with only the slightest adjustment to the application. By finding a nearby element with an id or name attribute (ideally a parent element) you can locate your target element based on the relationship. This is much less likely to change and can make your tests more robust.

Since only xpath locators start with "//", it is not necessary to include the xpath= label when specifying an XPath locator.

• xpath=/html/body/form[1] (3) - Absolute path (would break if the HTML was changed only slightly)
• //form[1] (3) - First form element in the HTML
• xpath=//form[@id='loginForm'] (3) - The form element with @id of 'loginForm'
• xpath=//form[input/@name='username'] (4) - First form element with an input child element with @name of 'username'
• //input[@name='username'] (4) - First input element with @name of 'username'
• //form[@id='loginForm']/input[1] (4) - First input child element of the form element with @id of 'loginForm'
• //input[@name='continue'][@type='button'] (7) - Input with @name of 'continue' and @type of 'button'
• //form[@id='loginForm']/input[4] (7) - Fourth input child element of the form element with @id of 'loginForm'

These examples cover some basics. XPath locators can also be used to specify elements via attributes other than id and name.
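Several of the XPath examples above can be tried outside a browser. Python's standard-library ElementTree supports a useful subset of XPath, enough to exercise some of these locators against the sample login form (reproduced below as well-formed XML); this is an illustration only, not how Selenium evaluates XPath:

```python
import xml.etree.ElementTree as ET

# The login form from the examples in this section, made well-formed XML.
PAGE = """
<html>
  <body>
    <form id="loginForm">
      <input name="username" type="text" />
      <input name="password" type="password" />
      <input name="continue" type="submit" value="Login" />
      <input name="continue" type="button" value="Clear" />
    </form>
  </body>
</html>
"""

root = ET.fromstring(PAGE)

# //input[@name='username']  -- first input with @name of 'username'
username = root.find(".//input[@name='username']")

# //form[@id='loginForm']/input[1]  -- first input child of that form
first_input = root.find(".//form[@id='loginForm']/input[1]")

# //input[@name='continue'][@type='button']  -- the Clear button
clear = root.find(".//input[@name='continue'][@type='button']")
```

Note the leading "." in ElementTree paths: they are evaluated relative to the element you call find() on, whereas a browser evaluates "//..." from the document root.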
In order to learn more, the following references are recommended:

• W3Schools XPath Tutorial
• W3C XPath Recommendation
• XPath Tutorial - with interactive examples

There are also a couple of very useful Firefox Add-ons that can assist in discovering the XPath of an element:

• XPath Checker - suggests XPath and can be used to test XPath results.
• Firebug - XPath suggestions are just one of the many powerful features of this very useful add-on.

5.2.6 Locating Hyperlinks by Link Text

This is a simple method of locating a hyperlink in your web page by using the text of the link. If two links with the same text are present, then the first match will be used.

    1  <html>
    2   <body>
    3    <p>Are you sure you want to do this?</p>
    4    <a href="continue.html">Continue</a>
    5    <a href="cancel.html">Cancel</a>
    6   </body>
    7  </html>

• link=Continue (4)
• link=Cancel (5)

5.2.7 Locating by DOM

The Document Object Model represents an HTML document and can be accessed using JavaScript. This location strategy takes JavaScript that evaluates to an element on the page, which can be simply the element's location using the hierarchical dotted notation. Since only dom locators start with "document", it is not necessary to include the dom= label when specifying a dom locator.

    1   <html>
    2    <body>
    3     <form id="loginForm">
    4      <input name="username" type="text" />
    5      <input name="password" type="password" />
    6      <input name="continue" type="submit" value="Login" />
    7      <input name="continue" type="button" value="Clear" />
    8     </form>
    9    </body>
    10  </html>

• dom=document.getElementById('loginForm') (3)
• dom=document.forms['loginForm'] (3)
• dom=document.forms[0] (3)
• document.forms[0].username (4)
• document.forms[0].elements['username'] (4)
• document.forms[0].elements[0] (4)
• document.forms[0].elements[3] (7)

You can use Selenium itself as well as other sites and extensions to explore the DOM of your web application.

5.2.8 Locating by CSS

CSS (Cascading Style Sheets) is a language for describing the rendering of HTML and XML documents. CSS uses Selectors for binding style properties to elements in the document. These Selectors can be used by Selenium as another locating strategy.

• css=input.required[type="text"] (4)
• css=input.passfield (5)
• css=#loginForm input[type="button"] (7)
• css=#loginForm input:nth-child(2) (5)

For more information about CSS Selectors, the best place to go is the W3C publication. A good reference also exists on W3Schools, and you'll find additional references there.

Note: Most experienced Selenium users recommend CSS as their locating strategy of choice as it's considerably faster than XPath and can find the most complicated objects in an intrinsic HTML document.
5.3 Matching Text Patterns

Like locators, patterns are a type of parameter frequently required by Selenese commands. Examples of commands which require patterns are verifyTextPresent, verifyTitle, verifyAlert, assertConfirmation, verifyText, and verifyPrompt. And as has been mentioned above, link locators can utilize a pattern. Patterns allow one to describe, via the use of special characters, what text is expected rather than having to specify that text exactly. There are three types of patterns: globbing, regular expressions, and exact.

5.3.1 Globbing Patterns

Most people are familiar with globbing as it is utilized in filename expansion at a DOS or Unix/Linux command line, such as ls *.c. In this case, globbing is used to display all the files ending with a .c extension that exist in the current directory. Globbing is fairly limited. Only two special characters are supported in the Selenium implementation:

    *      which translates to "match anything," i.e., nothing, a single character, or many characters.
    [ ]    (character class) which translates to "match any single character found inside the square brackets." A dash (hyphen) can be used as a shorthand to specify a range of characters (which are contiguous in the ASCII character set).

A few examples will make the functionality of a character class clear:

    [aeiou]      matches any lowercase vowel
    [0-9]        matches any digit
    [a-zA-Z0-9]  matches any alphanumeric character

In most other contexts, globbing includes a third special character, the ?. However, Selenium globbing patterns only support the asterisk and character class.

To specify a globbing pattern parameter for a Selenese command, one can prefix the pattern with a glob: label. However, because globbing patterns are the default, one can also omit the label and specify just the pattern itself.

Below is an example of two commands that use globbing patterns.

    click        link=glob:Film*Television Department
    verifyTitle  glob:*Film*Television*

The actual link text on the page being tested was "Film/Television Department"; by using a pattern rather than the exact text, the click command will work even if the link text is changed to "Film & Television Department" or "Film and Television Department". The glob pattern's asterisk will match "anything or nothing" between the word "Film" and the word "Television".

The actual title of the page reached by clicking on the link was "De Anza Film And Television Department - Menu". By using a pattern rather than the exact text, the verifyTitle will pass as long as the two words "Film" and "Television" appear (in that order) anywhere in the page's title. For example, if the page's owner should shorten the title to just "Film & Television Department," the test would still pass. Using a pattern for both a link and a simple test that the link worked (such as the verifyTitle above does) can greatly reduce the maintenance for such test cases.
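The globbing rules above are easy to model by translating a glob into a regular expression. The sketch below illustrates only the rules stated in this section (just * and [ ] are special, and character classes are assumed to be well-formed); it is not Selenium's actual matching code, and whole-string anchoring is an assumption about how title matching behaves:

```python
import re

def glob_to_regex(pattern):
    # Translate a Selenium-style glob: '*' matches anything or nothing,
    # '[...]' is a character class, everything else is literal.
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "*":
            out.append(".*")
            i += 1
        elif ch == "[":
            end = pattern.index("]", i)      # assumes a well-formed class
            out.append(pattern[i:end + 1])   # copy the class verbatim
            i = end + 1
        else:
            out.append(re.escape(ch))
            i += 1
    return "^" + "".join(out) + "$"          # anchored to the whole string

def glob_match(pattern, text):
    return re.search(glob_to_regex(pattern), text) is not None
```

With this sketch, glob:*Film*Television* matches the De Anza page title, and glob:Film*Television Department matches the original link text.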
5.3.2 Regular Expression Patterns

Regular expression patterns are the most powerful of the three types of patterns that Selenese supports. Regular expressions are also supported by most high-level programming languages, many text editors, and a host of tools, including the Linux/Unix command-line utilities grep, sed, and awk. In Selenese, regular expression patterns allow a user to perform many tasks that would be very difficult otherwise. For example, regexp:[0-9]+ is a simple pattern that will match a decimal number of any length.

Whereas Selenese globbing patterns support only the * and [ ] (character class) features, Selenese regular expression patterns offer the same wide array of special characters that exist in JavaScript. Below are a subset of those special characters:

    PATTERN   MATCH
    .         any single character
    [ ]       character class: any single character that appears inside the brackets
    *         quantifier: 0 or more of the preceding character (or group)
    +         quantifier: 1 or more of the preceding character (or group)
    ?         quantifier: 0 or 1 of the preceding character (or group)
    {1,5}     quantifier: 1 through 5 of the preceding character (or group)

In Selenese, regular expression patterns must begin with either a regexp: or a regexpi: label. The former is case-sensitive; the latter is case-insensitive.

A few examples will help clarify how regular expression patterns can be used with Selenese commands. The first one uses what is probably the most commonly used regular expression pattern: .* ("dot star").
This two-character sequence can be translated as "0 or more occurrences of any character" or, more simply, "anything or nothing." It is the equivalent of the one-character globbing pattern * (a single asterisk).

    click        link=regexp:Film.*Television Department
    verifyTitle  regexp:.*Film.*Television.*

The example above is functionally equivalent to the earlier example that used globbing patterns for this same test. The only differences are the prefix (regexp: instead of glob:) and the "anything or nothing" pattern (.* instead of just *).

The more complex example below tests that the Yahoo! Weather page for Anchorage, Alaska contains info on the sunrise time:

    open               http://weather.yahoo.com/forecast/USAK0012.html
    verifyTextPresent  regexp:Sunrise: *[0-9]{1,2}:[0-9]{2} [ap]m

Let's examine the regular expression above one part at a time:

    Sunrise: *    The string Sunrise: followed by 0 or more spaces
    [0-9]{1,2}    1 or 2 digits (for the hour of the day)
    :             The character : (no special characters involved)
    [0-9]{2}      2 digits (for the minutes)
    [ap]m         "a" or "p" followed by "m" (am or pm)
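Since Selenese regexp: patterns use JavaScript regular-expression syntax, this particular sunrise pattern is written identically in Python, so it can be checked directly with the re module. The page text below is invented sample data, not taken from the Yahoo! page:

```python
import re

# The sunrise pattern from the Selenese example above.
pattern = r"Sunrise: *[0-9]{1,2}:[0-9]{2} [ap]m"

# Invented sample of the kind of text verifyTextPresent would scan.
page_text = "Sunrise: 7:04 am  Sunset: 6:29 pm"

match = re.search(pattern, page_text)
```

re.search mirrors verifyTextPresent's "somewhere on the page" behavior: the pattern does not have to match the whole text, only some substring of it.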
5.3.3 Exact Patterns

The exact type of Selenium pattern is of marginal usefulness. It uses no special characters at all. So, if one needed to look for an actual asterisk character (which is special for both globbing and regular expression patterns), the exact pattern would be one way to do that. For example, if one wanted to select an item labeled "Real *" from a dropdown, the following code might work or it might not:

    select  //select  glob:Real *

The asterisk in the glob:Real * pattern will match anything or nothing. So, if there was an earlier select option labeled "Real Numbers," it would be the option selected rather than the "Real *" option. In order to ensure that the "Real *" item would be selected, the exact: prefix could be used:

    select  //select  exact:Real *

It is rather unlikely that most testers will need to look for literal asterisks; thus, globbing patterns and regular expression patterns are sufficient for the vast majority of us.

5.4 The "AndWait" Commands

The difference between a command and its AndWait alternative is that the regular command (e.g. click) will do the action and continue with the following command as fast as it can, while the AndWait alternative (e.g. clickAndWait) tells Selenium to wait for the page to load after the action has been done.

The AndWait alternative is always used when the action causes the browser to navigate to another page or reload the present one. Be aware, if you use an AndWait command for an action that does not trigger a navigation/refresh, your test will fail. This happens because Selenium will reach the AndWait's timeout without seeing any navigation or refresh being made, causing Selenium to raise a timeout exception.

5.5 The waitFor Commands in AJAX applications

In AJAX driven web applications, data is retrieved from the server without refreshing the page. Using andWait commands will not work, as the page is not actually refreshed. Pausing the test execution for a certain period of time is also not a good approach, as the web element might appear later or earlier than the stipulated period depending on the system's responsiveness, load or other uncontrolled factors of the moment, leading to test failures. The best approach would be to wait for the needed element in a dynamic period and then continue the execution as soon as the element is found.

This is done using waitFor commands, such as waitForElementPresent or waitForVisible, which wait dynamically, checking for the desired condition every second and stopping as soon as the condition is met.
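The waitFor behavior described above, poll periodically, succeed as soon as the condition holds, fail with a timeout otherwise, can be sketched generically. This is an illustration of the pattern, not Selenium's implementation; the function name and defaults are invented for the example:

```python
import time

def wait_for(condition, timeout=30.0, poll=1.0):
    # Poll the condition until it holds or the timeout elapses,
    # mirroring how waitForElementPresent / waitForVisible behave.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    raise TimeoutError("condition not met within %s seconds" % timeout)
```

A fixed sleep either wastes time or fails on a slow run; polling like this stops as soon as the element appears, which is why the waitFor commands are preferred over pauses.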
5.6 Sequence of Evaluation and Flow Control

When a script runs, it simply runs in sequence, one command after another. Selenese, by itself, does not support condition statements (if-else, etc.) or iteration (for, while, etc.). Many useful tests can be conducted without flow control. However, for a functional test of dynamic content, possibly involving multiple pages, programming logic is often needed.

When flow control is needed, there are three options:

1. Run the script using Selenium-RC and a client library such as Java or PHP to utilize the programming language's flow control features.
2. Run a small JavaScript snippet from within the script using the storeEval command.
3. Install the goto_sel_ide.js extension.

Most testers will export the test script into a programming language file that uses the Selenium-RC API (see the Selenium-IDE chapter). However, some organizations prefer to run their scripts from Selenium-IDE whenever possible (such as when they have many junior-level people running tests for them, or when programming skills are lacking). If this is your case, consider a JavaScript snippet or the goto_sel_ide.js extension.

5.7 Store Commands and Selenium Variables

One can use Selenium variables to store constants at the beginning of a script. Also, when combined with a data-driven test design (discussed in a later section), Selenium variables can be used to store values passed to your test program from the command-line, from another program, or from a file.

The plain store command is the most basic of the many store commands and can be used to simply store a constant value in a selenium variable. It takes two parameters, the text value to be stored and a selenium variable. Use the standard variable naming conventions of only alphanumeric characters when choosing a name for your variable.

    store  paul@mysite.org  userName

Later in your script, you'll want to use the stored value of your variable. To access the value of a variable, enclose the variable in curly brackets ({}) and precede it with a dollar sign like this:

    verifyText  //div/p  ${userName}

A common use of variables is for storing input for an input field.

    type  id=login  ${userName}

Selenium variables can be used in either the first or second parameter and are interpreted by Selenium prior to any other operations performed by the command. A Selenium variable may also be used within a locator expression.

An equivalent store command exists for each verify and assert command. Here are a couple more commonly used store commands.

5.7.1 storeElementPresent

This corresponds to verifyElementPresent. It simply stores a boolean value, "true" or "false", depending on whether the UI element is found.

5.7.2 storeText

StoreText corresponds to verifyText. It uses a locator to identify specific page text. The text, if found, is stored in the variable. StoreText can be used to extract text from the page being tested.

5.7.3 storeEval

This command takes a script as its first parameter. Embedding JavaScript within Selenese is covered in the next section. StoreEval allows the test to store the result of running the script in a variable.
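The ${...} substitution described above, where variable references are expanded before the command runs, can be illustrated with a few lines of Python. This is a sketch of the idea, not Selenium's interpolation code, and the variable names are the ones from the examples:

```python
import re

def interpolate(parameter, stored_vars):
    # Replace every ${name} in a command parameter with the stored
    # value, the way Selenium expands variables before executing.
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: str(stored_vars[m.group(1)]),
                  parameter)
```

Because the substitution happens on the raw parameter text, it works equally well inside a value (`Username is ${userName}`) or inside a locator expression (`id=${fieldId}`).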
5.8 JavaScript and Selenese Parameters

JavaScript can be used with two types of Selenese parameters: script and non-script (usually expressions). In most cases, you'll want to access and/or manipulate a test case variable inside the JavaScript snippet used as a Selenese parameter. All variables created in your test case are stored in a JavaScript associative array. An associative array has string indexes rather than sequential numeric indexes. The associative array containing your test case's variables is named storedVars. Whenever you wish to access or manipulate a variable within a JavaScript snippet, you must refer to it as storedVars['yourVariableName'].

5.8.1 JavaScript Usage with Script Parameters

Several Selenese commands specify a script parameter, including assertEval, verifyEval, storeEval, and waitForEval. These parameters require no special syntax. A Selenium-IDE user would simply place a snippet of JavaScript code into the appropriate field, normally the Target field (because a script parameter is normally the first or only parameter).

The example below illustrates how a JavaScript snippet can be used to perform a simple numerical calculation:

    store            10                                            hits
    storeXpathCount  //blockquote                                  blockquotes
    storeEval        storedVars['hits']-storedVars['blockquotes']  paragraphs

This next example illustrates how a JavaScript snippet can include calls to methods, in this case the JavaScript String object's toUpperCase method and toLowerCase method:

    store      Edith Wharton                      name
    storeEval  storedVars['name'].toUpperCase()   uc
    storeEval  storedVars['name'].toLowerCase()   lc

5.8.2 JavaScript Usage with Non-Script Parameters

JavaScript can also be used to help generate values for parameters, even when the parameter is not specified to be of type script. However, in this case, special syntax is required: the JavaScript snippet must be enclosed inside curly braces and preceded by the label javascript, as in javascript {*yourCodeHere*}. Below is an example in which the type command's second parameter value is generated via JavaScript code using this special syntax:

    store  league of nations  searchString
    type   q                  javascript{storedVars['searchString'].toUpperCase()}
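A Python analogy for the storedVars examples above: all test-case variables live in one associative array (a dict in Python), and each snippet computes a new value from it. JavaScript's toUpperCase/toLowerCase become Python's upper/lower here, and the starting values are taken from the examples:

```python
# storedVars as a plain dict; the values mirror the Selenese examples.
storedVars = {"hits": 10, "blockquotes": 2, "name": "Edith Wharton"}

# storeEval storedVars['hits']-storedVars['blockquotes'] -> paragraphs
storedVars["paragraphs"] = storedVars["hits"] - storedVars["blockquotes"]

# storeEval storedVars['name'].toUpperCase() / .toLowerCase()
storedVars["uc"] = storedVars["name"].upper()
storedVars["lc"] = storedVars["name"].lower()
```

The point of the analogy is that a store command never creates a new namespace; every snippet reads from and writes back into the same single associative array.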
5.9 echo - The Selenese Print Command

Selenese has a simple command that allows you to print text to your test's output. This is useful for providing informational progress notes in your test which display on the console as your test is running. These notes also can be used to provide context within your test result reports, which can be useful for finding where a defect exists on a page in the event your test finds a problem. Finally, echo statements can be used to print the contents of Selenium variables.

    echo  Testing page footer now.
    echo  Username is ${userName}
5.10 Alerts, Popups, and Multiple Windows

This section is not yet developed.

CHAPTER SIX

SELENIUM-RC

6.1 Introduction

Selenium-RC is the solution for tests that need more than simple browser actions and linear execution. Selenium-RC uses the full power of programming languages to create more complex tests like reading and writing files, querying a database, and emailing test results.

You'll want to use Selenium-RC whenever your test requires logic not supported by Selenium-IDE. What logic could this be? For example, Selenium-IDE does not directly support:

• condition statements
• iteration
• logging and reporting of test results
• error handling, particularly unexpected errors
• database testing
• test case grouping
• re-execution of failed tests
• test case dependency
• screenshot capture of test failures

Although these tasks are not supported by Selenium directly, all of them can be achieved by using programming techniques with a language-specific Selenium-RC client library. In the Adding Some Spice to Your Tests section, you'll find examples that demonstrate the advantages of using a programming language for your tests.

6.2 How Selenium-RC Works

First, we will describe how the components of Selenium-RC operate and the role each plays in running your test scripts.
6.2.1 RC Components

Selenium-RC components are:

• The Selenium Server, which launches and kills browsers, interprets and runs the Selenese commands passed from the test program, and acts as an HTTP proxy, intercepting and verifying HTTP messages passed between the browser and the AUT.
• Client libraries, which provide the interface between each programming language and the Selenium-RC Server.

Here is a simplified architecture diagram. The diagram shows the client libraries communicate with the Server, passing each Selenium command for execution. Then the server passes the Selenium command to the browser using Selenium-Core JavaScript commands. The browser, using its JavaScript interpreter, executes the Selenium command. This runs the Selenese action or verification you specified in your test script.

6.2.2 Selenium Server

Selenium Server receives Selenium commands from your test program, interprets them, and reports back to your program the results of running those tests. The RC server bundles Selenium Core and automatically injects it into the browser. This occurs when your test program opens the browser (using a client library API function). Selenium-Core is a JavaScript program, actually a set of JavaScript functions which interprets and executes Selenese commands using the browser's built-in JavaScript interpreter.

The Server receives the Selenese commands from your test program using simple HTTP GET/POST requests. This means you can use any programming language that can send HTTP requests to automate Selenium tests on the browser.

6.2.3 Client Libraries

The client libraries provide the programming support that allows you to run Selenium commands from a program of your own design. There is a different client library for each supported language. A Selenium client library provides a programming interface (API), i.e., a set of functions, which run Selenium commands from your own program. Within each interface, there is a programming function that supports each Selenese command.

The client library also receives the result of each command and passes it back to your program. Your program can receive the result and store it into a program variable, reporting it as a success or failure, or possibly taking corrective action if it was an unexpected error.

So to create a test program, you simply write a program that runs a set of Selenium commands using a client library API. And, optionally, if you already have a Selenese test script created in the Selenium-IDE, you can generate the Selenium-RC code. The Selenium-IDE can translate (using its Export menu item) its Selenium commands into a client-driver's API function calls. See the Selenium-IDE chapter for specifics on exporting RC code from Selenium-IDE.

6.3 Installation

After downloading the Selenium-RC zip file from the downloads page, you'll notice it has several subfolders. Once you've chosen a language to work with, you simply need to:

• Install the Selenium-RC Server.
• Set up a programming project using a language-specific client driver.

6.3.1 Installing Selenium Server

The Selenium-RC server is simply a Java jar file (selenium-server.jar), which doesn't require any special installation. Just downloading the zip file and extracting the server in the desired directory is sufficient.
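The simple HTTP GET/POST requests mentioned in 6.2.2 are easy to construct by hand. The sketch below builds (without sending) a command URL in the style of the classic Selenium-RC wire protocol; the /selenium-server/driver/ endpoint and the cmd/1/2/sessionId parameter names are assumptions based on that protocol, not taken from this chapter:

```python
from urllib.parse import urlencode

def rc_command_url(host, port, cmd, *args, session_id=None):
    # Sketch of the request a client library would send to the RC
    # server: the command name, its positional arguments numbered
    # 1, 2, ..., and (after getNewBrowserSession) a session id.
    params = [("cmd", cmd)]
    params += [(str(i), arg) for i, arg in enumerate(args, start=1)]
    if session_id is not None:
        params.append(("sessionId", session_id))
    return "http://%s:%s/selenium-server/driver/?%s" % (
        host, port, urlencode(params))
```

This is why any language that can send HTTP requests can drive Selenium: a client library is, at bottom, a convenience wrapper around URLs like these and the plain-text responses the server returns.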
For the server to run you'll need Java installed and the PATH environment variable correctly configured to run it from the console. You can check that you have Java correctly installed by running the following on a console:

    java -version

If you get a version number (which needs to be 1.5 or later), you're ready to start using Selenium-RC.

6.3.2 Running Selenium Server

Before starting any tests you must start the server. Go to the directory where Selenium-RC's server is located and run the following from a command-line console:

    java -jar selenium-server.jar

This can be simplified by creating a batch or shell executable file (.bat on Windows and .sh on Linux) containing the command above. Then make a shortcut to that executable on your desktop and simply double-click the icon to start the server.

6.3.3 Using the Java Client Driver

• Download Selenium-RC from the SeleniumHQ downloads page.
• Extract the file selenium-java-client-driver.jar.
• Open your desired Java IDE (Eclipse, NetBeans, IntelliJ, Netweaver, etc.).
• Create a new project.
• Add the selenium-java-client-driver.jar file to your project as a reference.
• Add to your project classpath the file selenium-java-client-driver.jar.
• From Selenium-IDE, export a script to a Java file and include it in your Java project, or write your Selenium test in Java using the selenium-java-client API. The API is presented later in this chapter.
• Run Selenium server from the console.
• Execute your test from the Java IDE or from the command-line.

You can either use JUnit or TestNG to run your test, or you can write your own simple main() program. These concepts are explained later in this section. For details on Java test project configuration, see the Appendix sections Configuring Selenium-RC With Eclipse and Configuring Selenium-RC With Intellij.

6.3.4 Using the Python Client Driver

• Download Selenium-RC from the SeleniumHQ downloads page.
• Extract the file selenium.py.
• Either write your Selenium test in Python or export a script from Selenium-IDE to a Python file.
• Add to your test's path the file selenium.py.
• Run Selenium server from the console.
• Execute your test from a console or your Python IDE.

For details on Python client driver configuration, see the appendix Python Client Driver Configuration.

6.3.5 Using the .NET Client Driver

• Download Selenium-RC from the SeleniumHQ downloads page.
• Extract the folder.
• Download and install NUnit. (Note: You can use NUnit as your test engine. If you're not familiar yet with NUnit, you can also write a simple main() function to run your tests; however, NUnit is very useful as a test engine.)
• Open your desired .Net IDE (Visual Studio, SharpDevelop, MonoDevelop).
• Create a class library (.dll).
• Add references to the following DLLs: nmock.dll, nunit.core.dll, nunit.framework.dll, ThoughtWorks.Selenium.Core.dll, ThoughtWorks.Selenium.IntegrationTests.dll and ThoughtWorks.Selenium.UnitTests.dll.
• Write your Selenium test in a .Net language (C#, VB.Net), or export a script from Selenium-IDE to a C# file and copy this code into the class file you just created.
• Write your own simple main() program, or you can include NUnit in your project for running your test. These concepts are explained later in this chapter.
• Run Selenium server from the console.
• Run your test either from the IDE, from the NUnit GUI or from the command line.

For specific details on .NET client driver configuration with Visual Studio, see the appendix .NET client driver configuration.

6.4 From Selenese to a Program

The primary task for using Selenium-RC is to convert your Selenese into a programming language. In this section, we provide several different language-specific examples.

6.4.1 Sample Test Script

Let's start with an example Selenese test script. Imagine recording the following test with Selenium-IDE.

    open               /
    type               q  selenium rc
    clickAndWait       btnG
    assertTextPresent  Results * for selenium rc

Note: This example would work with the Google search page http://www.google.com
6.4.2 Selenese as Programming Code

Here is the test script exported (via Selenium-IDE) to each of the supported programming languages. If you have at least basic knowledge of an object-oriented programming language, you will understand how Selenium runs Selenese commands by reading one of these examples. To see an example in a specific language, select one of these buttons.

In C#:

    using System;
    using System.Text;
    using System.Text.RegularExpressions;
    using System.Threading;
    using NUnit.Framework;
    using Selenium;

    namespace SeleniumTests
    {
        [TestFixture]
        public class NewTest
        {
            private ISelenium selenium;
            private StringBuilder verificationErrors;

            [SetUp]
            public void SetupTest()
            {
                selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                                               "http://www.google.com/");
                selenium.Start();
                verificationErrors = new StringBuilder();
            }

            [TearDown]
            public void TeardownTest()
            {
                try
                {
                    selenium.Stop();
                }
                catch (Exception)
                {
                    // Ignore errors if unable to close the browser
                }
                Assert.AreEqual("", verificationErrors.ToString());
            }

            [Test]
            public void TheNewTest()
            {
                selenium.Open("/");
                selenium.Type("q", "selenium rc");
                selenium.Click("btnG");
                selenium.WaitForPageToLoad("30000");
                Assert.IsTrue(selenium.IsTextPresent("Results * for selenium rc"));
            }
        }
    }
In Java:

    package com.example.tests;

    import com.thoughtworks.selenium.*;
    import java.util.regex.Pattern;

    public class NewTest extends SeleneseTestCase {
        public void setUp() throws Exception {
            setUp("http://www.google.com/", "*firefox");
        }

        public void testNew() throws Exception {
            selenium.open("/");
            selenium.type("q", "selenium rc");
            selenium.click("btnG");
            selenium.waitForPageToLoad("30000");
            assertTrue(selenium.isTextPresent("Results * for selenium rc"));
        }
    }

In Perl:

    use strict;
    use warnings;
    use Time::HiRes qw( sleep );
    use Test::WWW::Selenium;
    use Test::More "no_plan";
    use Test::Exception;

    my $sel = Test::WWW::Selenium->new( host        => "localhost",
                                        port        => 4444,
                                        browser     => "*firefox",
                                        browser_url => "http://www.google.com/" );

    $sel->open_ok("/");
    $sel->type_ok("q", "selenium rc");
    $sel->click_ok("btnG");
    $sel->wait_for_page_to_load_ok("30000");
    $sel->is_text_present_ok("Results * for selenium rc");

In PHP:

    <?php
    require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

    class Example extends PHPUnit_Extensions_SeleniumTestCase
    {
        function setUp()
        {
            $this->setBrowser("*firefox");
            $this->setBrowserUrl("http://www.google.com/");
        }

        function testMyTestCase()
        {
            $this->open("/");
            $this->type("q", "selenium rc");
            $this->click("btnG");
            $this->waitForPageToLoad("30000");
            $this->assertTrue($this->isTextPresent("Results * for selenium rc"));
        }
    }
    ?>

In Python:

    from selenium import selenium
    import unittest, time, re

    class NewTest(unittest.TestCase):
        def setUp(self):
            self.verificationErrors = []
            self.selenium = selenium("localhost", 4444, "*firefox",
                    "http://www.google.com/")
            self.selenium.start()

        def test_new(self):
            sel = self.selenium
            sel.open("/")
            sel.type("q", "selenium rc")
            sel.click("btnG")
            sel.wait_for_page_to_load("30000")
            self.failUnless(sel.is_text_present("Results * for selenium rc"))

        def tearDown(self):
            self.selenium.stop()
            self.assertEqual([], self.verificationErrors)

In Ruby:

    require "selenium"
    require "test/unit"

    class NewTest < Test::Unit::TestCase
      def setup
        @verification_errors = []
        if $selenium
          @selenium = $selenium
        else
          @selenium = Selenium::SeleniumDriver.new("localhost", 4444, "*firefox",
              "http://www.google.com/", 10000)
          @selenium.start
        end
        @selenium.set_context("test_new")
      end

      def teardown
        @selenium.stop unless $selenium
        assert_equal [], @verification_errors
      end

      def test_new
        @selenium.open "/"
        @selenium.type "q", "selenium rc"
        @selenium.click "btnG"
        @selenium.wait_for_page_to_load "30000"
        assert @selenium.is_text_present("Results * for selenium rc")
      end
    end

In the next section we'll explain how to build a test program using the generated code.

6.5 Programming Your Test

Now we'll illustrate how to program your own tests using examples in each of the supported programming languages. There are essentially two tasks:

• Generate your script into a programming language from Selenium-IDE, optionally modifying the result.
• Write a very simple main program that executes the generated code.

Optionally, you can adopt a test engine platform like JUnit or TestNG for Java, or NUnit for .NET, if you are using one of those languages.

Here, we show language-specific examples. The language-specific APIs tend to differ from one to another, so you'll find a separate explanation for each.

• Java
• C#
• Python
• Perl, PHP
• Ruby

6.5.1 Java

For Java, people use either JUnit or TestNG as the test engine. Some development environments like Eclipse have direct support for these via plug-ins, which makes it even easier. Teaching JUnit or TestNG is beyond the scope of this document; however, materials may be found online and there are publications available. If you are already a "java-shop", chances are your developers will already have some experience with one of these test frameworks.

You will probably want to rename the test class from "NewTest" to something of your own choosing. Also, you will need to change the browser-open parameters in the statement:

    selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");

The Selenium-IDE generated code will look like this. This example has comments added manually for additional clarity.

    package com.example.tests;
    // We specify the package of our tests

    import com.thoughtworks.selenium.*;
    // This is the driver's import. You'll use this for instantiating a
    // browser and making it do what you need.

    import java.util.regex.Pattern;
    // Selenium-IDE adds the Pattern module because it's sometimes used for
    // regex validations. You can remove the module if it's not used in your
    // script.

    public class NewTest extends SeleneseTestCase {
        // We create our Selenium test case

        public void setUp() throws Exception {
            setUp("http://www.google.com/", "*firefox");
            // We instantiate and start the browser
        }

        public void testNew() throws Exception {
            selenium.open("/");
            selenium.type("q", "selenium rc");
            selenium.click("btnG");
            selenium.waitForPageToLoad("30000");
            assertTrue(selenium.isTextPresent("Results * for selenium rc"));
            // These are the real test steps
        }
    }

6.5.2 C#

The .NET Client Driver works with Microsoft.NET. It can be used with any .NET testing framework like NUnit or the Visual Studio 2005 Team System. Selenium-IDE assumes you will use NUnit as your testing framework. You can see this in the generated code below: it includes the using statement for NUnit along with the corresponding NUnit attributes identifying the role of each member function of the test class.

You will probably have to rename the test class from "NewTest" to something of your own choosing. Also, you will need to change the browser-open parameters in the statement:

    selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");

The generated code will look similar to this:

    using System;
    using System.Text;
    using System.Text.RegularExpressions;
    using System.Threading;
    using NUnit.Framework;
    using Selenium;

    namespace SeleniumTests
    {
        [TestFixture]
        public class NewTest
        {
            private ISelenium selenium;
            private StringBuilder verificationErrors;

            [SetUp]
            public void SetupTest()
            {
                selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");
                selenium.Start();
                verificationErrors = new StringBuilder();
            }

            [TearDown]
            public void TeardownTest()
            {
                try
                {
                    selenium.Stop();
                }
                catch (Exception)
                {
                    // Ignore errors if unable to close the browser
                }
                Assert.AreEqual("", verificationErrors.ToString());
            }

            [Test]
            public void TheNewTest()
            {
                // Open Google search engine.
                selenium.Open("http://www.google.com/");

                // Assert Title of page.
                Assert.AreEqual("Google", selenium.GetTitle());

                // Provide search term as "Selenium OpenQA"
                selenium.Type("q", "Selenium OpenQA");

                // Read the keyed search term and assert it.
                Assert.AreEqual("Selenium OpenQA", selenium.GetValue("q"));

                // Click on Search button.
                selenium.Click("btnG");

                // Wait for page to load.
                selenium.WaitForPageToLoad("5000");

                // Assert that "www.openqa.org" is available in search results.
                Assert.IsTrue(selenium.IsTextPresent("www.openqa.org"));

                // Assert that page title is - "Selenium OpenQA - Google Search"
                Assert.AreEqual("Selenium OpenQA - Google Search", selenium.GetTitle());
            }
        }
    }

You can allow NUnit to manage the execution of your tests. Alternatively, you can write a simple main() program that instantiates the test object and runs each of the three methods, SetupTest(), TheNewTest(), and TeardownTest(), in turn.

6.5.3 Python

Pyunit is the test framework to use for Python. To learn pyunit refer to its official documentation <http://docs.python.org/library/unittest.html>. The basic test structure is:

    from selenium import selenium
    # This is the driver's import. You'll use this class for instantiating a
    # browser and making it do what you need.

    import unittest, time, re
    # These are the basic imports added by Selenium-IDE by default.
    # You can remove the modules if they are not used in your script.

    class NewTest(unittest.TestCase):
        # We create our unittest test case

        def setUp(self):
            self.verificationErrors = []
            # This is an empty array where we will store any verification errors
            # we find in our tests

            self.selenium = selenium("localhost", 4444, "*firefox",
                    "http://www.google.com/")
            self.selenium.start()
            # We instantiate and start the browser

        def test_new(self):
            # This is the test code. Here you should put the actions you need
            # the browser to do during your test.

            sel = self.selenium
            # We assign the browser to the variable "sel" (just to save us from
            # typing "self.selenium" each time we want to call the browser).

            sel.open("/")
            sel.type("q", "selenium rc")
            sel.click("btnG")
            sel.wait_for_page_to_load("30000")
            self.failUnless(sel.is_text_present("Results * for selenium rc"))
            # These are the real test steps

        def tearDown(self):
            self.selenium.stop()
            # We close the browser (I'd recommend you comment out this line
            # while you are creating and debugging your tests)

            self.assertEqual([], self.verificationErrors)
            # And make the test fail if any verification errors were found

6.5.4 Perl, PHP

The members of the documentation team have not used Selenium-RC with Perl or PHP. If you are using Selenium-RC with either of these two languages, please contact the Documentation Team (see the chapter on contributing). We would love to include examples from you and your experiences, to support Perl and PHP users.

6.6 Learning the API

The Selenium-RC API uses naming conventions such that, assuming you understand Selenese, much of the interface will be self-explanatory. Here, however, we explain the most critical, and possibly less obvious, aspects of the API.

6.6.1 Starting the Browser

In C#:

    selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.google.com/");
    selenium.Start();

In Java:

    setUp("http://www.google.com/", "*firefox");

In PHP:

    $this->setBrowser("*firefox");
    $this->setBrowserUrl("http://www.google.com/");

In Perl:

    my $sel = Test::WWW::Selenium->new( host        => "localhost",
                                        port        => 4444,
                                        browser     => "*firefox",
                                        browser_url => "http://www.google.com/" );

In Python:

    self.selenium = selenium("localhost", 4444, "*firefox", "http://www.google.com/")
    self.selenium.start()

In Ruby:

    if $selenium
      @selenium = $selenium
    else
      @selenium = Selenium::SeleniumDriver.new("localhost", 4444, "*firefox",
          "http://www.google.com/", 10000)
      @selenium.start
    end

Each of these examples opens the browser and represents that browser by assigning a "browser instance" to a program variable. This browser variable is then used to call methods from the browser. The parameters required when creating the browser instance are:

host  Specifies the IP address of the computer where the server is located. Usually, this is the same machine as where the client is running, so in this case localhost is passed. In some clients this is an optional parameter.

port  Specifies the TCP/IP socket where the server is listening, waiting for the client to establish a connection. This also is optional in some client drivers.

browser  The browser in which you want to run the tests. This is a required parameter.

url  The base URL of the application under test. This is required by all the client libs and is integral information for starting up the browser-proxy-AUT communication.

Note that some of the client libraries require the browser to be started explicitly by calling its start() method.

6.6.2 Running Commands

Once you have the browser initialized and assigned to a variable (generally named "selenium"), you can make it run Selenese commands by calling the respective methods from the browser variable. For example, to call the type method of the selenium object:

    selenium.type("field-id", "string to type")

In the background the browser will actually perform a type operation, essentially identical to a user typing input into the browser, by using the locator and the string you specified during the method call.

6.7 Reporting Results

Selenium-RC does not have its own mechanism for reporting results. Rather, it allows you to build your reporting customized to your needs using features of your chosen programming language. That's great, but what if you simply want something quick that's already done for you? Often an existing library or test framework can meet your needs faster than developing your own test reporting code.
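Before reaching for a reporting library, it helps to see what the test framework itself already exposes. The sketch below is illustrative only: the test case is a stand-in for a real Selenium test (the browser calls are omitted so it runs anywhere), and collect_results is a hypothetical helper, not part of any Selenium API. It runs a pyunit test case manually and inspects the result object, which is the same mechanism any custom report would build on.

```python
import unittest

# Stand-in for a Selenium test case: same pyunit structure as the
# examples above, but with the browser interaction omitted.
class FakeSeleniumTest(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []   # soft-assert failures accumulate here

    def test_search(self):
        # A real test would drive the browser; here we only show how a
        # verification error would surface in the summary.
        if "Results" not in "Results * for selenium rc":
            self.verificationErrors.append("missing results text")
        self.assertEqual([], self.verificationErrors)

def collect_results(test_case_class):
    """Hypothetical helper: run every test in the class and summarize."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case_class)
    result = unittest.TestResult()   # plain result object, no console output
    suite.run(result)
    return {
        "run": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "ok": result.wasSuccessful(),
    }

summary = collect_results(FakeSeleniumTest)
print(summary)
```

From a dictionary like this you can emit whatever format you need (plain text, HTML, and so on), which is exactly the kind of customization the section above describes.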
6.7.1 Test Framework Reporting Tools

Test frameworks are available for many programming languages. These, along with their primary function of providing a flexible test engine for executing your tests, include library code for reporting results. For example, Java has two commonly used test frameworks, JUnit and TestNG. .NET also has its own, NUnit. We won't teach the frameworks themselves here; that's beyond the scope of this user guide. We will simply introduce the framework features that relate to Selenium along with some techniques you can apply. There are good books available on these test frameworks, however, along with information on the internet.

6.7.2 Test Report Libraries

Also available are third-party libraries specifically created for reporting test results in your chosen programming language. These often support a variety of formats such as HTML or PDF.

6.7.3 What's The Best Approach?

Most people new to the testing frameworks will begin with the framework's built-in reporting features. From there most will examine any available libraries, as that's less time consuming than developing your own. As you begin to use Selenium, no doubt you will start putting in your own "print statements" for reporting progress. That may gradually lead to you developing your own reporting, possibly in parallel to using a library or test framework. Regardless, after the initial, but short, learning curve you will naturally develop what works best for your own situation.

6.7.4 Test Reporting Examples

To illustrate, we'll direct you to some specific tools in some of the other languages supported by Selenium. The ones listed here are commonly used and have been used extensively (and are therefore recommended) by the authors of this guide.

Test Reports in Java

• If Selenium test cases are developed using JUnit, then JUnit Report can be used to generate test reports. Refer to JUnit Report for specifics.
• If Selenium test cases are developed using TestNG, then no external task is required to generate test reports. The TestNG framework generates an HTML report which lists details of tests. See TestNG Report for more.
• ReportNG is an HTML reporting plug-in for the TestNG framework. It is intended as a replacement for the default TestNG HTML report. ReportNG provides a simple, colour-coded view of the test results. See ReportNG for more.
• Also, for a very nice summary report, try using TestNG-xslt. See TestNG-xslt for more.
Test Reports for Ruby

• If the RSpec framework is used for writing Selenium test cases in Ruby, then its HTML report can be used to generate a test report. Refer to RSpec Report for more.

Test Reports for Python

• When using the Python client driver, HTMLTestRunner can be used to generate a test report. See HTMLTestRunner.

Logging the Selenese Commands

• Logging Selenium can be used to generate a report of all the Selenese commands in your test along with the success or failure of each. Logging Selenium extends the Java client driver to add this Selenese logging ability. Please refer to Logging Selenium.

Note: If you are interested in a language-independent log of what's going on, take a look at Selenium Server Logging.

6.8 Adding Some Spice to Your Tests

Now we'll get to the whole reason for using Selenium-RC: adding programming logic to your tests. It's the same as for any program. Program flow is controlled using condition statements and iteration. In addition, you can report progress information using I/O. In this section we'll show some examples of how programming language constructs can be combined with Selenium to solve common testing problems.

You will find, as you transition from simple tests of the existence of page elements to tests of dynamic functionality involving multiple web pages and varying data, that you will require programming logic for verifying expected results. Basically, the Selenium-IDE does not support iteration and standard condition statements. You can do some conditions by embedding javascript in Selenese parameters; however, iteration is impossible, and most conditions will be much easier in a programming language. In addition, you may need exception handling for error recovery. For these reasons and others, we have written this section to illustrate the use of common programming techniques to give you greater 'verification power' in your automated testing.
The examples in this section are written in C# and Java, although the code is simple and can be easily adapted to the other supported languages. If you have some basic knowledge of an object-oriented programming language, you shouldn't have difficulty understanding this section.

6.8.1 Iteration

Iteration is one of the most common things people need to do in their tests. For example, you may want to execute a search multiple times. Or, perhaps for verifying your test results, you need to process a "result set" returned from a database.

Using the same Google search example we used earlier, let's check the Selenium search results. Repeating the same Selenese steps for each search term would work, but multiple copies of the same code are not good program practice because they're more work to maintain. By using a programming language, we can iterate over the search terms for a more flexible and maintainable solution.

In C#:

    // Collection of String values.
    String[] arr = { "ide", "rc", "grid" };

    // Execute loop for each String in array 'arr'.
    foreach (String s in arr) {
        sel.open("/");
        sel.type("q", "selenium " + s);
        sel.click("btnG");
        sel.waitForPageToLoad("30000");
        assertTrue("Expected text: " + s + " is missing on page.",
                   sel.isTextPresent("Results * for selenium " + s));
    }

6.8.2 Condition Statements

To illustrate using conditions in tests, we'll start with an example. A common problem encountered while running Selenium tests occurs when an expected element is not available on the page. For example, when running the following line:

    selenium.type("q", "selenium " + s);

If element 'q' is not on the page, then an exception is thrown:
    com.thoughtworks.selenium.SeleniumException: ERROR: Element q not found

This can cause your test to abort. For some tests that's what you want. But often that is not desirable, as your test script has many other subsequent tests to perform. A better approach is to first validate whether the element is really present and then take alternatives when it is not. Let's look at this using Java.

    // If element is available on page then perform type operation.
    if (selenium.isElementPresent("q")) {
        selenium.type("q", "Selenium rc");
    } else {
        System.out.printf("Element: q is not available on page.%n");
    }

The advantage of this approach is to continue with test execution even if some UI elements are not available on the page.

6.8.3 Executing JavaScript from Your Test

JavaScript comes in very handy for exercising an application that is not directly supported by Selenium. The getEval method of the Selenium API can be used to execute JavaScript from Selenium-RC. Consider an application having check boxes with no static identifiers. In this case, one could evaluate JavaScript from Selenium-RC to get the ids of all check boxes, and then exercise them.

    public static String[] getAllCheckboxIds() {
        String script = "var inputId = new Array();";  // Create array in java script.
        script += "var cnt = 0;";                      // Counter for check box ids.
        script += "var inputFields = new Array();";    // Create array in java script.
        script += "inputFields = window.document.getElementsByTagName('input');"; // Collect input elements.
        script += "for(var i=0; i<inputFields.length; i++) {";    // Loop through the input fields.
        script += "if(inputFields[i].id !=null " +
                  "&& inputFields[i].id !='undefined' " +
                  "&& inputFields[i].getAttribute('type') == 'checkbox') {"; // If input field is a check box.
        script += "inputId[cnt]=inputFields[i].id;" +  // Save check box id to inputId array.
                  "cnt++;";                            // Increment the counter.
        script += "}";                                 // End of if.
        script += "}";                                 // End of for.
        script += "inputId.toString();";               // Convert array into string.
        String[] checkboxIds = selenium.getEval(script).split(","); // Split the string.
        return checkboxIds;
    }

To count the number of images on a page:

    selenium.getEval("window.document.images.length;");

Remember to use the window object in DOM expressions, since by default the Selenium window is referred to, not the test window.
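The same technique translates to the Python client, where the corresponding call is get_eval. The sketch below is illustrative, not part of any Selenium API: it builds the same check-box-collecting JavaScript as the Java example and splits the comma-joined string that the evaluation would return, so the parsing half can be exercised without a browser.

```python
# Same check-box-collecting JavaScript as the Java example above,
# assembled as one string for get_eval.
CHECKBOX_IDS_SCRIPT = (
    "var inputId = new Array();"
    "var cnt = 0;"
    "var inputFields = window.document.getElementsByTagName('input');"
    "for (var i = 0; i < inputFields.length; i++) {"
    "  if (inputFields[i].id != null"
    "      && inputFields[i].id != 'undefined'"
    "      && inputFields[i].getAttribute('type') == 'checkbox') {"
    "    inputId[cnt] = inputFields[i].id;"
    "    cnt++;"
    "  }"
    "}"
    "inputId.toString();"
)

def parse_ids(raw):
    """Split the comma-joined id string returned by the evaluation."""
    return [part for part in raw.split(",") if part]

def get_all_checkbox_ids(sel):
    """sel is a started selenium() instance; returns the check box ids."""
    return parse_ids(sel.get_eval(CHECKBOX_IDS_SCRIPT))

# The parsing half can be checked offline:
print(parse_ids("agree,subscribe,remember"))   # ['agree', 'subscribe', 'remember']
```

Keeping the JavaScript in one named constant and the parsing in its own function makes both halves easier to test and reuse than inlining the string at each call site.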
6.9 Server Options

When the server is launched, command line options can be used to change the default server behaviour. Recall, the server is started by running the following:

    $ java -jar selenium-server.jar

To see the list of options, run the server with the -h option:

    $ java -jar selenium-server.jar -h

You'll see a list of all the options you can use with the server and a brief description of each. The provided descriptions will not always be enough, so we've provided explanations for some of the more important options.

6.9.1 Proxy Configuration

If your AUT is behind an HTTP proxy which requires authentication, then you should configure http.proxyHost, http.proxyPort, http.proxyUser and http.proxyPassword using the following command:

    $ java -jar selenium-server.jar -Dhttp.proxyHost=proxy.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=password

6.9.2 Multi-Window Mode

If you are using Selenium 1.0 you can probably skip this section, since multiwindow mode is the default behavior. However, prior to version 1.0, Selenium by default ran the application under test in a sub frame, as shown here. [figure: the AUT loaded in a sub frame]

Some applications didn't run correctly in a sub frame and needed to be loaded into the top frame of the window. The multi-window mode option allowed the AUT to run in a separate window rather than in the default frame, where it could then have the top frame it required. [figure: the AUT running in a separate window]

For older versions of Selenium you must specify multiwindow mode explicitly with the following option:

    -multiwindow

In Selenium-RC 1.0, if you want to run your test within a single frame (i.e. using the standard for earlier Selenium versions), you can state this to the Selenium Server using the option:

    -singlewindow

6.9.3 Specifying the Firefox Profile

Firefox will not run two instances simultaneously unless you specify a separate profile for each instance. Selenium-RC 1.0 and later runs in a separate profile automatically, so if you are using Selenium 1.0 you can probably skip this section. However, if you are using an older version, you will need to explicitly specify the profile.

First, to create a separate Firefox profile, follow this procedure. Open the Windows Start menu, select "Run", then type and enter one of the following:
    firefox.exe -profilemanager

    firefox.exe -P

Create the new profile using the dialog. Then, when you run Selenium Server, tell it to use this new Firefox profile with the server command-line option -firefoxProfileTemplate, specifying the path to the profile:

    -firefoxProfileTemplate "path to the profile"

Warning: Be sure to put your profile in a new folder separate from the default. The Firefox profile manager tool will delete all files in a folder if you delete a profile, regardless of whether they are profile files or not.

More information about Firefox profiles can be found in Mozilla's Knowledge Base.

6.9.4 Run Selenese Directly Within the Server Using -htmlSuite

You can run Selenese html files directly within the Selenium Server by passing the html file to the server's command line. For instance:

    java -jar selenium-server.jar -htmlSuite "*firefox" "http://www.google.com" "c:\absolute\path\to\my\HTMLSuite.html" "c:\absolute\path\to\my\results.html"

This will automatically launch your HTML suite, run all the tests and save a nice HTML report with the results.

Note: When using this option, the server will start the tests and wait for a specified number of seconds for the test to complete; if the test doesn't complete within that amount of time, the command will exit with a non-zero exit code and no results file will be generated.

Note this requires you to pass in an HTML Selenese suite, not a single test. Also be aware the -htmlSuite option is incompatible with -interactive; you cannot run both at the same time.

6.9.5 Selenium Server Logging

Server-Side Logs

When launching selenium server, the -log option can be used to record valuable debugging information reported by the Selenium Server to a text file:

    java -jar selenium-server.jar -log selenium.log

This log file is more verbose than the standard console logs (it includes DEBUG level logging messages). The log file also includes the logger name, and the ID number of the thread that logged the message. For example:

    20:44:25 DEBUG [12] org.openqa.selenium.server.SeleniumDriverResourceHandler - Browser 465828/:top frame1 posted START NEW
The message format is TIMESTAMP(HH:mm:ss) LEVEL [THREAD] LOGGER - MESSAGE. This message may be multiline.

Browser-Side Logs

JavaScript on the browser side (Selenium Core) also logs important messages; in many cases, these can be more useful to the end-user than the regular Selenium Server logs. To access browser-side logs, pass the -browserSideLog argument to the Selenium Server:

    java -jar selenium-server.jar -browserSideLog

-browserSideLog must be combined with the -log argument to log browserSideLogs (as well as all other DEBUG level logging messages) to a file.

6.10 Specifying the Path to a Specific Browser

You can specify to Selenium-RC a path to a specific browser. This is useful if you have different versions of the same browser and you wish to use a specific one. Also, this is used to allow your tests to run against a browser not directly supported by Selenium-RC. When specifying the run mode, use the *custom specifier followed by the full path to the browser's executable:

    *custom <path to browser>

6.11 Selenium-RC Architecture

Note: This topic tries to explain the technical implementation behind Selenium-RC. It's not fundamental for a Selenium user to know this, but it could be useful for understanding some of the problems you might find in the future.

To understand in detail how Selenium-RC Server works, and why it uses proxy injection and heightened privilege modes, you must first understand the same origin policy.

6.11.1 The Same Origin Policy

The main restriction that Selenium faces is the Same Origin Policy. This security restriction is applied by every browser in the market, and its objective is to ensure that a site's content will never be accessible by a script from another site. The Same Origin Policy dictates that any code loaded within the browser can only operate within that website's domain. It cannot perform functions on another website. So, for example, if the browser loads JavaScript code when it loads www.mysite.com, it cannot run that loaded code against www.mysite2.com, even if that's another of your sites. If this were possible, a script placed on any website you open would be able to read information on your bank account if you had the account page opened in another tab. This is called XSS (Cross-site Scripting).

To work within this policy, Selenium-Core (and its JavaScript commands that make all the magic happen) must be placed in the same origin as the Application Under Test (same URL).

Historically, Selenium-Core was limited by this problem, since it was implemented in JavaScript. Selenium-RC is not, however, restricted by the Same Origin Policy. Its use of the Selenium Server as a proxy avoids this problem. It, essentially, tells the browser that the browser is working on a single "spoofed" website that the Server provides.

Note: You can find additional information about this topic on the Wikipedia pages about the Same Origin Policy and XSS.
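It can also help to see what the client/driver connection actually carries: each Selenese command travels to the server as a plain HTTP request against the server's driver endpoint, in the same cmd=...&1=... style as the getNewBrowserSession examples used when driving the server interactively. The sketch below only builds such a URL; the endpoint path and parameter layout reflect the classic RC wire format and are shown for illustration, not as a replacement for a client library.

```python
from urllib.parse import urlencode

# Illustrative sketch of how a client library could address the Selenium
# Server's driver endpoint: "cmd" names the Selenese command, and each
# argument is passed as a numbered parameter (1, 2, ...).
def driver_url(host, port, cmd, *args):
    params = [("cmd", cmd)]
    params += [(str(i + 1), arg) for i, arg in enumerate(args)]
    return "http://%s:%d/selenium-server/driver/?%s" % (host, port, urlencode(params))

url = driver_url("localhost", 4444, "open", "/")
print(url)   # http://localhost:4444/selenium-server/driver/?cmd=open&1=%2F
```

Seeing the commands as simple HTTP requests makes the architecture discussion below more concrete: the same server that answers these driver requests is also the proxy sitting between the browser and the AUT.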
6.11.2 Proxy Injection

The first method Selenium used to avoid the Same Origin Policy was Proxy Injection. In Proxy Injection Mode, the Selenium Server acts as a client-configured (1) HTTP proxy (2) that sits between the browser and the Application Under Test. It then masks the AUT under a fictional URL (embedding Selenium-Core and the set of tests, and delivering them as if they were coming from the same origin).

(1) The proxy is a third person in the middle that passes the ball between the two parts. It acts as a "web server" that delivers the AUT to the browser. Being a proxy gives the Selenium Server the capability of "lying" about the AUT's real URL.

(2) The browser is launched with a configuration profile that has set localhost:4444 as the HTTP proxy; this is why any HTTP request that the browser makes will pass through the Selenium Server, and the response will pass through it and not come from the real server.

Here is an architectural diagram. [figure: Proxy Injection architecture]

As a test suite starts in your favorite language, the following happens:

1. The client/driver establishes a connection with the Selenium-RC server.
2. The Selenium-RC server launches a browser (or reuses an old one) with a URL that injects Selenium-Core's JavaScript into the browser-loaded web page.
3. The client/driver passes a Selenese command to the server.
4. The server interprets the command and then triggers the corresponding JavaScript execution to execute that command within the browser.
5. Selenium-Core acts on that instruction, typically opening a page of the AUT.
6. The browser receives the open request and asks the Selenium-RC server (set as the HTTP proxy for the browser to use) for the website's content.
7. The Selenium-RC server communicates with the Web server asking for the page and, once it receives it, sends the page to the browser, masking the origin so that the page appears to come from the same origin as Selenium-Core.
8. The browser receives the web page and renders it in the frame/window reserved for it.

6.11.3 Heightened Privileges Browsers

This workflow is very similar to Proxy Injection, but the main difference is that the browsers are launched in a special mode called Heightened Privileges, which allows websites to do things that are not commonly permitted (such as XSS, or filling file upload inputs, which is pretty useful for Selenium). By using these browser modes, Selenium-Core is able to directly open the AUT and read/interact with its content without having to pass the whole AUT through the Selenium-RC server.

Here is the architectural diagram. [figure: Heightened Privileges architecture]

As a test suite starts in your favorite language, the following happens:

1. The client/driver establishes a connection with the Selenium-RC server.
2. The Selenium-RC server launches a browser (or reuses an old one) with a URL that will load Selenium-Core in the web page.
3. Selenium-Core gets the first instruction from the client/driver (via another HTTP request made to the Selenium-RC Server).
4. Selenium-Core acts on that first instruction, typically opening a page of the AUT.
5. The browser receives the open request and asks the Web Server for the page. Once the browser receives the web page, it renders it in the frame/window reserved for it.

6.12 Handling HTTPS and Security Popups

Many applications switch from using HTTP to HTTPS when they need to send encrypted information such as passwords or credit card information. This is common with many of today's web applications. Selenium-RC supports this.

To ensure the HTTPS site is genuine, the browser will need a security certificate. Otherwise, when the browser accesses the AUT using HTTPS, it will assume that application is not 'trusted'. When this occurs, the browser displays security popups, and these popups cannot be closed using Selenium-RC.

When dealing with HTTPS in a Selenium-RC test, you must use a run mode that supports this and handles the security certificate for you. You specify the run mode when your test program initializes Selenium.

In Selenium-RC 1.0 beta 2 and later, use *firefox or *iexplore for the run mode. In earlier versions, including Selenium-RC 1.0 beta 1, use *chrome or *iehta for the run mode. Using these run modes, you will not need to install any special security certificates; Selenium-RC will handle it for you.

In version 1.0 the run modes *firefox or *iexplore are recommended. There are, however, additional run modes of *iexploreproxy and *firefoxproxy. These are provided for backwards compatibility only, and should not be used unless required by legacy test programs. Their use will present limitations with security certificate handling and with the running of multiple windows if your application opens additional browser windows.

In earlier versions of Selenium-RC, *chrome and *iehta were the run modes that supported HTTPS and the handling of security popups. These were considered 'experimental' modes, although they became quite stable and many people used them. If you are using Selenium 1.0, you do not need, and should not use, these older run modes.

6.12.1 Security Certificates Explained

Normally, your browser will trust the application you are testing by installing a security certificate which you already own. You can check this in your browser's options or internet properties (if you don't know your AUT's security certificate, ask your system administrator). When Selenium loads your browser, it injects code to intercept messages between the browser and the server. The browser now thinks untrusted software is trying to look like your application. It responds by alerting you with popup messages.

To get around this, Selenium-RC (again, when using a run mode that supports this) will install its own security certificate, temporarily, on your client machine in a place where the browser can access it. This tricks the browser into thinking it's accessing a site different from your AUT and effectively suppresses the popups. In earlier versions of Selenium-RC, you may need to explicitly install this security certificate yourself. Most users should no longer need to do this; however, if you are running Selenium-RC in proxy injection mode, you may.

6.13 Supporting Additional Browsers and Browser Configurations

The Selenium API supports running against multiple browsers in addition to Internet Explorer and Mozilla Firefox. See the SeleniumHQ.org website for supported browsers. In addition, when a browser is not directly supported, you may still run your Selenium tests against a browser of your choosing by using the "*custom" run-mode (i.e. in place of *firefox or *iexplore) when your test application starts the browser. With this, you pass in the path to the browser's executable within the API call as follows:

    cmd=getNewBrowserSession&1=*custom c:\Program Files\Mozilla Firefox\MyBrowser.exe&2=http://www.google.com

This can also be done from the Server in interactive mode.

Be aware that Mozilla browsers can vary in how they start and stop. One may need to set the MOZ_NO_REMOTE environment variable to make Mozilla browsers behave a little more predictably. Unix users should avoid launching the browser using a shell script; it's generally better to use the binary executable (e.g. firefox-bin) directly.
but if you launch the browser using the “*custom” run mode.exe&2=htt Note that when launching the browser this way. then there is a problem with the connectivity between the Selenium Client Library and the Selenium Server. 6. Be aware that Mozilla browsers can vary in how they start and stop. an exception will be thrown in your test program. you may still run your Selenium tests against a browser of your choosing by using the “*custom” run-mode (i. Consult your browser’s documentation for details. you must manually configure the browser to use the Selenium Server as a proxy.NET and XP Service Pack 2) If you see a message like this.0 6..1 Running Tests with Different Browser Configurations Normally Selenium-RC automatically configures the browser.exe&2=h 6. when a browser is not directly supported. in place of *firefox or *iexplore) when your test application starts the browser.1 Unable to Connect to Server When your test program cannot connect to the Selenium Server. without using an automatic configuration.." (using . With this.. you do want to run Selenium Server on a remote machine. you can use common networking tools like ping. not a friendly error message. Troubleshooting Common Problems 77 .14.0) cannot start because the browser is already open and you did not specify a separate profile. If.Selenium Documentation. the most likely cause is your test program is not using the correct URL. Check to be sure the path is correct. 6. • You specified the path to the browser explicitly (using “*custom”–see above) but the path is incorrect. the connectivity should be fine assuming you have valid TCP/IP connectivity between the two machines. however. In truth. If you have difficulty connecting.14. Assuming your operating system has typical networking and TCP/IP settings you should have little difficulty. To do this use “localhost” as your connection parameter. ipconfig(Unix)/ifconfig (Windows). See the section on Firefox profiles under Server Options. 
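The cmd=getNewBrowserSession strings shown above are ordinary URL query strings sent to the server's /selenium-server/driver/ endpoint (the same path that appears in the server log excerpts later in this chapter). As a rough illustration only — this helper is ours and is not part of any Selenium client library — a Python sketch of building such a request URL:

```python
from urllib.parse import urlencode

def driver_request_url(host, port, run_mode, start_url):
    # Build the query string for a getNewBrowserSession command, using the
    # same parameter names (cmd, 1, 2) as the interactive-mode examples.
    query = urlencode({"cmd": "getNewBrowserSession",
                       "1": run_mode,
                       "2": start_url})
    # The /selenium-server/driver/ path is the endpoint shown in the
    # server log excerpts in this chapter.
    return "http://%s:%d/selenium-server/driver/?%s" % (host, port, query)

url = driver_request_url("localhost", 4444,
                         r"*custom c:\Program Files\Mozilla Firefox\firefox.exe",
                         "http://www.google.com")
print(url)
```

Note that urlencode percent-encodes the run-mode path and start URL, which is what the server expects to receive on the wire.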
6.14.2 Unable to Load the Browser

Ok, not a friendly error message, sorry, but if the Selenium Server cannot load the browser you will likely see this error:

(500) Internal Server Error

This could be caused by:

• Firefox (prior to Selenium 1.0) cannot start because the browser is already open and you did not specify a separate profile. See the section on Firefox profiles under Server Options.
• The run mode you're using doesn't match any browser on your machine. Check the parameters you passed to Selenium when your program opens the browser.
• You specified the path to the browser explicitly (using "*custom" -- see above) but the path is incorrect. Check to be sure the path is correct. Also check the forums to be sure there are no known issues with your browser and the "*custom" parameters.

6.14.3 Selenium Cannot Find the AUT

If your test program starts the browser successfully, but the browser doesn't display the website you're testing, the most likely cause is that your test program is not using the correct URL. This can easily happen: when you use Selenium-IDE to export your script, it inserts a dummy URL. You must manually change the URL to the correct one for the application to be tested.

6.14.4 Firefox Refused Shutdown While Preparing a Profile

This most often occurs when you run your Selenium-RC test program against Firefox, but you already have a Firefox browser session running and you didn't specify a separate profile when you started the Selenium Server. The error from the test program looks like this:

Error: java.lang.RuntimeException: Firefox refused shutdown while preparing a profile

Here's the complete error message from the server:

16:20:03.919 INFO - Preparing Firefox profile...
16:20:27.822 WARN - GET /selenium-server/driver/?cmd=getNewBrowserSession&1=*firefox&2=http%3a%2f%2fsage-webapp1.qa.idc.com HTTP/1.1
java.lang.RuntimeException: Firefox refused shutdown while preparing a profile
        at org.openqa.selenium.browserlaunchers.FirefoxCustomProfileLauncher.waitForFullProfileToBeCreated(FirefoxCustomProfileLauncher.java:277)
...
Caused by: org.openqa.selenium.browserlaunchers.FirefoxCustomProfileLauncher$FileLockRemainedException: Lock file still present! C:\DOCUME~1\jsvec\LOCALS~1\Temp\customProfileDir203138\parent.lock

To resolve this, see the section on Specifying a Separate Firefox Profile.

6.14.5 Versioning Problems

Make sure your version of Selenium supports the version of your browser. For example, Selenium-RC 0.92 does not support Firefox 3. At times you may be lucky (I was). But don't forget to check which browser versions are supported by the version of Selenium you are using. When in doubt, use the latest release version of Selenium with the most widely used version of your browser.

6.14.6 Error message: "(Unsupported major.minor version 49.0)" while starting server

This error says you're not using a correct version of Java. The Selenium Server requires Java 1.5 or higher.

To double-check your Java version, run this from the command line:

java -version

You should see a message showing the Java version:

java version "1.5.0_07"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_07-b03)
Java HotSpot(TM) Client VM (build 1.5.0_07-b03, mixed mode)
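The Java 1.5 requirement can also be checked programmatically. The following is an illustrative Python sketch — not part of Selenium, and the function name is ours — that parses a version line in the format printed by java -version above:

```python
import re

def meets_java_requirement(version_line, required=(1, 5)):
    # Pull the leading "major.minor" out of a line like:
    #   java version "1.5.0_07"
    match = re.search(r'"(\d+)\.(\d+)', version_line)
    if not match:
        return False
    return (int(match.group(1)), int(match.group(2))) >= required

print(meets_java_requirement('java version "1.5.0_07"'))  # True
print(meets_java_requirement('java version "1.4.2_19"'))  # False
```

A setup script could run java -version, feed the first output line through a check like this, and fail fast with a clear message instead of letting the server die with the cryptic "Unsupported major.minor version" error.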
If you see a lower version number, you may need to update the JRE, or you may simply need to add it to your PATH environment variable.

6.14.7 404 error when running the getNewBrowserSession command

If you're getting a 404 error while attempting to open a page on "http://www.google.com/selenium-server/", then it must be because the Selenium Server was not correctly configured as a proxy. The "selenium-server" directory doesn't exist on google.com; it only appears to exist when the proxy is properly configured. Proxy configuration highly depends on how the browser is launched with *firefox, *iexplore, *opera, or *custom.

• *iexplore: If the browser is launched using *iexplore, you could be having a problem with Internet Explorer's proxy settings. Selenium Server attempts to configure the global proxy settings in the Internet Options Control Panel. You must make sure that those are correctly configured when Selenium Server launches the browser. Try looking at your Internet Options control panel. Click on the "Connections" tab and click on "LAN Settings".
  - One way to check whether you've configured the proxy correctly is to attempt to intentionally configure the browser incorrectly. Try configuring the browser to use the wrong proxy server hostname, or the wrong port. If you had successfully configured the browser's proxy settings incorrectly, then the browser will be unable to connect to the Internet, which is one way to make sure that you are adjusting the relevant settings.
  - You may also try configuring your proxy manually and then launching the browser with *custom, or with the *iehta browser launcher.
• *custom: When using *custom you must configure the proxy correctly (manually), otherwise you'll get a 404 error. Double-check that you've configured your proxy settings correctly.
• For other browsers (*firefox, *opera) we automatically hard-code the proxy for you, and so there are no known issues with this functionality.

If you're encountering 404 errors and have followed this user guide carefully, post your results to the user forums for some help from the user community.

6.14.8 Permission Denied Error

The most common reason for this error is that your session is attempting to violate the same-origin policy by crossing domain boundaries (e.g., accessing a page from one domain and then accessing a page from a different domain) or switching protocols (moving from an http page to an https page). This error can also occur when JavaScript attempts to find UI objects which are not yet available (before the page has completely loaded), or are no longer available (after the page has started to be unloaded). Permission issues are covered in some detail in the tutorial. Read the sections about The Same Origin Policy and Proxy Injection carefully. This error can be intermittent. Often it is impossible to reproduce the problem with a debugger because the trouble stems from race conditions which are not reproducible when the debugger's overhead is added to the system.

6.14.9 Handling Browser Popup Windows

There are several kinds of "popups" that you can get during a Selenium test. You may not be able to close these popups by running Selenium commands if they are initiated by the browser and not your AUT. You may need to know how to manage these. Each type of popup needs to be addressed differently.

• modal JavaScript alert/confirmation/prompt dialogs: Selenium tries to conceal those dialogs from you (by replacing window.alert, window.confirm and window.prompt) so they won't stop the execution of your page. If you're seeing an alert pop-up, it's probably because it fired during the page load process, which is usually too early for us to protect the page. Selenese contains commands for asserting or verifying alert and confirmation popups. See the sections on these topics in Chapter 4.
• HTTP basic authentication dialogs: These dialogs prompt for a username/password to login to the site. To login to a site that requires HTTP basic authentication, use a username and password in the URL, as described in RFC 1738, like this: open("http://myusername:myuserpassword@myexample.com/blah/blah/blah").
• SSL certificate warnings: Selenium RC automatically attempts to spoof SSL certificates when it is enabled as a proxy; see more on this in the section on HTTPS. If your browser is configured correctly, you should never see SSL certificate warnings, but you may need to configure your browser to trust our dangerous "CyberVillains" SSL certificate authority. Again, refer to the HTTPS section for how to do this.
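The username-and-password-in-the-URL form used by the open() example follows RFC 1738. As an illustration, here is a small hypothetical Python helper (not part of Selenium; the function name is ours) that embeds credentials into a URL. Note that reserved characters in the credentials must be percent-encoded, or the resulting URL is ambiguous:

```python
from urllib.parse import quote

def with_basic_auth(url, username, password):
    # Split "http://rest" and re-assemble as "http://user:pass@rest",
    # percent-encoding reserved characters in the credentials.
    scheme, rest = url.split("://", 1)
    return "%s://%s:%s@%s" % (scheme,
                              quote(username, safe=""),
                              quote(password, safe=""),
                              rest)

print(with_basic_auth("http://myexample.com/blah/blah/blah", "myusername", "p@ss"))
# http://myusername:p%40ss@myexample.com/blah/blah/blah
```

The output of such a helper is exactly what you would pass to open() to avoid the browser's basic-authentication dialog.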
6.14.10 On Linux, why isn't my Firefox browser session closing?

On Unix/Linux you must invoke "firefox-bin" directly, so make sure that executable is on the path. If you execute Firefox through a shell script, when it comes time to kill the browser Selenium RC will kill the shell script, leaving the browser running. You can specify the path to firefox-bin directly, like this:

cmd=getNewBrowserSession&1=*firefox /usr/local/firefox/firefox-bin&2=http://www.google.com

6.14.11 Firefox *chrome doesn't work with custom profile

Check the Firefox profile folder -> prefs.js -> user_pref("browser.startup.page", 0); Comment out this line like this: //user_pref("browser.startup.page", 0); and try again.

6.14.12 Is it ok to load a custom pop-up as the parent page is loading (i.e., before the parent page's JavaScript window.onload() function runs)?

No. Selenium relies on interceptors to determine window names as they are being loaded. These interceptors work best at catching new windows if the windows are loaded AFTER the onload() function. Selenium may not recognize windows loaded before the onload function.

6.14.13 Problems With Verify Commands

If you export your tests from Selenium-IDE, you may find yourself getting empty verify strings from your tests (depending on the programming language used). Note: This section is not yet developed.

6.14.14 Safari and MultiWindow Mode

Note: This section is not yet developed.

6.14.15 Firefox on Linux

On Unix/Linux, versions of Selenium before 1.0 needed to invoke "firefox-bin" directly, so if you are using a previous version, make sure that the real executable is on the path. On most Linux distributions, the real firefox-bin is located at:

/usr/lib/firefox-x.x.x/

where x.x.x is the version number you currently have. So, to add that path to the user's path, you will have to add the following to your .bashrc file:

export PATH="$PATH:/usr/lib/firefox-x.x.x/"

If necessary, you can specify the path to firefox-bin directly in your test, like this:

"*firefox /usr/lib/firefox-x.x.x/firefox-bin"

6.14.16 IE and Style Attributes

If you are running your tests on Internet Explorer, you may not be able to locate elements using their style attribute. For example:

//td[@style="background-color:yellow"]

This would work perfectly in Firefox, Opera or Safari, but not with IE. IE interprets the keys in @style as uppercase. So, even if the source code is in lowercase, you should use:

//td[@style="BACKGROUND-COLOR:yellow"]

This is a problem if your test is intended to work on multiple browsers, but you can easily code your test to detect the situation and try the alternative locator that only works in IE.

6.14.17 Where can I Ask Questions that Aren't Answered Here?

Try our user forums.

CHAPTER SEVEN

TEST DESIGN CONSIDERATIONS

NOTE: This chapter is currently being developed. We decided not to hold back on information just because a chapter was not ready. We have some content here already, though.
7.1 Introducing Test Design

In this subsection we describe a few different types of tests you can do with Selenium. This may not be new to you, but we provide it as a framework for relating Selenium test automation to the decisions a quality assurance professional will make when deciding what tests to perform. We will define some terms here to help us categorize the types of testing typical for a web application. These terms are by no means standard in the industry, although the concepts we present here are typical for web applications.

7.2 What to Test?

What elements of your application will you test? Of course, that depends on aspects of your project: end-user expectations, time allowed for the project, priorities set by the project manager, and so on. Once the project boundaries are defined, though, you the tester will make many decisions on what aspects of the application to test, the priority for each of those tests, and whether to automate those tests or not.

7.2.1 Content Tests

The simplest type of test for a web application is to simply test for the existence of a static, non-changing element on a particular page. For instance:

• Does each page have its expected page title? This can be used to verify your test found an expected page after following a link.
• Does the application's home page contain an image expected to be at the top of the page?
• Does each page of the website contain a footer area with links to the company contact page, privacy policy, and trademarks information?
• Does each page begin with heading text using the <h1> tag? And, does each page have the correct text within that header?

You may or may not need content tests. If your page content is not likely to be affected, then it may be more efficient to test page content manually. If, however, your application will be undergoing platform changes, or files will likely be moved to different locations, content tests may prove valuable.

7.2.2 Link Tests

A frequent source of errors for websites is broken links and missing pages behind those broken links. Testing for these involves clicking each link and verifying that the expected page behind that link loads correctly. Need to include a description of how to design this test and a simple example.

7.2.3 Function Tests

These would be tests of a specific function within your application, requiring some type of user input, and returning some type of results. Often a function test will involve multiple pages with a form-based input page containing a collection of input fields, Submit and Cancel operations, and one or more response pages. User input can be via text-input fields, checkboxes, drop-down lists, or any other browser-supported input. Should that go in this section or in a separate section?

7.2.4 Dynamic Content

Dynamic content is a set of page elements whose identifiers — that is, the characteristics used to locate the element — vary with each different instance of the page that contains them. This is usually on a result page of some given function. An example will help.

An example would be a result set of data returned to the user, say, a list of documents. Suppose each data result had a unique identifier for each specific document. So, for a particular search, the search results page returns a data set with one set of documents and their corresponding identifiers. Then, in a different search, the search results page returns a different data set where each document in the result set uses different identifiers.

Dynamic HTML for such an object might look like this:

<input type="checkbox" value="true" id="addForm:_id74:_id75:0:_id79:0:checkBox" name="addForm:_id74:_id75:0:_id79:0:checkBox"/>

This is an HTML snippet for a check box. Its id and name (addForm:_id74:_id75:0:_id79:0:checkBox) are both the same, and both are dynamic (they will change the next time you open the application).

7.2.5 Ajax Tests

Ajax is a technology which supports dynamic real-time UI elements such as animation and RSS feeds. In AJAX-driven web applications, data is retrieved from the application server without refreshing the page.

NOTE - INCLUDE A GOOD DEFINITION OF AJAX OFF THE INTERNET.
7.3 Verifying Expected Results: Assert vs. Verify? Element vs. Actual Content?

7.3.1 Assert vs. Verify: Which to Use?

7.3.2 When to verifyTextPresent, verifyElementPresent, or verifyText

7.4 Locating UI Elements

7.4.1 Locating Static Objects

This section has not been reviewed or edited.

Static HTML objects might look like this:

<a class="button" id="adminHomeForm" onclick="return oamSubmitForm('adminHomeForm','ad...">

This is an HTML snippet for a button, and its id is "adminHomeForm". This id remains constant within all instances of this page. That is, when this page is displayed, this UI element will always have this identifier. So, for your test script to click this button you just have to use the following Selenium command:

selenium.click("adminHomeForm");

7.4.2 Identifying Dynamic Objects

This section has not been reviewed or edited.

Consider one more example, of a dynamic object. Dynamic HTML for an object might look like this:

<input type="checkbox" value="true" id="addForm:_id74:_id75:0:_id79:0:checkBox" name="addForm:_id74:_id75:0:_id79:0:checkBox"/>

This is an HTML snippet for a check box. Its id and name (addForm:_id74:_id75:0:_id79:0:checkBox) are both the same, and both are dynamic (they will change the next time you open the application). In this case, normal object identification would look like:

selenium.click("addForm:_id74:_id75:0:_id79:0:checkBox");

Given the dynamic nature of the id, this approach would not work. The best way is to capture this id dynamically from the website itself. It can be done as:

// Collect all input ids on page.
String[] checkboxIds = selenium.getAllFields();
for(int i = 0; i < checkboxIds.length; i++) {
    // If collected id is not null...
    if(!GenericValidator.isBlankOrNull(checkboxIds[i])) {
        // ...and the id starts with addForm, check the box.
        if(checkboxIds[i].indexOf("addForm") > -1) {
            selenium.check(checkboxIds[i]);
        }
    }
}
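The selection logic in the Java loop above — skip blank ids, keep only the ids containing the dynamic "addForm" marker — can be sketched independently of Selenium. In this hypothetical Python sketch (names are ours), the input list stands in for the result of getAllFields():

```python
def ids_to_check(all_field_ids, marker="addForm"):
    # Same filter as the Java loop: skip blank ids, keep ids that
    # contain the dynamic marker text.
    return [field_id for field_id in all_field_ids
            if field_id and marker in field_id]

collected = ["addForm:_id74:_id75:0:_id79:0:checkBox", "loginForm:btnLogin", ""]
print(ids_to_check(collected))  # ['addForm:_id74:_id75:0:_id79:0:checkBox']
```

In a real test each id returned by the filter would then be passed to the check command.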
This approach will work only if there is one field whose id has the text 'addForm' appended to it.

7.5 Location Strategy Tradeoffs

This section is not yet developed.

7.5.1 How can I avoid using complex xpath expressions in my test?

If the elements in HTML (button, table, label, etc.) have element IDs, then one can reliably retrieve all elements without ever resorting to xpath. These element IDs should be explicitly created by the application. But a non-descriptive element ID (i.e., id_147) tends to cause two problems: first, a non-specific element id makes it hard for automation testers to keep track of and determine which element ids are required for testing. Second, each time the application is deployed, different element ids could be generated.

Consider one more example: a page with two links having the same name (one which appears on the page) and the same HTML name. Now, if href is used to click the link, it would always click the first element. A click on the second element link can be achieved as follows:

// Flag for second appearance of link.
boolean isSecondInstanceLink = false;
// Desired link.
String editInfo = null;
// Collect all links.
String[] links = selenium.getAllLinks();
// Loop through collected links.
for(String linkID: links) {
    // If retrieved link is not null...
    if(!GenericValidator.isBlankOrNull(linkID)) {
        // Find the inner HTML of link.
        String editTermSectionInfo = selenium.getEval("window.document.getElementById('" + linkID + "').innerHTML");
        // If retrieved link is expected link...
        if(editTermSectionInfo.equalsIgnoreCase("expectedlink")) {
            // If it is the second appearance of the link then save the link id and break the loop.
            if(isSecondInstanceLink) {
                editInfo = linkID;
                break;
            }
            // Set the flag to true after the first appearance of the link.
            isSecondInstanceLink = true;
        }
    }
}
// Click on link.
selenium.click(editInfo);

You might consider trying the UI-Element extension in this situation.

7.6 Testing Ajax Applications

7.6.1 Waiting for an AJAX Element

In AJAX-driven web applications, using Selenium's waitForPageToLoad wouldn't work, as the page is not actually loaded to refresh the AJAX element. Pausing the test execution for a specified period of time is also not a good approach, as the web element might appear later or earlier than expected, leading to invalid test failures (reported failures that aren't actually failures). A better approach would be to wait for a predefined period and then continue execution as soon as the element is found.

For example, consider a page which brings up a link (link=ajaxLink) on click of a button on the page (without refreshing the page). This could be handled by Selenium using a for loop:

// Loop initialization.
for (int second = 0; ; second++) {
    // If loop has reached 60 seconds then break the loop.
    if (second >= 60) break;
    // Search for element "link=ajaxLink" and if available then break loop.
    try {
        if (selenium.isElementPresent("link=ajaxLink")) break;
    } catch (Exception e) {}
    // Pause for 1 second.
    Thread.sleep(1000);
}

7.7 UI Mapping

A UI map is a centralized location for an application's UI elements; the test script then uses the UI map for locating the elements to be tested. A UI map is a repository for all test script objects. UI maps have several advantages:

• Having a centralized location for UI objects instead of having them scattered throughout the script. This makes script maintenance easier and more efficient.
• Cryptic HTML identifiers and names can be given more human-readable names, increasing the readability of test scripts.

Consider the following example (in Java) of Selenium tests for a website:

public void testNew() throws Exception {
    selenium.open("http://admin.test.com");
    selenium.type("loginForm:tbUsername", "xxxxxxxx");
    selenium.click("loginForm:btnLogin");
    selenium.waitForPageToLoad("30000");
    selenium.click("adminHomeForm:_activitynew");
    selenium.waitForPageToLoad("30000");
    selenium.click("addEditEventForm:_idcancel");
    selenium.waitForPageToLoad("30000");
    selenium.click("adminHomeForm:_activityold");
    selenium.waitForPageToLoad("30000");
}

There is hardly anything comprehensible in this script. Even regular users of the application would not be able to figure out what the script does. A better script would have been:

public void testNew() throws Exception {
    // Open app url.
    selenium.open("http://admin.test.com");
    // Provide admin username.
    selenium.type(admin.username, "xxxxxxxx");
    // Click on Login button.
    selenium.click(admin.loginbutton);
    selenium.waitForPageToLoad("30000");
    // Click on Create New Event button.
    selenium.click(admin.events.createnewevent);
    selenium.waitForPageToLoad("30000");
    // Click on Cancel button.
    selenium.click(admin.events.cancel);
    selenium.waitForPageToLoad("30000");
    // Click on View Old Events button.
    selenium.click(admin.events.viewoldevents);
    selenium.waitForPageToLoad("30000");
}

This version is more comprehensible because of the keyword names used in the script. (Please beware that a UI map is not a replacement for comments!)
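The 60-second loop shown above is one instance of a generic poll-until-true pattern. Below is an illustrative Python sketch — not a Selenium API, the names are ours — where the element check is passed in as a function, so the same timing logic can wrap any condition (e.g. lambda: sel.is_element_present("link=ajaxLink")). Here it is demonstrated with a stub instead of a live browser:

```python
import time

def wait_until(condition, timeout_seconds=60, poll_interval=1.0):
    # Poll the condition until it returns True or the timeout expires,
    # swallowing lookup errors the way the Java try/catch does.
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        try:
            if condition():
                return True
        except Exception:
            pass  # the element may not be queryable mid-update
        time.sleep(poll_interval)
    return False

# Stub standing in for selenium.isElementPresent("link=ajaxLink"):
# the "element" appears on the third poll.
state = {"polls": 0}
def element_appears():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until(element_appears, timeout_seconds=5, poll_interval=0.01))  # True
```

Returning False on timeout (rather than raising) lets the calling test decide whether a missing element is a failure or just one of several acceptable outcomes.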
The whole idea is to have a centralized location for objects and to use comprehensible names for those objects. To achieve this, properties files can be used in Java. A properties file contains key/value pairs, where each key and value are strings.

Consider a properties file prop.properties which holds the definitions of the HTML objects used above:

admin.username = loginForm:tbUsername
admin.loginbutton = loginForm:btnLogin
admin.events.createnewevent = adminHomeForm:_activitynew
admin.events.cancel = addEditEventForm:_idcancel
admin.events.viewoldevents = adminHomeForm:_activityold

Our objects still refer to HTML objects, but we have introduced a layer of abstraction between the test script and the UI elements. Values can be read from the properties file and used in the Test Class to implement the UI map. For more on properties files, follow this URL.

7.8 Bitmap Comparison

This section has not been developed yet.

7.9 Solving Common Web-App Problems

This section has not been developed yet.

• Handling Login/Logout State
• Processing a Result Set

7.10 Organizing Your Test Scripts

This section has not been developed yet.

7.11 Organizing Your Test Suites

This section has not been developed yet.

7.11.1 Data Driven Testing

This section needs an introduction and it has not been completed yet.

In Python:

# Collection of String values
source = open("input_file.txt", "r")
values = source.readlines()
source.close()
# Execute the search-and-assert steps for each String in the values array
for search in values:
    sel.open("/")
    sel.type("q", search)
    sel.click("btnG")
    sel.waitForPageToLoad("30000")
    self.failUnless(sel.is_text_present("Results * for " + search))

Why would we want a separate file with data in it for our tests? One important method of testing concerns running the same test repeatedly with different data values. This is called Data Driven Testing and is a very common testing task. Test automation tools, Selenium included, generally handle this, as it's often a common reason for building test automation to support manual testing methods.

The Python script above opens a text file. This file contains a different search string on each line. The code then saves these in an array of strings, and iterates over the array, doing the search and assert on each string. This is a very basic example of what you can do, but the idea is to show you things that can easily be done with either a programming or scripting language when they're difficult or even impossible to do using Selenium-IDE.

Refer to the Selenium-RC wiki for examples of reading data from a spreadsheet or using the data provider capabilities of TestNG with the Java client driver.

7.12 Handling Errors

Note: This section is not yet developed.

7.12.1 Error Reporting

This section has not been developed yet.

7.12.2 Recovering From Failure

This section has not been developed yet. A quick note though -- recognize that your programming language's exception-handling support can be used for error handling and recovery.

7.12.3 Database Validations

Since you can also do database queries from your favorite programming language, assuming you have database support functions, why not use them for some data validation/retrieval from the Application Under Test? Consider the example of a registration process in which a registered email address is to be retrieved from the database. A specific case of establishing a DB connection and retrieving data from the DB would be:

In Java:

// Load Microsoft SQL Server JDBC driver.
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");

// Prepare connection url.
String url = "jdbc:sqlserver://192.168.180:1433;DatabaseName=TEST_DB";

// Get connection to DB.
public static Connection con = DriverManager.getConnection(url, "username", "password");

// Create statement object which would be used in writing DDL and DML
// SQL statements.
public static Statement stmt = con.createStatement();

// Send SQL SELECT statements to the database via the Statement.executeQuery
// method which returns the requested information as rows of data in a
// ResultSet object.
ResultSet result = stmt.executeQuery("select top 1 email_address from user_register_table");

// Fetch value of "email_address" from "result" object.
String emailaddress = result.getString("email_address");

// Use the fetched value to login to application.
selenium.type("userid", emailaddress);

This is a very simple example of data retrieval from a DB in Java. A more complex test could be to validate that inactive users are not able to login to the application. This wouldn't take too much work from what you've already seen.
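The JDBC example above can be mirrored in Python with any DB-API driver. The sketch below uses an in-memory SQLite database purely as a stand-in (a real SQL Server connection would need its own driver, and SQLite uses LIMIT rather than TOP); the table and column names are taken from the example above:

```python
import sqlite3

# In-memory stand-in for the application's database; a real test would
# connect with the appropriate DB-API driver instead.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE user_register_table (email_address TEXT)")
cur.execute("INSERT INTO user_register_table VALUES ('user@example.com')")

# Equivalent of the JDBC SELECT above (SQLite uses LIMIT, not TOP).
cur.execute("SELECT email_address FROM user_register_table LIMIT 1")
email_address = cur.fetchone()[0]
print(email_address)  # user@example.com

# The fetched value would then drive the UI, e.g.:
# selenium.type("userid", email_address)
```

The shape of the code is the same as the Java version: connect, execute a SELECT, fetch one row, and feed the value into a Selenium command.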
The following examples try to give an indication of how Selenium can be extended with JavaScript.

Example: add a "typeRepeated" action to Selenium, which types the text twice into a text box.

Selenium.prototype.doTypeRepeated = function(locator, text) {
    // All locator-strategies are automatically handled by "findElement"
    var element = this.page().findElement(locator);

    // Create the text to type
    var valueToType = text + text;

    // Replace the element text with the new text
    this.page().replaceText(element, valueToType);
};

Example: add a valueRepeated assertion, that makes sure that the element value consists of the supplied text repeated. An assert method can take up to 2 parameters, which will be passed the second and third column values in the test. The two commands that would be available in tests would be assertValueRepeated and verifyValueRepeated.

Selenium.prototype.assertValueRepeated = function(locator, text) {
    // All locator-strategies are automatically handled by "findElement"
    var element = this.page().findElement(locator);

    // Create the text to verify
    var expectedValue = text + text;

    // Get the actual element value
    var actualValue = element.value;

    // Make sure the actual value matches the expected
    Assert.matches(expectedValue, actualValue);
};

9.3.1 Automatic availability of storeFoo, assertFoo, assertNotFoo, verifyFoo, verifyNotFoo, waitForFoo and waitForNotFoo for every getFoo

All getFoo and isFoo methods on the Selenium prototype automatically result in the availability of storeFoo, assertFoo, assertNotFoo, verifyFoo, verifyNotFoo, waitForFoo and waitForNotFoo commands.

Example: if you add a getTextLength() method, the following commands will automatically be available: storeTextLength, assertTextLength, assertNotTextLength, verifyTextLength, verifyNotTextLength, waitForTextLength and waitForNotTextLength.

Selenium.prototype.getTextLength = function(locator, text) {
    return this.getText(locator).length;
};

Also note that the assertValueRepeated method described above could have been implemented using isValueRepeated, with the added benefit of also automatically getting assertNotValueRepeated, verifyNotValueRepeated, storeValueRepeated, waitForValueRepeated and waitForNotValueRepeated.

9.4 Locator Strategies

All locateElementByFoo methods on the PageBot prototype are added as locator-strategies. A locator strategy takes 2 parameters: the first being the locator string (minus the prefix), and the second being the document in which to search.

Example: add a "valuerepeated=" locator, that finds the first element with a value attribute equal to the supplied value repeated.

PageBot.prototype.locateElementByValueRepeated = function(text, inDocument) {
    // Create the text to search for
    var expectedValue = text + text;

    // Loop through all elements, looking for ones that have
    // a value === our expected value
    var allElements = inDocument.getElementsByTagName("*");
    for (var i = 0; i < allElements.length; i++) {
        var testElement = allElements[i];
        if (testElement.value && testElement.value === expectedValue) {
            return testElement;
        }
    }
    return null;
};
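The naming pattern described above — a single getFoo accessor yielding a whole family of storeFoo/assertFoo/verifyFoo commands — can be mimicked in a few lines of Python. This is purely an illustration of the idea, not Selenium's actual registration code; the class and method names below are invented for the sketch.

```python
class CommandRegistry:
    """Derive store/assert/verify command names from get* accessors,
    mimicking how Selenium auto-registers commands (illustrative only)."""

    def __init__(self, target):
        self.commands = {}
        for name in dir(target):
            attr = getattr(target, name)
            if name.startswith("get") and callable(attr):
                suffix = name[3:]  # e.g. "TextLength" from "getTextLength"
                self.commands["store" + suffix] = attr
                self.commands["assert" + suffix] = self._make_assert(attr)
                self.commands["verify" + suffix] = self._make_assert(attr)

    @staticmethod
    def _make_assert(accessor):
        def check(expected, *args):
            actual = accessor(*args)
            assert actual == expected, "expected %r, got %r" % (expected, actual)
        return check


class FakeSelenium:
    """Hypothetical stand-in for the real Selenium object."""
    def getTextLength(self, locator):
        return len("hello")


reg = CommandRegistry(FakeSelenium())
```

Adding one more get method to FakeSelenium would automatically make the corresponding store/assert/verify commands appear, which is the point of the naming convention.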
9.5 Using User-Extensions With Selenium-IDE

User-extensions are very easy to use with the Selenium-IDE.

1. Create your user extension and save it as user-extensions.js. While this name isn't technically necessary, it's good practice to keep things consistent.
2. Open Firefox and open Selenium-IDE.
3. Click on Tools, Options.
4. In Selenium Core Extensions click on Browse and find the user-extensions.js file. Click on OK.
5. Your user-extension will not yet be loaded; you must close and restart Selenium-IDE.
6. In your empty test, create a new command. Your user-extension should now be an option in the Commands dropdown.

9.6 Using User-Extensions With Selenium RC

If you Google "Selenium RC user-extension" ten times you will find ten different approaches to using this feature. Below is the official Selenium suggested approach.

9.6.1 Example C#

1. Place your user extension in the same directory as your Selenium Server.
2. If you are using client code generated by the Selenium-IDE you will need to make a couple of small edits. First, you will need to create an HttpCommandProcessor object with class scope (outside the SetupTest method, just below private StringBuilder verificationErrors;):

private HttpCommandProcessor proc;

3. Next, instantiate that HttpCommandProcessor object as you would the DefaultSelenium object. This can be done in the test setup:

proc = new HttpCommandProcessor("localhost", 4444, "*iexplore", "http://google.ca/");

4. Instantiate the DefaultSelenium object using the HttpCommandProcessor object you created:

selenium = new DefaultSelenium(proc);

5. Within your test code, execute your user-extension by calling it with the DoCommand() method of HttpCommandProcessor. This method takes two arguments: a string to identify the user-extension method you want to use and a string array to pass arguments. Notice that the first letter of your function is lower case, regardless of the capitalization in your user-extension. Selenium automatically does this to keep common JavaScript naming conventions. Because JavaScript is case sensitive, your test will fail if you begin this command with a capital. inputParams is the array of arguments you want to pass to the JavaScript user-extension. In this case there is only one string in the array because there is only one parameter for our user extension, but a longer array will map each index to the corresponding user-extension parameter. Remember that user extensions designed for Selenium-IDE will only take two arguments.

string[] inputParams = { "Hello World" };
proc.DoCommand("alertWrapper", inputParams);

6. Start the test server using the -userExtensions argument and pass in your user-extensions.js file:

java -jar selenium-server.jar -userExtensions user-extensions.js

using System;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
using NUnit.Framework;
using Selenium;

namespace SeleniumTests
{
    [TestFixture]
    public class NewTest
    {
        private ISelenium selenium;
        private StringBuilder verificationErrors;
        private HttpCommandProcessor proc;

        [SetUp]
        public void SetupTest()
        {
            proc = new HttpCommandProcessor("localhost", 4444, "*iexplore", "http://google.ca/");
            selenium = new DefaultSelenium(proc);
            //selenium = new DefaultSelenium("localhost", 4444, "*iexplore", "http://google.ca/");
            selenium.Start();
            verificationErrors = new StringBuilder();
        }

        [TearDown]
        public void TeardownTest()
        {
            try
            {
                selenium.Stop();
            }
            catch (Exception)
            {
                // Ignore errors if unable to close the browser
            }
            Assert.AreEqual("", verificationErrors.ToString());
        }

        [Test]
        public void TheNewTest()
        {
            selenium.Open("/");
            string[] inputParams = { "Hello World" };
            proc.DoCommand("alertWrapper", inputParams);
        }
    }
}

CHAPTER TEN: .NET CLIENT DRIVER CONFIGURATION

• A Class (.cs) is created. Rename it as appropriate.
• Under the right-hand pane of Solution Explorer, right click on References > Add References.

With this, Visual Studio is ready for Selenium test cases.
CHAPTER ELEVEN: JAVA CLIENT DRIVER CONFIGURATION

Eclipse is a multi-language software development platform comprising an IDE and a plug-in system to extend it. It is written primarily in Java and is used to develop applications in this language and, by means of the various plug-ins, in other languages as well as C/C++, Cobol, Python, Perl, PHP and more.

11.1 Configuring Selenium-RC With Eclipse

The following lines describe the configuration of Selenium-RC with Eclipse Version 3.3 (Europa Release). It should not be too different for higher versions of Eclipse.

• Launch Eclipse.
• Select File > New > Other.
• Select Java > Java Project > Next.
• Provide a name for your project and select the JDK in the 'Use a project specific JRE' option (JDK 1.5 selected in this example) > click Next.
• Keep 'Java Settings' intact in the next window. Project specific libraries can be added here. (This is described in detail in a later part of the document.)
• Click Finish > click Yes in the Open Associated Perspective pop-up window.

This creates Project Google in the Package Explorer/Navigator pane.

• Right click on the src folder and click on New > Folder. Name this folder com and click on the Finish button. This should get the com package inside the src folder.
• Following the same steps, create a core folder inside com. The SelTestCase class can be kept inside the core package.
• Create one more package inside the src folder named testscripts. This is a place holder for test scripts. Please notice this is about the organization of the project and it entirely depends on an individual's choice / an organization's standards. Test scripts packages can further be segregated depending upon the project requirements.
• Create a folder called lib inside project Google (right click on the project name > New > Folder). This is a place holder for jar files for the project (i.e. Selenium client driver, Selenium server etc). This creates the lib folder in the project directory.
• Right click on the lib folder > Build Path > Configure Build Path.
• Under the Library tab click on Add External Jars to navigate to the directory where the jar files are saved. Select the jar files which are to be added and click on the Open button.
• After having added the jar files click on the OK button. The added libraries appear in the Package Explorer.

11.2 Configuring Selenium-RC With IntelliJ

• Provide a name and location for the project.
• Click Next and provide the compiler output path.
• Click Next and select Single Module Project.
• Click Next and select the JDK to be used.
• Click Next and select Java module.
• Click Next and provide the Module name and Module content root.
• Click Next and select the Source directory.
• At last, click Finish. This will launch the Project pane.

Adding Libraries to the Project:

• Click on the Settings button in the Project toolbar.
• Click on Project Structure in the Settings pane.
• Select Module in Project Structure and browse to the Dependencies tab.
• Click on the Add button followed by a click on Module Library.
• Browse to the Selenium directory and select selenium-java-client-driver.jar and selenium-server.jar. (Multiple jars can be selected by holding down the control key.)
• Select both jar files in the project pane and click on the Apply button.
• Now click OK on Project Structure followed by a click on Close on the Project Settings pane. The added jars appear in the project Library.
• Create the directory structure in the src folder.

CHAPTER TWELVE: PYTHON CLIENT DRIVER CONFIGURATION

• Download Selenium-RC from the SeleniumHQ downloads page.
• Extract the file selenium.py.
• Either write your Selenium test in Python or export a script from Selenium-IDE to a Python file.
• Add the file selenium.py to your test's path.
• Run the Selenium server from the console.
• Execute your test from a console or your Python IDE.

The following steps describe the basic installation procedure.

Installing Python

Note: This will cover Python installation on Windows and Mac only, as in most Linux distributions Python is already pre-installed by default.

Windows

1. Download ActivePython's installer from ActiveState's official site: activestate.com/Products/activepython/index.mhtml
2. Run the downloaded installer (ActivePython-x.x.x.x-win32-x86.msi).

Mac

The latest Mac OS X version (Leopard at this time) comes with Python pre-installed. To install an extra Python, get a universal binary at pythonmac.org/ (packages for Python 2.5.x). You will get a .dmg file that you can mount. It contains a .pkg file that you can launch.

Installing the Selenium driver client for Python

1. Download the last version of Selenium Remote Control from the downloads page.
2. Extract the content of the downloaded zip file.
3. Copy the module with Selenium's driver for Python (selenium.py) into the folder C:/Python25/Lib (this will allow you to import it directly in any script you write). You will find the module in the extracted folder; it's located inside the selenium-python-driver-client folder.

Congratulations, you're done! Now any Python script that you create can import selenium and start interacting with the browsers. After following this, the user can start using the desired IDE (or even write tests in a text processor and run them from the command line!) without any extra work, at least on the Selenium side.

CHAPTER THIRTEEN: LOCATING TECHNIQUES

13.1 Useful XPATH patterns

13.1.1 text

Not yet written - locate elements based on the text content of the node.

13.1.2 starts-with

Many sites use dynamic values for element id attributes, which can make them difficult to locate.
One simple solution is to use XPath functions and base the location on what you do know about the element. For example, if your dynamic ids have the format <input id="text-12345" /> where 12345 is a dynamic number, you could use the following XPath:

//input[starts-with(@id, 'text-')]

13.1.3 contains

If an element can be located by a value that could be surrounded by other text, the contains function can be used. To demonstrate, the element <span class="top heading bold"> can be located based on the 'heading' class without having to couple it with the 'top' and 'bold' classes, using the following XPath:

//span[contains(@class, 'heading')]

Incidentally, this would be much neater (and probably faster) using the CSS locator strategy css=span.heading.

13.1.4 siblings

Not yet written - locate elements based on their siblings. Useful for forms and tables.

13.2 Starting to use CSS instead of XPATH

13.2.1 Locating elements based on class

In order to locate an element based on an associated class in XPath you must consider that the element could have multiple classes, defined in any order. However, with CSS locators this is much simpler (and faster):

• XPath: //div[contains(@class, 'article-heading')]
• CSS: css=div.article-heading
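The caveat behind these patterns — that contains(@class, 'heading') is a substring match and would also match class="article-heading" — can be made concrete with a tiny sketch. The token-based check is roughly what a CSS class selector does; this is an illustration, not browser code.

```python
def substring_match(class_attr, name):
    # What XPath contains(@class, name) does: plain substring search.
    return name in class_attr

def token_match(class_attr, name):
    # What a CSS .name selector effectively does: whole-token match
    # against the space-separated class list.
    return name in class_attr.split()

# "article-heading" is caught by the substring check but not the token check,
# while "top heading bold" is matched by both.
```

This is why css=span.heading is both neater and safer than the contains() XPath when you only want elements that actually carry the heading class.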
Do you like Python? .... Of course you do ... if you have used it ... I mean really used it ... there is no way that you haven't fallen in love with it ... I did ... AFTER learning C++, then Java, then C#, then VB, then PHP, then ActionScript, then JavaScript .... learnt Python and loved my life!

Now you have a web project at hand with a short timeline, so you want to use Python. You research and find really nice web frameworks. There's Django, Pylons, TurboGears, CherryPy (I love CherryPy!), Zope (Plone). They are all good, but you want something more. You want your user interface to blow the client's mind away! You have heard of RIA and would love it if there was some way to integrate with Flex. You do a Google search and you find PyAMF. Looks promising. But you are afraid to use it because of all the ActionScript code you are going to have to write to interface Python with it.

Well .... you are a Python programmer, right? Why not write a Python script to generate the ActionScript for you! Well, you are in luck ... been there ... done that :)

So what was this all about again? Enter pyamfcodegen.

Just write your code for the server (using Django for session management) and you can use complete object-oriented data transfers! There are just three things you need to do:

- #s:serviceurl — to expose a class as a service, e.g.:

#s:
class coolservice:

This would mean you want to expose coolservice at the given service URL.

- #f:returntype:parameteronetype,parametertwotype,... — to expose functions, e.g.:

#f:bool:String,String
def login(self,username,password):

- And finally, #d:propertytype — to expose properties, e.g.:

self.username=username #d:String

HAPPY CODING :)

Code at:
Frameworks:
Screencast : Python Code Generator from Basarat Ali Syed on Vimeo.
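To give an idea of how little machinery such a generator needs, here is a sketch that parses the #s and #f annotation comments described above into a structure an ActionScript emitter could consume. This is not the actual pyamfcodegen code — the annotation format follows the post, everything else (function names, the sample service) is assumed.

```python
import re

def parse_service(source):
    """Parse the #s/#f annotation comments into a dict (illustrative sketch)."""
    service = {"name": None, "url": "", "functions": []}
    lines = source.splitlines()
    for i, raw in enumerate(lines):
        line = raw.strip()
        if line.startswith("#s:"):
            # The class definition on the next line gives the service name.
            m = re.search(r"class\s+(\w+)", lines[i + 1])
            service["name"] = m.group(1)
            service["url"] = line[3:].strip()
        elif line.startswith("#f:"):
            # The annotation carries return type and parameter types;
            # the def on the next line gives the function name.
            returns, _, params = line[3:].partition(":")
            m = re.search(r"def\s+(\w+)", lines[i + 1])
            service["functions"].append({
                "name": m.group(1),
                "returns": returns,
                "params": params.split(",") if params else [],
            })
    return service

example = """#s:services/cool
class coolservice:
    #f:bool:String,String
    def login(self, username, password):
        pass
"""
parsed = parse_service(example)
```

From a structure like this, emitting the matching ActionScript remote-service stub is just string templating.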
Business Case: We had a requirement to send multiple files located in different directories of a source Business System (FTP server) to a target Business System (FTP server) and to create files in the target based on the payload.

Procedure: In order to achieve the above business case we need to process the multiple files at the sender communication channel using the file adapter. The sender file adapter has the option Advanced Selection for Source File. Check the option and maintain the list of directories from the source FTP server.

As per the above mentioned directories, place the respective files in their folders. For example, the sender folder holds the file out.xml; in the same way, place the rest of the files in the sender1 and sender2 directories.

If you want to process all files with extension '.xml' and exclude files that begin with the letter 'a', enter *.xml for File Name and a* for Exclusion Mask.

In the file receiver communication channel, to create multiple files in the target system we use Variable Substitution. In the Advanced tab of the communication channel, check Enable under variable substitution, then maintain the variable name and the corresponding reference of the variable. The variable can refer to an attribute of the message header or to an element in the payload. To create multiple files in the target system we refer to the 'Name' element in the payload, using the prefix payload:

payload:MT_Employee,1,Employee,1,Details,1,Name,1

If we want to get the values from the header message, use the prefix message: instead of payload:, e.g. message:interface_namespace as the reference value of the variable. For instance, in the above figure the variable 'var1' holds the reference of the 'Name' element, which is used in the target filename schema for creating files in the target system.

Output:

Will this work for files at the sender side with dynamic names? Thanks.
Hari, what I understood from your question is: if we do not maintain the actual file names, we can use *.* as the file name and pick up all the files located in that particular folder. On the receiver side we are using Variable Substitution, so based on the payloads from the source side that many files should get generated. I did not try it, but I think it will work.

Thanks
Pavan kumar Nukala
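The payload reference used above (payload:MT_Employee,1,Employee,1,Details,1,Name,1) walks pairs of element name and occurrence number from the root of the message. A small Python sketch of that resolution logic, using the standard library's ElementTree — the sample payload below is assumed, and this is of course not SAP PI's own code:

```python
import xml.etree.ElementTree as ET

def resolve_payload_path(xml_text, path):
    """Resolve a variable-substitution path such as
    'MT_Employee,1,Employee,1,Details,1,Name,1' against a payload.
    Pairs of (element name, occurrence) are walked from the root."""
    parts = path.split(",")
    pairs = [(parts[i], int(parts[i + 1])) for i in range(0, len(parts), 2)]
    root = ET.fromstring(xml_text)
    if root.tag != pairs[0][0]:
        raise ValueError("unexpected root element: %s" % root.tag)
    node = root
    for name, occurrence in pairs[1:]:
        # Occurrence numbers are 1-based in the reference syntax.
        node = node.findall(name)[occurrence - 1]
    return node.text

payload = """<MT_Employee>
  <Employee>
    <Details>
      <Name>Smith</Name>
    </Details>
  </Employee>
</MT_Employee>"""

filename_part = resolve_payload_path(payload, "MT_Employee,1,Employee,1,Details,1,Name,1")
```

The value resolved this way is what ends up substituted into the target filename schema, one file per processed payload.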
Script Editor Contents - 1 The Lianja Script Editor - 2 The Lianja CodeAssistant for Lianja/VFP - 2.1 Intellisense - 2.2 IntelliTips - 2.3 Statement Completion - 2.4 Auto Indenting - 2.5 Code Snippets - 2.6 Code Beautifier - 2.7 Code Folding - 2.8 Integration with the Documentation Wiki - 2.9 Extending Lianja with your own Intellisense Definitions - 3 See Also The Lianja Script Editor. Select the "Apps" workspace. Select the "Apps Files" Tab in the App Inspector. Double click a filename to edit it. Overview The Script Editor has a tabbed UI. Double click a Tab and the editor will be detached into its own floating window. Double click the window title bar and it will be attached back as a Tab. Edit a Script Files are categorized in the Files explorer in the sidebar or App Inspector. Double-clicking a file name opens it in the Script Editor. As a power user you can create and edit script files in the console. ed filename.ext Note: - Files can also be opened for editing by first selecting the file name then clicking on the edit button in the actionbar or right-clicking and selecting 'Open File' or 'Open File in Window' from the context menu. - Image Files are also listed in the Files explorers. Double-clicking on an image file opens it using the system default app. - Electron template files generated in the Build Workspace are listed in the Apps workspace Files explorer under the Electron Files category. Double-clicking on a .html, .js or .json file name will open it in the Script Editor. Context Menus Right-click in the Script Editor panel to display the editing context menu: Insert Unicode control character With a file open, right-click on the tab containing the filename to display the tab context menu: Note: when 'Quick Deploy' is selected, if the file is a script file, it is automatically compiled then deployed if the compilation is successful (from v5.0). 
New Script To create a new script of any of the file types listed in the table above, click the + button in the actionbar at the bottom of the Files explorer, or right-click in the Files explorer panel and click 'New ...'. This will display the 'Create a new file' explorer in the current App directory (Apps workspace) or Lianja Library (Library workspace), allowing a file type to be selected from the pulldown and a file name to be entered. Saving a Script Scripts can be saved using any of the following: Find Panel and Advanced Panel From the Find Panel, you can position on a specific line number and do search and replace operations. The Advanced Panel has a number of tabs providing the results of App or Library wide searches along with script and compilation output. To hide the Find and Advanced Panels, click their x. Press [Ctrl] + f to display the Find Panel and click 'Advanced...' to display the Advanced Panel. Find Panel Operations Advanced Panel Tabs To refresh a tab, click the Refresh icon in the bottom left hand corner of the panel. Themes From Lianja v4.1 you can change the color theme of the Script Editor. The theme can be chosen from a pre-built selection, or you can create your own theme, assigning colors to the different display categories: background, foreground, keyword, declaration, single line comment, multi-line comment, class, object, string, placeholder, number, operator. Select the Themes tab at the bottom of the Advanced panel. Available themes are listed in the pulldown. Select a theme to apply it. To create your own theme, first click the New button and enter a unique theme name. Then click on the display categories to choose their colors. Here the chosen color will be used for the editor background. Select a scripting language from the pulldown to see how sample code looks when using your theme. Once your theme is complete, click the Save button and your theme will be saved and applied. 
Your theme will also now be listed in the available themes pulldown. Keyboard Reference The Lianja CodeAssistant for Lianja/VFP The Lianja CodeAssistant includes: - Intellisense - IntelliTips - Statement Completion - Auto Indenting - Code Snippets - Code Beautifier - Code Folding - Integration with the Documentation Wiki Intellisense Lianja provides Intellisense for commands, functions, object variables and cursors. Intellisense is implemented in a background thread so that it is kept up-to-date in real time. Move the mouse cursor over a variable in the file. Press and release the Shift key. All occurrences of that variable will be highlighted. Command Intellisense Typing the first letter of a command pops up a pick list of commands beginning with that letter. l As you continue to type, the list is filtered. lo When the required command is highlighted, press [Tab] or [Return] to select it and display a pick list of its keywords and clauses. Similarly, typing a command followed by a space pops up the context sensitive pick list of keywords and clauses for the command being typed. list<space> Selecting a clause from the pick list e.g for <lExp> will guide you as you type in the statement. At any time you can type [Ctrl] + [Space] to enable/disable the command pick list. Command Abbreviations Command abbreviations are a coding productivity aid used to speed input of two-word commands. Any combination of letters that uniquely identifies a command by the first letter(s) of the first word followed by the first letter of the second word can be used. For example, typing s Displays a pick list of all the commands beginning with 's'. But typing sd Offers 'save datasession' only, as no commands begin with 'sd' and no other two-word commands have a second word beginning with 'd'. 
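The two-word abbreviation rule just described — one or more leading letters of the first word followed by the first letter of the second word — can be sketched as a simple matcher. This is an illustration of the rule as documented, not Lianja's implementation, and the command list is a small assumed sample.

```python
def match_abbreviation(abbrev, commands):
    """Return the two-word commands uniquely addressed by an abbreviation:
    a prefix of the first word plus the first letter of the second word.
    Illustrative sketch only."""
    matches = []
    for command in commands:
        words = command.split()
        if len(words) != 2:
            continue
        first, second = words
        prefix, last = abbrev[:-1], abbrev[-1]
        if prefix and first.startswith(prefix) and last == second[0]:
            matches.append(command)
    return matches

commands = ["open database", "list databases", "list memory",
            "close database", "save datasession"]
```

Run against this sample list, "od" resolves to "open database" while "lm" resolves to "list memory", matching the behaviour described in the text.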
Here are some more examples of abbreviations: od -> open database ld -> list databases lm -> list memory clod -> close database Special Command Intellisense Some commands have special pick lists depending on context. open database<space> This will popup a pick list containing database names. use<space> This will popup a pick list containing table names. modify command<space> or ed<space> or mc<space> This will popup a pick list containing program script filenames. Function Intellisense Typing a function name followed by an open bracket pops up an intellitip for the function and auto-inserts the function template. s = substr( After typing in the code for the highlighted argument, type a comma (,) or press Tab or Return to move on to the next argument. s = substr(products.productname, at( Type ) to close. Nested intellitips are stacked and unstacked when a ) is typed. Object Variable Intellisense Typing an object variable name followed by a . pops up a pick list of properties and methods for the object variable. Hovering the mouse over items in the pick list displays a tooltip with a short description of the item. Object variable Intellisense requires any of the following to be present in the file being edited. local|private|public|parameter|lparameter name as classname or name = createObject("classname") or obj.addObject("name","classname") or name = Lianja.getElementByID("Id") name = Lianja.get("id") In the latter case, the specified 'id' is introspected to identify the class based on the pages, sections and formitems in the currently open App. The Lianja system object is known to the script editor so now typing: lianja. Pops up the intellisense for it. Object Variable Intellisense Heuristics When typing a variable name followed by a . (dot), if the variable is untyped and it begins with "page" it is implied that it is a PageBuilder class and the pick list for that will be displayed. 
Similarly, if the variable begins with "section" it is implied that it is a Section, and if the variable begins with "field" it is implied that it is a FormItem. As well as the PageBuilder properties and methods, the Page's pick list also includes its sections: As well as the Section properties and methods, the Section's pick list also includes its formitems: Note that for a Section, you can also access the internal Grid (Grid Section), Pageframe (TabView Section) or Webview (Webview Section). If you follow the suggested format for naming object variables as described on MSDN then Lianja will use heuristics to determine the class of untyped object variables. Cursor Intellisense Tables that are open during editing are known to the editor so when for example you have the products table open and you type: products. The columns in the products table are displayed as a pick list. Hovering the mouse over a column name will display a tooltip containing useful information regarding the column e.g. data type, width, decimals. You can toggle Intellisense on and off by pressing [Ctrl] + [Space]. IntelliTips Moving the mouse cursor over a command while pressing the control key will popup the IntelliTip for the command. Moving the mouse cursor over a function name followed by a ( while pressing the control key will popup the IntelliTip for the function. Moving the mouse cursor over a variable name or an objectname.propertyname or a cursorname.columnname while pressing the control key will popup a tooltip displaying the current value. You can toggle IntelliTips on and off by pressing [Ctrl] + /. Statement Completion Typing a statement closing tag (e.g. endif in an if statement block) followed by the return key will close the statement block off for you and move the cursor onto the next line at the previous block indentation. Auto Indenting When you press the return key while typing commands the cursor will move onto the next line and auto indent for you. 
Code Snippets Code snippets are a productivity aid when coding. As you type a command, any code snippets that match it are displayed in a pick list. e.g. Here is the snippet called 'ife' for an if...else...endif statement. if ${condition} ${insert your code here} else ${insert your else code here} endif Typing if displays the matching snippets 'if' and 'if else'. Press [Return] to select 'if' or double-click on 'if else' to select 'if else'. Typing ife displays the matching snippet 'if else'. Press [Return] to select 'if else'. After selecting 'if else' the snippet is inserted. Snippets can contain code insertion marks: ${name} or with a default value: ${name:something} or for optional input: ${name:} If the parameter insertion point contains a colon (:) pressing [Tab] will insert the text following the colon. e.g. Here is the for...endfor snippet. Pressing [Tab] on 'var' and 'start' will insert 'i' and '1' respectively. for ${var:i} = ${start:1} to ${end} ${insert your code here} endfor After typing in a parameter, press [Tab] to move onto the next one. There is also an invisible code insertion mark which causes the cursor to move to that position after the snippet is inserted. ${} Note: code snippet files (found in lianja\help\) can also be empty, in which case the name of the file is inserted with '_' replaced by a space. For example, the snippet file 'list_structure.snippet' appears as 'list structure' in the pick list and, if selected, will insert list structure into the command line. You can edit your own snippets in the Snippet Manager by pressing [Alt] + s or selecting the option from the system menu. Here, I've created the 'sw' snippet to open the southwind database and certain tables. Now, when I type sw, the 'sw' snippet appears in the picklist and pressing [Return] inserts the snippet code. Note that the ${} invisible code insertion mark at the end of the snippet, is not inserted, but rather determines the cursor position after snippet insertion. 
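The ${name}, ${name:default} and invisible ${} insertion marks can be modelled with a small expander. This is a simplified sketch of the behaviour described above — supplied values stand in for the interactive typing, and it is not Lianja's snippet engine.

```python
import re

def expand_snippet(snippet, values=None):
    """Expand ${name}, ${name:default} and ${} insertion marks.
    Supplied values win; otherwise the default after the colon is used;
    a bare name is left as a visible placeholder. The invisible ${}
    cursor mark is simply removed."""
    values = values or {}

    def repl(match):
        body = match.group(1)
        if body == "":
            return ""  # invisible cursor-position mark
        name, colon, default = body.partition(":")
        fallback = default if colon else name
        return str(values.get(name, fallback))

    return re.sub(r"\$\{([^}]*)\}", repl, snippet)

snippet = "for ${var:i} = ${start:1} to ${end}\n    ${insert your code here}\nendfor${}"
expanded = expand_snippet(snippet, {"end": "10"})
```

With only "end" supplied, the var and start marks fall back to their defaults (i and 1), just as pressing [Tab] on them would in the editor.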
Code Beautifier

While editing you can press [Ctrl] + b to beautify your code with statement block indentation.

Code Folding

Code blocks such as if...endif, for...endfor, scan...endscan and do case...endcase are automatically indicated in the left margin with a small + or - icon at the start of the block and an arrow at the end. Click the - icon to fold the code block. Click the + icon to unfold a folded code block. Hovering with the mouse over a - icon highlights the whole code block by changing the background color. Hovering with the mouse over a + icon displays the folded code in an IntelliTip.

Code Folding Keyboard Reference

Custom Foldable Blocks

You can create your own foldable code block using regions. Here, the usage information has been placed in a region. Precede the block with the following line, giving it a helpful description:

#region description

and terminate the block with:

#endregion

The indicator and hover actions are the same as for built-in code blocks. Here, the block has been folded. Note that regions cannot be nested.

Integration with the Documentation Wiki

Pressing [F1] fetches and displays the help page from the online Documentation Wiki for the command or function being typed or for the selected command. For commands with the same first word, follow the links to show the required command.

Extending Lianja with your own Intellisense Definitions

If you have existing libraries of classes or functions, you can place your own intellisense files in the lianja\help directory. If you look at the existing files in that directory you will see that they are just text files that are pre-loaded at startup. e.g.

intellitips_vfp_yourcompany.properties (for function definitions)
intellisense_vfp_yourcompany.properties (for class definitions)

Alternatively, you can create a file called references_vfp.config and place it in your app directory. This file can contain function and global variable type definitions that intellisense will use as hints.
The file should contain triple-slash comments like this:

/// <
/// <

type can be any Lianja classname or Any, Character, Numeric, Logical, Date, Datetime, Currency, Array, Object.

Any of the files you edit can also include type definitions by adding:

/// <reference path="filename" />

where filename should exist in the Lianja help directory. Alternatively, prefix the name with lib:/ or app:/ to reference definition files in an App or in the library.

See Also

Editor Settings, Guide to the Apps Workspace (Video), Lianja 3 App Inspector (Video)
https://www.lianja.com/doc/index.php/Script_Editor
You can use Vanity Guids for:

- Branded software (e.g., your company or product ID)
- Debugging (e.g., easily spot a specific Guid in a running program or long list of IDs)
- Geek humor or insults

So how do you make a Vanity Guid? Answer: The brute force (and easy) way is to create billions of Guids until you find one that meets your desired pattern.

But why not just manually create whatever Guid you want, such as aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa? Answer: With a manual approach, you cannot guarantee the Guid is globally unique.

In addition to obvious patterns, you can also embed words in Vanity Guids. Of course, with a Guid you're limited to hexadecimal digits (0-9, A-F). But you can be creative and substitute numbers for letters, such as:

babeface-81f0-4a99-9e10-3ba203c54f4e
badb100d-0ea7-4208-bce9-043bb590e2c2
b19b00b5-3772-4a86-aece-0fe19729adf0

Following is a simple console program that generates Vanity Guids whose first block contains all the same character, as shown in the list at top. The list is written to a text file and then opened in Notepad.

using System;
using System.Diagnostics;
using System.IO;
using System.Text;

namespace CSharp411
{
    class Program
    {
        static void Main( string[] args )
        {
            StringBuilder sb = new StringBuilder();
            int count = 0;
            while (count < 10)
            {
                string guid = Guid.NewGuid().ToString();
                char c = guid[0];
                if (c == guid[1] && c == guid[2] && c == guid[3] && c == guid[4] &&
                    c == guid[5] && c == guid[6] && c == guid[7])
                {
                    count++;
                    sb.Append( guid );
                    sb.Append( "\r\n" );
                    Console.WriteLine( guid );
                }
            }
            string path = @"C:\temp\Guids.txt";
            File.WriteAllText( path, sb.ToString() );
            Process.Start( "notepad.exe", path );
        }
    }
}

BTW, I modified the sample program to run until it generated the complete list shown at top (i.e., one Vanity Guid for each hex digit). It ran overnight and finished after 10 hours 36 minutes! The other interesting thing is it generated all but 4 Guids within the first hour. Took another hour for the next 2.
Then eight more hours for the final 2. The laptop fan was running all night!

Perhaps there is a programmatic way to approach this directly, i.e. create a Guid manually and instantly, while remaining globally unique. But I'll leave that as an exercise to my smart readers. 🙂

You can't guarantee that a System.Guid will be globally unique either, to the point of it being fairly pointless using the Guid class to create a few billion GUIDs just to force-retrieve what is essentially a manually created one anyway. I think this article is a joke.

But a Guid is mathematically likely to be globally unique, which is why it's called a Guid. From Wikipedia: This number is so large that the probability of the same number being generated twice is extremely small: assuming the universe is 13.75 billion years old, and that today's fastest supercomputer (the Tianhe-1A) at 2.5 petaflops could generate 2.5×10^15 random GUIDs every second, if it had been dedicated exclusively to this task nonstop since the Big Bang, it still would have odds of less than one in 300,000 of ever having generated a duplicate.

How about you generate a couple of guids and replace some of their characters with what you want? I'd think the chances of those guids being unique are the same as those of the ones your code generates.

The uniqueness of manually-generated GUIDs drops drastically and cannot be guaranteed. Though this is a theoretical article just for fun, one should never manually generate an ID and expect it to be unique.
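A back-of-the-envelope check on those running times (my own arithmetic, not from the article): a random hex digit matches the block's first digit with probability 1/16, so all seven remaining digits of the 8-digit first block match with probability (1/16)^7, i.e. one all-same-prefix Guid per 16^7 ≈ 268 million draws on average. A tiny Python sketch of that estimate:

```python
# Expected number of random GUIDs per "first block all identical" hit.
# The first block has 8 hex digits; the 7 digits after the first must
# each independently match it, so p = (1/16)**7 and E[draws] = 16**7.

def expected_attempts(prefix_len: int) -> int:
    """Expected draws until `prefix_len` leading hex digits are all equal."""
    return 16 ** (prefix_len - 1)

if __name__ == "__main__":
    print(expected_attempts(8))  # 268435456, about 268 million
```

That also squares with the lopsided timings reported above: collecting one Guid per hex digit is a coupon-collector problem, so the last few unseen digits take disproportionately longer than the first dozen.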
http://www.csharp411.com/vanity-guids/
hello i have got a little problem :)

Create a C program which starts with 3 processes. The first process generates 50 random numbers and writes them into common (shared) memory. The second process reads them and writes the even numbers into file A, the odd numbers into file B.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int A;
    srand(time(0));
    for (A = 0; A < 50; ++A)
    {
        int number;
        number = rand();
        printf("%d\n", number);
    }
    return 0;
}

i created the first part of the program. i also know how to find odd and even numbers:

if ( x % 2 == 0 )
{
    // even
}
else
{
    // odd
}

i know that i must insert fd = open("blabla", O_RDWR|O_CREAT, 0) after the "if", but my knowledge isn't enough :( thanks
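The missing piece of the question above is the split into file A (even numbers) and file B (odd numbers). The parity test is the % check already quoted; here is a hedged, language-neutral sketch of that logic in Python (the file names A and B follow the assignment; in the actual C program the numbers would come from shared memory filled by the first process, e.g. via mmap or shmget, and the files would be opened with open()/fdopen() in the same pattern):

```python
import random

def generate_numbers(count=50):
    """First process's job: produce the random numbers (stand-in for shared memory)."""
    return [random.randint(0, 32767) for _ in range(count)]

def split_numbers(numbers, even_path="A", odd_path="B"):
    """Second process's job: even numbers go to file A, odd numbers to file B."""
    with open(even_path, "w") as even_file, open(odd_path, "w") as odd_file:
        for n in numbers:
            if n % 2 == 0:
                even_file.write(f"{n}\n")  # even -> file A
            else:
                odd_file.write(f"{n}\n")   # odd -> file B

if __name__ == "__main__":
    split_numbers(generate_numbers())
```

The split loop maps one-for-one onto C: the if/else already shown, with fprintf(even_file, "%d\n", n) and fprintf(odd_file, "%d\n", n) in the two branches.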
https://www.daniweb.com/programming/software-development/threads/123917/random-numbers-finding-odd-and-even-numbers
I have a model class that caches data in Redis. The first time I call a method on the model, it computes a JSON/Hash value and stores it in Redis. Under certain circumstances I 'flush' that data and it gets recomputed on the next call. Here's a code snippet similar to the one I use to store the data in Redis:

def cache_data
  self.data_values = data_to_cache
  REDIS.set(redis_key, ActiveSupport::JSON.encode(self.data_values))
  REDIS.get(redis_key)
end

def data_to_cache
  # generate a hash of values to return
end

I like to have Redis running while the tests are running. Redis, unlike e.g. Postgres, is extremely fast and doesn't slow down test run time noticeably. Just make sure you call REDIS.flush in a before(:each) block, or the corresponding Cucumber hook. You can test data_to_cache independently of Redis, but unless you can fully trust the Redis driver you're using and the contract it provides, it's safer to actually test cache_data (and the corresponding cache fetch method) live. That also allows you to switch to a different Redis driver (or to a different fast KV store) without a wholesale rewrite of your tests.
https://codedump.io/share/wgmP94G1FhVK/1/writing-tests-with-rspec-for-redis-with-rails
django step through the code

Is it possible to step through the code in Django (I mean step through while debugging)?

Answers

To address debugging: instead of step-based debugging in the framework itself, it is preferable in the Django community to provide unit tests. If you are building a module, Django provides facilities to test applications. For step-through debugging you may need an IDE to handle it: AFAIK Django doesn't provide a facility to do that.

Yes, you can do that by using the Python debugger module, pdb. I have covered the topic of debugging Django applications on my blog before. In a nutshell, if you are using the Django development server, you can easily step through your Django application by placing a breakpoint with the statement import pdb; pdb.set_trace() at any point in your view code where you want to start debugging, and then stepping through in the debugger that is invoked on the shell where the Django development server was running.

Yes, as long as you're running in the development server. If so, just put this into your code at the point you want to stop:

import pdb; pdb.set_trace()

and you will be dumped into the debugger on the console, from where you can step through to your heart's content.
http://unixresources.net/faq/4348202.shtml
Based on our analysis, we would not be able to stabilize the JDBC EoD RI without slipping the Java SE 6 schedule by at least 8 weeks, which is something that none of us want. The JDBC 4.0 Expert Group agrees that we should not delay Java SE 6 for this feature, and as of build 101 of Java SE 6, the JDBC EoD RI has been removed. I do believe that the proposed features in the JDBC EoD API are useful, and we will work toward including an improved version of the API in Java SE 7.

Are you talking about the API or the RI?

Posted by: mernst on October 05, 2006 at 10:14 AM

Both have been removed from SE 6.

Posted by: lancea on October 05, 2006 at 10:20 AM

Looking at TS-3280, pretty much everything in JDBC 4.0 was labelled as "Ease of Development", which I'm assuming is what you mean by EoD. So, what will actually remain, and what will go? For example, the extra BLOB and CLOB features are really handy, although the object persistence and the automatic driver registration features seem less critical. Thanks in advance for clarifying! - Chris

Posted by: chris_e_brown on October 06, 2006 at 01:06 AM

I was expecting this, because I found a couple of problems with DataSets myself. Also, the SQL parsing for ?1 parameters was buggy.

Posted by: fuerte2 on October 06, 2006 at 05:27 AM

The feature that is being removed is what is defined in Chapter 19, Ease of Development. This is the DataSet and associated Annotations such as Select and Update. All other features are on schedule as planned, such as the LOB enhancements. Regards, Lance

Posted by: lancea on October 06, 2006 at 06:50 AM

Personally, I had limited interest in the Query/DataSet class, but I had hoped to see my two pet peeves addressed: eliminating Class.forName and standard exception chaining.
Posted by: cayhorstmann on October 06, 2006 at 06:51 AM

Your pet peeves, Auto Discovery of java.sql.Drivers and standard exception chaining, are still included in JDBC 4.0.

Posted by: lancea on October 06, 2006 at 06:54 AM

You took away the single most important reason why I would have loved to upgrade to Java 6. I guess I will have to stick to 1.4 at my day job and wait for a Java replacement to show up. When you guys came out with Java 5, most programmers in my company had little incentive to learn it. The reason: WebSphere did not support 1.5 until a few months ago. Even if WebSphere had supported it, the two big features were generics and annotations. Most of the programmers had learned to live without generics, and Sun did not add any annotation applications which would convince people that it's worth learning. So instead of spending time on Java, a lot of people in my company spent time learning Spring, and now Spring is one of the key technologies for our new development. I was very happy that Sun added two applications of annotations (in JDBC and web services) as part of Java SE. Now you have removed the annotations for JDBC, which would have appealed to the maximum number of developers. The number of people writing JDBC is a lot larger than the number of people writing web services in enterprise apps. It should come as no surprise that a lot of Java developers will not migrate to Java 6.

Posted by: nakuja on October 06, 2006 at 09:50 AM

Hi Lance, even though the exclusion of the JDBC EoD API is understandable, it is kind of sad for me. I was pretty excited about the DataSet and Annotation stuff. Anyway, are you guys planning to release the JDBC EoD API as a separate download (as the API improves) or do we have to wait for Java SE 7? Thanks in advance, Alex.

Posted by: alruiz15 on October 06, 2006 at 11:41 AM

Hi Alex, thanks for the feedback. It might be possible to release the API to get feedback sooner than Java SE 7.
Are there specific things that you liked in the current API, or would like to see in an improved version if possible? -lance

Posted by: lancea on October 06, 2006 at 11:47 AM

My JOKE :) the EoD is removed because it is too similar to the Java Persistence API :)

Posted by: fcmmok on October 06, 2006 at 01:49 PM

nakuja, check out JPA (= EJB3 entities). That might make annotations worth learning. It's not the right solution for everything, and it is obviously at a higher level than the JDBC annotations. But it saves a huge amount of drudgery, and you can put it to work even if you aren't interested in the rest of EJB.

Posted by: cayhorstmann on October 06, 2006 at 01:54 PM

Thank you very much for your reply. I have only been playing with DataSets using toy examples. I apologize in advance if my question does not make sense. Let's assume we have these simple classes:

public class EmailAccount {
    public String accountName;
    public Email email;
    // some behavior here
}

and

public class Email {
    public String address;
    // some behavior here
}

I would like to do something like this in one of my query interfaces:

@Select("SELECT ACCOUNT_NAME accountName, EMAIL_ADDRESS email.address FROM EMAIL_ACCOUNTS")

AFAIK, only mapping of simple fields is supported. I would like to see mapping of more complex properties (a la iBATIS). Do you think it makes sense? Thank you in advance, Alex

Posted by: alruiz15 on October 10, 2006 at 10:56 PM

Alex, thanks for the suggestion. The original intent of the API was to keep it similar to a ResultSet, so there is a 1-1 mapping of the returned results to the DataSet. Your suggestion, I think, starts to move more towards an ORM, which is where the Java Persistence API would come into play. The DataSet supported simple fields and Java Bean style property accessors.

Posted by: lancea on October 11, 2006 at 10:50 AM

The example I used in my previous comment is still property mapping from the result of a SQL query.
IMHO ORM is more than just adding one level of depth to the mapping. Is there any chance that this use case can be considered in JDBC 4.0 EoD? Thanks,

Posted by: alruiz15 on October 11, 2006 at 07:29 PM

Unless I am misunderstanding your example, you are asking for a totally different intent. The results returned from a query are assigned to a single DataSet object. What I believe you are asking for is the ability to map the returned results to multiple DataSet objects. This was not intended as part of the original design. The intent was to have results map like they are in a RowSet or ResultSet, where you have one object returned. Now, I am not saying that this could not be looked at when we review the API; I am just trying to explain the original intent. Thank you for your feedback, Lance

Posted by: lancea on October 12, 2006 at 07:27 AM

Sorry for the misunderstanding. I should have added this line to the example:

@Select("SELECT ACCOUNT_NAME accountName, EMAIL_ADDRESS email.address FROM EMAIL_ACCOUNTS")
DataSet allEmailAccounts();

And the table in the db has the columns ACCOUNT_NAME and EMAIL_ADDRESS (both VARCHAR). The mapping still involves one DataSet object. When mapping the value from EMAIL_ADDRESS, a new Email object will be created with that value and then assigned to a new EmailAccount. Alex

Posted by: alruiz15 on October 12, 2006 at 10:13 AM

The data type for DataSet didn't show up:

@Select("SELECT ACCOUNT_NAME accountName, EMAIL_ADDRESS email.address FROM EMAIL_ACCOUNTS")
DataSet<EmailAccount> allEmailAccounts();

Posted by: alruiz15 on October 12, 2006 at 10:14 AM

It's disappointing, but it won't stop me from using 6.0 ASAP. How does this affect anyone who wants to use toplink-essentials.jar and the annotations (for both applications and web applications)? Excuse me if that should be obvious to me.

Posted by: mike__rainville on October 13, 2006 at 12:54 PM

Thanks Alex. I will see what we can do in the next version of the API.
What you are asking for is something that we had not planned to provide, but it might be worth revisiting.

Posted by: lancea on October 13, 2006 at 01:13 PM

The removal of the JDBC EoD API has *nothing* to do with the Java Persistence API, so you can use the toplink-essentials.jar with no issues. Regards

Posted by: lancea on October 13, 2006 at 01:14 PM

Thanks Lance! :)

Posted by: alruiz15 on October 16, 2006 at 10:37 AM

Why is it necessary to wait until JSE 7 to release the JDBC 4.0 EoD enhancements? If it was that close, why not release when it's ready? Bill

Posted by: billfly on February 12, 2007 at 11:05 AM

I unfortunately (maybe stupidly) was already using the EoD RI for some projects. When I found it missing from the Java 6 release, I started my own OSS project, EoD SQL, that clones almost all of the features and adds quite a few extras that I found lacking in the original RI (like rubber-stamping, and simple return types (Lists, Sets, Arrays, etc.)). I look forward to seeing how the EoD RI has evolved when it's released in Java 7 :)

Posted by: lemnik on March 10, 2007 at 02:55 AM
http://weblogs.java.net/blog/lancea/archive/2006/10/jdbc_eod_api_de.html
Qt5 C++ GUI Programming Cookbook Use Qt5 to design and build a graphical user interface that is functional, appealing, and user-friendly for your software application Lee Zhi Eng BIRMINGHAM - MUMBAI Qt5 C++ GUI Programming Cook20716 Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK. ISBN 978-1-78328-027-8 Credits Author Lee Zhi Eng Reviewer Symeon Huang Commissioning Editor Kartikey Pandey Acquisition Editor Indrajit Das Content Development Editor Priyanka Mehta Technical Editors Dhiraj Chandanshive Ravikiran Pise Copy Editor Safis Editing Project Coordinator Izzat Contractor Proofreader Safis Editing Indexer Rekha Nair Production Coordinator Aparna Bhagat Cover Work Aparna Bhagat About the Author. zhieng.com, or you can find out about his company at. About the Reviewer Symeon Huang is an experienced C++ GUI software developer and the author of Qt 5 Blueprints, Packt Publishing. He has finished his master's degree in high performance computing and has been working as a software engineer in industry. I'd like to thank Packt Publishing for giving me the opportunity to review this book. As a reviewer, I've also learnt from this book and I'm sure this book will be of great use to all readers.? 
ff Fully searchable across every book published by Packt ff Copy and paste, print, and bookmark content ff On demand and accessible via a web browser Table of Contents Preface Chapter 1: Look and Feel Customization v 1 Introduction Use style sheets with Qt Designer Basic style sheet customization Creating a login screen using style sheets Using resources in style sheets Customizing properties and sub-controls Styling in QML Exposing QML object pointer to C++ 1 2 6 11 19 23 27 36 Chapter 2: States and Animations 39 Chapter 3: QPainter and 2D Graphics 65 Introduction Property animation in Qt Using easing curves to control property animation Creating an animation group Creating a nested animation group State machines in Qt States, transitions, and animations in QML Animating widget properties using animators Sprite animation 39 39 42 44 47 50 53 57 59 Introduction Drawing basic shapes on screen Exporting shapes to SVG files Coordinate transformation Displaying images on screen 65 66 69 75 80 i Table of Contents Applying image effects to graphics Creating a basic paint program 2D canvas in QML 85 88 94 Chapter 4: OpenGL Implementation 99 Introduction Setting up OpenGL in Qt Hello world! 
Rendering 2D shapes Render 3D shapes Texturing in OpenGL Lighting and texture filter in OpenGL Moving an object using keyboard controls 3D canvas in QML 99 100 103 106 109 114 118 122 125 Chapter 5: Building a Touch Screen Application with Qt5 131 Chapter 6: XML Parsing Made Easy 167 Chapter 7: Conversion Library 187 Introduction Setting up Qt for mobile applications Designing a basic user interface with QML Touch events Animation in QML Displaying information using Model View Integrating QML and C++ Introduction Processing XML data using stream reader Writing XML data using Stream Writer Processing XML data using the QDomDocument class Writing XML data using the QDomDocument class Using Google's Geocoding API Introduction Data conversion Image conversion Video conversion Currency conversion ii 131 132 138 142 149 155 160 167 167 173 176 179 182 187 187 192 196 202 Table of Contents Chapter 8: Accessing Databases 207 Chapter 9: Developing a Web Application Using Qt Web Engine 245 Index 279 Introduction Connecting to a database Writing basic SQL queries Creating a login screen with Qt Displaying information from a database on a model view Advanced SQL queries Introduction Introduction to Qt WebEngine WebView and web settings Embedding Google Maps in your project Calling C++ functions from JavaScript Calling JavaScript functions from C++ 207 213 216 221 227 233 245 246 252 259 264 271 iii Preface The continuous growth of the computer software market leads to a very competitive and challenging era. Not only does your software need to be functional and easy to use, it must also look appealing and professional to the users. In order to gain an upper hand and a competitive advantage over other software products in the market, the look and feel of your product is of utmost importance and should be taken care of early in the production stage. 
In this book, we will teach you how to create a functional, appealing, and user friendly software using the Qt5 development platform. What this book covers Chapter 1, Look and Feel Customization, shows how to design your program's user interface using both the Qt Designer as well as the Qt Quick Designer. Chapter 2, States and Animations, explains how to animate your user interface widgets by empowering the state machine framework and the animation framework. Chapter 3, QPainter and 2D Graphics, covers how to draw vector shapes and bitmap images on screen using Qt's built-in classes. Chapter 4, OpenGL Implementation, demonstrates how to render 3D graphics in your program by integrating OpenGL in your Qt project. Chapter 5, Building a Touch Screen Application with Qt5, explains how to create a program that works on a touch screen device. Chapter 6, XML Parsing Made Easy, shows how to process data in the XML format and use it together with the Google Geocoding API to create a simple address finder. Chapter 7, Conversion Library, covers how to convert between different variable types, image formats, and video formats using Qt's built-in classes as well as third-party programs. v Preface Chapter 8, Accessing Databases, explains how to connect your program to an SQL database using Qt. Chapter 9, Developing a Web Application Using Qt Web Engine, covers how to use the web rendering engine provided by Qt and develop programs that empower the web technology. What you need for this book The following are the prerequisites for this book: 1. Qt5 (for all chapters) 2. FFmpeg (for Chapter 7, Conversion Library) 3. XAMPP (for Chapter 8, Accessing Databases) Who this book is for This book intended for those who want to develop software using Qt5. If you want to improve the visual quality and content presentation of your software application, this book will suit you best. Sections. 
vi Preface There's more… This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe. See also This section provides helpful links to other useful information for the recipe.: "In the mylabel.cpp source file, define a function called SetMyObject() to save the object pointer." A block of code is set as follows: QSpinBox::down-button { image: url(:/images/spindown.png); subcontrol-origin: padding; subcontrol-position: right bottom; } When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold: QSpinBox::down-button { image: url(:/images/spindown.png); subcontrol-origin: padding; subcontrol-position: right bottom; } New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Go to the Imports tab in the Library window and add a Qt Quick module called QtQuick.Controls to your project." vii http://. If you purchased this book elsewhere, you can visit. packtpub.com/support. 4. Enter the name of the book in the Search box. 5. Select the book for which you're looking to download the code files. 6. Choose from the drop-down menu where you purchased this book from. 7. viii Click on Code Download. Preface: ff WinRAR / 7-Zip for Windows ff Zipeg / iZip / UnRarX for Mac ff 7-Zip / PeaZip for Linux The code bundle for the book is also hosted on GitHub at PacktPublishing/Qt5-C-GUI-Programming-Cookbook. We also have other code bundles from our rich catalog of books and videos available downloads/Qt5CGUIProgrammingCookbook. 
ix Preface 1 Look and Feel Customization In this chapter we will cover the following recipes: ff Using style sheets with Qt Designer ff Basic style sheet customization ff Creating a login screen using style sheets ff Using resources in style sheets ff Customizing properties and sub-controls ff Styling in QML ff Exposing QML object pointer to C++ Introduction Qt allows us to easily design our program's user interface through a method that most people are familiar with. Qt not only provides us with a powerful user interface toolkit called Qt Designer, which enables us to design our user interface without writing a single line of code, but it also allows advanced users to customize their user interface components through a simple scripting language called Qt Style Sheets. 1 Look and Feel Customization Use style sheets with Qt Designer. How to do it… 1. The first thing we need to do is open up Qt Creator and create a new project. If this is the first time you have used Qt Creator, you can either click the big button that says New Project with a + sign, or simply go to File | New File or New Project. 2. Then, select Application under the Project window and select Qt Widgets Application. 3. After that, click the Choose button at the bottom. A window will then pop out and ask you to insert the project name and its location. 4. Once you're done with that, click Next several times and click the Finish button to create the project. We will just stick to all the default settings for now. Once the project has been created, the first thing you will see is the panel with tons of big icons on the left side of the window that is called the Mode Selector panel; we will discuss this more later in the How it works... section. 5. Then, you will also see all your source files listed on the Side Bar panel which is located right next to the Mode Selector panel. 
This is where you can select which file you want to edit, which, in this case, is mainwindow.ui because we are about to start designing the program's UI! 6. Double-click mainwindow.ui and you will see an entirely different interface appearing out of nowhere. Qt Creator actually helped you to switch from the script editor to the UI editor (Qt Designer) because it detected the .ui extension on the file you're trying to open. 7. You will also notice that the highlighted button on the Mode Selector panel has changed from the Edit button to the Design button. You can switch back to the script editor or change to any other tools by clicking one of the buttons located in the upper half of the Mode Selector panel. 8. Let's go back to the Qt Designer and look at the mainwindow.ui file. This is basically the main window of our program (as the filename implies) and it's empty by default, without any widget on it. You can try to compile and run the program by pressing the Run button (green arrow button) at the bottom of the Mode Selector panel, and you will see an empty window popping up once the compilation is complete: 2 Chapter 1 9. Now, let's add a push button to our program's UI by clicking on the Push Button item in the widget box (under the Buttons category) and dragging it to your main window in the form editor. Then, keep the push button selected, and now you will see all the properties of this button inside the property editor on the right side of your window. Scroll down to somewhere around the middle and look for a property called styleSheet. This is where you apply styles to your widget, which may or may not inherit to its children or grandchildren recursively depending on how you set your style sheet. Alternatively, you can also right-click on any widget in your UI at the form editor and select Change Style Sheet from the pop-up menu. 10. 
You can click on the input field of the styleSheet property to directly write the style sheet code, or click on the … button besides the input field to open up the Edit Style Sheet window which has a bigger space for writing longer style sheet code. At the top of the window you can find several buttons, such as Add Resource, Add Gradient, Add Color, and Add Font, that can help you to kick-start your coding if you can't remember the properties' names. Let's try to do some simple styling with the Edit Style Sheet window. 11. Click Add Color and choose color. 12. Pick a random color from the color picker window, let's say, a pure red color. Then click OK. 13. Now, you will see a line of code has been added to the text field on the Edit Style Sheet window, which in my case is as follows: color: rgb(255, 0, 0); 14. Click the OK button and now you will see the text on your push button has changed to a red color. 3 Look and Feel Customization How it works... Let's take a bit of time to get ourselves familiar with Qt Designer's interface before we start learning how to design our own UI: 1. Menu bar: The menu bar houses application-specific menus that provide easy access to essential functions such as create new projects, save files, undo, redo, copy, paste, and so on. It also allows you to access development tools that come with Qt Creator, such as the compiler, debugger, profiler, and so on. 2. Widget box: This is where you can find all the different types of widget provided by Qt Designer. You can add a widget to your program's UI by clicking one of the widgets from the widget box and dragging it to the form editor. 3. Mode selector: The mode selector is a side panel that places shortcut buttons for easy access to different tools. You can quickly switch between the script editor and form editor by clicking the Edit or Design buttons on the mode selector panel which is very useful for multitasking. 
You can also easily navigate to the debugger and profiler tools in the same speed and manner. 4. Build shortcuts: The build shortcuts are located at the bottom of the mode selector panel. You can build, run, and debug your project easily by pressing the shortcut buttons here. 5. Form editor: The form editor is where you edit your program's UI. You can add different widgets to your program by selecting a widget from the widget box and dragging it to the form editor. 6. Form toolbar: From here, you can quickly select a different form to edit: click the drop-down box located above the widget box and select the file you want to open with Qt Designer. Beside the drop-down box are buttons for switching between different modes for the form editor and also buttons for changing the layout of your UI. 7. Object inspector: The object inspector lists all the widgets within your current .ui file. All the widgets are arranged according to their parent-child relationships in the hierarchy. You can select a widget from the object inspector to display its properties in the property editor. 8. Property editor: The property editor displays all the properties of the widget you selected either from the object inspector window or the form editor window. 9. Action Editor and Signals & Slots Editor: This window contains two editors, the Action Editor and the Signals & Slots Editor, which can be accessed from the tabs below the window. The action editor is where you create actions that can be added to a menu bar or toolbar in your program's UI. 10. Output panes: The output panes consist of several different windows that display information and output messages related to script compilation and debugging. You can switch between different output panes by pressing the buttons that carry a number before them, such as 1-Issues, 2-Search Results, 3-Application Output, and so on. There's more… In the previous section, we discussed how to apply style sheets to Qt widgets through C++ coding.
Although that method works really well, most of the time the person who is in charge of designing the program's UI is not the programmer but a dedicated designer, who may prefer to work visually in Qt Designer. Qt Designer gives an accurate visual representation of the final result, which means whatever you design with Qt Designer will turn out exactly the same when the program is compiled and run. The similarities between Qt Style Sheets and CSS are as follows:
- CSS: h1 { color: red; background-color: white; }
- Qt Style Sheets: QLineEdit { color: red; background-color: white; }
- As you can see, both of them contain a selector and a declaration block. Each declaration contains a property and a value, separated by a colon.
- In Qt, a style sheet can be applied to a single widget by calling the QObject::setStyleSheet() function in C++ code, for example: myPushButton->setStyleSheet("color : blue");
- The preceding code will turn the text of a button with the variable name myPushButton to a blue color. You can also achieve the same result by writing the declaration in the styleSheet property field in Qt Designer. We will discuss more about Qt Designer in the next section.
- Qt Style Sheets also support all the different types of selectors defined in the CSS2 standard, including the universal selector, type selector, class selector, ID selector, and so on, which allows us to apply styling to a very specific individual widget or group of widgets. For instance, if we want to change the background color of a specific line edit widget with the object name usernameEdit, we can do this by using an ID selector to refer to it: QLineEdit#usernameEdit { background-color: blue }
To learn about all the selectors available in CSS2 (which are also supported by Qt Style Sheets), please refer to this document: TR/REC-CSS2/selector.html. Basic style sheet customization In the previous example, we applied a style sheet to a single widget. In this example, we will use selectors to style whole groups of widgets at once, which is much easier to manage in the long run. How to do it… 1.
First of all, let's remove the style sheet from the push button by selecting it and clicking the small arrow button beside the styleSheet property. This button will revert the property to the default value, which in this case is an empty style sheet. 2. Then, add a few more widgets to the UI by dragging them one by one from the widget box to the form editor. I've added a line edit, combo box, horizontal slider, radio button, and a check box. 3. For the sake of simplicity, delete the menu bar, main toolbar, and the status bar from your UI by selecting them from the object inspector, right-clicking, and choosing Remove. Now your UI should look similar to this: 4. Select the main window either from the form editor or the object inspector, then right click and choose Change Stylesheet to open up the Edit Style Sheet window. Insert the following style sheet: border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; 5. Now what you will see is a completely transformed UI, because the style we applied to the main window is inherited by every widget on it. To limit the style to push buttons only, wrap the declarations in a type selector: QPushButton { border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; } 6. This time, only the push button will get the style described in the preceding code, and all other widgets will return to the default styling. You can try to add a few more push buttons to your UI and they will all look the same: 7. This happens because we specifically tell the selector to apply the style to all the widgets of the class called QPushButton. We can also apply the style to just one of the push buttons by mentioning its name in the style sheet, like so: QPushButton#pushButton_3 { border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; } 8. Once you understand this method, we can add the following code to the style sheet: QPushButton { color: red; border: 0px; padding: 0 8px; background: white; } QPushButton#pushButton_2 { border: 1px solid red; border-radius: 10px; } QPushButton#pushButton_3 { border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; } 9.
What it does is basically change the style of all the push buttons as well as some properties of a specific button named pushButton_2. We keep the style sheet of pushButton_3 as it is. Now the buttons will look like this: 10. The first set of style sheets will change all widgets of the QPushButton type to a white rectangular button with no border and red text. Then the second set of style sheets changes only the border of a specific QPushButton widget called pushButton_2. Notice that the background color and text color of pushButton_2 remain white and red respectively, because we didn't override them in the second set of style sheets, so it falls back to the style described in the first set, since that one is applicable to all QPushButton widgets. Do notice that the text of the third button has also changed to red, because we didn't describe the color property in the third set of style sheets. 11. After that, create another set of styles using the universal selector, like so: * { background: qradialgradient(cx: 0.3, cy: -0.4, fx: 0.3, fy: -0.4, radius: 1.35, stop: 0 #fff, stop: 1 #888); color: rgb(255, 255, 255); border: 1px solid #ffffff; } 12. The universal selector will affect all the widgets regardless of their type. Therefore, the preceding style sheet will apply a nice gradient color to all the widgets' backgrounds, as well as setting their text to white and giving them a one-pixel solid outline, which is also white. Instead of writing the name of the color (that is, white), we can also use the rgb function (rgb(255, 255, 255)) or a hex code (#ffffff) to describe the color value. 13. Just bear in mind that more specific selectors, such as type and ID selectors, take priority over the universal selector whenever both have influence on a widget. This is how the UI will look now: How it works... If you have ever been involved in web development using HTML and CSS, Qt's style sheets work almost exactly the same way as CSS.
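To recap the cascade described above, here is a minimal sketch of the three selector levels combined into a single style sheet (the object name pushButton_2 is the one used in this example; substitute your own):

```css
/* Universal selector: affects every widget, lowest priority */
* {
    color: rgb(255, 255, 255);
    border: 1px solid #ffffff;
}

/* Type selector: overrides the universal rules for all push buttons */
QPushButton {
    color: red;
    border: 0px;
    background: white;
}

/* ID selector: overrides both, but only for the widget
   whose objectName is pushButton_2 */
QPushButton#pushButton_2 {
    border: 1px solid red;
    border-radius: 10px;
}
```

Reading from top to bottom mirrors the priority order: the ID selector wins over the type selector, which in turn wins over the universal selector.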
Style sheets provide the definitions for describing the presentation of the widgets – what the colors are for each element in the widget group, how thick the border should be, and so on and so forth. If you specify the name of a widget in the style sheet, it will change the style of the particular push button widget with the name you provide. None of the other widgets will be affected and they will remain in the default style. To change the name of a widget, select the widget either from the form editor or the object inspector and edit its objectName property in the property editor. Creating a login screen using style sheets Next, we will learn how to put all the knowledge we have gained so far to use by creating a fake graphical login screen. How to do it… 1. The first thing we need to do is plan the design of the login screen, so that we know exactly what we are going to build: 2. Now that we know exactly how the login screen should look, let's go back to Qt Designer again. 3. We will be placing the widgets at the top panel first, then the logo and the login form below it. 4. Select the main window and change its width and height from 400 and 300 to 800 and 600 respectively, because we'll need a bigger space in which to place all the widgets in a moment. 5. Click and drag a label under the Display Widgets category from the widget box to the form editor. 6. Change the objectName property of the label to currentDateTime and change its text property to the current date and time just for display purposes, such as Monday, 25-10-2015 3:14 PM. 7. Click and drag a push button under the Buttons category to the form editor. Repeat this process one more time because we have two buttons on the top panel. Rename the two buttons restartButton and shutdownButton respectively. 8. Next, select the main window and click the small icon button on the form toolbar that says Lay Out Vertically when you mouse-over it. Now you will see the widgets being automatically arranged on the main window, but it's not exactly what we want yet. 9. Click and drag a horizontal layout widget under the Layouts category to the main window. 10.
Click and drag the two push buttons and the text label into the horizontal layout. Now you will see the three widgets arranged in a horizontal row, but vertically they are located in the middle of the screen. The horizontal arrangement is almost correct, but the vertical position is totally off. 11. Click and drag a vertical spacer from the Spacers category and place it below the horizontal layout we created previously (below the red rectangular outline). Now you will see all the widgets being pushed to the top by the spacer. 12. Now, place a horizontal spacer between the text label and the two buttons to keep them apart. This will make the text label always stick to the left and the buttons align to the right. 13. Set both the Horizontal Policy and Vertical Policy properties of the two buttons to Fixed and set the minimumSize property to 55x55. Then, set the text property of the buttons to empty, as we will be using icons instead of text. We will learn how to place an icon in the button widgets in the following section. 14. Now your UI should look similar to this: Next, we will add the logo by using the following steps: 1. Add a horizontal layout between the top panel and the vertical spacer to serve as a container for the logo. 2. After adding the horizontal layout, you will find the layout is way too thin in height to be able to add any widgets to it. This is because the layout is empty and it's being pushed into zero height by the vertical spacer below it. To solve this problem, we can set its vertical margin (either layoutTopMargin or layoutBottomMargin) to be temporarily bigger until a widget is added to the layout. 3. Next, add a label to the horizontal layout that you just created and rename it logo. We will learn more about how to insert an image into the label to use it as a logo in the next section. For now, just empty out the text property and set both its Horizontal Policy and Vertical Policy properties to Fixed.
Then, set the minimumSize property to 150x150. 4. Set the vertical margin of the layout back to zero if you haven't done so. 5. The logo now looks invisible, so we will just place a temporary style sheet to make it visible until we add an image to it in the next section. The style sheet is really simple: border: 1px solid; 6. Now your UI should look similar to this: Now let's create the login form by using the following steps: 1. Add a horizontal layout between the logo's layout and the vertical spacer. Just as we did previously, set the layoutTopMargin property to a bigger number (that is, 100) so that you can add a widget to it more easily. 2. After that, add a vertical layout inside the horizontal layout you just created. This layout will be used as a container for the login form. Set its layoutTopMargin to a number lower than that of the horizontal layout (that is, 20) so that we can place widgets in it. 3. Next, right click the vertical layout you just created and choose Morph into -> QWidget. The vertical layout is now converted into a QWidget object. We need this conversion because we cannot resize a layout directly, which does make sense, considering that it does not have any size properties. After you have converted the layout to a QWidget object, it will automatically inherit all the properties from the widget class, and so we are now able to adjust its size to suit our needs. 4. Rename the QWidget object, which we just converted from the layout, to loginForm and change both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize to 350x200. 5. Since we already placed the loginForm widget inside the horizontal layout, we can now set its layoutTopMargin property back to zero. 6. Add the same style sheet as the logo to the loginForm widget to make it visible temporarily, except this time we need to add an ID selector in front so that it will only apply the style to loginForm and not its children widgets: #loginForm { border: 1px solid; } 7.
Now your UI should look something like this: We are not done with the login form yet. Now that we have created the container for the login form, it's time to put more widgets into the form: 1. Place two horizontal layouts into the login form container. We need two layouts: one for the username field and another for the password field. 2. Add a label and a line edit to each of the layouts you just added. Change the text property of the upper label to Username: and the one below to Password:. Then, rename the two line edits as username and password respectively. 3. Add a push button below the password layout and change its text property to Login. After that, rename it as loginButton. 4. You can add a vertical spacer between the password layout and the login button to distance them slightly. After the vertical spacer has been placed, change its sizeType property to Fixed and change the Height to 5. 5. Now, select the loginForm container and set all its margins to 35. This is to make the login form look better by adding some space on all its sides. 6. You can also set the Height property of the username, password, and loginButton widgets to 25 so that they don't look so cramped. 7. Now your UI should look something like this: We're not done yet! As you can see, the login form and the logo are both sticking to the top of the main window due to the vertical spacer below them. The logo and the login form should be placed at the center of the main window instead of the top. To fix this problem, use the following steps: 1. Add another vertical spacer between the top panel and the logo's layout. This way it will counter the spacer at the bottom, which balances out the alignment. 2. If you think that the logo is sticking too close to the login form, you can also add a vertical spacer between the logo's layout and the login form's layout. Set its sizeType property to Fixed and the Height property to 10. 3.
Right click the top panel's layout and choose Morph into -> QWidget. Then, rename it topPanel. The reason why the layout has to be converted into a QWidget is that we cannot apply style sheets to a layout, as it doesn't have any properties other than margins. 4. Currently you can see there is a little bit of margin around the edges of the main window – we don't want that. To remove the margins, select the centralWidget object from the object inspector window, which is right under the MainWindow panel, and set all the margin values to zero. 5. At this point, you can run the project by clicking the Run button (with the green arrow icon) to see what your program looks like now. If everything went well, you should see something like this: 6. After we've done the layout, it's time for us to add some fanciness. 7. Right click on MainWindow from the object inspector window and choose Change Stylesheet. 8. Add the following code to the style sheet: #centralWidget { background: rgba(32, 80, 96, 100); } 9. Now you will see that the background of the main window changes its color. We will learn how to use an image for the background in the next section, so the color is just temporary. 10. In Qt, if you want to apply styles to the main window itself, you must apply them to its central widget instead, because the main window is just a container. 11. Then, we will add a nice gradient color to the top panel: #topPanel { background-color: qlineargradient(spread:reflect, x1:0.5, y1:0, x2:0, y2:0, stop:0 rgba(91, 204, 233, 100), stop:1 rgba(32, 80, 96, 100)); } 12. After that, we will apply a black color to the login form and make it look semi-transparent. We will also make the corners of the login form container slightly rounded by setting the border-radius property: #loginForm { background: rgba(0, 0, 0, 80); border-radius: 8px; } 13.
After we're done applying styles to the specific widgets, we will apply styles to the general types of widgets instead: QLabel { color: white; } QLineEdit { border-radius: 3px; } 14. The preceding style sheets will change all the labels' text to a white color, which includes the text on the other widgets as well, because, internally, Qt uses the same type of label on the widgets that have text on them. Also, we made the corners of the line edit widgets slightly rounded. 15. Next, we will apply style sheets to all the push buttons on our UI: QPushButton { color: white; background-color: #27a9e3; border-width: 0px; border-radius: 3px; } 16. The preceding style sheet changes the text of all the buttons to a white color, then sets their background color to blue, and makes their corners slightly rounded as well. 17. To push things even further, we will change the color of the push buttons when we mouse-over them, using the keyword hover: QPushButton:hover { background-color: #66c011; } 18. The preceding style sheet will change the background color of the push buttons to green when we mouse-over them. We will talk more about this in the following section. 19. You can further adjust the size and margins of the widgets to make them look even better. Remember to remove the border line of the login form by removing the style sheet that we applied directly to it earlier. 20. Now your login screen should look something like this: How it works... This example focuses more on the layout system of Qt. The Qt layout system provides a simple and powerful way of automatically arranging child widgets within a widget to ensure that they make good use of the available space. For example, to keep a widget horizontally centered, we can place one spacer on the left side of the widget and another on the right side of the widget. The widget will then be pushed to the middle of the layout by the two spacers.
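For reference, all the style sheets applied piecemeal in this recipe can be collected into one sheet. The following sketch simply combines the rules from the steps above (the object names match the ones we created); applying it in one place, for example via QApplication::setStyleSheet(), produces the same result as setting each piece individually:

```css
#centralWidget { background: rgba(32, 80, 96, 100); }

#topPanel {
    background-color: qlineargradient(spread:reflect, x1:0.5, y1:0, x2:0, y2:0,
        stop:0 rgba(91, 204, 233, 100), stop:1 rgba(32, 80, 96, 100));
}

#loginForm {
    background: rgba(0, 0, 0, 80);
    border-radius: 8px;
}

QLabel { color: white; }

QLineEdit { border-radius: 3px; }

QPushButton {
    color: white;
    background-color: #27a9e3;
    border-width: 0px;
    border-radius: 3px;
}

QPushButton:hover { background-color: #66c011; }
```

Keeping the whole theme in one sheet like this makes it easier to tweak colors consistently later on.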
Using resources in style sheets Qt provides us with a platform-independent resource system which allows us to store any type of file in our program's executable for later use. There is no limit to the types of files we can store in our executable—images, audio, video, HTML, XML, text files, binary files, and so on are all permitted. This is useful if your application always needs a certain set of files (icons, translation files, and so on) and you don't want to run the risk of losing the files. To achieve this, we must tell Qt which files we want to add to its resource system in the .qrc file, and Qt will handle the rest during the build process. How to do it… First, add a new resource file to the project; the .qrc file will then be created and automatically opened by Qt Creator. You don't have to edit the .qrc file directly in the XML format, as Qt Creator provides you with a user interface to manage your resources. To add images and icons to your project, first you need to make sure that the images and icons are placed in your project's directory. While the .qrc file is opened in Qt Creator, click the Add button followed by the Add Prefix button. The prefix is used to categorize your resources so that they can be better managed when you have a ton of resources in your project: 1. Rename the prefix you just created to /icons. 2. Then, create another prefix by clicking Add followed by Add Prefix. 3. Rename the new prefix /images. 4. After that, select the /icons prefix and click Add followed by Add Files. 5. A file selection window will appear; use that to select all the icon files. You can select multiple files at a time by holding the Ctrl key on your keyboard while clicking on the files to select them. Click Open once you're done. 6. Then, select the /images prefix and click the Add button followed by the Add Files button. The file selection window will pop up again, and this time we will select the background image. 7.
Repeat the preceding steps, but this time we will add the logo image to the /images prefix. Don't forget to save once you're done by pressing Ctrl + S. Your .qrc file should now look like this: 8. After that, go back to our mainwindow.ui file; we will now make use of the resources we have just added to our project. First, we will select the restart button located on the top panel. Then, scroll down the property editor until you see the icon property. Click the little button with a drop-down arrow icon and click Choose Resources from its menu. 9. The Select Resource window will then pop up. Click on the icons prefix on the left panel and then select the restart icon on the right panel. After that, press OK. 10. You will now see a tiny icon appearing on the button. The icon looks very tiny because the default icon size is set at 16x16. Change the iconSize property to 50x50 and you will see the icon appear bigger now. Repeat the preceding steps for the shutdown button, except this time we will choose the shutdown icon instead. 11. Once you're done, the two buttons should now look like this: 12. Next, we will use the image we added to the resource file as our logo. First, select the logo widget and remove the style sheet that we added earlier to render its outline. 13. Scroll down the property editor until you see the pixmap property. 14. Click the little drop-down button behind the pixmap property and select Choose Resources from the menu. After that, select the logo image and click OK. You will now see that the logo no longer follows the dimensions you set previously, but follows the actual dimensions of the image instead. We cannot change its dimensions because this is simply how pixmap works. 15. If you want more control over the logo's dimensions, you can remove the image from the pixmap property and use a style sheet instead. You can use the following code to apply an image to the icon container: border-image: url(:/images/logo.png); 16.
To obtain the path of the image, right click the image name in the file list window and choose Copy path. The path will be saved to your operating system's clipboard and now you can just paste it into the preceding style sheet. Using this method will ensure that the image fits exactly the dimensions of the widget that you applied the style to. Your logo should now appear like so: 17. Lastly, we will apply the wallpaper image to the background using a style sheet. Since the background dimensions will change according to the window size, we cannot use pixmap in this case. Instead, we will use the border-image property in a style sheet to achieve this. Right click the main window and select Change styleSheet to open up the Edit Style Sheet window. We will add a new line under the style sheet of the central widget: #centralWidget { background: rgba(32, 80, 96, 100); border-image: url(:/images/login_bg.png); } 18. It's really that simple and easy! Your login screen should now look like this: How it works... The resource system in Qt stores binary files, such as images, translation files, and so on, in the executable when it gets compiled. It reads the resource collection files (.qrc) in your project to locate the files that need to be stored in the executable and includes them in the build process. A .qrc file looks something like this:
<RCC>
    <qresource prefix="/images">
        <file>logo.png</file>
        <file>login_bg.png</file>
    </qresource>
</RCC>
It uses the XML format to store the paths of the resource files, which are relative to the directory containing it. Do note that the listed resource files must be located in the same directory as the .qrc file, or one of its sub-directories. Customizing properties and sub-controls Qt's style sheet system enables us to create stunning and professional-looking UIs with ease. In this example, we will learn how to set custom properties on our widgets and use them to switch between different styles. How to do it… 1.
Let's try out the scenario described in the preceding paragraph by creating a new Qt project. I have prepared the UI for this purpose. The UI contains three buttons on the left side and a tab widget with three pages located on the right side, as shown in the following screenshot: 2. The three buttons are blue in color because I've added the following style sheet to the main window (not to the individual buttons): QPushButton { color: white; background-color: #27a9e3; border-width: 0px; border-radius: 3px; } 3. Next, I will explain to you what pseudo states are in Qt by adding the following style sheet to the main window, which you might be familiar with: QPushButton:hover { color: white; background-color: #66c011; border-width: 0px; border-radius: 3px; } 4. We used the preceding style sheet in the previous tutorial to make the buttons change color when there is a mouse-over. This is made possible by Qt Style Sheet's pseudo states, which in this case is the word hover separated from the QPushButton class by a colon. Every widget has a set of generic pseudo states, such as active, disabled, enabled, and so on, and also a set of pseudo states which are applicable to their widget type. For example, states such as open and flat are available for QPushButton, but not for QLineEdit. Let's add the pressed pseudo state to change the buttons' color to yellow when the user clicks on them: QPushButton:pressed { color: white; background-color: yellow; border-width: 0px; border-radius: 3px; } 5. Pseudo states allow the user to load a different set of styles based on the widget's current condition. On top of that, Qt also lets a selector match on a widget property. Add the following style sheet to the main window: QPushButton[pagematches=true] { color: white; background-color: red; border-width: 0px; border-radius: 3px; } 6. What it does is basically change the push button's background color to red if the property called pagematches returns true. Obviously, this property does not exist in the QPushButton class.
However, we can add it to our buttons by using QObject::setProperty(): In your MainWindow.cpp source code, add the following code right after ui->setupUi(this);: ui->button1->setProperty("pagematches", true); The preceding code will add a custom property called pagematches to the first button and set its value to true. This will make the first button turn red by default. After that, right click on the tab widget and choose Go to slot. A window will then pop up; select the currentChanged(int) option from the list and click Ok. Qt will generate a slot function for you, which looks something like this: private slots: void on_tabWidget_currentChanged(int index); The slot function will be called whenever we change the page of the tab widget. We can then decide what we want it to do by adding our code to the slot function. To do that, open up mainwindow.cpp and you will see the function's definition there. Let's add some code to the function (assuming the three buttons are named button1, button2, and button3):
void MainWindow::on_tabWidget_currentChanged(int index)
{
    // Reset the property on all three buttons first
    ui->button1->setProperty("pagematches", false);
    ui->button2->setProperty("pagematches", false);
    ui->button3->setProperty("pagematches", false);
    // Mark the button that matches the current page
    if (index == 0)
        ui->button1->setProperty("pagematches", true);
    else if (index == 1)
        ui->button2->setProperty("pagematches", true);
    else
        ui->button3->setProperty("pagematches", true);
    // Re-polish the buttons so the style sheet is re-evaluated
    ui->button1->style()->polish(ui->button1);
    ui->button2->style()->polish(ui->button2);
    ui->button3->style()->polish(ui->button3);
}
7. The preceding code basically does this: when the tab widget switches its current page, it sets the pagematches properties of all three buttons to false. This makes sure everything is reset before we decide which button should change to red. 8. Then, check the index variable supplied by the event signal, which will tell you the index number of the current page. Set the pagematches property of one of the buttons to true based on the index number. 9. Lastly, refresh the style of all three buttons by calling polish(). Then, build and run the project. You should now see the three buttons changing their color to red whenever you switch the tab widget to a different page. Also, the buttons will change color to green when there is a mouse-over, as well as change their color to yellow when you click on them: How it works... Without a custom property, there would be no way for the buttons to know when they should change their color, because Qt itself has no built-in context for this type of situation.
To solve this issue, Qt provides us with a method to add our own properties to the widgets, using a generic function called QObject::setProperty(). To read the custom property, we can use another function called QObject::property(). Next, we will talk about sub-controls in Qt Style Sheets. It's actually quite self-explanatory by looking at the term sub-controls. Often, a widget is not just a single object, but a combination of more than one object or control that forms a more complex widget; these child objects are called sub-controls. Qt allows us to apply a style to a single sub-control using a style sheet, if we wanted to. We can do so by specifying the name of the sub-control behind the widget's class name, separated by a double colon. For instance, if I want to change the image of the down button in a spin box, I can write my style sheet like this (the image path here is just an example): QSpinBox::down-button { image: url(:/images/spindown.png); } Visit the Qt documentation to learn more about pseudo states and sub-controls in Qt. Styling in QML Qt Meta Language or Qt Modeling Language (QML) is a JavaScript-inspired user interface mark-up language used by Qt for designing user interfaces. How to do it… 1. Create a new project by going to File | New File or Project. Select Application under the Project category and choose Qt Quick Application. 2. Press the Choose button, and that will bring you to the next window. Insert a name for your project and click the Next button again. 3. Another window will now appear and ask you to choose a minimum required Qt version. Pick the latest version installed on your computer and click Next. 4. After that, click Next again followed by Finish. Qt Creator will now create a new project for you. 5. Once the project is created, you will see there are some differences compared to a C++ Qt project. You will see two .qml files, namely main.qml and MainForm.ui.qml, inside the project resources. These two files are UI description files using the QML mark-up language.
If you double click the main.qml file, Qt Creator will open up the script editor and you will see something like this:
import QtQuick 2.5
import QtQuick.Window 2.2
Window {
    visible: true
    MainForm {
        anchors.fill: parent
        mouseArea.onClicked: {
            Qt.quit();
        }
    }
}
6. This file basically tells Qt to create a window and insert a set of UI called MainForm, which actually comes from the other .qml file called MainForm.ui.qml. It also tells Qt that when the user clicks on the mouseArea widget, the entire program should be terminated. 7. Now, try to open the MainForm.ui.qml file by double-clicking on it. This time, Qt Designer (the UI editor) will be opened instead, and you will see a completely different UI editor compared to the C++ project we did previously. This editor is also called the Qt Quick Designer, specially designed for editing QML-based UIs only. 8. If you open up the main.cpp file in your project, you will see these lines of code: QQmlApplicationEngine engine; engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); 9. The preceding code basically tells Qt's QML engine to load the main.qml file when the program starts. If you want to load another .qml file instead of main.qml, you know where to look for the code. 10. When main.qml is loaded by the QML engine, it will also import MainForm.ui.qml into the UI, since MainForm is being called in the main.qml file. Qt will check whether MainForm is a valid UI by searching for its .qml file based on the naming convention. Basically, the concept is similar to the C++ project we did in the previous section, whereby the main.qml file acts like the main.cpp file and MainForm.ui.qml acts like the MainWindow class. You can also create other UI templates and use them in main.qml. Hopefully this comparison will make it easier to understand how QML works. 11. Now let's open up MainForm.ui.qml. You should see three items listed in the navigator window: Rectangle, mouseArea, and Text.
When these items are interpreted by the QML engine, it produces the following result on the canvas:

12. The Rectangle item is basically the base layout of the window, which cannot be deleted. It is similar to the centralWidget we used in the previous section. The mouseArea item is an invisible item that gets triggered when the mouse clicks on it, or when a finger touches it (on mobile platforms). The mouse area is also used in a button component, which we will be using in a while. The Text component is self-explanatory: it is a label that displays a block of text in the application.
13. On the Navigator window, we can hide or show an item by clicking on the icon beside the item that resembles an eye. When an item is hidden, it will show neither on the canvas nor in the compiled application. Just like the widgets in a C++ Qt project, Qt Quick components are arranged in a hierarchy based on the parent-child relationship. All the child items are placed below their parent item in an indented position. In our case, you can see that the mouseArea and Text items are both positioned slightly to the right compared to the Rectangle item, because they are children of the Rectangle item. We can rearrange the parent-child relationships, as well as the items' positions in the hierarchy, by clicking and dragging in the navigator window. You can try clicking on the Text item and dragging it on top of mouseArea. You will then see the Text item change its position, now located below mouseArea with a wider indentation:
14. We can also rearrange them by using the arrow buttons located on top of the navigator window, as shown in the preceding screenshot. Anything that happens to the parent item will also affect all its children, such as moving the parent item, hiding and showing the parent item, and so on. When the mouse cursor is over the horizontal scroll bar of the canvas, scrolling the mouse will move the view to the left and right.
15.
Next, delete both the mouseArea and Text items, as we will be learning how to create a user interface from scratch using QML and Qt Quick.
16. After that, let's set the Rectangle item's size to 800x600, as we're going to need a bigger space for the widgets.
17. Open up main.qml and remove these lines of code:

mouseArea.onClicked: {
    Qt.quit();
}

This is because the mouseArea item no longer exists and it will cause an error when compiling.
18. After that, remove the following code from MainForm.ui.qml:

property alias mouseArea: mousearea

19. This is removed for the same reason as the previous code: the mouseArea item no longer exists.
20. Then, copy the images we used in the previous C++ project over to the QML project's folder, because we are going to re-create the same login screen, with QML!
21. Add the images to the resource file so that we can use them for our UI.
22. Once you're done with that, open up Qt Quick Designer again and switch to the resources window. Click and drag the background image directly to the canvas. Then, switch over to the Layout tab on the properties pane and click the fill anchor button, marked here in a red circle. This will make the background image always stick to the window size:
23. Next, click and drag a Rectangle component from the library window to the canvas. We will use this as the top panel for our program.
24. For the top panel, enable the top anchor, left anchor, and right anchor so that it sticks to the top of the window and follows its width. Make sure all the margins are set to zero.
25. Then, go to the Color property of the top panel and select Gradient mode. Set the first color to #805bcce9 and the second color to #80000000. This will create a half-transparent panel with a blue gradient.
26. After that, add a text widget to the canvas and make it a child of the top panel. Set its text property to the current date and time (for example, Monday, 26-10-2015 3:14 PM) for display purposes.
Then, set the text color to white.
27. Switch over to the Layout tab and enable the top anchor and left anchor so that the text widget will always stick to the top left corner of the screen.
28. Next, add a mouse area to the screen and set its size to 50x50. Then, make it a child of the top panel by dragging it on top of the top panel in the navigator window.
29. Set the color of the mouse area to blue (#27a9e3) and set its radius to 2 to make its corners slightly rounded. Then, enable the top anchor and right anchor to make it stick to the top right corner of the window. Set the top anchor's margin to 8 and the right anchor's margin to 10 to give it some space.
30. After that, open up the resources window and drag the shutdown icon to the canvas. Then, make it a child of the mouse area item we created a moment ago. Then, enable the fill anchor to make it fit the size of the mouse area.
31. Phew, that's a lot of steps! Now your items should be arranged like this in the Navigator window:
32. The parent-child relationships and the layout anchors are both very important to keep the widgets in the correct positions when the main window changes its size.
33. At this point, your top panel should look something like this:
34. Next, we will be working on the login form. First, add a new rectangle to the canvas by dragging it from the Library window. Resize the rectangle to 360x200 and set its radius to 15.
35. Then, set its color to #80000000, which will change it to black with 50% transparency.
36. After that, enable the vertical center anchor and the horizontal center anchor to make it always align to the center of the window. Then, set the margin of the vertical center anchor to 100 so that it moves slightly lower toward the bottom to give space to the logo. The following screenshot illustrates the settings of the anchors:
37. Add the text widgets to the canvas.
Make them children of the login form (the rectangle widget) and set their text properties to Username: and Password: respectively. Then, change their text color to white and position them accordingly. We don't need to set a margin this time because they will follow the rectangle's position.
38. Next, add two text input widgets to the canvas and place them next to the text widgets we created just now. Make sure the text inputs are also children of the login form. Since the text inputs don't contain any background color property, we need to add two rectangles to the canvas to use as their backgrounds.
39. Add two rectangles to the canvas and make each of them a child of one of the text inputs we created just now. Then, set the radius property to 5 to give them some rounded corners. After that, enable fill anchors on both of the rectangles so that they follow the size of the text input widgets.
40. After that, we're going to create the login button below the password field. First, add a mouse area to the canvas and make it a child of the login form. Then, resize it to your preferred dimensions and move it into place.
41. Since the mouse area also does not contain any background color property, we need to add a rectangle widget and make it a child of the mouse area. Set the color of the rectangle to blue (#27a9e3) and enable the fill anchor so that it fits nicely within the mouse area.
42. Next, add a text widget to the canvas and make it a child of the login button. Change its text color to white and set its text property to Login. Finally, enable the horizontal center anchor and the vertical center anchor to align it to the center of the button.
43. You will now have a login form that looks pretty similar to the one we made in the C++ project:
44. After we are done with the login form, it's time to add the logo. It's actually very simple. First, open up the resources window and drag the logo image to the canvas.
45.
Make it a child of the login form and set its size to 512x200.
46. Position it above the login form and you're done!
47. This is what the entire UI looks like when compiled. We have successfully re-created the login screen from the C++ project, but this time we did it with QML and Qt Quick!

How it works...
The Qt Quick editor uses a very different approach for placing widgets in the application compared to the form editor. It's entirely up to the user which method is best suited for him/her. The following screenshot shows what the Qt Quick Designer looks like:

We will now look at the various elements of the editor's UI:
1. Navigator: The Navigator window displays the items in the current QML file as a tree structure. It's similar to the object operator window in the other Qt Designer we used in the previous section.
2. Library: The Library window displays all the Qt Quick Components or Qt Quick Controls available in QML. You can click and drag a component to the canvas window to add it to your UI. You can also create your own custom QML components and display them here.
3. Resources: The Resources window displays all the resources in a list, which can then be used in your UI design.
4. Imports: The Imports window allows you to import different QML modules into your current QML file, such as a Bluetooth module, WebKit module, positioning module, and so on, to add additional functionality to your QML project.
5. State pane: The State pane displays the different states in the QML project, which typically describe UI configurations, such as the UI controls, their properties and behavior, and the available actions.
6. Properties pane: Similar to the property editor we used in the previous section, the Properties pane in QML Designer displays the properties of the selected item. You can change the properties of the items in the code editor as well.
7. Canvas: The canvas is the working area where you create QML components and design applications.
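For reference, the layout assembled in the preceding steps corresponds roughly to hand-written QML like the sketch below. This is not the exact code Qt Quick Designer generates; the ids, panel height, and image file names are illustrative assumptions:

```qml
import QtQuick 2.5

Rectangle {
    width: 800; height: 600

    Image {
        // Background image with a fill anchor, as in step 22
        anchors.fill: parent
        source: "background.png"   // hypothetical asset name
    }

    Rectangle {
        // Half-transparent top panel with a blue gradient (steps 23-25)
        id: topPanel
        anchors { top: parent.top; left: parent.left; right: parent.right }
        height: 60                 // assumed; the text describes no exact height
        gradient: Gradient {
            GradientStop { position: 0.0; color: "#805bcce9" }
            GradientStop { position: 1.0; color: "#80000000" }
        }
    }

    Rectangle {
        // Centered login form, offset 100 pixels downward (steps 34-36)
        id: loginForm
        width: 360; height: 200; radius: 15
        color: "#80000000"
        anchors.centerIn: parent
        anchors.verticalCenterOffset: 100
    }
}
```

The anchors here mirror the anchor buttons clicked in the designer; editing either the canvas or the QML text keeps the two views in sync.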
Exposing QML object pointer to C++
Sometimes we want to modify the properties of a QML object through C++ scripting, such as changing the text of a label, hiding/showing the widget, changing its size, and so on. Qt's QML engine allows you to register your QML objects with C++ types, which automatically exposes all their properties.

How to do it…
We want to create a label in QML and change its text occasionally. In order to expose the label object to C++, we can do the following steps. First, create a C++ class called MyLabel that extends the QObject class. Then, in its source file, define a function called SetMyObject() to save the object pointer. This function will later be called in QML:

mylabel.cpp:

void MyLabel::SetMyObject(QObject* obj)
{
    // Set the object pointer
    myObject = obj;
}

After that, in main.cpp, include the MyLabel header and register it with the QML engine using the function qmlRegisterType():

#include "mylabel.h"

int main(int argc, char *argv[])
{
    // Register your class to QML
    qmlRegisterType<MyLabel>("MyLabelLib", 1, 0, "MyLabel");
    ...
}

The parameters declare the library name and its version number for importing your class into QML later on. Now that the QML engine is fully aware of our custom label class, we can map it to our label object in QML and import the class library we defined earlier by calling import MyLabelLib 1.0 in our QML file. Notice that the library name and its version number have to match the ones you declared in main.cpp, otherwise it will throw you an error. After declaring MyLabel in QML and setting its ID as mylabel, call mylabel.SetMyObject(helloWorldLabel) to expose its pointer to C/C++ right after the label is initialized:

import MyLabelLib 1.0

ApplicationWindow {
    id: mainWindow
    width: 480
    height: 640
    MyLabel {
        id: mylabel
    }
    Label {
        id: helloWorldLabel
        text: qsTr("Hello World!")
        Component.onCompleted: {
            mylabel.SetMyObject(helloWorldLabel);
        }
    }
}

Please be aware that you need to wait until the label is fully initiated before exposing its pointer to C/C++, otherwise you may cause the program to crash. To make sure it's fully initiated, call SetMyObject() within Component.onCompleted and not anywhere else. Now that the QML label has been exposed to C/C++, we can change any of its properties by calling the setProperty() function. For instance, we can set its visibility to true and change its text to Bye bye world!:

// QVariant automatically detects your data type
myObject->setProperty("visible", QVariant(true));
myObject->setProperty("text", QVariant("Bye bye world!"));

Besides changing the properties, we can also call its functions by calling QMetaObject::invokeMethod():

QVariant returnedValue;
QVariant message = "Hello world!";
QMetaObject::invokeMethod(myObject, "myQMLFunction",
    Q_RETURN_ARG(QVariant, returnedValue),
    Q_ARG(QVariant, message));
qDebug() << "QML function returned:" << returnedValue.toString();

Or, more simply, we can call the invokeMethod() function with only two parameters if we do not expect any value to be returned from it:

QMetaObject::invokeMethod(myObject, "myQMLFunction");

How it works...
QML is designed to be easily extensible through C++ code. The classes in the Qt QML module enable QML objects to be loaded and manipulated from C++, and the nature of the QML engine's integration with Qt's meta-object system enables C++ functionality to be invoked directly from QML. To provide some C++ data or functionality to QML, it must be made available from a QObject-derived class.
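The text above shows mylabel.cpp, but the class declaration itself is not shown. A minimal sketch of what mylabel.h might contain, inferred from how the class is used — the Q_INVOKABLE modifier and the public myObject member name are assumptions, not the book's exact listing — could look like this:

```cpp
// mylabel.h -- hypothetical sketch reconstructed from usage
#ifndef MYLABEL_H
#define MYLABEL_H

#include <QObject>

class MyLabel : public QObject
{
    Q_OBJECT
public:
    explicit MyLabel(QObject *parent = 0) : QObject(parent), myObject(0) {}

    // Q_INVOKABLE makes this callable from QML, e.g. mylabel.SetMyObject(...)
    Q_INVOKABLE void SetMyObject(QObject* obj);

    QObject* myObject; // pointer to the exposed QML object
};

#endif // MYLABEL_H
```

Without Q_INVOKABLE (or a slot declaration), the QML engine would not be able to call SetMyObject() on the registered type.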
2
States and Animations

In this chapter, we will cover the following recipes:
- Property animation in Qt
- Using easing curves to control property animation
- Creating an animation group
- Creating a nested animation group
- State machines in Qt
- States, transitions, and animations in QML
- Animating widget properties using animators
- Sprite animation

Introduction
Qt provides an easy way to animate widgets or any other objects that inherit the QObject class, through its powerful animation framework. The animation can be used either on its own or together with the state machine framework, which allows different animations to be played based on the current active state of the widget. Qt's animation framework also supports grouped animation, which allows you to move more than one graphics item simultaneously, or move them in sequence one after another.

Property animation in Qt
In this example, we will learn how to animate our Graphical User Interface (GUI) elements using Qt's property animation class, part of its powerful animation framework, which allows us to create fluid-looking animations with minimal effort.

How to do it…
1. First, let's create a new Qt Widgets Application project. After that, open up mainwindow.ui with Qt Designer and place a button on the main window, as shown here:
2. Next, open up mainwindow.cpp and add the following line of code at the beginning of the source code:

#include <QPropertyAnimation>

3. After that, open up mainwindow.cpp and add the following code to the constructor:

QPropertyAnimation *animation = new QPropertyAnimation(ui->pushButton, "geometry");
animation->setDuration(3000);
animation->setStartValue(ui->pushButton->geometry());
animation->setEndValue(QRect(200, 200, 100, 50));
animation->start();

How it works...
One of the more common methods of animating a GUI element is through the property animation class provided by Qt, known as the QPropertyAnimation class.
This class is part of the animation framework, and it makes use of the timer system in Qt to change the properties of a GUI element over a given duration. What we are trying to accomplish here is to animate the button from one position to another while enlarging the button size along the way. By including the QPropertyAnimation header in our source code in step 2, we are able to access the QPropertyAnimation class provided by Qt and make use of its functionality.

The code in step 3 basically creates a new property animation and applies it to the push button we just created in Qt Designer. We specifically request that the property animation class change the geometry property of the push button, and set its duration to 3,000 milliseconds (3 seconds). Then, the start value of the animation is set to the initial geometry of the push button, because obviously we want it to start from where we initially placed the button in Qt Designer. The end value is then set to what we want it to become; in this case, we will move the button to a new position at x: 200, y: 200, while changing its size to width: 100, height: 50 along the way. After that, call animation->start() to start the animation.

Compile and run the project, and now you should see the button start to move slowly across the main window while expanding in size bit by bit, until it reaches its destination. You can change the animation duration, the target position, and the scale by altering the values in the preceding code. It's really that simple to animate a GUI element using Qt's property animation system!

There's more…
Qt provides us with several different sub-systems to create animations for our GUI, including the timer, timeline, animation framework, state machine framework, and graphics view framework:
- Timer: Qt provides us with repetitive and single-shot timers.
When the timeout value is reached, an event callback function will be triggered through Qt's signal-and-slot mechanism. You can make use of a timer to change the properties (color, position, scale, and so on) of your GUI element within a given interval, in order to create an animation.
- Timeline: A timeline calls a slot periodically to animate a GUI element. It is quite similar to a repetitive timer, but instead of doing the same thing every time the slot is triggered, the timeline provides a value to the slot to indicate its current frame index, so that you can do different things (such as offsetting to a different space of the sprite sheet) based on the given value.
- Animation framework: The animation framework makes animating a GUI element easy by allowing its properties to be animated. The animations are controlled by using easing curves. Easing curves describe a function that controls what the speed of the animation should be, resulting in different acceleration and deceleration patterns. The types of easing curve supported by Qt include linear, quadratic, cubic, quartic, sine, exponential, circular, and elastic.
- State machine framework: Qt provides us with classes for creating and executing state graphs, which allow each GUI element to move from one state to another when triggered by signals. The state graph in the state machine framework is hierarchical, which means every state can also be nested inside other states.
- Graphics view framework: The graphics view framework is a powerful graphics engine for visualizing and interacting with a large number of custom-made 2D graphical items. You can use the graphics view framework to draw your GUI and animate it in a totally manual way if you are an experienced programmer.

By making use of all the powerful features mentioned here, we're able to create an intuitive and modern GUI with ease.
In this chapter, we will look into practical approaches to animating GUI elements using Qt.

Using easing curves to control property animation
In this example, we will learn how to make our animation more interesting by utilizing easing curves. We will still use the previous source code, which uses a property animation to animate a push button.

How to do it…
1. Define an easing curve and add it to the property animation before calling the start() function:

QPropertyAnimation *animation = new QPropertyAnimation(ui->pushButton, "geometry");
animation->setDuration(3000);
animation->setStartValue(ui->pushButton->geometry());
animation->setEndValue(QRect(200, 200, 100, 50));
QEasingCurve curve;
curve.setType(QEasingCurve::OutBounce);
animation->setEasingCurve(curve);
animation->start();

2. Call the setLoopCount() function to set how many loops you want it to repeat:

QPropertyAnimation *animation = new QPropertyAnimation(ui->pushButton, "geometry");
animation->setDuration(3000);
animation->setStartValue(ui->pushButton->geometry());
animation->setEndValue(QRect(200, 200, 100, 50));
QEasingCurve curve;
curve.setType(QEasingCurve::OutBounce);
animation->setEasingCurve(curve);
animation->setLoopCount(2);
animation->start();

3. Call setAmplitude(), setOvershoot(), and setPeriod() before applying the easing curve to the animation:

QEasingCurve curve;
curve.setType(QEasingCurve::OutBounce);
curve.setAmplitude(1.00);
curve.setOvershoot(1.70);
curve.setPeriod(0.30);
animation->setEasingCurve(curve);
animation->start();

How it works...
In order to let an easing curve control the animation, all you need to do is define an easing curve and add it to the property animation before calling the start() function. You can also try several other types of easing curve and see which one suits you best.
Here is an example:

animation->setEasingCurve(QEasingCurve::OutBounce);

If you want the animation to loop after it has finished playing, you can call the setLoopCount() function to set how many loops you want it to repeat, or set the value to -1 for an infinite loop:

animation->setLoopCount(-1);

There are several parameters that you can set to refine the easing curve before applying it to the property animation. These parameters include amplitude, overshoot, and period:
- Amplitude: The higher the amplitude, the stronger the bounce or elastic spring effect that will be applied to the animation.
- Overshoot: Some curve functions will produce an overshoot (exceeding the final value) due to a damping effect. By adjusting the overshoot value, we are able to increase or decrease this effect.
- Period: Setting a small period value will give the curve a high frequency. A large period will give it a low frequency.

These parameters, however, are not applicable to all curve types. Please refer to the Qt documentation to see which parameter is applicable to which curve type.

There's more…
While the property animation works perfectly fine, sometimes it feels a little boring to look at a GUI element animated at a constant speed. We can make the animation look more interesting by adding an easing curve to control the motion. There are many types of easing curve that you can use in Qt, and here are some of them:

As you can see from the preceding diagram, each easing curve produces a different ease-in and ease-out effect. For the full list of easing curves available in Qt, please refer to the QEasingCurve::Type enum in the Qt documentation.

Creating an animation group
In this example, we will learn how to use an animation group to manage the states of the animations contained in the group.

How to do it…
1. We will use the previous example, but this time we add two more push buttons to the main window, like so:
2.
Next, define the animation for each of the push buttons in the main window's constructor:

QPropertyAnimation *animation1 = new QPropertyAnimation(ui->pushButton, "geometry");
animation1->setDuration(3000);
animation1->setStartValue(ui->pushButton->geometry());
animation1->setEndValue(QRect(50, 200, 100, 50));

QPropertyAnimation *animation2 = new QPropertyAnimation(ui->pushButton_2, "geometry");
animation2->setDuration(3000);
animation2->setStartValue(ui->pushButton_2->geometry());
animation2->setEndValue(QRect(150, 200, 100, 50));

QPropertyAnimation *animation3 = new QPropertyAnimation(ui->pushButton_3, "geometry");
animation3->setDuration(3000);
animation3->setStartValue(ui->pushButton_3->geometry());
animation3->setEndValue(QRect(250, 200, 100, 50));

3. After that, create an easing curve and apply the same curve to all three animations:

QEasingCurve curve;
curve.setType(QEasingCurve::OutBounce);
curve.setAmplitude(1.00);
curve.setOvershoot(1.70);
curve.setPeriod(0.30);
animation1->setEasingCurve(curve);
animation2->setEasingCurve(curve);
animation3->setEasingCurve(curve);

4. Once you have applied the easing curve to all three animations, create an animation group and add all three animations to the group:

QParallelAnimationGroup *group = new QParallelAnimationGroup;
group->addAnimation(animation1);
group->addAnimation(animation2);
group->addAnimation(animation3);

5. Call the start() function from the animation group we just created:

group->start();

How it works...
Since we are using an animation group now, we no longer call the start() function from the individual animations; instead, we call the start() function from the animation group we just created. If you compile and run the example now, you will see all three buttons' animations being played at the same time. This is because we are using a parallel animation group.
You can replace it with a sequential animation group and run the example again:

QSequentialAnimationGroup *group = new QSequentialAnimationGroup;

This time, only a single button will play its animation at a time, while the other buttons wait patiently for their turn. The priority is set based on which animation is added to the animation group first. You can change the animation sequence by simply rearranging the order in which the animations are added to the group. For example, if we want button 3 to start the animation first, followed by button 2 and then button 1, the code will look like this:

group->addAnimation(animation3);
group->addAnimation(animation2);
group->addAnimation(animation1);

Since property animations and animation groups both inherit from the QAbstractAnimation class, you can also add an animation group to another animation group to form a more complex, nested animation group.

There's more…
Qt allows us to create multiple animations and group them into an animation group. A group is usually responsible for managing the state of its animations (that is, it decides when to start, stop, resume, and pause them). Currently, Qt provides two classes for animation groups, QParallelAnimationGroup and QSequentialAnimationGroup:
- QParallelAnimationGroup: As its name implies, a parallel animation group runs all the animations in its group at the same time. The group is deemed finished when the longest-lasting animation has finished running.
- QSequentialAnimationGroup: A sequential animation group runs its animations in sequence, meaning it will only run a single animation at a time, and only plays the next animation when the current one has finished.

Creating a nested animation group
One good example of using a nested animation group is when you have several parallel animation groups and you want to play the groups in a sequential order.

How to do it…
1.
We will use the UI from the previous example and add a few more buttons to the main window, like so:
2. First, create all the animations for the buttons, then create an easing curve and apply it to all the animations:

QPropertyAnimation *animation1 = new QPropertyAnimation(ui->pushButton, "geometry");
animation1->setDuration(3000);
animation1->setStartValue(ui->pushButton->geometry());
animation1->setEndValue(QRect(50, 50, 100, 50));

QPropertyAnimation *animation2 = new QPropertyAnimation(ui->pushButton_2, "geometry");
animation2->setDuration(3000);
animation2->setStartValue(ui->pushButton_2->geometry());
animation2->setEndValue(QRect(150, 50, 100, 50));

QPropertyAnimation *animation3 = new QPropertyAnimation(ui->pushButton_3, "geometry");
animation3->setDuration(3000);
animation3->setStartValue(ui->pushButton_3->geometry());
animation3->setEndValue(QRect(250, 50, 100, 50));

QPropertyAnimation *animation4 = new QPropertyAnimation(ui->pushButton_4, "geometry");
animation4->setDuration(3000);
animation4->setStartValue(ui->pushButton_4->geometry());
animation4->setEndValue(QRect(50, 200, 100, 50));

QPropertyAnimation *animation5 = new QPropertyAnimation(ui->pushButton_5, "geometry");
animation5->setDuration(3000);
animation5->setStartValue(ui->pushButton_5->geometry());
animation5->setEndValue(QRect(150, 200, 100, 50));

QPropertyAnimation *animation6 = new QPropertyAnimation(ui->pushButton_6, "geometry");
animation6->setDuration(3000);
animation6->setStartValue(ui->pushButton_6->geometry());
animation6->setEndValue(QRect(250, 200, 100, 50));

QEasingCurve curve;
curve.setType(QEasingCurve::OutBounce);
curve.setAmplitude(1.00);
curve.setOvershoot(1.70);
curve.setPeriod(0.30);

animation1->setEasingCurve(curve);
animation2->setEasingCurve(curve);
animation3->setEasingCurve(curve);
animation4->setEasingCurve(curve);
animation5->setEasingCurve(curve);
animation6->setEasingCurve(curve);

3.
Create two animation groups, one for the buttons in the upper column and another for the lower column:

QParallelAnimationGroup *group1 = new QParallelAnimationGroup;
group1->addAnimation(animation1);
group1->addAnimation(animation2);
group1->addAnimation(animation3);

QParallelAnimationGroup *group2 = new QParallelAnimationGroup;
group2->addAnimation(animation4);
group2->addAnimation(animation5);
group2->addAnimation(animation6);

4. We will create yet another animation group, which will be used to store the two animation groups we created previously:

QSequentialAnimationGroup *groupAll = new QSequentialAnimationGroup;
groupAll->addAnimation(group1);
groupAll->addAnimation(group2);
groupAll->start();

How it works...
What we're trying to do here is to play the animation of the buttons in the upper column first, followed by the buttons in the lower column. Since both of the animation groups are parallel animation groups, the buttons belonging to the respective groups will be animated at the same time when the start() function is called. This time, however, the outer group is a sequential animation group, which means only a single parallel animation group is played at a time, followed by the other when the first one is finished. Animation groups are a very handy system that allows us to create very complex GUI animations with simple code. Qt handles the difficult part for us so we don't have to.

State machines in Qt
State machines can be used for many purposes, but in this chapter we will only cover topics related to animation.

How to do it…
1. First, we will set up a new user interface for our example program, which looks like this:
2. Next, we will include some headers in our source code:

#include <QStateMachine>
#include <QPropertyAnimation>
#include <QEventTransition>

3.
After that, in our main window's constructor, add the following code to create a new state machine and two states, which we will be using later:

QStateMachine *machine = new QStateMachine(this);
QState *s1 = new QState();
QState *s2 = new QState();

4. Then, we will define what we should do within each state, which in this case will be to change the label's text, as well as the button's position and size:

QState *s1 = new QState();
s1->assignProperty(ui->stateLabel, "text", "Current state: 1");
s1->assignProperty(ui->pushButton, "geometry", QRect(50, 200, 100, 50));

QState *s2 = new QState();
s2->assignProperty(ui->stateLabel, "text", "Current state: 2");
s2->assignProperty(ui->pushButton, "geometry", QRect(200, 50, 140, 100));

5. Once you are done with that, let's proceed by adding event transitions to our source code:

QEventTransition *t1 = new QEventTransition(ui->changeState, QEvent::MouseButtonPress);
t1->setTargetState(s2);
s1->addTransition(t1);

QEventTransition *t2 = new QEventTransition(ui->changeState, QEvent::MouseButtonPress);
t2->setTargetState(s1);
s2->addTransition(t2);

6. Next, add all the states we have just created to the state machine and define state 1 as the initial state. Then, call machine->start() to start running the state machine:

machine->addState(s1);
machine->addState(s2);
machine->setInitialState(s1);
machine->start();

7. If you run the example program now, you will notice everything works fine, except that the button does not go through a smooth transition; it simply jumps instantly to the position and size we set previously. This is because we have not used a property animation to create a smooth transition.
8.
Go back to the event transition step and add the following lines of code: QEventTransition *t1 = new QEventTransition(ui->changeState, QEvent::MouseButtonPress); t1->setTargetState(s2); t1->addAnimation(new QPropertyAnimation(ui->pushButton, "geometry")); s1->addTransition(t1); QEventTransition *t2 = new QEventTransition(ui->changeState, QEvent::MouseButtonPress); t2->setTargetState(s1); t2->addAnimation(new QPropertyAnimation(ui->pushButton, "geometry")); s2->addTransition(t2); 9. You can also add an easing curve to the animation to make it look more interesting: QPropertyAnimation *animation = new QPropertyAnimation(ui->pushButton, "geometry"); animation->setEasingCurve(QEasingCurve::OutBounce); QEventTransition *t1 = new QEventTransition(ui->changeState, QEvent::MouseButtonPress); t1->setTargetState(s2); t1->addAnimation(animation); s1->addTransition(t1); QEventTransition *t2 = new QEventTransition(ui->changeState, QEvent::MouseButtonPress); t2->setTargetState(s1); t2->addAnimation(animation); s2->addTransition(t2); How it works... There are two push buttons and a label on the main window layout. The button at the top-left corner will trigger the state change when pressed, while the label at the top-right corner will change its text to show which state we are currently in, and the button below will animate according to the current state. The QEventTransition classes define what will trigger the transition between one state and another. In our case, we want the state to change from state 1 to state 2 when the ui->changeState button (the one at the upper left) is clicked. After that, we also want to change from state 2 back to state 1 when the same button is pressed again. This can be achieved by creating another event transition class and setting the target state back to state 1. Then, add these transitions to their respective states.
Instead of just assigning the properties directly to the widgets, we tell Qt to use the property animation class to smoothly interpolate the properties toward the target values. It is that simple! There is no need to set the start value and end value, because we have already called the assignProperty() function, which has automatically assigned the end value. There's more… The state machine framework in Qt provides classes for creating and executing state graphs. Qt's event system is used to drive the state machines, where transitions between states can be triggered by using signals, then the slots on the other end will be invoked by the signals to perform an action, such as playing an animation. Once you understand the basics of state machines, you can use them to do other things as well. The state graph in the state machine framework is hierarchical. Just like the animation group in the previous section, states can also be nested inside of other states: States, transitions, and animations in QML If you prefer to work with QML instead of C++, Qt also provides similar features in Qt Quick that allow you to easily animate a GUI element with the minimum lines of code. In this example, we will learn how to achieve this with QML. How to do it… 1. First we will create a new Qt Quick Application project and set up our user interface like so: 2. Here is what my main.qml file looks like: import QtQuick 2.3 import QtQuick.Window 2.2 Window { visible: true width: 480; height: 320; Rectangle { id: background; anchors.fill: parent; color: "blue"; } Text { text: qsTr("Hello World"); anchors.centerIn: parent; color: "white"; font.pointSize: 15; } } 3.
Add the color animation to the Rectangle object: Rectangle { id: background; anchors.fill: parent; color: "blue"; SequentialAnimation on color { ColorAnimation { to: "yellow"; duration: 1000 } ColorAnimation { to: "red"; duration: 1000 } ColorAnimation { to: "blue"; duration: 1000 } loops: Animation.Infinite; } } 4. Then, add a number animation to the text object: Text { text: qsTr("Hello World"); anchors.centerIn: parent; color: "white"; font.pointSize: 15; SequentialAnimation on opacity { NumberAnimation { to: 0.0; duration: 200} NumberAnimation { to: 1.0; duration: 200} loops: Animation.Infinite; } } 5. Next, add another number animation to it: Text { text: qsTr("Hello World"); anchors.centerIn: parent; color: "white"; font.pointSize: 15; SequentialAnimation on opacity { NumberAnimation { to: 0.0; duration: 200} NumberAnimation { to: 1.0; duration: 200} loops: Animation.Infinite; } NumberAnimation on rotation { from: 0; to: 360; duration: 2000; loops: Animation.Infinite; } } 6. Define two states, one called the PRESSED state and another called the RELEASED state. Then, set the default state to RELEASED: Rectangle { id: background; anchors.fill: parent; state: "RELEASED"; states: [ State { name: "PRESSED" PropertyChanges { target: background; color: "blue"} }, State { name: "RELEASED" PropertyChanges { target: background; color: "red"} } ] } 7. After that, create a mouse area within the Rectangle object so that we can click on it: MouseArea { anchors.fill: parent; onPressed: background.state = "PRESSED"; onReleased: background.state = "RELEASED"; } 8. Add some transitions to the Rectangle object: transitions: [ Transition { from: "PRESSED" to: "RELEASED" ColorAnimation { target: background; duration: 200} }, Transition { from: "RELEASED" to: "PRESSED" ColorAnimation { target: background; duration: 200} } ] How it works... The main window consists of a blue rectangle and static text that says Hello World.
We want the background color to change from blue to yellow, then to red, and back to blue in a loop. This can be easily achieved using the color animation type in QML. What we're doing at Step 3 is basically creating a sequential animation group within the Rectangle object, then creating three different color animations within the group, which will change the color of the object every 1,000 milliseconds (1 second). We also set the animations to loop infinitely. In Step 4, we want to use the number animation to animate the alpha value of the static text. We created another sequential animation group within the Text object and created two number animations to animate the alpha value from 0 to 1 and back. Then, we set the animations to loop infinitely. Then in Step 5, we rotate the Hello World text by adding another number animation to it. In Step 6, we wanted to make the Rectangle object change from one color to another when we click on it. When the mouse is released, the Rectangle object will change back to its initial color. To achieve that, first we need to define the two states, one called the PRESSED state and another called the RELEASED state. Then, we set the default state to RELEASED. Now, when you compile and run the example, the background will instantly change color to blue when pressed and change back to red when the mouse is released. That works great and we can further enhance it by giving it a little transition when switching color. This can be easily achieved by adding transitions to the Rectangle object.
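The PRESSED/RELEASED logic described here is not specific to QML or Qt's state machine classes. As a rough illustration of the same two-state idea, here is a minimal sketch in plain Python (all names and the event strings are invented for this example; this is not Qt's actual API):

```python
# Minimal two-state machine mirroring the PRESSED/RELEASED example above.
# Entering a state applies its property changes, and transitions are
# triggered by named events, just as the QML states/transitions do.

class ToggleStateMachine:
    def __init__(self):
        # state name -> property values applied on entering that state
        self.states = {
            "RELEASED": {"color": "red"},
            "PRESSED": {"color": "blue"},
        }
        self.state = "RELEASED"               # default state, as in step 6
        self.properties = dict(self.states[self.state])

    def on_event(self, event):
        # transitions triggered by mouse events, as in step 7
        transitions = {
            ("RELEASED", "mouse_press"): "PRESSED",
            ("PRESSED", "mouse_release"): "RELEASED",
        }
        target = transitions.get((self.state, event))
        if target is not None:
            self.state = target
            self.properties.update(self.states[target])

machine = ToggleStateMachine()
machine.on_event("mouse_press")
print(machine.state, machine.properties["color"])    # PRESSED blue
machine.on_event("mouse_release")
print(machine.state, machine.properties["color"])    # RELEASED red
```

What Qt adds on top of this bare skeleton is the smooth interpolation between the property values, which is exactly what the transitions with ColorAnimation provide.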
There's more… In QML, there are eight different types of property animation you can use: - Anchor animation: Animates changes in anchor values - Color animation: Animates changes in color values - Number animation: Animates changes in qreal-type values - Parent animation: Animates changes in parent values - Path animation: Animates an item along a path - Property animation: Animates changes in property values - Rotation animation: Animates changes in rotation values - Vector3d animation: Animates changes in QVector3d values Just like the C++ version, these animations can also be grouped together in an animation group to play the animations in sequence or in parallel. You can also control the animations using easing curves and determine when to play these animations using state machines, just like what we have done in the previous section. Animating widget properties using animators In this recipe, we will learn how to animate the properties of our GUI widgets using the animator feature provided by QML
This step-by-step article discusses how to restore user accounts, computer accounts, mailboxes, and their group memberships after they have been deleted from Active Directory. 1. First, create a new user in Active Directory without any mailbox (for example: recover) 2. Download ADRestore, a free utility to recover deleted users from Active Directory 3. Extract the zip file, open a command prompt, and type adrestore.exe (without any options) See Image If you want to recover the deleted user, type adrestore /r Click yes to restore the user See Image get-mailboxdatabase | clean-mailboxdatabase see the image It will display the deleted user's mailbox Go forward with the default options till it completes Go to Microsoft Office Outlook and configure "recover" as the mailbox user Export the mailbox to a PST file Create a mailbox with the deleted user account Import the PST file to the mailbox Yes, could come in handy at some time. thanks... good to know about the adrestore utility any hope of a similar recover if you are on Exchange 2003? EDIT: Figured it out. :) Thanks, this saved my bacon when I blew away a troublesome user profile. (Lesson Learned!) How far does ADRestore go back? Or is it for all deleted accounts? I know these are flagged as Tombstones, so are they always there? Thanks One of the easiest ways to restore accidentally deleted data from Exchange Server is to make use of a third-party Exchange recovery tool like Kernel Exchange Recovery Software, Unistal EDB Repair, PCvita Recovery, or Stellar Phoenix Exchange Server Recovery Software. Out of these I have tried Kernel and Stellar Exchange Recovery Software, both in different corruption scenarios, and found Stellar to be better than Kernel. So, I would recommend Stellar Exchange Recovery Software.
You can download its free demo from its product page to test the credibility of the software. Commands to recover deleted active directory users mailbox in exchange 2016 Please ...
In Tic-Tac-Toe with Tabular Q-learning, we developed a tic-tac-toe agent using reinforcement learning. We used a table to assign a Q-value to each move from a given position. Training games were used to gradually nudge these Q-values in a direction that produced better results: Good results pulled the Q-values for the actions that led to those results higher, while poor results pushed them lower. In this article, instead of using tables, we'll apply the same idea of reinforcement learning to neural networks. Neural Network as a Function We can think of the Q-table as a multivariable function: The input is a given tic-tac-toe position, and the output is a list of Q-values corresponding to each move from that position. We will endeavour to teach a neural network to approximate this function. For the input into our network, we'll flatten out the board position into an array of 9 values: 1 represents an X, -1 represents an O, and 0 is an empty cell. The output layer will be an array of 9 values representing the Q-value for each possible move: A low value closer to 0 is bad, and a higher value closer to 1 is good. After training, the network will choose the move corresponding to the highest output value from this model. The diagram below shows the input and output for the given position after training (initially all of the values hover around 0.5): As we can see, the winning move for X, A2, has the highest Q-value, 0.998, and the illegal moves have very low Q-values. The Q-values for the other legal moves are greater than the illegal ones, but less than the winning move. That's what we want. 
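The flattened-input encoding described above (1 for X, -1 for O, 0 for empty) is easy to sketch in plain Python. The helper name below is my own, not from the article's code:

```python
# Flatten a 3x3 board into the 9-value network input described above:
# 1 represents an X, -1 represents an O, and 0 is an empty cell.

def board_to_input(rows):
    mapping = {"X": 1, "O": -1, None: 0}
    return [mapping[cell] for row in rows for cell in row]

position = [
    ["X", None, "O"],
    [None, "X", None],
    ["O", None, None],
]
print(board_to_input(position))
# -> [1, 0, -1, 0, 1, 0, -1, 0, 0]
```

The resulting list of nine numbers is what gets fed to the input layer; the output layer's nine values then line up with the same cell ordering.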
Model The network (using PyTorch) has the following structure: class TicTacNet(nn.Module): def __init__(self): super().__init__() self.dl1 = nn.Linear(9, 36) self.dl2 = nn.Linear(36, 36) self.output_layer = nn.Linear(36, 9) def forward(self, x): x = self.dl1(x) x = torch.relu(x) x = self.dl2(x) x = torch.relu(x) x = self.output_layer(x) x = torch.sigmoid(x) return x The 9 input values that represent the current board position are passed through two dense hidden layers of 36 neurons each, then to the output layer, which consists of 9 values, each corresponding to the Q-value for a given move Training Most of the training logic for this agent is the same as for the Q-table implementation discussed earlier in this series. However, in that implementation, we prevented illegal moves. For the neural network, I decided to teach it not to make illegal moves, so as to have a more realistic set of output values for any given position. The code below, from qneural.py, shows how the parameters of the network are updated for a single training game: def update_training_gameover(net_context, move_history, q_learning_player, final_board, discount_factor): game_result_reward = get_game_result_value(q_learning_player, final_board) # move history is in reverse-chronological order - last to first next_position, move_index = move_history[0] backpropagate(net_context, next_position, move_index, game_result_reward) for (position, move_index) in list(move_history)[1:]: next_q_values = get_q_values(next_position, net_context.target_net) qv = torch.max(next_q_values).item() backpropagate(net_context, position, move_index, discount_factor * qv) next_position = position net_context.target_net.load_state_dict(net_context.policy_net.state_dict()) def backpropagate(net_context, position, move_index, target_value): net_context.optimizer.zero_grad() output = net_context.policy_net(convert_to_tensor(position)) target = output.clone().detach() target[move_index] = target_value illegal_move_indexes = 
position.get_illegal_move_indexes() for mi in illegal_move_indexes: target[mi] = LOSS_VALUE loss = net_context.loss_function(output, target) loss.backward() net_context.optimizer.step() We maintain two networks, the policy network ( policy_net) and the target network ( target_net). We perform backpropagation on the policy network, but we obtain the maximum Q-value for the next state from the target network. That way, the Q-values obtained from the target network aren't changing during the course of training for a single game. Once we complete training for a game, we update the target network with the parameters of the policy network ( load_state_dict). move_history contains the Q-learning agent's moves for a single training game at a time. For the last move played by the Q-learning agent, we update its chosen move with the reward value for that game - 0 for a loss, and 1 for a win or a draw. Then we go through the remaining moves in the game history in reverse-chronological order. We tug the Q-value for the move that was played in the direction of the maximum Q-value from the next state (the next state is the state that results from the action taken in the current state). This is analogous to the exponential moving average used in the tabular Q-learning approach: In both cases, we are pulling the current value in the direction of the maximum Q-value available from the next state. For any illegal move from a given game position, we also provide negative feedback for that move as part of the backpropagation. That way, our network will hopefully learn not to make illegal moves. Results The results are comparable to the tabular Q-learning agent. The following table (based on 1,000 games in each case) is representative of the results obtained after a typical training run: These results were obtained from a model that learned from 2 million training games for each of X and O (against an agent making random moves). It takes over an hour to train this model on my PC. 
That's a huge increase over the number of games needed to train the tabular agent. I think this shows how essential large amounts of high-quality data are for deep learning, especially when we go from a toy example like this one to real-world problems. Of course the advantage of the neural network is that it can generalize - that is, it can handle inputs it has not seen during training (at least to some extent). With the tabular approach, there is no interpolation: The best we can do if we encounter a position we haven't seen before is to apply a heuristic. In games like go and chess, the number of positions is so huge that we can't even begin to store them all. We need an approach which can generalize, and that's where neural networks can really shine compared to prior techniques. Our network offers the same reward for a win as for a draw. I tried giving a smaller reward for a draw than a win, but even lowering the value for a draw to something like 0.95 seems to reduce the stability of the network. In particular, playing as X, the network can end up losing a significant number of games against the randomized minimax agent. Making the reward for a win and a draw the same seems to resolve this problem. Even though we give the same reward for a win and a draw, the agent seems to do a good job of winning games. I believe this is because winning a game usually ends it early, before all 9 cells on the board have been filled. This means there is less dilution of the reward going back through each move of the game history (the same idea applies for losses and illegal moves). On the other hand, a draw requires (by definition) all 9 moves to be played, which means that the rewards for the moves in a given game leading to a draw are more diluted as we go from one move to the previous one played by the Q-learning agent. Therefore, if a given move consistently leads to a win sooner, it will still have an advantage over a move that eventually leads to a draw. 
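Two of the ideas above can be sketched in isolation: the "tug toward the next state's best Q-value" update, and the reward-dilution argument for why a quick win beats an eventual draw. The snippet below is plain Python with illustrative constants, not the article's actual PyTorch code; the function names, discount factor, and learning rate are all assumptions for the example:

```python
# 1. nudge(): the tabular analog of the update -- pull the current
#    Q-value toward the discounted best Q-value of the next state
#    (the exponential-moving-average idea mentioned above).
def nudge(q_current, q_next_max, discount=0.9, alpha=0.1):
    target = discount * q_next_max
    return (1 - alpha) * q_current + alpha * target

# 2. first_move_value(): the reward is discounted once per agent move as
#    it flows backward through the move history, so a longer game dilutes
#    the reward seen by the first move.
def first_move_value(reward, agent_moves, discount=0.9):
    return reward * discount ** (agent_moves - 1)

q = 0.5
for _ in range(50):                 # repeated updates pull q toward 0.9
    q = nudge(q, q_next_max=1.0)
print(round(q, 3))

quick_win = first_move_value(1.0, agent_moves=3)   # win before board fills
full_draw = first_move_value(1.0, agent_moves=5)   # all 9 cells played
print(quick_win > full_draw)        # True: the earlier win is less diluted
```

This matches the article's observation: even with identical rewards for a win and a draw, the shorter path to a win leaves the early moves with a stronger signal.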
Network Topology and Hyperparameters As mentioned earlier, this model has two hidden dense layers of 36 neurons each. MSELoss is used as the loss function and the learning rate is 0.1. relu is used as the activation function for the hidden layers. sigmoid is used as the activation for the output layer, to squeeze the results into a range between 0 and 1. Given the simplicity of the network, this design may seem self-evident. However, even for this simple case study, tuning this network was rather time consuming. At first, I tried using tanh (hyperbolic tangent) for the output layer - it made sense to me to set -1 as the value for a loss and 1 as the value for a win. However, I was unable to get stable results with this activation function. Eventually, after trying several other ideas, I replaced it with sigmoid, which produced much better results. Similarly, replacing relu with something else in the hidden layers made the results worse. I also tried several different network topologies, with combinations of one, two, or three hidden layers, and using combinations of 9, 18, 27, and 36 neurons per hidden layer. Lastly, I experimented with the number of training games, starting at 100,000 and gradually increasing that number to 2,000,000, which seems to produce the most stable results. DQN This implementation is inspired by DeepMind's DQN architecture (see Human-level control through deep reinforcement learning), but it's not exactly the same. DeepMind used a convolutional network that took direct screen images as input. Here, I felt that the goal was to teach the network the core logic of tic-tac-toe, so I decided that simplifying the representation made sense. Removing the need to process the input as an image also meant fewer layers were needed (no layers to identify the visual features of the board), which sped up training. DeepMind's implementation also used experience replay, which applies random fragments of experiences as input to the network during training. 
My feeling was that generating fresh random games was simpler in this case. Can we call this tic-tac-toe implementation "deep" learning? I think this term is usually reserved for networks with at least three hidden layers, so probably not. I believe that increasing the number of layers tends to be more valuable with convolutional networks, where we can more clearly understand this as a process where each layer further abstracts the features identified in the previous layer, and where the number of parameters is reduced compared to dense layers. In any case, adding layers is something we should only do if it produces better results. Code The full code is available on github (qneural.py and main_qneural.py): nestedsoftware / tictac Experimenting with different techniques for playing tic-tac-toe Demo project for different approaches for playing tic-tac-toe. Code requires python 3, numpy, and pytest. For the neural network/dqn implementation (qneural.py), pytorch is required. Create virtual environment using pipenv: pipenv --site-packages Install using pipenv: pipenv shell pipenv install --dev Set PYTHONPATH to main project directory: - In windows, run path.bat - In bash run source path.sh Run tests and demo: - Run tests: pytest - Run demo: python -m tictac.main - Run neural net demo: python -m tictac.main_qneural Playing minimax not random vs minimax not random: ------------------------------------------------- x wins: 0.00% o wins: 0.00% draw : 100.00% Playing minimax random vs minimax random:
Hi all, I'm not finding anything that seems to address this issue: I'm creating a static class for holding all of the sprites for players. That way, I only have to assign the sprites once and any player can get them by accessing the static class. example: public static class PlayerSprites { public static Sprite faceLeft; public static Sprite faceRight; public static Sprite getSprite(Orientation orientation) { if (orientation.facing == "left") { return faceLeft; } else { return faceRight; } } } Orientation is simply a class that holds a series of ways the character can behave: Orientation public class Orientation { public string facing = "left"; public bool standing = true; public bool shooting = false; } } However, I can't seem to find any way to set the public Sprite variables. Is there a way to do this? even though you may think it irrelevant, you should show the structure/class for Orientation. why are you using a string for facing and not an enum? string comparisons are slow... facing enum what do you mean by "I can't seem to find any way to set the public Sprite variables"? I added the orientation class, but I really do believe it is irrelevant to the question. It's just a collection of flags so I can keep track of what the player is doing and assign the right sprite. The spring comparison is only a temporary thing I threw in there to get it work. I'll come up with something better later. As for the original question, I have a static class with static public variables. Is there any way to set static public variables in a static public class from the inspector? Answer by ian_sorbello · Nov 18, 2015 at 01:27 AM Your entire class does not need to be static - just the accessor, i.e. getSprite(). 
Something like this would work: public class PlayerSprites : MonoBehaviour { public Sprite faceLeft; public Sprite faceRight; private static PlayerSprites _instance; void Start() { _instance = this; } public static PlayerSprites Instance() { return _instance; } public Sprite getSprite(Orientation orientation) { if (orientation.facing == "left") { return faceLeft; } else { return faceRight; } } } You can then add this component (it's now a MonoBehaviour) on an object in your hierarchy. Then use Unity to drag and drop the Sprites you like into the public variables. Then finally, you can get access to your sprites like this: Sprite s = PlayerSprites.Instance().getSprite(orientation); Nope, that won't work. You may ask why, because if your getSprite is static then you cannot use non static member within (faceLeft/Right). I guess you meant getSprite to be non-static, then we're good. Also, I would recommend to use a DontDestroyOnLoad to make the object survive new scenes but then make sure to destroy any new instance that would duplicate. Very interesting idea, I hadn't thought of that. Does the underscore have a specific property about it in C#/Unity? Or was that just a convention you like to use? I'll play around with this and see what I can do with it. It may help me in other areas too. Underscores don't do anything special, so that's just their convention. The problem with this, is that it requires an object in the scene with this script attached. Which means you need to make it a prefab singleton so every scene can have it. Also, as fafase mentioned, if the getSprite accessor is static it won't work as shown, just the instance variable/method should be static with a proper singleton setup in Awake.
The reason why I think this is a handy pattern, is that your code can access the underlying assets through the getSprite() method (does not need to be static) - and the component itself is visible in the hierarchy window, so you can drag and drop your other assets (sprites) onto it easily. The object is a dummy object - it doesn't need to be visible or be anywhere special. If you make the object a prefab - then you can bring this into any scene you like... I modified the answer since it was wrong. getSprite cannot be static to work in the current context and to be used as you showed. Answer by OncaLupe · Nov 18, 2015 at 05:56 AM I'd look into Scriptable Objects. They're items that stay in your Assets, but can hold data like a script. You can access the data from any script in your scene like a Static, but the contents are visible in the inspector as well. Also, since it's an asset, any changes made to it in runtime are saved. They take a little to set up, but should work perfectly for what you want. I didn't know about these. I really like the idea of scriptable objects. The more I work in Unity, the less I like working in the Inspector and the more I like to script everything, so this is right up my alley.
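The pattern in the accepted answer, one shared instance reached through a static-style accessor, is language-agnostic. A rough Python sketch of the same idea follows (the class and method names mirror the C# answer but are otherwise hypothetical, and this is of course not the Unity API):

```python
# Singleton-style accessor: one shared PlayerSprites instance is stored
# in a class-level slot when constructed (like Start() in the answer),
# and any caller can reach it through the class-level instance() method.

class PlayerSprites:
    _instance = None

    def __init__(self, face_left, face_right):
        self.face_left = face_left
        self.face_right = face_right
        PlayerSprites._instance = self       # register the shared instance

    @classmethod
    def instance(cls):
        return cls._instance

    def get_sprite(self, facing):
        # non-static accessor, so it may use the instance's members
        return self.face_left if facing == "left" else self.face_right

PlayerSprites("left.png", "right.png")       # set up once
print(PlayerSprites.instance().get_sprite("left"))   # left.png
```

As in the C# version, only the accessor is static; the sprite fields stay on the instance, which is why the accessor method itself must not be static.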
NumPy is a fundamental package for data analysis in Python as the majority of other packages in the Python data eco-system build on it. Subsequently, it makes sense for us to have an understanding of what NumPy can help us with and its general principles. In the following article, we’ll take a look at arrays in Python – which essentially take the ‘lists’ data type to a new level. We’ll have powerful new methods, random number generation and a way of storing data in grid-like structures, not just lists like we have seen. Let’s get things started and import the numpy library. Take a read here if you need to install it! import numpy as np Creating a NumPy array Firstly, we need to create our array. We have a number of different ways to do this. One way is to convert a pre-existing list into an array. Below, we do this to create a 1d array (one line) and a 2d array (a grid, or matrix). #Three lists, one for GK heights, one for GK weights, one for names GKNames = ["Kaller","Fradeel","Hayward","Honeyman"] GKHeights = [184,188,191,193] GKWeights = [81,85,103,99] #Create an array of names print(np.array(GKNames)) #Create a matrix of all three lists, start with a list of lists GKMatrix = [GKNames,GKHeights,GKWeights] print(np.array(GKMatrix)) ['Kaller' 'Fradeel' 'Hayward' 'Honeyman'] [['Kaller' 'Fradeel' 'Hayward' 'Honeyman'] ['184' '188' '191' '193'] ['81' '85' '103' '99']] There we have two examples of creating arrays from a list. Our second one is particularly cool – is just like a spreadsheet and will make our data much easier to deal with. Aside from creating our own arrays from lists we already have, numpy can create them with its own methods: #With 'arange', we can create arrays just like we created lists with 'range' #This gives us an array ranging from the numbers in the arguments np.arange(0,12) array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) #Want a blank array? 
Create it full of zeros with 'zeros' #The argument within it create the shape of a 2d or 3d array np.zeros((3,11)) array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) #Hate zeros? Why not use 'ones'?! np.ones((3,11)) array([[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]) #Creating dummy data or need a random number? #randint and randn are useful here #Creates random numbers around a standard distribution from 0 #The argument gives us the array's shape print(np.random.randn(3,3)) #Creates random numbers between two numbers that we give it #The third argument gives us the shape of the array print(np.random.randint(1,100,(3,3))) [[ 1.1403024 -1.76082025 -0.71738168] [-0.44740344 -0.16392845 1.04022957] [ 1.97068835 0.50075891 -0.33750378]] [[70 28 67] [19 54 11] [ 9 34 67]] Looking for more ways to create arrays? Take a look in the documentation for ‘rand’, ‘linspace’, ‘eye’ and others! Array Methods Not only does NumPy give us a good way to store our data, it also gives us some great tools to simplify working with it. Let’s find the tallest goalkeeper from our earlier examples with array methods. #Three lists, one for GK heights, one for GK weights, one for names #Create an array with each list GKNames = ["Kaller","Fradeel","Hayward","Honeyman"] GKHeights = [184,188,191,193] GKWeights = [81,85,103,99] np.array(GKNames) GKHeights = np.array(GKHeights) np.array(GKWeights) #What is the largest height, .max()? GKHeights.max() 193 #What location is the max, .argmax()? GKHeights.argmax() 3 #Can I use this method to locate the player's name? 
#Instead of a number in the square brackets, I can just put this method GKNames[GKHeights.argmax()] 'Honeyman' With only four players this is a bit long-winded, but I’m sure that you can see the benefit if we have a whole academy of players and we need to find our tallest player from 100s. Swap the max to min to find the smallest value in an array. Summary You are likely to use NumPy with all sorts of packages as you develop your Python skills. Having a healthy appreciation of how it works, especially with arrays, will save you lots of headaches down the line. In this page, we saw how we can create them from scratch, or convert them from lists. We created flat, 1-d arrays and 2-d grids. We then applied methods to find highest datapoints and even used these to navigate our grid. Great work! Take a look at our extension on NumPy arrays here to learn more.
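For comparison, here is what the .argmax()-then-index trick is doing, mimicked in plain Python without NumPy. This sketch is my own addition, not part of the original tutorial:

```python
# Pure-Python equivalent of GKNames[GKHeights.argmax()]: find the index
# of the largest height, then use that index to look up the name.

gk_names = ["Kaller", "Fradeel", "Hayward", "Honeyman"]
gk_heights = [184, 188, 191, 193]

tallest_index = max(range(len(gk_heights)), key=gk_heights.__getitem__)
print(tallest_index, gk_names[tallest_index])   # 3 Honeyman
```

NumPy's version does the same job, but vectorized and without a Python-level loop, which is what makes it worthwhile once the squad grows from four goalkeepers to hundreds of players.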
Time Lapse Camera Robot Introduction: Time Lapse Camera Robot This is one of my old class projects that I never got around to adding here because I did not like the results. However, the pictures look cool and the code works well so someone will probably like this project. At the very least this project will be a warning to others about what they should take into consideration when starting their time lapse project. Features: - Arduino power - 2 Axis servo motors - Programmable motion - Back lit LCD display - SolidWorks modeled - Laser Cut - Heat shaped acrylic - Canon Rebel Intervalometer Step 1: Problems I should start by listing what I did not like. First, the motors were too weak to move the camera properly. If I had the mount on the tripod then attached the camera it would work good and point the camera where it should, but if the camera had a telescope lens on it, the balance was off. If I was not careful when manually moving the whole setup the motors would turn. Lastly I did not like the noises the motors made the whole time they were running. The next problem was programming where the camera was supposed to aim. I thought I had an efficient, easy to visualize, programming system for aiming. The camera always spun 180 degrees, left to right. I figured if I wanted less than 180 degrees it was easy to crop afterwards. My two user options were to set the time it took for the camera to move the horizontal 180 degrees, and the vertical angle it started at. The programming system worked well but never had the finesse I needed. Sure I knew the start, and end points but it really needed something like key frames, and the ability to speed up or slow down at different spots. If I was panning across a lake the vertical servo was useless and there wasn't much that I could do with my two servos that I could not also do with one. Third, having the camera take a bunch of pictures then editing them together afterwards is easy but it needs a lot of pictures to look good.
I was setting it up so that when the video was compiled it would be about 11 pictures a second, and it looked jumpy. Normal videos are about 30 pictures a second, but being able to take and store enough pictures to make a 30-picture-a-second video was overwhelming for my SD card and camera. I didn't want the camera taking that many pictures, or my computer to stall trying to compile them. Next time I would just get a camera that could take video and avoid the intervalometer altogether. Oh, and the camera I used was a Canon Rebel; I don't know where my pictures with the proper camera went.

The last problem was keeping the whole thing powered. The servos use a lot of power, and especially during time lapse sessions I never liked having to worry about my batteries dying. I am currently working on a panning slider design that will solve my battery problem.

Step 2: Build Process

Here is a quick rundown of how I did this project. I included the SolidWorks models and the DWG files. Even though I started by listing everything I would change next time, this project was fun and a good stepping stone for future projects. The final product looked cleaner than many of my projects and I think I learned a lot doing this.

- Model the 3D assembly in SolidWorks.
- Convert the measurements from the 3D SolidWorks model into a 2D DWG that can be cut with the laser.
- Code the Arduino.
- Assemble the unit. I used a heat gun to make the fancy rounded corners. Hot glue worked well to hold everything together and was easy to disassemble, but looked ugly. Acrylic welding glue looked cleaner but was harder to remove.
- Test
- Play

Step 3: Code

/* Try 1 of camera mount
   uses 2 servos, 2 buttons that have 2 functions for increasing/decreasing time,
   2 buttons with single functions for adjusting the vertical angles,
   LCD to display time and angle */

// include the library code (the bracketed library names were stripped from the
// original listing; these are the libraries the sketch actually uses):
#include <Wire.h>
#include <LiquidCrystal.h>
#include "RTClib.h"
#include <Servo.h>

Servo myservoV; // create servo object to control a servo
Servo myservoH; // create servo object to control a servo

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(7, 2, 0, 4, 1, 6);

#define ledRed 3   // input pin for led backlight
#define ledGreen 5 // input pin for led backlight

// Define the button pins
#define UpBtnbuttonPin A1 // analog input pin to use as a digital input
#define DnBtnbuttonPin A2 // analog input pin to use as a digital input
#define VerbuttonPin A3   // analog input pin to use as a digital input
//#define DnVerbuttonPin A4 // analog input pin to use as a digital input
#define StartbuttonPin A4 // analog input pin to use as a digital input

#define trigger 12 // trigger of camera, seems to auto focus
#define focus 9    // camera focus, keeps it awake between pictures and focuses

// Time counter variables
unsigned long buttonPushCounter = 0; // counter for the number of button presses
int buttonPushDisplayHr = 0;
int buttonPushDisplayMin = 0;
int buttonPushCounterMin = 0;

// UpBtn variables
int UpBtnbuttonState = 0;     // current state of the button
int UpBtnlastButtonState = 0; // previous state of the button

// DnBtn variables
int DnBtnbuttonState = 0;     // current state of the button
int DnBtnlastButtonState = 0; // previous state of the button

// Start button variables
int StartbuttonPushCounter = 0; // counter for the number of button presses
int StartbuttonState = 0;       // current state of the button
int StartlastButtonState = 0;   // previous state of the button
unsigned long Starttime = 0;
//int buttonPushCounterStop = 0;
int startVal = 0;

// UpVer variables
int VerbuttonState = 0;     // current state of the button
int VerlastButtonState = 0; // previous state of the button
int VerbuttonPushCounter = 0;
int DispAngV = 0;
float DispAngVa = 0;
float AngFloat = 0;
//float valV = 0;

// servo variables
int valH = 0;          // servo position 1 to 180
float valV = 0;        // servo position 1 to 180
unsigned long del = 0; // pause between positions
int angV = 0;          // angle for vertical servo

//========================================================
void setup() {
  // set up the LCD's number of columns and rows:
  lcd.begin(16, 2);
  //Serial.begin(9600);
  lcd.setCursor(0, 0);
  lcd.print("TIME");
  lcd.setCursor(0, 1);
  lcd.print("00:00 00");

  // Set button input pins (writing HIGH to an INPUT pin enables the internal pull-up)
  pinMode(UpBtnbuttonPin, INPUT);
  pinMode(DnBtnbuttonPin, INPUT);
  pinMode(StartbuttonPin, INPUT);
  pinMode(ledRed, INPUT);
  pinMode(ledGreen, INPUT);
  digitalWrite(UpBtnbuttonPin, HIGH);
  digitalWrite(DnBtnbuttonPin, HIGH);
  digitalWrite(StartbuttonPin, HIGH);

  // Start servos
  myservoV.attach(11); // attaches the vertical servo to pin 11
  myservoH.attach(10); // attaches the horizontal servo to pin 10

  // zero servos to start
  myservoV.write(95);
  myservoH.write(90);
  StartbuttonPushCounter = 0;

  // camera-attached stuff
  pinMode(trigger, OUTPUT);   // set the trigger pin to drive the relay
  digitalWrite(trigger, LOW); // set the trigger relay to off
  pinMode(focus, OUTPUT);     // set the focus pin to drive the relay
  digitalWrite(focus, LOW);   // set the focus relay to off
}

//========================================================================== LOOP
void loop() {
  digitalWrite(ledGreen, 255);

  // check start button ------------------------ START BUTTON
  // read the pushbutton input pin:
  StartbuttonState = digitalRead(StartbuttonPin);
  // compare the buttonState to its previous state
  if (StartbuttonState != StartlastButtonState) {
    // if the state has changed, increment the counter
    if (StartbuttonState == HIGH) {
      // if the current state is HIGH then the button
      // went from off to on:
      if (StartbuttonPushCounter < 2) {
        StartbuttonPushCounter++;
        //Starttime = millis(); // too complicated but works better
      }
    }
  }

  // read the pushbutton input pin:
  UpBtnbuttonState = digitalRead(UpBtnbuttonPin);
  // compare the buttonState to its previous state ---- UP BUTTON
  if (UpBtnbuttonState != UpBtnlastButtonState) {
    // if the state has changed, increment the counter
    if (UpBtnbuttonState == HIGH) {
      //if (buttonPushCounter <= 40)
      //if (startVal > 2)
      if (StartbuttonPushCounter != 2) {
        buttonPushCounter = buttonPushCounter + 2;
      }
    }
  }

  // check down button ------------------------ DOWN BUTTON
  // read the pushbutton input pin:
  DnBtnbuttonState = digitalRead(DnBtnbuttonPin);
  // compare the buttonState to its previous state
  if (DnBtnbuttonState != DnBtnlastButtonState) {
    // if the state has changed, increment the counter
    if (DnBtnbuttonState == HIGH) {
      // if the current state is HIGH then the button
      // went from off to on:
      if (buttonPushCounter >= 1) {
        //if (startVal = 0)
        if (StartbuttonPushCounter != 2) {
          buttonPushCounter = buttonPushCounter - 2;
        }
      }
    }
  }

  // BUTTONS FOR VERTICAL ANGLE ------------------------------------ VER ANG BUTTON
  // read the pushbutton input pin:
  VerbuttonState = digitalRead(VerbuttonPin);
  // compare the buttonState to its previous state ---- SERVO UP
  if (VerbuttonState != VerlastButtonState) {
    // if the state has changed, increment the counter
    if (VerbuttonState == HIGH) {
      if (VerbuttonPushCounter != 4) {
        VerbuttonPushCounter = VerbuttonPushCounter + 1;
        DispAngV = (VerbuttonPushCounter - 1) * 30;
        lcd.setCursor(14, 1);
        lcd.print(DispAngV);
      } else {
        VerbuttonPushCounter = 1;
        DispAngV = (VerbuttonPushCounter - 1) * 30;
        lcd.setCursor(14, 1);
        lcd.print(DispAngV);
      }
    }
  }

  // fix start button starting at value of 1 ---------- start btn actions
  if (StartbuttonPushCounter != 2) {
    (StartbuttonPushCounter = StartbuttonPushCounter);
  } else {
    //(StartbuttonPushCounter = 5)
    // calculate delays for horizontal servo loop
    //del = 200;
    del = buttonPushCounter * 1500;
    // calculate angle per pic for vertical servo
    //angV = 1; // start at 60 degrees go to 0
    // start servo loop
    //servo();
    //startVal++;
    //StartbuttonPushCounter = 2;
  }

  //---------------------------------------------------------
  // save button states
  UpBtnlastButtonState = UpBtnbuttonState;
  DnBtnlastButtonState = DnBtnbuttonState;
  StartlastButtonState = StartbuttonState;
  VerlastButtonState = VerbuttonState;

  // display time --------------------------------------------- TIME DISPLAY
  buttonPushDisplayHr = (buttonPushCounter) / 4;
  buttonPushCounterMin = buttonPushCounter - (buttonPushDisplayHr * 4);
  buttonPushDisplayMin = buttonPushCounterMin * 15;

  // calculate angle for vertical -------------------------------
  DispAngVa = VerbuttonPushCounter - 1;
  AngFloat = DispAngVa / 2;

  // trouble shoot ======================================================= trouble shoot
  //Serial.println(Starttime);
  //lcd.setCursor(13, 1);
  //lcd.print(StartbuttonPushCounter);
  //lcd.setCursor(9, 1);
  //lcd.print(Starttime);

  // go to servo after start btn -------------------------- go to servo
  if (StartbuttonPushCounter != 1) {
    lcd.setCursor(0, 0);
    lcd.print(" RUNNING ");
    servo();
  } else {
    // Show how many mins
    lcd.setCursor(3, 1);
    lcd.print(buttonPushDisplayMin);
    // Show how many hours
    if (buttonPushDisplayHr < 10) {
      lcd.setCursor(1, 1);
      lcd.print(buttonPushDisplayHr);
      lcd.setCursor(0, 1);
      lcd.print("0");
    } else {
      lcd.setCursor(0, 1);
      lcd.print(buttonPushCounter);
    }
    // Show separator
    lcd.setCursor(2, 1);
    lcd.print(":");
    // show counts
    lcd.setCursor(11, 0);
    lcd.print("ANGLE");
  }
}
// end of loop ============================================= LOOP ENDS

// servo loop ---------------------------------------------- SERVO LOOP
void servo() { // servo movements
  // move servo to next position
  if (valH >= 174) {
    lcd.setCursor(0, 1);
    lcd.print(" All DONE ");
  } else {
    if (valH < 3) {
      valV = 90 - DispAngV;
    }
    valH = valH + 3;
    valV = valV + AngFloat;
    //val = map(val, 50, 300, 0, 179); // scale it to use it with the servo (value between 0 and 180)
    myservoH.write(valH); // sets the servo position according to the scaled value
    myservoV.write(valV); // sets the servo position according to the scaled value

    // camera trigger --------------------------------- CAMERA STUFF
    digitalWrite(focus, HIGH);
    delay(1000);
    digitalWrite(focus, LOW);
    digitalWrite(trigger, HIGH);
    delay(1000);
    digitalWrite(trigger, LOW);

    // delay until it is time for the next position; using clock times would be better
    // (the Arduino was messing up the large numbers even with an unsigned long variable)
    delay(del);
    lcd.setCursor(6, 1);
    lcd.print(valV);
  }
}

you should post the pictures you took with it!
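As an aside, the frame-budget problem from Step 1 is easy to quantify: the number of stills a clip needs is just playback-fps times clip length, and the shoot time is that count times the interval between shots. A quick back-of-the-envelope helper (Python, purely illustrative; the ~2 s interval approximates the focus + trigger delays in the sketch above, ignoring the programmable pause):

```python
def timelapse_budget(clip_seconds, playback_fps, shot_interval_s):
    """Return (stills needed, total shooting time in seconds) for a clip."""
    frames = clip_seconds * playback_fps
    return frames, frames * shot_interval_s

# A 30-second clip at the jumpy 11 fps vs. a smooth 30 fps, one frame every 2 s:
for fps in (11, 30):
    frames, shoot = timelapse_budget(30, fps, 2)
    print(f"{fps} fps: {frames} stills, {shoot / 60:.0f} minutes of shooting")
```

Nearly tripling the frame rate nearly triples both the number of stills and the shooting time, which is exactly the SD card and compile-time problem described above.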
http://www.instructables.com/id/Time-Lapse-Camera-Robot/
I was having trouble with py2exe not getting the data files that it needed. I noticed someone else got it to work by copying all the files, but losing the directory structure. I tried that, but I still had missing data files at runtime. This function will return everything needed for py2exe to work correctly. Instead of returning one tuple, it returns a list of tuples, so the usage changes a little bit, but at least it works.

def get_py2exe_datafiles():
    outdirname = 'matplotlibdata'
    mplfiles = []
    for root, dirs, files in os.walk(get_data_path()):
        py2exe_files = []
        # Append root to each file so py2exe can find them
        for file in files:
            py2exe_files.append(os.sep.join([root, file]))
        if len(py2exe_files) > 0:
            py2exe_root = root[len(get_data_path() + os.sep):]
            if len(py2exe_root) > 0:
                mplfiles.append((os.sep.join([outdirname, py2exe_root]), py2exe_files))
            else:
                # Don't do a join for the root directory
                mplfiles.append((outdirname, py2exe_files))
    return mplfiles

Sorry for not submitting as a patch: I haven't quite figured out how to do that yet.
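A generic version of this walk-and-group logic is easy to sanity-check outside of matplotlib. The sketch below is illustrative: collect_data_files stands in for the function above, with the source directory (instead of get_data_path()) and output name passed as parameters, and the fake data tree is made up:

```python
import os
import tempfile

def collect_data_files(src_dir, outdirname):
    """Group every file under src_dir into (dest_subdir, [source_paths]) tuples,
    preserving the directory structure -- the shape py2exe's data_files expects."""
    result = []
    for root, dirs, files in os.walk(src_dir):
        paths = [os.path.join(root, f) for f in files]
        if not paths:
            continue
        rel = os.path.relpath(root, src_dir)
        # Files directly in src_dir map to outdirname itself, not a subdirectory.
        dest = outdirname if rel == os.curdir else os.path.join(outdirname, rel)
        result.append((dest, paths))
    return result

# Build a tiny fake data tree and check the grouping.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "fonts"))
for name in ("matplotlibrc", os.path.join("fonts", "a.ttf")):
    with open(os.path.join(base, name), "w") as fh:
        fh.write("x")

datafiles = dict(collect_data_files(base, "matplotlibdata"))
print(sorted(datafiles))
```

The returned list can then be passed straight to setup(data_files=...) in the py2exe setup script.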
https://discourse.matplotlib.org/t/get-py2exe-datafiles-fix/7474
The Arduino MKR GSM 1400 is a development board that combines the functionality of the Arduino Zero with global GSM connectivity using the u-blox SARA-U201 modem. Traditionally, communicating with a modem is done using AT commands via a separate module. This board ships with a library that makes AT commands more accessible via function calls.

Hardware Requirements
- Twilio Programmable Wireless SIM
- Arduino MKR GSM 1400
- GSM Antenna
- Micro USB cable

Software Requirements

Setting up the Twilio SIM

Remove the Twilio SIM from its packaging. Next, register and activate your SIM in the Twilio Console.

Software side of things

Before programming the hardware we need to install a few pieces of software to make it work. To be able to send M2M commands using the on-board modem we will need the MKRGSM library. Open the Arduino IDE and go to Sketch > Manage Libraries. This is where Arduino and 3rd party libraries can be installed into the Arduino IDE. When the Library Manager window pops up, search for the MKRGSM library and press install. The MKRGSM library wraps AT commands into functions, making it easier to communicate with the modem. It's phonetabulous, trust me.

After the library is installed we need to install the Arduino MKR GSM 1400 board cores. The Arduino MKR GSM 1400 uses a different chipset than traditional Arduinos that use AVR ATmega chipsets. This board uses the SAMD21 Cortex-M0+ and it requires a different set of cores. The cores do not come with the Arduino IDE and they are needed for the computer to recognize the board when connected. Locate the Board Manager under Tools > Board > Board Manager. When the Board Manager window appears, search for the Arduino SAMD Boards and install the cores. Restart the Arduino IDE to complete the installation. Great! Time to move on to the hardware setup.

Hardware side of things

To send M2M commands over the network we need to install the Twilio SIM.
Break out the Micro SIM from the Twilio SIM card. Insert the Twilio SIM into the SIM slot underneath the board. Next, attach the GSM antenna to the board. Connect the board to the computer using a Micro-USB cable and you are geared up to connect to the network.

Creating the Arduino sketch

In the Arduino IDE create a new Arduino sketch (File > New). A template is provided that looks something like this.

void setup() {
}

void loop() {
}

Instantiate the base class GSM for all of the GSM functions. To send and receive SMS messages, the GSM_SMS class needs to be instantiated as well. This happens before the setup() function.

#include <MKRGSM.h>

GSM gsmAccess;
GSM_SMS sms;

In the setup() function create a serial connection with a baud rate of 115200. The baud rate determines the speed of data over a specific communication channel.

Serial.begin(115200);

Use the gsmAccess.begin() function to connect to the cellular network that is identified on the Twilio SIM.

gsmAccess.begin();
Serial.println("GSM initialized");

In the loop() function define the phone number where the M2M command will be sent using the beginSMS function. The number we will use is "2936". This is a special Twilio shortcode that is reserved for exchanging M2M commands between Twilio SIMs. It uses the SMS transport to send M2M commands over a cellular network. When a Twilio SIM creates an M2M command, a webhook is generated; we will discuss this shortly.

sms.beginSMS("2936");

Pass a char array to the function sms.print() to create a new message to be queued.

sms.print("hello world");
Serial.println("Sending M2M Command");

After a message is created and queued, use the endSMS() function to tell the modem the process is complete. Once this happens the "hello world" message will then be sent.

sms.endSMS();
Serial.println("M2M Command Sent!");

The last bit of code is a while loop that will capture the program and place it in an infinite loop. The purpose of this is to ensure the M2M command is only sent once.
while (1) {
  delay(4000);
}

Complete Arduino sketch:

#include <MKRGSM.h>

GSM gsmAccess;
GSM_SMS sms;

void setup() {
  Serial.begin(115200);
  gsmAccess.begin();
  Serial.println("GSM initialized");
}

void loop() {
  sms.beginSMS("2936");
  sms.print("hello world");
  Serial.println("Sending M2M Command");
  sms.endSMS();
  Serial.println("M2M Command Sent!");
  while (1) {
    delay(4000);
  }
}

Double check that the board has been selected under Tools > Board. If it is not selected, the compiler will throw an error when you try to upload the code. Save the new sketch as "SayHelloArduinoGSM.ino". Before uploading the new sketch to the board, let's create a server to receive the M2M command using Go.

Spinning up an audio response server with Go and Beep

Create a new Go program named "SayHelloArduinoGSM.go" using the template below.

package main

import (
)

func main() {
}

Next add the following libraries to the import section. This is where you link external libraries like Beep to a Go program. If you haven't installed Go yet, do so now using Homebrew.

package main

import (
    "fmt"
    "github.com/faiface/beep"
    "github.com/faiface/beep/mp3"
    "github.com/faiface/beep/speaker"
    "log"
    "net/http"
    "os"
    "time"
)

Inside the main function create a new server route using HandleFunc() from the net/http library. This will generate a new server-side route ("/helloworld") for receiving M2M commands from the "2936" shortcode. When an M2M command is received it will then be funneled to the helloworld function. Open up a port and listen for incoming connections using the ListenAndServe() function on port 9999.

func main() {
    http.HandleFunc("/helloworld", helloworld)
    http.ListenAndServe(":9999", nil)
}

Fantastic. Now we have to create the helloworld function. The HTTP request received by this function will be represented by the http.Request type.

func helloworld(w http.ResponseWriter, r *http.Request) {
}

When the request is received, the M2M command needs to be parsed.
Use the ParseForm() function to parse the request body as a form.

if err := r.ParseForm(); err != nil {
    log.Printf("Error parsing form: %s", err)
    return
}

The data from the body can be extracted using the PostFormValue() function by passing it a key. The key will give you the value associated with the named field in the form-encoded request body. In this case we are looking for the value of the "Command" key.

pwCommand := r.PostFormValue("Command")
fmt.Println("pwCommand : ", pwCommand)

And to add a little spice, let's add some Beep code to play an audio file through your system's audio when the command successfully reaches the server.

Complete Go program (note that Go refuses to compile unused imports, so the Beep packages are imported with the blank identifier until the playback code is added):

package main

import (
    "fmt"
    "log"
    "net/http"

    _ "github.com/faiface/beep"
    _ "github.com/faiface/beep/mp3"
    _ "github.com/faiface/beep/speaker"
)

func main() {
    http.HandleFunc("/helloworld", helloworld)
    http.ListenAndServe(":9999", nil)
}

func helloworld(w http.ResponseWriter, r *http.Request) {
    if err := r.ParseForm(); err != nil {
        log.Printf("Error parsing form: %s", err)
        return
    }
    pwCommand := r.PostFormValue("Command")
    fmt.Println("incoming Command from Arduino MKR GSM 1400 : ", pwCommand)
    fmt.Println("Playing audio file!")
}

Start the server.

go run SayHelloArduinoGSM.go

Constructing the bridge with ngrok

Currently the hardware and software pieces exist individually; ngrok will be used to bridge the gap. When the SIM sends an M2M command to Twilio, a webhook is sent to a user-defined URL called the Commands Callback Url. We will use ngrok to receive this webhook and then route it to the server running on our own machine. To make the connection, start a new ngrok instance on the same port where the server is running.

ngrok http 9999

Copy the Forwarding url that was created with ngrok ()

Navigate to Programmable Wireless in the Twilio console. Locate the SIM that you previously registered under SIMs. Under the Configure tab you will find the Commands Callback Url.
Paste the ngrok Forwarding address into the text box and add the previously created server route to the end of the URL. Press Save.

Send messages through the sky

Go back to the Arduino IDE and press upload. Once uploaded, double check to see if the command was sent properly using the Serial Monitor.

- Navigate to Tools > Serial Monitor

Once the M2M command is sent from the "2936" shortcode, it is routed to ngrok and on to the Go application using the Commands Callback Url. And finally the M2M command reaches the server and plays the "helloworld.mp3". Celltactular!

Continue to connect things

You just sent your first M2M command using magic. This M2M command model is a foundational piece of how to use Twilio to send M2M commands from a remote hardware device. With the integrated modem and software for sending AT commands as functions, the Arduino MKR GSM 1400 is an ideal piece for any IoT prototyping kit. If you are interested in learning about other pieces of hardware that can send M2M commands, check out the Wireless Machine to Machine Quickstarts. This project, along with other projects, can be found on the TwilioIoT GitHub. Feel free to reach out with any questions or curiosity. If you have any cool IoT projects you have built or are planning on building, drop me a line.

- Github: cskonopka
- Twitter: @cskonopka

Authors
- Christopher Konopka

Related posts:
- Arduino and Sim800 Cellular Machine to Machine Commands Quickstart
- Hacking Halloween: Using Arduino and Twilio To Build An Interactive Haunted House
- Getting Started with Twilio Programmable Wireless on the LinkIt ONE
- DIY Home Automation Using Twilio, PowerSwitch, Arduino, and Pusher
https://www.hackster.io/TwilioIoT/remotely-play-mp3-with-twilio-go-and-arduino-mkr-gsm-1400-a4f7c4
Original text by Sayed Abdelhafiz

TL;DR

Recently I discovered an ACE (arbitrary code execution) on Facebook for Android that can be triggered by downloading a file from a group's Files Tab, without opening the file.

Background

I was digging into the method that Facebook uses to download files from groups, and I found that Facebook uses two different mechanisms to download files. If the user downloads the file from the post itself, it is downloaded via the built-in Android service called DownloadManager; as far as I know, that is a safe way to download files. If the user decides to download the file from the Files Tab, it is downloaded through a different method. In a nutshell, the application fetches the file and then saves it to the Download directory without any filtering.

Note: the selected code is the fix that Facebook pushed. The vulnerable code was without this code.

Path traversal

The vulnerability was in the second method. Security measures were implemented on the server side when uploading the files, but they were easy to bypass. Simply put, the application fetches the downloaded file and saves it, for example, to /sdcard/Downloads/FILE_NAME without filtering the FILE_NAME to protect against path traversal attacks.

The first idea that came to my mind was to use path traversal to overwrite native libraries, which would lead to executing arbitrary code. I set up my Burp Suite proxy, intercepted the upload file request, modified the filename to ../../../sdcard/PoC, and forwarded the request. Unfortunately that wasn't enough, because of the security measures on the server side; my path traversal payload was removed. I decided to play with the payload, but unfortunately no payload worked.

Bypass security measures. (Bypass?)

After many payloads, I wasn't able to bypass that filter. I came back to browse the application again to see if I might find something useful, and it came! For the first time, I noticed it! I tried to download the file from the post, but the DownloadManager service is safe, as I said, so the attack didn't work. I navigated to the Files Tab and downloaded the file.
And here is our attack. My file was written to /sdcard/PoC! Since I was able to perform path traversal, I could now overwrite the native libraries and perform the ACE attack.

Exploit

To exploit this, I started a new Android NDK project to create a native library and put my evil code in the JNI_OnLoad function, to make sure the evil code executes when the library is loaded.

#include <jni.h>
#include <string>
#include <stdlib.h>

JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    system("id > /data/data/com.facebook.katana/PoC");
    return JNI_VERSION_1_6;
}

I built the project to get my malicious library, then uploaded it via the mobile upload endpoint and renamed it to /../../../../../data/data/com.facebook.katana/lib-xzs/libbreakpad.so

Our exploit is now ready!

PoC Video:

Timeline

- April 29, 2020 at 5:57 AM: Submitted the report to Facebook.
- April 29, 2020 at 11:20 AM: Facebook was able to reproduce it.
- April 29, 2020 at 12:17 PM: Triaged.
- June 16, 2020 at 12:54 PM: Vulnerability has been fixed.
- July 15, 2020 at 5:11 PM: Facebook rewarded me $10,000!

Bounty

I noticed people commented on the amount of the bounty when I tweeted about the bug. Is it small? I was shocked, objected to it, and tried to discuss it with Facebook, but no way: they say that amount is fair and they won't be revisiting this decision. As Neal told me:

"Spencer provided you with insight into how we determined the bounty for this issue. We believe the amount awarded is reasonable and will not be revisiting this decision."

It's up to you to decide before you report your vulnerabilities!

Vendor or?

Have a nice day!
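As a closing aside (not part of the original writeup): this class of bug is usually prevented by resolving the destination path and checking that it stays inside the intended directory before writing. A minimal illustration in Python; the same canonical-path check applies in any language, including the Android code in question:

```python
import os

def safe_destination(base_dir, filename):
    """Resolve filename against base_dir and reject anything that escapes
    base_dir through '..' segments (i.e. a path traversal attempt)."""
    dest = os.path.realpath(os.path.join(base_dir, filename))
    base = os.path.realpath(base_dir)
    # The resolved path must still live inside the base directory.
    if os.path.commonpath([dest, base]) != base:
        raise ValueError("path traversal attempt: %r" % filename)
    return dest

base = "/sdcard/Downloads"
print(safe_destination(base, "report.pdf"))       # a normal filename is allowed
try:
    safe_destination(base, "../../../sdcard/PoC")  # the attack payload is rejected
except ValueError as err:
    print(err)
```

Comparing canonical paths, rather than scanning the raw string for "..", also catches encoded and nested variants of the same trick.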
https://movaxbx.ru/2020/10/04/arbitrary-code-execution-on-facebook-for-android-through-download-feature/
Zip It! Zip It Good with Rails and Rubyzip

By Ilya Bodrov-Krukowski

In our day-to-day activities we often interact with archives. When you want to send your friend a bunch of documents, you'd probably archive them first. When you download a book from the web, it will probably be archived along with accompanying materials. So, how can we interact with archives in Ruby?

Today we will discuss a popular gem called rubyzip that is used to manage zip archives. With its help, you can easily read and create archives or generate them on the fly. In this article I will show you how to create database records from a zip file sent by the user and how to send an archive containing all records from a table. Source code is available at GitHub.

Before getting started, I want to remind you that various compressed formats have different compression ratios. As such, even if you archive a file, its size might remain more or less the same:

- Text files compress very nicely. Depending on their contents, the ratio is about 3:1.
- Some images can benefit from compression, but when using a format like .jpg that already has native compression, it won't change much.
- Binary files may be compressed to about half their original size.
- Audio and video are generally poor candidates for compression.

Getting Started

Create a new Rails app:

$ rails new Zipper -T

I am using Rails 5 beta 3 and Ruby 2.2.3 for this demo, but rubyzip works with Ruby 1.9.2 or higher. In our scenario today, the demo app keeps track of animals. Each animal has the following attributes:

- name (string)
- age (integer) - of course, you can use decimal instead
- species (string)

We want to list all the animals, add new ones, and download data about them in some format.
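Before we scaffold the app, a quick aside: the compression rules of thumb above are easy to verify with Ruby's built-in Zlib, which implements the same DEFLATE algorithm zip archives use (this snippet is purely illustrative and not part of the app):

```ruby
require 'zlib'

text   = 'the quick brown fox jumps over the lazy dog ' * 200
random = Random.new(42).bytes(text.bytesize) # stands in for already-compressed data

compressed_text   = Zlib::Deflate.deflate(text)
compressed_random = Zlib::Deflate.deflate(random)

puts format('text:   %d -> %d bytes', text.bytesize, compressed_text.bytesize)
puts format('random: %d -> %d bytes', random.bytesize, compressed_random.bytesize)
```

The repetitive text shrinks to a small fraction of its size, while the incompressible buffer barely budges, which is why zipping .jpg or video files gains you little. Now, back to the app.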
Create and apply the corresponding migration:

$ rails g model Animal name:string age:integer species:string
$ rake db:migrate

Now let's prepare the default page for our app:

animals_controller.rb

class AnimalsController < ApplicationController
  def index
    @animals = Animal.order('created_at DESC')
  end
end

views/animals/index.html.erb

<h1>My animals</h1>
<ul>
  <% @animals.each do |animal| %>
    <li>
      <strong>Name:</strong> <%= animal.name %><br>
      <strong>Age:</strong> <%= animal.age %><br>
      <strong>Species:</strong> <%= animal.species %>
    </li>
  <% end %>
</ul>

config/routes.rb

[...]
resources :animals, only: [:index, :new, :create]
root to: 'animals#index'
[...]

Nice! Proceed to the next section and let's take care of creation first.

Creating Animals from the Archive

Introduce the new action:

animals_controller.rb

[...]
def new
end
[...]

views/animals/index.html.erb

<h1>My animals</h1>
<%= link_to 'Add!', new_animal_path %>
[...]

Of course, we could craft a basic Rails form to add animals one by one, but instead let's allow users to upload archives with JSON files. Each file will then contain attributes for a specific animal. The file structure looks like this:

- animals.zip
  - animal-1.json
  - animal-2.json

Each JSON file will have the following structure (note the double quotes: JSON.load, used below, expects valid JSON):

{"name": "My name", "age": 5, "species": "Dog"}

Of course, you may use another format, like XML, for example. Our job is to receive an archive, open it, read each file, and create records based on the input. Start with the form:

views/animals/new.html.erb

<h1>Add animals</h1>
<p>
  Upload a zip archive with JSON files in the following format:<br>
  <code>{"name": "name", "age": 1, "species": "species"}</code>
</p>
<%= form_tag animals_path, method: :post, multipart: true do %>
  <%= label_tag 'archive', 'Select archive' %>
  <%= file_field_tag 'archive' %>
  <%= submit_tag 'Add!' %>
<% end %>

This is a basic form allowing the user to select a file (don't forget the multipart: true option).
Now the controller's action:

animals_controller.rb

def create
  if params[:archive].present?
    # params[:archive].tempfile ...
  end
  redirect_to root_path
end

The only parameter that we are interested in is :archive. As long as it contains a file, it responds to the tempfile method, which returns the path to the uploaded file. To read an archive we will use the Zip::File.open(file) method, which accepts a block. Inside this block you can fetch each archived file and either extract it somewhere by using extract, or read it into memory with get_input_stream.read. We don't really need to extract our archive anywhere, so let's instead store the contents in memory.

animals_controller.rb

require 'zip'

[...]
def create
  if params[:archive].present?
    Zip::File.open(params[:archive].tempfile) do |zip_file|
      zip_file.each do |entry|
        Animal.create!(JSON.load(entry.get_input_stream.read))
      end
    end
  end
  redirect_to root_path
end
[...]

Pretty simple, isn't it? entry.get_input_stream.read reads the file's contents and JSON.load parses it. We are only interested in .json files though, so let's limit the scope using the glob method:

animals_controller.rb

[...]
def create
  if params[:archive].present?
    Zip::File.open(params[:archive].tempfile) do |zip_file|
      zip_file.glob('*.json').each do |entry|
        Animal.create!(JSON.load(entry.get_input_stream.read))
      end
    end
  end
  redirect_to root_path
end
[...]

You can also extract part of the code into the model and introduce basic error handling:

animals_controller.rb

[...]
def create
  if params[:archive].present?
    Zip::File.open(params[:archive].tempfile) do |zip_file|
      zip_file.glob('*.json').each { |entry| Animal.from_json(entry) }
    end
  end
  redirect_to root_path
end
[...]

animal.rb

[...]
class << self
  def from_json(entry)
    begin
      Animal.create!(JSON.load(entry.get_input_stream.read))
    rescue => e
      warn e.message
    end
  end
end
[...]
I also want to whitelist the attributes that the user can assign, preventing him from overriding the id or created_at fields:

animal.rb

[...]
WHITELIST = ['age', 'name', 'species']

class << self
  def from_json(entry)
    begin
      Animal.create!(JSON.load(entry.get_input_stream.read).select { |k, v| WHITELIST.include?(k) })
    rescue => e
      warn e.message
    end
  end
end
[...]

You may use a blacklist approach instead by replacing select with except, but whitelisting is more secure. Great! Now go ahead, create a zip archive and try to upload it!

Generating and Downloading an Archive

Let's perform the opposite operation, allowing the user to download an archive containing JSON files representing animals. Add a new link to the root page:

views/animals/index.html.erb

[...]
<%= link_to 'Download archive', animals_path(format: :zip) %>

We'll use the same index action and equip it with the respond_to method:

animals_controller.rb

[...]
def index
  @animals = Animal.order('created_at DESC')
  respond_to do |format|
    format.html
    format.zip do
    end
  end
end
[...]

To send an archive to the user, you may either create it somewhere on disk or generate it on the fly. Creating the archive on disk involves the following steps:

- Create an array of files that have to be placed inside the archive:

files << File.open("path/name.ext", 'wb') { |file| file << 'content' }

- Create an archive:

Zip::File.open('path/archive.zip', Zip::File::CREATE) do |z|

- Add your files to the archive:

Zip::File.open('path/archive.zip', Zip::File::CREATE) do |z|
  files.each do |f|
    z.add('file_name', f.path)
  end
end

The add method accepts two arguments: the file name as it should appear in the archive, and the original file's path and name.

- Send the archive:

send_file 'path/archive.zip', type: 'application/zip', disposition: 'attachment', filename: "my_archive.zip"

This, however, means that all these files and the archive itself will persist on disk.
Of course, you may remove them manually and even try to create a temporary zip file as described here, but that involves too much unnecessary complexity. What I'd like to do instead is to generate our archive on the fly and use the send_data method to deliver the response as an attachment. This is a bit more tricky, but there is nothing we can't manage.

In order to accomplish this task, we'll require a method called Zip::OutputStream.write_buffer that accepts a block:

animals_controller.rb

[...]

def index
  @animals = Animal.order('created_at DESC')
  respond_to do |format|
    format.html
    format.zip do
      compressed_filestream = Zip::OutputStream.write_buffer do |zos|
      end
    end
  end
end

[...]

To add a new file to the archive, use zos.put_next_entry while providing a file name. You can even specify a directory to nest your file by saying zos.put_next_entry('nested_dir/my_file.txt'). To write something to the file, use zos.print:

animals_controller.rb

compressed_filestream = Zip::OutputStream.write_buffer do |zos|
  @animals.each do |animal|
    zos.put_next_entry "#{animal.name}-#{animal.id}.json"
    zos.print animal.to_json(only: [:name, :age, :species])
  end
end

We don't want fields like id or created_at to be present in the file, so by saying :only we limit them to name, age and species.

Now rewind the stream:

compressed_filestream.rewind

And send it:

send_data compressed_filestream.read, filename: "animals.zip"

Here is the resulting code:

animals_controller.rb

[...]

def index
  @animals = Animal.order('created_at DESC')
  respond_to do |format|
    format.html
    format.zip do
      compressed_filestream = Zip::OutputStream.write_buffer do |zos|
        @animals.each do |animal|
          zos.put_next_entry "#{animal.name}-#{animal.id}.json"
          zos.print animal.to_json(only: [:name, :age, :species])
        end
      end
      compressed_filestream.rewind
      send_data compressed_filestream.read, filename: "animals.zip"
    end
  end
end

[...]

Go ahead and try the "Download archive" link! You can even protect the archive with a password.
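The on-the-fly generation pattern (write_buffer, put_next_entry, rewind, then send the bytes) has a close standard-library analogue in other languages. Here is a hedged Python sketch of the same idea; the record shape and function name are assumptions for illustration:

```python
import io
import json
import zipfile

def build_archive(animals):
    """Build a zip archive in memory, one JSON file per record, mirroring
    the write_buffer / put_next_entry / rewind flow above."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for a in animals:
            # One entry per record, named like "<name>-<id>.json"
            zf.writestr(
                f"{a['name']}-{a['id']}.json",
                json.dumps({k: a[k] for k in ("name", "age", "species")}),
            )
    buf.seek(0)  # rewind, as with compressed_filestream.rewind
    return buf.read()
```

The returned bytes can then be handed to whatever response mechanism the web framework provides, just as send_data does in Rails.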
This feature of rubyzip is experimental and may change in the future, but it seems to be working currently:

animals_controller.rb

[...]

compressed_filestream = Zip::OutputStream.write_buffer(::StringIO.new(''), Zip::TraditionalEncrypter.new('password')) do |zos|

[...]

Customizing Rubyzip

Rubyzip provides a number of configuration options that can be either provided in a block:

Zip.setup do |c|
end

or one by one:

Zip.option = value

Here are the available options:

- on_exists_proc – Should existing files be overwritten during extraction? Default is false.
- continue_on_exists_proc – Should existing files be overwritten while creating an archive? Default is false.
- unicode_names – Set this if you want to store non-unicode file names on Windows Vista and earlier. Default is false.
- warn_invalid_date – Should a warning be displayed if an archive has an incorrect date format? Default is true.
- default_compression – Default compression level to use. Initially set to Zlib::DEFAULT_COMPRESSION; other possible values are Zlib::BEST_COMPRESSION and Zlib::NO_COMPRESSION.
- write_zip64_support – Should Zip64 support be enabled for writing? Default is false.

Conclusion

In this article we had a look at the rubyzip library. We wrote an app that reads users' archives, creates records based on them, and generates archives on the fly as a response. Hopefully, the provided code snippets will come in handy in one of your projects. As always, thanks for staying with me and see you soon!
https://www.sitepoint.com/accept-and-send-zip-archives-with-rails-and-rubyzip/
>>>>> On Sun, 3 Aug 2008 12:41:22 -0400, Adrian Robert <address@hidden> said: > This is not defined anywhere in emacs, but there was this section in > an earlier version of darwin.h: > #if 0 /* Don't define DARWIN on Mac OS X because CoreFoundation.h uses > it to distinguish Mac OS X from bare Darwin. */ > #ifndef DARWIN > #define DARWIN 1 > #endif > #endif > Does anyone know where this IS defined? Also, I've been unable to > find a version of CoreFoundation.h that makes the check referred to. It's used in CoreFoundation.h in Mac OS X 10.1 - 10.3. BTW, why the above comment was removed? The corresponding ChangeLog entry only says: 2008-07-10 Dan Nicolaescu <address@hidden> * unexec.c: * s/vms.h: * s/usg5-4-2.h: * s/sol2-5.h: * s/freebsd.h: * s/darwin.h: Remove dead code. The comments like this should be left for future reference so people may not make the same mistake again. YAMAMOTO Mitsuharu address@hidden
http://lists.gnu.org/archive/html/emacs-devel/2008-08/msg00170.html
Answered by: New in SQL-NS 2005 ...plz help me

Question

hi everyone... I am new to SQL-NS 2005 & I've been trying for the last couple of days to get my first running program... but still I am facing a problem.. I tried to run this sample which I got from one of the websites...

using System;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Nmo;
using ns = Microsoft.SqlServer.NotificationServices;

namespace My_Nc
{
    class Program
    {
        private static Instance nsi;
        private static Application a;
        private const string baseDirectoryPath = @"D:\Documents and Settings\falmalik\My Documents\Visual Studio 2005\Projects\My_Nc";
        private const string nsServer = //
        private const string serviceUserName = //
        private const string servicePassword = //
        {
            Server server = new Server("(local)");

            // create a new instance
            NotificationServices ns = server.NotificationServices;
            nsi = new Instance(ns, "StockWatch");
            CreateDeliveryChannel();

            a = new Application(nsi, "StockWatchApp");
            a.BaseDirectoryPath = baseDirectoryPath;
            CreateEventClass();
            CreateSubscriptionClass();
            CreateNotificationClass();
            CreateHostedEventProvider();
            CreateGenerator();
            CreateDistributor();
            CreateVacuumSchedule();

            a.QuantumDuration = new TimeSpan(0, 0, 15);
            a.PerformanceQueryInterval = new TimeSpan(0, 0, 5);
            a.SubscriptionQuantumLimit = 1;
            a.ChronicleQuantumLimit = 1;
            a.VacuumRetentionAge = new TimeSpan(0, 0, 1);

            nsi.Applications.Add(a);
            Console.WriteLine("Added application.");

            nsi.Create();
            nsi.RegisterLocal(serviceUserName, servicePassword);
            nsi.Enable();
            Console.WriteLine("Application enabled." + Environment.NewLine);

            CreateSubscriber();
            CreateSubscription();
            Console.WriteLine(Environment.NewLine + "Press any key to continue.");
            Console.ReadKey();
        }

the problem I get when I run it is with nsi.Create(); it gives me the following message: The Notification Services operation performed is invalid. I don't know if the problem is with the nsServer, serviceUserName, servicePassword or not?
because i tried to change them alot of times but still ...it even some times gives me the problem with nsi.RegisterLocal(serviceUserName, servicePassword); so what are the valid values of them? I didn't really understand :serviceUserName is the account the NS$StockWatch Windows Service will run under so could anyone please tell me what the problem is? or even could give me or direct me to a simple sample (even without the use of SMTP) ...I even tried the MSDN samples but still another question is that i read more than once about the eventdata file ...& that I should copy it to the event subfolder to watch it....so do I have to do this everytime I run the application or what? forgive me for my questions which may seem silly...but this is my first time thank you..Tuesday, July 24, 2007 8:17 AM Answers All replies First of all, the best way to learn SQL NS is by reading a good book and following examples in it. Of course, it might seem that it would take much more time than learning by an example on MSDN, but... it's not true. SQL NS, while being a relatively easy-to-learn technology, is not "a piece of cake". The best 2 books that I know (and I used one of them to learn NS) are: and There are also other books (specifically on NS for SQL Server 2005), but the differences between 2000 and 2005 are negligible. The fact is, there's a lot to know if you are the only developer for your NS project, and you can't learn it fast enough without a good book. ------------ You are using C# classes to create all of your NS objects. Although this approach is quite valid, the SQL NS is known for its declarative programming model. That is, it is much easier to create the entire NS application using XML files (application definition file, etc.). At the same time, you can use C# for creating your custom objects, such as a custom event provider, custom content formatter, etc., if you need customization. When you install NS on your machine, it comes with a help file. 
In the help, there is a tutorial. It gives you step-by-step instructions on how to build an NS application. This tutorial is also available online at (NS for SQL Server 2005) or (if you are using SQL Server 2000). ------------ Event data file: This is one way of submitting "raw" data to your SQL NS app. Yes, you need to copy the new file (with new raw data) into the specified directory every time you have collected new data and want it processed. There is another way of submitting raw data to your NS app: using a built-in SQL Event Provider. Basically, it means inserting data into a SQL Server table, and then SQL NS takes care of the rest: processing the new data, matching it against subscriptions, etc. Also, you can create your own custom event provider, but this task should come only after you have no problems with the "basic" stuff... ------------ So, briefly, I would recommend that you start with the tutorial, and get a good book.Tuesday, July 24, 2007 11:43 AM Sorry, I put it a wrong way: of course, the differences between NS 2.0 (for SQL Server 2000) and NS for SQL Server 2005 are NOT negligible. What I was trying to say is: if you know NS for SQL Server 2000, it will take you only a couple of hours to learn what's new in NS 2005. On the other hand, if you don't care about SQL Server 2000 and need NS 2005, get Shyam Pather's book (on NS 2005). ...and let's hope that Joe Webb will write another edition of his excellent book, this time for SQL Server 2005 (or, 2008).Tuesday, July 24, 2007 11:52 AM Hi fahad even i'm working on notification services and i encounter the same problem..have you completed the project..if so can you kindly help me out please..even i'm trying the same sample from en.csharp-online.net site.. thanxFriday, December 14, 2007 6:37 AM Hi, Did anyone managed to resolve this problem? I don´t know what else to do!Thursday, May 8, 2008 10:21 AM Hi, I'm new to this stuff too and up against the same problem with installing the service. 
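To make the "SQL event provider" idea above concrete: submitting raw events amounts to inserting rows into a table that the event provider watches. The sketch below uses Python's stdlib sqlite3 as a stand-in for SQL Server, so the table name and schema are purely illustrative of the submission pattern, not the actual NS objects:

```python
import sqlite3

# Illustrative events table; in SQL NS this would be the table/view
# watched by the built-in SQL event provider.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE StockEvents (symbol TEXT, price REAL)")

def submit_events(rows):
    """Insert raw event rows; the scheduler-side processing is then
    someone else's job, which is the whole point of the pattern."""
    with conn:  # commit on success
        conn.executemany("INSERT INTO StockEvents VALUES (?, ?)", rows)

submit_events([("XYZ", 12.5), ("ABC", 99.0)])
count = conn.execute("SELECT COUNT(*) FROM StockEvents").fetchone()[0]
print(count)  # -> 2
```

The application that produces events never talks to the matching engine directly; it only writes rows, which keeps the producer decoupled from notification generation.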
NS seems like a great technology from the outside but when you get into it it's a bit clunky, quite "version 1.0" although it's been around long enough to have better tools around it by now. I've ordered the book that everyone talks about so we'll see when that arrives but I must admit I'm wondering what the fuss is about with NS because at the end of the day whatever you want to do seems to need custom this and custom that, what you have out of the box is the ability to write a file when data changes in a table, thats not too difficult to do with a CLR trigger.Wednesday, June 25, 2008 1:37 PM
https://social.msdn.microsoft.com/Forums/en-US/6fe778b8-6bb4-492f-b086-627b919cf338/new-in-sqlns-2005-plz-help-me?forum=sqlnotificationservices
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is a short recipe, Recipe 7.3, "How to rename members on import in Scala."

Problem

You want to rename Scala members when you import them to help avoid namespace collisions or confusion.

Solution

Give the class you're importing a new name when you import it with this import syntax:

import java.util.{ArrayList => JavaList}

Then, within your code, refer to the class by the alias you've given it:

val list = new JavaList[String]

You can also rename multiple classes at one time during the import process:

import java.util.{Date => JDate, HashMap => JHashMap}

Because you've created these aliases during the import process, the original (real) name of the class can't be used in your code. For instance, in the last example, the following code will fail because the compiler can't find the java.util.HashMap class:

// error: this won't compile because HashMap was renamed during the import process
val map = new HashMap[String, String]

Discussion

As shown, you can create a new name for a class when you import it, and can then refer to it by the new name, or alias. The book Programming in Scala, by Odersky, et al (Artima), refers to this as a renaming clause. This can be very helpful when trying to avoid namespace collisions and confusion. Class names like Listener, Handler, Client, Server, and many more are all very common, and it can be helpful to give them an alias when you import them.

From a strategy perspective, you can either rename all classes that might be conflicting or confusing:

import java.util.{HashMap => JavaHashMap}
import scala.collection.mutable.{Map => ScalaMutableMap}

or you can just rename one class to clarify the situation:

import java.util.{HashMap => JavaHashMap}
import scala.collection.mutable.Map

As an interesting combination of several recipes, not only can you rename classes on import, but you can even rename class members.
As an example of this, in shell scripts I tend to rename the println method to a shorter name, as shown here in the REPL:

scala> import System.out.{println => p}
import System.out.{println=>p}

scala> p("hello")
hello

The Scala Cookbook

This tutorial is sponsored by the Scala Cookbook, which I wrote for O'Reilly. You can find the Scala Cookbook at these locations:
https://alvinalexander.com/scala/how-to-rename-members-import-scala-classes-methods-functions
19 October 2011 15:55 [Source: ICIS news]

HOUSTON (ICIS)--The 612,000 tonne/year cracker is up and the company expects to run the unit at full rates by the weekend, a market participant said. A Williams spokesperson did not immediately respond to a request for comment.

Two other crackers in the region have also been down. ExxonMobil is expected to restart its cracker this week, but a company spokesperson declined to comment on a timetable. The 1m tonne/year unit has been down since the first week of September.

Shell's cracker has also been affected. The company has 835,000 tonnes/year of ethylene capacity at the site.

For more on ethylene visit ICIS chemical intelligence
http://www.icis.com/Articles/2011/10/19/9501383/williams-louisiana-cracker-restarted-full-rate-by.html
Using Flow API to Compose Your Jina Workflow

In Jina, the Flow handles the orchestration of your workflow.

Use Flow API in Python

Create a Flow

To create a new Flow:

from jina.flow import Flow
f = Flow()

Flow() accepts some arguments; see jina flow --help or our documentation for details. For example, Flow(log_server=True) will enable sending logs to the dashboard.

Add a Containerized Pod into the Flow

f = (Flow().add(name='p1')
           .add(name='p2', image='jinaai/hub.executors.encoders.bidaf:latest')
           .add(name='p3'))

This will run p2 in a Docker container equipped with the image jinaai/hub.executors.encoders.bidaf:latest. More information on using containerized Pods can be found in our documentation.

Add a Remote Containerized Pod into the Flow

A very useful pattern is to combine the above two features together:

f = (Flow().add(name='p1')
           .add(name='p2', host='192.168.0.100', port_expose=53100,
                image='jinaai/hub.executors.encoders.bidaf:latest')
           .add(name='p3'))

More information can be found in our documentation.

Parallelize the Steps

By default, if you keep adding .add() to a Flow, it will create a long chain of sequential steps:

f = (Flow().add(...)
           .add(...))

Iterate over Pods in the Flow

You can iterate the Pods in a Flow like you would a list:

f = (Flow().add(...)
           .add(...))

for p in f.build():
    print(f'{p.name} in: {str(p.head_args.socket_in)} out: {str(p.head_args.socket_out)}')

Feed Data to the Flow

with f:
    f.index(input_fn, output_fn=print)

input_fn is an Iterator[bytes], each of which corresponds to the representation of a Document with bytes. output_fn is the callback invoked on each completed request.

Please check out our hello world in client-server architecture for a complete example.
WARNING: don't use a while loop to do the waiting; it is extremely inefficient:

with f:
    while True:  # <- dont do that
        pass     # <- dont do that

Use Flow API in YAML

You can also write a Flow in YAML:

!Flow
with:
  logserver: true
pods:
  chunk_seg:
    uses: craft/index-craft.yml
    replicas: $REPLICAS
    read_only: true
  doc_idx:
    uses: index/doc.yml
  tf_encode:
    uses: encode/encode.yml
    needs: chunk_seg
    replicas: $REPLICAS
    read_only: true
  chunk_idx:
    uses: index/chunk.yml
    replicas: $SHARDS
    separated_workspace: true
  join_all:
    uses: _pass
    needs: [doc_idx, chunk_idx]
    read_only: true

You can use environment variables with $ in YAML. More information on the Flow YAML Schema can be found in our documentation.
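Jina resolves the $-prefixed placeholders itself when it loads the YAML, but the mechanics can be illustrated with a plain environment-variable expansion in Python; the variable names below mirror the YAML and the snippet is only a sketch of the substitution step, not Jina's loader:

```python
import os

# Two placeholders of the kind used in the Flow YAML above
yaml_text = "replicas: $REPLICAS\nshards: $SHARDS\n"

os.environ["REPLICAS"] = "3"
os.environ["SHARDS"] = "2"

# os.path.expandvars substitutes $NAME occurrences from the environment
print(os.path.expandvars(yaml_text))
```

Unset variables are left untouched by os.path.expandvars, which is a useful property when a template is expanded in stages.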
https://docs.jina.ai/v0.9.12/chapters/flow/index.html
AWS Batch is a powerful, cloud-native batch scheduler. In a managed compute environment, you submit jobs (containerized applications that can be executed at scale) to a queue while AWS handles the job scheduling and lifecycle for you. You define the container image, application code, and the required parameters, and AWS Batch will schedule the job, deploy, and execute the task for you.

This blog post is divided into two sections. The first section explains how to set up the AWS Batch compute environment and instrument AWS Batch jobs using the AWS X-Ray SDK. The second section provides a walkthrough to deploy a sample AWS Batch job using X-Ray. All the steps in the setup section of the post are automated in the sample application. The following diagram shows the architecture components.

Figure 1 – AWS Batch compute environment configured with X-Ray daemon

Setup

Follow these steps to configure X-Ray in an AWS Batch managed compute environment:

1. Install the X-Ray daemon on instances in the compute environment.

To successfully use AWS X-Ray, the application must send JSON segment documents to a daemon process that listens for UDP traffic. The X-Ray daemon buffers segments in a queue and uploads them to X-Ray. On Batch, the daemon must be installed on the EC2 instances used to run jobs. AWS Batch supports launch templates, so the user data can perform custom configuration consistently across a compute environment. Although Batch uses your user data to configure the instances, any additional configuration must be formatted in the MIME multi-part archive. That way, the configuration can be merged with the user data that is required to configure your compute resources. The following user data installs the X-Ray daemon:

Note: AWS Batch executes jobs using Amazon Elastic Container Service (Amazon ECS) container instances and tasks. In an ECS cluster, the X-Ray daemon is executed as a container that runs alongside your application.
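The user-data script itself is not reproduced above. As an illustration of the MIME multi-part format that launch-template user data expects, here is a sketch using Python's stdlib email package; the shell commands and the daemon download URL inside the script are assumptions, so check the X-Ray documentation for the correct installer for your AMI:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Hypothetical shell snippet that installs the X-Ray daemon on Amazon Linux;
# the URL and package manager are assumptions, not taken from the article.
user_data_script = """#!/bin/bash
curl -o /tmp/xray.rpm https://s3.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.rpm
yum install -y /tmp/xray.rpm
"""

# Launch-template user data is a multipart/mixed MIME document whose
# shell-script parts are tagged text/x-shellscript.
mime = MIMEMultipart()
mime.attach(MIMEText(user_data_script, "x-shellscript"))
print(mime.as_string())
```

Building the envelope programmatically like this makes it easy to merge your custom part with whatever parts Batch itself requires.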
However, Batch takes care of creating an ECS task definition, so an additional container cannot be defined in the Batch job definition.

2. Add permissions for the instances to call the X-Ray API.

The instance role requires permissions to send the captured traces to AWS X-Ray. The IAM policy, AWSXRayDaemonWriteAccess, gives the X-Ray daemon permission to upload trace data. Attach this policy to ecsInstanceRole. If you choose to create an instance role for your compute environment, be sure to add the AWSXRayDaemonWriteAccess policy to it. For more information about IAM permissions required by the Batch instance role, see Amazon ECS Instance Role.

3. Instrument the Batch job application code.

AWS Batch is not integrated with X-Ray by default. For this reason, the application code must be manually instrumented using the X-Ray SDK for your programming language. (In this blog post, I use Python.) To instrument libraries, you must patch them with the X-Ray SDK. In the sample application, the boto3 library is used to interact with AWS services and requests is used to perform HTTP requests to internet resources.

import boto3
import requests
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch

libraries = (['boto3', 'requests'])
patch(libraries)

Whenever the X-Ray SDK is used with an integrated service, such as AWS Elastic Beanstalk, the root segment is configured by the SDK automatically. However, in the Batch job, there is no active segment, so you must start and end the tracing context in the code:

# Start a segment if no segment exists
segment = xray_recorder.begin_segment('BatchJob')

# Do stuff

if xray_recorder.is_sampled():
    # For sampled traces, the job_id is added as an annotation
    xray_recorder.put_annotation('job_id', os.getenv('AWS_BATCH_JOB_ID'))

# Do more stuff and end the tracing segment
xray_recorder.end_segment()

Note: The method for starting a segment with the X-Ray SDK is different for each programming language.
For example, on Node.js applications, you can use the following syntax to start the segment and tracing context manually:

// Instrumenting all AWS SDK clients with X-Ray
var AWSXRay = require('aws-xray-sdk');
var AWS = AWSXRay.captureAWS(require('aws-sdk'));

// Starting segment
var segment = new AWSXRay.Segment(name);
var ns = AWSXRay.getNamespace();

ns.run(function () {
  AWSXRay.setSegment(segment);
  // Requests using AWS SDK, HTTP calls, SQL queries...
  segment.close();
});

For information about the supported programming languages, see the AWS X-Ray Developer Guide. For SDK configuration for your programming language, use the official GitHub repositories: Python, Node.Js, Go, Java, Ruby, and .NET.

Sample application

In the following sections, I show you how I configured a Batch compute environment with X-Ray. I ran a simple Batch job that interacts with Amazon S3 and sends HTTP requests to internet resources. To simplify the deployment of this sample implementation, I used the AWS Cloud Development Kit (AWS CDK).

Prerequisites

To launch the sample application and underlying infrastructure, you need the following:

- npm and Docker installed on your machine.
- The AWS CLI. For more information, see Installing, updating, and uninstalling the AWS CLI.

Setup process

Here is the setup process:

- Prepare the environment for AWS CDK.
- Deploy AWS CDK application.
- Submit Batch jobs.
- Visualize X-Ray Traces.

Prepare environment for AWS CDK

The AWS CDK is an open-source software development framework to define cloud infrastructure in familiar programming languages and provision it through AWS CloudFormation. AWS CDK works by creating assets to deploy its application. Assets are files, directories, or Docker images that need to be stored on the AWS Cloud. By bootstrapping the environment, the AWS CDK creates the required resources to store assets. In this blog post, an S3 bucket is required to store AWS CloudFormation templates.
For more information about AWS CDK bootstrapped resources, see Bootstrap resources on GitHub. Before you bootstrap your environment, download the source code from this Github repository, and then navigate to the directory with the package.json file to install the dependencies:

$ git clone
$ cd aws-batch-xray
$ npm install

To create the resources required by the AWS CDK, run the following command in the directory where cdk.json exists. Be sure to replace ACCOUNT-ID and REGION with your AWS account ID and the AWS Region where the stack will be created:

cdk bootstrap aws://ACCOUNT-ID/REGION

Deploy the AWS CDK application

After the AWS environment has been bootstrapped, you are now ready to deploy the sample application. To deploy the AWS CDK stack, run the following command in the directory where cdk.json exists:

$ cdk deploy BatchXrayStack

The deployment of the stack will take a few minutes. You can follow the deployment status from the command line you used to execute the cdk deploy command or you can use the AWS CloudFormation console to check the status of the stack named BatchXrayStack.

Submit your Batch jobs

Finally, it's time to submit Batch jobs and see X-Ray traces being generated. The following sample CLI commands will submit one job to Batch. For information about how to submit Batch jobs from the console, see Submitting a job.

# Be sure to export the desired region name
$ export AWS_DEFAULT_REGION=eu-west-1
$ JOB_DEFINITION=$(aws cloudformation describe-stacks --stack-name BatchXrayStack --query "Stacks[*].Outputs[?ExportName=='JobDefinitionArn'].OutputValue" --output text)
$ JOB_QUEUE=$(aws cloudformation describe-stacks --stack-name BatchXrayStack --query "Stacks[*].Outputs[?ExportName=='JobQueueArn'].OutputValue" --output text)
$ JOB_ID=$(aws batch submit-job --job-name batch-x-ray --job-queue $JOB_QUEUE --job-definition $JOB_DEFINITION --query 'jobId' --output text)

The preceding command assigns the Batch job ID to the variable JOB_ID.
To see its value:

$ echo $JOB_ID

Now that the Batch job has been SUBMITTED, it will take a few moments for the Batch scheduler to provision the job. After the job reaches a RUNNABLE state, the scheduler will send the job to an instance. For more information, see Job States. To check the job state, run the following command:

$ aws batch describe-jobs --jobs $JOB_ID --query 'jobs[*].status'

After the command returns SUCCEEDED in the output, you can see the generated trace. It includes detailed information about the Batch job execution.

Visualize X-Ray traces

Open the AWS X-Ray console and, using the same JOB_ID from the previous section, use the search bar at the top of the page. Use the following filter expression:

annotation.job_id = "JOB_ID"

After the trace is found, click the link for the ID. You should see a trace map similar to the following:

Figure 2 – Sample application trace in the AWS X-Ray console

Cleanup

To delete the CloudFormation stacks created through the AWS CDK, run the following commands.

BUCKET=$(aws cloudformation describe-stacks --stack-name BatchXrayStack --query "Stacks[*].Outputs[?ExportName=='BucketForBatchJob'].OutputValue" --output text)

# Empty the bucket with images uploaded by the Batch job
aws s3 rm s3://$BUCKET --recursive

# Delete CDK stack
cdk destroy BatchXrayStack

# Delete CDK bootstrap stack
aws cloudformation delete-stack --stack-name CDKToolkit

Conclusion

In this blog post, I showed you how you can set up AWS X-Ray tracing on your AWS Batch jobs. AWS provides a powerful suite of observability tools, including X-Ray, CloudWatch ServiceLens, CloudWatch Container Insights, CloudWatch Lambda Insights, and X-Ray insights. For more information, see Getting started with Amazon CloudWatch and the One Observability Demo Workshop.

Authors

Thales von Sperling is a Cloud Support Engineer on the Deployment Services team, focused on helping customers on their DevOps journey.
He likes implementing infrastructure with AWS CloudFormation and enjoys finding new ways to automate workflows using AWS Lambda. In his free time, Thales enjoys riding his mountain bike.
https://awsfeed.com/whats-new/management-tools/how-to-configure-aws-x-ray-tracing-for-your-aws-batch-jobs
This chapter includes:

Printing and drawing are the same in Photon; the difference depends on the draw context, a data structure that defines where the draw stream flows.

To print in Photon:

The page range is just a string. The application must parse it to determine which pages should be printed. For example: ...

The orientation of the page doesn't affect the margins.

The control structure has at least the following members:

Each entry indicates which context attributes were modified by that level of control. For example:

if( control->changed[PRINTER_GLOBAL] & (1<<Pp_PC_NAME) )
    printf( "Print name has been changed according\
 to global printer spec file\n");

if( control->emitted & (1<<Pp_PC_SOURCE_OFFSET) )
    printf( "source offset has been emitted\n");

The first step to printing in Photon is to create a print context by calling PpPrintCreatePC():

PpPrintContext_t *pc;
pc = PpPrintCreatePC();

Once the print context is created, you must set it up properly for your printer and the options (orientation, paper size, etc.) you want to use. This can be done by any of the following: ...

You can also use PpPrintSetPC() directly instead of, or in addition to, using the PtPrintSel widget or PtPrintSelection().

After creating a print context, you must select a printer:

err = PpPrintOpen(pc);
if (err == -1)
    // check errno
// succeeded

This loads the print context with the values found in your $HOME/.photon/print/config and /usr/photon/print/printers files for the default printer defined in your $HOME/.photon/print/default file.

72 points/inch * 8.5 inches = 612 points

When setting the source size, take the nonprintable area of the printer into account. All printers have a margin around the page that they won't print on, even if the page margins are set to 0. Therefore, the size set above is actually a bit larger than the size of a page, and the font will be scaled down to fit on the printable part of the page.
In the following example, the page size and nonprintable area are taken into account to give the proper source size and text height. Try this, and measure the output to prove the font is 1" high from ascender to descender: #include <stdio.h> #include <stdlib.h> #include <Pt.h> PtWidget_t *label, *window; PpPrintContext_t *pc; int quit_cb (PtWidget_t *widget, void *data, PtCallbackInfo_t *cbinfo ) { exit (EXIT_SUCCESS); return (Pt_CONTINUE); } int print_cb (PtWidget_t *widget, void *data, PtCallbackInfo_t *cbinfo ) { int action; PhDim_t size; PhRect_t const *rect; PhDim_t const *dim; action = PtPrintSelection(window, NULL, "Demo Print Selector", pc, Pt_PRINTSEL_DFLT_LOOK); if (action != Pt_PRINTSEL_CANCEL) { // Get the nonprintable area and page size.Both are in // 1/1000ths of an inch.(pc); } return (Pt_CONTINUE); } int main(int argc, char *argv[]) { PtArg_t args[10]; PtWidget_t *print, *quit; PhDim_t win_dim = { 400, 200 }; PhPoint_t lbl_pos = {0, 0}; PhArea_t print_area = { {130, 170}, {60, 20} }; PhArea_t quit_area = { {210, 170}, {60, 20} };, 1, args)) == NULL) exit (EXIT_FAILURE); // Create a print context. pc = PpPrintCreatePC(); // Create a label to be printed. PtSetArg (&args[0], Pt_ARG_POS, &lbl_pos, 0); PtSetArg (&args[1], Pt_ARG_TEXT_STRING, "I am 1 inch high", 0); PtSetArg (&args[2], Pt_ARG_TEXT_FONT, "swiss72", 0); PtSetArg (&args[3], Pt_ARG_MARGIN_HEIGHT, 0, 0); PtSetArg (&args[4], Pt_ARG_MARGIN_WIDTH, 0, 0); PtSetArg (&args[5], Pt_ARG_BORDER_WIDTH, 0, 0); label = PtCreateWidget (PtLabel, window, 6,); } You should also set the source offset, the upper left corner of what's to be printed. For example, if you have a button drawn at (20, 20) from the top left of a pane and you want it to be drawn at (0, 0) on the page, set the source offset to (20, 20). Any other widgets are drawn relative to their position from this widget's origin. A widget at (40, 40) will be drawn at (20, 20) on the page. The code is as follows: PhPoint_t offset = {20, 20}; ... P. 
After you've made the print context active, you can start drawing widgets. This can be done by any combination of the following: You can force a page break at any point by calling PpPrintNewPage(). Note that once you call PpPrintOpen(), any changes to the print context take effect after the next call to PpPrintNewPage(). For example, to print a widget B: PtDamageWidget(B); PtFlush(); Widget B redraws itself, and the draw commands are redirected to the destination in the print context.: For example, to print a widget B (even if it hasn't been realized): PpPrintWidget(pc, B, trans, NULL, opt); PtFlush(); If you want to start a new page, call PpPrintNewPage(pc); Any changes to the print context (such as the orientation) will go into effect for the new page. If you want to print all the contents of a widget that scrolls, you need some special processing: The only way to make a PtList print (or draw) all the items is by resizing it to be the total height of all the items. The easiest way is probably by using the resize policy: Due to a bug in the resize flags of the multitext widget, the method used for PtList doesn't currently work on PtMultiText. To print a PtMultiText widget's entire text: ); } For a PtScrollArea, you need to print its virtual canvas, which is where all widgets created within or moved to a scroll area are placed: PtValidParent( ABW_Scroll_area, PtWidget ); PtWidgetOffset( PtValidParent( ABW_Scroll_area, PtWidget )); PpPrintWidget( pc, PtValidParent( ABW_Scroll_area, PtWidget ), NULL, NULL, opt); ); When you're finished printing your widgets, the print context must be deactivated and closed. 
This is done by calling:

PpPrintStop(pc);
PpPrintClose(pc);

After that, all draw events will again be directed to the graphics driver.

/* ... */ (pc);
    }
    return (Pt_CONTINUE);
}

int main(int argc, char *argv[])
{
    PtArg_t args[4];
    PtWidget_t *print, *quit;
    PhDim_t win_dim = { 200, 200 };
    PhArea_t pane_area = { {0, 0}, {200, 150} };
    PhArea_t print_area = { {30, 170}, {60, 20} };
    PhArea_t quit_area = { {110, 170}, {60, 20} };
    PhArea_t cir_area = { {35, 20}, {130, 110} };
    PhArea_t cir2_area = { {67, 40}, {20, 20} };
    PhArea_t cir3_area = { {110, 40}, {20, 20} };
    PhArea_t cir4_area = { {85, 80}, {30, 30} };

    /* ... */, 2, args)) == NULL)
        exit (EXIT_FAILURE);

    // create a print context
    pc = PpPrintCreatePC();

    // create the pane to be printed
    PtSetArg (&args[0], Pt_ARG_AREA, &pane_area, 0);
    pane = PtCreateWidget (PtPane, window, 1, args);

    // put some stuff in the pane to be printed
    PtSetArg (&args[0], Pt_ARG_AREA, &cir_area, 0);
    PtCreateWidget (PtEllipse, pane, 1, args);

    PtSetArg (&args[0], Pt_ARG_AREA, &cir2_area, 0);
    PtSetArg (&args[1], Pt_ARG_FILL_COLOR, Pg_BLACK, 0);
    PtCreateWidget (PtEllipse, pane, 2, args);

    PtSetArg (&args[0], Pt_ARG_AREA, &cir3_area, 0);
    PtSetArg (&args[1], Pt_ARG_FILL_COLOR, Pg_BLACK, 0);
    PtCreateWidget (PtEllipse, pane, 2, args);

    PtSetArg (&args[0], Pt_ARG_AREA, &cir4_area, 0);
    PtCreateWidget (PtEllipse, pane, 1, args);
}
https://www.qnx.com/developers/docs/qnx_4.25_docs/photon114/prog_guide/printing.html
Lab 5: Linked Lists

Due at 11:59pm on 07/07/2016. The starter code for questions 2 and 3 is in lab05.py.

- Questions 4 through 11 (Coding).

Note: Notice that we can just use link(37) instead of link(37, empty). This is because the second argument of the link constructor has a default argument of empty.

Required Questions

What Would Python Display?

Question 1:

Coding Practice

Question 2:

Question 3: Is Sorted?

Implement the is_sorted(lst) function, which returns True if the linked list lst is sorted in increasing order from left to right. If two adjacent elements are equal, the linked list can still be considered sorted.

def is_sorted(lst):
    if lst == empty or rest(lst) == empty:
        return True
    elif first(lst) > first(rest(lst)):
        return False
    return is_sorted(rest(lst))

Use OK to test your code:

python3 ok -q is_sorted

Optional Questions

Coding Practice

Note: The following questions are in lab05_extra.py.

Question 4: Sum

Write a function that takes in a linked list lst and a function fn which is applied to each number in lst and returns the sum. If the linked list is empty, the sum is 0.

Question 5: Change

Write a function that takes in a linked list, lst, and two elements, s and t. The function returns lst but with all instances of s replaced with t.

def change(lst, s, t):
    """Returns a link matching lst but with all instances of s replaced
    by t. If s does not appear in lst, then return lst.

    >>> lst = link(1, link(2, link(3)))
    >>> new = change(lst, 3, 1)
    >>> print_link(new)
    1 2 1
    >>> newer = change(new, 1, 2)
    >>> print_link(newer)
    2 2 2
    >>> newest = change(newer, 5, 1)
    >>> print_link(newest)
    2 2 2
    """
    if lst == empty:
        return lst
    if first(lst) == s:
        return link(t, change(rest(lst), s, t))
    return link(first(lst), change(rest(lst), s, t))

Use OK to test your code:

python3 ok -q change

Question 6: Link to List

Write a function link_to_list that takes a linked list and converts it to a Python list.

Hint: To check if a linked list is empty, you can use lst == empty. Also, you can combine two Python lists using +.
def link_to_list(linked_lst):
    """Return a list that contains the values inside of linked_lst.

    >>> link_to_list(empty)
    []
    >>> lst1 = link(1, link(2, link(3, empty)))
    >>> link_to_list(lst1)
    [1, 2, 3]
    """
    if linked_lst == empty:
        return []
    else:
        return [first(linked_lst)] + link_to_list(rest(linked_lst))

# Iterative version
def link_to_list_iterative(linked_lst):
    """
    >>> link_to_list_iterative(empty)
    []
    >>> lst1 = link(1, link(2, link(3, empty)))
    >>> link_to_list_iterative(lst1)
    [1, 2, 3]
    """
    new_lst = []
    while linked_lst != empty:
        new_lst += [first(linked_lst)]
        linked_lst = rest(linked_lst)
    return new_lst

Use OK to test your code:

python3 ok -q link_to_list

Question 7: Insert

Implement the insert function that creates a copy of the original list with an item inserted at the specific index. If the index is greater than the current length, you should insert the item at the end of the list. Review your solution for change if you are stuck.

Hint: This will be much easier to implement using recursion, rather than using iteration!
def insert(lst, item, index):
    """Returns a copy of lst with item inserted at the given index.

    >>> lst = link(1, link(2, link(3)))
    >>> new = insert(lst, 9001, 1)
    >>> print_link(new)
    1 9001 2 3
    >>> newer = insert(new, 9002, 15)
    >>> print_link(newer)
    1 9001 2 3 9002
    """
    if lst == empty:
        return link(item, empty)
    elif index == 0:
        return link(item, lst)
    else:
        return link(first(lst), insert(rest(lst), item, index-1))

Use OK to test your code:

python3 ok -q insert

Question 8: Interleave

# Recursive version
def interleave(s0, s1):
    if s0 == empty:
        return s1
    elif s1 == empty:
        return s0
    return link(first(s0), link(first(s1), interleave(rest(s0), rest(s1))))

# Iterative version
def interleave(s0, s1):
    interleaved = empty
    while s0 != empty and s1 != empty:
        interleaved = link(first(s1), link(first(s0), interleaved))
        s0, s1 = rest(s0), rest(s1)
    remaining = s1 if s0 == empty else s0
    while remaining != empty:
        interleaved = link(first(remaining), interleaved)
        remaining = rest(remaining)
    return reverse_iterative(interleaved)

def reverse_iterative(s):
    rev_list = empty
    while s != empty:
        rev_list = link(first(s), rev_list)
        s = rest(s)
    return rev_list

Use OK to test your code:

python3 ok -q interleave

Question 9: Filter

Implement a filter_list function that takes a linked list lst and returns a new linked list only containing elements from lst that satisfy predicate. Remember, recursion is your friend!

def filter_list(predicate, lst):
    """Returns a link only containing elements in lst that satisfy predicate.

    >>> lst = link(25, link(5, link(50, link(49, link(80, empty)))))
    >>> new = filter_list(lambda x : x % 2 == 0, lst)
    >>> print_link(new)
    50 80
    """
    if lst == empty:
        return lst
    elif predicate(first(lst)):
        return link(first(lst), filter_list(predicate, rest(lst)))
    else:
        return filter_list(predicate, rest(lst))

Use OK to test your code:

python3 ok -q filter_list

Question 10: Reverse

Write iterative and recursive functions that reverse a given linked list, producing a new linked list with the elements in reverse order. Use only the link constructor and first and rest selectors to manipulate linked lists. (You may write and use helper functions.)
def reverse_iterative(s):
    """Return a reversed version of a linked list s.

    >>> primes = link(2, link(3, link(5, link(7, empty))))
    >>> reversed_primes = reverse_iterative(primes)
    >>> print_link(reversed_primes)
    7 5 3 2
    """
    rev_list = empty
    while s != empty:
        rev_list = link(first(s), rev_list)
        s = rest(s)
    return rev_list

def reverse_recursive(s):
    """Return a reversed version of a linked list s.

    >>> primes = link(2, link(3, link(5, link(7, empty))))
    >>> reversed_primes = reverse_recursive(primes)
    >>> print_link(reversed_primes)
    7 5 3 2
    """
    return reverse_helper(s, empty)

def reverse_helper(s, tail):
    if s == empty:
        return tail
    return reverse_helper(rest(s), link(first(s), tail))

Use OK to test your code:

python3 ok -q reverse_iterative
python3 ok -q reverse_recursive

Question 11: Kth to Last

Implement the kth_last(lst, k) function, which returns the element that is k positions from the last element.

def kth_last(lst, k):
    """Return the kth to last element of `lst`.

    >>> lst = link(1, link(2, link(3, link(4))))
    >>> kth_last(lst, 0)
    4
    >>> print(kth_last(lst, 5))
    None
    """
    # Iterative version
    ahead = lst
    for _ in range(k):
        if ahead == empty:
            return None
        ahead = rest(ahead)
    if ahead == empty:
        return None
    start = lst
    while rest(ahead) != empty:
        ahead = rest(ahead)
        start = rest(start)
    if start == empty:
        return None
    return first(start)

# Recursive version
def kth_last(lst, k):
    def unwind_rewind(lst):
        if lst == empty:
            return (k, None, False)
        previous_k, kth_element, found = unwind_rewind(rest(lst))
        if found:
            return (0, kth_element, True)
        if previous_k == 0 and not found:
            return (0, first(lst), True)
        return (previous_k - 1, kth_element, False)
    return unwind_rewind(lst)[1]

# Alternate recursive version
def kth_last(lst, k):
    def unwind_rewind(lst):
        nonlocal k
        if lst == empty:
            return None
        found = unwind_rewind(rest(lst))
        if found is not None:
            return found
        if k == 0:
            return first(lst)
        k -= 1
        return None
    return unwind_rewind(lst)

Use OK to test your code:

python3 ok -q kth_last
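All of the solutions above lean on the link/first/rest/empty abstraction that ships in the lab's starter files but isn't reproduced on this page. For reference, here is a minimal sketch of that abstraction (the starter file's actual representation may differ, e.g. in how it validates arguments or represents empty):

```python
empty = 'empty'

def link(first, rest=empty):
    """Construct a linked list from its first element and the rest of
    the list. The second argument defaults to empty, which is why
    link(37) is the same as link(37, empty)."""
    return [first, rest]

def first(lst):
    """Return the first element of the linked list lst."""
    return lst[0]

def rest(lst):
    """Return the rest of the linked list lst."""
    return lst[1]

lst = link(1, link(2, link(3)))
assert first(lst) == 1
assert first(rest(lst)) == 2
assert rest(rest(rest(lst))) == empty
assert link(37) == link(37, empty)
```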
http://inst.eecs.berkeley.edu/~cs61a/su16/lab/lab05/
How to query a SQL database in C# using TableAdapters

Visual Studio has some great features to help you access the database and create objects for your database. You could manually create a connection string and manually create objects that represent the data in your database, as described here: How to query a SQL database in C#?. This article shows you how Visual Studio can do this for you.

So how is it done? By adding a Data Source.

Imagine you have a simple database for authentication with these tables:

User
- Id INT AUTOINCREMENT
- UserName VARCHAR(100)
- Password VARCHAR(MAX)
- Salt VARCHAR(MAX)

Person
- Id INT AUTOINCREMENT
- FirstName VARCHAR(255)
- LastName VARCHAR(255)
- Birthdate DATETIME
- UserId INT FK to User.Id

Now imagine that you want to query these tables and use the data in your application.

Step 1 – Create a Visual Studio Project

- In Visual Studio, create a new C# Console Application project.
- Once you have the project created, click on Project | Add New Data Source.
- Select Database and click Next.
- Select DataSet and click Next.
- Click New Connection and follow the wizard to connect to your database.
- Make sure that "Yes, save the connection as" is checked, give your saved connection a name, and click Next.
- Click the checkbox next to Tables and click Finish.

This adds the following files to your project (the names might be slightly different on yours):

- AuthDataSet.xsd
- AuthDataSet.Designer.cs
- AuthDataSet.xsc
- AuthDataSet.xss

This will add table adapters to your project. This basically does a lot of work for you and can save you a lot of potential development time.
Step 2 – Query a SQL Database using the Table Adapter

Now you can get the data from either of your tables with one line of code:

using System;
using System.Data;
using TableAdapterExample.AuthDataSetTableAdapters;

namespace TableAdapterExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Query the database (select * from Person) into a DataTable
            AuthDataSet.PersonDataTable table = new PersonTableAdapter().GetData();

            // Print out the table as proof.
            PrintDataTable(table);
        }

        /// <summary>How to print a DataTable.</summary>
        private static void PrintDataTable(AuthDataSet.PersonDataTable table)
        {
            foreach (DataRow row in table.Rows)
            {
                foreach (DataColumn col in table.Columns)
                {
                    Console.Write(row[col].ToString().Trim() + " ");
                }
                Console.WriteLine();
            }
        }
    }
}

Hope that helps you.

I am a bit late for this but: using TableAdapterExample.AuthDataSetTableAdapters; returns an error: "the type or namespace name 'AuthDataSetTableAdapters' does not exist in namespace 'TableAdapterExample'". I am quite new to C#. What should I do to resolve this?

Ignore the "using" and, instead of

AuthDataSet.PersonDataTable table = new PersonTableAdapter().GetData();

write

AuthDataSet.PersonDataTable table = new AuthDataSetTableAdapters.PersonTableAdapter().GetData();
https://www.rhyous.com/2013/05/28/how-to-query-a-sql-database-in-csharp-using-tableadapters
I have a C++ program and I want to add an extension system with Python. But to achieve this I have to map Python object method calls to C++ method calls. Is this possible, and if yes, how can I achieve it?

Example:

Python part:

class Extension(AbstractExtension):
    def __init__(self, cool_cpp_object):
        self.o = cool_cpp_object

    def some_method(self):
        self.o.method_to_cpp()

C++ part:

class SomeClass : public AnotherClass
{
public:
    void method_to_cpp();
};

There are several ways of doing so (StoryTeller correctly notes Boost::Python, and there is Swig too). Personally, I find Cython's C++ integration exceptionally easy to use.

Create some header file, say classes.hpp, and in it put (along with guards, etc.):

class SomeClass : public AnotherClass
{
public:
    void method_to_cpp();
};

Place the implementation in an implementation file in the usual way. Now create a Cython file with an export of the interface you will use:

cdef extern from "classes.hpp":
    cdef cppclass SomeClass:
        void method_to_cpp()

and a Python wrapper:

cdef class PySomeClass:
    cdef SomeClass obj

    def method(self):
        self.obj.method_to_cpp()

That's it, basically. You can import and use PySomeClass like a regular Python class. The link above should explain how to build all the files.
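Once built, the wrapper really is used like any other Python class; it is just the delegation pattern with the C++ object held inside. A pure-Python mock of the same shape, for illustration (SomeClassStandIn is invented here as a stand-in, since the real SomeClass lives in compiled C++):

```python
class SomeClassStandIn:
    """Pure-Python stand-in for the wrapped C++ SomeClass."""
    def method_to_cpp(self):
        return "method_to_cpp called"

class PySomeClass:
    """Mirror of the Cython wrapper: hold the object, forward the calls."""
    def __init__(self):
        self.obj = SomeClassStandIn()

    def method(self):
        return self.obj.method_to_cpp()

# Used exactly like a regular Python class.
wrapper = PySomeClass()
assert wrapper.method() == "method_to_cpp called"
```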
https://codedump.io/share/9SZZquYXw7XL/1/c-and-python-tight-integrating
import "github.com/gohugoio/hugo/resources/resource_factories/create"

Package create contains functions to create Resource objects; these will typically be non-files.

Client contains methods to create Resource objects.

New creates a new Client with the given specification.

FromString creates a new Resource from a string with the given relative target path.

Get creates a new Resource by opening the given filename in the assets filesystem.

GetMatch gets the first resource matching the given pattern from the assets filesystem.

Match gets the resources matching the given pattern from the assets filesystem.

Package create imports 8 packages (graph) and is imported by 9 packages. Updated 2019-08-18.
https://godoc.org/github.com/gohugoio/hugo/resources/resource_factories/create
FHATUWANI Dondry MUVHANGO (17,796 Points)

problems with starter.py

I know it's a simple task, but I'm way too frustrated because I can't get the spacing correct. I believe my spacing is right, but it keeps giving me errors that I can't even find.

def first_function(arg1):
    return 'arg1 is {}'.format(arg1)

def second_function(arg1):
    return 'arg1 is {}'.format(arg1)

class MyClass:
    args = [1, 2, 3]

    def class_func(self):
        return self.args

3 Answers

Jennifer Nordell (Treehouse Teacher)

Hi there! You're right. There are invisible characters that are giving the errors. It's counting the indentation as whitespace on lines that otherwise contain nothing else. This has to do with the way things auto-indent on a new line. And because you're so close here, I'm just going to give this hint: put your cursor in the lines you believe are blank. I can almost guarantee you it won't be flush with the left side. Backspace to remove the additional whitespace that was added, and recheck the code. Pay close attention to the "Bummer!" messages as they are absolutely crucial to passing this challenge. I hope this helps, but let me know if you're still stuck!

FHATUWANI Dondry MUVHANGO (17,796 Points)

OK, I've tried your method. I'm left with only one error, on line 13, that I definitely don't know how to fix. It keeps saying "no new line at the end of the file".

Jennifer Nordell (Treehouse Teacher)

Then you're at the end! Simply add a new line after the last line that you've typed. Be careful though, it might try to auto-indent, so make sure you take out any of those pesky spaces that might have popped up so that the cursor is flush with the left side.
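The "invisible characters" Jennifer describes can also be hunted down programmatically instead of by cursor placement. A small sketch (the function name is my own) that reports lines which look blank but still contain spaces or tabs, which is exactly what this kind of whitespace check complains about:

```python
def whitespace_only_lines(source):
    """Return the 1-based numbers of lines that appear blank
    but actually contain spaces or tabs."""
    return [n for n, line in enumerate(source.splitlines(), start=1)
            if line != '' and line.strip() == '']

code = "def f(x):\n    return x\n    \n\nclass C:\n  "
# Lines 3 and 6 look empty but hold leftover auto-indentation.
assert whitespace_only_lines(code) == [3, 6]
```

Stripping the offenders is then one line: `'\n'.join(line.rstrip() for line in source.splitlines())`.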
https://teamtreehouse.com/community/problems-with-starterpy
D3.js version 5: Lazily loading DOM elements

In this example I demonstrate how you can lazily load DOM elements as you scroll through a page using D3.js. The full code for this is available on my website.

Why you might want to lazily load DOM elements

As page size increases, the browser will slow down rendering it. This can have detrimental effects when someone is using the page. One way to reduce the number of DOM elements on the page is to only draw those that a viewer can see. Once the person has scrolled past elements, they can be cleaned up and removed. This should improve the redraw speed and memory usage of the page, as it needs to store fewer elements in the DOM. It will, however, increase the complexity of the page, and may overall cause it to run slower cumulatively as it will spend more time performing calculations.

Working out where each element will be placed

The big disadvantage of this method is that you need to keep track of where all the elements are on the page. Once you know this you can work out which ones you want to conditionally hide. In my example I am going to focus on y-axis scrolling (standard scrolling). As elements get scrolled offscreen they will be removed from the page. For my example I am creating rectangles and, when they are created, storing the maximum and minimum y positions. Here this is done in a loop that creates my data:

for(var i = 0; i < ROWS; i++) {
    rawData.push({"index": i, "color": colorScale(i),
                  "startY": i * ROW_HEIGHT, "endY": (i+1) * ROW_HEIGHT});
}

Calculating these values is trivial in this case as my SVG is placed at (0,0) on the page. However, if your SVG is not placed here you will also need to calculate the position of the SVG and add this to your Y values. These values are stored, as the position of these elements never changes. This is then later used to decide whether an element should be rendered or not.
Only rendering those elements currently shown

Once I have calculated the Y positions of the elements I can create my filter function to decide what is within the bounds of the user's scroll. Before I pass my data array to D3, I call filter() on it to remove the elements I don't want to render.

// This will be called for each element; if this returns true it will keep
// the element in the array, otherwise it will remove it from the newly
// returned array
function filterVisibleElements(d) {
    return (d.startY >= minYWithPreloading && d.endY <= maxYWithPreloading) ||
           (d.startY <= minYWithPreloading && d.endY >= minYWithPreloading) ||
           (d.startY <= maxYWithPreloading && d.endY >= maxYWithPreloading);
}

Each element will be tested with the above function to decide whether it should be drawn or not. For this I have defined two variables, minYWithPreloading and maxYWithPreloading. These define the minimum and maximum Y that I want all elements to sit in. They have been created by taking the current Y positions of the screen and padding them by 1.5 times the viewport. This renders the elements before the user scrolls to them, meaning that if they scroll fast the elements should always be rendered.

The three tests used to decide whether an element should be rendered are:

- Does the element sit entirely between the two bounds
- Does the element sit at the top of one of the bounds, with part of it inside the bound
- Does the element sit at the bottom of one of the bounds, with part of it inside the bound

If the elements match these criteria, they are returned in the data array that is passed to D3.

var filteredData = rawData.filter(filterVisibleElements);
var rowRectSelection = dynamicSvg.selectAll(".rowsMain").data(filteredData);
var rowRectScrollSelection = scrollSvg.selectAll(".rowsScroll").data(filteredData);

Every time the user scrolls, the update function is called and the visible elements are updated.
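The three bullet tests above are the classic interval-overlap check, and they are equivalent to a single condition: the element's [startY, endY] range intersects the padded [min, max] range. A quick sketch of that equivalence (in Python for brevity; the article's own code is JavaScript):

```python
def three_tests(start, end, lo, hi):
    """The filter exactly as the article states it: entirely inside,
    straddling the top bound, or straddling the bottom bound."""
    return ((start >= lo and end <= hi)
            or (start <= lo and end >= lo)
            or (start <= hi and end >= hi))

def overlaps(start, end, lo, hi):
    """Equivalent single-condition interval intersection test."""
    return start <= hi and end >= lo

# The two predicates agree for every interval with start <= end and lo <= hi.
for start in range(8):
    for end in range(start, 8):
        for lo in range(8):
            for hi in range(lo, 8):
                assert three_tests(start, end, lo, hi) == overlaps(start, end, lo, hi)
```

Collapsing the filter to the single overlaps() condition is a small simplification that leaves fewer edge cases to get wrong if the bounds logic grows.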
Summary of lazily loading DOM elements

First, when the elements are created, their Y positions are calculated. These are stored, as they will be constant throughout the lifetime of the page. Once this is calculated the update function is called for the first time, drawing the first set of elements. Once this has been done, any further scroll events will trigger an update, refreshing which elements are displayed.

The right SVG is used to demonstrate what elements have been loaded on the page. Here the red box is used to show the range of elements that are loaded for the page. The blue box demonstrates the current view of the page.

This relatively simple example allows expansion to more complex methods of drawing a page. It is important to note that if the update method is quite expensive you may need to buffer DOM updates. This may require writing a small buffer function to delay updates if multiple scroll events are received at the same time.

The full code is available on my website and if you have any questions ask below.
https://chewett.co.uk/blog/2335/d3-js-version-5-lazily-loading-dom-elements/
An abstract spin box.

#include <Wt/WAbstractSpinBox>

Although the element can be rendered using a native HTML5 control, by default it is rendered using an HTML4 compatibility workaround which is implemented using JavaScript and CSS, as most browsers do not yet implement the HTML5 native element.

Returns whether a native HTML5 control is used. Taking into account the preference for a native control, configured using setNativeControl(), this method returns whether a native control is actually being used.

Returns the prefix.

Refresh the widget. The refresh method is invoked when the locale is changed using WApplication::setLocale() or when the user hits the refresh button. The widget must actualize its contents in response. Reimplemented from Wt::WFormWidget. Reimplemented in Wt::WDoubleSpinBox.

Configures whether a native HTML5 control should be used. When native, the new "number" input element, specified by HTML5 and when implemented by the browser, is used rather than the built-in element. The native control is styled by the browser (usually in sync with the OS) rather than through the theme chosen. The default is false (as native support is now well implemented).

Sets a prefix. Option to set a prefix string shown in front of the value. The default prefix is empty.

Sets a suffix. Option to set a suffix string shown to the right of the value. The default suffix is empty.

Sets the content of the line edit. The default value is "". Reimplemented from Wt::WLineEdit.

Returns the suffix.

Validates the field. Reimplemented from Wt::WLineEdit.
https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WAbstractSpinBox.html
Updated on August 11, 2020.

... know how I did it and where I struggled. Hopefully it can help you get started with React or Firebase as well!

Project Setup

You can see a live demo of this application here. You need to register an account first before you can see the chat. I did not put any effort into styling. The Git repo of this project can be found here. Feel free to have a look around!

PART 2 IS FINALLY HERE! Make sure to check it out once you've finished part 1, of course.

I am just starting out with React myself as well, so if you are an advanced React user, this blog probably can't help you. But feel free to leave a comment with suggestions or feedback!

Assuming you have basic knowledge of ES6 and React, I'll be using the create-react-app, firebase, and react-router npm packages. So no Redux or any other fancy library. Let's get started!

First, let's create our new React project:

create-react-app chatbox

After this is installed, manually browse to this folder and take a look at the /src folder. There's a bunch of files that I personally do not like, so I'm removing all files in this folder EXCEPT the index.js and index.css files!

Before we start, we also add the react-router and firebase packages to our project:

npm i react-router
npm i react-router-dom
npm i firebase

Now that all our dependencies are installed, we can go ahead and create our folder structure. We need a 'homepage', where all our chats will be displayed, and an Auth folder that will have all our views for authentication, like user registration and login. We'll create a /components folder inside the default /src folder. Afterwards, inside /components we'll create a /Home and a /Auth folder. Our Home folder will have a Home.js and a Home.css file. The Auth folder will have an Auth.css, Register.js and Login.js file.
Here's my folder structure:

- node_modules
- package-lock.json
- package.json
- public
- README.md
- src
  - components
    - Home
      - Home.js
      - Home.css
    - Auth
      - Auth.css
      - Login.js
      - Register.js
  - index.css
  - index.js
- yarn.lock

Let's fire up our local server to see what we have so far! I'm using Yarn, but create-react-app will tell you in the terminal what commands are available for you if you don't have yarn installed. Make sure you are in the root of your project before you run the command.

yarn start

Once the page is loaded, we should see an error (as I removed App.js to clean up my project). So let's start building our Home.js component:

// /components/Home/Home.js
import React from 'react';
import './Home.css';

class Home extends React.Component{
    render(){
        return(
            <div className="home--container">
                <h1>Home Container</h1>
            </div>
        );
    }
}

export default Home;

We'll just add an <h1> for now, to see if our component is working. After this, we need to include our fresh Home component in our index.js file, to make sure we're launching the Home component instead of the previous App component (that we removed).

// index.js
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import Home from './components/Home/Home';

ReactDOM.render(<Home />, document.getElementById('root'));

If we navigate to our project now, we should be able to see our 'Home Container' h1 element on the page. We'll copy the code from our Home component also to our Auth components, to make sure they all work fine, so we have something to look at while we implement our React Router.
import Login from './components/Auth/Login'; import Register from './components/Auth/Register'; ... Implementing React Router Now that we have all our components, let’s hook up React Router first, before we actually start implementing Firebase and Authentication. If you didn’t have react-router and react-router-dom installed in your project already, please go ahead and do so. If you have it, let’s edit our index.js file with react-router. First, import BrowserRouter from the react-router-dom: import { BrowserRouter as Router, Route, Link, Switch } from 'react-router-dom'; Afterward, let’s create a Router component inside our index.js. This looks like a regular React Component like we did for Home and our Auth. class AppRouter extends React.Component{ render(){ return( /* ... router code here ... */ ); } } We’ll add our Routercode in a second. For now, let’s switch out our <Home /> component with our new <AppRouter /> component to make sure we’re displaying the correct component at startup. ReactDOM.render( <AppRouter />, document.getElementById('root') ); Ok, time to build our router inside our previously created AppRouter component! The Router component only accepts 1 child. So we’ll wrap everything in a parent-div: // index.js ... class AppRouter extends React.Component{ render(){ return( <Router> <div className="app"> <nav className="main-nav"> <Link to="/">Home</Link> <Link to="/login">Login</Link> <Link to="/register">Register</Link> </nav> <Switch> <Route path="/" exact component={Home} /> <Route path="/login" exact component={Login} /> <Route path="/register" exact component={Register} /> <Route component={NoMatch} /> </Switch> </div> </Router> ); } } ... We’ll use the <nav> element to display our navigation, so we’re able to navigate through our different views. Instead of <a href=””> elements, we need to use <Link to=””> elements. ReactRouter will automatically render these into normal <a href=””> tags once it’s in the browser. 
Also notice we’re using the Switch component to create our routes. Here we can say that if the URL of the page is equal to /login, then render the Login component. Notice I use a {NoMatch} default Route, that we will be using for 404-pages. here’s the function for the NoMatch if you want to use this too. Place it right under our AppRouter component: const NoMatch = ({location}) =>No route match for {location.pathname}; Implementing Firebase Before we can continue with our authentication, we’ll need to link firebase first. Head over to firebase.com and create a new database. After creating the new database, you should be able to see next screen: Click on the web icon to receive your web access tokens. This will open a popup with all the data you need. Copy all of it, because we’ll need it in a second. If you have not installed the firebase npm package yet, please go ahead and do so. If you have it already, create a new file named firebase.js in our /src folder and paste the firebase snipped we just copied in there. Your firebase.js file will look like this: // firebase.js import firebase from 'firebase'; const config = { apiKey: "xxxxxxxxxx-xxxx-xxxx_xxxxxxxxxx_xxxxxxx", authDomain: "app-name.firebaseapp.com", databaseURL: "", projectId: "app-name", storageBucket: "app-name.appspot.com", messagingSenderId: "xxxxxxxxxxxx" }; firebase.initializeApp(config); export const provider = new firebase.auth.GoogleAuthProvider(); export const auth = firebase.auth(); export default firebase; Import Firebase from ‘firebase’ at the top of this file to ease-up our future process. Because we’re using authentication through firebase as well, we’ll need a few more stuff. Add the firebase Provider and the firebase Auth into the file as well, just as in the example I posted before. At the end of our firebase.js file we need to export our firebase file so we can import it in all the files that will need to link with firebase. 
That’s actually all we need to link our React App with React-router to Firebase! Woohoow, I hope you could still follow along so far? Register new users in firebase First things first, we can’t send messages without a user account. Because we need to know who send the message, so we know with who we’re chatting. Best to do authentication at the start, so we don’t need to manually edit all previous messages with a ‘sender_id’ or something like that. Here’s the setup for our Register.js component: // /components/Auth/Register.js import React from 'react'; import firebase from '../../firebase.js' import { Link } from 'react-router-dom'; import './Auth.css'; import Login from './Login'; class Register extends React.Component{ constructor(props){ super(props); this.state = { username: '', email: '', password: '', error: null } } handleChange = e => { this.setState({[e.target.name]: e.target.value}); } handleSubmit = e => { e.preventDefault(); console.log('Submitting form...'); } render(){ const {email, username, password, error} = this.state; return( <div className="auth--container"> <h1>Register your account</h1> {error && <p className="error-message">{error.message}</p>} <form onSubmit={this.handleSubmit}> <label htmlFor="username">Username</label> <input type="text" name="username" id="username" value={username} onChange={this.handleChange} /> <label htmlFor="email">Email address</label> <input type="text" name="email" id="email" value={email} onChange={this.handleChange} /> <label htmlFor="password">Choose a password</label> <input type="password" name="password" id="password" value={password} onChange={this.handleChange} /> <button className="general-submit" children="Get Started" /> <p>Already have an account? <Link className="login-btn" to="/login">Login here</Link></p> </form> </div> ); } } export default Register; I’ll go over the file top to bottom. At the top, we import everything we need from React, firebase, and react-router. 
I am importing our Login component to easily switch to Login if our user already has an account. After that, we’ll create a constructor to set up our default state. We add an onChange handler for our inputfields (otherwise React doesn’t let us edit those fields) and an OnSubmit handler when our form will be submitted. For now, we only log ‘Submitting…’ to make sure this is working. We’ll add our firebase code in a minute! After our functions we’ll add our jsx code inside the render() method. Our input fields need a name-attribute. Because this is what we target in our onChange function with [e.target.name] to change the current state. The value of the input-fields will be equal to their {state} value and we also add an onChange attribute that is equal to our handleChange() function. Update Firebase Authentication settings Before we can start building user registration and login, we need to activate authentication in our firebase project. Head over to firebase and select the ‘Authentication’ tab from the left sidebar menu. In there, head over to the ‘inlog-method’ tab and enable email/password authentication. Once done, and our /register page is rendering correctly on the page, we can add our firebase code to register our users. Please change our onSubmit function to the following: handleSubmit = e => { e.preventDefault(); const {email, username, password} = this.state; firebase .auth() .createUserWithEmailAndPassword(email, password) .then(() => { const user = firebase.auth().currentUser; user .updateProfile({displayName: username}) .then(() => { this.props.history.push('/'); }) .catch(error => { this.setState({error}); }); }) .catch(error => { this.setState({error}); }); } First, we’ll declare all the variables from our state with ES6 destructuring like this: const { email, username, and password }. After that, we can do our Firebase call. This one access a lot of functions at once. firebase.auth() gives us access to the firebase authentication methods. 
Next we call createUserWithEmailAndPassword(). I mean... have you ever seen an easier way to register users to your database? I don't think so! The user's password is automatically hashed and we automatically get a user UID. I'm in love!

createUserWithEmailAndPassword() doesn't let us set a username in the same call, so we access auth().currentUser directly after registration and set the username ourselves. All Firebase calls return Promises, so we can chain them with .then(() => {...}) and .catch(error => {...}) to catch all errors. We have an error field in our state, which is displayed on the page when we have errors, so after each catch we set the error in state. As you start playing with this, you will see that Firebase gives us very clear error messages for everything that can go wrong.

Login existing users

The Firebase registration function automatically logs in new users. That's why we redirect back to the homepage after registration: the user is already logged in. You can turn this off in your Firebase settings, and you can also add extras like requiring email confirmation first, but I am not going to cover that in this post.

Let's build something to log in existing users, because for now we lose our login state when we navigate to different pages. We will add something to keep track of that state in this login function.

The login code is actually exactly the same as our registration code, except for the handleSubmit function, which looks like this:

```jsx
handleSubmit = e => {
  e.preventDefault();
  const { email, password } = this.state;
  firebase
    .auth()
    .signInWithEmailAndPassword(email, password)
    .then(user => {
      this.props.history.push('/');
    })
    .catch(error => {
      this.setState({ error });
    });
};
```

This is the handleSubmit event in our Login.js file. All the rest is the same as our registration file, except that for login we only use email and password, no username. Even the handleSubmit code looks the same.
We just swap the createUserWithEmailAndPassword() function for the signInWithEmailAndPassword() function.

Now... how do we remain logged in while switching views? We'll be updating our index.js file for this:

```jsx
// index.js

// Import the firebase package
import firebase, { auth, provider } from './firebase.js';

// Add a constructor to the AppRouter component
constructor(props) {
  super(props);
  this.state = { user: null };
}

// Add the componentDidMount lifecycle method
componentDidMount() {
  auth.onAuthStateChanged(user => {
    if (user) {
      this.setState({ user });
    }
  });
}
```

We need to include Firebase at the top of our index file to be able to use Firebase functions. We also add a constructor to initialize our starting state. When our component mounts, we check with Firebase whether the auth state changed. If there is a user, we put that user back into our React state. So we're actually pulling our Firebase state into our React state. (If you don't log out of Firebase, you will automatically be logged in again when you visit the webpage later on.)

Logout the current user

Let's build a function to log the current user out of the application. We'll just run a logout function when clicking on a link, so we don't need extra navigation or a new component for this.

Add a log out link to the navigation (an element in index.js). I just use a regular <a> element with an onClick handler that runs our logout function. Here is the logout function:

```jsx
logOutUser = () => {
  firebase.auth().signOut()
    .then(() => {
      window.location = '/';
    });
};
```

(Note: the redirect belongs inside a callback; passing `window.location = "/"` directly to .then() would redirect immediately, before sign-out finishes.)

This logs our user out of Firebase and afterwards hard-redirects to the homepage. If you don't want the hard redirect, you can just set the React user state back to null. But with the hard redirect, our componentDidMount checks the actual Firebase state and sees that no one is logged in anymore. If the logout function throws an error, make sure to bind this new logOutUser function in our AppRouter constructor.

What's next?
Phew... this already feels like a lot, actually. I am going to split this post up into two parts: this part, which does the Firebase hookup with authentication, and a second part, where we will be building the chatbox itself.

Let me know if my examples and code are not clear! I am learning how to do React & Firebase AND how to write decent blog posts at the same time. But hopefully it all works fine for you as well, and we can head over to PART 2: Building the chatbox!

You can see a live demo of this application here:. You need to register an account first before you can see the chat. I did not put any effort into styling. The Git repo of this project can be found here:. Feel free to have a look around!

15 comments

Hello, thanks a lot for the tutorial, it's a good step-by-step explanation. One comment though: it would be nice to show what result we're supposed to get when we test the code, like maybe a screenshot of your user having been saved in the Firebase console, because I'm having trouble seeing if what I did is working. But thanks for the initiative anyway, appreciate it! Camille

Hi Camille, thank you for your feedback! I'll try to add some screenshots of console logs or Firebase results soon.

You know, you have wasted my time!!! I came from googling to see chat using Firebase and React, not login, logout and register pages. I already know that; I just followed you through your code (by the way, a newbie to React will fail if he follows your instructions) and finally when the real part starts you cut off until god knows when you will continue, or never (I read in a comment you switched to Vue). I know I sound harsh but... YOU REALLY WASTED MY TIME. Please take an hour from your time and finish it as soon as possible to avoid wasting the time of another one.

Hi Nassim, sorry to hear that. You do have a point that this post was never finished. I wrote it a while ago and assumed that no one would read it. Thank you for posting this comment!
It's what I needed to hear to find the time to finish it off. In the meantime, maybe you can find your answers in the GitHub repo? The version in the repo is the same version as the one that's running online. Keep an eye on this post; I promise you I'll finish a part 2!

Very easy and well explained. I am now wondering if there is any way we can update the username for previous messages.

Hi Luk, changing your username or password is something I did not include in this simple example. The same goes for editing or removing messages that have been sent.

Unique and authentic information indeed. Thanks

Hello, first of all, thanks a lot for the tutorial, it was really helpful. Second, I tried to execute the program on my computer, but when I try to send a message to the chat I get a "PERMISSION_DENIED" error. Can you help me please?
Update: I believe it is a database error, can you please share the database creation?
Update: I found the error, you have to enable write and read from your Firebase console. Thanks a lot!

Great tut! We are waiting for part 2. Thank you

Hi there, thanks for commenting and apologies for the delay on part 2. I'll make work of it! In the meantime, I uploaded the project to Git:. Feel free to have a look at the source; hopefully it can help you a bit faster than me writing part 2.

Cool tutorial for beginners! Thanks for sharing the repo as well. Is part 2 still in the works?

I'm reading the React Firebase chat app. When is part 2 coming?

Hi Kalyan, thanks for letting me know you're waiting on this one! I swapped over to Vue instead of React but will try to find the time to finish this course off as soon as possible. The project is done and working, so I've added a demo link in the article and will put all files on Git somewhere this week as well. Part 2 coming soon then!
https://weichie.com/nl/blog/react-firebase-chat-app/
I am running, on a Linux machine, a Python script which creates a child process using subprocess.check_output(), as follows:

```python
subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)
```

Your problem is with using subprocess.check_output: you are correct, you can't get the child PID using that interface. Use Popen instead:

```python
import subprocess

proc = subprocess.Popen(["ls", "-l"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Here you can get the PID
child_pid = proc.pid

# Now we can wait for the child to complete
(output, error) = proc.communicate()

if error:
    print("error:", error)

print("output:", output)
```

To make sure you kill the child on exit:

```python
import os
import signal
import atexit

def kill_child():
    if child_pid is not None:
        os.kill(child_pid, signal.SIGTERM)

atexit.register(kill_child)
```
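One caveat to the atexit approach above: atexit handlers never run if the parent is killed with SIGKILL or crashes hard, so the child can still be orphaned. On Linux, the kernel itself can be asked to signal the child when the parent dies, via prctl(PR_SET_PDEATHSIG). The sketch below calls prctl through ctypes, since neither the constant nor the call is exposed by the standard library; the value PR_SET_PDEATHSIG = 1 comes from <sys/prctl.h>. This is Linux-only, and the "parent death" signal fires when the thread that spawned the child exits.

```python
import ctypes
import signal
import subprocess
import sys

PR_SET_PDEATHSIG = 1  # value from <sys/prctl.h>; Linux-only


def set_pdeathsig():
    """Runs in the child between fork() and exec(): ask the kernel
    to deliver SIGTERM to this child when the parent dies."""
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    libc.prctl(PR_SET_PDEATHSIG, signal.SIGTERM)


proc = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE,
    preexec_fn=set_pdeathsig,  # executed in the child before exec
)
out, _ = proc.communicate()
print(out.decode().strip())
```

With this in place, even a SIGKILLed parent takes the child down with it, without any cleanup code having to run in the parent.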
https://codedump.io/share/8GwWjGzqAiH/1/how-to-kill-a-python-child-process-created-with-subprocesscheckoutput-when-the-parent-dies
Documentation/spi/spi-summary

Overview of Linux kernel SPI support
====================================

02-Feb-2012

What is SPI?
------------

The "Serial Peripheral Interface" (SPI) is a synchronous four wire serial link used to connect microcontrollers to sensors, memory, and peripherals. It's a simple "de facto" standard, not complicated enough to acquire a standardization body. SPI uses a master/slave configuration.

The three signal wires hold a clock (SCK, often on the order of 10 MHz), and parallel data lines with "Master Out, Slave In" (MOSI) or "Master In, Slave Out" (MISO) signals. There are four clocking modes through which data is exchanged; mode-0 and mode-3 are most commonly used. Each clock cycle shifts data out and data in; the clock doesn't cycle except when there is a data bit to shift. Not all data bits are used though; not every protocol uses those full duplex capabilities.

SPI masters use a fourth "chip select" line to activate a given SPI slave device, so those three signal wires may be connected to several chips in parallel. All SPI slaves support chipselects; they are usually active low signals, labeled nCSx for slave 'x' (e.g. nCS0). Some devices have other signals, often including an interrupt to the master.

Unlike serial busses like USB or SMBus, even low level protocols for SPI slave functions are usually not interoperable between vendors (except for commodities like SPI memory chips).

  - SPI may be used for request/response style device protocols, as with
    touchscreen sensors and memory chips.

  - It may also be used to stream data in either direction (half duplex),
    or both of them at the same time (full duplex).

  - Some devices may use eight bit words. Others may use different word
    lengths, such as streams of 12-bit or 20-bit digital samples.

  - Words are usually sent with their most significant bit (MSB) first,
    but sometimes the least significant bit (LSB) goes first instead.

  - Sometimes SPI is used to daisy-chain devices, like shift registers.

In the same way, SPI slaves will only rarely support any kind of automatic discovery/enumeration protocol.
The tree of slave devices accessible from a given SPI master will normally be set up manually, with configuration tables.

SPI is only one of the names used by such four-wire protocols, and most controllers have no problem handling "MicroWire" (think of it as half-duplex SPI, for request/response protocols), SSP ("Synchronous Serial Protocol"), PSP ("Programmable Serial Protocol"), and other related protocols.

Some chips eliminate a signal line by combining MOSI and MISO, and limiting themselves to half-duplex at the hardware level. In fact some SPI chips have this signal mode as a strapping option. These can be accessed using the same programming interface as SPI, but of course they won't handle full duplex transfers. You may find such chips described as using "three wire" signaling: SCK, data, nCSx. (That data line is sometimes called MOMI or SISO.)

Microcontrollers often support both master and slave sides of the SPI protocol. This document (and Linux) currently only supports the master side of SPI interactions.

Who uses it? On what kinds of systems?
---------------------------------------

Linux developers using SPI are probably writing device drivers for embedded systems boards. SPI is used to control external chips, and it is also a protocol supported by every MMC or SD memory card. (The older "DataFlash" cards, predating MMC cards but using the same connectors and card shape, support only SPI.) Some PC hardware uses SPI flash for BIOS code.

SPI slave chips range from digital/analog converters used for analog sensors and codecs, to memory, to peripherals like USB controllers or Ethernet adapters; and more.

Most systems using SPI will integrate a few devices on a mainboard. Some provide SPI links on expansion connectors; in cases where no dedicated SPI controller exists, GPIO pins can be used to create a low speed "bitbanging" adapter.
Very few systems will "hotplug" an SPI controller; the reasons to use SPI focus on low cost and simple operation, and if dynamic reconfiguration is important, USB will often be a more appropriate low-pincount peripheral bus.

Many microcontrollers that can run Linux integrate one or more I/O interfaces with SPI modes. Given SPI support, they could use MMC or SD cards without needing a special purpose MMC/SD/SDIO controller.

I'm confused. What are these four SPI "clock modes"?
-----------------------------------------------------

It's easy to be confused here, and the vendor documentation you'll find isn't necessarily helpful. The four modes combine two mode bits:

  - CPOL indicates the initial clock polarity. CPOL=0 means the clock
    starts low, so the first (leading) edge is rising, and the second
    (trailing) edge is falling. CPOL=1 means the clock starts high, so
    the first (leading) edge is falling.

  - CPHA indicates the clock phase used to sample data; CPHA=0 says
    sample on the leading edge, CPHA=1 means the trailing edge. Since
    the signal needs to stabilize before it's sampled, CPHA=0 implies
    that its data is written half a clock before the first clock edge.
    The chipselect may have made it become available.

Chip specs won't always say "uses SPI mode X" in as many words, but their timing diagrams will make the CPOL and CPHA modes clear.

In the SPI mode number, CPOL is the high order bit and CPHA is the low order bit. So when a chip's timing diagram shows the clock starting low (CPOL=0) and data stabilized for sampling during the trailing clock edge (CPHA=1), that's SPI mode 1.

Note that the clock mode is relevant as soon as the chipselect goes active. So the master must set the clock to inactive before selecting a slave, and the slave can tell the chosen polarity by sampling the clock level when its select line goes active. That's why many devices support for example both modes 0 and 3: they don't care about polarity, and always clock data in/out on rising clock edges.
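The mode-number encoding described above (CPOL as the high order bit, CPHA as the low order bit) can be checked mechanically. Here is a small standalone illustration in ordinary userspace C, not kernel code; the two flag values match the SPI_CPHA and SPI_CPOL definitions in <linux/spi/spi.h>:

    #include <stdio.h>

    /* Same bit values as the SPI_CPHA/SPI_CPOL flags in <linux/spi/spi.h>. */
    #define SPI_CPHA 0x01   /* clock phase: sample on trailing edge */
    #define SPI_CPOL 0x02   /* clock polarity: clock starts high */

    int main(void)
    {
        for (int cpol = 0; cpol <= 1; cpol++) {
            for (int cpha = 0; cpha <= 1; cpha++) {
                int mode = (cpol ? SPI_CPOL : 0) | (cpha ? SPI_CPHA : 0);
                printf("CPOL=%d CPHA=%d -> SPI mode %d\n", cpol, cpha, mode);
            }
        }
        return 0;
    }

Running through the four combinations reproduces modes 0 through 3, with mode 1 being the CPOL=0/CPHA=1 case from the timing-diagram example above.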
How do these driver programming interfaces work?
------------------------------------------------

The <linux/spi/spi.h> header file includes kerneldoc, as does the main source code. The basic I/O model is a set of queued messages, submitted asynchronously and reported through completion callbacks. There are also some simple synchronous wrappers for those calls, including ones for common transaction types like writing a command and then reading its response.

There are two types of SPI driver, here called:

  Controller drivers ... controllers may be built into System-On-Chip
    processors, and often support both Master and Slave roles. These
    drivers touch hardware registers and may use DMA. Or they can be
    PIO bitbangers, needing just GPIO pins.

  Protocol drivers ... these pass messages through the controller driver
    to communicate with a Slave or Master device on the other side of an
    SPI link. So for example one protocol driver might talk to the MTD
    layer to export data to filesystems stored on SPI flash like
    DataFlash; and others might control audio interfaces, present
    touchscreen sensors as input interfaces, or monitor temperature and
    voltage levels during industrial processing. And those might all be
    sharing the same controller driver.

A "struct spi_device" encapsulates the master-side interface between those two types of driver. At this writing, Linux has no slave side programming interface.

There is a minimal core of SPI programming interfaces, focussing on using the driver model to connect controller and protocol drivers using device tables provided by board specific initialization code. SPI shows up in sysfs in several locations:

   /sys/devices/.../CTLR ... physical node for a given SPI controller

   /sys/devices/.../CTLR/spiB.C ... spi_device on bus "B",
	chipselect C, accessed through CTLR.

   /sys/bus/spi/devices/spiB.C ... symlink to that physical
	.../CTLR/spiB.C device

   /sys/devices/.../CTLR/spiB.C/modalias ... identifies the driver
	that should be used with this device (for hotplug/coldplug)

   /sys/bus/spi/drivers/D ...
driver for one or more spi*.* devices

   /sys/class/spi_master/spiB ... symlink (or actual device node) to
	a logical node which could hold class related state for the
	controller managing bus "B". All spiB.* devices share one
	physical SPI bus segment, with SCLK, MOSI, and MISO.

Note that the actual location of the controller's class state depends on whether you enabled CONFIG_SYSFS_DEPRECATED or not. At this time, the only class-specific state is the bus number ("B" in "spiB"), so those /sys/class entries are only useful to quickly identify busses.

How does board-specific init code declare SPI devices?
------------------------------------------------------

Linux needs several kinds of information to properly configure SPI devices. That information is normally provided by board-specific code, even for chips that do support some of automated discovery/enumeration.

DECLARE CONTROLLERS

The first kind of information is a list of what SPI controllers exist. For System-on-Chip (SOC) based boards, these will usually be platform devices, and the controller may need some platform_data in order to operate properly. The "struct platform_device" will include resources like the physical address of the controller's first register and its IRQ.

Platforms will often abstract the "register SPI controller" operation, maybe coupling it with code to initialize pin configurations, so that the arch/.../mach-*/board-*.c files for several boards can all share the same basic controller setup code. This is because most SOCs have several SPI-capable controllers, and only the ones actually usable on a given board should normally be set up and registered.

So for example arch/.../mach-*/board-*.c files might have code like:

	#include <mach/spi.h>	/* for mysoc_spi_data */

	/* if your mach-* infrastructure doesn't support kernels that can
	 * run on multiple boards, pdata wouldn't benefit from "__init".
	 */
	static struct mysoc_spi_data pdata __initdata = { ... };

	static __init board_init(void)
	{
		...
		/* this board only uses SPI controller #2 */
		mysoc_register_spi(2, &pdata);
		...
	}

And SOC-specific utility code might look something like:

	#include <mach/spi.h>

	static struct platform_device spi2 = { ... };

	void mysoc_register_spi(unsigned n, struct mysoc_spi_data *pdata)
	{
		struct mysoc_spi_data *pdata2;

		pdata2 = kmalloc(sizeof *pdata2, GFP_KERNEL);
		*pdata2 = pdata;
		...
		if (n == 2) {
			spi2->dev.platform_data = pdata2;
			register_platform_device(&spi2);

			/* also: set up pin modes so the spi2 signals are
			 * visible on the relevant pins ... bootloaders on
			 * production boards may already have done this, but
			 * developer boards will often need Linux to do it.
			 */
		}
		...
	}

Notice how the platform_data for boards may be different, even if the same SOC controller is used. For example, on one board SPI might use an external clock, where another derives the SPI clock from current settings of some master clock.

DECLARE SLAVE DEVICES

The second kind of information is a list of what SPI slave devices exist on the target board, often with some board-specific data needed for the driver to work correctly.

Normally your arch/.../mach-*/board-*.c files would provide a small table listing the SPI devices on each board. (This would typically be only a small handful.) That might look like:

	static struct ads7846_platform_data ads_info = {
		.vref_delay_usecs	= 100,
		.x_plate_ohms		= 580,
		.y_plate_ohms		= 410,
	};

	static struct spi_board_info spi_board_info[] __initdata = {
	{
		.modalias	= "ads7846",
		.platform_data	= &ads_info,
		.mode		= SPI_MODE_0,
		.irq		= GPIO_IRQ(31),
		.max_speed_hz	= 120000 /* max sample rate at 3V */ * 16,
		.bus_num	= 1,
		.chip_select	= 0,
	},
	};

Again, notice how board-specific information is provided; each chip may need several types. This example shows generic constraints like the fastest SPI clock to allow (a function of board voltage in this case) or how an IRQ pin is wired, plus chip-specific constraints like an important delay that's changed by the capacitance at one pin.
(There's also "controller_data", information that may be useful to the controller driver. An example would be peripheral-specific DMA tuning data or chipselect callbacks. This is stored in spi_device later.)

The board_info should provide enough information to let the system work without the chip's driver being loaded. The most troublesome aspect of that is likely the SPI_CS_HIGH bit in the spi_device.mode field, since sharing a bus with a device that interprets chipselect "backwards" is not possible until the infrastructure knows how to deselect it.

Then your board initialization code would register that table with the SPI infrastructure, so that it's available later when the SPI master controller driver is registered:

	spi_register_board_info(spi_board_info, ARRAY_SIZE(spi_board_info));

Like with other static board-specific setup, you won't unregister those.

The widely used "card" style computers bundle memory, cpu, and little else onto a card that's maybe just thirty square centimeters. On such systems, your arch/.../mach-.../board-*.c file would primarily provide information about the devices on the mainboard into which such a card is plugged. That certainly includes SPI devices hooked up through the card connectors!

NON-STATIC CONFIGURATIONS

Developer boards often play by different rules than product boards, and one example is the potential need to hotplug SPI devices and/or controllers.

For those cases you might need to use spi_busnum_to_master() to look up the spi bus master, and will likely need spi_new_device() to provide the board info based on the board that was hotplugged. Of course, you'd later call at least spi_unregister_device() when that board is removed.

When Linux includes support for MMC/SD/SDIO/DataFlash cards through SPI, those configurations will also be dynamic. Fortunately, such devices all support basic device identification probes, so they should hotplug normally.

How do I write an "SPI Protocol Driver"?
----------------------------------------

Most SPI drivers are currently kernel drivers, but there's also support for userspace drivers. Here we talk only about kernel drivers.

SPI protocol drivers somewhat resemble platform device drivers:

	static struct spi_driver CHIP_driver = {
		.driver = {
			.name		= "CHIP",
			.owner		= THIS_MODULE,
			.pm		= &CHIP_pm_ops,
		},

		.probe		= CHIP_probe,
		.remove		= CHIP_remove,
	};

The driver core will automatically attempt to bind this driver to any SPI device whose board_info gave a modalias of "CHIP". Your probe() code might look like this unless you're creating a device which is managing a bus (appearing under /sys/class/spi_master).

	static int CHIP_probe(struct spi_device *spi)
	{
		struct CHIP			*chip;
		struct CHIP_platform_data	*pdata;

		/* assuming the driver requires board-specific data: */
		pdata = &spi->dev.platform_data;
		if (!pdata)
			return -ENODEV;

		/* get memory for driver's per-chip state */
		chip = kzalloc(sizeof *chip, GFP_KERNEL);
		if (!chip)
			return -ENOMEM;
		spi_set_drvdata(spi, chip);

		... etc
		return 0;
	}

As soon as it enters probe(), the driver may issue I/O requests to the SPI device using "struct spi_message". When remove() returns, or after probe() fails, the driver guarantees that it won't submit any more such messages.

  - An spi_message is a sequence of protocol operations, executed as one
    atomic sequence. SPI driver controls include:

      + when bidirectional reads and writes start ... by how its sequence
        of spi_transfer requests is arranged;

      + which I/O buffers are used ... each spi_transfer wraps a buffer
        for each transfer direction, supporting full duplex (two pointers,
        maybe the same one in both cases) and half duplex (one pointer is
        NULL) transfers;

      + optionally defining short delays after transfers ... using the
        spi_transfer.delay_usecs setting (this delay can be the only
        protocol effect, if the buffer length is zero);

      + whether the chipselect becomes inactive after a transfer and any
        delay ...
        by using the spi_transfer.cs_change flag;

      + hinting whether the next message is likely to go to this same
        device ... using the spi_transfer.cs_change flag on the last
        transfer in that atomic group, and potentially saving costs for
        chip deselect and select operations.

  - Follow standard kernel rules, and provide DMA-safe buffers in your
    messages. That way controller drivers using DMA aren't forced to make
    extra copies unless the hardware requires it (e.g. working around
    hardware errata that force the use of bounce buffering).

    If standard dma_map_single() handling of these buffers is
    inappropriate, you can use spi_message.is_dma_mapped to tell the
    controller driver that you've already provided the relevant DMA
    addresses.

  - The basic I/O primitive is spi_async(). Async requests may be issued
    in any context (irq handler, task, etc) and completion is reported
    using a callback provided with the message. After any detected error,
    the chip is deselected and processing of that spi_message is aborted.

  - There are also synchronous wrappers like spi_sync(), and wrappers
    like spi_read(), spi_write(), and spi_write_then_read(). These may be
    issued only in contexts that may sleep, and they're all clean (and
    small, and "optional") layers over spi_async().

  - The spi_write_then_read() call, and convenience wrappers around it,
    should only be used with small amounts of data where the cost of an
    extra copy may be ignored. It's designed to support common RPC-style
    requests, such as writing an eight bit command and reading a sixteen
    bit response -- spi_w8r16() being one of its wrappers, doing exactly
    that.

Some drivers may need to modify spi_device characteristics like the transfer mode, wordsize, or clock rate. This is done with spi_setup(), which would normally be called from probe() before the first I/O is done to the device. However, that can also be called at any time that no message is pending for that device.
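The "write a command, then read a response" message described above can be sketched outside the kernel. The structs below are deliberately simplified stand-ins for the real spi_transfer/spi_message (which live in <linux/spi/spi.h>, chain transfers with list_head, and are submitted with spi_sync() or spi_async()); only the chaining of two half-duplex transfers into one atomic message is illustrated, and the command byte is a made-up example:

	#include <stdint.h>
	#include <stdio.h>

	/* Simplified stand-ins for the kernel's spi_transfer/spi_message. */
	struct spi_transfer {
		const void *tx_buf;        /* data to shift out, or NULL */
		void *rx_buf;              /* buffer to shift in, or NULL */
		unsigned len;              /* bytes in this transfer */
		struct spi_transfer *next; /* kernel uses list_head instead */
	};

	struct spi_message {
		struct spi_transfer *head;
		struct spi_transfer *tail;
		unsigned n_transfers;
	};

	static void spi_message_init(struct spi_message *m)
	{
		m->head = m->tail = NULL;
		m->n_transfers = 0;
	}

	static void spi_message_add_tail(struct spi_transfer *t,
					 struct spi_message *m)
	{
		t->next = NULL;
		if (m->tail)
			m->tail->next = t;
		else
			m->head = t;
		m->tail = t;
		m->n_transfers++;
	}

	int main(void)
	{
		uint8_t cmd = 0x9f;           /* hypothetical command byte */
		uint8_t resp[2] = { 0, 0 };   /* room for a 16-bit response */
		struct spi_transfer t[2] = { { 0 }, { 0 } };
		struct spi_message m;

		spi_message_init(&m);

		/* transfer 0: half duplex write (rx_buf stays NULL) */
		t[0].tx_buf = &cmd;
		t[0].len = 1;
		spi_message_add_tail(&t[0], &m);

		/* transfer 1: half duplex read (tx_buf stays NULL) */
		t[1].rx_buf = resp;
		t[1].len = 2;
		spi_message_add_tail(&t[1], &m);

		/* a real driver would now call spi_sync(spi, &m) */
		printf("message with %u transfers\n", m.n_transfers);
		return 0;
	}

Give or take the simplification, this two-transfer shape is what the spi_w8r16() convenience wrapper builds internally.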
While "spi_device" would be the bottom boundary of the driver, the upper boundaries might include sysfs (especially for sensor readings), the input layer, ALSA, networking, MTD, the character device framework, or other Linux subsystems.

Note that there are two types of memory your driver must manage as part of interacting with SPI devices.

  - I/O buffers use the usual Linux rules, and must be DMA-safe. You'd
    normally allocate them from the heap or free page pool. Don't use
    the stack, or anything that's declared "static".

  - The spi_message and spi_transfer metadata used to glue those I/O
    buffers into a group of protocol transactions. These can be allocated
    anywhere it's convenient, including as part of other allocate-once
    driver data structures. Zero-init these.

If you like, spi_message_alloc() and spi_message_free() convenience routines are available to allocate and zero-initialize an spi_message with several transfers.

How do I write an "SPI Master Controller Driver"?
-------------------------------------------------

An SPI controller will probably be registered on the platform_bus; write a driver to bind to the device, whichever bus is involved.

The main task of this type of driver is to provide an "spi_master". Use spi_alloc_master() to allocate the master, and spi_master_get_devdata() to get the driver-private data allocated for that device.

	struct spi_master	*master;
	struct CONTROLLER	*c;

	master = spi_alloc_master(dev, sizeof *c);
	if (!master)
		return -ENODEV;

	c = spi_master_get_devdata(master);

The driver will initialize the fields of that spi_master, including the bus number (maybe the same as the platform device ID) and three methods used to interact with the SPI core and SPI protocol drivers. It will also initialize its own internal state. (See below about bus numbering and those methods.)

After you initialize the spi_master, then use spi_register_master() to publish it to the rest of the system.
At that time, device nodes for the controller and any predeclared spi devices will be made available, and the driver model core will take care of binding them to drivers.

If you need to remove your SPI controller driver, spi_unregister_master() will reverse the effect of spi_register_master().

BUS NUMBERING

Bus numbering is important, since that's how Linux identifies a given SPI bus (shared SCK, MOSI, MISO). Valid bus numbers start at zero. On SOC systems, the bus numbers should match the numbers defined by the chip manufacturer. For example, hardware controller SPI2 would be bus number 2, and spi_board_info for devices connected to it would use that number.

If you don't have such hardware-assigned bus number, and for some reason you can't just assign them, then provide a negative bus number. That will then be replaced by a dynamically assigned number. You'd then need to treat this as a non-static configuration (see above).

SPI MASTER METHODS

    master->setup(struct spi_device *spi)
	** BUG ALERT: for some reason the first version of
	** many spi_master drivers seems to get this wrong.
	** When you code setup(), ASSUME that the controller
	** is actively processing transfers for another device.

    master->cleanup(struct spi_device *spi)
	Your controller driver may use spi_device.controller_state to hold
	state it dynamically associates with that device. If you do that,
	be sure to provide the cleanup() method to free that state.

    master->prepare_transfer_hardware(struct spi_master *master)
	This will be called by the queue mechanism to signal to the driver
	that a message is coming in soon, so the subsystem requests the
	driver to prepare the transfer hardware by issuing this call.
	This may sleep.

    master->unprepare_transfer_hardware(struct spi_master *master)
	This will be called by the queue mechanism to signal to the driver
	that there are no more messages pending in the queue and it may
	relax the hardware (e.g. by power management calls). This may sleep.
    master->transfer_one_message(struct spi_master *master,
				 struct spi_message *mesg)
	The subsystem calls the driver to transfer a single message while
	queuing transfers that arrive in the meantime. When the driver is
	finished with this message, it must call
	spi_finalize_current_message() so the subsystem can issue the next
	message. This may sleep.

    master->transfer_one(struct spi_master *master, struct spi_device *spi,
			 struct spi_transfer *transfer)
	The subsystem calls the driver to transfer a single transfer while
	queuing transfers that arrive in the meantime. When the driver is
	finished with this transfer, it must call
	spi_finalize_current_transfer() so the subsystem can issue the next
	transfer. This may sleep.

	Note: transfer_one and transfer_one_message are mutually exclusive;
	when both are set, the generic subsystem does not call your
	transfer_one callback.

	Return values:
	    negative errno: error
	    0: transfer is finished
	    1: transfer is still in progress

DEPRECATED METHODS

    master->transfer(struct spi_device *spi, struct spi_message *message)
	This must not sleep. Its responsibility is to arrange that the
	transfer happens and its complete() callback is issued. The two
	will normally happen later, after other transfers complete, and
	if the controller is idle it will need to be kickstarted. This
	method is not used on queued controllers and must be NULL if
	transfer_one_message() and (un)prepare_transfer_hardware() are
	implemented.

SPI MESSAGE QUEUE

If you are happy with the standard queueing mechanism provided by the SPI subsystem, just implement the queued methods specified above. Using the message queue has the upside of centralizing a lot of code and providing pure process-context execution of methods. The message queue can also be elevated to realtime priority on high-priority SPI traffic.

Unless the queueing mechanism in the SPI subsystem is selected, the bulk of the driver will be managing the I/O queue fed by the now deprecated function transfer().
That queue could be purely conceptual. For example, a driver used only for low-frequency sensor access might be fine using synchronous PIO. But the queue will probably be very real, using message->queue, PIO, often DMA (especially if the root filesystem is in SPI flash), and execution contexts like IRQ handlers, tasklets, or workqueues (such as keventd). Your driver can be as fancy, or as simple, as you need. Such a transfer() method would normally just add the message to a queue, and then start some asynchronous transfer engine (unless it's already running). THANKS TO --------- Contributors to Linux-SPI discussions include (in alphabetical order, by last name): Mark Brown David Brownell Russell King Grant Likely Dmitry Pervushin Stephen Street Mark Underwood Andrew Victor Linus Walleij Vitaly Wool
https://android.googlesource.com/kernel/common.git/+/refs/tags/ASB-2019-12-05_4.9-o-mr1/Documentation/spi/spi-summary
#include <wx/listbox.h>. On some platforms (notably wxGTK2) pressing the enter key is handled as an equivalent of a double-click. On wxGTK this method can only determine the number of items per page if there is at least one item in the listbox. The index must be valid, i.e. less than the value returned by GetCount(), otherwise an assert is triggered. Notably, this function can't be called if the control is empty. MSW-specific function for setting custom tab stop distances. Tab stops are expressed in dialog unit widths, i.e. "quarters of the average character width for the font that is selected into the list box". Set the specified item to be the first visible item.
https://docs.wxwidgets.org/trunk/classwx_list_box.html
* Thomas Gleixner <tglx@linutronix.de> wrote:

> Yep, it's all the same scheme. Most of the offending code uses
> MUTEX_LOCKED in an init function and plays the down, and up from a
> different context game, which triggers the deadlock/owner verify. Not
> hard to fix, but at some places it takes a bit, until you see the
> intention of the driver hacker.

the NFS ones seemed to be the least clear ones. I'm glad you converted
those already :-)

> The most surprising one was in driver/base. I did not expect that new
> 2.5/6 code uses those tricks too.

it is not strictly a bug, but that technique was discouraged for years -
completions are cleaner and faster for that purpose anyway. (they were
designed for what in the semaphore case is the slowpath.)

> Fixes for aic7xxx and sym53c8xx_2 attached.

Applied. The sym53c8xx_2 looks good. aic7xxx is good too except for a
minor cleanup issue: i've changed all _sem symbols to be _done symbols.
It's not a semaphore anymore, lets avoid the namespace-rotting effect.

I've put these into -U8 so anyone hitting aic7xxx or sym53c8xx_2 should
re-download the -U8 patch. (others who have already downloaded it should
not bother.)

	Ingo
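The "completions are cleaner" point translates outside the kernel too: a completion is a one-shot "this work is done" flag that one context waits on and another signals, instead of a lock being released by a context that never took it. A rough user-space analogy in Python (my own illustration, not kernel code):

```python
# User-space sketch of the completion pattern discussed above: rather than
# abusing a pre-locked mutex that a *different* thread unlocks, the waiter
# blocks on a one-shot "done" event that the worker signals when finished.
import threading

def start_worker(results):
    done = threading.Event()          # plays the role of a completion

    def worker():
        results.append(42)            # the actual work
        done.set()                    # complete(): wake the waiter

    threading.Thread(target=worker).start()
    return done

results = []
done = start_worker(results)
done.wait(timeout=5)                  # wait_for_completion()
print(results)                        # -> [42]
```

The design benefit mirrors Ingo's point: there is no "owner" to verify, so lock-debugging machinery has nothing to complain about, and the signal/wait pair states the intent directly.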
http://lkml.org/lkml/2004/10/20/112
I have a python file B with all my functions and a main loop that runs for 0.25 s, and I want to call that file repeatedly from a loop in my file A. Does that make sense? What I did below only runs the loop from file B once:

#FileA
while 1:
    from FileB import *

#FileB
while t < 0.25:
    #my stuff

The import statement only reads the target module one time. If you have control of both files, I'd suggest that you make your loop a function in file B:

def main():
    while t < 0.25:
        #my stuff

if __name__ == '__main__':
    main()

Then you can call it repeatedly from file A:

from fileB import main as Bmain
while 1:
    Bmain()

If you don't have control of the source code for the files (meaning: if the code comes from someone else), there are a few options. Probably the easiest and fastest to code would be to use the os.system(command) function to run the contents of fileB in a separate process.
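The claim that "import only reads the target module one time" is easy to demonstrate, along with the escape hatch importlib.reload(). The module name and contents below are made up for the demo; the point is that a repeated import is a cache hit:

```python
# Demonstrates that a second "import" is a no-op (module cache hit), while
# importlib.reload() re-executes the module body. Writes a throwaway module
# into a temp directory so the example is self-contained.
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)
path = os.path.join(tmp, "file_b.py")

with open(path, "w") as f:
    f.write("VALUE = 1\n")
import file_b
first = file_b.VALUE             # 1

with open(path, "w") as f:
    f.write("VALUE = 2  # changed\n")
import file_b                    # cached in sys.modules: file NOT re-read
still_old = file_b.VALUE         # still 1

importlib.reload(file_b)         # forces re-execution of the module body
now_new = file_b.VALUE           # 2
print(first, still_old, now_new)
```

That said, for the original problem the answer's advice stands: exposing a `main()` function and calling it in a loop is cleaner than reloading a module over and over.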
https://codedump.io/share/SRS3UVSi95tG/1/call-function-from-a-file-b-in-a-loop-in-file-a
Hi, I am writing a program that allows a user to enter a string of 0s, 1s, and x's. The program will then print out all of the possible combinations that a binary number can have with the string.

Example: If the input is 1xx0, the output would be
1000
1010
1100
1110

The part that I am having trouble on is that I need to write this program using recursion. I'll let you guys know that I am terrible at recursion =(. Any hints or tips about how I should approach this program would be appreciated. Thanks

#include <stdio.h>
#include <string.h>

#define MAX_SIZE 50

// I may need this
/*
int find_x(char number[]){
    int i, loc_of_x;
    for(i = strlen(number) - 1; i >= 0; i--){
        if (number[i] == 'x'){
            loc_of_x = i;
            return loc_of_x;
        }
    }
    return 0;
}
*/

int count_x(char number[]){
    int i, number_of_x;
    for(i = 0, number_of_x = 0; i < strlen(number); i++){
        if (number[i] == 'x')
            ++number_of_x;
    }
    return number_of_x;
}

void display(char number[]){
    if (count_x(number) == 0)
        printf("%s\n", number);
}

int main(){
    char number[MAX_SIZE];

    printf("Binary number: ");
    scanf("%s", number);
    display(number);
    return 0;
}
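One way to think about the recursion: find the first 'x', try '0' and '1' in its place, and recurse; the base case is a string with no 'x' left. The poster's C program could follow the same shape (branching before the display() call). Here is a sketch of the idea in Python, not a fix of the C code above:

```python
# Recursive expansion of a 0/1/x pattern: locate the first 'x', branch on
# substituting '0' and '1', and recurse. Base case: no 'x' remains, so the
# string itself is one finished combination.
def combos(pattern):
    i = pattern.find("x")
    if i == -1:                      # base case: fully concrete string
        return [pattern]
    head, tail = pattern[:i], pattern[i + 1:]
    return combos(head + "0" + tail) + combos(head + "1" + tail)

print(combos("1xx0"))  # ['1000', '1010', '1100', '1110']
```

Each level of recursion removes one 'x', so a pattern with k x's produces 2**k combinations and recurses at most k levels deep.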
https://www.daniweb.com/programming/software-development/threads/194723/program-help
in reply to Re: OT: Ruby On Rails - your thoughts?
in thread OT: Ruby On Rails - your thoughts?

They all have their strengths and weaknesses, so why not choose the best for the task at hand? Why not use multiple template engines, multiple ORMs in the same application? Why write all of Catalyst in Perl, for that matter? Oh, wait.

def execute( *args )
  args.flatten
  ....
end

And, yes, do..end is more idiomatic, but I'm not comfortable with it yet. I'm sure I'll be making the shift soon enough. :-)

Yes, Class::DBI sucks bad, but we have DBIx::Class, which does a lot more than ActiveRecord.
http://www.perlmonks.org/?node_id=509483
I was solving this problem and got 100 points. This submission ran in 0.98s as seen below. I sorted the vector and considered every pair a[i], a[j] with i < j. Then, using their common difference, I checked if the next value exists with the help of binary search (basic arithmetic progression). Since this is very slow (1 second time limit), I decided to try to use pragmas to decrease runtime. I don't know much about that so I failed to get 100 points. I decided to submit my code without pragmas (the original submission). It ran in 0.99s. I don't know why the exact same submission ran slower. Can someone please explain this?

For the most important part, I was annoyed about my code being slow. I decided to use unordered_map. I began to doubt the test cases. My solution with unordered_map got accepted in 0.36s, which is almost 3x faster. So I set out to blow up unordered_map: I created an input file with 2500 occurrences of 99733 using this simple Python code:

f = open("test.txt", "a")
s = "2500 "
for i in range(2500):
    s += "99733 "
f.write(s)
f.close()

The same solution which ran in 0.36s has now been running for 10 minutes.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#define ll long long
using namespace __gnu_pbds;
using namespace std;

template <class T>
using oset = tree <T, null_type, less <T>, rb_tree_tag, tree_order_statistics_node_update>;

void usaco(string name = "") {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    if (name.size()) {
        freopen((name+".txt").c_str(), "r", stdin);
        freopen((name+".out").c_str(), "w", stdout);
    }
}

int main() {
    usaco("test");
    int n, ans = 1;
    cin >> n;
    vector <int> a(n);
    unordered_map <int, int> m;
    for (int i = 0; i < n; ++i) cin >> a[i], ++m[a[i]];
    sort(begin(a), end(a));
    for (int i = 0; i < n-1; ++i) {
        for (int j = i+1; j < n; ++j) {
            int d = a[j]-a[i];
            int cur = a[j]+d;
            int t = 2;
            while (m[cur]) {
                ++t;
                cur += d;
            }
            ans = max(ans, t);
        }
    }
    cout << ans << '\n';
}

This shows how bad the test cases are. Anything above O(n^2) should not pass according to the constraints. Is it possible for the solutions to be rejudged? Also, does anyone know how to do it in O(n^2)?
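On the last question: the standard O(n^2) approach is a DP over pairs. After sorting, for each index j keep a map from common difference d to the length of the longest arithmetic progression that ends at j with that difference: it extends the best AP ending at some earlier i with the same d. The contest code above is C++; this Python sketch just shows the recurrence:

```python
# O(n^2)-time DP for the longest arithmetic progression in a list.
# dp[j][d] = length of the longest AP ending at index j with difference d
# (memory is O(n^2) in the worst case, one dict entry per pair).
def longest_ap(a):
    a = sorted(a)
    n = len(a)
    if n <= 2:
        return n
    dp = [dict() for _ in range(n)]
    best = 2
    for j in range(1, n):
        for i in range(j):
            d = a[j] - a[i]
            # extend the best AP ending at i with difference d, or start one
            cand = dp[i].get(d, 1) + 1
            if cand > dp[j].get(d, 0):
                dp[j][d] = cand
                if cand > best:
                    best = cand
    return best

print(longest_ap([1, 3, 5, 7, 2]))   # 4  (the progression 1, 3, 5, 7)
print(longest_ap([5, 5, 5]))         # 3  (difference 0 handles duplicates)
```

Note that difference 0 falls out of the recurrence naturally, so the all-equal adversarial input above is handled in the same O(n^2) bound instead of looping forever the way the unordered_map scan does.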
https://discuss.codechef.com/t/very-weak-test-cases-in-bamboo-art-zco16002/76485
I have several issues which should have been resolved some time back. I want to close and resolve them now, but do not know how to properly backdate the Resolved date. It is important for reporting purposes and GH charts. Thanks, -Jim

I know I'm hopping onto an old thread, but this was the first page I came across when searching for an answer. This turned out to be a very straightforward fix in SQL. The resolved date is stored in the dbo.jiraissue table. I updated the resolved date on mine by executing the following:

UPDATE dbo.jiraissue
SET UPDATED = '2013-05-23 12:19:22.087', RESOLUTIONDATE = '2013-05-23 12:19:22.127'
WHERE pkey = 'CCSP-455'

Just change the pkey to the issue key that you're trying to update. The update didn't require any re-indexing nor did I bring Jira offline (rebel!). Hope this helps anyone doing the same search as me.

UPDATE FOR JIRA 6.1+: It looks like they deprecated the pkey identifier starting in Jira v6.1. This is the new query I ran to accomplish the same in the new version:

UPDATE dbo.jiraissue
SET UPDATED = '2014-02-03 14:07:22.087', RESOLUTIONDATE = '2014-02-03 14:07:22.087'
WHERE PROJECT = 10104 AND issuenum = 44

It may take some searching for you to identify the PROJECT value. Works for me; to be safe, I did a complete reindex afterwards. I was actually fixing up resolve dates on issues which were imported from another system, so I was able to do a direct join between the two SQL Server databases and update all the JIRA issues in one go :-)

Remember that this is only appropriate for imported issues that don't have a history entry for when the resolution changed from <null> to <something>. If you have history records, then hacking the resolution date will make your data inconsistent nonsense.
This approach doesn't work on JIRA 7. I changed both values and the Resolved Date didn't change. I did a locked re-index and tried with different issues with no luck. Is there anywhere else I can get to that value?

You can't. The resolved date is derived from the last time you resolved the issue, which you can't alter. Of course, there's always hacking - in this case, you'd need to use SQL to change the date on the changegroup (and I think you'd need to alter the os_workflow too). That does require backups, downtime and re-indexing though (never alter a Jira database while it's running. Just. No. Never), and will destroy the information about when your users actually resolved it.

Use ScriptRunner - this will update only 1 ticket at a time. Paste this template into the ScriptRunner script console, update the ticket number and the date, and run:

import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.MutableIssue
import java.sql.Timestamp
import com.atlassian.jira.util.ImportUtils
import java.text.SimpleDateFormat
import java.text.DateFormat;
import java.util.Date;

DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy hh:mm:ss");
Date date = dateFormat.parse("05/23/2019 06:00:00");
long time = date.getTime();
Timestamp targetDate = new Timestamp(time);

IssueManager issueManager = ComponentAccessor.getIssueManager()
MutableIssue missue = (MutableIssue) issueManager.getIssueObject("PS-2469")
missue.setResolutionDate(targetDate);
missue.store();
return (targetDate);

works fine for me. But the history entry mismatch needs to be taken into account. Thanks! It is important to trigger an issue reindex afterwards!
import com.atlassian.jira.issue.index.IssueIndexingService

def indexingService = ComponentAccessor.getOSGiComponentInstanceOfType(IssueIndexingService)
boolean wasIndexing = ImportUtils.isIndexIssues()
ImportUtils.setIndexIssues(true)
indexingService.reIndex(missue)
ImportUtils.setIndexIssues(wasIndexing)

Would it be possible to loop this over the issues in a saved filter or JQL? Ugh, answering my own question, but for those who follow... was very helpful.

I almost started running the script but the warning put me off, showing in the script console: "(!) Since version v5.0. DO NOT USE THIS as it overwrites all the fields of the issue which can result in difficult to reproduce bugs. Prefer to use QueryDslAccessor to change only needed fields @ line 24, column 8"

We're using Jira D.C. 8.14.x with ScriptRunner Version: 6.21.0. I could not create a script myself using QueryDslAccessor, as this seems to be possible only from database queries, not ScriptRunner, but I may be wrong. Does anyone have a workaround or update to this script that uses a method that updates only the Resolution Date field and will not overwrite all fields as the warning implies?
Looped transitions are immensely useful to get around the weakness in "edit" permissions. It's just that for this one field, it's not right - because it's a derived field, you don't need to touch it - and in fact, you can't. It'll take its value from the change history.
https://community.atlassian.com/t5/Jira-questions/How-can-I-change-the-Resolved-Date-for-an-Issue/qaq-p/253663
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, pi
import scipy
import scipy.stats
plt.style.use('seaborn-whitegrid')

Now we'll consider 2D numeric data. Recall that we're taking two measurements simultaneously, so there should be an equal number of data points for each dimension. Furthermore, they should be paired. For example, measuring people's weight and height is valid. Measuring one group of people's height and then a different group of people's weight is not valid.

Our example for this lecture will be one of the most famous datasets of all time: the Iris dataset. It's a commonly used dataset in education and describes measurements in centimeters of 150 Iris flowers. The measured data are the columns and each row is an iris flower. They are sepal length, sepal width, petal length, petal width, and species. We'll ignore species for our example.

import pydataset
data = pydataset.data('iris').values
#remove species column
data = data[:,:4].astype(float)

np.cov(data[:,1], data[:,3], ddof=1)

array([[ 0.18997942, -0.12163937],
       [-0.12163937,  0.58100626]])

This is called a covariance matrix:

$$\left[\begin{array}{lr} \sigma_{xx} & \sigma_{xy}\\ \sigma_{yx} & \sigma_{yy}\\ \end{array}\right]$$

The diagonals are the sample variances and the off-diagonal elements are the sample covariances. It is symmetric, since $\sigma_{xy} = \sigma_{yx}$. The value we observed for the sample covariance is negative, meaning the measurements are negatively correlated: as one increases, the other decreases. The ddof was set to 1, meaning that the divisor for sample covariance is $N - 1$. Remember that $N$ is the number of pairs of $x$ and $y$ values.

The covariance matrix can be any size, so we can explore all possible covariances simultaneously.
#add rowvar = False to indicate we want cov
#over our columns and not rows
np.cov(data, rowvar=False, ddof=1)

array([[ 0.68569351, -0.042434  ,  1.27431544,  0.51627069],
       [-0.042434  ,  0.18997942, -0.32965638, -0.12163937],
       [ 1.27431544, -0.32965638,  3.11627785,  1.2956094 ],
       [ 0.51627069, -0.12163937,  1.2956094 ,  0.58100626]])

To read this larger matrix, recall the column descriptions: sepal length (0), sepal width (1), petal length (2), petal width (3). Then use the row and column index to identify which sample covariance is being computed. The row and column indices are interchangeable because the matrix is symmetric. For example, the sample covariance of sepal length with sepal width is $-0.042$ centimeters.

plt.plot(data[:,0], data[:,2])
plt.show()

What happened? It turns out our data are not sorted according to sepal length, so the lines go from value to value. There is no reason that our data should be ordered by sepal length, so we need to use dot markers to get rid of the lines.

plt.title('Sample Covariance: 1.27 cm')
plt.plot(data[:,0], data[:,2], 'o')
plt.xlabel('Sepal Length [cm]')
plt.ylabel('Petal Length [cm]')
plt.show()

Now the other plot:

plt.title('Sample Covariance: 0.52 cm')
plt.plot(data[:,0], data[:,3], 'o')
plt.xlabel('Sepal Length [cm]')
plt.ylabel('Petal Width [cm]')
plt.show()

That is surprising! The "low" sample covariance plot looks like it has as much correlation as the "high" sample covariance. That's because sample covariance measures both the underlying variance of the two dimensions and their correlation. The reason this is a low sample covariance is that the y-values change less than in the first plot. Since the covariance includes both the correlation between variables and the variances of the two variables, sample correlation tries to remove the variance so we can view only correlation.

$$r_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$$

Similar to the covariance matrix, there is something called the correlation matrix, or the normalized covariance matrix.
np.corrcoef(data, rowvar=False)

array([[ 1.        , -0.11756978,  0.87175378,  0.81794113],
       [-0.11756978,  1.        , -0.4284401 , -0.36612593],
       [ 0.87175378, -0.4284401 ,  1.        ,  0.96286543],
       [ 0.81794113, -0.36612593,  0.96286543,  1.        ]])

Note that we don't have to pass in ddof because it cancels in the correlation coefficient expression. Now we also see that the two plots from above have similar correlations, as we saw visually.

Let's try creating some synthetic data to observe properties of correlation. I'm using the rvs function to sample data from distributions using scipy.stats.

x = scipy.stats.norm.rvs(size=15, scale=4)
y = scipy.stats.norm.rvs(size=15, scale=4)
cor = np.corrcoef(x,y)[0,1]
plt.title('r = {}'.format(cor))
plt.plot(x, y, 'o')
plt.xlabel('x')
plt.ylabel('$y$')
plt.show()

x = scipy.stats.norm.rvs(size=100, scale=4)
y = x ** 2
cor = np.corrcoef(x,y)[0,1]
plt.title('r = {}'.format(cor))
plt.plot(x, y, 'o')
plt.xlabel('x')
plt.ylabel('$x^2$')
plt.show()

See that $x^2$ is an analytic function of $x$, but it has a lower correlation than two independent random numbers. That's because correlation coefficients are unreliable for non-linear behavior. Another example:
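The notebook's final example is cut off above. As a stand-in illustration of my own (not the original cell), the correlation formula $r_{xy} = \sigma_{xy}/(\sigma_x \sigma_y)$ can be checked by hand in pure Python, which also shows why the ddof cancels:

```python
# Direct implementation of r_xy = sigma_xy / (sigma_x * sigma_y) using the
# sample (N - 1) convention. The (N - 1) divisors appear in both numerator
# and denominator, which is why np.corrcoef needs no ddof argument.
from math import sqrt

def sample_corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    sx = sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
    sy = sqrt(sum((b - my) ** 2 for b in y) / (n - 1))
    return sxy / (sx * sy)

print(sample_corr([1, 2, 3], [2, 4, 6]))   # 1.0  (perfect linear relation)
print(sample_corr([1, 2, 3], [6, 4, 2]))   # -1.0 (perfect inverse relation)
```

On perfectly linear data the coefficient reaches its bounds of +1 and -1; for anything else it lies strictly in between, regardless of the raw variances of x and y.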
https://nbviewer.jupyter.org/github/whitead/numerical_stats/blob/master/unit_7/lectures/lecture_4.ipynb
Archive: Project Euler 29

def distpowers(n):
    a = {2}        # initialize the set
    d = {}         # map for total number of elements for each power
    s = [0 for x in range(n + 1)]
    for i in range(1, 8):
        for j in range(2, n + 1):
            a.add(i * j)
        d[i] = len(a)
    tot = 0
    for i in range(2, n + 1):
        if s[i] == 0:
            prod = i
            power = 0
            while prod <= n:
                s[prod] = 1
                prod *= i
                power += 1
            tot += d[power]
    print(tot)

if __name__ == '__main__':
    distpowers(100)

Time complexity O(8 * n) ~ O(n)
Space complexity O(n)
Answer: 9183
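The clever O(n) counting can be cross-checked against the obvious brute force, which is tiny in Python thanks to arbitrary-precision integers and sets (fine for n = 100, though it defeats the point of the trick above):

```python
# Brute-force Project Euler 29: count distinct values of a**b
# for 2 <= a <= n and 2 <= b <= n, by materializing them all in a set.
def distinct_powers_bruteforce(n):
    return len({a ** b for a in range(2, n + 1) for b in range(2, n + 1)})

print(distinct_powers_bruteforce(100))  # 9183
```

For n = 5 this gives 15 distinct terms, matching the worked example in the problem statement, and for n = 100 it agrees with the 9183 computed above.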
https://vasanthexperiments.wordpress.com/tag/algorithms/
Old: >>47023148 Just programming a meme xd so random on da 4chins trying to revamp my build system for my library I'm building. So far it works on wangblows but the tests won't build on linux Which does Wosniak approve of?app.directive('helloWorld', function() { return { restrict: 'AE', replace: 'true', template: '<h3>Hello World!!</h3>' }; }); orApp.directive('helloWorld', function() { var directive = {}; directive.restrict = 'AE'; directive.template = "<h3>Hello World!!</h3>"; return directive; }); Also, how the fuck do you pass parameters to directives? >>47031276 >animeme picture Is it possible to create an Android app with only C++ compiled with the NDK? >wanting to be a programmer >when you can be an astronaut >>47031554 By the way, is anyone familiar with CMake? >>47031601 Technically, sure... maybe. Realistically, No, you're still gonna need to write a bunch of glue code in java. >>47031609 >wanting to be an astronaut >when you can invent immortality and be a billionaire Seriously, I feel like my life is just a lost ship in a storm being thrown about. I never wanted to be a programmer. It just fucking happened. Any time I try to direct the ship, it just find a shithole it has to run from. >>47031679 >wanting to invent inmortality >more humans that can't die >not joining the illuminati in the plan to reduce the human population what does /g/ think about static imports? And now we play the good old "have fun trying to find out why your vertex buffer is fucked up" game. At least it's not a black screen. >>47031640 only slightly >>47031739 Zalgo mode is a feature Can programming take me to NASA? Made a simple R web application for playing around with the Jacobi method. >>47031732 Well you are are suppose to programming in an OO style, so they should be avoided when possible. Asked this in the old thread but it died. I'm thinking about becoming a programmer. Don't know too much about it, but I teach myself a little bit everyday. 
Soon I will be going to school for software engineering. My only question is, is this a good field to get in to? Is the pay good, job security, flexible work schedule (as opposed to the standard 9-5), etc.? I know every position will be different, I'm just asking those who do work in this field what it's generally like. Sorry if this is the wrong thread to be asking in. >>47031747 nevermind I got it to work. Kind of... The library isn't built but the CMake system is in place >>47031808 *are suppose to be >>47031812 Yes, fucking do it. Med, engineering, and comp sci are master race. End of story. Finance + math/stats only works if parents have connections or ivy league. I fucking hate modern computers. Every single level of computers today is just layers and layers of retardation and terrible decisions, from circuit board design all the way up to high level applications. I wish i could go to some far away land where computers don't exist yet, and then build my own computer from scratch, CPU architecture, peripherals, kernel, operating system, userland and internet protocol exactly how i fucking like it. It depressed me that i'll never be able to achieve that. Fuck everything. >>47031905 Fuck off autist I've asked this before (I think on Friday) but I'll ask again. Is anyone interested in contributing to either a bit manipulation library or path recognition/comparison/prediction library? >>47031924 No, YOU fuck off. I have a file with entries that start all with @<, so like @<A //data //data @<B //data //data //data @<C //data etc. What would be the best way to read in all the entries into different class objects? I tried to iterate over the file and create objects as I hit @< but then I fucked something up with the iterators and it just kept getting stuck at one before EOF. >>47031609 being an astronaut is vastly overrated. what are you, five years old? 
>>47031905 get help, and make sure you get your autism bux if you don't get them already >>47031947 Post the code that you have done so far. >>47031905 >I wish i could go to some far away land where computers don't exist yet, and then build my own computer from scratch, CPU architecture, peripherals, kernel, operating system, userland and internet protocol exactly how i fucking like it. Why would you need to go to a far away land? Just start building it right now. >>47032008 Because if he did go to a faraway land then he would actually have to commit himself to the task he supposedly wants to do. Anybody know any good SDL learning resources? >>47031905 Nothing is stopping you from building your own computer: >Big Mess o’ Wires 1 is an original CPU design. It does not use any commercial CPU, but instead has a custom CPU constructed from dozens of simple logic chips. Around this foundation is built a full computer with support for a keyboard, sound, video, and external peripherals. Now get to work. Not quite programming, but can you check with a query whether or not you have access to a certain table in mysql? >>47032008 >>47032067 I've actually been writing a virtual machine since new years that simulates my ideal computing platform, I've gotten tons of progress on it, i recently implemented a framebuffer, and plan to tackle persistent storage next. But that's the most i can do, i can't do hardware shit, only software. I plan to study CE and EE in uni. >>47031992 It's broken and shitty because i've been moving shit around but I'm trying something like this. 
Got it half-working before but only with the first row of the data..::istream_iterator<std::string> it = begin; std::vector<Molecule*> molecules; while (it != end) { if ((*it).find("@<TRIPOS>MOLECULE") != std::string::npos) { //Create a Molecule object and store it in "molecules" vector Molecule* comp = new Molecule(); molecules.push_back(comp); std::cout << "First if " << *it << '\n'; comp->addLine(*it); while ((*(++it)).find("@<TRIPOS>MOLECULE") != std::string::npos || it != end) { comp->addLine(*it); std::cout << "Second if " << *it << '\n'; } } } } >>47032123 I'm not worried about fixing the code, I just can't get the logic down. >>47032076SHOW TABLES;will list all tables you have any rights on iirc.SHOW GRANTS;will list all permissions for the user that issues the query. >>47032101 if you don't do hardware shit then who are you to judge that everything is retardedly designed? >>47032150 That works, thank you adding perlin noise gives some nice results to the dla procedural terrain >>47032204 >Static image I'm a little disappointed. >>47031798 Is that Shiny? Does anyone write Java code outside of an IDE? >>47032230 webgl a shit >>47032189 Maybe not, but i am rather experienced in software development and i have a right to say that everything relating to software is fucking retarded. >>47032265 i pity the fool who writes java code outside of an IDE >>47032265 Unless you're using some other tool to compile the classes, not a chance in hell. Have you ever tried compiling a multi-package project from the command line? >>47032265 Maybe? >>47032265 I write Java in Emacs, but I consider that a OS. >>47032291 Yes, I have. Just learn to reformat errors into a log and there you go. >>47032288 >>47031276 Why the fuck is she holding the book upside down? >>47032236 Yes, it is. >>47032339 So that you can read the title anon. My image processor i am working on went a little screwy >>47032265 Indians >>47032358 looks like a wrong stride >>47032340 Looks nice. 
I might have to try it out sometime; R evangelists sure love to talk it up. Are you a statistician or some sort of analyst, by chance? Always nice to see projects in these by people who aren't just developers. >>47032446 yep, off by one error... again >>47032358 fuck yeah tornado evangelion >>47032526 Ok, got it done now :D Doing some windows phone app development >>47032578 >>47032578 That's neat. Thanks anon. >>47032608 This art is pretty nice. >>47032640 If you want the script, i can put it up. I have 2 things you change for different banding effects. I am about to make it so that you can do negative band shifting to shift in the other direction (go right and not left) >>47032640 I remember I made a bunch of these glitched pictures and gifs with many anons at [spoiler]420chan[/spoiler], melting effects on gifs are pretty cool, writing an algorithm to make these effects shouldn't too hard. >>47032668 Yes, I want it. >>47032668 Now animate it. >>47032483 Actually I was just testing out the Shiny development environment. I attended an R course a year ago during my exchange studies and have been doing small stuff with it, such as reading xml files, using databases and such. I only recently found Shiny. I don't have any other experience with web development. >>47032692 You guys use Matlab, I guess? Doessudo apt-get install git Install everything I need on my end for github or even on my own network git work? >>47032708 challenge accepted, though the runtime is gonna suck. Currently, it takes 1.3 seconds on my computer for the single image. Now i got the negative shifting to work. 
One last thing before animation, more than one imagewidth fo rthe band rotation >>47032123 I think I've fixed it...is this okay or am I doing something very redundant::vector<Molecule*> molecules; std::vector<std::string> tempStorage; for (std::istream_iterator<std::string> it = begin; it != end; ++it) { if (state == "") { if ((*it).find("@<TRIPOS>MOLECULE") != std::string::npos) { std::cout << "Found a molecule...\n"; state = "READING"; tempStorage.push_back(*it); } } else if (state == "READING") { if ((*it).find("@<TRIPOS>MOLECULE") != std::string::npos) { std::cout << "Found a new molecule, saving old one...\n"; Molecule* comp = new Molecule(tempStorage); molecules.push_back(comp); tempStorage.clear(); state = ""; } else { tempStorage.push_back(*it); } } } } >>47032710 With Microsoft's purchase of revolution analytics we're probably going to see R# in the future, which will enable Shiny-like web applications in asp.net, exciting. Am I doing this right?if worldcollision( objHB ) then x = x - ( varxSpeed * dt() ) setHB() endint luaWorldCollision( lua_State* L ) { strHitbox lHitbox; /* initialize temp hitbox */ lua_getglobal(L,"objHB"); /* grab hitbox */ lua_getfield(L,-1,"x"); /* set hitbox x */ lHitbox.x = lua_tointeger(L,-1); lua_pop(L,1); lua_getfield(L,-1,"y"); /* set hitbox y */ lHitbox.y = lua_tointeger(L,-1); lua_pop(L,1); lua_getfield(L,-1,"w"); /* set hitbox w */ lHitbox.w = lua_tointeger(L,-1); lua_pop(L,1); lua_getfield(L,-1,"h"); /* set hitbox h */ lHitbox.h = lua_tointeger(L,-1); lua_pop(L,1); if( WorldCollision( &lHitbox ) == true ) { return true; } else { return false; } } Do I need that parameter in the lua file? I would prefer to use it, as I know getglobal doesn't need it. I want to get what I have no fully working before I move onto metatable stuff. 
Also, I haven't tried it, but say the function in C has getglobal and setglobal, but those items dont exist in the lua script, would this cause a crash, or is there a way to set it up so that there is no crash? Thanks >>47032873 >>47032940 //this is bridge So graphics programmers have SIGGRAPH, what do web devs have? >>47032969 canvas and javascript >>47032977 I mean a equivalent conference >>47032733 Anyone? How would I use gethostbyname() in C++ for IRC? For example, I want to connect to irc.rizon.net. How would I do this? >>47033002 nm ill just install it and see what happpens. >>47033002 Are you seriously asking that? I have this codechar charray[] = { "ur", "a", "dong" }; cout << chararray[2] << endl; How do I split this into an actual array by delimiter? >>47033066 Was wondering more if there were additional things to install. Never touched it before thought it might be multi part or use a different program with it. Got the multiple width of the image done#!/usr/bin/python numberOfBands=1200/42 unshiftRatio=-.25 from PIL import Image from math import floor image=Image.open("image.jpg"); pixelsOriginal=list(image.getdata()) print type(pixelsOriginal),len(pixelsOriginal),image.size[0] print image.size[0],'x',image.size[1] pixels=[] count=0 while(len(pixelsOriginal)>count): pixels.append(pixelsOriginal[count:count+image.size[0]]) count+=image.size[0] del pixelsOriginal #move the bands def rotateLeft(line,number): #rotates the bands to the side. 
Dont let the name fool you as a negative unshiftRatio will shift it right
    number=int(number)
    negativity=1 #assume we are doing a positive shift
    if not number==0: negativity=abs(number)/number #is it negative
    while(abs(number)>len(line)): # if too big then change the shift
        number-=len(line)*negativity # make it smaller (also, if negative, add to make it smaller)
    return line[number:]+line[:number]

bandHeight=image.size[1]/numberOfBands
shiftDistance=int((image.size[0]/numberOfBands)/unshiftRatio)
if bandHeight==0:
    print "Height Issue, your number of bands is likely too large"
    bandHeight=1
for num in xrange(len(pixels)):
    pixels[num]=rotateLeft(pixels[num],floor(num/bandHeight)*shiftDistance)

#recombine the pixels and save
pixelsShifted=[]
for line in pixels:
    pixelsShifted+=line
image.putdata(pixelsShifted)
image.save('out.png')

>>47033026
Have you read Beej's guide to network programming? I did and really liked it, it's a good book to get into C++ network programming

Why haven't you joined yet?

>>47032940
i should remove the squiggly brackets, but I am returning either 0 or 1 to lua, thats the simplest thing to do

>>47033330
I wonder if there is some quiet majority that just wants to kick all the fucks out of feminism.

>>47033330
>This repository has been disabled.

>>47033104
T h a n k s !
T h a n k s ! !
T h a n k s s !
T h a n k k s !
T h a n n k s !
T h a a n k s !
T h h a n k s !
T T h a n k s !

>>47033334
>thats the simplest thing to do
No it's fucking not, returning WorldCollision(&lHitbox) is.

>>47033104
seems like this could be done a lot more succinctly with

wtf my post disappeared

>>47033104
seems like this could be done a lot more succinctly with

>>47033532
>my post disappeared

>>47033509
>>47033461
almost got animated somethings..... maybe........

>>47033532
that is really neat. Doesn't scipy require extra stuff. I am trying to keep it as vanilla as possible.

>>47033554
>implying that's 100% the same poster

>>47033554
i swear it did.
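On the band-shifting script above: the while loop that shrinks an oversized shift can be replaced by a single modulo, since Python's % already returns a result in range(len(line)) even for negative shifts. A small sketch of that idea (names mirror the script; assumed equivalent for non-empty rows):

```python
def rotate_left(line, number):
    """Rotate a list left by `number` places; a negative number shifts right.

    Python's % keeps the index in range(len(line)), so no shrinking loop
    or sign bookkeeping is needed.
    """
    number = int(number) % len(line)
    return line[number:] + line[:number]
```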
i was on this page and the little [deleted] thing showed up next to it and it wasn't there when i refreshed. that's really fucking strange.

wtf my post disappeared

>>47033554
>implying that's 100% the same poster

>>47033571
it's just numpy you've got two imports anyway

>>47033508
okay Xr.GNUOCD

This function is big and ugly, how do I make it small and pretty?

def getAvgPos(posList):
    sumx = 0
    sumy = 0
    for x,y in posList:
        sumx += x
        sumy += y
    avgx = sumx/len(posList)
    avgy = sumy/len(posList)
    if (avgx, avgy) not in successor.getWalls().asList():
        return (avgx, avgy)
    elif (avgx, avgy+1) not in successor.getWalls().asList():
        return (avgx, avgy+1)
    elif (avgx, avgy-1) not in successor.getWalls().asList():
        return (avgx, avgy-1)
    elif (avgx-1, avgy-1) not in successor.getWalls().asList():
        return (avgx-1, avgy-1)

>>47033936
You don't need to call getWalls().asList() so many times. Just save the result of the operation in a variable. Also, isn't '/' a float division? Seems like you want integers, not floating point numbers.

Can anyone tell me why the install TARGETS ${samplename}... fails

project(bitmanip c)
cmake_minimum_required(VERSION 2.6.2)

#bitmanip folders
set(bitmanip_base ${basepath}/bitmanip)
set(bitmanip_src ${bitmanip_base}/src)
set(bitmanip_samples ${bitmanip_base}/samples)

#head file inclusions
include_directories(${BASEPATH}/include)

#libraries
link_directories(${BASEPATH}/lib)

#make the library
file(GLOB bitmanip_sources ${bitmanip_src}/*.c)
add_library(bitmanip ${bitmanip_sources})

#make the samples
file(GLOB bitmanip_samples ${bitmanip_samples}/*.c)
foreach(samplesourcefile ${bitmanip_samples})
    string(REPLACE ".c" "" samplename ${samplesourcefile})
    add_executable(${samplename} ${samplesourcefile})
    target_link_libraries(${samplename} bitmanip)
    install(TARGETS ${samplename} DESTINATION bin)
endforeach(samplesourcefile ${bitmanip_samples_sources})

install(TARGETS bitmanip DESTINATION lib)

I'm new to programming and need some help understanding OOP.
I have an object (game), in which I have two objects (room, player). Is there a way to use a method of 'room' from inside 'player'? Basically, I'm trying to ask the room if the space I want to walk to is free, from inside a player method. Example: [Code]class Game(): def __init__(self): room = Room() player = Player() class Room(): def check_space(space): stuff class Player(): def action(): if room.check_space(space) == 'empty': self.move()[/Code] Programming language is Python (+ PyGame). >>47033071char chararray[][] That's what you mean right? >>47034175 whelpclass Game(): def __init__(self): room = Room() player = Player() class Room(): def check_space(space): stuff class Player(): def action(): if room.check_space(space) == 'empty': self.move() >>47034200 fuck meclass Game(): def __init__(self): room = Room() player = Player() class Room(): def check_space(space): stuff class Player(): def action(): if room.check_space(space) == 'empty': self.move() really sorry >>47034144 figured it out. Nevermind >>47034216 Just don't confuse classes and instances of those classes. Also, if you make an instance of Room class inside an instance of Game class, you'd need to assign it with self.room = Room() in order to be able to call its functions like this: g.room.check_space(space) Assuming you created "g" instance of the Game. >>47034324 Yeah, I omitted self. to keep it as simple as possible. Can't I access an instance of something from an instance of something else? >>47034407 Sure. In this case, if the instance of the game is assigned to a global variable g, you can call g.room from g.player. Also, don't forget to add "self" as the first argument when defining functions that belong to a class. I know I'm not supposed to ask /g/ for help with projects and such, but I'm desperate. What do these functions mean and how do I implement them? Specifically, lines 56 through 66. It's C++ by the way. >>47034608 IS that just Matrix stuff? Comparing MAtrixs adding them subtraction? 
Scaler means instead of multiplying two matricies your scaleing them as in lets say int = 2 then that would double the size of the matrix. I dont know c++, >>47034586 It's not working for me, not sure what I'm doing wrong. This is the actual code: And this is the line I'm wondering about if such a thing exists: I tried game.self.room.METHOD and room.METHOD, but it's not working. >>47034608 Man, mixing INotifyPropertyChanged and multi-threaded applications is an enormous pain in the ass. For every class you have to make sure the UI notifications are raised in the right thread... >>47034199 Great, now I understand how to use char arrays now. I need to split a string from a delimiter into a char array now. >>47034608 i wish more languages had user-defined operators. Not precisely "programming", but I'm taking a break from programming in order focus on the website that I wanted to build. >inb4 drugs are for degenerates >inb4 tables, what is this? 1996? >>47034997 Said no one, ever, since the emergence of C++. >>47034748 Well, I didn't go through all the code, but maybe it'd be best if you created player with room as argument and stored the room as self.room? Then the player could use the room like this: self.room.check_space(self.target_space) >>47034705 Yes, but my question is how would I implement them? I don't understand the syntax in the header and how to implement it. Thanks, anyways. >>47034800 Thanks, anon. I think I now understand. Yeah! I think that will work, thanks! self.room wouldn't just be a copy, but the actual, changing thing, right? >>47035156 Yes. You'd have to make a deep copy in order to copy the instamce. >CS student doesn't know why creating, simulating and destroying tens of thousands of objects is resource intensive >>47035746 Why the fuck doesnt that degree have more programming in it? What do people egt that degree for besides programming? >>47035760 Wanking around. 
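Going back to the game/room/player question: what the replies describe is handing the Room instance to the Player when it's constructed and keeping it as self.room, so the player holds a reference (not a copy) to the same room. A minimal sketch (check_space's "empty"/"taken" convention is made up to mirror the question):

```python
class Room:
    def __init__(self, occupied=None):
        self.occupied = set(occupied or [])

    def check_space(self, space):
        # hypothetical convention, mirroring the question's pseudocode
        return "empty" if space not in self.occupied else "taken"


class Player:
    def __init__(self, room, pos=(0, 0)):
        self.room = room  # a reference to the same Room, not a copy
        self.pos = pos

    def action(self, space):
        # ask the room before moving
        if self.room.check_space(space) == "empty":
            self.pos = space


class Game:
    def __init__(self):
        self.room = Room(occupied=[(1, 1)])
        self.player = Player(self.room)
```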
Any degree which builds on a base should teach that base first or inform the students they need to get it. >>47035760 I don't know Anon, I'm convinced they don't teach them anything, I've heard some of the most retarded shit in my life from CS students. >>47035832 Its math right? Like some post calc probably. And then just what physics? >>47035746 >student doesn't know something I wonder why >>47035853 >Its math right? not according to /g/. someone posted Santiago Gonzalez's mathematical outline for some program and they shit all over it because they couldn't understand it. >>47035746 I wouldn't blame the programme as much as the person. You are retarded if you don't know that whether you have an education or not. >>47035853 I don't think so? they don't know linear algebra in their 3rd year, I've literally seen some maths major shitting all over them. >>47035916 We had linear algebra here in the first year. The professor was a sleeping pill though, one of the worst courses I've taken so far. >>47034175 Python is one of the worst OOP languages there is. If you're going to use a game, just use Java or C or something. >>47035916 At my school linear algebra was required in second year, along with either multivariable calculus or a stats course. Most of the math you do in CS is of a different sort than linear algebra or calculus, though. Mostly proving that such and such algorithm has some property. >>47035905 I can't choose to believe that any more, I've met too many people from different computer science departments and universities and they're all equally as retarded. >>47035967 Granted, it's not really meant to be used in games and other heavy tasks. Otherwise it's a great language if you just want to get shit done quickly and with ease. Why are so many people here shitting on Python? 
>>47036068 What I think is actually happening is that people who don't really care are getting degrees and jobs, so you're more exposed to retards with degrees than retards without degrees since it's easier to get a job with a degree. And some confirmation bias. >>47036090 >Why are so many people here shitting on Python? They're the same people that learnt it back when interpreted languages were all the hype and thought everything would be rewritten in them, now their former stewardesses are abandoning them they've turned to self-hatred. >>47036090 python is a scripting language. It's very useful if you want to write small scripts for useful things but any real program should be written in a real language >>47035746 But it sure feels amazing. And this is 2015, after all. >>47036090 Because, like you said, it really is just a language meant to do small little things, hence why it's so valuable in other tasks like pen testing. But you get all these fags trying to lern 2 code lululululul Xb compter scic! using python it's become a manifestation of "shut up kid, I'll hack you." It's simplicity is it's own downfall in that way, in that in most areas, the language is weak. It's basically the opposite of Perl, where everything is super powerful, but it looks like it just came out of RSA. >>47036121 And vice versa, since the retards will teach other people one day. >>47036145 That sort of shit really pisses me off. It happened to lua as well, just in a different way. Now lua is the official "u progrm for gawy's mawd, too? xDDD le g-man maymay" I bet Roberto Ierusalimschy contemplates suicide daily for creating the language that would drive garry's mod. >>47036090 Python is the new BASIC. As long as you treat it as such, there's no problem. Does anybody kind of wish more language would come with built in interpreters for RPN? It would make compilation easier and it would make math, for me at least, a lot easier. 
>inb4 we go back to assembly Anyone help me with >>47036221 >>47036145 >It's basically the opposite of Perl, where everything is super powerful, but it looks like it just came out of RSA. I'm really curious, what are the main differences? Why is Perl more powerful than Python? I personally dropped Perl and chose to learn Python because of the tons of modules available for it. I consider it kind of a "jack off all trades but masturbate none" scripting language. these threads are big enough that we should start making separate threads for each language >>47036320 A lot of conversations aren't specific to languages, and some discussion would be restricted if comments had to be kept to one language. Thinking of programming as restricted to language is not a good way of thinking >>47036320 Dude no. >>47036309 One major reason is that Perl has built-in regular expressions, It's at its very core, Perl is fucking regular expressions. in Python it's just a tacked on half-developed library. >>47036309 Personally, I haven't used Perl in a long time (or python for that matter,) but from what I remember, Perl was better for security reasons; no buffer overflow possibilities as all variable types are allocated post assignment, hence the $ for string, @ for array, you get the idea. If I remember correctly, php was even made based off of perl because of it's usage in server side computations, because once again, it's unique allocation style allows for minimal processing power for computations. It's basically really complicated, but really useful for managing a lot of different requests at once. However, the downside is your code looks like you just sat on the toilet and took a huge shit all over your computer. >>47036309 Almost forgot about hash support. Perl does hashing really well. >>47031905 Start reading, autist. >>47036320 it's not like /agdg/ where the threads last minutes or something crazy like that. 
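On the Perl comparison: Python's regexes do live in the re module rather than in the syntax, but the module covers the usual Perl idioms (m// matching, s/// substitution, named captures). A quick sketch:

```python
import re

log = "user=alice id=42"

# m// equivalent: search with named groups
m = re.search(r"user=(?P<name>\w+) id=(?P<id>\d+)", log)
name, uid = m.group("name"), int(m.group("id"))

# s/// equivalent: substitution
masked = re.sub(r"id=\d+", "id=***", log)
```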
we used to have lisp and graphics programming generals sometimes but they weren't popular. >>47036373 >>47036381 >>47036409 Thanks. I really loved built-in regex in Perl too. That's where I learned to use it, and basically that's all I remember from Perl. >>47036454 we still have lisp >>47036476 The last thread died with like 27 replies in it though. I know C#. I want to get into developing android apps. Should I bother with learning java? It's all as a hobby, so I don't care how employable it makes me [or doesn't]. I see a lot of people say C# is just java but better, which puts me off wanting to learn it, and I know C# is perfectly fine for android apps with shit like xamarin. Would there be any benefit to offset the opportunity cost of learning java to make android apps? Why I feel uncomfortable with almost any language except C++/C or C# >>47031905 I'm sorry we failed you, RMS. >>47036578 because you don't have enough experience with other languages to feel comfortable with them start a project in a different language and work on it on your own time by yourself, but make sure it's somewhat challenging >>47036572 >Should I bother with learning java? You pretty much already know it. For a hobby, at least start learning C++ or something, at least that's different enough. >>47036572 If you plan to make any program, even as a hobby, you really should be multi-lingual. If you know C#, java is like picking up a spoon. It might take you a couple days at most to get an almost complete grasp of it. >>47036572 xamarin seems like such a round-about way of doing it and you'd still have to learn xamarin. C# is just a shitty clone of java, you'll learn java in a breeze. >>47036622 >C# is just a shitty clone of java You take that back, you.. you.. DOUBLE NIGGER. >>47036622 >shitty copy of java >>47036622 see >>47036628, you disgusting kike. I just installed curl for linux. Seems like it needed to transfer url's to git for downloads. Anything I need to know security wise? 
Just dont click on links I dont know? >>47036622 >C# is just a shitty clone of java >>47031555 Why TF would you pass parameters to directives. They aren't supposed to do anything except use the controller's scope. Use $scope in function($scope) if you have to ya lazy fuck. Where do I go or what do I read to understand XMPP and everything surrounding it like STUN and all that shit? Would learning this be beneficial for human communication over the network or should I learn about IRC instead? But if I'm honest, I do love C# and Java very much. I should make an effort to get back into C++, because I haven't used it since I took an advanced C++ concepts class (or whatever the fuck it was called then). >>47036707 For your own sanity, I recommend IRC unless you think your problem has needs that IRC cannot address. I've looked at both protocols. IRC is clean, elegant, and easy to implement. You can even just implement a subset of IRC and still get pretty good functionality. XMPP is very "Enterprise" in comparison. >>47036707 You won't learn anything from reading about IRC, the protocol was considered dumb and dated when it was created. >>47036740 Good to know, thanks. I posted too quick. >>47036748 Do you suggest any alternative protocol? (defun kekify (string) "Kekify the string to copy-pasteable output" (format t "~a~%" string) (let ((kek-collection)) (setf kek-collection (loop for a across string for b across (reverse string) :with len = (length string) :with padding = (concatenate 'string (loop for i below (- len 2) collect #\space)) :collecting (concatenate 'string (string a) padding (string b)))) ;; remove first element (pop kek-collection) (loop for n below (1- (length kek-collection)) :do (format t "~a~%" (nth n kek-collection))) (format t (reverse string)))) it's all yours I'm looking to make an image tagger with mysql or postgresql. For the gui I'd like to enter tags and lookup them and display thumbnails. What gui toolkit should I use? 
I like gtk or qt so I think I'll go with one of those, using python or c++ or both. >>47036868 For anyone that's worked with Android using the NDK, are you supposed to use an OpenGL function loader before using OpenGL? What about for iOS? And in general, how the fuck am I supposed to find this kind of information out because Google isn't helping at all >>47036748 I don't think the IRC protocol is dumb or dated. There are some oddities or things it doesn't handle as well. Things like authentication (nickserv), channel ownership; network splits, etc. IRC isn't perfect from that standpoint, but it does have solutions even if they aren't directly part of the protocol. If you need strict authentication rules, worry about the security of nickserv, require more fine grained permissions, need VoIP support, then IRC probably isn't the best for the job. IRC isn't "infinitely extensible" the way XMPP brags about, but that's also why the core functionality is so much more easier to implement. For fucks sake, I've heard of people typing IRC protocol commands directly into fucking telnet on machines which didn't have an IRC client available. If someone is asking /g/ of all places for advice, then I'm going to strongly suspect they're not working for a Fortune 50 and implementing an internal corporate messaging system. They're probably just dicking around. In that scenario, IRC is much better for them. >>47036916 You're missing the point, there's literally nothing to learn about IRC, It's that simple, literally just netcatting strings simple. Where is $HOME in linux? As in Vim says my vimrc file is located in $HOME/.vimrc >>47037007 ignore that there is no .vimrc file in my home directory should I create it? Anyone knows about this library? I need to do some shit in raw OpenGL and I'm not really with time to write all the boilerplate on top of OGL. >>47036899 People actually comment like that? 
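To the point about IRC being "just netcatting strings": a server line really is one parseable string, e.g. ":nick!user@host PRIVMSG #chan :hello there". A rough parser sketch (simplified; the real grammar in RFC 1459/2812 has more edge cases):

```python
def parse_irc_line(line):
    """Split a raw IRC line into (prefix, command, params, trailing)."""
    prefix = None
    line = line.rstrip("\r\n")
    if line.startswith(":"):          # optional prefix before the command
        prefix, line = line[1:].split(" ", 1)
    trailing = None
    if " :" in line:                  # trailing param starts after " :"
        line, trailing = line.split(" :", 1)
    parts = line.split()
    return prefix, parts[0], parts[1:], trailing
```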
Hey /dpt/ for some reason,for (i = 0; i < lim; i++) { while (ch1 = getc(first) != '\n') putchar(ch1); putchar('\n'); while (ch2 = getc(second) != '\n') putchar(ch2); putchar('\n'); } only prints the newlines, and then I have to kill it with ^C. lim is found like this:fseek(first, 0L, SEEK_END); fseek(second, 0L, SEEK_END); fsize = ftell(first); ssize = ftell(second); // rewind(first); // rewind(second); fseek(first, 1L, SEEK_SET); fseek(second, 1L, SEEK_SET); if (ssize > fsize) lim = ssize; else lim = fsize; yet, if i set lim to say, 200, manually, i still have to kill it. How else can I do this so it actually works? Thanks >Tfw you have Perl and C++ assignments to do and you're just shit posting on /g/. >>47037022 If you want a vim profile yeah >>47037090 Use Qt if LGPL is okay It just werks >>47037120 shit Also, I've uncommented the rewind()s and gotten rid of the latter pair of fseek()s, still same thing. >>47037123 >Spring break starts tomorrow >Professor wants me to go see him to finish a lab experiment >>47036899 >For anyone that's worked with Android using the NDK, are you supposed to use an OpenGL function loader before using OpenGL? i'm wondering this too. i'm relying on calling OpenGL functions from java via libGDX instead of from my native code but for my next more ambitious game i should really be calling them directly from the native side if possible. >>47037007 It's pretty self-explanatory, but you can check it yourself:echo $HOME >>47036899 >>47037184 btw you can check out libGDX's internal code in Gdx.gl20 and see how they do it Here to ask more questions. What a surprise. Anyway, I am making a IRC bot. I started this project on Saturday and I think I've made a lot of progress on it. It currently connects to the IP I specify it to connect to and then says "hey niggers". It then sits there and sends a PONG back to the PING that the IRC server sends. 
What I'm having a problem with is this, I want to say .help and then have my bot PRIVMSG the sender of the message and then prompt a list of commands available. How would I do it, and if you don't mind, show me how? My code is here If you help me, thanks in advanced. I greatly appreciate any support/feedback I can get on this journey I'm embarking on. >>47036889 > > It is developed mostly for Windows, but fairly functional builds for Linux and OS X nope.jpg (need to make a nope tag) I dont under stand why would this be 49? int x = 10; int y = 10 + x--; y = y * 2; System.out.println(x + y); >>47037327 post vs pre increment. The x isn't set to 9 until after the rest of statement is evaluated. >>47037327 line 1: x = 10 2: y = 20; x = 9 (added x to y before decrementing) 3: y = 40; 4: x + 7 = 40 + 9 >>47037350 int x = 10; int y= 10 + --x; y = y * 2; System.out.println(x+y); Is what you're thinking it is doing. Hey I am completely lost on how this works and doesn't crash. It's from this video. I've laid out a few comments in the pastebin but I'll just address my main one and you can refer to the pastebin for greater clarity (if you need or please).int* num = NULL; sscanf(ptr += n, "%f%f%n", §->floor, §->ceil, &n); //Ignore the '#' and the 'x' below, they represent something not used in map-clear txt (a simpler version of the real file) for(m=0; sscanf(ptr+=n, "%32s%n", word, &n) == 1 && word[0] != '#'; ) { num = realloc(num, ++m * sizeof(*num)); num[m-1] = word[0]=='x' ? -1 : atoi(word); /* Upon extiting num should be allocated to sizeof(int) * m bytes of space */ } sect->npoints = m /= 2; sect->neighbors = malloc( (m ) * sizeof(*sect->neighbors)); sect->vertex = malloc( (m+1)* sizeof(*sect->vertex)); ->>>>>!!! for(n=0;n<m;++n) sect->neighbors[n] = num[m + n]; /* Shouldn't this be out of bounds, since num is only allocated to m elements? This seems to start from the end of the array and go over, never touching the beginning. 
(specifically it assumes num is of m*2 size but I don't see how that could be) */ for(n=0; n<m; ++n) sect->vertex[n+1] = vert[num[n]]; /* This one goes from 0 to n, why the hell does the above work? */ How does that work? Am I being an idiot? >>47032959 Any good books, or blogs on software testing? >>47037237 How is this different from responding to PING request or writing a message to the channel? The bot checks if the string is in the line it received from the server, and if it is it sends the corresponding line. I suggest you let the bot print every line it receives from the server and then type shit in the channel for a while, just to see how IRC commands work, or search all the available commands. >>47037370 It might go out of bounds and not crash, run it under valgrind. >>47037237 I don't know how to say this, but you need to completely go back to the drawing board, firstly learn how sockets work. >>47037370 Wait I left out some critical information.//Looks like sector 0 20 3 14 29 49 -1 1 11 22 //in map-clear.txt sectors = realloc(sectors, ++NumSectors * sizeof(*sectors)); struct sector* sect = §ors[NumSectors-1]; The declaration of sector looks like this in the map-clear.txt file this function is reading. I guess the m just represents the vertices and the neighbors (-1 1 11 22) are represented by m + offset? But I still don't understand why that works because the for function should have read the neighbors too so m should be 8 in the examplesector 0 20 3 14 29 49 -1 1 11 22 so if I did m + n I should still go past the bounds of the array for n right? What the fuck? >>47037497 Valgrind says 0 errors from 0 contexts and it crashes if I change it any way I've tried. >>47037359 >x+7 >>47037429 Bamp >>47037558 Now do PHP. >>47037370 >m /= 2 Halves m. >>47037485if (charSearch(buf,"hi scooby")) { sendData("PRIVMSG #niggersack :hows it going\r\n"); } Doesn't work. 
This error is returned[email protected]:~# g++ -w main.cpp -o main main.cpp: In function 'int main(int, const char**)': main.cpp:124:63: error: too few arguments to function 'bool sendRawPacket(char*, int)' sendRawPacket("PRIVMSG #niggersack :hows it going\r\n"); ^ main.cpp:14:6: note: declared here bool sendRawPacket(char *msg, int sockfd){ ^ >>47037499 No thanks. >>47037621 Goddammit gg. >>47037623 Why are you root? >>47037623 Read the error? >too few arguments it isn't cryptic in any way. sendRawPacket("blah", sockfd); >>47037645 Problem was fixed by including sockfd. The bot successfully connects to the channel but when I say "hi scooby". Nothing returnsif (charSearch(buf,"hi scooby")) { sendRawPacket("PRIVMSG #niggersack :watch it fag\r\n", sockfd); That's whats in the console. So it sees the message but doesn't reply for some reason. >>47037602 too lazy to do all the tests >>47037645 Probably a haxor liveUSB? >>47037714 It's a dedi I'm logging to and it's a saved session so I don't have to type in the host name everytime. :~) >>47037723 >Logging into your dedi as root >ever Anyone know anything about Vim want to talk about whats in their vimrc file. Plugin 'rstacruz/sparkup', {'rtp': 'vim/'} " Avoid a name conflict with L9 Plugin 'user/L9', {'name': 'newL Is what vundle starts you with going to pull the plugins I dont want out of it now. >>47037558 anyone do software testing? >>47037729 >giving a shit about a junk dedi it's literally just for development. i have nothing useful on it. >tfw I don't think in traditional OOP anymore kek, thank you Go >>47037811 Go shill detected >>47037687 a-anyone? >>47037669 see >>47037811 >tfw I think only in cache friendly code now with C/C++ I only really use sepples now for namespaces >>47031905 >>>/loper-os/ >>>/cat-v/ dont come back >>47037687 Weird how this doesn't work. It literally makes total sense to work. If it receives the word "hi scooby", it sends a packet to #niggersack saying "watch is fag". 
Makes absolutely no sense. Not sure what I'm doing wrong..

Cant figure out this loop

import java.util.Scanner;

public class CurrencyBankTester {
    public static void main(String[] args) {
        double money= 0;
        double dollar=0;
        double euro=0;
        double yen=0;
        String currency;
        char c;
        CurrencyBank curbank = new CurrencyBank();
        Scanner in = new Scanner(System.in);
        System.out.println("Welcome to the CurrencyBankTester! ");
        System.out.println("What type of currency to add (Y, E, D or X to exit)?");
        currency = in.next();
        c= currency.charAt(0);
        c= Character.toUpperCase(c);
        while (c !='X'){
            System.out.println("What value do you wish to add?");
            money = in.nextDouble();
            if (currency.equals("Y")) {
                money=yen;
                curbank.AddYen(yen);
            }
            else if (currency.equals("E")) {
                money=euro;
                curbank.AddYen(euro);
            }
            else if (currency.equals("D")) {
                money=dollar;
                curbank.AddYen(dollar);
            }
            System.out.println("What type of currency to add (Y, E, D or X to exit)?");
            currency = in.next();
            c= currency.charAt(0);
            c= Character.toUpperCase(c);
        }//end while
        System.out.println(curbank.CountMoney());
    }
}

>>47038131
Can you be more specific? Are you saying you are stuck inside the while loop? Does the program always ask the user to input something or does it go dead?

> interdasting if anyone of you guys is still curious about my poopy way of avoiding branching
>Some pretty confused code coming from the OP, however I approve of the general advice to favour virtual functions
>A cache miss results in 250% penalty of performance on an arm processor. Safer to say that a good practice is to avoid if statements if possible

What am I doing wrong here?
>>47038330
Its going through the loop but the System.out.println(curbank.CountMoney()); prints 0.0

this is the other class

public class CurrencyBank {
    double dollar=0;
    double euro=0;
    double yen=0;

    public CurrencyBank() {
    }

    public double countMoney() {
        dollar += (yen * 0.0084);
        dollar += (euro * 1.13);
        return dollar;
    }

    public void addDollar(double dollar) {
        this.dollar+=dollar;
    }

    public void addEuro(double euro) {
        this.euro+=euro;
    }

    public void addYen(double yen) {
        this.yen+=yen;
    }
    //end of program
}

perhaps I misworded, I cant figure out why the return value keeps showing up zeros

>>47038411
try printing out different variables in different parts of the program and see which value doesn't match your expectation.

if (charSearch(buf, "hey")) {
    sendRawPacket("PRIVMSG #niggersack :hey there!\r\n", sockfd);
}

I need to parse the PRIVMSG string sent to the channel or directly to the bot for the user nick, parse the message to see if it contains ".help", and if it does, send back a raw PRIVMSG command to that nick with the list of commands as the message string. How would I do this?

>>47038573
So i'd have to parse 'userid', and 'PRIVMSG' to see that it's a channel message. Channel isn't really relevant, but the message at the end would be, because I have to parse this to see if it matches ".help"

>>47038595
The ".help" needs to be at the start in the string or could be anywhere within?

>>47038131
Look at this part carefully. Firstly, you're calling addYen everywhere instead of the proper currency. Secondly, you read the user input into "money" and right before the add method, you immediately overwrite it.

money = in.nextDouble();

if (currency.equals("Y")) {
    money = yen;
    curbank.addYen(yen);
}
else if (currency.equals("E")) {
    money = euro;
    curbank.addYen(euro);
}
else if (currency.equals("D")) {
    money = dollar;
    curbank.addYen(dollar);
}

>>47038595
The start

Im sure this is a pretty basic fix for you guys, but I'm not sure what im doing wrong.
switch (word) {
    case 'a' : case 'A' : cout << "Alpha ";
    case 'b' : case 'B' : cout << "Bravo ";
    case 'c' : case 'C' : cout << "Charlie ";
    case 'd' : case 'D' : cout << "Delta ";
    case 'e' : case 'E' : cout << "Echo ";
    case 'f' : case 'F' : cout << "Foxtrot ";
    case 'g' : case 'G' : cout << "Golf ";
    case 'h' : case 'H' : cout << "Hotel ";
}

The breaks arent there, but when I was putting them in, i would get only the first value. Without them, I get all of them. I want one for each letter entered.

>>47038609
Then maintain an array of pointers to string, then compare that string with every element of the array using strncmp(), if it returns 0 you have found that command.

>>47038613
Then you need to call the switch for each character in word, not on the whole word.

>>47038613
What is the data type of "word"? It should probably be called "letter" or "c". Otherwise, it sounds like a String.

>>47038613
I think you just need to put 'break;' every 2 cases
And that's assuming 'word' in the variables is only a character variable. If it's a string, just separate it into individual letters and then pass each through the switch in a loop.

>>47038659
char. Basically, if I type in kek, I would get a word for each first letter of each word.

>>47038656
I'm sorry, what would that look like?

>>47038633
Something like this?

if(!strncmp(buf, ".help", 4)){
    sendRawPacket("PRIVMSG #niggersack :hey there!\r\n", sockfd);
}

>>47038670
Same poster here. This book is kind of bad. I never really feel prepared to do my assignments for this class. The examples have been less than helpful.

>>47038411
Anon, look at your code. You are telling the scanner to store whatever value you type into the variable "money". Then it gets overridden with 0 (regardless if Yen, Euro or Dollar) and you add the 0 to the total pile. Am I making sense?

>>47038674
Yeah, a pointer to array of string would be nice if you need to "detect" a lot of commands.

>>47038700
Weird. The bot connects and says "hey there!"
instantly when this happens and doesn't respond to ".help" by me. >>47038670 Using a for (each) loop:#include <iostream> using namespace std; int main() { string word; cin >> word; for (char c : word) switch (c) { case 'a': case 'A': cout << "Alpha "; break; case 'b': case 'B': cout << "Bravo "; break; case 'c': case 'C': cout << "Charlie "; break; /* ... */ } cout << endl; } >>47038709 Maybe because it starts comparing at the address of buf, and the third argument for strncmp should be 5. >>47038751 Could you show me what to do? Sorry for sounding so stupid. I'm not that well at understanding concepts over text. >>47038748 And this isnt really explained in the examples. This is tiresome. Mind giving a little explanation? >>47038603 I just figured that out and came back to thank you guys. I can t believe I skipped over it. :) >>47038828 The string "word" is set to the switch of (c), you get your input taken and its assigned to "word" and if the case is 'a' then it'll equal "Alpha". etc. >>47038804 I don't know IRC bots m8, but if buf is some string obtained for example user input:Enter input: ".help pls" // this is saved on buf Then usingstrncmp(buf, ".help", 5); // 5 because .help is 5 characters long Should return 0, negate that to execute the if. >>47038845 Ok I see now. I didnt know of the switch (c) >>47038131 How many threads are you shitposting this on? >>47035797 >Criticize people for "selling out" or "being a hack" for programming in anything but C >Submits coursework in Python >Prerequisites explicitly states Scheme and Java are strongly preferred >Regularly discusses shit like "Lets be the first people to write a glNext game engine, we'll earn a fortune!" Average day in CS. >>47039067 >Scheme Stopped reading. >>47039067 I wish I was taught C properly. I've had one course on it and the rest are microcontroller programming. I still have no idea how to do make files or link/make libraries >>47039087 Scheme is still popular for teaching data structures. 
What's the problem? >>47039117 >What's the problem? It's a shit. >>47039123 >>47039123 Says the tripfag >>47039103 Haven't got to microcontroller programming yet, I suspect it's gonna be in Matlab. >>47039158 I dont think you can use matlab for microcontrollers, or rather you shouldn't. What's a good place to put all my libraries and compiled things in Windows. I feel like I'm clogging C:/ directory, I have a bunch of OpenGl context libraries compiled, boost, ant, ijmage processing libraries, misc. All on C:/ in their respective directories. >>47039184 C:/cp/ (Stands for computer programs). I don't know how to code at all, but I'm trying to make this simple Python script to control my media player with my Wiimote. keyboard.setKey( Key.Alt, and Key.Equals, wiimote[0].buttons.button_down(WiimoteButtons.DPadRight) ) I stole most of the structure from another code. I don't know how to set two keys for the same button so I just assumed "and" would join them, but it doesn't work so could anybody tell me how to join "Key.Alt" and "Key.Equals" in there? >>47039178 You can on the more powerful microcontrollers and through a serial port, they've also got a Matlab to C compiler thing. >>47039233 I don't know what you're using there, but try doing it in 2 lines:keyboard.setKey( Key.Alt, wiimote[0].buttons.button_down(WiimoteButtons.DPadRight) ) keyboard.setKey( Key.Equals, wiimote[0].buttons.button_down(WiimoteButtons.DPadRight) ) So I'm learning abut trees in my data structures class and am trying to make a class for it. 
I keep getting this compiler error and I dont know how I'm getting it.└[~/school/data_structures-CS2250/proj2]$make g++ -c -std=c++11 -Wall -pedantic main.cpp -o main.o In file included from main.cpp:2:0: main.h:11:8: error: invalid use of non-static data member ‘intTree::root’ node *root; ^ main.h:20:37: error: from this location bool intTree::grow(int a, node* b = root){ ^ main.h:11:8: error: invalid use of non-static data member ‘intTree::root’ node *root; ^ main.h:48:41: error: from this location void intTree::view(int a = 0, node* b = root){ ^ Makefile:13: recipe for target 'main.o' failed make: *** [main.o] Error 1 Here is the code, I'm still working on it so its not finished but it should work if I can fix this compiler error. Actually here's the entire code, if anybody sees something off here (I don't know what any of this shit means) please help.def wiimoteButtonUpdates (): #Volume keyboard.setKey( Key.Alt, Key.Equals, wiimote[0].buttons.button_down(WiimoteButtons.DPadRight) ) keyboard.setKey( Key.Alt, Key.Minus, wiimote[0].buttons.button_down(WiimoteButtons.DPadLeft) ) #Timeline keyboard.setKey( Key.LeftArrow, wiimote[0].buttons.button_down(WiimoteButtons.DPadUp) ) keyboard.setKey( Key.RightArrow, wiimote[0].buttons.button_down(WiimoteButtons.DPadDown) ) keyboard.setKey( Key.Space, wiimote[0].buttons.button_down(WiimoteButtons.One) ) #Time keyboard.setKey( Key.Enter, wiimote[0].buttons.button_down(WiimoteButtons.B) ) keyboard.setKey( Key.Alt, Key.Ctrl, wiimote[0].buttons.button_down(WiimoteButtons.Home) ) #Frames keyboard.setKey( Key.Ctrl, Key.E wiimote[0].buttons.button_down(WiimoteButtons.Two) ) keyboard.setKey( Key.Ctrl, Key.A wiimote[0].buttons.button_down(WiimoteButtons.A) ) #Subtitles keyboard.setKey( Key.Alt, Key.H wiimote[0].buttons.button_down(WiimoteButtons.Minus) ) if starting: system.setThreadTiming(TimingTypes.HighresSystemTimer) system.threadExecutionInterval = 2 wiimote[0].buttons.update += wiimoteButtonUpdates >>47039276 Good idea! 
I'm using FreePIE. Now I'm getting "unexpected token 'wiimote'"... >>47039305 I don't think you can do that kind of thing in function declarations. Why are you even putting that as an argument and not just assigning the global variable within the function? or just passing it normally? >>47039362 What do you mean? >>47039325 Paste the whole error message. >>47039406 instead of saying node* b = root why not just node* b? >>47039413 Sadly that's it... >>47039435 Because I'm using the function recursively. Did you look at the pastbin?bool intTree::grow(int a, node* b = root){ if (b == nullptr){ node* nNode = nullptr; nNode = new node; if (nNode == nullptr) {return false;} nNode->data = a; nNode->count = 1; nNode->left = nNode->right = nullptr; return true; } else if (a == b->data){ b->count++; return true; } else if (a < b->data) grow(a,b->left); else grow(a,b->right); //should not reach this std::cout << "ERROR" << std::endl; return false; } >>47039473 I did, but that doesn't mean you should be doing that at all, all it looks like you're doing is overwriting b with root every call >>47039500 Hmm, maybe I'm not understanding pointers completely then. If I did node* b = root, does b not point to the data that root is pointing to? So in the first call (tree.grow(1)) b would essentially be root, but in the recursive calls b would be the left or right branches. >>47039577 If you think that b is assigned the data root is pointing to in the first call, why not in the recursive call as well? try deleting the = root and call it with root instead and see if it worksgrow(1, root) >>47039446 Try changing the last line with this, just to hopefully get more details on the error:try: wiimote[0].buttons.update += wiimoteButtonUpdates except Exception, e: print e If you wrote a function that changes the state of multiple GUI controls then what would you name the function? >>47039611 No luck. I tried removing the entire if starting: paragraph and still the same thing. 
I'm asking about it on the forums of FreePIE so hopefully I'll get an answer about it in the next days... >>47039607 So Ive tried that and now I'm getting this.└[~/school/data_structures-CS2250/proj2]$make g++ -c -std=c++11 -Wall -pedantic main.cpp -o main.o In file included from main.cpp:2:0: main.h:49:6: error: default argument missing for parameter 2 of ‘void intTree::view(int, intTree::node*)’ void intTree::view(int a = 0, node* b){ ^ Makefile:13: recipe for target 'main.o' failed make: *** [main.o] Error 1 What I dont understand is, in the code before, why I was getting "invalid use of non-static data member ‘intTree::root’". >>47039700 Yeah, please ignore me. just put static before node* root. Its been a while since I used c++ >>47039757 If I do that wouldn't I just get one root and not a root for each instance of the intTree class? What am I doing that root needs to be static? Also Ive tied that and got this:└[~/school/data_structures-CS2250/proj2]$make g++ -c -std=c++11 -Wall -pedantic main.cpp -o main.o g++ main.o -o _main main.o: In function `main': main.cpp:(.text+0x278): undefined reference to `intTree::root' main.o: In function `intTree::intTree()': main.cpp:(.text._ZN7intTreeC2Ev[_ZN7intTreeC5Ev]+0xb): undefined reference to `intTree::root' collect2: error: ld returned 1 exit status Makefile:10: recipe for target 'all' failed make: *** [all] Error 1 >>47036373 >in Python it's just a tacked on library Because that's all it needs to be. >>47036628 >>47036672 >>47036646 What is this, the "I enjoy proprietary software which has crappy Visual Basic features tacked on" club? Fight me. >>47039184 Why don't you make a libraries or code directory. has anyone ever used git-encrypt? >>47039305 You should not be passing around member root anyway, but it isn't valid because, at compile time, there is no way to guaranteed that root will exist. If root were a static member, it would be valid. 
My suggestions: >>47039835 >If I do that wouldn't I just get one root and not a root for each instance of the intTree class Yes, so don't. >What am I doing that root needs to be static? You are attempting to pass it as a default parameter. >>47039700 >So Ive tried that and now I'm getting this. A default argument can only be specified after all parameters that have no defaults. In other words, any parameter to the right of a parameter with defaults must also have defaults. Also, this: >>47040269 >>47040052 roooooollllll someone yesterday asked about what callbacks were, and I gave an example. for anyone else wondering, here is the example with comments, somewhat31905 Why don't you work on THIS? *unzips dick* >>47031276 >Learning programming to genuinely fund the other things I want to do >At the same time enjoy it even though I suck at it >Nobody here tells me how to get better or what mistakes I make to look out for >Just call my code shit Just reminding you that you're all faggots >>47040517 now if only I can copy / paste correctlyfunction coolThingsToDo40544 Well, maybe you should stop writing shit code >>47040572 Then explain how its shit. That's like telling a cook >Your cooking is shit They know its shit. Then tell them how to improve instead of twiddling around with your thumb up your ass >>47040589 Maybe if someone asks for criticism, but if someone just posts some shitty code, I'm going to call it shit. >>47040589 it takes too much human capital to invest in anon like that. we arnt your teacher / professor nor your mentor or even your boss. if you have a specific question, it may get answered. but there is no value to anyone here in diagnosing why you are bad. you need to provide value, and you will get value back. congratulations, you are now an adult. 
>>47040634 Then you'll just keep being some smug unfunny le cs grad circlejerking cancer >>47040653 >It takes too much effort to invest in someone >Implying everyone has access to those resources So you're telling me its too hard for an experienced coder to take one look at someone whos new to it all for one second, and see what they could improve on. Or even offer general generic advice. >>47040697 Helping the competition is frowned upon. You can always pick a book or get a tutor. >>47040697 >Or even offer general generic advice. my general generic advice for the day is, use the google. >>47040697 >People are obligated to help you >Implying that it isn't a lot of effort to understand a chunk of code, especially one that has shit structure >general generic advice "Learn C and a RISC assembly" >>47040737 first project: tornado. >>47040737 Never said anyone was obligated, but whenever someone obviously new posts and all of a sudden its >Wow look at how shit this is To try and laugh some poor faggot out. I'm not saying to go full hugbox but holy shit there's a reason people don't like you >>47040716 >>47040717 Those three are pretty much the basics of how anyone gets started, but its practically right. >>47040716 >can't into coöperatition New Thread: >>47040809 >>47040809 >>47040809 >>47031555 ew angular
https://4archive.org/board/g/thread/47031276/dpt-daily-programming-thread
Build a Kubernetes cluster with the Raspberry Pi

Install Kubernetes on several Raspberry Pis for your own "private cloud at home" container service.

Nothing says "cloud" quite like Kubernetes, and nothing screams "cluster me!" quite like Raspberry Pis. Running a local Kubernetes cluster on cheap Raspberry Pi hardware is a great way to gain experience managing and developing on a true cloud technology giant.

Install a Kubernetes cluster on Raspberry Pis

This exercise will install a Kubernetes 1.18.2 cluster on three or more Raspberry Pi 4s running Ubuntu 20.04. Ubuntu 20.04 (Focal Fossa) offers a Raspberry Pi-focused 64-bit ARM (ARM64) image with both a 64-bit kernel and userspace. Since the goal is to use these Raspberry Pis for running a Kubernetes cluster, the ability to run AArch64 container images is important: it can be difficult to find 32-bit images for common software or even standard base images. With its ARM64 image, Ubuntu 20.04 allows you to use 64-bit container images with Kubernetes.

AArch64 vs. ARM64; 32-bit vs. 64-bit; ARM vs. x86

Note that AArch64 and ARM64 are effectively the same thing. The different names arise from their use within different communities. Many container images are labeled AArch64 and will run fine on systems labeled ARM64. Systems with AArch64/ARM64 architecture are capable of running 32-bit ARM images, but the opposite is not true: a 32-bit ARM system cannot run 64-bit container images. This is why the Ubuntu 20.04 ARM64 image is so useful.

Without getting too deep in the woods explaining different architecture types, it is worth noting that ARM64/AArch64 and x86_64 architectures differ, and Kubernetes nodes running on 64-bit ARM architecture cannot run container images built for x86_64.
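If you do end up with a mixed-architecture cluster, one lightweight way to keep an image off the wrong architecture (short of the taints and tolerations discussed next) is a nodeSelector on the well-known kubernetes.io/arch node label, which the kubelet sets automatically. A minimal sketch; the pod and container names here are illustrative, not from this article:

```yaml
# Hypothetical pod spec: schedule only onto 64-bit ARM nodes
apiVersion: v1
kind: Pod
metadata:
  name: arm64-only-example        # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64     # well-known label set by the kubelet
  containers:
  - name: web                     # illustrative name
    image: nginx                  # assumes an image with an arm64 variant
```

An equivalent pod pinned to your x86_64 nodes would instead use kubernetes.io/arch: amd64.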
In practice, you will find some images that are not built for both architectures and may not be usable in your cluster. You will also need to build your own images on an AArch64-based system or jump through some hoops to allow your regular x86_64 systems to build AArch64 images. In a future article in the "private cloud at home" project, I will cover how to build AArch64 images on your regular system.

For the best of both worlds, after you set up the Kubernetes cluster in this tutorial, you can add x86_64 nodes to it later. You can have Kubernetes' scheduler run images of a given architecture on the appropriate nodes through the use of Kubernetes taints and tolerations.

Enough about architectures and images. It's time to install Kubernetes, so get to it!

Requirements

The requirements for this exercise are minimal. You will need:

- Three (or more) Raspberry Pi 4s (preferably the 4GB RAM models)
- Ubuntu 20.04 ARM64 installed on all the Raspberry Pis

To simplify the initial setup, read Modify a disk image to create a Raspberry Pi-based homelab to add a user and SSH authorized_keys to the Ubuntu image before writing it to an SD card and installing on the Raspberry Pi.

Configure the hosts

Once Ubuntu is installed on the Raspberry Pis and they are accessible via SSH, you need to make a few changes before you can install Kubernetes.

Install and configure Docker

As of this writing, Ubuntu 20.04 ships the most recent version of Docker, v19.03, in the base repositories, and it can be installed directly using the apt command. Note that the package name is docker.io. Install Docker on all of the Raspberry Pis:

# Install the docker.io package
$ sudo apt install -y docker.io

After the package is installed, you need to make some changes to enable cgroups (Control Groups). Cgroups allow the Linux kernel to limit and isolate resources.
Practically speaking, this allows Kubernetes to better manage resources used by the containers it runs and increases security by isolating containers from one another. Check the output of docker info before making the following changes on all of the RPis:

# Check `docker info`
# Some output omitted
$ sudo docker info
(...)
Cgroup Driver: cgroupfs
(...)
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support

The output above highlights the bits that need to be changed: the cgroup driver and limit support.

First, change the default cgroup driver Docker uses from cgroupfs to systemd, to allow systemd to act as the cgroup manager and ensure there is only one cgroup manager in use. This helps with system stability and is recommended by Kubernetes. To do this, create or replace the /etc/docker/daemon.json file (note the use of sudo tee rather than a plain redirect, so the write itself runs with root privileges):

# Create or replace the contents of /etc/docker/daemon.json to enable the systemd cgroup driver
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Enable cgroups limit support

Next, enable limit support, as shown by the warnings in the docker info output above. You need to modify the kernel command line to enable these options at boot. For the Raspberry Pi 4, add the following to the /boot/firmware/cmdline.txt file:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1

Make sure they are added to the end of the existing line in the cmdline.txt file.
This can be accomplished in one line using sed:

# Append the cgroups and swap options to the kernel command line
# Note the space before "cgroup_enable=cpuset", to add a space after the last existing item on the line
$ sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/firmware/cmdline.txt

The sed command matches the termination of the line (represented by the first $), replacing it with the options listed (it effectively appends the options to the line).

With these changes, Docker and the kernel should be configured as needed for Kubernetes. Reboot the Raspberry Pis, and when they come back up, check the output of docker info again. The Cgroup driver is now systemd, and the warnings are gone.

Allow iptables to see bridged traffic

According to the documentation, Kubernetes needs iptables to be configured to see bridged network traffic. You can do this by changing the sysctl config:

# Enable net.bridge.bridge-nf-call-iptables and -ip6tables
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system

Install the Kubernetes packages for Ubuntu

Since you are using Ubuntu, you can install the Kubernetes packages from the Kubernetes.io Apt repository. There is not currently a repository for Ubuntu 20.04 (Focal), but Kubernetes 1.18.2 is available in the repository for an earlier Ubuntu LTS release, Ubuntu 16.04 (Xenial). The latest Kubernetes packages can be installed from there. Add the Kubernetes repo to Ubuntu's sources:

# Add the packages.cloud.google.com apt key
$ curl -s | sudo apt-key add -

# Add the Kubernetes repo
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb kubernetes-xenial main
EOF

When Kubernetes adds a Focal repository—perhaps when the next Kubernetes version is released—make sure to switch to it.
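Incidentally, the sed append pattern used above for cmdline.txt is easy to rehearse on a scratch file before touching the real boot configuration. A sketch; the scratch path and its initial contents are made up purely for the demonstration:

```shell
# Hypothetical scratch file standing in for /boot/firmware/cmdline.txt
printf 'console=tty1 root=/dev/mmcblk0p2 rootwait\n' > /tmp/cmdline-demo.txt

# '$' as the sed address selects only the last line; 's/$/ .../' then replaces
# the end-of-line anchor with the new options, effectively appending them
sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /tmp/cmdline-demo.txt

# The options now sit on the same (single) line, space-separated
cat /tmp/cmdline-demo.txt
```

Because the substitution is anchored to the last line only, the pattern is safe even if the file ever grows extra lines.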
With the repository added to the sources list, install the three required Kubernetes packages: kubelet, kubeadm, and kubectl:

# Update the apt cache and install kubelet, kubeadm, and kubectl
# (Output omitted)
$ sudo apt update && sudo apt install -y kubelet kubeadm kubectl

Finally, use the apt-mark hold command to disable regular updates for these three packages. Upgrades to Kubernetes need more hand-holding than is possible with the general update process and will require manual attention:

# Disable (mark as held) updates for the Kubernetes packages
$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

That is it for the host configuration! Now you can move on to setting up Kubernetes itself.

Create a Kubernetes cluster

With the Kubernetes packages installed, you can continue on with creating a cluster. Before getting started, you need to make some decisions. First, one of the Raspberry Pis needs to be designated the Control Plane (i.e., primary) node. The remaining nodes will be designated as compute nodes.

You also need to pick a network CIDR to use for the pods in the Kubernetes cluster. Setting the pod-network-cidr during the cluster creation ensures that the podCIDR value is set and can be used by the Container Network Interface (CNI) add-on later. This exercise uses the Flannel CNI. The CIDR you pick should not overlap with any CIDR currently used within your home network nor one managed by your router or DHCP server. Make sure to use a subnet that is larger than you expect to need: there are ALWAYS more pods than you initially plan for! In this example, I will use 10.244.0.0/16, but pick one that works for you.

With those decisions out of the way, you can initialize the Control Plane node. SSH or otherwise log into the node you have designated for the Control Plane.

Initialize the Control Plane

Kubernetes uses a bootstrap token to authenticate nodes being joined to the cluster.
This token needs to be passed to the kubeadm init command when initializing the Control Plane node. Generate a token to use with the kubeadm token generate command:

# Generate a bootstrap token to authenticate nodes joining the cluster
$ TOKEN=$(sudo kubeadm token generate)
$ echo $TOKEN
d584xg.xupvwv7wllcpmwjy

You are now ready to initialize the Control Plane, using the kubeadm init command:

# Initialize the Control Plane
# (output omitted)
$ sudo kubeadm init --token=${TOKEN} --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16

If everything is successful, you will see a confirmation message at the end of the output. Make a note of two things: first, the Kubernetes kubectl connection information has been written to /etc/kubernetes/admin.conf. This kubeconfig file can be copied to ~/.kube/config, either for root or a normal user on the master node or to a remote machine. This will allow you to control your cluster with the kubectl command. Second, the last line of the output starting with kubeadm join is a command you can run to join more nodes to the cluster.

After copying the new kubeconfig to somewhere your user can use it, you can validate that the Control Plane has been installed with the kubectl get nodes command:

# Show the nodes in the Kubernetes cluster
# Your node name will vary
$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
elderberry   Ready    master   7m32s   v1.18.2

Install a CNI add-on

A CNI add-on handles configuration and cleanup of the pod networks. As mentioned, this exercise uses the Flannel CNI add-on. With the podCIDR value already set, you can just download the Flannel YAML and use kubectl apply to install it into the cluster. This can be done on one line using kubectl apply -f - to take the data from standard input. This will create the ClusterRoles, ServiceAccounts, and DaemonSets (etc.) necessary to manage the pod networking.
Download and apply the Flannel YAML data to the cluster:

# Download the Flannel YAML data and apply it
# (output omitted)
$ curl -sSL | kubectl apply -f -

Join the compute nodes to the cluster

With the CNI add-on in place, it is now time to add compute nodes to the cluster. Joining the compute nodes is just a matter of running the kubeadm join command provided at the end of the kubeadm init command run to initialize the Control Plane node. For the other Raspberry Pis you want to join your cluster, log into the host, and run the command:

# Join a node to the cluster - your tokens and ca-cert-hash will vary
$ sudo

Once you have completed the join process on each node, you should be able to see the new nodes in the output of kubectl get nodes:

# Show the nodes in the Kubernetes cluster
# Your node name will vary
$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
elderberry    Ready    master   7m32s   v1.18.2
gooseberry    Ready    <none>   2m39s   v1.18.2
huckleberry   Ready    <none>   17s     v1.18.2

Validate the cluster

At this point, you have a fully working Kubernetes cluster. You can run pods, create deployments and jobs, etc. You can access applications running in the cluster from any of the nodes in the cluster using Services. You can achieve external access with a NodePort service or ingress controllers.

To validate that the cluster is running, create a new namespace, deployment, and service, and check that the pods running in the deployment respond as expected. This deployment uses the quay.io/clcollins/kube-verify:01 image—an Nginx container listening for requests (actually, the same image used in the article Add nodes to your private cloud using Cloud-init). You can view the image Containerfile here.

Create a namespace named kube-verify for the deployment:

kube-verify   Active   19s

Now, create a deployment in the new namespace. Kubernetes will now start creating the deployment, consisting of three pods, each running the quay.io/clcollins/kube-verify:01 image.
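As an illustration, a deployment matching what the text describes (three replicas of the quay.io/clcollins/kube-verify:01 image, labeled app: kube-verify and listening on port 8080, which the service created later selects) could be sketched as follows. Treat it as a hypothetical reconstruction, not the author's exact manifest:

```yaml
# Hypothetical deployment manifest consistent with the surrounding text
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-verify
  namespace: kube-verify
spec:
  replicas: 3                      # matches the three pods seen in the output
  selector:
    matchLabels:
      app: kube-verify
  template:
    metadata:
      labels:
        app: kube-verify           # must match the service selector
    spec:
      containers:
      - name: kube-verify          # illustrative container name
        image: quay.io/clcollins/kube-verify:01
        ports:
        - containerPort: 8080      # the service's targetPort
```

A manifest like this would be applied with kubectl create -f, exactly as done for the service below.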
After a minute or so, the new pods should be running, and you can view them with kubectl get all -n kube-verify to list all the resources created in the new namespace:

# Check the resources that were created by the deployment
$ kubectl get all -n kube-verify
NAME                               READY   STATUS    RESTARTS   AGE
pod/kube-verify-5f976b5474-25p5r   0/1     Running   0          46s
pod/kube-verify-5f976b5474-sc7zd   1/1     Running   0          46s
pod/kube-verify-5f976b5474-tvl7w   1/1     Running   0          46s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-verify   3/3     3            3           47s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-verify-5f976b5474   3         3         3       47s

You can see the new deployment, a replicaset created by the deployment, and three pods created by the replicaset to fulfill the replicas: 3 request in the deployment. You can see the internals of Kubernetes are working.

Now, create a Service to expose the Nginx "application" (or, in this case, the Welcome page) running in the three pods. This will act as a single endpoint through which you can connect to the pods:

# Create a service for the deployment
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: kube-verify
  namespace: kube-verify
spec:
  selector:
    app: kube-verify
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF
service/kube-verify created

With the service created, you can examine it and get the IP address for your new service:

# Examine the new service
$ kubectl get -n kube-verify service/kube-verify
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kube-verify   ClusterIP   10.98.188.200   <none>        80/TCP    30s

You can see that the kube-verify service has been assigned a ClusterIP (internal to the cluster only) of 10.98.188.200. This IP is reachable from any of your nodes, but not from outside of the cluster.
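To reach the service from outside the cluster without an ingress controller, the NodePort type mentioned earlier is the simplest option. A hypothetical variant of the same service; the 30080 value is an arbitrary pick from the default 30000-32767 NodePort range:

```yaml
# Hypothetical NodePort variant of the kube-verify service
apiVersion: v1
kind: Service
metadata:
  name: kube-verify
  namespace: kube-verify
spec:
  type: NodePort                   # expose on a static port on every node
  selector:
    app: kube-verify
  ports:
  - protocol: TCP
    port: 80                       # in-cluster service port, as before
    targetPort: 8080               # container port, as before
    nodePort: 30080                # arbitrary choice from the default range
```

With this in place, browsing to port 30080 on any node's IP from a machine on your home network should reach the same pods.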
You can verify the containers inside your deployment are working by connecting to them at this IP:

# Use curl to connect to the ClusterIP:
# (output truncated for brevity)
$ curl 10.98.188.200
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">
<html xmlns="" xml:
<head>

Success! Your service is running and Nginx inside the containers is responding to your requests.

At this point, you have a running Kubernetes cluster on your Raspberry Pis with a CNI add-on (Flannel) installed and a test deployment and service running an Nginx webserver.

In the large public clouds, Kubernetes has different ingress controllers to interact with different solutions, such as the recently-covered Skipper project. Similarly, private clouds have ingress controllers for interacting with hardware load balancer appliances (like F5 Networks' load balancers) or Nginx and HAProxy controllers for handling traffic coming into the nodes.

In a future article, I will tackle exposing services in the cluster to the outside world by installing your own ingress controller. I will also look at dynamic storage provisioners and StorageClasses for allocating persistent storage for applications, including making use of the NFS server you set up in a previous article, Turn your Raspberry Pi homelab into a network filesystem, to create on-demand storage for your pods.

Go forth, and Kubernetes

"Kubernetes" (κυβερνήτης) is Greek for pilot—but does that mean the individual who steers a ship as well as the action of guiding the ship? Eh, no. "Kubernan" (κυβερνάω) is Greek for "to pilot" or "to steer," so go forth and Kubernan, and if you see me out at a conference or something, give me a pass for trying to verb a noun. From another language. That I don't speak.

Disclaimer: As mentioned, I don't read or speak Greek, especially the ancient variety, so I'm choosing to believe something I read on the internet. You know how that goes.
Take it with a grain of salt, and give me a little break since I didn't make an "It's all Greek to me" joke. However, just mentioning it, I, therefore, was able to make the joke without actually making it, so I'm either sneaky or clever or both. Or, neither. I didn't claim it was a good joke.

So, go forth and pilot your containers like a pro with your own Kubernetes container service in your private cloud at home! As you become more comfortable, you can modify your Kubernetes cluster to try different options, like the aforementioned ingress controllers and dynamic StorageClasses for persistent volumes. This continuous learning is at the heart of DevOps, and the continuous integration and delivery of new services mirrors the agile methodology, both of which we have embraced as we've learned to deal with the massive scale enabled by the cloud and discovered our traditional practices were unable to keep pace.

Look at that! Technology, policy, philosophy, a tiny bit of Greek, and a terrible meta-joke, all in one article!

25 Comments

Definitely want to try it out!!

So cool! Can you imagine that in the new Rasp 4 8GB?!

I definitely want to get a bunch of the 8GB models now. There's certain workloads that need more than 4 gigabytes of RAM sometimes, and you just can't run those on a cluster if none of the nodes have 4 gigabytes to begin with!

The Raspberry Pis are perfect for playing around with a Kubernetes cluster at home. They're so inexpensive, and well supported. It's easy to get a number of them and replicate how Kubernetes would be deployed in a production environment.

Chris, awesome article, right on time. Today I got 4 Raspberry Pi 4 8GB for creating a cluster. I will try it out and see how it goes based on your article.

I was looking for a detailed article like this. I've found it.
If you're just playing around for educational purposes, you better specify the pod network cidr as 10.244.0.0/16, otherwise you might get into issues. Tested :) Thanks for your feedback! Would you mind elaborating? I did not set the pod network cidr directly, but it was created with 10.244.0.0/24. (Not /16 as you suggest.) I do see the flannel config is hard-coded to 10.244.0.0/16. What am I missing? Thanks! Really clear tutorial - thanks Just one question... in the validate steps, should I be able to curl the ClusterIP address from the master? I only seem to be able to do it from the worker node (I only have one), not sure if this is expected or not? Thanks You should be able to curl the ClusterIP from the master with no problem. I'm not sure why you are not able to. I verified it on the cluster I used to write this article, and it worked without issue. Any chance it might be a routing issue? Maybe an IP conflict with the ClusterIP? Complete-noob question: Is this (Kubernetes on a Raspberry Pi cluster) a reasonable way to "load-balance" multiple ssh logins across three or four Raspberry Pis? I expect to support a group of students who need remote, basic access to a 64-bit Raspberry Pi. Thanks. In my opinion that would be complete overkill for load balancing SSH logins, and even so, that would be SSH logins into the containers, so not directly into the Pis themselves. Unless, that is what you intended. I think it would be a lot of overhead still, though. You might be better off with something like a round-robin DNS setup - this is something we did at a previous job. You could also use a proxy like HAProxy to handle the load balancing. That would probably be my suggestion for something like that. Thanks! I'm not in control of the DNS (I get one IP address). I'll dig into HAProxy. just tried editing the file /boot/firmware/cmdline.txt, does not exist on my raspberry pi 4. On my version on PI cmdline.txt is located within /boot Thanks for sharing that. 
Yes, I think the /boot/firmware directory is a newer thing. I *think* it's a distribution thing, but I could be wrong on that.

I'm pretty sure I applied those cgroups settings correctly, but they don't seem to have taken effect:

    docker info
    WARNING: No swap limit support
    WARNING: No cpu cfs quota support
    WARNING: No cpu cfs period support

    pi@k8wk-2:~ $ cat /boot/cmdline.txt
    console=serial0,115200 console=tty1 root=PARTUUID=38313a2a-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1

That all looks right to me. Have you rebooted the Pi since making the changes?

Yes, multiple times.

    pi@k8wk-2:~ $ uname -a
    Linux k8wk-2 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l GNU/Linux

I'm having an issue where kubeadm refuses to initialize on my Pi 4:

    127.0.0.1:10248: connect: connection refused.
    Unfortunately, an error has occurred:
        timed out waiting for the condition
    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

I believe I read somewhere that etcd is failing because it hasn't actually been tested for arm64 - but if that were the case, then everyone would have the same issue. Has anyone seen this issue before?

Same here. The main difference is that I'm using Raspberry Pi OS 10 (Debian Buster), since I need support for SSD boot. I did try with Ubuntu 20.04 and it works.

Hello, very good article. My whole Kubernetes cluster works, but when I do a curl on a service I have no response. Do you have an idea why? Thanks for this article.

One more thank you for the post. Consider replacing `sudo cat > /etc/docker/daemon.json <

Great article. I was getting a permission issue when creating `/etc/docker/daemon.json`.
This worked for me:

```bash
$ sudo tee -a /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
```

Chris, very nice article! Just an FYI regarding versioning: it looks like kubelet was bumped in the repository to 1.19.0 six days ago, which is throwing errors. You may want to include package versions in the `apt install` command as the next minor versions make it out the door.

    sudo kubeadm init --token=${TOKEN} --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16
    W0831 21:35:28.472959 1108 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.18.2
    [preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
    error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
        [ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.19.0" Control plane version: "1.18.2"
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
https://opensource.com/article/20/6/kubernetes-raspberry-pi
SYNOPSIS

    #include <tracefs.h>

    struct tracefs_synth *tracefs_sql(struct tep_handle *tep, const char *name,
                                      const char *sql_buffer, char **err);

DESCRIPTION

Synthetic events are dynamically created events that attach two existing events together via one or more matching fields between the two events. They can be used to find the latency between the events, or to simply pass fields of the first event on to the second event to display as one event.

The Linux kernel interface to create synthetic events is complex, and there needs to be a better way to create synthetic events that is easy and can be understood via existing technology.

If you think of each event as a table, where the fields are the columns of the table and each instance of the event is a row, you can understand how SQL can be used to attach two events together and form another event (table). Utilizing the SQL SELECT FROM JOIN ON [ WHERE ] syntax, a synthetic event can easily be created from two different events. For simple SQL queries that make a histogram instead of a synthetic event, see HISTOGRAMS below.

tracefs_sql() takes a tep handler (see tep_local_events(3)) that is used to verify the events within the sql_buffer expression. The name is the name of the synthetic event to create. If err points to an address of a string, it will be filled with a detailed message on any type of parsing error, including fields that do not belong to an event, or events and fields that are not properly compared.

The example program below is a fully functional parser that will create a synthetic event from a SQL statement passed in via the command line or a file. The SQL format is as follows:

    SELECT <fields> FROM <start-event> JOIN <end-event> ON <matching-fields> WHERE <filter>

Note, although the examples show the SQL commands in uppercase, they are not required to be. That is, you can use "SELECT" or "select" or "sElEct".
For example:

    SELECT syscalls.sys_enter_read.fd, syscalls.sys_exit_read.ret
      FROM syscalls.sys_enter_read JOIN syscalls.sys_exit_read
      ON syscalls.sys_enter_read.common_pid = syscalls.sys_exit_read.common_pid

will create a synthetic event with the fields:

    u64 fd;
    s64 ret;

Because the function takes a tep handle, and usually all event names are unique, you can leave off the system (group) name of the event, and tracefs_sql() will discover the system for you. That is, the above statement would work with:

    SELECT sys_enter_read.fd, sys_exit_read.ret
      FROM sys_enter_read JOIN sys_exit_read
      ON sys_enter_read.common_pid = sys_exit_read.common_pid

The AS keyword can be used to name the fields as well as to give an alias to the events, such that the above can be simplified even more as:

    SELECT start.fd, end.ret
      FROM sys_enter_read AS start JOIN sys_exit_read AS end
      ON start.common_pid = end.common_pid

The above aliases sys_enter_read as start and sys_exit_read as end, and uses those aliases to reference the events throughout the statement.

Using the AS keyword in the selection portion of the SQL statement defines what those fields will be called in the synthetic event:

    SELECT start.fd AS field, end.ret AS return
      FROM sys_enter_read AS start JOIN sys_exit_read AS end
      ON start.common_pid = end.common_pid

The above labels the fd of start as field and the ret of end as return, so the synthetic event that is created will now have the fields:

    u64 field;
    s64 return;

The fields can also be calculated, with the results passed to the synthetic event:

    select start.truesize, end.len, (start.truesize - end.len) as diff
      from napi_gro_receive_entry as start JOIN netif_receive_skb as end
      ON start.skbaddr = end.skbaddr

which would show the truesize of the napi_gro_receive_entry event, the actual len of the content shown by the netif_receive_skb event, and the delta between the two, expressed by the field diff.
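As an aside, the JOIN ... ON semantics described above can be modeled in a few lines of Python. This is purely illustrative (the event values below are made up, and in the kernel the matching is done by histogram triggers, not user code): each start event is buffered keyed by the matching field, and one synthetic record is emitted when a matching end event arrives.

```python
# Illustrative model of JOIN ... ON for synthetic events (made-up data):
# buffer start events by the join key, emit a record on a matching end event.

enter = [{"ts": 1, "common_pid": 42, "fd": 3}, {"ts": 5, "common_pid": 43, "fd": 7}]
exit_ = [{"ts": 2, "common_pid": 42, "ret": 100}, {"ts": 6, "common_pid": 43, "ret": -1}]

def join_events(starts, ends, key):
    pending = {}   # join-key value -> buffered start event
    synth = []
    tagged = [("start", e) for e in starts] + [("end", e) for e in ends]
    for kind, e in sorted(tagged, key=lambda p: p[1]["ts"]):
        if kind == "start":
            pending[e[key]] = e
        elif e[key] in pending:
            s = pending.pop(e[key])
            synth.append({"fd": s["fd"], "ret": e["ret"]})
    return synth

print(join_events(enter, exit_, "common_pid"))
# [{'fd': 3, 'ret': 100}, {'fd': 7, 'ret': -1}]
```

The output carries one field from each side of the join, just as the SELECT start.fd, end.ret example does.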
The code also supports recording the timestamps at either event and performing calculations on them. For wakeup latency, you have:

    select start.pid, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) as lat
      from sched_waking as start JOIN sched_switch as end
      ON start.pid = end.next_pid

The above will create a synthetic event that records the pid of the task being woken up and the time difference between the sched_waking event and the sched_switch event. TIMESTAMP_USECS truncates the time down to microseconds, as the timestamp recorded in the tracing buffer usually has nanosecond resolution. If you do not want that truncation, use TIMESTAMP instead of TIMESTAMP_USECS.

Finally, a WHERE clause can be added, which lets you place filters on either or both events:

    select start.pid, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) as lat
      from sched_waking as start JOIN sched_switch as end
      ON start.pid = end.next_pid
      WHERE start.prio < 100 && (!(end.prev_pid < 1 || end.prev_prio > 100) || end.prev_pid == 0)

NOTE: Although both events can be used together in the WHERE clause, they must not be mixed outside the top-most "&&" statements. You cannot OR (||) the events together, where a filter of one event is OR'd with a filter of the other event. This does not make sense, as the synthetic event requires both events to take place to be recorded. If one is filtered out, the synthetic event does not execute.

    select start.pid, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) as lat
      from sched_waking as start JOIN sched_switch as end
      ON start.pid = end.next_pid
      WHERE start.prio < 100 && end.prev_prio < 100

The above is valid, whereas the below is not:

    select start.pid, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) as lat
      from sched_waking as start JOIN sched_switch as end
      ON start.pid = end.next_pid
      WHERE start.prio < 100 || end.prev_prio < 100

KEYWORDS AS EVENT FIELDS

In some cases, an event may have a keyword as a field name.
For example, regcache_drop_region has "from" as a field, and the following will not work:

    select from from regcache_drop_region

In such cases, add a backslash to the conflicting field; this tells the parser that "from" is a field and not a keyword:

    select \from from regcache_drop_region

HISTOGRAMS

Simple SQL statements without the JOIN ON may also be used; these create a histogram instead of a synthetic event. When doing this, the struct tracefs_hist descriptor can be retrieved from the returned synthetic event descriptor via tracefs_synth_get_start_hist(3).

In order to utilize the histogram types (see xxx), the CAST command of SQL can be used. That is:

    select CAST(common_pid AS comm), CAST(id AS syscall) FROM sys_enter

which produces:

    # echo 'hist:keys=common_pid.execname,id.syscall' > events/raw_syscalls/sys_enter/trigger
    # cat events/raw_syscalls/sys_enter/hist
    { common_pid: bash [ 18248], id: sys_setpgid [109] } hitcount: 1
    { common_pid: sendmail [ 1812], id: sys_read [ 0] } hitcount: 1
    { common_pid: bash [ 18247], id: sys_getpid [ 39] } hitcount: 1
    { common_pid: bash [ 18247], id: sys_dup2 [ 33] } hitcount: 1
    { common_pid: gmain [ 13684], id: sys_inotify_add_watch [254] } hitcount: 1
    { common_pid: cat [ 18247], id: sys_access [ 21] } hitcount: 1
    { common_pid: bash [ 18248], id: sys_getpid [ 39] } hitcount: 1
    { common_pid: cat [ 18247], id: sys_fadvise64 [221] } hitcount: 1
    { common_pid: sendmail [ 1812], id: sys_openat [257] } hitcount: 1
    { common_pid: less [ 18248], id: sys_munmap [ 11] } hitcount: 1
    { common_pid: sendmail [ 1812], id: sys_close [ 3] } hitcount: 1
    { common_pid: gmain [ 1534], id: sys_poll [ 7] } hitcount: 1
    { common_pid: bash [ 18247], id: sys_execve [ 59] } hitcount: 1

Note, string fields may not be cast. The possible types to cast to are:

    HEX        - convert the value to hex instead of decimal
    SYM        - convert a pointer to symbolic (kallsyms values)
    SYM-OFFSET - convert a pointer to symbolic and include the offset
    SYSCALL    - convert the number to the mapped system call name
    EXECNAME or COMM - can only be used with the common_pid field; shows the task name of the process
    LOG or LOG2 - bucket the key values in log 2 buckets (1, 2, 3-4, 5-8, 9-16, 17-32, ...)

The above types are not case sensitive; "LOG2" works as well as "log".

A special CAST to _COUNTER_ or COUNTER will make the field a value and not a key. For example:

    SELECT common_pid, CAST(bytes_req AS _COUNTER_) FROM kmalloc

which will create:

    echo 'hist:keys=common_pid:vals=bytes_req' > events/kmem/kmalloc/trigger
    cat events/kmem/kmalloc/hist
    { common_pid: 1812 } hitcount: 1 bytes_req: 32
    { common_pid: 9111 } hitcount: 2 bytes_req: 272
    { common_pid: 1768 } hitcount: 3 bytes_req: 1112
    { common_pid: 0 } hitcount: 4 bytes_req: 512
    { common_pid: 18297 } hitcount: 11 bytes_req: 2004

RETURN VALUE

tracefs_sql() returns the descriptor of the created synthetic event on success, and NULL on failure. On failure, if err is defined, it will be allocated to hold a detailed description of what went wrong, whether the error was caused by a parsing error or because an event or field does not exist or is not compatible with what it was combined with.

CREATE A TOOL

The below example is a functional program that can be used to parse SQL commands into synthetic events.
    man tracefs_sql | sed -ne '/^EXAMPLE/,/FILES/ { /EXAMPLE/d ; /FILES/d ; p}' > sqlhist.c
    gcc -o sqlhist sqlhist.c `pkg-config --cflags --libs libtracefs`

Then you can run the above examples:

    sudo ./sqlhist 'select start.pid, (end.TIMESTAMP_USECS - start.TIMESTAMP_USECS) as lat
                    from sched_waking as start JOIN sched_switch as end
                    ON start.pid = end.next_pid
                    WHERE start.prio < 100 || end.prev_prio < 100'

EXAMPLE

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdarg.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <tracefs.h>

    static void usage(char **argv)
    {
        fprintf(stderr, "usage: %s [-ed][-n name][-s][-S fields][-m var][-c var][-T][-t dir][-f file | sql-command-line]\n"
                "  -n name - name of synthetic event 'Anonymous' if left off\n"
                "  -t dir  - use dir instead of /sys/kernel/tracing\n"
                "  -e      - execute the commands to create the synthetic event\n"
                "  -m      - trigger the action when var is a new max.\n"
                "  -c      - trigger the action when var changes.\n"
                "  -s      - used with -m or -c to do a snapshot of the tracing buffer\n"
                "  -S      - used with -m or -c to save fields of the end event (comma delimited)\n"
                "  -T      - used with -m or -c to do both a snapshot and a trace\n"
                "  -f file - read sql lines from file otherwise from the command line\n"
                "            if file is '-' then read from standard input.\n",
                argv[0]);
        exit(-1);
    }

    enum action {
        ACTION_DEFAULT  = 0,
        ACTION_SNAPSHOT = (1 << 0),
        ACTION_TRACE    = (1 << 1),
        ACTION_SAVE     = (1 << 2),
        ACTION_MAX      = (1 << 3),
        ACTION_CHANGE   = (1 << 4),
    };

    #define ACTIONS ((ACTION_MAX - 1))

    static int do_sql(const char *instance_name,
                      const char *buffer, const char *name, const char *var,
                      const char *trace_dir, bool execute, int action,
                      char **save_fields)
    {
        struct tracefs_synth *synth;
        struct tep_handle *tep;
        struct trace_seq seq;
        enum tracefs_synth_handler handler;
        char *err;
        int ret;

        if ((action & ACTIONS) && !var) {
            fprintf(stderr, "Error: -s, -S and -T not supported without -m or -c");
            exit(-1);
        }

        if (!name)
            name = "Anonymous";
        trace_seq_init(&seq);
        tep = tracefs_local_events(trace_dir);
        if (!tep) {
            if (!trace_dir)
                trace_dir = "tracefs directory";
            perror(trace_dir);
            exit(-1);
        }

        synth = tracefs_sql(tep, name, buffer, &err);
        if (!synth) {
            perror("Failed creating synthetic event!");
            if (err)
                fprintf(stderr, "%s", err);
            free(err);
            exit(-1);
        }

        if (tracefs_synth_complete(synth)) {
            if (var) {
                if (action & ACTION_MAX)
                    handler = TRACEFS_SYNTH_HANDLE_MAX;
                else
                    handler = TRACEFS_SYNTH_HANDLE_CHANGE;

                if (action & ACTION_SAVE) {
                    ret = tracefs_synth_save(synth, handler, var, save_fields);
                    if (ret < 0) {
                        err = "adding save";
                        goto failed_action;
                    }
                }
                if (action & ACTION_TRACE) {
                    /*
                     * By doing the trace before snapshot, it will be included
                     * in the snapshot.
                     */
                    ret = tracefs_synth_trace(synth, handler, var);
                    if (ret < 0) {
                        err = "adding trace";
                        goto failed_action;
                    }
                }
                if (action & ACTION_SNAPSHOT) {
                    ret = tracefs_synth_snapshot(synth, handler, var);
                    if (ret < 0) {
                        err = "adding snapshot";
     failed_action:
                        perror(err);
                        if (errno == ENODEV)
                            fprintf(stderr, "ERROR: '%s' is not a variable\n", var);
                        exit(-1);
                    }
                }
            }
            tracefs_synth_echo_cmd(&seq, synth);
            if (execute)
                tracefs_synth_create(synth);
        } else {
            struct tracefs_instance *instance = NULL;
            struct tracefs_hist *hist;

            hist = tracefs_synth_get_start_hist(synth);
            if (!hist) {
                perror("get_start_hist");
                exit(-1);
            }
            if (instance_name) {
                if (execute)
                    instance = tracefs_instance_create(instance_name);
                else
                    instance = tracefs_instance_alloc(trace_dir, instance_name);
                if (!instance) {
                    perror("Failed to create instance");
                    exit(-1);
                }
            }
            tracefs_hist_echo_cmd(&seq, instance, hist, 0);
            if (execute)
                tracefs_hist_start(instance, hist);
        }

        tracefs_synth_free(synth);

        trace_seq_do_printf(&seq);
        trace_seq_destroy(&seq);
        return 0;
    }

    int main(int argc, char **argv)
    {
        char *trace_dir = NULL;
        char *buffer = NULL;
        char buf[BUFSIZ];
        int buffer_size = 0;
        const char *file = NULL;
        const char *instance = NULL;
        bool execute = false;
        char **save_fields = NULL;
        const char *name;
        const char *var;
        int action = 0;
        char *tok;
        FILE *fp;
        size_t r;
        int c;
        int i;

        for (;;) {
            c = getopt(argc, argv, "ht:f:en:m:c:sS:TB:");
            if (c == -1)
                break;

            switch (c) {
            case 'h':
                usage(argv);
            case 't':
                trace_dir = optarg;
                break;
            case 'f':
                file = optarg;
                break;
            case 'e':
                execute = true;
                break;
            case 'm':
                action |= ACTION_MAX;
                var = optarg;
                break;
            case 'c':
                action |= ACTION_CHANGE;
                var = optarg;
                break;
            case 's':
                action |= ACTION_SNAPSHOT;
                break;
            case 'S':
                action |= ACTION_SAVE;
                tok = strtok(optarg, ",");
                while (tok) {
                    save_fields = tracefs_list_add(save_fields, tok);
                    tok = strtok(NULL, ",");
                }
                if (!save_fields) {
                    perror(optarg);
                    exit(-1);
                }
                break;
            case 'T':
                action |= ACTION_TRACE | ACTION_SNAPSHOT;
                break;
            case 'B':
                instance = optarg;
                break;
            case 'n':
                name = optarg;
                break;
            }
        }

        if ((action & (ACTION_MAX|ACTION_CHANGE)) == (ACTION_MAX|ACTION_CHANGE)) {
            fprintf(stderr, "Can not use both -m and -c together\n");
            exit(-1);
        }
        if (file) {
            if (!strcmp(file, "-"))
                fp = stdin;
            else
                fp = fopen(file, "r");
            if (!fp) {
                perror(file);
                exit(-1);
            }
            while ((r = fread(buf, 1, BUFSIZ, fp)) > 0) {
                buffer = realloc(buffer, buffer_size + r + 1);
                strncpy(buffer + buffer_size, buf, r);
                buffer_size += r;
            }
            fclose(fp);
            if (buffer_size)
                buffer[buffer_size] = '\0';
        } else if (argc == optind) {
            usage(argv);
        } else {
            for (i = optind; i < argc; i++) {
                r = strlen(argv[i]);
                buffer = realloc(buffer, buffer_size + r + 2);
                if (i != optind)
                    buffer[buffer_size++] = ' ';
                strcpy(buffer + buffer_size, argv[i]);
                buffer_size += r;
            }
        }

        do_sql(instance, buffer, name, var, trace_dir, execute, action, save_fields);
        free(buffer);

        return 0;
    }

FILES

    tracefs.h
        Header file to include in order to have access to the library APIs.
    -ltracefs
        Linker switch to add when building a program that uses the library.
SEE ALSO

sqlhist(1), libtracefs(3), libtraceevent(3), trace-cmd(1), tracefs_synth_init(3), tracefs_synth_add_match_field(3), tracefs_synth_add_compare_field(3), tracefs_synth_add_start_field(3), tracefs_synth_add_end_field(3), tracefs_synth_append_start_filter(3), tracefs_synth_append_end_filter(3), tracefs_synth_create(3), tracefs_synth_destroy(3), tracefs_synth_free(3), tracefs_synth_echo_cmd(3).
https://trace-cmd.org/Documentation/libtracefs/libtracefs-sql.html
University of Illinois, Jan 2001

Aliases: tar_append_eof(3), tar_append_regfile(3)

NAME

tar_append_file, tar_append_eof, tar_append_regfile - append data to tar archives

SYNOPSIS

    #include <libtar.h>

    int tar_append_file(TAR *t, char *realname, char *savename);
    int tar_append_regfile(TAR *t, char *realname);
    int tar_append_eof(TAR *t);

VERSION

This man page documents version 1.2 of libtar.

DESCRIPTION

The tar_append_file() function creates a tar file header block describing the file named by the realname argument, but with the encoded filename of savename. It then sets the current header associated with the TAR handle t to the newly created header block, and writes this block to the tar archive associated with t. If the file named by realname is a regular file (and is not encoded as a hard link), tar_append_file() will call tar_append_regfile() to append the contents of the file.

The tar_append_regfile() function appends the contents of a regular file to the tar archive associated with t. Since this function is called by tar_append_file(), it should only be necessary for applications that construct and write the tar file header on their own.

The tar_append_eof() function writes an EOF marker (two blocks of all zeros) to the tar file associated with t.

RETURN VALUES

On successful completion, these functions will return 0. On failure, they will return -1 and set errno to an appropriate value.

ERRORS

The tar_append_*() functions may fail if any of the following functions fail: lstat(), malloc(), open(), read(), th_write(), or the write function for the file type associated with the TAR handle t.
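As a cross-check of the EOF semantics described above, Python's standard-library tarfile module (an independent implementation of the same archive format) also finishes an archive with zero-filled blocks. The file name and contents below are arbitrary, chosen just for the demonstration:

```python
import io
import tarfile

# Build a tiny in-memory archive; adding a member is the rough analogue
# of tar_append_file(), and closing the archive writes the EOF marker,
# like tar_append_eof().
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

raw = buf.getvalue()
assert len(raw) % 512 == 0          # tar archives are 512-byte block aligned
assert raw[-1024:] == b"\0" * 1024  # EOF marker: (at least) two zero blocks
```

The second assertion is exactly the "two blocks of all zeros" marker that tar_append_eof() writes.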
https://reposcope.com/man/en/3/tar_append_file
Microsoft sponsored a usability study for my side project Live Geometry, and I have to say, it was awesome. It was a lot of fun watching the participants using the software, and I got a ton of great and useful feedback. I have to confess, I didn't realize that it's not obvious how to use Live Geometry (especially if you've never seen it before). Since I was the one who developed the software, I subconsciously assumed that it's all intuitive and trivial. Well, guess what, it turns out not to be the case. I am not the end user. Things that are obvious for me might not be obvious for others.

So I developed a plan on how to make things better. There are two ways: improving User Experience and investing in User Education. The former will be a slow and gradual process of me designing the features and the UI, fixing bugs, reworking the UI and thinking through UI details. Today I'll start approaching the task of User Education and present a 5 min. screencast – a brief overview of the Live Geometry software and its possibilities (Hint: double-click the video for fullscreen viewing):

You can also download the .wmv file (15 MB). More documentation will follow later, but this should at least give a quick start and give you an idea of how things work. Any feedback is welcome!

* * *

I was looking for error code –2146232797 (hex 0x80131623, which turned out to be what is thrown by Environment.FailFast) and I've stumbled upon this treasure: Also, here's a great blog about deciphering an HRESULT: And here's another good list from MSDN: I sincerely wish you to never ever need this knowledge...

* * *

As part of my research back in school, I was building an experimental structured editor for C#. Now I've decided to publish the sources and binaries on CodePlex: A detailed discussion of structured editors deserves a separate post, which is coming soon. For now, to give a better idea of how the editor works, I've recorded six short videos showing the different features below.
If your blog reader doesn't support iframes, I recommend you view this post in the browser. To see the videos, the page has to be viewed in a web browser that supports iframes.

Structured editing is a topic that has been surrounded by scepticism and controversy for the past 20-30 years. Some argue that directly editing the AST on screen is inflexible and inconvenient, because the constraints of always having a correct program restrict the programmer way too much. Others expect structured editors to be more helpful than text editors, because the user operates atomically and precisely on the language constructs, concentrating on the semantics and not on the syntax.

In summer 2004, my professor Peter Bachmann initiated a student research project - we started building a structured editor for C#. I took part because I was deeply persuaded that good structured editors can actually be built, and it was challenging to go and find out for myself. After the original project was over, I took over the basic prototype and evolved it further to the state it is in today.

As one of numerous confirmations of my thoughts, in 2004, Wesner Moise wrote: I remember how I agreed with this! After three years, in 2007, the prototype implementation was ready - it became the result of my master's thesis. I still agree with what Wesner was envisioning in 2004 - with one exception. Now I believe that structured editors shouldn't (and can't) be a revolution - fully replacing text editors is a bad thing to do. Instead, structured editors should complement text editors to provide yet another view on the same source tree (internal representation, or AST). As a result of my work, I'm convinced that structured editors actually are, in some situations, more convenient than text editors, and providing the programmer with two views on the code to choose from would be a benefit. Just like the Visual Studio Class Designer - those who want to use it just use it, and the rest continue to happily use the text editor.
All these views should co-exist to provide the programmer with a richer palette of tools to choose from. Hence, my first important conclusion: a program's internal representation (the AST) should be observable, to allow the MVC architecture - many views on the same internal code model. With MVC, all views will be automatically kept in sync with the model. This is where, for example, something like WPF data-binding would come in handy.

As for the structured editor itself - it is still a work in progress, and I still hope to create a decent complement for text editors. It has to be usable, and there are still a lot of problems to solve before I can say: "Here, this editor is at least as good as the text editor". But I managed to solve so many challenging problems already that I'm optimistic about the future. The current implementation edits a substantial subset of C# 1.0 - namespaces, types, members (except events), and almost all statements.

If you're interested, you can read more at and - those are two sites I built to tell the world about my efforts. I also accumulate my links about structured editing here:

It turned out that it makes sense to build structured editors not only for C#, but for other languages as well - XML, HTML, Epigram, Nemerle, etc. That is why, at the very beginning, the whole project was split in two parts - the editor framework and the C# editor built on top of it. If you want to know more, or if you want to share your opinion on this, please let me know. I welcome any feedback! Thanks!

* * *

The resolution of both DateTime.Now and DateTime.UtcNow is very low – about 10 ms – which is a huge time interval!

* * *

Visual Studio macros are a fantastic productivity booster, which is often under-estimated. It's so easy to record a macro for your repetitive action and then just play it back. Even better, map a macro to a keyboard shortcut. I'll share a couple of examples.
InsertCurlies

If you open up Visual Studio and type in this code:

How many keystrokes did you need? I've got 10 (including holding the Shift key once). It's because I have this macro mapped to Shift+Enter:

    Sub InsertCurlies()
        DTE.ActiveDocument.Selection.NewLine()
        DTE.ActiveDocument.Selection.Text = "{"
        DTE.ActiveDocument.Selection.NewLine()
        DTE.ActiveDocument.Selection.Text = "}"
        DTE.ActiveDocument.Selection.LineUp()
        DTE.ActiveDocument.Selection.NewLine()
    End Sub

So I just typed in void Foo() and hit Shift+Enter to insert a pair of curlies and place the cursor inside. Remarkably, I've noticed that with this macro I now almost never have to hit the curly brace keys on my keyboard. Readers from Germany will especially appreciate this macro, because on German keyboard layouts you have to press Right-Alt and the curly key, which really takes some time to get used to.

This macro is also useful to convert an auto-property to a usual property: you select the semicolon and hit Shift+Enter. Try it out!

ConvertFieldToAutoProp

Suppose you have a field which you'd like to convert to an auto-implemented property:

And when you click the menu item, you get:

How did I do it?
First, here's the macro:

    Sub ConvertFieldToAutoprop()
        DTE.ActiveDocument.Selection.StartOfLine( _
            vsStartOfLineOptions.vsStartOfLineOptionsFirstText)
        DTE.ActiveDocument.Selection.EndOfLine(True)
        Dim fieldtext As String = DTE.ActiveDocument.Selection.Text
        If fieldtext.StartsWith("protected") _
            Or fieldtext.StartsWith("internal") _
            Or fieldtext.StartsWith("private") Then
            fieldtext = fieldtext.Replace("protected internal", "public")
            fieldtext = fieldtext.Replace("protected", "public")
            fieldtext = fieldtext.Replace("internal", "public")
            fieldtext = fieldtext.Replace("private", "public")
        ElseIf Not fieldtext.StartsWith("public") Then
            fieldtext = "public " + fieldtext
        End If
        fieldtext = fieldtext.Replace(";", " { get; set; }")
        DTE.ActiveDocument.Selection.Text = fieldtext
    End Sub

And then just add the macro command to the refactor context menu or any other place. This may seem like no big deal, but I recently had to convert fields to auto-properties in 50+ files. I really learned to appreciate this macro.

gs code snippet

This is a very little but useful snippet: gs expands to { get; set; }

    <?xml version="1.0" encoding="utf-8" ?>
    <CodeSnippets xmlns="">
      <CodeSnippet Format="1.0.0">
        <Header>
          <Title>gs</Title>
          <Shortcut>gs</Shortcut>
          <Description>Code snippet for { get; set; }</Description>
          <Author>Microsoft Corporation</Author>
          <SnippetTypes>
            <SnippetType>Expansion</SnippetType>
          </SnippetTypes>
        </Header>
        <Snippet>
          <Code Language="csharp"><![CDATA[{ get; set; }$end$]]>
          </Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>

I usually use the prop snippet to create auto-implemented properties, but gs is useful in some cases as well.

I hope this has inspired you to do some personal usability research experiments and define everyday actions that you can optimize using macros, snippets and shortcuts. I would love to hear about your personal tips and tricks as well.

* * *

Here's a nice simple one. Also, anyone venture a guess what a practical application for such a method would be?
A while back I was looking around for a color picker control for Live Geometry. The ColorPicker from was exactly what I was looking for:

(live preview needs Silverlight 3.0)

I just took the source from CodePlex and embedded it in my project. You need 5 files:

Alternatively, you can reference the binary, which you can download from the SilverlightContrib CodePlex project site. Pay attention that generic.xaml contains the template for the control, so don't forget the xaml. The control will work just fine with WPF and Silverlight, which is really a great thing, especially if you're multitargeting. To include the control in your application, here's the basic code:

    <sc:ColorPicker

Don't forget to add an XML namespace:

    xmlns:sc="clr-namespace:SilverlightContrib.Controls"

The source code for this control is very good for educational purposes. For instance, I had no idea how they create the big gradient for every possible hue. Well, it's genius and it's simple. In generic.xaml:

    <Canvas Canvas.
      <Rectangle x:</Rectangle>
      <Rectangle x:
        <Rectangle.Fill>
          <LinearGradientBrush StartPoint="0,0" EndPoint="1,0">
            <GradientStop Offset="0" Color="#ffffffff"/>
            <GradientStop Offset="1" Color="#00ffffff"/>
          </LinearGradientBrush>
        </Rectangle.Fill>
      </Rectangle>
      <Rectangle x:
        <Rectangle.Fill>
          <LinearGradientBrush StartPoint="0,1" EndPoint="0, 0">
            <GradientStop Offset="0" Color="#ff000000"/>
            <GradientStop Offset="1" Color="#00000000"/>
          </LinearGradientBrush>
        </Rectangle.Fill>
      </Rectangle>
      <Canvas x:
        <Ellipse Width="10" Height="10" StrokeThickness="3" Stroke="#FFFFFFFF"/>
        <Ellipse Width="10" Height="10" StrokeThickness="1" Stroke="#FF000000"/>
      </Canvas>
    </Canvas>

This canvas contains layers like in a cake. The ZIndex of the objects is stacked bottom to top, so the solid rectangle with the initially red background is on the bottom. Above it, there is a horizontal white gradient fill, completely white on the left and completely transparent on the right.
Above it, there is a vertical black gradient fill, completely black on the bottom and completely transparent on the top. As these gradients overlay, the transparency over the initial solid background creates the desired effect – the actual color is in the top-right, where both gradients are 100% transparent. The white spot is in the top-left, where the white gradient is most intense and the black gradient fades out. Same for the black edge of the gradient. Also, it is well worth studying how the template is written – I learned a lot from this sample.

Since I conveniently borrowed the source code for my project, I did several fixes for my own purposes. Ideally I should contribute the fixes back to SilverlightContrib, but I can never get around to it. First of all, I reordered the two StackPanels in the template so that the actual selected color is on top. I also made it collapsible, and collapsed by default. You can expand the control by clicking it like a combobox. Unlike a combobox, you have to explicitly click the color area to collapse it again. I've enabled this by adding an Expanded property:

    public bool Expanded
    {
        get { return m_chooserArea.Visibility == Visibility.Visible; }
        set
        {
            var visibility = value ? Visibility.Visible : Visibility.Collapsed;
            if (m_chooserArea.Visibility == visibility)
            {
                return;
            }
            m_chooserArea.Visibility = visibility;
        }
    }

When clicking on the color area, I just call Expanded = !Expanded to toggle it, and it does the magic for me. The default value for this is the default visibility of m_chooserArea, which is specified in the XAML template (Visibility="Visible" to set it to true by default).

Other fixes are not as interesting. I fixed a division by zero in ColorSpace.ConvertRgbToHsv (they had h = 60 * (g - b) / (max - min); and didn't check if min == max). There are a couple of other things which I don't remember off the top of my head. I'd have to view TFS history to remember what those were.
If you're willing to help and incorporate these fixes into the original project, I'll dig this up, just let me know :) Both Sara Ford and I agree that this control deserves both thumbs up!
http://blogs.msdn.com/b/kirillosenkov/default.aspx?PostSortBy=MostViewed&PageIndex=1
In this section, you will learn how to read a particular line from a file. Sometimes you want only one particular line from the file, but you still have to read through the file to reach it. To spare you this lengthy and unnecessary coding, this section should be really helpful. As you can see in the given example, we have created a FileReader object and passed it the file path. We want to read line 3 of the file, so we have created a for loop which iterates over lines 1-10 of the text file. When the loop reaches the third line, the br.readLine() method of the BufferedReader class reads that particular line so it can be displayed on the console.

Here is the file.txt:

Here is the code:

import java.io.*;

public class FileReadParticularLine {
    public static void main(String[] args) {
        String line = "";
        int lineNo;
        try {
            FileReader fr = new FileReader("C:/file.txt");
            BufferedReader br = new BufferedReader(fr);
            for (lineNo = 1; lineNo < 10; lineNo++) {
                if (lineNo == 3) {
                    line = br.readLine();
                } else {
                    br.readLine();
                }
            }
            br.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("Line: " + line);
    }
}

Through the above code, you can read any line of any file.

Output:
http://www.roseindia.net/tutorial/java/core/files/filelinereader.html
A computer algebra system written in pure Python.

I am working on a simple QFT calculation and would like to do it using SymPy to learn (and also to check my result). I have found the quantum mechanics module but cannot see how to start using it for my purpose. I have defined these quantities:

import sympy.physics.quantum as Q

vacuum = Q.OrthogonalKet(0)
annihilation_op = Q.Operator('a')
creation_op = Q.Dagger(annihilation_op)

and now I want to tell SymPy that applying the annihilation operator to the vacuum gives zero. How would I do this? Also, how do I impose the commutation relations between the creation and annihilation operators?

Hi all, I am just a new user of SymPy. I am self-learning this library for my undergraduate research. But in the middle of the process I am stuck with one piece of code. I have defined a function with a subscript:

U_n = x^n + 1/x^n

When I consider (U_1)^3 I get (substituting n=1):

(U_1)^3 = (x + 1/x)^3

Then after simplifying this I get:

(U_1)^3 = (x^3 + 1/x^3) + 3(x + 1/x)

But one can see this answer as:

(U_1)^3 = U_3 + 3U_1

How do I get the output in terms of the U_n's? Can someone please give an idea how to build this code using SymPy? It would be a very big help for my research. Thank you very much. Gayanath Chandrasena.

dr*(k-1) < dr*(k+1), which is obviously true, since dr = symbols('Delta', real=True, positive=True, nonzero=True) and k = symbols('k', integer=True, real=True), yet sympy doesn't seem to think this is the case. Am I doing something wrong? Also I get

In [54]: print(ask(dr*(k-1) < dr*(k+1)))
None

but

In [55]: print(ask(dr*k-dr < dr*k+dr))
True

@ThePauliPrinciple My function is U_n = x^n + 1/x^n. As an example, when I compute (U_1)^3 I get (x^3 + 1/x^3) + 3(x + 1/x). ---------(i) And if I compute (U_2)^2 I get x^4 + 1/x^4 + 2. ---------(ii) But since in (i), (x^3 + 1/x^3) = U_3 and 3(x + 1/x) = 3U_1, I want to get the answer U_3 + 3U_1. In (ii) I want to get the answer U_4 + 2, since x^4 + 1/x^4 = U_4.
aman@amanUBUNTU:~/Desktop$ python3 Un.py
Enter Value of n and k as in (U_n)^k
n = 1
k = 3
3*U_1 + U_3

aman@amanUBUNTU:~/Desktop$ python3 Un.py
Enter Value of n and k as in (U_n)^k
n = 1
k = 2
U_2 + 2

aman@amanUBUNTU:~/Desktop$ python3 Un.py
Enter Value of n and k as in (U_n)^k
n = 2
k = 10
45*U_12 + 10*U_16 + U_20 + 210*U_4 + 120*U_8 + 252

@mostlyaman Yes, this is kind of the same as what I want, but in my case it is a bit more advanced.

U(n) = x^n + 1/x^n
V(n) = x^n - 1/x^n

So if I ask for any expression I should get the answer in U(n)'s and V(n)'s. I have built it for U(n) and V(n) separately but cannot combine them. For example:

u(1)v(1) = v(2)
v(2)u(1) - v(1) = v(3)
v(1)^2 + u(1)^2 = 2*u(2)

This is the program which I want. @mostlyaman is your program executable in a Jupyter notebook? If you have an idea please let me know.
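For the single-sequence case discussed above, the rewrite can be sketched without SymPy at all: expanding (U_n)^k with the binomial theorem and pairing each x^m term with its x^(-m) partner yields the U_m coefficients directly. A minimal pure-Python sketch (the function names here are mine, not from the Un.py program mentioned in the chat):

```python
from math import comb

def expand_U_power(n, k):
    """Expand (U_n)**k, where U_n = x**n + x**(-n), as a dict
    mapping m -> coefficient of U_m; key 0 holds the constant term."""
    terms = {}
    # (x**n + x**-n)**k = sum_i C(k,i) * x**(n*(k-2i)); pairing the
    # i-th and (k-i)-th terms turns each pair into C(k,i) * U_(n*(k-2i)),
    # and the unpaired middle term of an even power becomes a constant.
    for i in range(k // 2 + 1):
        m = n * (k - 2 * i)
        terms[m] = terms.get(m, 0) + comb(k, i)
    return terms

def pretty(terms):
    parts = []
    for m in sorted(terms, reverse=True):
        c = terms[m]
        if m == 0:
            parts.append(str(c))  # the constant term
        else:
            parts.append((f"{c}*" if c != 1 else "") + f"U_{m}")
    return " + ".join(parts)

print(pretty(expand_U_power(1, 3)))  # U_3 + 3*U_1
print(pretty(expand_U_power(2, 2)))  # U_4 + 2
```

Collecting the exponents of x in SymPy's expand((x**n + 1/x**n)**k) the same way would give an equivalent symbolic version.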
https://gitter.im/sympy/sympy?at=612ca3fb8065e87a8eb63b12
My app should save files to a place where, when you connect your phone/tablet to a computer, you can see them through the system file explorer. This is the way I implemented file writing:

protected String mDir = Environment.DIRECTORY_DOCUMENTS;
protected File mPath = Environment.getExternalStoragePublicDirectory(mDir);

protected void writeLogFile(String filename) {
    File f = new File(mPath, filename + ".txt");
    f.getParentFile().mkdirs();
    try (BufferedWriter bw = new BufferedWriter(new FileWriter(f, false))) {
        // Details omitted.
    } catch (Exception e) {
        e.printStackTrace();
        return;
    }
    makeText("Wrote " + f.getAbsolutePath());
}

This is what I see when I connect my Sony Xperia Z4 tablet to Windows (notice the missing documents folder):

This is the directory to which the file is written (using the above implementation):

What is wrong with my implementation?

MediaStore has not discovered your newly-created files yet. What you see in Windows (and in many on-device "gallery" apps) is based on what MediaStore has indexed. Use MediaScannerConnection and its scanFile() method to tell MediaStore about your file, once you have written out your data to disk:

public void scanFile(Context ctxt, File f, String mimeType) {
    MediaScannerConnection.scanFile(ctxt,
        new String[] {f.getAbsolutePath()},
        new String[] {mimeType},
        null);
}
https://www.edureka.co/community/39783/write-files-to-external-public-storage-in-android
I know that there are similar topics, but I couldn't find the solution in any of them. I use this sample sketch from the Arduino software, "DHCP address printer":

#include <SPI.h>
#include <Ethernet.h>

// Enter a MAC address for your controller below.
// Newer Ethernet shields have a MAC address printed on a sticker on the shield
byte() { }

Every time I get this error: "failed to configure ethernet with dhcp". I tried to reset the Arduino Uno board and the Ethernet shield; I have no SD card in the slot; my router is working fine, and the LEDs on the Ethernet shield are blinking, so everything seems to work, but I still get that error! Please help. What should I do? What can I try? Ideas?

Moderator edit: [code] [/code] tags added.
https://forum.arduino.cc/t/failed-to-configure-ethernet-with-dhcp/226534
It is quite common when modeling a real-life problem to have constraints such as having a user account with either a nickname or an email, an avatar or an emoji. TypeScript gives us tools to model this kind of data. What options do we have?

type Boss = {
  president?: Official;
  king?: Monarch;
}

type Official = {
  name: string,
  age: number,
}

type Monarch = {
  name: string,
  title: string,
}

This example adds two optional properties to describe that situation. Notice that to display a Boss's name, we have to check for the presence of at least one attribute. A boss object might have either a president or a king, but the shape of the data does not fit that description. Having n optional properties isn't the best way to describe the presence of at least one of them. The Boss type aims to represent that it can be either an official or a monarch. Having both properties optional opens us up to many invalid states, such as the possibility of having none of the attributes, or having them all at the same time. If we model a solution with multiple optional properties, navigating the codebase comes with multiple null-object checks.

const bossName = ({ president, king }: Boss): string => {
  if (president) {
    return president.name;
  } else if (king) {
    return king.name;
  } else {
    // Still, the type definitions allow a case where both properties
    // might be null, so we must plan for this
    return "Apparently total anarchy";
  }
}

How might we avoid this mismatch between the state of the problem and our domain modeling? TypeScript allows us to combine different types using the union operator (|). When used along with type aliases to define a new type, this is known as union types.

type Boss = Official | Monarch;

const bossName = (boss: Boss): string => {
  return boss.name;
}

This works particularly well here since both types have the same property.

Type guards 💂♀️

What about when we have to reach for other attributes? The check for nullability becomes a type assertion.
const bossDescription = (boss: Boss): string => {
  if ((boss as Official).age) {
    const official = boss as Official;
    return `${official.name}, ${official.age} years old`;
  } else if ((boss as Monarch).title) {
    const monarch = boss as Monarch;
    return `${monarch.title}, ${monarch.name}`;
  } else {
    return 'The anarchy continues';
  }
}

The as operator instructs the compiler to treat the variable on the left as the type on the right. In this particular example, we are converting an argument of type Boss into an Official first and later into a Monarch. One thing to notice with this operand is that we are using it twice in the if block: first, to assert that we are dealing with an Official, and then to access the age property on the object. Given that we guarded that branch, it feels unnecessary to specify again that we are in the presence of an instance of an object of type Official. The final type assertion feels unnecessary too, since there should be only two possibilities. Since we are telling the compiler to trust us on this one (by using type assertions), we need to keep guiding it after that.

More type guards

There are other ways to check the type: one of them is using type predicates with the is operand.

const isMonarch = (boss: Boss): boss is Monarch => {
  return (boss as Monarch).title !== undefined;
}

This approach works, but it feels like we are doing the compiler's job, and the compiler is probably better at figuring this out than we are. As long as we stick to annotating the types from our queries, using type guards feels more robust.

In you go

Another tool at our disposal is the in operator. We can assert the presence of the property in our instance before using it.

const bossDescription = (boss: Boss): string => {
  if ("age" in boss) {
    return `${boss.name}, ${boss.age} years old`;
  }
  return `${boss.title}, ${boss.name}`;
}

Using the in operator confirms that the properties we check are present and non-optional. However, the opposite is true in both counts.
After this predicate, the remaining type either does not have the checked property, or it is undefined. One downside to this alternative is that the act of picking the right property involves insider knowledge about the type. Using a property that was added to describe the object itself to differentiate two types is far from ideal. There's gotta be a better way!

Introducing Discriminated Unions:

A solution is using a combination of string literal types, union types, type guards, and type aliases. To discriminate one type from the other in a union, we add a __typename property for that goal. Since its purpose is to differentiate, the value should be a constant, hence the string literal type:

type Official = {
  __typename: 'Official',
  name: string,
  age: number,
}

type Monarch = {
  __typename: 'Monarch',
  name: string,
  title: string,
}

type Boss = Official | Monarch;

const bossDescription = (boss: Boss): string => {
  if (boss.__typename === 'Official') {
    return `${boss.name}, ${boss.age} years old`;
  }
  return `${boss.title}, ${boss.name}`;
}

Now there is a straightforward and self-composed way of distinguishing the types in our aliased union, without having to peek into the type definition and without leaving us exposed to future side effects from valid changes.

Conclusion

When we model a problem that has property A or property B, TypeScript has an impressive array of options to solve it. Making sure we evaluate the options is our part. By using the in operator, we are relying on attributes added with a purpose far from differentiating one type from the others. When using a random attribute from a type that was not added to make it unique, we are violating the contract agreed upon by consuming it. The as operator demands importing all of the type definitions just for making the assertion, adding unnecessary dependencies to our code. Discriminated Unions combine more than one technique and create self-contained types.
Types that carry all the information to use them without worrying about name collisions or unexpected changes.
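A footnote to the pattern above (my addition, not part of the original post): once the discriminant exists, switching on it also buys compile-time exhaustiveness checking. Assigning the leftover value to never in the default branch fails to compile as soon as a new variant is added to the union but not handled:

```typescript
type Official = { __typename: 'Official'; name: string; age: number };
type Monarch = { __typename: 'Monarch'; name: string; title: string };
type Boss = Official | Monarch;

const bossDescription = (boss: Boss): string => {
  switch (boss.__typename) {
    case 'Official':
      return `${boss.name}, ${boss.age} years old`;
    case 'Monarch':
      return `${boss.title}, ${boss.name}`;
    default: {
      // Every variant is handled above, so `boss` narrows to `never` here.
      // Adding a third Boss variant turns this assignment into a compile error.
      const unreachable: never = boss;
      return unreachable;
    }
  }
};
```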
https://thoughtbot.com/blog/the-case-for-discriminated-union-types-with-typescript
I hope that my little crash course on Python was enough to give you a general feel for the language. I have to admit, I picked up the basics of the language in about two to three days, since the concepts are all very similar to those of C++ (and all other object-oriented imperative languages), so I doubt you'll have any problems. Now, on to the difficult part: integrating C++ and Python. The good news is that the Python language was originally made in C, so that gives us a nice platform to start with.

NOTE If you ever have a question about Python, there is great free documentation available at http://. As of this writing, there's a great tutorial to the language here: doc/current/tut/tut.html. The Python-C API is very easy to use, and is well documented as well. As of this writing, you can access the documentation for the Python-C API at this address:.

The first demo I want to show you is an example of how easy it is to integrate the Python interpreter into your C++ program. You can find this demo in the directory /Demos/Chapter17/Demo17-01/ on the CD. To compile the demo, set up your compiler as described in Appendix A, which is also on the CD. I'm going to spit all the code out at you at once, but I don't think you'll mind, since it's really simple:

#include <iostream>
#include <string>
#include "Python.h"

int main()
{
    std::cout << "Welcome to SIMPLEPYTHON!!" << std::endl;
    std::cout << "Chapter 17, Demo 01 - MUD Game Programming" << std::endl;

    Py_Initialize();                        // initialize python

    std::string str;
    std::getline( std::cin, str );          // get each line
    while( str != "end" )                   // exit if you got "end"
    {
        PyRun_SimpleString( const_cast<char*>( str.c_str() ) );
        std::getline( std::cin, str );
    }

    Py_Finalize();                          // shut down python
    return 0;
}

The three calls to Python functions in the code are marked in bold. The first call initializes the Python interpreter, the second tells it to execute a string, and the final call tells the Python interpreter to shut down.
Figure 17.4 shows the SimplePython interpreter in action. When I first got this working, I had two thoughts: WOW! THIS IS SO COOL!! Hey, I can't believe I made this in less than two minutes! See how easy it is to get Python code up and running in your programs?

NOTE C++ std::strings cannot be converted into char* pointers implicitly, so to accomplish that you must call their c_str function. However, the function returns const char*s, and Python, for some oddball reason, accepts only non-const char*s as parameters to the functions. So in order to properly pass an std::string into the function, I need to cast away its const characteristics first.

That first example was quite simple, and really only concerned the execution of Python code from a string. If you want to do anything more complex, you'll have to mess around with the internals of Python. Everything in the API is based on the idea of a Python Object, which is a structure that points to an object that is being used within Python. This code

x = 10

creates a new Python object, entitled x, and it holds an integer value of 10. Creating a class creates a new Python object that contains the definition of a class, and creating an instance of that class creates yet another Python object. Python objects are stored in a simple structure called PyObject. They are large and fairly complex, but luckily, you should never have to deal with them, except to pass them to and from Python-C API functions. Here is a simple example, which loads a file containing Python code:

PyObject* mod = PyImport_ImportModule( "pythontest" );

After this code has been called, mod should be pointing to a Python object that represents the module pythontest, which was loaded from the file pythontest.py. Python automatically assumes modules end with a .py suffix.

NOTE Python modules can contain code that is outside functions or classes.
This code is executed when the module is first loaded, so the moment you load it up, THIS IS A TEST OF PYTHON!!!! should be enthusiastically printed to your console window. Modules can also have variables, as shown by the x = 10 line. Now, what can you do with this module? You can actually do anything you want. Let me show you what's in the module first though:

print "THIS IS A TEST OF PYTHON!!!!"
x = 10

def testfunc():
    print "TEST FUNCTION!!"

def printfunc( arg ):
    print arg

def returnstring():
    return "HELLO C++!!"

Once you have that loaded, you can execute Python code:

PyRun_SimpleString( "import pythontest\n"
                    "pythontest.testfunc()\n" );

This imports the module into the namespace of the PyRun_SimpleString function, and then calls its testfunc function, which should print TEST FUNCTION!! to your console. Now that you can call stuff using a simple string, try the slightly more complex operation of calling it directly from C++:

PyObject* result = PyObject_CallMethod( mod, "testfunc", NULL );

NOTE It should be noted that since you're importing the module inside the call to PyRun_SimpleString, you don't actually have to import the module in C using PyImport_ImportModule first. But since I plan on using the module later on, I load it anyway.

This calls the test function from mod with no parameters. Since the function doesn't return anything either, why the heck did I record the result? This is one of the little quirks of Python. Even if you return nothing, the language internally returns the None object. Of course, if it's the None object, you can just ignore it, right? Wrong. Everything in Python is reference counted, which means that the Python-C API tries to track the number of places in the program that are referencing any given Python object. Whenever the reference count drops to 0, the Python-C API knows that it is safe to delete the object.
If you accidentally still have it after it has been deleted, you're going to be accessing an invalid object, and you'll be in big trouble. The PyObject_ functions always return new references to Python objects, which means that the reference count of the object that was returned from this function was increased. It is up to you to remove one from its reference count like this:

Py_DECREF( result );
result = 0;

You're telling Python that you've finished pointing to that object, and that you won't be referencing it anymore. Therefore it's a good idea to clear the pointer as well. This can get tricky, so later I'll show you my Python wrapper that takes care of this stuff for you.

NOTE For each function in the API, the Python-C API documentation lists references as new or borrowed. Whenever you call a function that returns a borrowed reference, you're not supposed to decrease its reference count at all. This can make managing objects somewhat difficult, but luckily, there aren't many functions that return borrowed references. In fact, in my Python wrapper, I never use any of those functions, so I can safely assume that all pointers need to be dereferenced.

Now we can try something more complicated, calling a function with a parameter:

result = PyObject_CallMethod( mod, "printfunc", "s", "HELLO PYTHON!" );
Py_DECREF( result );
result = 0;

result = PyObject_CallMethod( mod, "printfunc", "i", 42 );
Py_DECREF( result );
result = 0;

This calls the printfunc function with a string, and then with an integer, which should print them both. On the other side of the equation, you're going to need to extract return values from Python objects. This is a relatively painless task to accomplish, because Python contains tons of built-in functions for converting the basic types from objects.
This time, we're going to call returnstring:

result = PyObject_CallMethod( mod, "returnstring", NULL );
std::string str = PyString_AsString( result );
std::cout << str << std::endl;
Py_DECREF( result );
result = 0;

The line in bold is the function that converts a Python object to a C char* string, which I then promptly copy into a std::string. You need to be careful when using the PyString_AsString function. It returns a char*, which is a pointer, but Python still owns the buffer it points to. You shouldn't modify it at all or try to delete it. In fact, the safest, sanest thing to do is to copy the contents of the buffer into a string of your own right away, because Python may even modify the buffer later on, or deallocate it without telling you (how rude!).

NOTE The char*s of C are the devil. I have never seen a more evil invention in my life. Well, maybe the Oscar Meyer Weeniemobile, but I don't think that counts. Use the std::string of C++ instead.

All the code in this section is compiled into Demo 17.2 on the CD, which you compile in the same way as Demo 17.1. When you run it, it should produce output like this:

Python Test!
Chapter 17, Demo 02 - MUD Game Programming
THIS IS A TEST OF PYTHON!!!!
TEST FUNCTION!!
TEST FUNCTION!!
HELLO PYTHON!
42
HELLO C++!!

That's it for this demo.
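As a side note on the reference counting described above (my sketch, not from the book): the counts that Py_INCREF and Py_DECREF manipulate can be observed from the Python side with sys.getrefcount. This is CPython-specific, and the function reports one extra reference for its own temporary argument:

```python
import sys

obj = []                     # a fresh object with no other aliases
base = sys.getrefcount(obj)  # includes one temporary ref for the call itself

alias = obj                  # roughly what Py_INCREF does on the C side
assert sys.getrefcount(obj) == base + 1

del alias                    # roughly what Py_DECREF does on the C side
assert sys.getrefcount(obj) == base
print("reference counts behave as the text describes")
```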
https://flylib.com/books/en/4.241.1.126/1/
Adding a driver to Player
From The Player Project

These are rough directions for adding a new driver to the CVS source tree for Player/Stage. Other HowTos/Tutorials explain other aspects of writing and building a Player 2.0 driver: Migrating from Player 1.6 to Player 2.0, and How to write a Player plugin driver (a very good tutorial that describes in detail how to write a new driver). Although this example is for a "roboteq" driver, it should work for other drivers. "..." in a code block means lines were omitted for clarity.

O.k., I have created a patch file for the changes made to the Player source tree in order to add my new Roboteq driver. The only glitch is that the patch does not include the two new files or the new directory, i.e.:

position/
  roboteq/
    roboteq.cc
    Makefile.am

Here is the process:

1. CVS checkout of the Player source (a CVS checkout and build is its own process; check the Player FAQs for more info).

2. Drop the directory for the new driver ("roboteq" -- position2d) in "player/server/drivers/position/" with its appropriately edited roboteq.cc (removed the extern "C" extra stuff for building a shared object, otherwise the same as the plugin driver).

3. Add a new entry in "player/configure.ac":

...
dnl Create the following Makefiles (from the Makefile.ams)
AC_OUTPUT(Makefile
...
server/drivers/position/roboteq/Makefile
...

4. Add a new entry in "player/server/drivers/position/Makefile.am":

...
SUBDIRS = isense microstrain vfh ascension bumpersafe lasersafe nav200 nd roboteq
...

5. Add new entries in "player/server/libplayerdrivers/driverregistry.cc":

...
#ifdef INCLUDE_ROBOTEQ
void roboteq_Register (DriverTable* table);
#endif
...
#ifdef INCLUDE_ROBOTEQ
roboteq_Register(driverTable);
#endif
...

6. Add a new entry in "player/acinclude.m4":

...
PLAYER_ADD_DRIVER([roboteq],[yes],[],[],[])
...

7. Create "player/server/drivers/position/roboteq/Makefile.am":

AM_CPPFLAGS = -Wall -I$(top_srcdir)

noinst_LTLIBRARIES =
if INCLUDE_ROBOTEQ
noinst_LTLIBRARIES += libroboteq.la
endif

libroboteq_la_SOURCES = roboteq.cc

8. Run the usual

./bootstrap
./configure
make && make install

if you want to make sure this worked.

9. From the top-level source directory (player/), run

cvs diff -u > registernewdriver.patch

to make a patch file of any existing files that have changed. Check out the patch file to see if it is in good shape. Mine had a bunch of question marks at the top, listing new files CVS did not know about because they had not been added, so I cleaned it up a bit. Otherwise it should show the changes in all the files that were modified (above).

10. CVS did not allow me to add any files to the repository without having write privileges:

$ cvs add roboteq
cvs [server aborted]: "add" requires write access to the repository

so I just uploaded a tar.gz of the new directory with the patch file to the patch tracker - I don't know if there is a better way.
http://playerstage.sourceforge.net/wiki/index.php?title=Adding_a_driver_to_Player&oldid=4221
Euler problems/91 to 100
From HaskellWiki
Latest revision as of 20:08, 21 February 2010

Problem 93
Using four distinct digits and the rules of arithmetic, find the longest sequence of target numbers.

Solution:

import Data.List
import Control.Monad
import Data.Ord (comparing)

cmp = comparing results

main = appendFile "p93.log" $ show $ maximumBy cmp $
    [[a,b,c,d] | a <- [1..10], b <- [a+1..10],
                 c <- [b+1..10], d <- [c+1..10]]

problem_93 = main

Problem 95
Find the smallest member of the longest amicable chain with no element exceeding one million.

Problem 97
Find the last ten digits of the non-Mersenne prime: 28433 × 2^7830457 + 1.

Solution:

problem_97 = flip mod limit $ 28433 * powMod limit 2 7830457 + 1
    where limit = 10^10

Problem 98
Investigating words, and their anagrams, which can represent square numbers.

Solution:

import Data.List
import Data.Maybe
import Data.Function

Problem 99
Which base/exponent pair in the file has the greatest numerical value?

Solution:

import Data.List

lognum (b,e) = e * log b
logfun x = lognum . read $ "(" ++ x ++ ")"

problem_99 = snd . maximum . flip zip [1..] . map logfun . lines

main = readFile "base_exp.txt" >>= print . problem_99
https://wiki.haskell.org/index.php?title=Euler_problems/91_to_100&diff=33783&oldid=18612
Ethernet addresses. It is used by all Ethernet datalink providers (interface drivers) and can be used by other datalink providers that support broadcast, including FDDI and Token Ring. The only network layer supported in this implementation is the Internet Protocol, although ARP is not specific to that protocol. ARP caches IP-to-link-layer address mappings. ARP queues a maximum of four packets while awaiting a response to a mapping request, and keeps only the first four. Ioctls that change the address-table contents require the sys_net_config privilege. See privileges(5).

SIOCSARP, SIOCGARP and SIOCDARP are BSD compatible ioctls. These ioctls do not communicate the mac address length between the user and the kernel (and thus only work for 6 byte wide Ethernet addresses). To manage the ARP cache for media that has different sized mac addresses, use the SIOCSXARP, SIOCGXARP and SIOCDXARP ioctls.

#include <sys/sockio.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_dl.h>
#include <net/if_arp.h>

struct xarpreq xarpreq;

ioctl(s, SIOCSXARP, (caddr_t)&xarpreq);
ioctl(s, SIOCGXARP, (caddr_t)&xarpreq);
ioctl(s, SIOCDXARP, (caddr_t)&xarpreq);

Each ioctl() request takes the same structure as an argument. SIOCS[X]ARP sets an ARP entry, SIOCG[X]ARP gets an ARP entry, and SIOCD[X]ARP deletes an ARP entry. These ioctl() requests may be applied to any Internet family socket descriptor, or to a descriptor for the ARP device. Note that SIOCS[X]ARP and SIOCD[X]ARP require a privileged user, while SIOCG[X]ARP does not.
The arpreq structure contains:

/*
 * ARP ioctl request
 */
struct arpreq {
    struct sockaddr arp_pa;    /* protocol address */
    struct sockaddr arp_ha;    /* hardware address */
    int             arp_flags; /* flags */
};

The xarpreq structure contains:

/*
 * Extended ARP ioctl request
 */
struct xarpreq {
    struct sockaddr_storage xarp_pa;    /* protocol address */
    struct sockaddr_dl      xarp_ha;    /* hardware address */
    int                     xarp_flags; /* arp_flags field values */
};

#define ATF_COM         0x2  /* completed entry (arp_ha valid) */
#define ATF_PERM        0x4  /* permanent (non-aging) entry */
#define ATF_PUBL        0x8  /* publish (respond for other host) */
#define ATF_USETRAILERS 0x10 /* send trailer packets to host */
#define ATF_AUTHORITY   0x20 /* hardware address is authoritative */

The address family for the [x]arp_pa sockaddr must be AF_INET. The ATF_COM flag bit ([x]arp_flags) cannot be altered. ATF_USETRAILERS is not implemented on Solaris and is retained for compatibility only. ATF_PERM makes the entry permanent (disables aging) if the ioctl() request succeeds. ATF_PUBL specifies that the system should respond to ARP requests for the indicated protocol address coming from other machines. This allows a host to act as an "ARP server," which may be useful in convincing an ARP-only machine to talk to a non-ARP machine. ATF_AUTHORITY indicates that this machine owns the address. ARP does not update the entry based on received packets.

The address family for the arp_ha sockaddr must be AF_UNSPEC. Before invoking any of the SIOC*XARP ioctls, user code must fill in the xarp_pa field with the protocol (IP) address information, similar to the BSD variant. The SIOC*XARP ioctls come in two (legal) varieties, depending on xarp_ha.sdl_nlen. Other than the above, the xarp_ha structure should be 0-filled except for SIOCSXARP, where the sdl_alen field must be set to the size of the hardware address length and the hardware address itself must be placed in the LLADDR/sdl_data[] area.
(EINVAL will be returned if the user-specified sdl_alen does not match the address length of the identified interface). On return from the kernel on a SIOCGXARP ioctl, the kernel fills in the name of the interface (excluding the terminating NULL) and its hardware address, one after another, in the sdl_data/LLADDR area; if the two are larger than can be held in the 244 byte sdl_data[] area, an ENOSPC error is returned. Assuming it fits, the kernel will also set sdl_alen to the length of the hardware address, sdl_nlen to the length of the name of the interface (excluding the terminating NULL), sdl_type to an IFT_* value to indicate the type of the media, sdl_slen to 0, sdl_family to AF_LINK and sdl_index (which, if not 0, is the system-given index for the interface). The information returned is very similar to that returned via routing sockets on an RTM_IFINFO message.

ARP performs duplicate address detection for local addresses. When a logical interface is brought up (IFF_UP) or any time the hardware link goes up (IFF_RUNNING), ARP sends probes (ar$spa == 0) for the assigned address. If a conflict is found, the interface is torn down. See ifconfig(1M) for more details.

ARP watches for hosts impersonating the local host, that is, any host that responds to an ARP request for the local host's address, and any address for which the local host is an authority. ARP defends local addresses and logs those with ATF_AUTHORITY set, and can tear down local addresses on an excess of conflicts. ARP also handles UNARP messages received from other nodes. It does not generate these messages.

arp(1M), ifconfig(1M), privileges(5), if_tcp(7P), inet(7P)

Plummer, Dave, An Ethernet Address Resolution Protocol or Converting Network Protocol Addresses to 48 bit Ethernet Addresses for Transmission on Ethernet Hardware, RFC 826, STD 0037, November 1982.

Malkin, Gary, ARP Extension - UNARP, RFC 1868, November 1995.

Several messages can be written to the system logs (by the IP module) when errors occur.
In the following examples, the hardware address strings include colon (:) separated ASCII representations of the link-layer addresses, whose lengths depend on the underlying media (for example, 6 bytes for Ethernet).

Duplicate IP address warning. ARP has discovered another host on a local network that responds to mapping requests for the Internet address of this system, and has defended the system against this node by re-announcing the ARP entry.

Duplicate IP address detected while performing initial probing. The newly-configured interface has been shut down.

Duplicate IP address detected on a running IP interface. The conflict cannot be resolved, and the interface has been disabled to protect the network.

An interface with a previously-conflicting IP address has been recovered automatically and reenabled. The conflict has been resolved.

This message appears if arp(1M) has been used to create a published permanent (ATF_AUTHORITY) entry, and some other host on the local network responds to mapping requests for the published ARP entry.
http://docs.oracle.com/cd/E19253-01/816-5177/6mbbc4g2j/index.html
04 July 2005 16:49 [Source: ICIS news]

LONDON (CNI)--Concerns about the plethora of benzene brokers came to a head among industry players on the sidelines of the European Petrochemicals Luncheon (EPL) meeting on 30 June-1 July, when aromatics producers, consumers and traders expressed their confusion.

"We don't need more brokers than traders," said one major producer.

European benzene broking was for many years the preserve of two Rotterdam-based aromatics brokers, Petrochemical Brokerage (PCB) and Starsupply Petroleum Europe. But the field has seen three newcomers in the last two years. First came Swiss-based New Stone, an established methyl tertiary butyl ether (MTBE) broker expanding into benzene as an offshoot of its gasoline components business. Into the same space came Zug, Switzerland-based Alpax Petroleum, focused on benzene broking and founded in 2004 by Patrick Cox, a former trader at Swiss-based trading company Trammochem. The most recent arrival was Cyprus-registered R and PB Ltd, a two-man team set up by former BP trading manager David Phillips and Yigal Roter, previously of Talichem in Brussels and Tel Aviv.

"There are too many brokers," said one broker. "If you count Chemconnect there are six. The newer ones don't have the range of contacts or the full range of aromatics."

"I expect to see more Asian companies setting up in

"However, when there are too many brokers, markets normally sort themselves out - it's like the Serengeti," the broker said.

There is certainly impatience amongst buyers and sellers of benzene at the number of calls they receive every day from brokers. "A broker needs to bring added value," said one major benzene producer. "It isn't enough to just talk numbers. I want someone who can bring me an informed opinion on market direction."

At the same time as this increase in brokers, the future of benzene traders is increasingly precarious,
http://www.icis.com/Articles/2005/07/08/690324/Five-European-benzene-brokers-Who-needs-them.html
Quickly Learn The Basic Concepts Of Strings, Pair & Tuples In STL.

In this tutorial, we will gain basic knowledge of Strings, Pair, and Tuples in STL, before we actually jump to detailed and bigger concepts like Iterators, Algorithms, and Containers.

Although strings are used in the same way as in the general C++ language, it is worth discussing them from the STL point of view. We can think of strings as a sequential container of characters. Also, as we deal with template classes in STL, it is quite important that we know the concepts of PAIR and TUPLE with respect to STL.

=> Check The In-Depth C++ Training Tutorials Here.

What You Will Learn:

Strings In STL

Strings in STL support both ASCII as well as Unicode (wide-character) format. STL supports two types of strings:

#1) string: This is the ASCII format string. To use string objects in the program, we need to include the <string> header:

    #include <string>

#2) wstring: This is the wide-character string. In MFC programming, we call it a CString. wstring is also provided by the standard <string> header (some compilers implement it internally in a file named xstring):

    #include <string>

Whether ASCII or Unicode, strings in STL support various methods, just in the way the other STL containers do. Some of the methods supported by the string object are:

- begin() : Returns an iterator to the beginning.
- end() : Returns an iterator to the end.
- insert() : Inserts into the string.
- erase() : Erases characters from the string.
- size() : Returns the length of the string.
- empty() : Returns true if the string is empty (it does not clear the contents; use clear() for that).

Apart from the methods stated above, we have already covered string class methods in our earlier strings in C++ tutorials. Let's write a simple program to demonstrate STL strings.
    #include <string>
    #include <iostream>

    using namespace std;

    int main() {
        string str1;
        str1.insert(str1.end(), 'W');
        str1.insert(str1.end(), 'O');
        str1.insert(str1.end(), 'R');
        str1.insert(str1.end(), 'L');
        str1.insert(str1.end(), 'D');

        for (string::const_iterator it = str1.begin(); it != str1.end(); ++it) {
            cout << *it;
        }

        int len = str1.size();
        cout << "\nLength of string:" << len;
        cout << endl;
        return 0;
    }

Output:

    WORLD
    Length of string:5

In the above code, we declare a string object str1 and then, using the insert method, we add characters one by one at the end of the string. Then, using an iterator object, we display the string. Next, we output the length of the string using the size method. This is a simple program to demonstrate strings only.

PAIR In STL

The PAIR class in STL comes in handy while programming the associative containers. PAIR is a template class that groups together two values of either the same or different data types. The general syntax is:

    pair<T1, T2> pair1, pair2;

The above line of code creates two pairs, i.e. pair1 and pair2. Both these pairs have the first object of type T1 and the second object of type T2. T1 is the first member and T2 is the second member of pair1 and pair2.

Following are the methods that are supported by the PAIR class:

- Operator (=) : Assigns values to a pair.
- swap : Swaps the contents of the pair.
- make_pair() : Creates and returns a pair having objects defined by the parameter list.
- Operators (==, !=, >, <, <=, >=) : Compare two pairs lexicographically.

Let's write a basic program that shows the usage of these functions in code.
    #include <iostream>
    #include <string>
    #include <utility>

    using namespace std;

    int main() {
        pair<int, int> pair1, pair3;
        pair<int, string> pair2;

        pair1 = make_pair(1, 2);
        pair2 = make_pair(1, "SoftwareTestingHelp");
        pair3 = make_pair(2, 4);

        cout << "\nPair1 First member: " << pair1.first << endl;
        cout << "\nPair2 Second member:" << pair2.second << endl;

        if (pair1 == pair3)
            cout << "Pairs are equal" << endl;
        else
            cout << "Pairs are not equal" << endl;

        return 0;
    }

Output:

    Pair1 First member: 1
    Pair2 Second member: SoftwareTestingHelp
    Pairs are not equal

In the above program, we create two pairs of integers and another pair of an integer and a string. Next, using the "make_pair" function, we assign values to each pair. Then we compare pair1 and pair3 using the operator "==" to check if they are equal or not. This program demonstrates the basic working of the PAIR class.

Tuple In STL

The Tuple concept is an extension of Pair. In a pair, we can combine two heterogeneous objects, whereas in tuples we can combine three (or more) heterogeneous objects. The general syntax of a tuple is:

    tuple<T1, T2, T3> tuple1;

Just like pair, tuple also supports similar functions and some more additional functions. These are listed below:

- Constructor : Constructs a new tuple.
- tuple_element : Returns the type of a tuple element.
- make_tuple() : Creates and returns a tuple having elements described by the parameter list.
- Operators (==, !=, >, <, <=, >=) : Lexicographically compare two tuples.
- Operator (=) : Assigns values to a tuple.
- swap : Swaps the values of two tuples.
- tie : Ties values of a tuple to its references.

Let's use some of these functions in a program to see their working.
    #include <iostream>
    #include <string>
    #include <tuple>

    using namespace std;

    int main() {
        tuple<int, int, int> tuple1;
        tuple<int, string, string> tuple2;

        tuple1 = make_tuple(1, 2, 3);
        tuple2 = make_tuple(1, "Hello", "C++ Tuples");

        int id;
        string str1, str2;
        tie(id, str1, str2) = tuple2;

        cout << id << " " << str1 << " " << str2;
        return 0;
    }

Output:

    1 Hello C++ Tuples

In the above code to demonstrate tuples, we create two tuples. The first tuple, tuple1, consists of three integer values. The second tuple, tuple2, consists of one integer value and two string values. Next, we assign values to both the tuples using the "make_tuple" function. Then, using the "tie" function call, we tie or assign the values from tuple2 to id and two strings. Finally, we output these values. The output shows the values from tuple2 that we assigned to id and the two strings.

Conclusion

Thus in this tutorial, we have briefly discussed strings, pair, and tuple used in STL. While string operations are similar to general C++, we can additionally operate on these strings with iterators. Pair and tuple constructs come in handy while programming STL containers, especially the associative containers. In our upcoming tutorial, we will learn about algorithms and iterators in detail before we jump to actual STL programming.

=> Visit Here To See The C++ Training Series For All.
https://www.softwaretestinghelp.com/strings-pair-tuples-in-stl/
snd_pcm_nonblock_mode()

Set or reset the blocking behavior of reads and writes to PCM channels.

Synopsis:

    #include <sys/asoundlib.h>

    int snd_pcm_nonblock_mode( snd_pcm_t *handle, int nonblock );

Since: BlackBerry 10.0.0

Arguments:

- handle - The handle for the PCM device, which you must have opened by calling snd_pcm_open_name(), snd_pcm_open(), or snd_pcm_open_preferred().
- nonblock - If this argument is nonzero, non-blocking mode is in effect for subsequent calls to snd_pcm_read() and snd_pcm_write().

Description:

Returns:

Zero on success, or a negative error code.

Errors:

- -EBADF - Invalid file descriptor. Your handle may be corrupt.
- -EINVAL - Invalid handle.

Classification: QNX Neutrino

Caveats:

Last modified: 2014-06-24
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.audio.lib_ref/topic/snd_pcm_nonblock_mode.html
Definitions for the user-defined app entry point, mgos_app_init().

The mgos_app_init() function is like the main() function in a C program: it is the app's entry point. The Mongoose OS core code implements mgos_app_init() as a weak symbol, so if the user app does not define its own mgos_app_init(), a default stub is used. That's what most of the JavaScript-based apps do - they do not contain C code at all.

    enum mgos_app_init_result mgos_app_init(void);

User app init function. A weak stub is provided in mgos_app_init.c, which can be overridden.

Example of a user-defined init function:

    #include "mgos_app.h"

    enum mgos_app_init_result mgos_app_init(void) {
      if (!my_super_duper_hardware_init()) {
        LOG(LL_ERROR, ("something went bad"));
        return MGOS_APP_INIT_ERROR;
      }
      LOG(LL_INFO, ("my app initialised"));
      return MGOS_APP_INIT_SUCCESS;
    }

    void mgos_app_preinit(void);

An early init hook, for apps that want to take control early in the init process. How early? Very, very early. If the platform uses an RTOS, it is not running yet. Dynamic memory allocation is not safe. Networking is not running. The only safe things to do are to communicate something to mgos_app_init() via global variables, or to shut down the processor and go (back) to sleep.
https://mongoose-os.com/docs/mongoose-os/api/core/mgos_app.h.md
Stats with Python: Unbiased Variance

January 17, 2021 | 7 min read | 1,423 views

Are you wondering what unbiased sample variance is? Or why it is divided by n-1? If so, this post answers them for you with a simple simulation, proof, and an intuitive explanation.

Consider you have i.i.d. samples $x_1, \ldots, x_n$, and you want to estimate the population mean $\mu$ and the population variance $\sigma^2$ from these samples. The sample mean is defined as:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

This looks quite natural. But what about the sample variance? This is defined as:

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2$$

When I first saw this, it looked weird. Where does $n-1$ come from? The professor said "this term makes the estimation unbiased", which I didn't quite understand. But now, thanks to Python, it's much clearer than it was then. So, in this post, I'll make a concise and clear explanation of unbiased variance.

Visualizing How Unbiased Variance is Great

Consider a "biased" version of the variance estimator:

$$s_n^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2$$

In fact, just like the unbiased variance, this estimator converges to the population variance as the sample size approaches infinity. However, the "biased variance" estimates the variance slightly smaller. Let's see how these estimators are different.

Suppose you are drawing samples, one by one up to 100, from a continuous uniform distribution $U(0, 1)$. The population mean is $\mu = 1/2$ and the population variance is $\sigma^2 = 1/12$. I simulated estimating the population variance using the above two estimators in the following code.

    import numpy as np
    import matplotlib.pyplot as plt
    import seaborn as sns

    sns.set_style("darkgrid")

    n = 100
    rands = np.random.rand(n)
    biased = []
    unbiased = []
    for i in range(2, n+1):
        biased.append(np.var(rands[:i], ddof=0))
        unbiased.append(np.var(rands[:i], ddof=1))

    # plot both running estimates against the true variance 1/12
    plt.plot(range(2, n+1), biased, label="biased")
    plt.plot(range(2, n+1), unbiased, label="unbiased")
    plt.axhline(1/12, color="gray", linestyle="--")
    plt.legend()
    plt.title("Estimated variance, trial #1");

This code gives different results every time you execute it. Indeed, both of these estimators seem to converge to the population variance, and the biased variance is slightly smaller than the unbiased estimator. However, from these results, it's hard to see which is more "unbiased" to the ground truth.
So, I repeated this experiment 10,000 times and plotted the average performance in the figure below.

    n = 100
    k = 10000
    rands = np.random.rand(n, k)
    biased = []
    unbiased = []
    for i in range(2, n+1):
        biased.append(np.var(rands[:i], axis=0, ddof=0).mean())
        unbiased.append(np.var(rands[:i], axis=0, ddof=1).mean())

    # plot the averaged estimates against the true variance 1/12
    plt.plot(range(2, n+1), biased, label="biased")
    plt.plot(range(2, n+1), unbiased, label="unbiased")
    plt.axhline(1/12, color="gray", linestyle="--")
    plt.legend()
    plt.title("Estimated variance (averaged over 10,000 trials)");

Now it's clear how the biased variance is biased. Even when there are 100 samples, its estimate is expected to be 1% smaller than the ground truth. In contrast, the unbiased variance is actually "unbiased" to the ground truth.

Proof

Though it is a little complicated, here is a formal explanation of the above experiment. Recall that the variance of the sample mean follows this equation:

$$\mathrm{Var}[\bar{x}] = \mathbb{E}[(\bar{x} - \mu)^2] = \frac{\sigma^2}{n}$$

Thus,

$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right] = \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2 - (\bar{x} - \mu)^2\right] = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\sigma^2,$$

which means that the biased variance underestimates the true variance by a factor of $\frac{n-1}{n}$. To correct this bias, you need to estimate it by the unbiased variance:

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2;$$

then,

$$\mathbb{E}[s^2] = \frac{n}{n-1}\cdot\frac{n-1}{n}\sigma^2 = \sigma^2.$$

Here, $n-1$ is a quantity called the degree of freedom. In the above example, the samples are subject to the equation:

$$\sum_{i=1}^{n}(x_i - \bar{x}) = 0$$

So, given the sample mean $\bar{x}$, the samples have only $n-1$ degrees of freedom. When I called the function np.var() in the experiment, I specified ddof=0 or ddof=1. This argument is short for delta degrees of freedom, meaning how many degrees of freedom are reduced.

Intuition

The bias of the biased variance can be explained in a more intuitive way. By definition, the sample mean is always closer to the samples than the population mean, which leads to a smaller variance estimate if you divide by the sample size $n$. For more explanations, I'd recommend this video.

Written by Shion Honda. If you like this, please share!
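The roughly 1% gap at 100 samples is no accident: the biased estimator's expectation is exactly (n-1)/n times the true variance (the factor derived in the proof). A quick pure-Python cross-check of that factor, using only the standard library and independent of the NumPy experiment above:

```python
import random

# For U(0, 1) the true variance is 1/12; the biased estimator
# (divide by n) should average about ((n - 1) / n) * (1 / 12).
random.seed(0)
n, trials = 10, 100_000
total = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    m = sum(xs) / n
    total += sum((x - m) ** 2 for x in xs) / n  # biased: divide by n
biased_mean = total / trials
expected = (n - 1) / n * (1 / 12)  # the (n-1)/n factor, about 0.075
assert abs(biased_mean - expected) < 1e-3
print(f"average biased estimate: {biased_mean:.4f} (theory: {expected:.4f})")
```

With n = 10 the shrinkage is a full 10%, which makes the effect much easier to see than at n = 100.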
https://hippocampus-garden.com/stats_unbiased_variance/
This document is also available in these non-normative formats: XML. Copyright © 2007.

As elaborated in the Status section, this specification is subject to change. Items presently under consideration by the WG are either noted in the main text or listed in Appendix F Format Features Under Consideration, for which only brief descriptions are provided, and should probably not yet be considered for implementation.

Distinguishing Bits
5.2 EXI Format Version
5.3 EXI Options
6. Encoding EXI Streams
6.1 Determining Event Codes
6.2 Representing Event Codes
6.3 Fidelity Options
7. Representing Event Content
7.1 Built-in EXI Datatypes Representation
Pluggable CODECS
F Format Features Under Consideration (Non-Normative)
F.1 Strict Schemas
F.2 IEEE Floats
F.3 Bounded Tables
F.4 Grammar Coalescence
F.5 Indexed Elements
F.6 Further Features Under Consideration
G Example Encoding (Non-Normative)
H Changes from First Public Working Draft (Non-Normative)
I

Present work centers around evaluating and integrating some features from other measured format technologies into EXI (see Appendix F Format Features Under Consideration).

Terminal symbols that are qualified with a qname permit the use of a wildcard symbol (*) in place of a qname. The terminal symbol SE (*) matches a start element (SE) event with any qname. Similarly, the terminal symbol AT (*) matches an attribute (AT) event with any qname.

It is the EXI stream body that carries the content of the document, while the EXI header, amongst its roles, communicates the options that were used for encoding the EXI stream body. [Definition:] The building block of an EXI stream body is an EXI event. EXI grammar is formally specified in section 8. EXI Grammars.

The following table summarizes the EXI events and associated content that occur in an EXI stream. In addition, the table includes the grammar notation used to represent each event in this specification.
Each event in an EXI stream participates in a mapping system that relates events to XML Information Items so that an EXI document...

The EXI header distinguishes EXI documents from text XML documents, identifies the version of the EXI format being used, and can specify the options used to encode the stream. When alignment of the EXI stream is turned on, as dictated by the EXI options, padding bits of the minimum length required to make the whole length of the header byte-aligned are added at the end of the header. The following sections describe the remaining three parts of the header.

Applications that accept EXI documents as well as XML documents can look at the Distinguishing Bits to determine whether to interpret a particular stream as XML or EXI.

using the default options specified by the following XML document: <header xmlns=""> <

The fragment option is a Boolean that indicates whether the EXI body is an EXI document or an EXI fragment. When set to true, the EXI body is an EXI fragment. Otherwise, the EXI body is an EXI document. [Definition:] EXI fragments Section 8.4.2 Built-in.

compression and alignment are not in effect. When the EXI events are subsequently subject to EXI compression or alignment, ...

Note: Support for IEEE float representation is currently under consideration. (See F.2 IEEE Floats.)

A value represented as Unsigned Integer can be decoded by going through the following steps, with the initial value set to 0 and the initial multiplier set to 1.

Otherwise, URI and local-name components are encoded as Strings (see 7.1.10 String) per the rules defined for a uri content item and a local-name content item, respectively.

The URIs and local-names used in qname content items are also namespace URIs declared in the schema. Local-name content items and the local-name portion of qname content items are assigned to partitions based on the namespace URI of the NS event or qname content item of which the local-name is a part.
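The Unsigned Integer decoding procedure mentioned above (start with value 0 and multiplier 1, then accumulate octets) can be sketched in Python. This is an illustrative reading of the encoding, in which each octet carries seven value bits plus a high continuation bit, least significant group first; it is not normative code:

```python
def decode_unsigned_integer(octets):
    """Decode an EXI Unsigned Integer from an iterable of octets.

    Each octet contributes its 7 least significant bits, scaled by the
    running multiplier; the most significant bit of an octet signals
    that another octet follows.
    """
    value, multiplier = 0, 1
    for octet in octets:
        value += (octet & 0x7F) * multiplier
        multiplier *= 128
        if not octet & 0x80:  # continuation bit clear: last octet
            return value
    raise ValueError("truncated Unsigned Integer")

print(decode_unsigned_integer([0xAC, 0x02]))  # 300
```

Encoding is the mirror image: while the value is 128 or greater, emit the low seven bits with the continuation bit set, then emit the final group as-is.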
Partitions containing local-name content items and all string table partitions containing value content items. When a string value is found in the partitions containing local-name content items, for elements with unrestricted types (e.g., xsd:anyType) qualified name and do not have additional a priori constraints as to their content. A separate instance of built-in element grammar is assigned to each qualified name upon the first occurrence of the elements of the same qualified name.

regular or elements with unrestricted types (e.g. xsd:anyType), where 0 ≤ i < n and n is the number of type declarations in the schema.

Then combine the sequence of grammars using the grammar concatenation operator defined in section 8.5.3.1.3.1.3.2.1 Eliminating Productions with no Terminal Symbol. The second step is described in section 8.5.3.1.3.1.6 Particles for the rules used to derive grammars from particles. Grammars for attribute uses of attributes "sku" and "color" are as follows. See section 8.5.3.1.3.1.4.3.2 EXI Normalized Grammars for the process that converts proto-grammars into normalized grammars, and section 8.5, which are shown below. See section 8.5.3.4 Undeclared Productions for the process that augments normalized grammars with productions that represent terminal symbols not declared in schemas.

...="codecMap" minOccurs="0" maxOccurs="unbounded"> <xsd:complexType> <xsd:sequence> <xsd:any <!-- schema type --> <xsd:any <!-- CODEC --> <

- WildcardEsc (i.e. the meta-character '.'). See [XSD:37a].
- MultiCharEsc that is one of '\S', '\I', '\C', '\D' and '\w'. See [XSD:37].
- complEsc (examples of which are '\P{ L }' and '\P{ N }'). See [XSD:26].
- negCharGroup.

The feature of F.1 Strict Schemas will be described in a normative part of the specification once the details of its definition have been settled.
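The string table behavior described in this section, where a value already present in a partition is replaced by its compact identifier while a miss is encoded as a literal and then added to the partition, can be illustrated with a small sketch. This is an illustration of the idea only, not the normative EXI encoding (partition layout and identifier widths are simplified):

```python
class StringTablePartition:
    """Toy model of one string table partition: strings receive
    compact integer identifiers in order of first occurrence."""

    def __init__(self):
        self._ids = {}

    def encode(self, s):
        # A hit yields the compact identifier; a miss yields the
        # literal string and registers it for future hits.
        if s in self._ids:
            return ("hit", self._ids[s])
        self._ids[s] = len(self._ids)
        return ("miss", s)

part = StringTablePartition()
print(part.encode("order"))    # ('miss', 'order')
print(part.encode("order"))    # ('hit', 0)
print(part.encode("product"))  # ('miss', 'product')
```

Repeated names therefore cost only a small integer after their first occurrence, which is why the format keeps separate partitions for URIs, local-names, and values.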
The others will each be tested in order to collect data to determine whether their value outweighs the additional cost and complexity they would introduce into the format. A strict schema fidelity option can be supported whereby an encoder can be informed of the validity of the document being processed. Strict schema coding would likely improve compactness for use cases where documents are known to be valid according to a given schema, or where it is known that a non-valid part of a document is of no relevance to the decoder. This fidelity option is unlikely to impact the runtime performance since it only affects an encoder's initialization phase. However, this additional information will likely benefit compactness by reducing the number of productions in the grammar and, consequently, the number of bits needed to encode event codes. One of the usages of strict schemas would be for processing the EXI document representing the EXI Options in an EXI header. It can be encoded using the schema in Appendix C XML Schema for EXI Options Header with the options specified by the following XML document: <header xmlns=""> <strict/> </header>. Schema-informed grammars are constructed primarily from structural constraints expressed in a schema (e.g., an XML Schema) by considering those events that are expected to happen at a certain state as being more likely to occur than other events. These primary events are assigned an event code of length 1 to ensure that they are represented in fewer bits. On the other hand, other events such as unexpected elements, unexpected attributes, unexpected end-of-elements, comments and processing instructions are considered less likely to occur relative to the primary events, and are thus assigned event codes of length 2 or 3 to reflect their occurrence expectancy. A schema can be regarded as a template. It follows that a schema-informed EXI encoding defines a way of representing an instance of an XML infoset relative to a template. 
This aspect of schema as a template opens up a path to the proposed feature of coalesced grammars, whereby EXI events convey not only an atomic event (such as an element or an attribute) but also a sequence thereof in a single coalesced event. Based on the constraints expressed in a schema, these events are construed to occur more frequently together than not. As an example, it may be possible to learn from a schema that the element <order> is always followed by the element <product>, which in turn is always followed by the element <description>. Based on this knowledge, it is possible to assign an event code to the sequence SE("order") SE("product") SE("description") by inserting a new, coalesced production into the grammar. This new production can be used in place of the existing productions for each event, thus using a single event code to represent them. Note that this coalescing technique is not restricted to SE events; it applies to any sequence of events that can be shown to occur frequently, for which the use of a single event code can provide obvious benefits. For example, an SE event followed by multiple AT events can also be combined using a single event code. To improve coalescing, the lexicographical order currently defined for attributes in EXI may need to be revisited so that required attributes are placed before optional ones. This additional requirement will enable better grouping and, consequently, better compactness..)
http://www.w3.org/TR/2007/WD-exi-20071219/
On this blog we have recently learned how to build an Alexa Skill using Microsoft tools and platforms, like C#, Visual Studio and Azure Functions. The skill we have built in the previous posts was available to every user: it didn't need to know who the user asking the questions was in order to return proper feedback. However, there are some scenarios where this approach isn't good enough. Think, for example, of a skill to order food that can be delivered at home. The skill must know who is placing the order, to charge your credit card with the total amount or to pick your home location as the delivery address. For these scenarios, Alexa provides a feature called account linking. When the user enables the skill, he will be requested to authenticate to a 3rd party service using an OAuth 2.0 flow. Once the linking is successful, every request to the skill will be authenticated against the service and Alexa will send you, in the JSON request, the access token which is required to perform further operations. You can identify skills which require account linking thanks to the message displayed under the Enable button in the Alexa Store. In this blog post we're going to see how to implement this flow by adding support for Azure Active Directory authentication and the Microsoft Graph to a skill. We'll welcome the user by saying his name and we'll be able to provide information about his OneDrive storage. Let's start!

The account linking

To set up the account linking, you need to select your skill on the Alexa Developer Console and choose Account linking from the menu on the left. In the first part of the page you'll be able to customize the configuration of the account linking. In order to enable it, you have to turn on the first option, Do you allow users to create an account or link to an existing account with you?. This will enable the message you have seen in the previous image in the Alexa Store.
The second option can be used if your skill offers some features which can be leveraged also without linking an account. For example, the food delivery skill could offer a command to know the restaurants around you which can be freely invoked; however, as soon as you try to order some food, you'll be asked to link your account. In our scenario, we leave this option turned off. Our skill needs to use the Microsoft Graph, so we aren't able to provide any feature if the user hasn't logged in with his Microsoft Account. Then make sure to check Auth Code Grant as authorization grant type, since it's the safest one. The rest of the page is dedicated to configuring the OAuth 2.0 flow, which will be triggered when the user enables the skill and authenticates with the 3rd party service. Let's take a step back and look at how the OAuth 2.0 flow works. The process involves three steps:

- First, the client application (in our case, the skill) sends an authorization request to the specific endpoint exposed by the service. The user will see a popup or he will be redirected to a different page (in case of a web application), where he will be asked to log in with his credentials. This page belongs to the owner of the service. This means that the client application (in our case, neither Amazon nor our skill) will never get access to the credentials of the user.
- If the authorization request is approved, the application can use the received grant to request an access token. The access token is a unique identifier of the user which, however, doesn't contain any information that can be used to identify his credentials.
- Once we have the access token, we can finally access all the protected resources exposed by the service. The access token will be included as the authorization header of every request and, if it's valid, we will receive the resource we have asked for.

If we take a look now at the section to configure the OAuth 2.0 flow, we will see how the various fields match the flow we have just highlighted:

- Authorization URI: this is the endpoint exposed by the service to ask the permission to start using it. You send a request with all the details to identify your application and, if approved, you will receive back the authorization code.
- Access Token URI: this is the endpoint to use as the next step. After we have been authorized to use the service, we can use the authorization code to send a request to this URI to get back the access token.
- Client ID is the unique identifier of our application.
- Client secret is a password which has been provided by the owner of the service to authenticate our application.
- Scope is a set of values which identify the features of the service we want to leverage.

In our scenario, we need to authenticate to the Microsoft Graph using a Microsoft Account or an Office 365 account. Where can we find all the information we need to fill the Account Linking page in the Amazon portal? In order to leverage authentication with a 3rd party service, we need to register our application against it. For this reason, all the 3rd party services which provide authentication (Microsoft, Google, Twitter, GitHub, etc.) offer a developer portal that can be used to register an application and to obtain all the credentials we need to implement the authentication flow. The one offered by Microsoft to register an application which needs to support Azure AD authentication is available in the Azure Portal. As such, the first step is to log in using the account which you want to use to register your application (Microsoft Account or Office 365). Once you are logged into Azure, open the Azure Active Directory section and choose App registrations (Preview):
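As an illustrative aside, the three steps above map onto two concrete HTTP requests. The sketch below shows their shape in Python (standard library only); the endpoint URLs follow the Azure AD v2.0 convention, every identifier is a placeholder, and this is not code you deploy yourself, since Alexa performs these requests for you once the portal fields are filled in:

```python
from urllib.parse import urlencode

AUTHORIZE = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
TOKEN = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def authorization_url(client_id, redirect_uri, scopes, state):
    # Step 1: the user's browser is sent here to log in and consent.
    query = urlencode({
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    })
    return f"{AUTHORIZE}?{query}"

def token_request_body(client_id, client_secret, code, redirect_uri):
    # Step 2: this form is posted to the token endpoint to swap the
    # authorization code for an access token.
    return {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri,
    }
```

The fields produced here correspond one-to-one to the Authorization URI, Access Token URI, Client ID, Client secret and Scope fields described above.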
If we take a look now at the section to configure the OAuth 2.0 flow, we will see how the various fields match the flow we have just highlighted: - Authorization URI: this is the endpoint exposed by the service to ask the permission to start using it. You send a request with all the details to identify your application and, if approved, you will receive back the authorization code. - Access Token URI: this is the endpoint to use as next step. After we have been authorized to use the service, we can use the authorization code to send a request to this URI to get back the access token. - Client ID is the unique identifier of our application. - Client secret is a password which has been provided by the owner of the service to authenticate our application. - Scope is a set of values which which identiy the features of the service we want to leverage. In our scenario, we need to authenticate to the Microsoft Graph using a Microsoft Account or an Office 365 account. Where can we find all the information we need to fill the Account Linking page in the Amazon portal? In order to leverage authentication with a 3d party service, we need to register our application against it. For this reason, all the 3rd party services which provide authentication (Microsoft, Google, Twitter, GitHub, etc.) offer a developer portal that can be used to register an application and to obtain all the credentials we need to implement the authentication flow. The one offered by Microsoft to register an application which needs to support Azure AD authentication is available in the Azure Portal. As such, the first step is to login to using the account which you want to use to register your application (Microsoft Account or Office 365). Once you are logged into Azure, open the Azure Active Directory section and choose App registrations (Preview): Let's start to register our application. Click on the New registration button. First give to your application a unique name. 
In my case, I chose MyAlexaSkill. Then you must specify which account types you want to support: - Accounts in the organizational directory only: if you're logged in with an Office 365 account, you will be able to create an application which is authorized to allow the login only for users from your organization. This approach is useful when you're building an internal app, which my be used only by people from your company. - Accounts in any organizational directory: this option will enable any Office 365 user to login with their account, regardless of the tenant they belong to. - Accounts in any organizational directory and personal Microsoft Account: this option will enable not only Office 365 users to login, but also users with a personal Microsoft Account. In our scenario, let's enable the third option. We want to enable every user to use the skill to know how much space they have left on OneDrive, regardless if it's the personal space or the business one. Ignore, for the moment, the Redirect URI section. We're going to fill it later. Press Register at the end of the section. Now we can start to fill the various field in the Alexa Developer Portal. Let's see where to retrieve the various parameters: - Authorization URI and Access Token URI: we can find this information by clicking on the Endpoints button. From there, we need to copy the two URIs under OAuth 2.0 authorization endpoint (v2) and OAuth 2.0 token endpoint (v2). - Client Id can be retrieved in the main page under Application (client) ID: - Client Secret must be generated from the Azure portal. Click on Certificates & secrets in the left panel, then click on New client secret. Give it a description and choose an expiration date. In the context of a skill, you can safely choose Never. This way, you won't have to remember to update, from time to time, the client secret in the Alexa Developer Portal. Once the secret has been generated, copy it in the portal and store it also somewhere safe. 
The reason is that this is the only time you'll be able to see the client secret in the clear. After that, it will be automatically masked and there won't be any way to see it again. You'll be forced to generate a new one if you lose it.

- Client Authentication Scheme: keep HTTP Basic.
- Scope must match the various scopes that we have enabled in the Azure portal. You can see them by clicking on API permissions in the Azure app configuration. By default, the standard one is User.Read, which allows the application to read the basic information of the user profile, like their name or their mail address. If you want to add more scopes, you can click on Add a permission and explore all the other scopes that are offered by the Azure services or by your custom services. In this case, we're building a skill which integrates with the Microsoft Graph, so you'll find all the available permissions under Microsoft Graph:

Scopes are categorized as Delegated permissions and Application permissions. In our case, we want the skill to access the API as the signed-in user, so we can focus on the first category. From there, you'll be able to explore all the available scopes and enable the ones you need. Pay attention that, for some of them, you may see the value Yes in the Admin consent required column. If you're building an enterprise skill which authenticates with an Office 365 account, the administrator of the tenant may block the usage of some scopes for security reasons. In such cases, you will need to reach them to ask them to enable the proper permissions.

Once you have defined the scopes you need, you will need to copy them inside the Scope field of the Alexa Developer Portal. If you have multiple scopes, you can click on the + button to add new fields. The last step requires the opposite process compared to what we have done so far. We'll need to take some info from the Alexa Developer Portal and copy it to the Azure one.
In the OAuth 2.0 flow, in fact, once the authentication process is completed, the service will forward the response to a specific endpoint exposed by our application. In this case, since it's an Alexa skill, the endpoints are provided directly by Amazon. We can find them at the end of the Account Linking configuration page:

We need to copy them in the Azure Portal, in the Authentication section:

Hit Save to complete the process, then move back to the Alexa Developer Portal and hit Save there as well. We're done! Now account linking should be properly enabled. This is how the configuration of your Alexa skill should look:

In order to test the linking, we need to open the Alexa web application, which you can find at. Login with your account, move to the Skills section and click on the Your skills button on the top right.

Please note! I apologize if the following screenshots are in Italian, but it seems that there's no way to force Amazon to display the application in English if your account is based in Italy. However, they should be helpful to understand where to look for the various options.

You will find multiple categories, including one called Skills for developers. Here you will find all the skills you have created in the Alexa Developer Portal. Look for the one you're working on and notice how, below the name, you will now see a message informing you that account linking is required. Click on the skill and enable it. If you did everything correctly, another window or tab of your browser will open up to display the Microsoft Account authentication page. Login with your Microsoft Account or Office 365 account and you should be all set! You should see a confirmation message notifying you that your account is now linked to the skill.

Handling the authentication in the Azure Function

Now that we have completed the configuration on the portal, we need to start working on our backend, which is hosted by an Azure Function.
We need, in fact, to retrieve the access token and use it to perform operations with the Microsoft Graph. I won't explain from scratch how to build an Alexa Skill hosted by an Azure Function, since we already did it in the previous posts. Here we will focus only on the relevant snippets for handling the authentication process. Thanks to Alexa.NET it's easy to retrieve the token, since it's stored in the Session.User.AccessToken property of the SkillRequest object, which is the one that maps the JSON coming from Alexa with all the details about the request:

public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequest req,
    ILogger log)
{
    var json = await req.ReadAsStringAsync();
    var skillRequest = JsonConvert.DeserializeObject<SkillRequest>(json);

    var token = skillRequest.Session.User.AccessToken;

    // handle the request and return the response
}

Now that we have the token, we can use it to perform operations against the Microsoft Graph. The Graph is nothing more than a set of REST endpoints, which you can call by sending HTTP commands. Based on the type of operation, it could be a GET, a POST, a PUT, etc. The easiest way to understand the opportunities offered by the Microsoft Graph is to use Graph Explorer, which is a sort of playground. You can login with your Microsoft Account or Office 365 account and then start making requests to the APIs. The tool will offer you the opportunity to compose requests and to observe the JSON response, as well as to discover all the available endpoints.

However, we are not required to perform manual HTTP requests in order to use the Graph. Microsoft, in fact, provides multiple SDKs which encapsulate all the logic to work with the Graph, similarly to how Alexa.NET abstracts all the JSON requests and responses and allows you to work with classes and objects. Of course, there is also an SDK for .NET, so we can easily integrate it in our Azure Function.
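Since the Graph is just a set of REST endpoints, it can help to see what a raw call would look like before switching to the SDK. Here is a small Python sketch (Python rather than the post's C#, just to keep it self-contained); the request is only built, never sent, and /me is the standard production Graph endpoint for the signed-in user's profile:

```python
# A sketch of what the SDK does for us: a plain HTTP GET against the
# production Microsoft Graph /me endpoint, with the access token carried
# in a bearer Authorization header. The token below is a placeholder.
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_me_request(access_token):
    # /me returns the basic profile of the signed-in user
    req = urllib.request.Request(GRAPH_BASE + "/me")
    req.add_header("Authorization", "Bearer " + access_token)
    return req

def display_name(response_body):
    # Pull displayName out of the JSON that /me returns
    return json.loads(response_body).get("displayName")

req = build_me_request("dummy-token")          # placeholder token
print(req.full_url)                            # https://graph.microsoft.com/v1.0/me
print(req.get_header("Authorization"))         # Bearer dummy-token
print(display_name('{"displayName": "Ada"}'))  # Ada
```

This is exactly the plumbing that the .NET SDK's authenticated client takes care of in the next section.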
Right click on your project in Visual Studio, choose Manage NuGet packages and install the package called Microsoft.Graph. The object that we can use to perform operations against the Graph is called GraphServiceClient, which is included in the Microsoft.Graph namespace. However, we can't use it as it is. We need to use an authenticated client, so we will need to supply the access token included in the request. Here is a simple setup method that we can add to our Azure Function:

public static GraphServiceClient GetAuthenticatedClientForUser(string token, ILogger logger)
{
    GraphServiceClient graphClient = null;

    // Create Microsoft Graph client.
    try
    {
        graphClient = new GraphServiceClient(
            "",
            new DelegateAuthenticationProvider(
                (requestMessage) =>
                {
                    requestMessage.Headers.Authorization = new AuthenticationHeaderValue("bearer", token);
                    return Task.CompletedTask;
                }));

        return graphClient;
    }
    catch (Exception exc)
    {
        logger.LogError(exc, "Could not create a graph client");
    }

    return graphClient;
}

We create a new instance of the GraphServiceClient object, passing as parameters:

- The endpoint we want to use for the Graph. We use the production one, which is. There's also a preview one, which provides access to all the beta APIs, and it's available at.
- A DelegateAuthenticationProvider, which is the delegate used to handle the authentication. The delegate provides a reference to the HTTP request which is sent to the Microsoft Graph, which we need to customize by supplying the access token in the authorization header.

Now that we have an authenticated client, we can use it to perform operations with the Graph. For example, let's customize the welcome message of the skill by adding the name of the user.
We do this in case the incoming request type is LaunchRequest:

public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequest req,
    ILogger log)
{
    var json = await req.ReadAsStringAsync();
    var skillRequest = JsonConvert.DeserializeObject<SkillRequest>(json);

    // Verifies that the request is indeed coming from Alexa.
    var isValid = await skillRequest.ValidateRequestAsync(req, log);
    if (!isValid)
    {
        return new BadRequestResult();
    }

    var request = skillRequest.Request;
    SkillResponse response = null;

    var token = skillRequest.Session.User.AccessToken;
    var client = GetAuthenticatedClientForUser(token, log);

    if (request is LaunchRequest launchRequest)
    {
        var me = await client.Me.Request().GetAsync();
        var welcomeMessage = $"Welcome {me.DisplayName}";
        response = ResponseBuilder.Tell(welcomeMessage);
    }

    return new OkObjectResult(response);
}

The client simply maps all the endpoints that you can see in the Graph Explorer as objects. As such, in order to get the basic profile of the user (which is mapped with the endpoint), we can use the Me property exposed by the client. To actually perform the request we need to call the Request() method, followed by the HTTP method we need to use. In this case, the me endpoint replies to an HTTP GET, so we call the GetAsync() method. In return, we get an object which maps all the properties that we can see in the response JSON in Graph Explorer. For our scenario, we retrieve the name using the DisplayName property and we add it to the welcome message. The rest of the code is the same as we've seen in the other posts: we use the ResponseBuilder class to create a response and then we send it back as the response to the Alexa service which invoked our function.
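ResponseBuilder.Tell hides the JSON that actually travels back to Alexa. As a rough illustration (a Python sketch of the response shape, not Alexa.NET's actual code, so treat the exact fields as a simplified view of the Alexa response format), a "tell" response boils down to plain-text speech plus a flag that closes the session:

```python
# Simplified sketch of the JSON behind a "tell" response: one piece of
# plain-text speech, and shouldEndSession set so the skill session closes.
import json

def tell(text):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

payload = tell("Welcome Ada")
print(json.dumps(payload, indent=2))
print(payload["response"]["outputSpeech"]["text"])  # Welcome Ada
```

Seeing the underlying payload makes it clearer why Tell() only needs a single string: everything else in the response is boilerplate.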
Now that we have seen how to handle the authentication in the backend, the sky is the limit 😃 For example, if we follow our original idea of integrating OneDrive with our Alexa skill, we can create a new intent in the Interaction Model with the name Quota and then, in the function, handle it with the help of the Graph client:

if (intentRequest.Intent.Name == "Quota")
{
    var drive = await graphClient.Me.Drive.Request().GetAsync();
    int free = (int)(drive.Quota.Remaining.Value / 1024 / 1024 / 1024);
    var quotaMessage = $"You have {free.ToString()} GB available";
    response = ResponseBuilder.Tell(quotaMessage);
}

The structure of the code is the same as in the previous example. If we use Graph Explorer, we can see how the information about the OneDrive storage of the user is available through the endpoint. This endpoint is mapped by the property Me.Drive, which returns a Drive object that contains all the properties of the storage. Thanks to the Quota property we can get the information we're looking for, which is the available space, and we use it to build a response for the user.

Wrapping up

In this post we have seen how to enable account linking for an Alexa Skill, allowing it to identify the user and to provide customized responses. In this specific example we have leveraged AAD authentication and the Microsoft Graph, but any service which uses OAuth 2.0 for its authentication flow is supported. You could integrate your Alexa skill, for example, with Twitter, GitHub, Facebook, etc. Thanks to the OAuth 2.0 configuration provided in the Alexa Developer Portal, you don't have to handle the authentication flow on your own by manually communicating with the various authentication and access token endpoints. You just have to provide Amazon with the configuration of your service and, automatically, you will receive the access token in the body of the request which is delivered to your skill. You can find the full OneDrive sample on GitHub at.
Of course, you will find only the code of the backend Azure Function. It's up to you to create the most appropriate interaction model and to set up an Azure AD application to support the authentication. A huge thanks to my friend Marco Minerva, who is sharing with me a lot of interesting activities around Alexa skills and who helped me in setting up Account Linking. Happy coding!
https://blogs.msdn.microsoft.com/appconsult/2018/12/05/alexa-azure-functions-microsoft-graph-a-smarter-assistant/
On Mon, Aug 20, 2007 at 11:22:17AM -0400, David Roundy wrote:
> On Sat, Aug 18, 2007 at 03:50:48PM +0200, Andrea Rossato wrote:
> > The present behavior is incompatible with Tabbed (it forces a redraw
> > of tabs every time the pointer enters a window). Moreover I don't like
> > very much the fact that a keyboard driven WM gives such a power to the
> > mice.
>
> Actually, the present behavior works as it is supposed to with Tabbed:
> when focus changes, we *need* to redraw the tabs, so they will properly
> reflect the focus.

Sorry, I didn't mean this: what I meant is that mouse focus forces tabs
to be redrawn whenever the pointer enters the focused windows coming from
tabs (you've just sent a patch to correct that if I read correctly).

> I like the idea of xmonad as a user-driven WM...

I was just kidding when talking about mice's power... ;-)
If your patch will make mouse focus a bit less noticeable, I can live with it.

> > I need it for my remote controlling stuff. But I'm not the only one.
>
> I think I can see this: you want to insert hooks that are not part of a
> Layout?

That's it.

> > >.

I'm quite aware that I seem not to get something about layout modifiers.
I thought that adding a hook was just like:

layoutModifier :: Layout a -> Layout a
layoutModifier cl@(Layout {doLayout = dl, modifyLayout = ml}) =
    Layout {doLayout = dl, modifyLayout = modLay}
    where modLay sm | Just e <- fromMessage sm = do handle_event e
                                                    ml sm
                    | otherwise = ml sm
          handle_event e@(AnyEvent {ev_event_type = t})
              | t == whatEverEvent = do io (putStrLn "Hello World") >> return () -- but focus w ???
          handle_event _ = return ()

But if I change "putStrLn Hello..." with focus w, that is to say, if I
change the stack of windows, what should I do next? runLayout (with the
need of recalculating the screen rectangle), for instance? This is not
clear to me and I seem not to be able to get it only by studying the
code (your code actually).
Self-modifying layouts and layout transformers are precisely what I want
to understand. Can you give me some directions, please?

Thanks for your kind attention,
Andrea
http://www.haskell.org/pipermail/xmonad/2007-August/001848.html
An abstract base class that should never be used directly, only inherited.

#include <TelescopeClient.hpp>

This class used to be called Telescope, but it has been renamed to TelescopeClient in order to resolve a compiler/linker conflict with the identically named Telescope class in Stellarium's main code.

Definition at line 53 of file TelescopeClient.hpp.

Return the angular radius of a circle containing the object as seen from the observer, with the circle center assumed to be at getJ2000EquatorialPos().
Implements StelObject.
Definition at line 79 of file TelescopeClient.hpp.

Return object's name in English.
Implements StelObject.
Definition at line 62 of file TelescopeClient.hpp.

Returns a unique identifier for this object. The ID should be unique for all objects of the same type, but may freely conflict with IDs of other types, so getType() must also be tested. With this it should be possible to at least identify the same object in a different instance of Stellarium running the same version, but it would be even better if the ID provided some degree of forward-compatibility. For some object types (e.g. planets) this may simply return getEnglishName(), but better candidates may be official designations or at least (stable) internal IDs. An object may have multiple IDs (different catalog numbers, etc). StelObjectMgr::searchByID() should search through all ID variants, but this method only returns one of them.
Implements StelObject.
Definition at line 78 of file TelescopeClient.hpp.

Get a color used to display info about the object.
Reimplemented from StelObject.
Definition at line 64 of file TelescopeClient.hpp.

TelescopeClient supports the following InfoStringGroup flags:
Implements StelObject.

Return translated object's name.
Implements StelObject.
Definition at line 63 of file TelescopeClient.hpp.

Return object's type. It should be the name of the class.
Implements StelObject.
Definition at line 77 of file TelescopeClient.hpp.
http://stellarium.org/doc/0.16/classTelescopeClient.html
Google Maps on Rails with NetBeans (Part One)

By Gao Ang on April 01, 2008

Tutorial Requirements

This tutorial requires the following technology:

- NetBeans 6.0 Ruby IDE or 6.1 Beta
- Ruby on Rails: 2.0.2
- Ruby: 1.8.6
- Database: MySQL
- Rails Gems: GeoKit, Cartographer

Objective of our application

Our Google Maps Rails mashup demo will accomplish the following tasks:

- A table will be created in the database that contains the location description information, such as the address, the longitude and latitude of the location, and a description.
- There will be a Google Map with a control bar in our browser, as well as a form for us to submit a new location.
- Users can input a specific address location and description; when this information is submitted, the location will be translated to longitude and latitude and stored in the backend database.
- After a user adds a new location, the new point will show up on the map.
- If the user clicks the new marker, a popup tip will show up with the submitted information.

Okay, let's get started to realize these kinds of functions.

Obtaining a Google Maps API Key

The Google Maps API lets us embed Google Maps in our own web pages. First of all, we should have an API key to make use of the Google Maps API. Go to the following address to sign up for the Google Maps API key. Agree with the terms and conditions of Google Maps usage. Make sure to input the site URL and click "Generate API Key" to obtain the Google Maps API key for our site. Then we should copy the API key generated by Google Maps; we will use it in our configuration file later.

Creating the Ruby on Rails Project

Choose File > New Project in NetBeans. Select Ruby in the Categories field and Ruby on Rails Application in the Projects field and then click Next. Type 'gmaps' in the Project Name field. Choose MySQL as the backend database and then click Finish. In the Projects window, expand the gmaps node. We can see that all the Rails folders have been generated by the NetBeans Ruby IDE.
Then we should create a new database called gmaps_development. We use the 'mysqladmin' tool to create this database. On the command line, input the following command:

mysqladmin -u root create gmaps_development

After the database is created, we should expand the 'config' node and add the database description in the database.yml file like this:

development:
  adapter: mysql
  encoding: utf8
  database: gmaps_development
  username: root
  host: localhost

After the database configuration, we can use the Rake command 'rake db:migrate' to check whether or not our configuration is correct.

Add a model for our project

Now we are going to add a new model 'location' for our application. In the NetBeans Rails Generator, we select 'Generate->model', input 'location' in the arguments and click 'OK'. This action will generate a new model for us. In the generated database migration file 'db/migrate/001_create_locations.rb', add the following code:

class CreateLocations < ActiveRecord::Migration
  def self.up
    create_table :locations do |t|
      t.column :address, :string, :limit => 100
      t.column :description, :string, :limit => 100
      t.column :lat, :decimal, :precision => 15, :scale => 10
      t.column :lng, :decimal, :precision => 15, :scale => 10
    end
  end

  def self.down
    drop_table :locations
  end
end

In the database migration file, we add four columns (address, description, latitude, longitude) for our model location, and we also assign the type and precision of these columns. Then right click the gmaps project and select 'Migrate Database -> To Current Version' to synchronize the database migration task with the database schema. We can see the feedback in the console window like this:

== 1 CreateLocations: migrating =============
-- create_table(:locations)
   -> 0.1720s
== 1 CreateLocations: migrated (0.1720s) =======

Then the table locations has been created with the columns address, description, latitude and longitude.
Installing the GeoKit and Cartographer Gems

On the command line in our application folder, input the following command to install the GeoKit plugin:

ruby script/plugin install svn://rubyforge.org/var/svn/geokit/trunk

After the installation, we can test GeoKit in the Rails console. GeoKit uses Google, Yahoo and Geocoder.us for geocoding (to translate a real address to latitude and longitude). You should copy your Google Maps API key to the file config/environment.rb, after the line 'GeoKit::Geocoders::google ='. Now, right click the gmaps project and then select 'Rails console' to load the Rails console in NetBeans. Type in:

>> include GeoKit::Geocoders
>> home = MultiGeocoder.geocode('100 Spear St, San Francisco, CA')
=> #<GeoKit::GeoLoc:0x

We use the variable 'home' to receive the result of geocoding the real address. In the Rails console, we will get the country code, zip code, lng and lat information according to the address '100 Spear St, San Francisco, CA'. Then we will use the auto-geocoding function of GeoKit to store our address information in the database. In the model app/models/location.rb, add the following line:

class Location < ActiveRecord::Base
  acts_as_mappable :auto_geocode => true
end

Now we go back to the Rails console and use the following commands to create new records for the location model:

>> Location.create(:address => "")
=> #<Location id: 1, address: "", lat: #<BigDecimal:3063968,'0.37792528E2',12(12)>, lng: #<BigDecimal:
>> Location.create(:address => "")
=> #<Location id: 2, address: "", lat: #<BigDecimal:2fff184,'0.37794391E2',12(12)>, lng: #<BigDecimal:2ffef04,'-0.122394831E3',12(16)>>

Now the records have been added to the locations table. In the next step, we will use markers to put these records on the map.

To make it easy to use the Google Maps API in our application, we need another plugin: we should download the Cartographer plugin from the project homepage () to help us embed the Google Maps API in our pages. After downloading the Cartographer plugin, unzip it to the folder vendor/plugins, and move the configuration file cartographer-config.yml to the config directory of the gmaps application. Of course, we should add the Google API key to the cartographer-config.yml file like this:

127.0.0.1:3000: ABQIAAAAj5cpJ2swzFT77RVZXuP73BTX2XchcwgyHzp4Xo0DHRAzt2aLjhSg2ymVlJvVjBa7kWNgtqU8xxwIAQ
After download the Cartographer gem, unzip the plugin to the folder vendor/plugins, and move the configuration file cartographer-config.yml to the directory config in the gmaps application. Of course, we should add the Google API Key to cartographer-config.yml file like this: 127.0.0.1:3000: ABQIAAAAj5cpJ2swzFT77RVZXuP73BTX2XchcwgyHzp4Xo0DHRAzt2aLjhSg2ymVlJvVjBa7kWNgtqU8xxwIAQ Add controller for our project Now we will create a controller named 'location'. Open the Rails Generator dialogue and fill the 'Name' field with location and then click 'OK'. Then the controller file app/controllers/location_controller.rb has been created. There will be two methods in the controller 'index' and 'create', the code of index method looks like: def index @locations = Location.find(:all) @map = Cartographer::Gmap.new("gmaps") @map.controls = [ :large, :scale ] @map.debug = true @map.center = [37.79, -122.4] @locations.each do |location| @map.markers << Cartographer::Gmarker.new(:name => "location_" + location.id.to_s, :position => [location.lat, location.lng], :info_window => location.description, :map => @map ) end end Here we accomplish there tasks in the index method. First, use 'Location.find(:all)' to get all the records in the database. Second, mark the address to the Maps with Google Maps API. And then put the location information in the Maps with the Google Maps API. We use the plugin Cartographer to help us to generate the New Maps with Google Maps, the id of the map is gmaps, and this id should be coordinate with the div label in the pages. Besides, we use Cartographer to define the control bar and the scale bar of the Maps, and use @map.center to set the center position of the map. Then in the next loop, we get the record from the table and use @map.markers to generate markers of the map. And also display the description sentence in popup windows. After the index controller, we will focus on the index view and location layout. 
The location layout (views/layouts/location.rhtml) looks like:

<html>
  <head>
    <title>Map demo</title>
    <%= Cartographer::Header.header_for(request) %>
  </head>
  <body id="map">
    <div id="main">
      <%= yield :layout %>
    </div>
  </body>
</html>

And the index view (views/location/index.rhtml) looks like:

<h1>Add New Location</h1>
<hr />
<% @map.zoom = 14 %>
<%= @map.to_html(true) %>
<div id="gmaps" style="width: 600px; height: 400px;"></div>

We put the Cartographer tag in the head element. The Cartographer plugin will translate our code into Google Maps API JavaScript code to load the Google Map in the page. Now let's start the server and open the browser to look at what we get from the above code. We can see a map with two point markers in the final page. These two markers are the records from the table that we added just now.
https://blogs.oracle.com/Chinese_Functional_CA/tags/maps
Plant's first step was creating variations of her initial drawing to show different states of motion and saving each variation as a separate file. Then she opened the first sketch in Photoshop and dragged the remaining files from her computer into the document, pressing return (or enter) to place each file onto its own layer. Next, she clicked, and the many tools photographers rely on.
https://www.adobe.com/creativecloud/photography/discover/animated-gif.html
OpenMP/Tasks If you followed along with the previous chapter and did the exercises, you'll have discovered that the sum function we developed was quite unstable. (To be fair, we set up the data to show the instability.) In fact, the naive way of summing has round-off error: the error is proportional to the number of elements we sum. We can fix this by changing the summation algorithm. There are three candidates for a stable sum algorithm: - Sort the number from small to large, then sum them as before. Problem: this changes the complexity of the problem from linear to . - Use Kahan's algorithm. While this algorithm takes linear time and is very stable, it is quite slow in practice and harder to parallelize then our third alternative, which is - divide-and-conquer recursion. The round-off error for such an algorithm is , i.e. proportional to the logarithm of the number of elements. The basic divide and conquer summation is very easy to express in C: float sum(const float *a, size_t n) { // base cases if (n == 0) { return 0; } else if (n == 1) { return *a; } // recursive case size_t half = n / 2; return sum(a, half) + sum(a + half, n - half); } If you use this definition of sum in the program we developed for the previous chapter, you'll see that it produces exactly the expected result. But this algorithm doesn't have a loop, so how do we make a parallel version using OpenMP? We'll use the tasks construct in OpenMP, treating the problem as task-parallel instead of data parallel. 
In a first version, the task-recursive version of sum looks like:

float sum(const float *a, size_t n)
{
    // base cases
    if (n == 0) {
        return 0;
    } else if (n == 1) {
        return *a;
    }

    // recursive case
    size_t half = n / 2;
    float x, y;

    #pragma omp parallel
    #pragma omp single nowait
    {
        #pragma omp task shared(x)
        x = sum(a, half);
        #pragma omp task shared(y)
        y = sum(a + half, n - half);
        #pragma omp taskwait
        x += y;
    }

    return x;
}

We introduced two tasks, each of which sets a variable that is declared shared with the other task. If we did not declare the variables shared, each task would set its own local variable, then throw away the results. We then wait for the tasks to complete with #pragma omp taskwait and combine the recursive results.

You may be surprised by the #pragma omp parallel followed immediately by #pragma omp single nowait. The thing is that the first pragma causes all of the threads in the pool to execute the next block of code. The single directive causes all threads but one (usually the first to encounter the block) to not execute it, while the nowait turns off the barrier on the single; there's already a barrier on the enclosing parallel region, to which the other threads will rush.

Unfortunately, if you actually try to run this code, you'll find that it's still not extremely fast. The reason is that the tasks are much too fine-grained: near the bottom of the recursion tree, invocations are splitting two-element arrays into subtasks that process one element each. We can solve this problem by introducing, apart from the base and recursive cases, an "intermediate case" for the recursion which is recursive, but does not involve setting up parallel tasks: if the recursion hits a prespecified cutoff, it will no longer try to set up tasks for the OpenMP thread pool, but will just do the recursive sum itself.

- Exercise: introduce the additional case in the recursion and measure how fast the program is.
Don't peek ahead to the next program, because it contains the solution to this exercise. Now we effectively have two recursions rolled into one: one with a parallel recursive case, and a serial one. We can disentangle the two to get better performance, by doing fewer checks at each level. We also separate the parallel setup code into a driver function.

#include <stddef.h>

#define CUTOFF 100 // arbitrary

static float parallel_sum(const float *, size_t);
static float serial_sum(const float *, size_t);

float sum(const float *a, size_t n)
{
    float r;

    #pragma omp parallel
    #pragma omp single nowait
    r = parallel_sum(a, n);

    return r;
}

static float parallel_sum(const float *a, size_t n)
{
    // base case
    if (n <= CUTOFF) {
        return serial_sum(a, n);
    }

    // recursive case
    float x, y;
    size_t half = n / 2;

    #pragma omp task shared(x)
    x = parallel_sum(a, half);
    #pragma omp task shared(y)
    y = parallel_sum(a + half, n - half);
    #pragma omp taskwait
    x += y;

    return x;
}

static float serial_sum(const float *a, size_t n)
{
    // base cases
    if (n == 0) {
        return 0.;
    } else if (n == 1) {
        return a[0];
    }

    // recursive case
    size_t half = n / 2;
    return serial_sum(a, half) + serial_sum(a + half, n - half);
}

This technique works better when the code inside the parallel tasks spends more time computing and less time doing memory accesses, because those may need to be synchronized between processors.
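The stability claims at the start of this chapter are easy to check empirically. The following sketch is in Python rather than C, so that math.fsum can serve as a correctly rounded reference; the test data is arbitrary, not taken from the chapter:

```python
# Compare the round-off error of the naive running sum against Kahan's
# algorithm (option 2 above) and divide-and-conquer summation (option 3),
# using math.fsum as a correctly rounded reference.
import math

def naive_sum(xs):
    total = 0.0
    for x in xs:              # left-to-right accumulation: error grows roughly with n
        total += x
    return total

def kahan_sum(xs):            # compensated summation
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y   # recover what was lost when adding y to total
        total = t
    return total

def pairwise_sum(xs):         # divide-and-conquer: error grows roughly with log n
    n = len(xs)
    if n == 0:
        return 0.0
    if n == 1:
        return xs[0]
    half = n // 2
    return pairwise_sum(xs[:half]) + pairwise_sum(xs[half:])

data = [0.1] * 10000          # 0.1 is not exactly representable in binary
exact = math.fsum(data)       # correctly rounded sum of the same doubles
for f in (naive_sum, kahan_sum, pairwise_sum):
    print(f.__name__, abs(f(data) - exact))
```

Running this shows the naive sum drifting measurably from the reference while the other two stay at or near zero error, which is exactly the behaviour that motivates the divide-and-conquer algorithm parallelized in this chapter.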
https://en.wikibooks.org/wiki/OpenMP/Tasks
Note: See also Multi-site support in the Personalization Native Integration for Commerce and Personalization 2.0 breaking changes.

Scope

A scope is a context in which tracking and catalog export occur. A scope contains an instance of a Product Recommendations (formerly Perform) Engine and one or more websites that are communicating with that instance. Scopes are always mutually exclusive and never nested; a tracking action occurs only in a single scope. Recommendations for products belonging to a specific scope are not given for tracking actions occurring in another scope.

Alias

A scope alias (referred to as "alias" from now on) is a shorthand version of the scope's name. Aliases are used as suffixes to appSettings key attribute values. The purpose of an alias is to improve the readability of configuration settings. Aliases are used in configuration settings only; they are not used in the API.

Scopeless

Settings that do not use an alias suffix are called scopeless. In earlier versions of the configuration schema, all settings are scopeless. Scopeless settings are used as a fallback if a requested setting is not defined for a specific scope. This makes the new configuration schema backwards-compatible. If asked for a valid setting for scope X, and scope X is not defined in the configuration file, the internal configuration class falls back to the corresponding scopeless setting.

Configuration

The new version of Personalization introduces a set of appSettings that let you configure several scopes within the same configuration file. The new version is backwards compatible with the old configuration scheme. So, if you do not want to use the scope feature, do not update your configuration when upgrading.
ScopeAliasMapping

[New in Personalization.Common 2.0, the Commerce native integration package for Personalization]

ScopeAliasMapping is the only new setting in version 2.0, but because aliases are appended as a suffix to the key attribute value, you may have to modify existing keys. See Installing and configuring the native integration package for the list of other settings.

To define the scopes that you will use, add a ScopeAliasMapping setting for each scope:

```xml
<add key="episerver:personalization.ScopeAliasMapping.[Alias]" value="[Scope]"/>
```

The value attribute contains the scope name. The alias should be a shorthand name to make it easier to read in the configuration file. Note that the alias value is specified as a suffix in the key attribute. When extracting the alias from the key attribute, episerver:personalization.ScopeAliasMapping. is trimmed off the beginning of the key, and the remainder is the alias. In the example above, [Alias] is the alias.

Any string can be used for the scope and alias values as long as it does not include reserved XML characters. Together, the ScopeAliasMappings act as the master list of scopes. The aliases defined there are used to find other settings that apply to that scope. Any Personalization appSetting keys with an alias not defined as a ScopeAliasMapping are ignored.

Each ScopeAliasMapping adds a requirement that all other required Personalization appSettings are defined using the same alias suffix. This is validated upon site initialization; an exception is thrown if any required settings are missing.

Multiple channels

[New in Commerce 13.8.0 and Personalization.Common v3.1.0]

You can configure more than one channel per scope. So, a scope can have multiple channels per tracking request. To configure multiple channels, use the ScopeAliasMapping setting and add a channel setting for each channel.
For example:

```xml
<add key="episerver:personalization.ClientToken.[Alias].[Channel]" value="[ChannelToken]"/>
```

- [Channel] is the name of the channel used for tracking (for example: web, mobile).
- [Channel] is specified as a suffix in the key attribute. When extracting the channel from the key attribute, episerver:personalization.ClientToken.Alias. is trimmed from the beginning of the key, and the remainder is the channel.
- The value attribute is the ChannelToken value.

In the example above, [Channel] is the channel’s name. Any string can be used for the channel’s name and token values, but it cannot include reserved XML characters.

Required, Optional, and Global settings

A Personalization appSetting is either required, optional, or global. The following rules apply to each group.

- Required. Treated as a group; all of them must be defined for each scope. If there are no scopes, all must be defined as scopeless. Scopeless settings are allowed in parallel with scoped settings and are, in that case, used for all undefined scopes. See Fallback.
- Optional. May be defined on the scope level, on the scopeless level, or not at all.
- Global. These settings cannot be defined per scope.

Fallback

The fallback scheme works on the following levels:

- Scope fallback. At runtime, if asked to provide settings for an unknown scope (that is, a scope that has no matching ScopeAliasMapping), the fallback returns the scopeless settings if available.
- Channel fallback [New in Commerce 13.8.0 and Personalization.Common v3.1.0]. If asked, at runtime, to provide settings for an unknown channel (that is, a channel that does not match any channel’s name), the default channel of web is returned.
- Optional settings fallback. If an optional setting is not defined for a scope, the fallback uses the scopeless setting and, finally, the hard-coded default value.

Individual required settings do not allow for fallback due to the rules that apply to them. You cannot define global settings per scope.
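The fallback order can be sketched in a few lines of Python. This is only an illustration of the documented lookup rules, not Episerver's actual implementation; the function name, simplified key format, and sample values are invented (the values mirror the partial configuration example in this article):

```python
# Unknown channels fall back to the default "web" channel per the docs.
DEFAULT_CHANNEL = "Web"

def resolve(settings, name, scope=None, channel=None, default=None):
    """Sketch of the fallback order:
    scoped+channel -> scoped+default channel -> scoped -> scopeless -> default."""
    if channel is not None:
        for ch in (channel, DEFAULT_CHANNEL):
            key = f"{name}.{scope}.{ch}"
            if key in settings:
                return settings[key]
    if scope is not None:
        key = f"{name}.{scope}"
        if key in settings:
            return settings[key]
    # Scopeless setting, then the hard-coded default (optional settings only).
    return settings.get(name, default)

settings = {
    "Site.Alias1": "B",
    "Site": "E",
    "ClientToken.Alias1": "C",
    "ClientToken.Alias1.Web": "V1",
    "ClientToken.Alias1.Mobile": "V2",
    "ClientToken": "F",
}

print(resolve(settings, "Site", "Alias1"))                   # B (scoped)
print(resolve(settings, "Site", "UnknownScope"))             # E (scopeless fallback)
print(resolve(settings, "ClientToken", "Alias1", "Mobile"))  # V2 (channel match)
print(resolve(settings, "ClientToken", "Alias1", "Tablet"))  # V1 (default channel)
```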
Defining a global setting on the scope level does not cause an error, but the scoped value is ignored, instead falling back to the scopeless or default value.

Consider the following partial configuration. [V1 and V2 keys are new in Commerce 13.8.0 and Personalization.Common v3.1.0]

```xml
<add key="episerver:personalization.ScopeAliasMapping.Alias1" value="Scope1"/>
<add key="episerver:personalization.BaseApiUrl.Alias1" value="A"/>
<add key="episerver:personalization.Site.Alias1" value="B" />
<add key="episerver:personalization.ClientToken.Alias1" value="C" />
<add key="episerver:personalization.ClientToken.Alias1.Web" value="V1" />
<add key="episerver:personalization.ClientToken.Alias1.Mobile" value="V2" />
<add key="episerver:personalization.BaseApiUrl" value="D" />
<add key="episerver:personalization.Site" value="E" />
<add key="episerver:personalization.ClientToken" value="F" />
<add key="episerver:personalization.CatalogNameForFeed" value="G"/>
```

At runtime, the Site value (required) is B for Scope1, and E for all other scopes. The CatalogNameForFeed value (optional) is G for all scopes. In Commerce 13.8.0 and Personalization.Common v3.1.0 and up, the channel token value is V1 for the web channel, and V2 for the mobile channel.

Default implementation

The EPiServer.Personalization package provides a default implementation of the Scope feature. This implementation assumes that each CMS site uses one Commerce catalog and one Product Recommendations Engine instance. If this setup fits your needs, you can set it up using the configuration only. If you need a more specialized setup, write custom code.

Configuration

If you have only one site that uses Personalization, use scopeless settings only; do not bother to define scopes. If you need to configure several scopes, the scope names are expected to be each site's SiteDefinitionID. You can find this value in the CMS Admin view > Config tab > Manage Websites.
Starting with EPiServer.CMS.UI version 11.5.0, this value is shown in a field for each site. In earlier versions, get the ID from the query string in the links in the list of websites. Open each link in its own browser tab and copy the ID from the address field.

Catalog export

The default implementation iterates over all scopes registered in configuration and exports one full catalog feed per scope. Also, you can write custom code to filter each exported product.

CUID and SessionID storage

The default implementation stores CUID and SessionID in cookies, as it did in earlier versions. If you have a single site using Personalization, you do not need to modify this behavior. To add additional scopes, you probably need to implement your own ICookieService. See Modifying the default behavior for further discussion.

Identifying the correct scope for tracking actions

The scope is automatically set to the SiteDefinitionID of the site that triggers the tracking action. This value is used to load the correct settings from configuration. You can override this behavior by passing the scope to the tracking method yourself. See Specifying scope for tracking actions.

Modifying the default behavior

If your installation differs from what is supported by the default implementation, you need to write custom code to fit your purposes.

Catalog export

There are several interfaces for which you can create custom implementations to modify the behavior of the catalog export. The main change in the scope feature release is that the scope name for the current export is passed to all extension points. The following interfaces are of extra interest if you want to use fractions of the same catalog for different scopes:

- ICatalogItemFilter. Lets you decide whether to include each item in the export for a given scope.
- IEntryUrlService/IFeedUrlConverter. Lets you define the absolute URL for a product, based on a given scope.

See the SDK for the full list of extension points.
CUID and SessionID storage

CUIDs and SessionIDs are created by the Product Recommendations Engine. They are only valid for the Product Recommendations Engine instance that created them. This means that these values need to be siloed per scope.

The default implementation stores CUID and SessionID in cookies that are bound to the current domain. This implementation covers the common scenario where no domain is shared between scopes. If you want to support a scenario where two different scopes share a domain, write custom code to support that. If you want to split an existing scope in two, make sure that the values in existing cookies are not used for the new scope. To control how cookies are created and read for a scope, replace the default ICookieService (namespace EPiServer.Personalization.Common) implementation with your custom implementation.

Specifying scope for tracking actions

If the correct scope cannot be derived from the current SiteDefinitionID, you are responsible for determining the correct scope for each tracking action. The EPiServer.Tracking.Commerce NuGet package contains extension methods for TrackingService that let you pass the scope to the Track method. The following naive scope calculation implementation is based on one of the RecommendationService class methods in Quicksilver.

```csharp
public async Task<TrackingResponseData> TrackCategoryAsync(HttpContextBase httpContext, NodeContent category)
{
    if (_contextModeResolver.CurrentMode != ContextMode.Default)
    {
        return null;
    }

    var trackingData = _trackingDataFactory.CreateCategoryTrackingData(category, httpContext);
    AddMarketAttribute(trackingData);

    var scope = category.Name.StartsWith("Mens") ? "MensFashion" : "WomensFashion";
    return await _trackingService.TrackAsync(trackingData, httpContext, _contentRouteHelperAccessor().Content, scope);
}
```

Last updated: Oct 07, 2019
https://world.episerver.com/documentation/developer-guides/commerce/personalization/recommendations/multiple-scopes/
In this lesson we will write the code for the TicTacToe game model itself. This is just an abstract implementation that by itself isn’t playable, and doesn’t actually display anything. It simply has the idea of what the game is, what the rules are, and how to win, etc.

Tic Tac Toe

- Create a new project folder named “Model” as a subfolder of the “Scripts” folder
- Create a new C# script named “Game” in the “Model” folder
- Open the script for editing and replace the template code with the following:

```csharp
using UnityEngine;
using System.Collections;

namespace TicTacToe
{
    public enum Mark
    {
        None,
        X,
        O
    }

    public class Game
    {
        // Add Code Here
    }
}
```

I like to use the most simple and intuitive name I can for my types. However, names like “Game” and “Mark” are so generic that it is important to place them within the “TicTacToe” namespace. This will help us to avoid naming conflicts and also provides a helpful context for understanding what kind of Game or what kind of Mark I am working with.

I defined a simple enumeration called “Mark”. Note that it is defined within the “TicTacToe” namespace, but outside of the “Game” class. The Mark enum defines three types: an ‘X’ and ‘O’ to represent the moves made by opposing players, and ‘None’ to indicate a spot which hasn’t been claimed yet. In a more complex game I might have stuck an enumeration like this into its own file, but this is fine for now.

I decided to create the “Game” as a normal class, not a subclass of MonoBehaviour. This makes the class much more reusable – even outside of Unity, and prevents me from tightly coupling the code to any idea of how it should be presented. When you treat your game model in an abstract way, it is much easier to adapt it to any sort of “view” whether you want a 2D or 3D implementation, etc. Really simple board games like this are easy to create this way. Other more complex games which rely on physics, etc aren’t quite as obvious.
```csharp
public const string DidBeginGameNotification = "Game.DidBeginGameNotification";
public const string DidMarkSquareNotification = "Game.DidMarkSquareNotification";
public const string DidChangeControlNotification = "Game.DidChangeControlNotification";
public const string DidEndGameNotification = "Game.DidEndGameNotification";
```

There are several notifications I will want the model to be able to post. This includes posting a notice when a new game has begun, when a square has been marked, when a new turn has begun, and when the game ends.

```csharp
public Mark control { get; private set; }
public Mark winner { get; private set; }
public Mark[] board { get; private set; }

int[][] wins = new int[][]
{
    // Horizontal Wins
    new int[] { 0, 1, 2 },
    new int[] { 3, 4, 5 },
    new int[] { 6, 7, 8 },

    // Vertical Wins
    new int[] { 0, 3, 6 },
    new int[] { 1, 4, 7 },
    new int[] { 2, 5, 8 },

    // Diagonal Wins
    new int[] { 0, 4, 8 },
    new int[] { 2, 4, 6 }
};
```

I have three public properties with private setters – this makes them “read only” from the perspective of other classes. The game model will handle its own logic and state, and indicates this to consumers by protecting access to those fields.

The “control” field indicates which mark will be placed the next time an empty board square is chosen. Later we will create “Player” objects which will be assigned a Mark, and the player with the Mark which matches the “control” will be allowed to take a turn while the other player’s input will be ignored. When the game ends, “control” will be set to “None” and neither player will be able to take a turn.

The “winner” field indicates the mark of the “player” who won the game. For example, if there are three X’s in a row, then the field will hold the “X” Mark. If the game is a tie, then the “winner” field will hold “None” while control is also “None”.

The “board” field holds an array of Marks. This is a “flattened” array which will have a length of 9 – one for each square on the board.
If you prefer readability over speed, then you can feel free to use a 2D array instead. Since I am not writing A.I. for this project, speed is not a concern. The final field, “wins”, is a convenient array of arrays which holds all of the possible places a win can occur, whether a row, column, or diagonal line.

```csharp
public Game ()
{
    board = new Mark[9];
}
```

In C# 6 you can provide a default value for a property. Unfortunately, the version of C# used by Unity is not up-to-date, so we will need to use the class constructor to initialize our “board” property.

```csharp
public void Reset ()
{
    for (int i = 0; i < 9; ++i)
        board[i] = Mark.None;
    control = Mark.X;
    winner = Mark.None;
    this.PostNotification(DidBeginGameNotification);
}
```

The “Reset” method wipes the board clean by setting all squares to a “None” Mark. In addition, it hands control to the X mark – X will always go first in my implementation, although I decided to let the players control a different mark at random on each new game. The winner field also must be reset, just in case the game had previously been won, and finally, we post a notification that a new game has begun.

```csharp
public void Place (int index)
{
    if (board[index] != Mark.None)
        return;

    board[index] = control;
    this.PostNotification(DidMarkSquareNotification, index);

    CheckForGameOver();
    if (control != Mark.None)
        ChangeTurn();
}
```

The “Place” method is used any time a player attempts to take a turn. The index of the desired square is passed as a parameter. If the spot is not vacant, then the call is ignored. Otherwise, the mark will be placed, and a notification will be posted so that the view can be updated to match. Next we check to see if the placement of a new mark caused the game to end. If not, then we hand control over to the next player.

```csharp
void ChangeTurn ()
{
    control = (control == Mark.X) ? Mark.O : Mark.X;
    this.PostNotification(DidChangeControlNotification);
}
```

The “ChangeTurn” method is very simple.
It assigns control to whichever mark was not currently in control, and then posts a relevant notification.

```csharp
void CheckForGameOver ()
{
    if (CheckForWin() || CheckForStalemate())
    {
        control = Mark.None;
        this.PostNotification(DidEndGameNotification);
    }
}
```

Checking for a game over requires a few checks. First, we want to know if there are any winning patterns on the board. If the first check fails, then we check to see if there are no empty squares. Either condition causes the game to end, and a relevant notification to fire.

```csharp
bool CheckForWin ()
{
    for (int i = 0; i < 8; ++i)
    {
        Mark a = board[wins[i][0]];
        Mark b = board[wins[i][1]];
        Mark c = board[wins[i][2]];
        if (a == b && b == c && a != Mark.None)
        {
            winner = a;
            return true;
        }
    }
    return false;
}
```

When checking to see if a player has won, we loop through the wins array. I grab the Marks located at each of the indices of the particular win pattern and see if they all match a single player mark. If so, we can update the winner mark with the same value, and set control to “None”, which indicates that the game has ended.

```csharp
bool CheckForStalemate ()
{
    for (int i = 0; i < 9; ++i)
        if (board[i] == Mark.None)
            return false;
    return true;
}
```

When checking for a stalemate, I am really checking for an open square. If I find a single square which still holds “None”, then the game is allowed to continue.

Summary

In this lesson we created everything necessary to create a model of a playable TicTacToe game. Actually interacting with it and showing it to a user will come next. Don’t forget that if you get stuck on something, you can always check the repository for a working version here.
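As a quick sanity check of the model's win logic, here is the wins table and the CheckForWin loop ported to Python. The tutorial's own code is C#; this port is just an illustration of the same flattened-board indexing, using None for an unclaimed square:

```python
# The eight winning lines on a flattened 3x3 board, mirroring the C# wins array.
WINS = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # horizontal wins
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # vertical wins
    (0, 4, 8), (2, 4, 6),             # diagonal wins
]

def check_for_win(board):
    """Return 'X' or 'O' if that mark fills a whole line, else None."""
    for a, b, c in WINS:
        if board[a] == board[b] == board[c] and board[a] is not None:
            return board[a]
    return None

board = ['X', 'O', 'O',
         None, 'X', None,
         None, None, 'X']    # X occupies the 0-4-8 diagonal
print(check_for_win(board))  # X
```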
6 thoughts on “Turn Based Multiplayer – Part 2”

hi there, thanks for the Tutorial, I got a question: this.PostNotification – I can’t find this method… I get an error: ‘Game’ does not contain a definition for `PostNotification’ and no extension method `PostNotification’

In part 1, there was a unity package for you to download within the “Project Setup” section. It contained a couple of scripts including “NotificationCenter” and “NotificationExtensions” that you need here.

thanks, I will take a look at bitbucket

i cant import it – my unity seems to be too new

I opened it in Unity 5.4 today without any issues. What problem are you seeing?

I’m getting an error on the strings that are found in the beginning of the code. I put them in the TicTacToe namespace, so if they belong in public Game(), then that is the problem. If not, I am getting the error message “parser error: unexpected symbol ‘const’, expecting ‘class’, ‘delegate’, etc…”. This leads me to believe that I must put these in a class, but the article never specifies this. Any help on how the completed code looks is appreciated.

Yep, the const string notifications do belong inside the class. Some conventions I used to help hint at this are that the initial code snippet included a comment “Add Code Here” so that the next code snippets would be inserted from that point on. Also, the value of the string is prefixed with “Game” which should indicate it being in the Game class. Even with those hints it can be difficult to guess at my intentions, so the complete code exists in an online repository which I linked to in the Summary.
http://theliquidfire.com/2016/05/05/turn-based-multiplayer-part-2/