#include <deal.II/base/thread_management.h>

A container for thread objects. Allows adding new thread objects and waiting for all of them together. The thread objects need to have the same return value for the called function. Definition at line 1148 of file thread_management.h.

Add another thread object to the collection. Definition at line 1155 of file thread_management.h.

Wait for all threads in the collection to finish. It is not a problem if some of them have already been waited for, i.e. you may call this function more than once, and you can also add new thread objects between subsequent calls to this function if you want. Definition at line 1168 of file thread_management.h.

List of thread objects. Definition at line 1180 of file thread_management.h.
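The same add-then-wait-for-all pattern can be sketched in a few lines of Python. This is an illustrative analogue built on the standard threading module, not the deal.II C++ API; the class and member names below are chosen for the sketch:

```python
import threading

class ThreadGroup:
    """Minimal analogue of a thread-group container: collect started
    threads, then wait for all of them together."""

    def __init__(self):
        self.threads = []  # list of thread objects

    def add(self, thread):
        # Add another thread object to the collection.
        thread.start()
        self.threads.append(thread)

    def join_all(self):
        # Wait for all threads in the collection to finish. Joining an
        # already-finished thread is a no-op, so calling this more than
        # once is safe, and new threads may be added between calls.
        for t in self.threads:
            t.join()

results = []
group = ThreadGroup()
for i in range(4):
    group.add(threading.Thread(target=results.append, args=(i * i,)))
group.join_all()
group.add(threading.Thread(target=results.append, args=(100,)))
group.join_all()  # a second call is fine
print(sorted(results))  # -> [0, 1, 4, 9, 100]
```

As in the deal.II class, the group itself carries no per-thread return values here; collecting results is left to the caller (the shared list above).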
https://www.dealii.org/current/doxygen/deal.II/classThreads_1_1ThreadGroup.html
CC-MAIN-2019-39
refinedweb
130
69.68
Jiri Slaby <jirislaby@gmail.com> writes:
> On 09/27/2007 11:22 AM, Andrew Morton wrote:
>> ...
>
> # find /proc >/dev/null
> find: WARNING: Hard link count is wrong for /proc/net: this may be a bug in your
> filesystem driver. Automatically turning on find's -noleaf option. Earlier
> results may have failed to include directories that should have been searched.
> # stat net
>   File: `net'
>   Size: 0          Blocks: 0          IO Block: 1024   directory
> Device: 3h/3d      Inode: 4026531864  Links: 2
> Access: (0555/dr-xr-xr-x)  Uid: ( 0/ root)  Gid: ( 0/ root)
> Access: 2007-09-28 18:21:24.651209759 +0200
> Modify: 2007-09-28 18:21:24.651209759 +0200
> Change: 2007-09-28 18:21:24.651209759 +0200
> # stat net/
>   File: `net/'
>   Size: 0          Blocks: 0          IO Block: 1024   directory
> Device: 3h/3d      Inode: 4026531909  Links: 4
> Access: (0555/dr-xr-xr-x)  Uid: ( 0/ root)  Gid: ( 0/ root)
> Access: 2007-09-28 18:26:48.813048220 +0200
> Modify: 2007-09-28 18:26:48.813048220 +0200
> Change: 2007-09-28 18:26:48.813048220 +0200
>
> hmm, this is some kind of weirdness :)

Yes. I can explain it. For the network namespace stuff we need special handling of /proc/net so that, depending on the network namespace we are resolving against, you see different behavior. So you actually are observing two different directories, one being a magic invisible symlink to the other.

Currently I am resolving against current (which has a number of limitations), hence the weird, ugly effect you are currently seeing.

So it looks like I need to either make /proc/net a symlink to /proc/self/net or make the network namespace something that we capture at mount time of /proc.

This was my "don't get hung up on this implementation detail" version. Thanks for pointing out that it has user-visible problems. I will see what I can do to resolve this.

Eric

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org
More majordomo info at  read the FAQ at
http://lkml.org/lkml/2007/9/28/238
CC-MAIN-2014-49
refinedweb
338
67.35
Write a program in Java that reads a text file of student records (first name, last name, grade), prints each record with its grade category (e.g. "excellent"), and prints a summary such as: There are 2 failure students with average grade 56.5

Requirements:
- Use the Scanner class for reading from the file.
- Use the Student.java class from the course website to represent each student record and extend it to handle the grade categories.
- Print the student records (name, grade and grade category) using the println method with only a student object passed to it (hint: modify the toString method in class Student to also return the grade category).
- When you write your program, use proper names for the variables suggesting their purpose.
- Format your code accordingly using indentation and spacing.
- For each line of code, add a short comment to explain its meaning.

This is the Student.java code:

public class Student
{
    String fname, lname;
    int grade;

    public Student(String fname, String lname, int grade)
    {
        this.fname = fname;
        this.lname = lname;
        this.grade = grade;
    }

    public String toString()
    {
        return fname + " " + lname + "\t" + grade;
    }
}
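A Python sketch of the logic the assignment asks for is below. The grade-band thresholds and the sample records are assumptions for illustration (the assignment does not state them); only the overall shape (extended toString, failure count, failure average) follows the prompt:

```python
def category(grade):
    # Hypothetical grade bands -- assumed thresholds, for illustration only.
    if grade >= 90:
        return "excellent"
    if grade >= 60:
        return "pass"
    return "failure"

class Student:
    """Python analogue of Student.java, with toString() (here __str__)
    extended to also return the grade category."""
    def __init__(self, fname, lname, grade):
        self.fname, self.lname, self.grade = fname, lname, grade

    def __str__(self):
        return f"{self.fname} {self.lname}\t{self.grade}\t{category(self.grade)}"

# Hypothetical records standing in for the input file.
students = [Student("Ann", "Lee", 95),
            Student("Bob", "Ray", 55),
            Student("Cal", "Kim", 58)]
for s in students:
    print(s)  # print with only the student object, as the hint suggests

failures = [s for s in students if category(s.grade) == "failure"]
avg = sum(s.grade for s in failures) / len(failures)
print(f"There are {len(failures)} failure students with average grade {avg}")
```

With the sample records above, the summary line comes out as "There are 2 failure students with average grade 56.5", matching the sample output in the prompt.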
http://www.chegg.com/homework-help/questions-and-answers/write-program-java-program-reads-text-file-student-records-first-name-last-name-grade-prin-q3353908
CC-MAIN-2015-18
refinedweb
157
68.26
I think it was irresponsible of Brian Friesen to write that equal checksums imply a 99% probability of equal files. I assume that Brian just pulled that number out of the air. If you don't know a probability, Brian, then don't guess one! I agree with commenter msg555 that the probabilities are FAR better than Brian said. The probability would be 1 - 2^-32 for a 32-bit checksum, if we assume all byte values are equally probable. Brian should fix his article!

Reply

Has CRC32 and tons of other crypto and hash codes over there. Open source.

Reply

Thank you! Your article, performance chart, and sample code helped me a lot! Thank you very much!

Reply

There is a 99.999999977% chance (assuming a good CRC algorithm) that two pieces of data with the same checksum are identical. I think that is good enough odds to assume the data is indeed the same. If you use a 64-bit CRC there is a 99.99999999999999999458% chance.

Reply

The project is useful, but I found that the algorithm, at least in Crc32Dynamic, is not PKZip compatible. I had to fix the function CalcCrc32 to the following:

[code]
{
    // dwCrc32 = ((dwCrc32) >> 8) ^ m_pdwCrc32Table[(byte) ^ ((dwCrc32) & 0x000000FF)];
    dwCrc32 = m_pdwCrc32Table[(byte ^ dwCrc32) & 0x000000FF] ^ (dwCrc32 >> 8);
}
[/code]

The old code is contained in the line comment.

Reply

I can't explain why you might have gotten different results, but I can prove the code is compliant. In the general case (x^y)&z may not be equivalent to x^(y&z), but specifically in my algorithm it works just fine. Let's look closer at each one (I'm going to use '?' to represent bits with data):

Mine: (byte) ^ (dwCrc32 & 0x000000FF)
1. 0x000000?? ^ (0x???????? & 0x000000FF)
2. 0x000000?? ^ (0x000000??)
3. 0x000000??

RFC 1952: (byte ^ dwCrc32) & 0x000000FF
1. (0x000000?? ^ 0x????????) & 0x000000FF
2. (0x????????) & 0x000000FF
3. 0x000000??

Follow the logic yourself; it evaluates the same.

Reply

See the code at the bottom of. That's where I got my change from. It produces different results.
Notice how inside the bracket it XORs before it ANDs. BTW, I don't mean my correction as a slight. Your project was useful to me, so thanks are in order. However, I still hold it isn't 100% compliant as posted.

Reply

How come it didn't work for me then? Also, I'm pretty sure the expressions are different: (x ^ y) & z != x ^ (y & z).

Reply

Actually my code is RFC 1952 compliant. If you look carefully at the two lines of code you'll see they are semantically the same, just in a different order. They each generate the same result, which means my code is RFC 1952 compliant. I have checked my CRCs against PKZip many times and never found a discrepancy.

Reply

Originally posted by: Jamie Whitham

Great code! Thanks Brian :-) I am using this for a small settings file of 300 bytes or so. Here is the code I am using; I have changed Brian's code, so you might find this simpler for smaller files. I have this in memory before saving to disk when I quit. In my application power could be ditched at any point, so I always write two copies and use the CRC to check them on loading. As it is such a small file, I thought there's not much need to create the CRC table etc.; however, it will be inefficient on larger files. Cheers, Jamie

// returns calculated CRC32
// pData points to data for which the checksum is needed
// dwLen is the length of that data in bytes
DWORD CheckSum(BYTE* pData, DWORD dwLen)
{
    DWORD dwCrc32 = 0xFFFFFFFF;
    for (DWORD ct = 0; ct < dwLen; ct++)
    {
        DWORD i = (pData[ct]) ^ ((dwCrc32) & 0x000000FF);
        for (INT j = 8; j > 0; j--)
        {
            if (i & 1)
                i = (i >> 1) ^ 0xEDB88320;
            else
                i >>= 1;
        }
        dwCrc32 = ((dwCrc32) >> 8) ^ i;
    }
    return ~dwCrc32;
}

Originally posted by: wolf bit

I hope that the comments will be helpful.
If you find some bugs or you have time to test the performance, please mail me! Code is free ... use it whenever you need it. For something more, mail me!

unsigned short check_sum32bit(void *data_ptr, unsigned int length)
{
    __asm {
        mov ecx, dword ptr [ebp+0Ch]  // get length
        mov ebx, dword ptr [ebp+8]    // get data_ptr
        mov edx, ecx
        shr ecx, 2                    // set loop counter
        and edx, 0x3                  // set the rest
        xor eax, eax                  // equal to mov eax,0, just is better
        cmp ecx, 0                    // if loop counter = 0, no loop
        je noloop
    check_sum32:                      // all given data
        add eax, ds:[ebx]
        adc eax, 0
        add ebx, 4
        loop check_sum32              // end loop
    noloop:
        sub ebx, 4                    // correct last loop variable; before exit ebx+4 should be dec again
        cmp edx, 0                    // check for the rest
        je noxdata
        add ebx, edx                  // !!! positions the offset to the last data (could be a problem if data is less than 4 bytes)
        mov ecx, 4                    // get valid bytes from the rest data
        sub ecx, edx
        mov edx, ds:[ebx]
    shrloop:
        shr edx, 8                    // set non-valid bytes to zero
        loop shrloop
        add eax, edx                  // add the rest data
        adc eax, 0                    // add carry flag
    noxdata:
        mov ebx, eax                  // make the 32-bit checksum a 16-bit checksum; if you need 32 bits just make the function "unsigned int" and delete the next lines!
        shr ebx, 16
        add ax, bx
        adc ax, 0
        not ax
    }
    return;  // returned value is in ax register
}

Enjoy!

Originally posted by: krishna

I need to implement a CRC check on a file to detect whether it has been modified. I compare it with the old CRC. As it is only 99% guaranteed, how do I make sure whether the file has changed or not? One way could be to use two CRCs generated by different algorithms. Could that be useful?

Reply

Originally posted by: Jonathan

Thanks for the useful tidbit of code, perfect for what I need!

Reply

Originally posted by: Imran

Hi Guys! I am having a little bit of a problem understanding the two implementations of source code I have. This is because the implementations I have seem quite different from the CRC methods explained in typical Communication & Networking books.
I would like someone to explain these implementations to me (the reason why we assume the initial value to be ffffffff, why we AND crc with 0x80000000, etc.). Moreover, I will appreciate it if you could point out the flaws in both algorithms. Also, please let me know how I can verify the output of the CRC algorithm. Thanks for your help, Imran

Here is the code:

#define CRC_POLYNOMIAL 0xEDB88320

int CRC_Algorithm1(int *addr)
{
    int i, j;
    long crc = 0xFFFFFFFF;
    int carry;
    int maddr[6];

    /* Put the ethernet address into a local array */
    memcpy(maddr, addr, 6);

    /* Cycle through each character of the address */
    for (i = 0; i < 6; ++i)
    {
        /* Cycle through each bit of this character */
        for (j = 0; j < 8; ++j)
        {
            /* Update the CRC for this address */
            carry = ((crc & 0x80000000) ? 0x01 : 0x00) ^ (maddr[i] & 0x01);
            crc <<= 1;
            maddr[i] >>= 1;
            if (carry)
            {
                crc = ((crc ^ CRC_POLYNOMIAL) | carry);
            }
        }
    }
    return (crc);
} /* CRC_Algorithm1 */

SECOND ALGORITHM

static int crc32(char *s, int length)
{
    /* indices */
    int perByte;
    int perBit;
    /* crc polynomial for Ethernet */
    const unsigned long poly = 0xedb88320;
    /* crc value - preinitialized to all 1's */
    unsigned long crc_value = 0xffffffff;

    for (perByte = 0; perByte < length; perByte++)
    {
        unsigned char c;
        c = *(s++);
        for (perBit = 0; perBit < 8; perBit++)
        {
            crc_value = (crc_value >> 1) ^ (((crc_value ^ c) & 0x01) ? poly : 0);
            c >>= 1;
        }
    }
    return crc_value;
}

Reply
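For reference, here is a short table-driven CRC-32 in Python that uses exactly the masking order debated above, checked against the standard test vector. The two groupings really are interchangeable here, since the table index is reduced to the low 8 bits either way:

```python
def _make_table():
    # Build the 256-entry lookup table for the reflected
    # polynomial 0xEDB88320 used by PKZip/RFC 1952.
    table = []
    for n in range(256):
        c = n
        for _ in range(8):
            c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
        table.append(c)
    return table

_TABLE = _make_table()

def crc32(data):
    crc = 0xFFFFFFFF
    for byte in data:
        # The update step under discussion: (byte ^ crc) & 0xFF and
        # byte ^ (crc & 0xFF) give the same index when byte < 256.
        crc = _TABLE[(byte ^ crc) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

print(hex(crc32(b"123456789")))  # -> 0xcbf43926 (the standard check value)
```

The preinitialization to 0xFFFFFFFF and the final inversion are part of the CRC-32 definition; they make the checksum sensitive to leading zero bytes.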
http://www.codeguru.com/comment/get/48322410/
CC-MAIN-2016-50
refinedweb
1,290
71.14
ddmd 0.0.7 - DMD Compiler Frontend

To use this package, run the following command in your project's root directory:

Overview: This project is a port of the DMD frontend, automatically translated from C++ to D using magicport2. The current version is based on dmd 2.067. This project is designed to be built with dub. (dub package) Currently only the lexer and some support modules are included. This project may eventually be merged into dmd upstream development. No stable API is provided. As the code is automatically generated, pull requests should be made against upstream. Warnings and deprecations currently need to be disabled for the code to compile successfully.

Example:

{
    "name": "ddmdlexertest",
    "dependencies": {
        "ddmd": ">=0.0.7"
    }
}

import std.stdio;
import std.file;
import ddmd.tokens;
import ddmd.lexer;

/////////////////////////
void main()
{
    string data = "void blah() {} // stuff";
    auto l = new Lexer("myfile", data.ptr, 0, data.length, 0, 0);
    l.nextToken();
    do
    {
        printf("token: %s\n", l.token.toChars());
    } while (l.nextToken() != TOKeof);
}

- Registered by Daniel Murphy
- 0.0.7 released 3 years ago
- yebblies/ddmd
- BSL-1.0
- Authors: -
- Dependencies: none
- Versions: Show all 8 versions
- Download Stats: 0 downloads today, 0 downloads this week, 0 downloads this month, 274 downloads total
- Score: 1.4
- Short URL: ddmd.dub.pm
https://code.dlang.org/packages/ddmd
CC-MAIN-2019-35
refinedweb
212
62.95
HPP emit (Delphi)

Go Up to Delphi Compiler Directives (List) Index

The HPPEMIT directive adds a specified string to the header file generated for C++. For example:

{$HPPEMIT 'typedef double Weight;'}

Whenever there is a unit that must be linked in even if there are no references to it, Delphi code should use the HPPEMIT directive. The HPP header generated from that unit then contains some macros that ensure that, when included in a C++ source file, the header causes the unit to be linked in.

HPPEMIT directives are output into the "user supplied" section at the top of the header file in the order in which they appear in the Delphi file.

The HPPEMIT directive accepts an optional END directive that instructs the compiler to emit the string at the bottom of the .hpp file. Otherwise, the string is emitted at the top of the file. Example:

{$HPPEMIT 'Symbol goes to top of file'}
{$HPPEMIT END 'Symbol goes to bottom of file'}

For C++ on Mobile Platforms, {$HPPEMIT LINKUNIT} Replaces #pragma link

For C++ applications, {$HPPEMIT LINKUNIT} replaces #pragma link on mobile platforms. The Delphi run time has units that must be linked in order to enable some functionality. In C++, auto-linking was previously achieved using the following directive:

{$HPPEMIT '#pragma link "<unitname>"'}

Now you should use the following directive instead:

{$HPPEMIT LINKUNIT}

Generating Namespace Declarations for C++

Additionally, two new HPPEMIT directives have been added in XE5 Update 2:

{$HPPEMIT OPENNAMESPACE} - This generates C++ namespace declarations for the current unit. For example, if you use {$HPPEMIT OPENNAMESPACE} in the FMX.Bind.Editors.pas unit, the following content will be generated in the corresponding .HPP file:

namespace Fmx { namespace Bind { namespace Editors {

{$HPPEMIT CLOSENAMESPACE} - This generates closing braces for the namespaces declared with {$HPPEMIT OPENNAMESPACE}.
{$HPPEMIT NOUSINGNAMESPACE} - This tells the Delphi compiler not to generate the "using namespace <unit-name>;" that is typically seen at the end of the .HPP generated from a .PAS unit. This directive avoids polluting the global namespace and can be very helpful in avoiding ambiguities. The same effect can be achieved by defining the DELPHIHEADER_NO_IMPLICIT_NAMESPACE_USE macro. However, the latter might cause failures in cases where code does not use qualified names (for instance, in event handlers generated by the IDE).
https://docwiki.embarcadero.com/RADStudio/Alexandria/en/HPP_emit_(Delphi)
CC-MAIN-2022-40
refinedweb
377
50.57
Number of ways to remove elements to maximize arithmetic mean

Given an array arr[], the task is to find the number of ways to remove elements from the array so as to maximize the arithmetic mean of the remaining array.

Examples:

Input: arr[] = { 1, 2, 1, 2 }
Output: 3
Remove elements at indices: { 0, 1, 2 }, { 0, 2, 3 }, { 0, 2 }

Input: arr[] = { 1, 2, 3 }
Output: 1

Approach: The arithmetic mean of the array is maximized when only the maximum element(s) remain in the array.

Now consider the array arr[] = { 3, 3, 3, 3 }. We just need to make sure that at least one instance of the maximum element remains in the array after removing the other elements. This will guarantee maximization of the arithmetic mean. Hence we need to remove at most 3 elements from the above array. The number of ways to remove at most 3 elements:

- Zero elements removed. Number of ways = 1.
- One element removed. Number of ways = 4.
- Two elements removed. Number of ways = 6.
- Three elements removed. Number of ways = 4.

Hence total = 1 + 4 + 6 + 4 = 15 = 2^4 - 1.

Now consider the array = { 1, 4, 3, 2, 3, 4, 4 }. On sorting, the array becomes { 1, 2, 3, 3, 4, 4, 4 }. In this case, there are elements other than 4. We can remove at most 2 instances of 4, and when those instances are removed, the other elements (which are not 4) should always be removed with them. Hence the number of ways will remain the same as the number of ways to remove at most 2 instances of 4. The various ways of removing elements:

{ 1, 2, 3, 3 }
{ 1, 2, 3, 3, 4 }
{ 1, 2, 3, 3, 4 }
{ 1, 2, 3, 3, 4 }
{ 1, 2, 3, 3, 4, 4 }
{ 1, 2, 3, 3, 4, 4 }
{ 1, 2, 3, 3, 4, 4 }

Therefore the answer is 2^(count of max element) - 1.

Below is the implementation of the above approach.
PHP

<?php
// Iterative function to compute (x^y) % p
function power($x, $y, $p)
{
    $res = 1;
    $x = $x % $p;
    while ($y > 0)
    {
        // If y is odd, multiply
        // x with result
        if ($y & 1)
            $res = ($res * $x) % $p;

        // y must be even now
        // y = y/2
        $y = $y >> 1;
        $x = ($x * $x) % $p;
    }
    return $res;
}

// Function to return number of ways
// to maximize arithmetic mean
function numberOfWays($arr, $n)
{
    $mod = 1000000007;
    $max_count = 0;
    $max_value = $arr[0];
    for ($i = 0; $i < $n; $i++)
        if ($max_value < $arr[$i])
            $max_value = $arr[$i];
    for ($i = 0; $i < $n; $i++)
    {
        if ($arr[$i] == $max_value)
            $max_count++;
    }
    return (power(2, $max_count, $mod) - 1 + $mod) % $mod;
}

// Driver code
$arr = array( 1, 2, 1, 2 );
$n = 4;
echo numberOfWays($arr, $n);

// This code is contributed
// by Arnab Kundu
?>

Output:

3
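The same counting argument fits in a few lines of Python; this is a sketch equivalent to the implementation above, using Python's built-in three-argument pow for modular exponentiation:

```python
MOD = 10**9 + 7

def number_of_ways(arr):
    # The answer is 2^(count of maximum element) - 1, taken mod 1e9+7.
    max_count = arr.count(max(arr))
    return (pow(2, max_count, MOD) - 1) % MOD

print(number_of_ways([1, 2, 1, 2]))           # -> 3
print(number_of_ways([3, 3, 3, 3]))           # -> 15
print(number_of_ways([1, 4, 3, 2, 3, 4, 4]))  # -> 7
```

The third call reproduces the seven removal ways listed above for { 1, 4, 3, 2, 3, 4, 4 }, since the maximum element 4 appears three times and 2^3 - 1 = 7.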
https://www.geeksforgeeks.org/number-of-ways-to-remove-elements-to-maximize-arithmetic-mean/
CC-MAIN-2019-22
refinedweb
629
60.08
README

easy-react

easy-react is a framework that solves the store and router problems of creating a single-page react app. It is composed of three independent libraries: mini-routerjs as router, jsonstore-js as store, and History for browser history management. It is also lightweight: the size of the minified bundle is less than 25k.

Installing

Use via npm:

$ npm install easy-react --save

const EasyReact = require('easy-react');
const Provider = require('easy-react/Provider');

// Use es6 import
import EasyReact from 'easy-react';
import Provider from 'easy-react/Provider';

Use in browser:

Scripts for the browser are under the build directory; use easy-react.js for the development environment (it contains inline source maps) and easy-react.min.js for production. The reference in the browser is window.EasyReact and it has two properties: App as the easy-react constructor and Provider as the contexts provider component. Make sure window.React, window.ReactDOM and window.ReactDOMServer are available before using these bundles. It is recommended to build your own bundles using the easy-react package.

Conventions

EasyReact is the main class of the easy-react package, and app is an instance of EasyReact.

Write your first isomorphic react application using easy-react

Now we are going to write an application displaying user information. It contains two pages: user list and user detail. Here is a complete example; read the instructions and run it.

Let's walk through the steps of creating the example easy-react app:

1. Creating two react classes, User and UserList, as pages.
The following is a common example of an easy-react react class:

import React, { Component, PropTypes } from 'react';

const onClickAction = function(store, data){
    store.set('data', data);
};

class Example extends Component{
    _onClick(){
        this.context.update(onClickAction, this.props.data); // This operation will change the app store and then update the current page.
        this.context.to('/'); // This operation will navigate the router to '/'.
    }
    render(){
        var Link = this.context.Link;
        return (
            <p>
                <button onClick={this._onClick.bind(this)}>
                    {this.props.data}
                </button>
                <Link href="/">Index page</Link>
            </p>
        );
    }
}

Example.propTypes = {
    data: PropTypes.string.isRequired
};

Example.contextTypes = {
    to: PropTypes.func.isRequired, // It's a reference to easy-react's to method.
    update: PropTypes.func.isRequired, // It's a reference to easy-react's update method.
    Link: PropTypes.func.isRequired // It's a react component whose type is function.
};

export default Example;

The special thing about this Example is the contexts it uses: to, update and Link. These contexts are provided by the Provider of easy-react (it will be introduced later). About react context, please see Context or google it if the link is invalid.

Use update(name, action, a, b, c, d, e, f) if you want to update the store and then update the page; for more about the params please see do. Use to(url, action, a, b, c, d, e, f) if you want the app to navigate to another page. Use Link if you want the app to navigate to another page when the link is clicked.

2. Mapping / to UserList and /users/:userId to User.
The following is a common example of an easy-react route registration:

import React from 'react';
import EasyReact from 'easy-react';
import Provider from 'easy-react/Provider';

const app = new EasyReact({
    store: yourStore,
    viewContainer: '#root'
});

app.createRoute('/foo/:bar', function (request, state) {
    return (
        <Provider app={app}>
            <Example data={state}/>
        </Provider>
    );
});

As the example shows, we instantiated an app first and then registered a route with the path /foo/:bar; the route callback returns a react component which will be used as the view. The Provider component provides the to, update and Link contexts to its children; it requires an instance of EasyReact as its app property. The router callback will be passed two parameters: request and state. The state is a copy of the current store and the request is an object parsed by simple-url.

3. Do some further processing to make the app work.

An app created by easy-react is url driven. At the server side, we use app.getView(path, stringify, staticMarkup) to get rendered markup, as the example does. Before rendering the view, we perhaps need to update the store using some data that comes from a database or another server. To do this, use the app.updateStore(name, action, a, b, c, d, e, f) method.

In the browser context, there are three different ways to drive the app: app.to, app.update and Link. The to method navigates the app to display a page routed by the url parameter. The update method will do an action (if it's provided) to update the store and then get the window.location.href to update the current page. The Link component will navigate the app to its href. Also, the app will listen to the browser's history changes and use to to drive itself. So, to make the app work in the browser, we need to do an update() when the bundled script is loaded, as the example does.

That's all; a basic easy-react application is completed.
Constructor and methods

EasyReact(options)

app.createRoute(route, callback)

app.createMismatch(callback)

app.getView(path, stringify, staticMarkup)

app.updateStore(name, action, a, b, c, d, e, f)

This method is used to update the app's store; its usage is equal to jsonstore-js's do.

app.to(url, action, a, b, c, d, e, f)

This method is used to navigate the app to a page whose url matches the url parameter. If the parameters following the url are provided, the app's store will be updated first. The usage of the action is equal to jsonstore-js's do.

app.update(name, action, a, b, c, d, e, f)

This method is used to update the current page whose url matches window.location.href. If the action parameter is provided, the app's store will be updated first. The usage of the action is equal to jsonstore-js's do.

app.get(path, copy)

This method is used to get some data from the app's store; its usage is equal to jsonstore-js's get.

License

MIT
https://www.skypack.dev/view/easy-react
CC-MAIN-2022-05
refinedweb
1,075
58.48
- Creating Stored Procedures
- Returning Result Sets
- Conclusion

If you are familiar with stored procedures and why they are used, you know about Microsoft's T-SQL language for writing them with SQL Server. But the problem with T-SQL is that programmers already have enough to learn, much less having to master another language for storing and executing code in databases. T-SQL is a crude version of some higher-level languages such as Visual Basic, but gets the job done with fundamentals. Now, with SQL 2005, you can harness the power of a higher-level language to not only do more with stored procedures but also save time by already knowing a popular .NET language. It's just another part of Microsoft's quest to make things easier and more familiar to administrators and programmers when developing and managing projects with SQL 2005.

The most important reason to use CLR (Common Language Runtime) stored procedures is security. Your stored procedures written with .NET managed code are not only type safe but also more protected from unwanted code manipulation. Microsoft also ascertains that CLR stored procedures written in a .NET-based language perform as well as T-SQL stored procedures do. This article shows you how you can use CLR integration when creating stored procedures.

Creating Stored Procedures

Before using CLR, it has to be enabled in SQL 2005. To do this, execute the sp_configure system stored procedure by running the following code in the New Query pane of Management Studio:

EXEC sp_configure @configname = 'clr enabled', @configvalue = 1
RECONFIGURE WITH OVERRIDE
GO

It would be nice to create and execute CLR stored procedures as easily as T-SQL stored procedures in SQL 2005, but there are a number of extra steps. To get the .NET Framework and SQL 2005 talking to each other first requires the use of an assembly, which is a compiled code source that is more secure because it is stored in a DLL (Dynamic Link Library).
It also provides a way for SQL 2005 to implement this code without having the .NET Framework built directly into the SQL Server. To demonstrate the process of creating procedures with CLR, let's begin with a simple Hello World example. First, create the file that will hold the code to be executed against SQL Server. In this example, the code could be what follows in Listing 1.1.

Listing 1.1 Hello World CLR stored procedure code using C#

using System;
using System.Data.Sql;
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

namespace HelloWorld.SqlServer
{
    public static class SProc
    {
        public static int PrintMessage(String Message)
        {
            int i = 0;
            try
            {
                SqlContext.Pipe.Send(Message);
            }
            catch (Exception err)
            {
                i = 1;
                SqlContext.Pipe.Send("An error occurred: " + err.Message);
            }
            return i;
        }
    }
}

The code in Listing 1.1 is a simple namespace with one class having a static method that sends the message when executed. The SqlContext.Pipe property, which belongs to the Microsoft.SqlServer.Server namespace, is used to pipe the message back to the client from SQL Server. It is also often used to send back result sets from tables in a database when using SQL queries.

Copy and paste the code in Listing 1.1 to a file called HelloWorldCLR.cs, which is the file that we'll compile to an assembly. Use the following code from the command line to compile to an assembly:

csc /target:library HelloWorldCLR.cs /reference:"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlaccess.dll"

The csc command is used to compile C# code. You may have to provide the path to the HelloWorldCLR.cs file if it does not reside in the same directory as the csc executable. Along with compiling our file to an assembly, we're also referencing the .NET provider library for SQL Server called sqlaccess.dll.
Next, we need to add the assembly to SQL Server by using the Create Assembly statement with T-SQL:

CREATE ASSEMBLY HelloWorldCLR
FROM 'C:\WINNT\Microsoft.NET\Framework\v2.0.50727\HelloWorldCLR.dll'
WITH PERMISSION_SET = SAFE
GO

The path to HelloWorldCLR.dll may vary. Now that the assembly has been registered with SQL Server, we need a way to execute the CLR procedure on demand from the server. As odd as it sounds, the most common way to execute CLR stored procedures is by using a SQL Server stored procedure with T-SQL, because SQL Server cannot handle the CLR code natively. A SQL Server stored procedure must first be created with T-SQL that will reference the CLR counterpart. To do this, execute the following code against the server:

CREATE PROC HelloWorldCLRMsg
    @Message NVARCHAR(255)
AS
EXTERNAL NAME HelloWorldCLR.[HelloWorld.SqlServer.SProc].PrintMessage
GO

The preceding code creates a stored procedure called HelloWorldCLRMsg that, when called, in turn executes the associated CLR procedure code. The External Name statement indexes into our assembly (HelloWorldCLR), finds the SProc class, and binds the PrintMessage method. To see the CLR stored procedure set in motion, execute the following code to test it out:

EXEC HelloWorldCLRMsg "Hello World!"

The message should appear in the results pane. This stored procedure can now be called from your web applications in the same way any other SQL Server stored procedures are called. The only difference is that you are actually executing a CLR stored procedure via a SQL Server stored procedure. A slick yet somewhat tedious way to get SQL Server using .NET code for stored procedures. When doing other CLR stored procedures for the same database, you can add them to the same assembly. Each stored procedure essentially becomes a code module for that assembly.
http://www.informit.com/articles/article.aspx?p=715008&amp;seqNum=2
CC-MAIN-2016-44
refinedweb
945
56.35
Named Arguments

Another way of passing arguments is by using named arguments. Named arguments free you from remembering or following the order of the parameters when passing arguments. In exchange, you have to memorize the names of the parameters of a method (though since Visual Studio has IntelliSense, you don't really have to). Named arguments improve readability because you can see which values are assigned to which parameters. Named arguments were introduced in C# 4 (2010), so if you are using an older version such as C# 3.0 (2008), this won't work.

The following example shows the syntax of using named arguments when calling a method.

MethodToCall(
    paramName1: value,
    paramName2: value,
    ...
    paramNameN: value);

The program in Example 1 demonstrates how to use named arguments.

using System;

namespace NamedArgumentsDemo
{
    public class Program
    {
        static void SetSalaries(decimal jack, decimal andy, decimal mark)
        {
            Console.WriteLine("Jack's salary is {0:C}.", jack);
            Console.WriteLine("Andy's salary is {0:C}.", andy);
            Console.WriteLine("Mark's salary is {0:C}.", mark);
        }

        public static void Main()
        {
            SetSalaries(jack: 120, andy: 30, mark: 75);

            //Print a newline
            Console.WriteLine();

            SetSalaries(andy: 60, mark: 150, jack: 50);
            Console.WriteLine();

            SetSalaries(mark: 35, jack: 80, andy: 150);
        }
    }
}

Example 1 – Using Named Arguments

Jack's salary is $120.00.
Andy's salary is $30.00.
Mark's salary is $75.00.

Jack's salary is $50.00.
Andy's salary is $60.00.
Mark's salary is $150.00.

Jack's salary is $80.00.
Andy's salary is $150.00.
Mark's salary is $35.00.

The WriteLine() calls inside SetSalaries use the currency formatter indicated by {0:C}, which formats numerical data into a monetary format. The output shows that even if we change the order of the arguments in the three method calls, the proper values are still assigned to their respective parameters. You can mix named arguments and fixed arguments that depend on the position of the parameter.
```csharp
// Assign 30 as Jack's salary and use named arguments for
// the assignment of the other two
SetSalaries(30, andy: 50, mark: 60);
// or
SetSalaries(30, mark: 60, andy: 50);
```

The following calls are wrong and will lead to errors:

```csharp
SetSalaries(mark: 60, andy: 50, 30);
// and
SetSalaries(mark: 60, 30, andy: 50);
```

As you can see, you need to place the fixed arguments first. In the first and second examples, we give 30 as the first argument, so it is assigned to jack since he is the first parameter of the method. The third and fourth examples are wrong because the named arguments appear before the fixed argument. Always place the named arguments after the fixed arguments to prevent an error.
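For comparison, the same positional-before-named rule exists in Python's keyword arguments. This aside is not part of the C# example; the function below is a Python re-creation of SetSalaries for illustration:

```python
def set_salaries(jack, andy, mark):
    # Return the assignments so we can check who got which salary
    return {"jack": jack, "andy": andy, "mark": mark}

# Named (keyword) arguments can be given in any order
print(set_salaries(mark=35, jack=80, andy=150))  # {'jack': 80, 'andy': 150, 'mark': 35}

# A positional argument is allowed, but only before the named ones
print(set_salaries(30, andy=50, mark=60))        # {'jack': 30, 'andy': 50, 'mark': 60}

# set_salaries(mark=60, 30)  # SyntaxError: positional argument follows keyword argument
```

Unlike C# 4, Python rejects a positional argument after a keyword argument at parse time rather than at compile time, but the rule for callers is the same.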
I'm playing around at making an RTS. One of the things you need is a way to drag a box and select units. In that process you need a way of getting the nodes within that box. You could probably generate a collision solid on the fly to test for collisions within that boundary, but I think it's faster to just find the 2D relative point of the nodepaths and compare that to the mouse coordinates at the corners of the selection box. To do this I wrote the following function; enjoy responsibly:

```python
def Is3dpointIn2dRegion(self, node, point1, point2, point3d):
    """Takes a 2d selection box from the screen, defined by two opposite
    corners, and queries whether a given 3d point lies in that box.

    Returns True if it does, False if it does not.

    node    -- the parent node, probably render or a similar node
    point1  -- the first 2d corner coordinate of the selection box
    point2  -- the opposite 2d corner coordinate of the selection box
    point3d -- the point in 3d space to test
    """
    # Convert the point to the 3-d space of the camera
    p3 = base.cam.getRelativePoint(node, point3d)

    # Convert it through the lens to render2d coordinates
    p2 = Point2()
    if not base.camLens.project(p3, p2):
        return False
    r2d = Point3(p2[0], 0, p2[1])

    # And then convert it to aspect2d coordinates
    a2d = aspect2d.getRelativePoint(render2d, r2d)

    # Find the biggest/smallest X and Y of the two 2d points provided
    if point1.getX() > point2.getX():
        bigX = point1.getX()
        smallX = point2.getX()
    else:
        bigX = point2.getX()
        smallX = point1.getX()

    if point1.getY() > point2.getY():
        bigY = point1.getY()
        smallY = point2.getY()
    else:
        bigY = point2.getY()
        smallY = point1.getY()

    pX = a2d.getX()
    pY = a2d.getZ()  # aspect2d is based on a Point3, not a Point2 like render2d

    return smallX < pX < bigX and smallY < pY < bigY
```

You provide the two mouse coordinates and the 3D point you're testing, and it tells you if that 3D point is within the box coordinates. I'm using this function to record the mouse points:

```python
def savemousePos(self):
    pos = Point2(base.mouseWatcherNode.getMouse())
    pos.setX(pos.getX() * 1.33)
    return pos
```

The multiplication of the X position by 1.33 corrects between mouse coordinates (a 1x1 screen) and the actual screen dimensions that aspect2d uses (1.33x1, i.e. 4:3).
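The corner bookkeeping at the end can be collapsed with Python's built-in min/max. Here is a framework-independent sketch of the same containment test (plain tuples stand in for Panda3D's point types; the function name is illustrative):

```python
def point_in_box(p, corner1, corner2):
    """Return True if the 2D point p lies inside the axis-aligned box
    defined by two opposite corners, given in any order."""
    (x, y), (x1, y1), (x2, y2) = p, corner1, corner2
    return (min(x1, x2) <= x <= max(x1, x2)
            and min(y1, y2) <= y <= max(y1, y2))
```

The same idea applies directly to the aspect2d coordinates computed above, and it is immune to the corner-ordering bugs that separate big/small variables invite.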
For up-to-date information, please follow the corresponding WebStorm blog or PhpStorm blog.

I would enjoy a screencast covering the new features that I can review at any time and at my own pace. Are there any plans to do something like this? Thanks.

This webinar will be recorded, and it will cover most of the PhpStorm 6 new features, so it should be pretty convenient for you to view it later at your own pace.

Will the recording be available publicly or just for the attendees?

It'll be available for everyone, publicly.

That's awesome. Thanks for the info.

Yeah, that's great news.

I make a code generator based on PHP, and PhpStorm helps a lot with identifying problems in the generated code. My code is quite complex, using the latest namespaces, try/catch/throw exceptions, and so on. What I appreciate most is how it documents a phpdoc after the code is generated. That is really important, since my other projects' code is quite a headache to remember after a few years.

Please check the relevant web help section at

Awesome! Can't wait for the webinar.

Where can I get the recorded webinar? Thanks.

It'll be available at and announced in the blog.

The PhpStorm 6 webinar recording is now available. Also included are the session Q&A.
Microsoft Windows has long supported standards-based management. We were one of the founding members of the Distributed Management Task Force (DMTF) and shipped the first, and richest, Common Information Model (CIM) Object Manager (CIMOM) we all know as Windows Management Instrumentation (WMI). While WMI has served our customers and partners well, the true promise of standards-based management has not yet been realized. By and large, it is still a world where you have vendor-specific tools – Windows managing Windows, Linux managing Linux, and network & storage vendors managing their own stuff. Customers still have islands of management. There are examples of products which bridge these worlds, but often they require bogging down the managed resource with extra vendor-specific agents. Lack of standards-based management is a major pain point for customers.

We spent a lot of time talking to partners and customers to understand what they needed to succeed with Windows Server "8". We paid particular attention to Cloud OS scenarios, and from there it was clear that we needed a major investment in standards-based management. The shift to a Cloud OS focus significantly increased the scope of the management problem. Not only did we shift our focus from managing a server to managing lots of servers, but we also need to manage all the heterogeneous devices necessary to bring those servers together into a comprehensive and coherent computing platform.

Today, cloud computing works by picking and qualifying a very small set of components and having a large staff of developers write component-specific management tools. Generalized cloud computing requires standards-based management. This is why we made a major investment in standards-based management in Windows Server "8". The heart of the management problem is that it requires a distributed group of people, often with conflicting interests, to make a common set of decisions.
Our approach to this is simple: create a value chain that makes it easy and rational to do the right thing. Development organizations look at how much it costs to develop something and what benefits it brings them. If the ratios work out, they do the work; otherwise they don't. So our job is to minimize the cost and effort to implement standards-based management and to maximize the benefit. This blog post describes how we accomplished that. It does not discuss our other major standards-based management initiative, the Storage Management Initiative Specification (SMI-S), which allows Windows Server "8" to discover and manage external storage arrays. We'll discuss that in a future blog post.

This blog post contains content appropriate for both IT Pros and developers. It contains both code and a schema example to highlight how easy we made things for developers. If you are an IT Pro, you might find this valuable when making deployment and architecture decisions for your IT infrastructure.

Wojtek Kozaczynski, a Principal Architect in the Windows Server team, wrote this blog.

–Cheers! Jeffrey

Background

WMI first shipped in Windows 2000 (and was available down-level to NT4). It used a COM-based development model, and writing class providers was not for the faint of heart. Frankly, it was difficult to write them and more difficult to write them correctly. In addition to a difficult programming model, teams also had to learn the new world of standards-based management, with CIM schemas, the Managed Object Format (MOF) language, and other new terms, mechanisms and tools. We got quite good coverage in the first few releases, but many teams were not satisfied with the effort-to-benefit ratio. A big part of that equation is the benefit side. Starting a management ecosystem from scratch is incredibly difficult. If you write a provider and no one calls it, what value was generated? None.
This is why Systems Management Server (SMS), now known as System Center Configuration Manager (SCCM), added support for WMI around the same time we released it (WMI was actually spawned out of the SMS team). This was great, but it had two problems:

- It created an incentive to produce WMI providers which were largely focused on inventory and monitoring of the managed resources (vs. command and control), and
- SMS was not widely deployed, so the teams that wrote WMI providers didn't see the customer impact of their investments.

Since the release of WMI there has been a steady increase in the number of management products and tools that consume its providers, but for a long time this was not matched by a proportional increase in coverage. Then things started to change with Hyper-V. The original management schemas defined by the DMTF were focused on what existed in the world, as opposed to focusing on management scenarios. There are things to be said for both approaches, but when it came to managing virtual environments the DMTF got a team of pragmatists involved, and when that schema came out, it was quickly adopted. The Hyper-V team developed providers that implemented the schema classes, and the System Center Virtual Machine Manager (SCVMM) team produced a management tool which consumed them. This worked really well, and it turned some heads because it demonstrated that WMI was good for more than just inventory and monitoring. WMI could effectively support configuration, command and control as well. To be fair, a number of other teams had done similar things before, but none of them had the visibility or impact that Hyper-V and SCVMM had.

The other big change in standards-based management was the definition and availability of a standard management protocol. WMI was a standard CIMOM that hosted many standard class providers, but at the time there wasn't an interoperable management protocol, so WMI used DCOM.
This, however, made it an island of management: Windows managing Windows. It worked well, but it did not deliver on the vision of standards-based management. That changed with the DMTF's definition and approval of the WS-Management (WS-MAN) protocol, a SOAP-based, firewall-friendly protocol that allows a client on any OS to invoke operations on a CIMOM running on any platform. Microsoft shipped the first partial implementation of WS-MAN in Windows Server 2003/R2 and named it Windows Remote Management (WinRM). It interoperated with a number of CIMOM/WS-MAN stacks available on other platforms, including Openwsman (Perl, Python, Java and Ruby bindings), Wiseman (a Java implementation), and OpenPegasus.

Once standards-based management clients and CIMOMs could interoperate, the ball started rolling. However, it also started stressing the seams in the increasingly heterogeneous world as vendors used the protocols to develop truly agentless management solutions. Differences in the ways the specifications got implemented meant that the tools needed special-case code. Difficult APIs made it hard to write serious applications. Gaps in coverage meant that vendors still had to install agents, and vendors and customers hate having extra agents on the machines. Vendors hate them because they require a lot of work to write and to keep up to date with OS releases. Customers hate them because they complicate provisioning processes and introduce yet another thing that consumes precious resources and can go wrong.

Why change?

Early in the Windows Server "8" planning process we realized that we could not deliver a Cloud OS without a major investment in standards-based management. There are simply too many things to manage in the cloud to manage each of them differently.
Considering the situation I described above, we concluded that we needed to:

- Dramatically reduce the effort required to write WMI providers and standards-based management tools
- Substantially improve manageability coverage, particularly in the areas of configuration, command and control
- Update our code to comply with the latest DMTF standards
- Tightly integrate WMI and Windows PowerShell
- Provide a clear and compelling value proposition for everyone to use standards-based management, on Windows or any other platform

Summary of what we have done

Let's take a look at what we've done from two perspectives: the IT Pro perspective and the Windows/device developer perspective. Our goal for IT Pros is to let them manage everything using Windows PowerShell, so we needed to give them simple-to-use cmdlets to remotely manage resources with standard interfaces on remote machines or heterogeneous devices. This, in turn, allows IT Pros to script against those resources and write workflows which tie together tens, or tens of thousands, of servers and devices without having to learn, configure and operate separate technologies and toolsets for each resource type.

Our goal for Windows/device developers is to make it simple and easy to define and implement standards-based management interfaces, and then expose them through client APIs, cmdlets and REST endpoints. For developers writing management tools, we wanted to make it simple and easy to manage all the components of a solution, including down-level DCOM Windows servers and standards-based Operating Systems, servers and devices. For Web developers, we want to make it simple and easy to manage Windows via REST APIs.

Let's start looking at what we have done, beginning with the developer's perspective. The picture below shows the components of what we call the CIM stack.
- In the area of provider development we introduced a new Management Infrastructure (MI) API for WMI, which significantly simplifies development of new providers (MI Providers in the picture). New tools generate skeleton providers from the CIM class specifications. The new API supports the rich cmdlet semantics that IT Pros have come to expect: -WhatIf, -Confirm, -Verbose, as well as progress bars and the other cmdlet behaviors. When a new provider that supports the rich semantics is called by an old CIM client, these APIs do nothing. However, new clients and Windows PowerShell can request these semantics, and the APIs "light up" to deliver the rich experience.
- We made WS-MAN the primary protocol for management of Windows Servers and kept the COM and DCOM stacks for backwards compatibility. We completed the full set of WS-MAN protocol operations and optimized our implementation for performance and scale. We also added support for handling connection interruptions to make management more robust. This simplifies the task of managing large sets of machines, where interruptions are sure to occur.
- For client developers we created a new MI Client API and stack that can communicate with WMI over COM locally, and over DCOM and WS-MAN remotely. It can also communicate with any CIM-compliant server via WS-MAN. The client API, both C/C++ and .Net, is consistent with the provider MI API (they share the main header file).

The above gave us the foundation on which we built Windows PowerShell access to CIM classes implemented in any CIMOM, which is illustrated in the picture below.

- We created a Windows PowerShell module called CIM Cmdlets with tasks that directly correspond to the generic CIM operations. The module is built on top of the client .Net API and can manage any standards-based management resource.
- We modified Windows PowerShell to be able to generate resource-specific CIM-based cmdlets at run-time.
These cmdlets are generated from a declarative XML specification (CDXML) of the Windows PowerShell-to-CIM mapping and can call CIM class providers locally or remotely. This allows a developer to write a CIM provider, write the CDXML mapping file, and make the generated cmdlets available on every Windows device running Windows PowerShell 3.0. This works for non-Windows providers as well.

Now imagine the value of this to device vendors. If they implement a standards-compliant provider and include this CDXML mapping file, then a couple hundred million Windows machines will be able to manage that device without the vendor having to write, debug, distribute or support any additional code. When a new version of Windows comes out, their device is supported without them having to write any code. This alone gives a huge incentive to device vendors to support standards-based management.

In the picture above you may have noticed a box labeled "NanoWBEM". Let's talk about that now. As we engaged our partners and the community in our plans to pursue standards-based management, we got mixed reactions. Some felt it was the right thing to do and understood the business opportunities it could create, but were skeptical about whether it would really work. When we drilled into that, we discovered that the partners did not feel they could succeed using the existing open-source CIMOMs. At the same time, our own System Center team encountered similar problems as it expanded its capabilities to manage Linux servers. To address them, the team started a project to build a portable, small-footprint, high-performance CIMOM, and the result is NanoWBEM. NanoWBEM is written in portable C and runs on Linux, UNIX and Windows. Because of its very small size it is suitable for running on small devices such as network devices, storage controllers and phones.
NanoWBEM uses the same MI provider APIs as WMI, so the same tools that developers use to create Windows providers can be used to develop providers for other platforms. To address the original concerns of our partners and the community, we are planning to make NanoWBEM available to the open-source community.

With the things I described above we have the best of both worlds:

- We give IT Pros powerful tools to access the standards-based management APIs realized by CIM class providers. If those classes are implemented by MI providers, they can support the extended Windows PowerShell semantics like progress, -WhatIf, -Confirm and -Verbose.
- We also gave managed-software and device developers tools to create new MI providers at a significantly lower cost than before, and to make them manageable by IT Pros via Windows PowerShell modules at a very small incremental cost.

Finally, for the Web developers who want to manage Windows from non-Windows platforms, we have developed the Management OData IIS Extension. This contains tools and components that simplify building REST APIs (OData service endpoints). OData is a set of URI conventions, tools, components and libraries for building REST APIs. What makes OData services stand out is that they are based on explicit domain models, which define their data content and behavior. This allows rich client libraries (e.g. Windows/iOS/Android phones, browsers, Python, Java, etc.) to be generated automatically, simplifying the development of solutions on a wide range of devices and platforms. There are a number of products that have full Windows PowerShell APIs and need REST APIs now that they are being hosted in the cloud. This is why our first use of OData focused on exposing sets of Windows PowerShell cmdlets. However, we have architected a general-purpose solution for future releases.
REST APIs, and OData in particular, map very well to CIM data models, so what we did was provide a mechanism to map sets of cmdlets into a CIM data model and then expose that data model as an OData service endpoint.

A shallow dive into the CIM stack

In the preceding section, I showed a high-level overview of what we have done. This inevitably left many of you asking: so how does it work in practice? In the team blog we will take deep dives into all the features and components of the Windows Server "8" management platform. In the meantime, for the impatient among us, I will do a "shallow dive" into the features, starting with the IT Pro experience.

CIM cmdlets

IT Pros have two mechanisms to manage CIM classes. The first option is to use the generic CIM cmdlets from the CimCmdlets module, which is imported into PowerShell_ISE and PowerShell by default. The cmdlets of the module should look quite familiar to IT Pros familiar with CIM, because they map very directly to the generic CIM/WS-MAN operations. For example, three different parameter sets of the Get-CimInstance cmdlet map directly to the CIM/WS-MAN generic operations GetInstance, EnumerateInstances and QueryInstances. The module also includes cmdlets for creating remote server connections (sessions) and for inspecting the definitions of classes registered with the CIMOM.

The new CIM cmdlets are a replacement for the *-Wmi* cmdlets, which only worked Windows-to-Windows. The cmdlets are optimized to work over WS-MAN and will continue to work seamlessly over DCOM, so as an IT Pro you no longer need to use two sets of commands to manage Windows and non-Windows machines. The following example shows getting the names and types of the properties of the Win32_Service class registered in the WMI root\cimv2 namespace on the local computer. Getting the names of the Win32 services from a remote server is as simple as this.
CIM-Based Cmdlets generated from CDXML

IT Pros can also use cmdlets that Windows PowerShell generates using a CDXML mapping file. This model allows developers to write one piece of code and get the benefits of both the Windows PowerShell and WMI ecosystems. It also allows cmdlets to be written in native code, which was of particular interest to some of the OS feature teams. The CIM-based cmdlets, although written as WMI providers, look and feel just like Windows PowerShell cmdlets:

- They provide task-oriented abstractions that hide implementation details like namespace, class name, method names, etc.
- They support full Windows PowerShell semantics: -WhatIf, -Confirm, etc.
- They have uniform support for rich operational controls: -AsJob, -ThrottleLimit, etc.
- They are packaged as Windows PowerShell modules and are discoverable using the Get-Module/Import-Module cmdlets.

The CDXML file used to generate the CIM-based cmdlets maps a cmdlet verb, noun and parameters to a Cmdlet Adapter. A Cmdlet Adapter is a .NET class which maps the requirements of a Windows PowerShell cmdlet onto a given technology. We ship a Cmdlet Adapter for CIM classes, but anyone can write their own (e.g. to map cmdlets to Java classes). The file extension of the mapping file is .CDXML (Cmdlet Definition XML). A number of related CDXML files can be combined into a Windows PowerShell module, together with files which describe the returned objects and how to format them.

The beauty of this mechanism is that Windows PowerShell can import such a .CDXML module from a remote CIMOM, and then create cmdlets which manage the classes on that server without any prior knowledge about them. In other words, a CIMOM can expose a server-specific Windows PowerShell interface to its classes at run-time, without the need for any software installation! Authoring CDXML files requires a level of detail comparable with specifying any other cmdlet, plus information about the mappings to the CIM class functions.
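The general pattern of building commands at run-time from a declarative mapping can be illustrated outside PowerShell. Below is a toy Python sketch of the idea; the mapping dictionary and the fake CIM backend are invented for illustration and are not the CDXML format:

```python
def make_command(mapping, backend):
    """Build a callable from a declarative verb/noun mapping, loosely
    analogous to how PowerShell generates cmdlets from CDXML."""
    name = "{0}-{1}".format(mapping["verb"], mapping["noun"])
    mandatory = set(mapping.get("mandatory", []))

    def command(**params):
        missing = mandatory - params.keys()
        if missing:
            raise TypeError("%s: missing mandatory parameter(s): %s"
                            % (name, sorted(missing)))
        # Delegate to the backend, which stands in for a CIM enumeration call
        return backend(mapping["class"], params)

    command.__name__ = name
    return command

# A fake backend standing in for a generic CIM query operation
def fake_cim_query(cim_class, filters):
    return {"class": cim_class, "filters": filters}

get_win32service = make_command(
    {"verb": "Get", "noun": "Win32Service",
     "class": "Win32_Service", "mandatory": ["Name"]},
    fake_cim_query,
)

print(get_win32service(Name="winrm"))
# {'class': 'Win32_Service', 'filters': {'Name': 'winrm'}}
```

As in the real mechanism, the command's name and its mandatory-parameter check come entirely from the declarative mapping, while the backend does the generic work.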
To simplify that task we developed CDXML editing tools that we will detail in a separate blog. Without going into details, let me illustrate the idea behind generated CIM-based cmdlets with a simple example. Above I showed how to access the Win32_Service class and its instances using the CIM cmdlets. Below is a .CDXML file that defines the Get-Win32Service generated CIM-based cmdlet, which will call the enumeration method on the same class. You will not find the name of the Get-Win32Service cmdlet in the file, because it is generated by default from the default noun Win32Service and the verb Get. What is in the file is the <QueryableProperties> element, which defines the properties that Windows PowerShell will use to query for instances of the Win32_Service class. In our case the property we want to query on is the service Name.

The following sequence of Windows PowerShell commands imports the .CDXML file as a module, lists our new cmdlet as defined in the module, and then shows its signature. Notice that because the .CDXML file declares the parameter Name as mandatory (<cmdletParameterMetadata IsMandatory="true" cmdletParameterSets="ByName" />), Name is shown as the only mandatory parameter in the Get-Win32Service cmdlet signature.

Note: If you are curious about how we accomplish this, you can see the cmdlet we generate on the fly using the following command.

Our newly created cmdlet (Get-Win32Service) behaves like any other cmdlet and, as I mentioned above, can be executed as a background job. Throttling (-ThrottleLimit) is useful when executing the command against a large set of servers. You can run the command against a few hundred or thousand servers and throttle how many concurrent outstanding requests are allowed to run.

We shipped Windows PowerShell V1 with 130 cmdlets and Windows PowerShell V2 with 230 cmdlets. With Windows PowerShell V3, Windows Server "8" ships with over 2,300 cmdlets.
A large percentage of those cmdlets are written using the new WMI MI providers and .CDXML files. What that means is that those functions are available both via Windows PowerShell and via standards-based management. We recently shocked an audience by demonstrating the ability to install a set of Windows Server roles from a Linux client machine using WS-MAN!

The CIM Client .Net API

Both the CIM cmdlets and the CIM-based cmdlets in Windows PowerShell are implemented on top of the new MI .Net Client API. Although it is unlikely that IT Pros will write C# client code, management tool developers certainly will, so let's take a look at a simple example. The client API supports both synchronous and asynchronous interactions with the server. The synchronous operations return IEnumerable<CimInstance> collections of CIM class instances. The asynchronous operations use the concept of asynchronous, observable collections from the Reactive Extensions, which results in efficient, simple and compact client code. Below is an example of a simple command-line program that enumerates instances of the Win32_Service class on a remote computer.

My objective is not to discuss the client API, but to illustrate how compact and clean the resulting program is. The code that handles the enumeration is in the three highlighted lines of the Main function. The lines turn the result of enumerating instances of Win32_Service into an observable collection of CimInstance objects and associate the consumer observer object with that collection. The observer contains three callbacks that handle returned instances, the final result, and errors. This makes it simple and easy to perform rich management functions against a remote CIM server in just a few lines.

The New WMI Providers

I said earlier that we significantly simplified development of the MI providers. There are a number of things that contributed to that simplification. The picture below shows the steps involved in writing a provider.
A provider can implement one or more CIM classes, and the first step is to describe them in the MOF specification language. The next step is to generate a provider skeleton that implements the CIM classes. We provide a utility, Convert-MofToProvider.exe, to automate this step. This utility takes the class definition(s) as input and creates a number of C header and code files. Two of these files are worth mentioning:

- The first one, called the schema file, contains definitions of all the data structures, including the CIM class instances, which the provider uses. It makes the provider strongly typed, and a pleasure to work with using Visual Studio's IntelliSense and auto-completion. This file should never be edited by hand.
- The other file is the provider code file, which contains the skeleton of the provider. This is the only file that should be edited. It contains the code of all the CIM class methods, with the method bodies returning the not-implemented result. So the generated provider is buildable, can be registered and will run, but will do nothing.

The next step is to fill in the CIM class methods with their respective logic. Once that is done, the provider can be registered and tested. We also greatly simplified provider registration by building a new registration tool that takes only one input: the provider DLL. We could do that because the MI providers have their class definitions compiled into them in the schema file.

In order to make the new providers work well with Windows PowerShell, we added the extended Windows PowerShell semantics API. The essence of that feature is that a provider can obtain input from the user while an operation is executing, if the cmdlet that invoked the operation contains the -Confirm or -WhatIf parameter. The following code snippet is from a test provider that enumerates, stops and starts Win32 services, and illustrates how the feature works.
The code is part of the operation that stops the service, and it asks the user (the MI_PrompUser() function) whether she wants to stop the service whose name was given to the operation as the Name argument. If the answer is No (bContinue == FALSE), or the function failed, which means the provider could not reach the user, the provider does not stop the service but writes a message back to the user (the MI_WriteVerbose() function) and terminates the operation.

Management OData

The last feature I want to briefly describe is the IIS extensions for building OData-based RESTful management endpoints. The idea behind the feature is that we can declaratively configure how to dispatch the endpoint service requests to cmdlets in a Windows PowerShell module. Let me explain how it works using an example of a very simple endpoint that can be used to enumerate, create and terminate Windows processes. The endpoint's directory includes the three files shown in the picture below.

The schema file contains the definitions of the endpoint's entities and is loaded when the endpoint is loaded. The module file defines the Windows PowerShell cmdlets that will be called to handle the endpoint requests. The dispatching file ties the two other artifacts together by defining which cmdlets are called for different client requests. In our example it maps the query for Win32 processes onto the Get-Win32_Process cmdlet, which then uses a generic CIM cmdlet to talk to WMI. The result of this endpoint configuration is shown in the screenshot below, which is a response to the URL eq 'svchost.exe'&$select=ProcessId.

In Summary

Jeffrey often makes the point that "nobody cares about your first million lines of code". The "million" is an arbitrarily picked very large number, but the idea of the metaphor is that every software product must accumulate a critical mass of foundational components before delivering a meaningful value.
However, once that critical mass is reached, even a few additional lines can make a significant difference. Following that metaphor, I feel that in Windows Server "8" we have written our "first million" lines of standards-based management platform code. We bridged the gap between the IT Pros who are managing increasingly complex cloud infrastructures and the Windows and device developers who build the things that must be managed. We have laid a consistent foundation that spans from low-level CIM interfaces on one end to the IT Pro-oriented Windows PowerShell and OData interfaces on the other end. We've created a clear and compelling value for heterogeneous devices to implement standards-based management, and we've delivered comprehensive coverage so that management tool providers can use standards-based management to manage Windows.

@Zoltán, the way the WS-Man 1.1 spec is currently written, implementations are conformant whether they use /wsman-anon or not, since it is not required, although it is recommended to follow that convention. You are correct that for historical reasons (WinRM was originally written as the WS-Man 1.0 spec was being developed) WinRM chose the currently implemented approach. Supporting the recommendation is something we could consider for a future release. However, because it is just a recommendation and not required, any other implementation (such as BMCs) may have also chosen not to support the /wsman-anon suffix. My recommendation would be for the client to try both approaches in a heterogeneous environment.

I think it shows how much of a geek I am, but this is one of the most exciting blog posts I've read about Windows Server 8, and I just can't tell you how excited I am for the opportunity to dig into this more! Thanks for all the hard work and effort!

Thank you for sharing!

Great post, and respect for the huge work done in Server 8!
When talking about heterogeneous management using WS-Management, I have a question about one of the functionalities in the current implementations: anonymous Identify. The WS-Man 1.1 spec [DSP0226] only contains recommendations (R5.4.5-2: "it is recommended that the network address for resources that do not require authentication be suffixed by the token sequence /wsman-anon", and R11-4: "A service that supports the wsmid:Identify operation may expose this operation without requiring client or server authentication… the network address be suffixed by the token sequence /wsman-anon/identify"). Openwsman for Linux implements anonymous Identify according to this recommendation. WinRM chose a different approach (maybe for historical reasons?): anonymous Identify requests also go to /wsman, with the extra conditions specified in WS-WSMV 3.1.4.1.23 ("…when the request is unauthenticated and the following HTTP header is present: WSMANIDENTIFY: unauthenticated"). Thus if I have a mixed environment with devices all supporting WS-Management but using different implementations, as far as I can see there is no standard way to discover what type of OS/WS-Man stack they are running, and separate checks have to be implemented for Windows and Linux. I tried this in the beta of Windows Server '8', but it also does not listen on /wsman-anon. What would you recommend for anonymous WS-Man Identify in heterogeneous environments?

Please consider redoing the images and removing the auto-correct underlining… It's "not so pretty"…

@Steve: thanks for the answer!
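To make the "try both approaches" advice concrete, here is a hedged Python sketch. The helper name and the (url, headers) return format are invented for illustration and are not part of any WinRM or Openwsman API; only the endpoint paths and the WSMANIDENTIFY header come from the DSP0226 and WS-WSMV excerpts quoted above.

```python
# Illustrative sketch only: builds the two anonymous-Identify probes
# discussed above. The function name and return format are assumptions;
# the paths/header come from DSP0226 R11-4 and WS-WSMV 3.1.4.1.23.

def identify_attempts(host, port=5985):
    """Return the Identify requests to try, most standard first."""
    base = "http://%s:%d" % (host, port)
    return [
        # DSP0226 recommendation, implemented e.g. by Openwsman
        (base + "/wsman-anon/identify", {}),
        # WinRM behavior: normal endpoint plus the unauthenticated marker
        (base + "/wsman", {"WSMANIDENTIFY": "unauthenticated"}),
    ]

for url, headers in identify_attempts("server01"):
    print((url, headers))
```

A real client would POST the wsmid:Identify SOAP envelope to each candidate in turn and stop at the first well-formed IdentifyResponse.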
Testing in Python

The Basics

Everyone will agree that testing is an important part of the development process, but it is still something many people struggle with. When it comes down to writing out your test cases, it can seem tedious, or hard to wrap your head around this different way of looking at your code. With a little practice, however, it will become as second nature as coding the actual solution.

There are four different types of tests, each depending on the granularity of the code being tested, as well as on the goal of the test.

Unit Tests

These test specific methods and logic in the code, and are the most granular type of test. The goal is to verify the internal flow of the method, as well as to make sure edge cases are being handled.

def func():
    return 1

def test_func():
    assert func() == 1

Feature Tests

These test the functionality of a component. A collection of unit tests may or may not represent a feature test. The goal is to verify that the component meets the requirements given for it. If you're thinking in terms of work items, this would be testing a ticket as a whole.

class NewEndpoint:
    def on_get(self, req, resp):
        resp.body = "Hello World"

def test_new_endpoint():
    result = simulate_get("/newendpoint")
    assert result.body == "Hello World"

Integration Tests

These test the entire application, end to end. The goal is to guarantee the stability of the application: when new code is added, integration tests should still pass with minimal effort.

class MySystem:
    external_system = ExternalSystemConnector()

    def handle_message(self, message):
        try:
            self.external_system.send_message(message)
            return True
        except Exception:
            return False

def test_MySystem():
    system = MySystem()
    assert system.handle_message(good_message)
    assert not system.handle_message(bad_message)

Performance Tests

These test the efficiency of a piece of code.
The size of the code being tested can range from a single method to the whole application.

import timeit

def func(i):
    return i * 2

def test_performance():
    assert 1 > timeit.timeit("[func(x) for x in range(20)]",
                             number=5, setup="from __main__ import func")
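To tie the unit-test description back to edge cases, here is a small self-contained sketch; the safe_div function and its cases are invented for illustration, showing one test for the normal path and one that exercises the edges.

```python
# Illustrative only: safe_div and its cases are made up for this example.

def safe_div(a, b):
    """Division that returns None instead of raising on b == 0."""
    if b == 0:
        return None
    return a / b

def test_safe_div_normal():
    assert safe_div(10, 2) == 5

def test_safe_div_edge_cases():
    assert safe_div(1, 0) is None   # divide-by-zero edge case
    assert safe_div(0, 5) == 0      # zero numerator
    assert safe_div(-9, 3) == -3    # negative operand

test_safe_div_normal()
test_safe_div_edge_cases()
```

A runner such as pytest would discover and execute the test_ functions automatically; here they are called directly to keep the sketch self-contained.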
Part of the introductory series to using Python for Vision Research, brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014

Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

my_int = 5
print my_int, type(my_int)

my_float = 5.0
print my_float, type(my_float)

my_boolean = False
print my_boolean, type(my_boolean)

my_string = 'hello'
print my_string, type(my_string)

One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed:

my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.

print my_list[1]
my_list[1] = 3.0
my_sublist = my_list[1:3]
print my_sublist
print type(my_sublist)

Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When a function has no output argument, it returns None.
# Function with a required and an optional argument
def regress(x, c=0, b=1):
    return (x*b)+c

print regress(5)        # Only required argument
print regress(5, 10, 3) # Use argument order
print regress(5, b=3)   # Specify the name to skip an optional argument

# Function without return argument
def divisible(a,b):
    if a%b:
        print str(a) + " is not divisible by " + str(b)
    else:
        print str(a) + " is divisible by " + str(b)

divisible(9,3)
res = divisible(9,2)
print res

# Function with multiple return arguments
def add_diff(a,b):
    return a+b, a-b

# Assigned as a tuple
res = add_diff(5,3)
print res

# Directly unpacked to two variables
a,d = add_diff(5,3)
print a
print d

Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this.

my_list = [1, False, 'boo']
my_list.append('extra element')
my_list.remove(False)
print my_list

The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly; you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead.

return_arg = my_list.append('another one')
print return_arg
print my_list

my_string = 'kumbaya, milord'
return_arg = my_string.replace('lord', 'lard')
print return_arg
print my_string

Do you remember why list functions are in-place, while string functions are not?

While lists are great, they are not very suitable for scientific computing.
Consider this example:

subj_length = [180.0,165.0,190.0,172.0,156.0]
subj_weight = [75.0,60.0,83.0,85.0,62.0]
subj_bmi = []

# EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects
# BMI = weight/(length/100)**2

Clearly, this is clumsy. MATLAB users would expect something like this to work:

subj_bmi = subj_weight/(subj_length/100)**2
mean_bmi = mean(subj_bmi)

But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do?

Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions that operate on them. Lists are converted to Numpy arrays by calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values.

import numpy as np

# Create a numpy array from a list
subj_length = np.array([180.0,165.0,190.0,172.0,156.0])
subj_weight = np.array([75.0,60.0,83.0,85.0,62.0])
print type(subj_length), type(subj_weight)

# EXERCISE 2: Try to complete the program now!
# Hint: np.mean() computes the mean of a numpy array
# Note that unlike MATLAB, Python does not need the '.' before elementwise operators
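For reference, here is one possible solution to Exercise 2, a sketch to check your own attempt against (try the exercise yourself first!).

```python
# One possible solution to Exercise 2 (peek only after trying it yourself!)
import numpy as np

subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

# Elementwise arithmetic, exactly like the MATLAB-style one-liner
subj_bmi = subj_weight / (subj_length / 100) ** 2
mean_bmi = np.mean(subj_bmi)

print(subj_bmi)
print(mean_bmi)
```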
Numpy is a very large package that we can't possibly cover completely, but we will cover enough to get you started. The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar.

# Multi-dimensional lists are just nested lists
# This is clumsy to work with
my_nested_list = [[1,2,3],[4,5,6]]
print my_nested_list
print len(my_nested_list)
print my_nested_list[0]
print len(my_nested_list[0])

# Numpy arrays handle multidimensionality better
arr = np.array(my_nested_list)
print arr        # nicer printing
print arr.shape  # direct access to all dimension sizes
print arr.size   # direct access to the total number of elements
print arr.ndim   # direct access to the number of dimensions

The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension, representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension, and represents the rows here. We could also make 3-D (or even higher-dimensional) arrays:

arr3d = np.array([ [[1,2,3],[4,5,6]] , [[7,8,9],[10,11,12]] ])
print arr3d
print arr3d.shape
print arr3d.size
print arr3d.ndim

Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. This implies that dimension sizes are listed from low to high in the shape tuple.

The second basic property of an array is its dtype. Contrary to list elements, numpy array elements are (typically) all of the same type.

# The type of a numpy array is always... numpy.ndarray
arr = np.array([[1,2,3],[4,5,6]])
print type(arr)

# So, let's do a computation
print arr/2

# Apparently we're doing our computations on integer elements!
# How do we find out?
print arr.dtype

# And how do we fix this?
arr = arr.astype('float') # Note: this is not an in-place function!
print arr.dtype
print arr/2

# Alternatively, we could have defined our dtype better from the start
arr = np.array([[1,2,3],[4,5,6]], dtype='float')
print arr.dtype

arr = np.array([[1.,2.,3.],[4.,5.,6.]])
print arr.dtype

To summarize, any numpy array is of the data type numpy.ndarray, but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array.

The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention: Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands.
# 1-D array, filled with zeros arr = np.zeros(3) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np.ones((3,2))*5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is equivalent to np.array(range(1.,16.,3)) arr = np.arange(1.,16.,3) print arr # Sequence from 1 to AND including 16, in 3 steps # This always returns an array with dtype float arr = np.linspace(1,16,3) print arr # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np.random.rand(5,2) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np.random.randint(0,10,(5,2)) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). 
# 1-D array, filled with zeros
arr = np.zeros(3)
print arr

# Multidimensional array of a given shape, filled with ones
# This automatically allows you to fill arrays with /any/ value
arr = np.ones((3,2))*5
print arr

# Sequence from 1 to AND NOT including 16, in steps of 3
# Note that using a float input makes the dtype a float as well
# This is equivalent to np.array(range(1.,16.,3))
arr = np.arange(1.,16.,3)
print arr

# Sequence from 1 to AND including 16, in 3 steps
# This always returns an array with dtype float
arr = np.linspace(1,16,3)
print arr

# Array of random numbers between 0 and 1, of a given shape
# Note that the inputs here are separate integers, not a tuple
arr = np.random.rand(5,2)
print arr

# Array of random integers from 0 to AND NOT including 10, of a given shape
# Here the shape is defined as a tuple again
arr = np.random.randint(0,10,(5,2))
print arr

Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple; axis=-1 always corresponds to the last dimension (the inner dimension: columns in case of 2D, layers in case of 3D).

arr0 = np.array([[1,2],[3,4]])
print arr0

# 'repeat' replicates elements along a given axis
# Each element is replicated directly after itself
arr = np.repeat(arr0, 3, axis=-1)
print arr

# We may even specify the number of times each element should be repeated
# The length of the tuple should correspond to the dimension length
arr = np.repeat(arr0, (2,4), axis=0)
print arr
print arr0
# 'tile' replicates the array as a whole
# Use a tuple to specify the number of tilings along each dimension
arr = np.tile(arr0, (2,4))
print arr

# 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors,
# where each array contains the X or Y coordinates corresponding to a given pixel in an image
x = np.arange(10)
y = np.arange(5)
print x,y
arrx, arry = np.meshgrid(x,y)
print arrx
print arry

Concatenating arrays allows you to make several arrays into one.

arr0 = np.array([[1,2],[3,4]])
arr1 = np.array([[5,6],[7,8]])

# 'concatenate' requires an axis to perform its operation on
# The original arrays should be put in a tuple
arr = np.concatenate((arr0,arr1), axis=0)
print arr # as new rows
arr = np.concatenate((arr0,arr1), axis=1)
print arr # as new columns

# Suppose we want to create a 3-D matrix from them,
# we have to create them as being three-dimensional
# (what happens if you don't?)
arr0 = np.array([[[1],[2]],[[3],[4]]])
arr1 = np.array([[[5],[6]],[[7],[8]]])
print arr0.shape, arr1.shape
arr = np.concatenate((arr0,arr1),axis=2)
print arr

# hstack, vstack, and dstack are short-hand functions
# which will automatically create these 'missing' dimensions
arr0 = np.array([[1,2],[3,4]])
arr1 = np.array([[5,6],[7,8]])

# vstack() concatenates rows
arr = np.vstack((arr0,arr1))
print arr

# hstack() concatenates columns
arr = np.hstack((arr0,arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0,arr1))
print arr

# Their counterparts are the hsplit, vsplit, dsplit functions
# They take a second argument: how do you want to split?
arr = np.random.rand(4,4)
print arr
print '--'

# Splitting into equal parts
arr0,arr1 = np.hsplit(arr,2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0,arr1,arr2 = np.hsplit(arr,(1,2))
print arr0
print arr1
print arr2

Finally, we can easily reshape and transpose arrays.

arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0,(5,2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0,(-1,5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr,(1,0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?
# EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than two 2D arrays
# ...do not use the np.meshgrid() function
def meshgrid3d(xvec, yvec):
    # fill in!

xvec = np.arange(10)
yvec = np.arange(5)
xy = meshgrid3d(xvec, yvec)
print xy
print xy[:,:,0] # = first output of np.meshgrid()
print xy[:,:,1] # = second output of np.meshgrid()

We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions.

arr = np.random.rand(5)
print arr

# Sorting and shuffling
res = arr.sort()
print arr # in-place!!!
print res

res = np.random.shuffle(arr)
print arr # in-place!!!
print res

# Min, max, mean, standard deviation
arr = np.random.rand(5)
print arr
mn = np.min(arr)
mx = np.max(arr)
print mn, mx
mu = np.mean(arr)
sigma = np.std(arr)
print mu, sigma

# Some functions allow you to specify an axis to work along, in case of multidimensional arrays
arr2d = np.random.rand(3,5)
print arr2d
print np.mean(arr2d, axis=0)
print np.mean(arr2d, axis=1)

# Trigonometric functions
# Note: Numpy works with radians, not degrees
arr = np.random.rand(5)
print arr
sn = np.sin(arr*2*np.pi)
cs = np.cos(arr*2*np.pi)
print sn
print cs

# Exponents and logarithms
arr = np.random.rand(5)
print arr
xp = np.exp(arr)
print xp
print np.log(xp)

# Rounding
arr = np.random.rand(5)
print arr
print arr*5
print np.round(arr*5)
print np.floor(arr*5)
print np.ceil(arr*5)

A complete list of all numpy functions can be found at the Numpy website. Alternatively, a Google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well.
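Returning to Exercise 4 above, here is one possible meshgrid3d implementation, a sketch built from np.tile, np.reshape and np.dstack (other combinations of the replication functions work just as well; check your own solution against it only after trying).

```python
# One possible solution to Exercise 4, deliberately avoiding np.meshgrid()
import numpy as np

def meshgrid3d(xvec, yvec):
    # replicate the x vector as rows, the y vector as columns
    arrx = np.tile(xvec, (len(yvec), 1))
    arry = np.tile(np.reshape(yvec, (-1, 1)), (1, len(xvec)))
    # stack the two 2D arrays as layers of one 3D array
    return np.dstack((arrx, arry))

xy = meshgrid3d(np.arange(10), np.arange(5))
print(xy.shape)   # (5, 10, 2)
```

Here xy[:,:,0] matches the first output of np.meshgrid() and xy[:,:,1] the second.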
Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely. Use a concatenation function and a statistical function to obtain the same result!

# EXERCISE 5: Make a better version of Exercise 3 with what you've just learned
arr = np.array([[1,2,3],[4,5,6],[7,8,9]], dtype='float')

# What we had:
print np.array([(arr[:,0]+arr[:,1]+arr[:,2])/3,(arr[0,:]+arr[1,:]+arr[2,:])/3])

# Now the new version:

A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by:

$grating = \sin(xf)$

where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by:

$gaussian = e^{-(x^2+y^2)/2}$

where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals:

$gabor = grating \times gaussian$

To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast).

Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result.
# EXERCISE 6: Create a Gabor patch of 100 by 100 pixels
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Define the 1D coordinate values
# Tip: use 100 equally spaced values between -np.pi and np.pi

# Step 2: Create the 2D x and y coordinate arrays
# Tip: use np.meshgrid()

# Step 3: Create the grating
# Tip: Use a frequency of 10

# Step 4: Create the Gaussian
# Tip: use np.exp() to compute a power of e

# Step 5: Create the Gabor

# Visualize your result
# (we will discuss how this works later)
plt.figure(figsize=(15,5))
plt.subplot(131)
plt.imshow(grating, cmap='gray')
plt.subplot(132)
plt.imshow(gaussian, cmap='gray')
plt.subplot(133)
plt.imshow(gabor, cmap='gray')
plt.show()

The dtype of a Numpy array can also be boolean, that is, True or False. It is then particularly convenient that, given an array of the same shape, these boolean arrays can be used to index other arrays.

# Check whether each element of a 2x2 array is greater than 0.5
arr = np.random.rand(2,2)
print arr
res = arr>0.5
print res
print '--'

# Analogously, check it against each element of a second 2x2 array
arr2 = np.random.rand(2,2)
print arr2
res = arr>arr2
print res

# We can use these boolean arrays as indices into other arrays!
# Add 0.5 to any element smaller than 0.5
arr = np.random.rand(2,2)
print arr
res = arr<0.5
print res
arr[res] = arr[res]+0.5
print arr

# Or, shorter:
arr[arr<0.5] = arr[arr<0.5] + 0.5
# Or, even shorter:
arr[arr<0.5] += 0.5

While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators: and, or, xor, not.
arr = np.array([[1,2,3],[4,5,6]])

# The short-hand forms for elementwise boolean operators are: & | ~ ^
# Use parentheses around such expressions
res = (arr<4) & (arr>1)
print res
print '--'
res = (arr<2) | (arr==5)
print res
print '--'
res = (arr>3) & ~(arr==6)
print res
print '--'
res = (arr>3) ^ (arr<5)
print res

# To convert boolean indices to normal integer indices, use the 'nonzero' function
print res
print np.nonzero(res)
print '--'

# Separate row and column indices
print np.nonzero(res)[0]
print np.nonzero(res)[1]
print '--'

# Or stack and transpose them to get index pairs
pairs = np.vstack(np.nonzero(res)).T
print pairs

Numpy is excellent at making programs that involve iterative operations more efficient. This requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: you throw a die continuously until you either encounter the sequence '123' or '111'. Which one can be expected to occur sooner?

This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops:

import numpy as np

# We will keep track of the sum of first-occurrence positions,
# as well as the number of positions entered into this sum.
# This way we can compute the mean.
sum111 = 0.
n111 = 0.
sum123 = 0.
n123 = 0.
for sim in range(5000):

    # Keep track of how far along we are in finding a given pattern
    d111 = 0
    d123 = 0

    for throw in range(2000):

        # Throw a die
        die = np.random.randint(1,7)

        # 111 case
        if d111==3:
            pass
        elif die==1 and d111==0:
            d111 = 1
        elif die==1 and d111==1:
            d111 = 2
        elif die==1 and d111==2:
            d111 = 3
            sum111 = sum111 + throw
            n111 = n111 + 1
        else:
            d111 = 0

        # 123 case
        if d123==3:
            pass
        elif die==1:
            d123 = 1
        elif die==2 and d123==1:
            d123 = 2
        elif die==3 and d123==2:
            d123 = 3
            sum123 = sum123 + throw
            n123 = n123 + 1
        else:
            d123 = 0

        # Don't continue if both have been found
        if d111==3 and d123==3:
            break

# Compute the averages
avg111 = sum111/n111
avg123 = sum123/n123
print avg111, avg123
# ...can you spot the crucial difference between both patterns?

However this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops, and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence. You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax; use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing!

# EXERCISE 7: Vectorize the above program
# You get these lines for free...
import numpy as np
throws = np.random.randint(1,7,(5000,2000))
one = (throws==1)
two = (throws==2)
three = (throws==3)

# Find out where all the 111 and 123 sequences occur
find111 =
find123 =

# Then at what index they /first/ occur in each sequence
first111 =
first123 =

# Compute the average first occurrence location for both situations
avg111 =
avg123 =

# Print the result
print avg111, avg123

In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die-throwing sequence when the first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing!

As vision scientists, images are a natural stimulus to work with. The Python Imaging Library (PIL) will help us handle images, similar to the Image Processing Toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow, for which an excellent documentation can be found here. The module to import is however still called 'PIL'. In practice, we will mostly use its Image module.

from PIL import Image

The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook and change the filename string in the code to match the actual image filename.
# Opening an image is simple enough:
# Construct an Image object with the filename as an argument
im = Image.open('python.jpg')

# It is now represented as an object of the 'JpegImageFile' type
print im

# There are some useful member variables we can inspect here
print im.format # format in which the file was saved
print im.size   # pixel dimensions
print im.mode   # luminance/color model used

# We can even display it
# NOTE this is not perfect; meant for debugging
im.show()

If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note that you must always close the window before you can continue using the notebook. (Tkinter is a package for writing graphical user interfaces in Python; we will not discuss it here.)

# Alternative quick-show method
from Tkinter import Tk, Button
from PIL import ImageTk

def alt_show(im):
    win = Tk()
    tkimg = ImageTk.PhotoImage(im)
    Button(image=tkimg).pack()
    win.mainloop()

alt_show(im)

Once we have opened the image in PIL, we can convert it to a Numpy object.

# We can convert PIL images to an ndarray!
arr = np.array(im)
print arr.dtype # uint8 = unsigned 8-bit integer (values 0-255 only)
print arr.shape # Why do we have three layers?

# Let's make it a float type for doing computations
arr = arr.astype('float')
print arr.dtype

# This opens up unlimited possibilities for image processing!
# For instance, let's make this a grayscale image, and add white noise
max_noise = 50
arr = np.mean(arr,-1)
noise = (np.random.rand(arr.shape[0],arr.shape[1])-0.5)*2
arr = arr + noise*max_noise

# Make sure we don't exceed the 0-255 limits of a uint8
arr[arr<0] = 0
arr[arr>255] = 255

The conversion back to PIL is easy as well.

# When going back to PIL, it's a good idea to explicitly
# specify the right dtype and the mode,
# because automatic conversions might mess things up
arr = arr.astype('uint8')
imn = Image.fromarray(arr, mode='L')
print imn.format
print imn.size
print imn.mode # L = grayscale
imn.show() # or use alt_show() from above if show() doesn't work well for you

# Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255
# can be converted to an image object in this way

The main operations of the PIL Image module you will probably use are its resizing and conversion capabilities.

im = Image.open('python.jpg')

# Make the image smaller
ims = im.resize((800,600))
ims.show()

# Or you could even make it larger
# The resample argument allows you to specify the method used
iml = im.resize((1280,1024), resample=Image.BILINEAR)
iml.show()

# Rotation is similar (unit=degrees)
imr = im.rotate(10, resample=Image.BILINEAR, expand=False)
imr.show()

# If we want to lose the black corners, we can crop (unit=pixels)
imr = imr.crop((100,100,924,668))
imr.show()

# 'convert' allows conversion between different color models
# The most important here is between 'L' (luminance) and 'RGB' (color)
imbw = im.convert('L')
imbw.show()
print imbw.mode
imrgb = imbw.convert('RGB')
imrgb.show()
print imrgb.mode

# Note that the grayscale conversion of PIL is more sophisticated
# than simply averaging the three layers in Numpy (it is a weighted average)
# Also note that the color information is effectively lost after converting to L

The ImageFilter module implements several types of filters to execute on any image. You can also define your own.

from PIL import Image, ImageFilter

im = Image.open('python.jpg')
imbw = im.convert('L')

# Contour detection filter
imf = imbw.filter(ImageFilter.CONTOUR)
imf.show()

# Blurring filter
imf = imbw.filter(ImageFilter.GaussianBlur(radius=3))
imf.show()

Similarly, you can import the ImageDraw module to draw shapes and text onto an image.
from PIL import Image, ImageDraw

im = Image.open('python.jpg')

# You need to attach a drawing object to the image first
imd = ImageDraw.Draw(im)

# Then you work on this object
imd.rectangle([10,10,100,100], fill=(255,0,0))
imd.line([(200,200),(200,600)], width=10, fill=(0,0,255))
imd.text([500,500], 'Python', fill=(0,255,0))

# The results are automatically applied to the Image object
im.show()

Finally, you can of course save these image objects back to a file on the disk.

# PIL will figure out the file type by the extension
im.save('python.bmp')

# There are also further options, like compression quality (0-100)
im.save('python_bad.jpg', quality=5)

We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference looks like, comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant with the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of boolean indexing.

As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size.
# EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB
# Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades of green
# The luminance of these colors should correspond to the size of the difference
#
# Extra 1: Maximize the overall contrast in your image
#
# Extra 2: Save as three PNG files, of different sizes (large, medium, small)

While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Common figures such as scatter plots, histograms and bar charts can be generated and manipulated very simply.

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# As data for our plots, we will use the pixel values of the image
im = Image.open('python.jpg')
arr = np.array(im).astype('float')
R = arr[:,:,0].flatten()
G = arr[:,:,1].flatten()
B = arr[:,:,2].flatten()

# QUICKPLOT 1: Correlation of luminances in the image
# This works if you want to be very quick:
# (xb means blue crosses, .g are green dots)
plt.plot(R, B, 'xb')
plt.plot(R, G, '.g')

# However we will take a slightly more disciplined approach here
# Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255

# Create a square figure
plt.figure(figsize=(5,5))

# Plot both scatter clouds
# marker: self-explanatory
# linestyle: 'None' because we want no line
# color: RGB triplet with values 0-1
plt.plot(R, B, marker='x', linestyle='None', color=(0,0,0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0,0.35,0))

# Make the axis scales equal, and name them
plt.axis([0,255,0,255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Show the result
plt.show()

# QUICKPLOT 2: Histogram of 'red' values in the image
plt.hist(R)

# ...and now a nicer version

# Make a non-square figure
plt.figure(figsize=(7,5))

# Make a histogram with 25 red bins
# Here we simply use the abbreviation 'r' for red
plt.hist(R, bins=25, color='r')

# Set the X axis limits and label
plt.xlim([0,255])
plt.xlabel('Red value', size=16)

# Remove the Y ticks and labels by setting them to an empty list
plt.yticks([])

# Remove the top ticks by specifying the 'top' argument
plt.tick_params(top=False)

# Add two vertical lines for the mean and the median
plt.axvline(np.mean(R), color='g', linewidth=3, label='mean')
plt.axvline(np.median(R), color='b', linewidth=1, linestyle=':', label='median')

# Generate a legend based on the label= arguments
plt.legend(loc=2)

# Show the plot
plt.show()

# QUICKPLOT 3: Bar chart of mean+std of RGB values
plt.bar([0,1,2],[np.mean(R), np.mean(G), np.mean(B)],
        yerr=[np.std(R), np.std(G), np.std(B)])

# ...and now a nicer version

# Make a non-square figure
plt.figure(figsize=(7,5))

# Plot the bars with various options
# x location where bars start, y height of bars
# yerr: data for error bars
# width: width of the bars
# color: surface color of bars
# ecolor: color of error bars ('k' means black)
plt.bar([0,1,2], [np.mean(R), np.mean(G), np.mean(B)],
        yerr=[np.std(R), np.std(G), np.std(B)],
        width=0.75, color=['r','g','b'], ecolor='k')

# Set the X-axis limits and tick labels
plt.xlim((-0.25,3.))
plt.xticks(np.array([0,1,2])+0.75/2, ['Red','Green','Blue'], size=16)

# Remove all X-axis ticks by setting their length to 0
plt.tick_params(length=0)

# Set a figure title
plt.title('RGB Color Channels', size=16)

# Show the figure
plt.show()

Saving a figure to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using: you have to call it BEFORE plt.show() and, in the case of this notebook, within the same code box. The reason for this is that Matplotlib automatically decides for you which plot commands belong to the same figure, based on these criteria.

# So, copy-paste this line into the box above, before the plt.show() command
plt.savefig('bar.png')

# There are some further formatting options possible, e.g.
plt.savefig('bar.svg', dpi=300, bbox_inches=('tight'), pad_inches=(1,1), facecolor=(0.8,0.8,0.8)) Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() # A simple grayscale luminance map # cmap: colormap used to display the values plt.figure(figsize=(5,5)) plt.imshow(np.mean(arr,2), cmap='gray') plt.show() # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to 0-255 first (maximum contrast) # Moreover, colormaps other than grayscale can be used plt.figure(figsize=(5,5)) plt.imshow(np.mean(arr,2)+100, cmap='jet') # or hot, hsv, cool,... plt.show() # as you can see, adding 100 didn't make a difference here As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on. For this, we will have to make a distinction between Figure and Axes objects. 
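As an aside on the imshow example above: the default relative scaling is equivalent to the plain-numpy normalization sketched below. (If you instead want absolute luminances, imshow accepts vmin and vmax arguments to pin the scale; the helper function here is only for illustration and is not part of Matplotlib.)

```python
import numpy as np

def rescale_to_full_range(a):
    # Map an array linearly onto 0..255, which is what imshow does by default
    # (assumes the array is not constant, otherwise we would divide by zero)
    a = a.astype(float)
    return (a - a.min()) / (a.max() - a.min()) * 255

gray = np.array([[10., 110.], [60., 210.]])
print(rescale_to_full_range(gray))        # spans the full 0-255 range
print(rescale_to_full_range(gray + 100))  # identical: a constant offset is scaled away
```

This is exactly why adding 100 made no visible difference in the imshow cell above.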
# 'Figure' objects are returned by the plt.figure() command fig = plt.figure(figsize=(7,5)) print type(fig) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig.add_axes([0.1,0.1,0.4,0.7], xlabel='The X Axis') ax1 = fig.add_axes([0.2,0.2,0.5,0.2], axisbg='gray') ax2 = fig.add_axes([0.4,0.5,0.4,0.4], projection='polar') print type(ax0), type(ax1), type(ax2) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig.savefig('fig.png') # It also allows you to add text to the figure as a whole, across the different axes objects fig.text(0.5, 0.5, 'splatter', color='r') # The overall figure title can be set separate from the individual plot titles fig.suptitle('What a mess', size=18) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig.show() # Create a new figure fig = plt.figure(figsize=(15,10)) # As we saw, many of the axes properties can already be set at their creation ax0 = fig.add_axes([0.,0.,0.25,0.25], xticks=(0.1,0.5,0.9), xticklabels=('one','thro','twee')) ax1 = fig.add_axes([0.3,0.,0.25,0.25], xscale='log', ylim=(0,0.5)) ax2 = fig.add_axes([0.6,0.,0.25,0.25]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R.sort() G.sort() B.sort() ax2.plot(R, color='r', linestyle='-', marker='None') # plot directly to an Axes object of choice plt.plot(G, color='g', linestyle='-', marker='None') # plt.plot() just plots to the last created Axes object ax2.plot(B, color='b', linestyle='-', marker='None') # Other 
top-level pyplot functions are simply renamed to 'set_' functions here ax1.set_xticks([]) plt.yticks([]) # Show the figure fig.show() Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. # Create a new figure fig = plt.figure(figsize=(15,5)) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig.add_subplot(231) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt.subplot(232) # Optional arguments are similar to those of add_axes() ax2 = fig.add_subplot(233, title='three') # We can use these Axes object as before ax3 = fig.add_subplot(234) ax3.plot(R, 'r-') ax3.set_xticks([]) ax3.set_yticks([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig.add_subplot(236, projection='polar') # We can adjust the spacings afterwards fig.subplots_adjust(hspace=0.4) # And even make room in the figure for a plot that doesn't fit the grid fig.subplots_adjust(right=0.5) ax6 = fig.add_axes([0.55,0.1,0.3,0.8]) # Show the figure fig.show() Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good. # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. 
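For reference, here is one possible sketch for Exercise 9 (just one way to do it; the Agg backend line is only there so the script also runs without a display, and the styling choices are my own):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # off-screen rendering; omit this line inside the notebook
import matplotlib.pyplot as plt

# One full cycle: x from 0 to 2*pi
x = np.linspace(0, 2 * np.pi, 500)
y_top = np.sin(x)
y_bottom = np.sin(x ** 2)

# 2:1 aspect ratio (twice as tall as wide), two subplots stacked vertically
fig = plt.figure(figsize=(5, 10))
ax0 = fig.add_subplot(211)
ax1 = fig.add_subplot(212)

ax0.plot(x, y_top, 'b-')
ax1.plot(x, y_bottom, 'r-')

# Same scale on both panels
for ax in (ax0, ax1):
    ax.set_xlim(0, 2 * np.pi)
    ax.set_ylim(-1.1, 1.1)

# fig.show()  # display it when working interactively
```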
First, many figure elements can be manually added through top-level or Axes functions:

# This uses the result of the exercise above
# You have to copy-paste it into the same code-box, before the fig.show()

# Add horizontal lines
ax0.axhline(0, color='g')
ax0.axhline(0.5, color='gray', linestyle=':')
ax0.axhline(-0.5, color='gray', linestyle=':')
ax1.axhline(0, color='g')
ax1.axhline(0.5, color='gray', linestyle=':')
ax1.axhline(-0.5, color='gray', linestyle=':')

# Add text to the plots
ax0.text(0.1, -0.9, '$y = sin(x)$', size=16)  # math mode for proper formula formatting!
ax1.text(0.1, -0.9, '$y = sin(x^2)$', size=16)

# Annotate certain points with a value
for x_an in np.linspace(0, 2*np.pi, 9):
    ax0.annotate(str(round(np.sin(x_an), 2)), (x_an, np.sin(x_an)))

# Add an arrow (x, y, xlength, ylength)
ax0.arrow(np.pi-0.5, -0.5, 0.5, 0.5, head_width=0.1, length_includes_head=True)

Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right, attached to a specific Axes object. They can be retrieved, manipulated, created from scratch, and added to existing Axes objects.
# This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1.get_xaxis() print type(xax) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax.get_majorticklines() print len(xaxt) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt[6].get_color() xaxt[6].set_color('g') xaxt[6].set_marker('x') xaxt[6].set_markersize(10) # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0.get_lines() print ln ln[0].set_color('g') ln[0].set_marker('o') ln[0].set_markerfacecolor('b') ln[0].set_markevery(100) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib.patches.Ellipse((np.pi,0), 1., 1., color='r') ax0.add_artist(ell) ell.set_hatch('//') ell.set_edgecolor('black') ell.set_facecolor((0.9,0.9,0.9)) Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines #() # Do the plotting plt.figure(figsize=(5,5)) plt.plot(R, B, marker='x', linestyle='None', color=(0,0,0.6)) plt.plot(R, G, marker='.', linestyle='None', color=(0,0.35,0)) # Tweak the plot plt.axis([0,255,0,255]) plt.xlabel('Red value') plt.ylabel('Green/Blue value') # Fill in your code... # Show the result plt.show() Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here. We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np.random.rand(30) # Do a t-test with a H0 for the mean of 0.4 t,p = stats.ttest_1samp(data,0.4) print p # Generate another sample of random numbers, with mean 0.4 data2 = np.random.rand(30)-0.1 # Do a t-test that these have the same mean t,p = stats.ttest_ind(data, data2) print p import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Simulate the size of the F statistic when comparing three conditions # Given a constant n, and an increasing true effect size. 
true_effect = np.linspace(0,0.5,500) n = 100 Fres = [] # Draw random normally distributed samples for each condition, and do a one-way ANOVA for eff in true_effect: c1 = stats.norm.rvs(0,1,size=n) c2 = stats.norm.rvs(eff,1,size=n) c3 = stats.norm.rvs(2*eff,1,size=n) F,p = stats.f_oneway(c1,c2,c3) Fres.append(F) # Create the plot plt.figure() plt.plot(true_effect,Fres,'r*-') plt.xlabel('True Effect') plt.ylabel('F') plt.show() import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Compute the pdf and cdf of normal distributions, with increasing sd's # Then plot them in different colors # (of course, many other distributions are also available) x = np.linspace(-5,5,1000) sds = np.linspace(0.25,2.5,10) cols = np.linspace(0.15,0.85,10) # Create the figure fig = plt.figure(figsize=(10,5)) ax0 = fig.add_subplot(121) ax1 = fig.add_subplot(122) # Compute the densities, and plot them for i,sd in enumerate(sds): y1 = stats.norm.pdf(x,0,sd) y2 = stats.norm.cdf(x,0,sd) ax0.plot(x,y1,color=cols[i]*np.array([1,0,0])) ax1.plot(x,y2,color=cols[i]*np.array([0,1,0])) # Show the figure plt.show() The stats module of SciPy contains more statistical distributions and further tests such as a Kruskall-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here. For serious statistical models however, you should be looking at the statsmodels package, or the rpy interfacing package, allowing R to be called from within Python. FFT is commonly used to process or analyze images (as well as sound). Numpy has a FFT package, numpy.fft, but SciPy has its own set of functions as well in scipy.fftpack. Both are very similar, you can use whichever package you like. I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine-waves of different frequencies, amplitudes and phases. 
A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function. import numpy as np import scipy.fftpack as fft # The original data: a step function data = np.zeros(200, dtype='float') data[25:100] = 1 # Decompose into sinusoidal components # The result is a series of complex numbers as long as the data itself res = fft.fft(data) # FREQUENCY is implied by the ordering, but can be retrieved as well # It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart # Note: in case of real input data, the FFT results will be symmetrical freq = fft.fftfreq(data.size) # AMPLITUDE is given by np.abs() of the complex numbers amp = np.abs(res) # PHASE is given by np.angle() of the complex numbers phase = np.angle(res) # We can plot each component separately plt.figure(figsize=(15,5)) plt.plot(data, 'k-', lw=3) xs = np.linspace(0,data.size-1,data.size)*2*np.pi for i in range(len(res)): ys = np.cos(xs*freq[i]+phase[i]) * (amp[i]/data.size) plt.plot(ys.real, 'r:', lw=1) plt.show() # Can you then plot what the SUM of all these components looks like? 
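One way to answer the closing question above (a sketch): summing every cosine component reproduces the original step function exactly, up to floating-point noise.

```python
import numpy as np
import scipy.fftpack as fft

# Same step function as above
data = np.zeros(200, dtype='float')
data[25:100] = 1

res = fft.fft(data)
freq = fft.fftfreq(data.size)
amp = np.abs(res)
phase = np.angle(res)

# Sum all components instead of plotting them one by one
xs = np.arange(data.size) * 2 * np.pi
summed = np.zeros(data.size)
for i in range(len(res)):
    summed += np.cos(xs * freq[i] + phase[i]) * (amp[i] / data.size)

print(np.max(np.abs(summed - data)))  # effectively zero: the sum IS the original signal
```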
Of course, there is a short-hand function available for reconstructing the original input from the FFT result: # ifft = inverse fft reconstructed = fft.ifft(res) plt.figure(figsize=(15,5)) plt.plot(data,'k-', lw=3) plt.plot(reconstructed, 'r:', lw=3) plt.show() # Note that /some/ information has been lost, but very little print 'Total deviation:', np.sum(np.abs(data-reconstructed)) This allows us to perform operations in the frequency domain, like applying a high-pass filter: # Set components with low frequencies equal to 0 resf = res.copy() mask = np.abs(fft.fftfreq(data.size)) < 0.25 resf[mask] = 0 # ifft the modified components filtered = fft.ifft(resf) # And the result is high-pass filtered plt.figure(figsize=(15,5)) plt.plot(data,'k-', lw=3) plt.plot(filtered, 'r:', lw=3) plt.show() The exact same logic can be applied to 2D images, using the ftt2 and ifft2 functions. For instance, let us equalize the amplitude spectrum of an image, so that all frequencies are equally strong. import numpy as np from PIL import Image import matplotlib.pyplot as plt import scipy.fftpack as fft im = Image.open('python.jpg').convert('L') arr = np.array(im, dtype='float') res = fft.fft2(arr) # Just set all amplitudes to one new_asp = np.ones(res.shape, dtype='float') # Then recombine the complex numbers real_part = new_asp * np.cos(np.angle(res)) imag_part = new_asp * np.sin(np.angle(res)) eq = real_part+(imag_part*1j) # And do the ifft arr_eq = fft.ifft2(eq).real # Show the result # Clear the high frequencies dominate now plt.figure() plt.imshow(arr_eq, cmap='gray') plt.show() Note that in practice, it is often recommended to use image dimensions that are a power of 2, for efficiency. The fft functions allow you to automatically pad images to a given size; at the end you can just crop the result to obtain the original image size again.
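A sketch of that padding-and-cropping workflow: scipy.fftpack's fft2 takes a shape argument that zero-pads the input before transforming (the newer scipy.fft module names the same argument s). The next_pow2 helper below is my own, not part of SciPy.

```python
import numpy as np
import scipy.fftpack as fft

arr = np.random.rand(300, 220)  # an awkward, non-power-of-two image size

def next_pow2(n):
    # Smallest power of two >= n
    return 1 << (n - 1).bit_length()

padded_shape = (next_pow2(arr.shape[0]), next_pow2(arr.shape[1]))  # (512, 256)
res = fft.fft2(arr, shape=padded_shape)  # input is zero-padded automatically

# The inverse transform returns the padded array; crop to get the original size back
restored = fft.ifft2(res).real[:arr.shape[0], :arr.shape[1]]
print(restored.shape)  # (300, 220)
```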
This is a simple web application for users who are new to ASP.NET. It shows how we can retrieve an image from a database and display it in a GridView. Sometimes we need to upload images to a web application and store them in a database, which stores images in binary format. Since that can cause a loss of image quality, we can instead store the image path in the database, retrieve that path from it, and display the image from that location in the web page. In order to do this, first we need to use ADO.NET to connect to the database. The database I use here is SQL Server. In the database shown, the Profile_Picture field contains the image path. In my case, I have stored all my images in the application directory. You may change this to any other directory, like ~\myfolder\myimage.jpg. Our application reads images from its current directory, so if you use a different folder, you have to set the application's current directory by calling Directory.SetCurrentDirectory of the System.IO namespace. We also need to set some properties in the GridView. In the GridView's ImageField column, perform the following: DataImageUrlField = ImageFieldName. The code required for the explanation above is very simple.

using System;
using System.Data;
using System.Data.SqlClient;

public partial class _Default : System.Web.UI.Page
{
    SqlConnection conn = new SqlConnection();

    protected void Page_Load(object sender, EventArgs e)
    {
        conn.ConnectionString = "Data Source=MyServer; Integrated Security=True; database=test";
        Load_GridData(); // call method below
    }

    void Load_GridData()
    {
        conn.Open(); // open the connection
        SqlDataAdapter Sqa = new SqlDataAdapter("select * from picture", conn);
        DataSet ds = new DataSet();
        Sqa.Fill(ds); // fill the dataset
        GridView1.DataSource = ds; // give data to GridView
        GridView1.DataBind();
        conn.Close();
    }
}

Using images in your web application is always interesting, and after all, they are from a database!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

Chona1171 wrote: There are already articles on CodeProject explaining how to do this. Also, the title is misleading: it suggests that the image itself is being pulled from the database, and not just the URL.

Abhijit Jana wrote: I don't think it's a misleading title. Either you can store the image as a byte array in the DB, or you can save the URL. If the image size is big, it's always better to store it on disk and pull the URL from the database. This totally depends on the requirements. Agreed, though, this article was written more than 3 years back!
While developing a screen I noticed the navigation to the screen getting sluggish. If I open the diagnostics window I can see the SQLTags Changes/Second spike (around 40) when I load the screen. It takes about 2 seconds after clicking the navigation button for the screen to load. The other main screens load almost instantly.

The screen consists of about a dozen conveyor belts. Each belt is animated through a script bound to property change. Each belt (rectangle) then has a custom property of UDT type. That custom property (of each rectangle) is then bound to a conveyor UDT tag containing about 30 members. I'm only using three of these members (fault, running, test mode) to animate the state, but I didn't think it would have much performance impact. Am I overcomplicating and overloading the screen by referencing the entire UDT? Below is the simple animation script bound to property change.

from java.awt import Color

if event.source.EquipmentTag.FaultIndication:
    event.source.fillPaint = Color(255,0,0)
elif event.source.EquipmentTag.TestModeIndication:
    event.source.fillPaint = Color(255,255,71)
elif event.source.EquipmentTag.RunningIndication:
    event.source.fillPaint = Color(0,255,0)
else:
    event.source.fillPaint = Color(0,0,255)

I should also note I have a button for each conveyor that also references the entire UDT. The button opens a popup and passes the UDT into it. Thanks for any input.
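One common first mitigation (my suggestion, not something confirmed in this thread): propertyChange fires for every property on the component, so guard the script to bail out early unless one of the three members you actually use changed, and keep the state-to-color mapping in a plain function. Sketched below as ordinary Python so it can be tested outside the client. The property names come from the script above, but should_repaint and belt_color are hypothetical helper names, and the exact propertyName your binding delivers is worth checking with a print first.

```python
# The three conveyor UDT members that actually drive the animation
RELEVANT = ("FaultIndication", "TestModeIndication", "RunningIndication")

def should_repaint(property_name):
    # propertyChange fires for every property; ignore the ~27 members we never read
    return property_name in RELEVANT

def belt_color(fault, test_mode, running):
    # Same priority order as the original script: fault > test mode > running > idle
    if fault:
        return (255, 0, 0)      # red
    elif test_mode:
        return (255, 255, 71)   # yellow
    elif running:
        return (0, 255, 0)      # green
    return (0, 0, 255)          # blue

# Inside the Ignition propertyChange event, the handler would then look roughly like:
#
#   if should_repaint(event.propertyName):
#       from java.awt import Color
#       tag = event.source.EquipmentTag
#       r, g, b = belt_color(tag.FaultIndication,
#                            tag.TestModeIndication,
#                            tag.RunningIndication)
#       event.source.fillPaint = Color(r, g, b)

print(belt_color(True, False, True))  # (255, 0, 0)
```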
Object Oriented Shortcuts This chapter talks in detail about various built-in functions in Python, file I/O operations and overloading concepts. Python Built-in Functions The Python interpreter has a number of functions called built-in functions that are readily available for use. In its latest version, Python contains 68 built-in functions as listed in the table given below − This section discusses some of the important functions in brief − len() function The len() function gets the length of strings, list or collections. It returns the length or number of items of an object, where object can be a string, list or a collection. >>> len(['hello', 9 , 45.0, 24]) 4 len() function internally works like list.__len__() or tuple.__len__(). Thus, note that len() works only on objects that has a __len__() method. >>> set1 {1, 2, 3, 4} >>> set1.__len__() 4 However, in practice, we prefer len() instead of the __len__() function because of the following reasons − It is more efficient. And it is not necessary that a particular method is written to refuse access to special methods such as __len__. It is easy to maintain. It supports backward compatibility. Reversed(seq) It returns the reverse iterator. seq must be an object which has __reversed__() method or supports the sequence protocol (the __len__() method and the __getitem__() method). It is generally used in for loops when we want to loop over items from back to front. >>> normal_list = [2, 4, 5, 7, 9] >>> >>> class CustomSequence(): def __len__(self): return 5 def __getitem__(self,index): return "x{0}".format(index) >>> class funkyback(): def __reversed__(self): return 'backwards!' >>> for seq in normal_list, CustomSequence(), funkyback(): print('\n{}: '.format(seq.__class__.__name__), end="") for item in reversed(seq): print(item, end=", ") The for loop at the end prints the reversed list of a normal list, and instances of the two custom sequences. 
The output shows that reversed() works on all three of them, but gives very different results when we define __reversed__.

Output

You can observe the following output when you execute the code given above −

list: 9, 7, 5, 4, 2,
CustomSequence: x4, x3, x2, x1, x0,
funkyback: b, a, c, k, w, a, r, d, s, !,

Enumerate

The enumerate() method adds a counter to an iterable and returns the enumerate object. The syntax of enumerate() is −

enumerate(iterable, start = 0)

Here the second argument start is optional, and by default the index starts with zero (0).

>>> # Enumerate
>>> names = ['Rajesh', 'Rahul', 'Aarav', 'Sahil', 'Trevor']
>>> enumerate(names)
<enumerate object at 0x031D9F80>
>>> list(enumerate(names))
[(0, 'Rajesh'), (1, 'Rahul'), (2, 'Aarav'), (3, 'Sahil'), (4, 'Trevor')]
>>>

So enumerate() returns an iterator which yields a tuple that keeps count of the elements in the sequence passed. Since the return value is an iterator, accessing it directly is not very useful. A better approach for enumerate() is keeping count within a for loop.

>>> for i, n in enumerate(names):
        print('Names number: ' + str(i))
        print(n)

Names number: 0
Rajesh
Names number: 1
Rahul
Names number: 2
Aarav
Names number: 3
Sahil
Names number: 4
Trevor

There are many other functions in the standard library, and here is another list of some more widely used functions −

hasattr, getattr, setattr and delattr, which allow attributes of an object to be manipulated by their string names.

all and any, which accept an iterable object and return True if all, or any, of the items evaluate to be true.

zip, which takes two or more sequences and returns a new sequence of tuples, where each tuple contains a single value from each sequence.

File I/O

The concept of files is closely associated with object-oriented programming. Python has wrapped the interface that operating systems provide in an abstraction that allows us to work with file objects.
The open() built-in function is used to open a file and return a file object. It is the most commonly used function with two arguments − open(filename, mode) The open() function calls two argument, first is the filename and second is the mode. Here mode can be ‘r’ for read only mode, ‘w’ for only writing (an existing file with the same name will be erased), and ‘a’ opens the file for appending, any data written to the file is automatically added to the end. ‘r+’ opens the file for both reading and writing. The default mode is read only. On windows, ‘b’ appended to the mode opens the file in binary mode, so there are also modes like ‘rb’, ‘wb’ and ‘r+b’. >>>>> file = open('datawork','w') >>> file.write(text) 22 >>> file.close() In some cases, we just want to append to the existing file rather then over-writing it, for that we could supply the value ‘a’ as a mode argument, to append to the end of the file, rather than completely overwriting existing file contents. >>> f = open('datawork','a') >>>>> f.write(text1) 20 >>> f.close() Once a file is opened for reading, we can call the read, readline, or readlines method to get the contents of the file. The read method returns the entire contents of the file as a str or bytes object, depending on whether the second argument is ‘b’. For readability, and to avoid reading a large file in one go, it is often better to use a for loop directly on a file object. For text files, it will read each line, one at a time, and we can process it inside the loop body. For binary files however it’s better to read fixed-sized chunks of data using the read() method, passing a parameter for the maximum number of bytes to read. >>> f = open('fileone','r+') >>> f.readline() 'This is the first line. \n' >>> f.readline() 'This is the second line. \n' Writing to a file, through write method on file objects will writes a string (bytes for binary data) object to the file. 
The writelines method accepts a sequence of strings and write each of the iterated values to the file. The writelines method does not append a new line after each item in the sequence. Finally the close() method should be called when we are finished reading or writing the file, to ensure any buffered writes are written to the disk, that the file has been properly cleaned up and that all resources tied with the file are released back to the operating system. It’s a better approach to call the close() method but technically this will happen automatically when the script exists. An alternative to method overloading Method overloading refers to having multiple methods with the same name that accept different sets of arguments. Given a single method or function, we can specify the number of parameters ourself. Depending on the function definition, it can be called with zero, one, two or more parameters. class Human: def sayHello(self, name = None): if name is not None: print('Hello ' + name) else: print('Hello ') #Create Instance obj = Human() #Call the method, else part will be executed obj.sayHello() #Call the method with a parameter, if part will be executed obj.sayHello('Rahul') Output >Hello Hello Rahul Default Arguments Functions Are Objects Too A callable object is an object can accept some arguments and possibly will return an object. A function is the simplest callable object in Python, but there are others too like classes or certain class instances. Every function in a Python is an object. Objects can contain methods or functions but object is not necessary a function. 
def my_func(): print('My function was called') my_func.description = 'A silly function' def second_func(): print('Second function was called') second_func.description = 'One more sillier function' def another_func(func): print("The description:", end=" ") print(func.description) print('The name: ', end=' ') print(func.__name__) print('The class:', end=' ') print(func.__class__) print("Now I'll call the function passed in") func() another_func(my_func) another_func(second_func) In the above code, we are able to pass two different functions as argument into our third function, and get different Output for each one − >The description: A silly function The name: my_func The class: Now I'll call the function passed in My function was called The description: One more sillier function The name: second_func The class: Now I'll call the function passed in Second function was called callable objects Just as functions are objects that can have attributes set on them, it is possible to create an object that can be called as though it were a function. In Python any object with a __call__() method can be called using function-call syntax.
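A minimal sketch of such a callable object (the Multiplier class name is made up for illustration):

```python
class Multiplier:
    """A callable object: each instance remembers a factor and acts like a function."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, value):
        # This method runs when the instance itself is called
        return value * self.factor

double = Multiplier(2)
triple = Multiplier(3)
print(double(10))        # 20
print(triple(10))        # 30
print(callable(double))  # True
```

Because __call__ carries per-instance state, this pattern is handy wherever a plain function would otherwise need globals or default-argument tricks.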
This is my first class with C++ and I'm having some trouble with this array. I have an array int guesses[100] and I set up a loop so that the user enters different integers into each element. The user input would be guesses[guess]. I need to compare the current user input with previous inputs in the array, so I set up another loop:

for (int i=0 ; i <= MAX_GUESSES ; i++)

then I compared:

if ( guesses[guess] == guesses[i] ) { ........ }

This didn't work but I don't know why. Help me out here, and please try to remember that I'm only just learning this. Also, the code is attached:

Oh my. I can't believe you are using GOTO in a "C++" program like this. There are times in which its use may be necessary, but for a beginner like you, you can go ahead and make a mental note to never ever use GOTO, ever. Also,

Code:
#include <stdlib.h>
#include <time.h>

Those are C header files. If you really must use C functions, then use:

Code:
#include <cstdlib>
#include <ctime>

In your inner loop (when you are comparing the guess) you are looping from 0 - MAX_GUESSES -- but you only want to loop through the guesses that exist (i.e. you have only filled out guesses from 0 - guess, not 0 - MAX_GUESSES).

Also, remember that C++ arrays are zero-based indexes, so you want to loop 0 through (MAX_GUESSES - 1) in your outer loop, or you will run into trouble since guesses[MAX_GUESSES] is not defined. You can fix this simply by using "<" rather than "<=" in your outer loop.

just like english might not be your first language, this isn't mine. my prof lets me use them without deducting points so why not? i don't understand why stroustrup would have implemented this in the language when it is somewhat 'forbidden'.

Originally Posted by cjpaul: just like english might not be your first language, this isn't mine. my prof lets me use them without deducting points so why not?
Originally Posted by cjpaul: i don't understand why stroustrup would have implemented this in the language when it is somewhat 'forbidden'.

Well, I question how good of a professor he is then. They are there, but that doesn't mean they should be used. If you are using them for something, you should have a really good reason for it. GOTOs inevitably lead to spaghetti code, and in addition, virtually anything that can be done with a GOTO statement can be done much more neatly with a loop. I can pretty much guarantee you will never be hired as a C++ programmer (if that's your goal) if you simply ignore advice and create code littered with GOTOs. It's bad for a reason. I wouldn't even consider your code to be a real example of C++. It's really just BASIC in disguise.

EDIT: As a side note, GOTO isn't a C++ language feature, it's a C language feature. God knows why it was even kept in C, but the only reason it's there in C++ is because it maintains backwards compatibility with C.

Last edited by Chris_F; December 12th, 2010 at 04:18 AM.

Originally Posted by cjpaul: just like english might not be your first language, this isn't mine. my prof lets me use them without deducting points so why not?

Then your professor isn't teaching you programming. You should get your money back (if you paid for the course). I don't know of one professor (and I know many) that would encourage goto, and most of them would deduct points from any code that uses it. Secondly, code littered with "goto" will not be looked at by most people that may want to help you. The reason is, as Chris_F stated, your code will be spaghetti. No one is going to waste time untying all of those goto knots. Code with "goto" all over the place has no structure, no logical paths, etc. So if you want to get any substantial help here, in other programming forums or websites, or even from other students/professors, ditch "goto" and use structured concepts.
You should be learning structured programming, and using the proper looping constructs (for, while, do-while), not goto.

Originally Posted by cjpaul
I don't understand why Stroustrup would have implemented this in the language when it is somewhat 'forbidden'.

You and your professor should read E. Dijkstra's "Go To Statement Considered Harmful". Also, "goto" exists in most languages -- Stroustrup adopted it from 'C', and 'C' adopted it from most other mid- to high-level languages at the time 'C' was being developed. It is a legacy command that has been around a long time, but it should rarely, if ever, be used, and if the program is a beginner app such as yours, it shouldn't be used at all.

Regards, Paul McKenzie

Hey Paul, thanks for actually responding politely. My prof has genuinely expressed how we should stay away from goto statements, and has told us all about why they are bad for a program. He does deduct points if they are used in higher-level classes. And I doubt I will read that book, as this will most likely be my only programming class throughout my schooling. If I do ever move on with this subject, I will take the time to learn proper coding. Thanks
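To make the loop-bounds advice above concrete, here is a minimal sketch of the duplicate check (written in Python for brevity, though the thread is about C++ -- the bounds logic carries over directly): only compare the new guess against the entries already stored, not the whole 100-element array.

```python
MAX_GUESSES = 100  # same capacity as the int guesses[100] array in the thread

def is_duplicate(guesses, count, new_guess):
    """Return True if new_guess matches any of the first `count` stored guesses.

    The loop runs over range(count) -- indices 0 .. count-1 -- mirroring the
    advice to use "<" rather than "<=" and to stop at the number of guesses
    actually entered, not at MAX_GUESSES.
    """
    for i in range(count):
        if guesses[i] == new_guess:
            return True
    return False

# Simulate the guessing loop: reject duplicates as they are entered.
stored = []
for attempt in [3, 7, 3, 9, 7]:
    if is_duplicate(stored, len(stored), attempt):
        print(attempt, "was already guessed")
    else:
        stored.append(attempt)
print(stored)  # unique guesses in entry order
```

The same shape works in C++ with a plain counter instead of len(): track how many slots are filled and loop only up to that count.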
http://forums.codeguru.com/showthread.php?506352-Password-Changer-program&goto=nextnewest
CC-MAIN-2016-36
refinedweb
924
72.66
As I’ve started to find my feet in using the Raspberry Pi with Windows 10 IoT Core, I’ve tried to take some of the common hardware sensors that I’ve used with my Arduino and develop ways to make them work with the Raspberry Pi. Obviously there’s a software challenge in porting that code across to C# from the Arduino programming language – but there are also interesting challenges presented by the hardware differences.

When writing this code, I found it helpful to refer to the information at these links:

How to talk to the HC-SR04

I’ve previously used the HC-SR04 as an ultrasonic distance measurement device with my Arduino. It’s a fantastic peripheral device, which I’ve found to be reliable and intuitive to use. It was first on my list of devices to test with the Raspberry Pi. The protocol for using it is:

- Hold the trigger pin at logic 1 for at least 10 microseconds, and then bring this pin back to logic 0. (A brief logic 0 beforehand ensures a clean rising edge.)
- Immediately after this, measure the length of time that the pulse sent through the echo pin is at logic 1.

I had read in several online sources that C# on the Raspberry Pi wasn’t capable of sending or measuring pulses at this level of fidelity so I was skeptical whether I could make the HC-SR04 work directly with the Pi 3, but I wanted to give it a try.

The usual way of holding a pin at a particular level is to set it to that level, and then call a “Sleep” function (effectively the same as Thread.Sleep or Task.Delay) for the length of time that you want to hold it there. Selecting a pin with C# and setting it as input or output is very easy – the code below shows how to do it. Since I wanted to hold the pin high for only 10 microseconds, I decided to use the ManualResetEvent object (which I’ve blogged about before), and tell it to wait for a time determined by TimeSpan.FromMilliseconds(0.01). I put this into its own static function.
private static ManualResetEvent manualResetEvent = new ManualResetEvent(false); public static void Sleep(int delayMicroseconds) { manualResetEvent.WaitOne( TimeSpan.FromMilliseconds((double)delayMicroseconds / 1000d)); } This method has a flaw – the Pi and C# presently can’t handle signals with microsecond accuracy. I’ll post more about this soon. Next, I wanted to measure the length of the pulse back on the echo pin. First I set this pin to be an input. Ideally I needed something similar to the pulseIn feature available on the Arduino, but this isn’t available as a standard method through C#. It’s reasonably simple to replicate this function in C# however. private static Stopwatch stopWatch = new Stopwatch(); public static double GetTimeUntilNextEdge(GpioPin pin, GpioPinValue edgeToWaitFor) { stopWatch.Reset(); while (pin.Read() != edgeToWaitFor) { }; stopWatch.Start(); while (pin.Read() == edgeToWaitFor) { }; stopWatch.Stop(); return stopWatch.Elapsed.TotalSeconds; } I put both of these static functions into a static class named Gpio. So my code presently was quite simple, but should initiate a request to read the distance in front of the device, and then measure the length of the pulse that was returned. public class HCSR04 { private GpioPin triggerPin { get; set; } private GpioPin echoPin { get; set; } private const double SPEED_OF_SOUND_METERS_PER_SECOND = 343; public HCSR04(int triggerPin, int echoPin) { GpioController controller = GpioController.GetDefault(); //initialize trigger pin. this.triggerPin = controller.OpenPin(triggerPin); this.triggerPin.SetDriveMode(GpioPinDriveMode.Output); //initialize echo pin. this.echoPin = controller.OpenPin(echoPin); this.echoPin.SetDriveMode(GpioPinDriveMode.Input); } private double LengthOfHighPulse { get { // The sensor is triggered by a logic 1 pulse of 10 or more microseconds. // We give a short logic 0 pulse first to ensure a clean logic 1. 
this.triggerPin.Write(GpioPinValue.Low);
Gpio.Sleep(5);
this.triggerPin.Write(GpioPinValue.High);
Gpio.Sleep(10);
this.triggerPin.Write(GpioPinValue.Low);

// Read the signal from the sensor: a HIGH pulse whose
// duration is the time (in microseconds) from the sending
// of the ping to the reception of its echo off of an object.
return Gpio.GetTimeUntilNextEdge(echoPin, GpioPinValue.High, 100);
}
}

public double Distance
{
get
{
// convert the time into a distance
// duration of pulse * speed of sound (343m/s)
// remember to divide by two because we're measuring the time for the signal to reach the object and return
return (SPEED_OF_SOUND_METERS_PER_SECOND / 2) * LengthOfHighPulse;
}
}
}

Time to connect up the HC-SR04

One thing to be particularly aware of is that the HC-SR04 takes a 5v input, and echoes a 5v signal. The Raspberry Pi’s pins can handle a maximum potential difference of 3.3v – if you’re sending 5v to your Pi, sooner or later you’re going to burn it out. Fortunately it’s very simple to divide the voltage returned with a couple of resistors, bringing it down to 3.3v.

I connected the HC-SR04 and voltage divider to my Pi…and it worked. And then it stopped. Argh! I found that the hardware sometimes freezes up – often sending another request for a reading fixes the problem. So if I wrap the function to read a pulse in an asynchronous call which times out after 50ms, this effectively resolves the problem for me. I blogged about this technique here, and changed my function to measure the signal so that it also has a maximum time to wait before returning a default value of -1.
public static double GetTimeUntilNextEdge(GpioPin pin, GpioPinValue edgeToWaitFor, int maximumTimeToWaitInMilliseconds) { var t = Task.Run(() => { stopWatch.Reset(); while (pin.Read() != edgeToWaitFor) { }; stopWatch.Start(); while (pin.Read() == edgeToWaitFor) { }; stopWatch.Stop(); return stopWatch.Elapsed.TotalSeconds; }); bool isCompleted = t.Wait(TimeSpan.FromMilliseconds(maximumTimeToWaitInMilliseconds)); if (isCompleted) { return t.Result; } else { return -1d; } } Next time I’m going to look at the issues with the Pi and sending signals with microsecond resolution.
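As a footnote, the arithmetic inside the Distance property can be sanity-checked with a tiny standalone function. This is my own sketch (Python, names are mine, not from the post): the echo pulse spans the round trip, so the speed of sound is halved.

```python
SPEED_OF_SOUND_METERS_PER_SECOND = 343.0

def pulse_to_distance_m(pulse_seconds):
    """Convert an HC-SR04 echo pulse duration (seconds) to distance in metres.

    The pulse covers the ultrasonic ping's trip to the obstacle AND back,
    so we multiply by half the speed of sound, mirroring the C# above.
    """
    return (SPEED_OF_SOUND_METERS_PER_SECOND / 2.0) * pulse_seconds

# A 1 ms echo pulse corresponds to roughly 17 cm:
print(round(pulse_to_distance_m(0.001), 4))  # 0.1715
```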
https://jeremylindsayni.wordpress.com/2016/06/01/using-the-hc-sr04-range-finder-with-c-and-the-raspberry-pi/
CC-MAIN-2017-26
refinedweb
968
54.42
This code is quite straightforward and should explain itself with the help of the comments, although it's not very fast :(

def generateAbbreviations(word):
    # Replace characters with '*', collect all the permutations of replacements
    def permutations(word, mp):
        ret = []
        if not word:
            return [""]
        if word not in mp:  # memoize on the remaining suffix (dict membership; has_key is Python 2 only)
            nxt = permutations(word[1:], mp)
            for item in nxt:
                ret += ['*' + item, word[0] + item]
            mp[word] = ret
        return mp[word]

    # Turn all the '*' into numbers, ie, '*'->'1', '**'->'2', '***'->'3'
    def replace(s):
        i = j = 0
        ret = ''
        while j <= len(s):
            if j == len(s) or s[j] != '*':
                if j > i:
                    ret += '%d' % (j - i)
                i = j + 1
                if j != len(s):
                    ret += s[j]
            j += 1
        return ret

    # list(...) so the result is a concrete list under Python 3 as well
    return list(map(replace, permutations(word, {})))
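An alternative that avoids building the intermediate '*' strings entirely is to enumerate bitmasks: each of the 2^n masks marks which characters get abbreviated, and runs of marked characters collapse into a count. This is a sketch of that idea, not from the original post:

```python
def generate_abbreviations_bitmask(word):
    n = len(word)
    results = []
    for mask in range(1 << n):
        parts, run = [], 0
        for i in range(n):
            if mask & (1 << i):        # bit set: abbreviate this character
                run += 1
            else:                      # bit clear: flush any pending count, keep the char
                if run:
                    parts.append(str(run))
                    run = 0
                parts.append(word[i])
        if run:                        # flush a trailing run of abbreviated chars
            parts.append(str(run))
        results.append("".join(parts))
    return results

print(sorted(generate_abbreviations_bitmask("word")))
```

It does the same O(2^n * n) work but with constant extra state per mask, so it tends to be faster in practice than the recursive version.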
https://discuss.leetcode.com/topic/32117/a-straight-forward-python-solution-using-backtracking
CC-MAIN-2018-05
refinedweb
120
62.51
Maneuver

The class Maneuver is a member of com.here.android.mpa.routing.

Class Summary

public class Maneuver extends java.lang.Object

Represents the action required to leave one street segment and enter the next in the chain of directions that comprises a calculated Route.

[For complete information, see the section Class Details]

Nested Class Summary

Method Summary

Class Details

Represents the action required to leave one street segment and enter the next in the chain of directions that comprises a calculated Route.

Method Details

public Action getAction ()

Gets the Maneuver.Action required to complete the maneuver.

Returns: The Maneuver.Action

public int getAngle ()

Gets the angle of the maneuver.

Returns: The angle in degrees from the end of the start road to the start of the end road. The angle has a value from 0 to 360; north is up, clockwise. For some roundabouts, this angle is an approximation from the entry to the exit point of the roundabout, which may be used for customization of the roundabout icon.

public GeoBoundingBox getBoundingBox ()

Gets the GeoBoundingBox of the maneuver, which is a group of GeoCoordinates forming a polygon.

Returns: The GeoBoundingBox

public GeoCoordinate getCoordinate ()

Gets the GeoCoordinate of the maneuver.

Returns: The GeoCoordinate

public int getDistanceFromPreviousManeuver ()

Gets the distance from the previous maneuver to the current maneuver, in meters.

Returns: The distance

public int getDistanceFromStart ()

Gets the distance from the start of the route to the maneuver, in meters.

Returns: The distance

public int getDistanceToNextManeuver ()

Gets the distance to the next maneuver from the current maneuver, in meters.

Returns: The distance

public Icon getIcon ()

Gets the Maneuver.Icon enum that represents the icon that should be displayed for this maneuver.

Returns: The Maneuver.Icon

public java.util.List <GeoCoordinate> getManeuverGeometry ()

Puts all points of the maneuver's polyline in the right order into the given collection.
Returns: a collection of GeoCoordinates. public int getMapOrientation () Gets the map orientation at the start of the maneuver, in degrees. Note: a returned value of zero represents true-north, with increasing values representing a clockwise progression of map orientation. Returns: The orientation public Image getNextRoadImage () Gets the image of the road this maneuver leads to. Returns: The Image for the next road (may be null). public String getNextRoadName () Gets the name of the road to which the maneuver leads. Next road name is provided if available for a given Maneuver . If not provided, it should be left blank. It's erroneous to assume that it is the same as prior maneuvers. Returns: The next road name public String getNextRoadNumber () Gets the road number to which the maneuver leads. Returns: The road number of the next road element public java.util.List <RoadElement> getRoadElements () Returns a list of RoadElements within the maneuver. Returns: a collection of RoadElements. public String getRoadName () Gets the name of the road on which the maneuver takes place. Road name is provided if available for a given Maneuver . If not provided, it should be left blank. It's erroneous to assume that it is the same as prior maneuvers. Returns: The road name public String getRoadNumber () Gets the road number on which the maneuver takes place. The road number is a short label for the road or highway, such as "5" for Interstate 5. If the road number is unknown, this method will return an empty string. Returns: The road number public java.util.List <RouteElement> getRouteElements () Returns a list of RouteElement within the maneuver Returns: a collection of RouteElement. public Signpost getSignpost () Gets the Signpost for this maneuver. If the signpost is not valid, null is returned. Returns: A Signpost object if a valid one exists. Otherwise, returns null. public Date getStartTime () Gets the (estimated) time at which the maneuver starts. 
If no departure time was set for the RouteOptions associated with the maneuver, then the time is relative to the system time when the route calculation took place. Otherwise, the times are relative to the specified departure time.

Returns: The start time, or null if not available

public TrafficDirection getTrafficDirection ()

Returns the traffic direction.

Returns: LEFT if left-side traffic, RIGHT if right-side traffic.

public TransportMode getTransportMode ()

Gets the RouteOptions.TransportMode used for the maneuver. This might differ from the RouteOptions.TransportMode used when calculating the Route with which the particular maneuver is associated. For example, in the case where a Route is calculated using PUBLIC_TRANSPORT, the overall route is a public transport route, but some individual maneuvers may be pedestrian (for example, walking to a bus stop, or transfers which involve walking to a new stop).

Returns: The RouteOptions.TransportMode

public Turn getTurn ()

Gets the Maneuver.Turn required to complete the maneuver.

Returns: The Maneuver.Turn
https://developer.here.com/documentation/android-premium/topics_api_nlp_hybrid_plus/com-here-android-mpa-routing-maneuver.html
CC-MAIN-2018-22
refinedweb
770
50.43
Where is LAN Module example??

I would like to try the "LAN Module with W5500 Chip". Where is the LAN Module example??

+1 I am also looking for the example.

Hello, can anyone help with a working LAN Module example for W5500?

Hello M5Stack-Admin, the link for the LAN Module at your homepage is broken. This part didn't work: Documents ... Example Arduino Example. Please look at "Documents" at this page:

I found an example of W5500, but it did not work. The W5500 example is
But I get the following error in my environment:

Failed to configure Ethernet using DHCP

My code is:

#include <SPI.h>
#include <Ethernet2.h>

byte mac[] = { 0x00, 0xAA, 0xBB, 0xCC, 0xDE, 0x02 };

void setup() {
  Serial.begin(115200);
  if (Ethernet.begin(mac) == 0) {
    Serial.println("Failed to configure Ethernet using DHCP");
    // no point in carrying on, so do nothing forevermore:
    for (;;)
      ;
  }
  Serial.print("My IP address: ");
  for (byte thisByte = 0; thisByte < 4; thisByte++) {
    Serial.print(Ethernet.localIP()[thisByte], DEC);
    Serial.print(".");
  }
  Serial.println();
}

void loop() {
}

And the Arduino Example page in the M5Stack LAN Module documentation is "Page not found". M5Stack-Admin, I would like you to respond.

My problem was solved below.

Here is the LAN example I have updated at the m5stack document website; please visit

Hi! Here is an example (Korean, but you can use the code: Simple Chat Server)-모팅-서버-만들기/ And I will publish this document in English as soon as possible. :D
https://forum.m5stack.com/topic/292/where-is-lan-module-example
CC-MAIN-2022-40
refinedweb
237
53.27
Python. Example: Mann-Kendall Trend Test in Python To perform a Mann-Kendall Trend Test in Python, we will first install the pymannkendall package: pip install pymannkendall Once we’ve installed this package, we can perform the Mann-Kendall Trend Test on a set of time series data: #create dataset data = [31, 29, 28, 28, 27, 26, 26, 27, 27, 27, 28, 29, 30, 29, 30, 29, 28] #perform Mann-Kendall Trend Test import pymannkendall as mk mk.original_test(data) Mann_Kendall_Test(trend='no trend', h=False, p=0.422586268671707, z=0.80194241623, Tau=0.147058823529, s=20.0, var_s=561.33333333, slope=0.0384615384615, intercept=27.692307692) Here is how to interpret the output of the test: - trend: This tells the trend. Possible output includes increasing, decreasing, or no trend. - h: True if trend is present. False if no trend is present. - p: The p-value of the test. - z: The normalize test statistic. - Tau: Kendall Tau. - s: Mann-Kendal’s score - var_s: Variance S - slope: Theil-Sen estimator/slope - intercept: Intercept of Kendall-Theil Robust Line The main value we’re interested in is the p-value, which tells us whether or not there is a statistically significant trend in the data. In this example, the p-value is 0.4226 which is not less than .05. Thus, there is no significant trend in the time series data. Along with performing the Mann-Kendall Trend test, we can create a quick line plot using Matplotlib to visualize the actual time series data: import matplotlib.pyplot as plt plt.plot(data) From the plot we can see that the data is a bit all over the place, which confirms that there is no clear trend in the data. Related: How to Perform a Mann-Kendall Trend Test in R
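To see where the s and Tau values in the output come from, the Mann-Kendall S statistic can be computed by hand: S sums sign(x_j - x_i) over all pairs i < j, and Kendall's Tau is S divided by the number of pairs n(n-1)/2. A quick cross-check against the output above, in plain Python with no packages:

```python
data = [31, 29, 28, 28, 27, 26, 26, 27, 27, 27, 28, 29, 30, 29, 30, 29, 28]

def mann_kendall_s(x):
    """S = sum over all pairs i < j of sign(x[j] - x[i])."""
    s = 0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            if x[j] > x[i]:
                s += 1
            elif x[j] < x[i]:
                s -= 1
    return s

n = len(data)
s = mann_kendall_s(data)
tau = s / (n * (n - 1) / 2)
print(s, round(tau, 6))  # matches s=20.0 and Tau≈0.147059 reported above
```

(The p-value additionally requires the variance term var_s, which pymannkendall corrects for tied values, so it is not reproduced here.)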
https://www.statology.org/mann-kendall-test-python/
CC-MAIN-2021-21
refinedweb
299
72.87
Fonts Mangled In Flash CC
ryan546783 Jun 18, 2013 9:33 AM

I just installed Flash CC eager to try out the features and quickly discovered that any FLA that I opened that uses Library-embedded fonts looks completely mangled: the font is wrong, kerning gets screwed up, text randomly is cut off. Very disappointing. Is this something that might get fixed soon?

1. Re: Fonts Mangled In Flash CC
kglad Jun 18, 2013 10:09 AM (in response to ryan546783)

i don't see a problem. create a new fla in cs6, add the minimum needed to exhibit the problem, then post your results.

2. Re: Fonts Mangled In Flash CC
ryan546783 Jun 18, 2013 10:29 AM (in response to kglad)

Here is a quick example:

SWF exported using CS6:

SWF exported using CC (this example only really shows a line spacing problem):

Here is also an example of one of my project FLAs which reveals more issues, such as the wrong font variant being used (non-bold) and the "height" property of the textfield being wrong, which is causing the buttons to misdraw as being too tall (and the input elements being too short).

CS6:

CC:

You can download my test FLA here:

The font problems seem prevalent in any FLA I open that uses embedded fonts. I am on a Mac running the latest version of Mountain Lion. I suspect this may be caused by a few default params changing in the TextFormat object perhaps? Though if so, I won't be happy if I have to go back through every text field in every FLA in order to use the CC version of Flash.

3. Re: Fonts Mangled In Flash CC
Sean @ Cupcake Jun 18, 2013 12:26 PM (in response to ryan546783)

Experiencing the same kind of issues here... using flash.text.TextField objects in AIR 3.6 on Mac OS 10.8.4. Text is smaller, with much larger line spacing, breaking out of its bounding area on the right-hand side. On the upside, the issue I was having with punctuation spacing too close to special characters (accented characters and such) is now fixed! Weird stuff.

4.
Re: Fonts Mangled In Flash CC
kglad Jun 18, 2013 12:40 PM (in response to ryan546783)

i don't see those problems in windows.

5. Re: Fonts Mangled In Flash CC
robdillon Jun 18, 2013 3:51 PM (in response to ryan546783)

I'm seeing the same behaviour on my Mac. It seems to be a leading problem only with textfields created in Actionscript. If you place a textfield on the stage, the leading is correct. I tried quite a few fonts and font types and they all acted the same way. You can set a leading property for a TextFormat to get the text to look correct, but that's just stupid. I have logged this as a bug.

6. Re: Fonts Mangled In Flash CC
kglad Jun 18, 2013 4:19 PM (in response to robdillon)

ok, now i see a problem with textformat, but that's not necessarily a textfield issue. the textformat default leading is not handled the same in cc as in previous versions of flash.

7. Re: Fonts Mangled In Flash CC
Devarai Jun 19, 2013 2:51 AM (in response to kglad)

I have the TextField issues on Windows 7 as well. It seems Flash CC cannot render an embedded font in bold style anymore. setTextFormat does not have any effect on this.

8. Re: Fonts Mangled In Flash CC
kglad Jun 19, 2013 4:07 AM (in response to Devarai)

bold fonts embed without problem. like all flash versions since i can remember, you must embed a bold font to display an embedded bold font. you will fail if you embed a regular-weight font and then try to display that font bolded.

9. Re: Fonts Mangled In Flash CC
yanivyaldainlink Jun 19, 2013 4:38 AM (in response to kglad)

I've just imported a CS6 project into CC (Mac OS). The leading is the issue; I had to set leading again on all textfields for it to work, which is a pain.

My steps to re-create the problem:

In Flash CS6
1. Create a dynamic textfield reference, with multiline support
2. Embed a font
3. In ActionScript create a new textfield using the textfield reference
4. Assign the font to the new textfield
5. Make sure the new textfield has multiline and embedFonts set
6. Run.
The result should be multiline text with correct leading.

In Flash CC
1. Open project
2. Run.

The result shows the leading is quite large and the second line of text is not visible. Linked FLA for reference.

10. Re: Fonts Mangled In Flash CC
Devarai Jun 19, 2013 4:46 AM (in response to kglad)

Ok, I tell you what I do:

Embed Arial font
Create in ActionScript a textfield
Set the textfield all up (embedded, setTextFormat...)
In the format I set it to bold
I create a BitmapData object
I render the textfield into a bitmap
I add the bitmap to the stage

The final bitmap contains the text in the Arial font and has the correct size. However, whatever I try, I cannot get it in bold.

Best, Henning

11. Re: Fonts Mangled In Flash CC
Devarai Jun 19, 2013 4:48 AM (in response to Devarai)

Arial is embedded in normal and bold.

12. Re: Fonts Mangled In Flash CC
kglad Jun 19, 2013 4:53 AM (in response to yanivyaldainlink)

you can use the following function to set the leading of all textfields (that exist and are on the display list). just pass the main timeline (cast as a movieclip) to setLeadingF and specify the leading you want to apply. the function can be made more efficient by creating the textformat instance outside setLeadingF to prevent it from being created repeatedly.

function setLeadingF(mc:MovieClip, leading:int):void {
    var tfor:TextFormat = new TextFormat();
    tfor.leading = leading;
    for (var i:int = 0; i < mc.numChildren; i++) {
        if (mc.getChildAt(i) is TextField) {
            TextField(mc.getChildAt(i)).defaultTextFormat = tfor;
            TextField(mc.getChildAt(i)).setTextFormat(tfor);
        } else if (mc.getChildAt(i) is MovieClip) {
            setLeadingF(MovieClip(mc.getChildAt(i)), leading);
        }
    }
}

13. Re: Fonts Mangled In Flash CC
robdillon Jun 19, 2013 5:04 AM (in response to robdillon)

I did some more looking this morning and it seems that the trigger is the line:

txt.embedFonts = true;

If you leave that line out, the leading returns to normal.
But, of course, this also means that you can't use an embedded font, which makes the whole thing useless.

14. Re: Fonts Mangled In Flash CC
Devarai Jun 19, 2013 5:05 AM (in response to kglad)

That is not really practical. Advanced coders create TextField objects dynamically during run-time. Some of my projects have hundreds of source code files. Correcting each one manually is really a pain in the ***.

15. Re: Fonts Mangled In Flash CC
kglad Jun 19, 2013 5:10 AM (in response to Devarai)

it works for dynamically created textfields. but, if it doesn't work for you, don't use it.

16. Re: Fonts Mangled In Flash CC
yanivyaldainlink Jun 19, 2013 5:53 AM (in response to kglad)

I forgot to add this to pre-release bugs, got sidetracked. I think it's going to be a common issue for most advanced users.

17. Re: Fonts Mangled In Flash CC
Ngl Robert Jun 28, 2013 1:51 AM (in response to ryan546783)

Simple way to recreate the bug:

import flash.text.TextField;
import flash.text.TextFormat;
import flash.text.TextFieldAutoSize;

var tf : TextFormat;
var tb1 : TextField;
var tb2 : TextField;

tf = new TextFormat();
tf.font = "Arial";
tf.color = 0x000000;
tf.size = 20;

function makeTb( txt : String ) : TextField {
    var tb : TextField = new TextField();
    tb = new TextField();
    tb.multiline = false;
    tb.selectable = false;
    tb.defaultTextFormat = tf;
    tb.autoSize = TextFieldAutoSize.LEFT;
    tb.mouseWheelEnabled = false;
    tb.antiAliasType = AntiAliasType.ADVANCED;
    tb.embedFonts = true;
    tb.border = true;
    tb.borderColor = 0xff00ff;
    tb.condenseWhite = true;
    tb.text = txt;
    return tb;
}

tb1 = makeTb( "how the bug looks" );
tb2 = makeTb( "how it should be" );
tb2.embedFonts = false;
tb2.x = 200;

addChild( tb1 );
addChild( tb2 );

And a screenshot ( win7 pro + flash cc )

18. Re: Fonts Mangled In Flash CC
pierBover Jun 28, 2013 5:35 PM (in response to Ngl Robert)

Just to add more info here, I'm having exactly the same bug. Adobe do something!!! Does anyone have the link to the bug report?

19.
Re: Fonts Mangled In Flash CC
yanivyaldainlink Jun 28, 2013 6:01 PM (in response to pierBover)

Hey PierBover, Adobe Bugbase doesn't have the selection. I've notified some people on twitter on how to report bugs for flash cc.

20. Re: Fonts Mangled In Flash CC
pierBover Jun 28, 2013 6:07 PM (in response to yanivyaldainlink)

So how can I vote for the bug?

- 22. Re: Fonts Mangled In Flash CC
pierBover Jun 28, 2013 6:29 PM (in response to kglad)

Thanks, but is it possible to simply vote for an existing bug?

- 24. Re: Fonts Mangled In Flash CC
dharmk Jun 28, 2013 8:48 PM (in response to kglad)

Thanks for reporting the issue, we are able to reproduce it at our end. We will investigate the issue. -Dharmendra

25. Re: Fonts Mangled In Flash CC
kglad Jun 29, 2013 6:26 AM (in response to dharmk)

the cause is the default textformat's leading.

26. Re: Fonts Mangled In Flash CC
pierBover Jun 29, 2013 8:01 AM (in response to kglad)

In my case changing the leading did not solve the issue. There always remained some big offset from the top of the textfield to the top of the text.

27. Re: Fonts Mangled In Flash CC
kglad Jun 29, 2013 9:06 AM (in response to pierBover)

copy and paste the code you used to change the leading of your textfield.

28. Re: Fonts Mangled In Flash CC
pierBover Jun 29, 2013 10:53 AM (in response to kglad)

29. Re: Fonts Mangled In Flash CC
dharmk Jul 4, 2013 3:23 AM (in response to pierBover)

We are looking into this issue. However, for the time being you can use a workaround for this issue. Just create an invisible text element on the first frame which uses the same embedded font; it should solve the issue. You may need to clear the publish cache in some cases (Control -> Clear publish cache).

One way you can create an invisible text on stage would be to create a text element, apply the embedded font and create a movie clip out of it. You can then go to the PI and uncheck the visible checkbox to hide the movieclip.

-Dharmendra.

30.
Re: Fonts Mangled In Flash CC
pierBover Jul 4, 2013 9:27 AM (in response to dharmk)

Thanks dharmk, that works.

31. Re: Fonts Mangled In Flash CC
dharmk Jul 4, 2013 9:47 AM (in response to pierBover)

You're welcome. I'm glad it did. -Dharmendra.

32. Re: Fonts Mangled In Flash CC
yanivyaldainlink Jul 4, 2013 3:37 PM (in response to dharmk)

Confirmed working as well. I'm on Mac OS X, I'm using resource files so it's a pretty easy fix. Thanks guys.

33. Re: Fonts Mangled In Flash CC
pierBover Jul 15, 2013 8:05 PM (in response to AbductedMind)

Hi Kevin, I'm having a similar problem here: It seems it's a bug with CC. I think the simplest solution is to go back to CS6 until they fix the bug...

34. Re: Fonts Mangled In Flash CC
AbductedMind Sep 12, 2013 9:23 AM (in response to ryan546783)

This thread doesn't specifically recommend a fix that works in all cases. I am unsure how adding a textfield to frame one has helped others, but it has not helped me (for example, do you still have to set leading to some silly number such as "-50" for the font to display correctly?). In my case Arial is rendering correctly but another font is not. When I use both in the same textfield (classic text field) the leading is different for the two. There doesn't appear to be a way to specify leading inline with HTML within the font tag, so I am stuck. If there is a way around the problem in the case of multiple fonts in the same field, please let me know! This bug needs to get fixed!

35. Re: Fonts Mangled In Flash CC
pierBover Sep 12, 2013 1:52 PM (in response to AbductedMind)

There was a Flash CC update but this was not resolved. How can this bug not be a top priority? Adobe do something.

36. Re: Fonts Mangled In Flash CC
permanyer Oct 11, 2013 3:05 AM (in response to ryan546783)

This worked for me:

var tf:TextFormat = new TextFormat();
tf.leading = -70;
this.flashTextField.defaultTextFormat = tf;
this.flashTextField.text = 'this text appeared with correct leading';

37.
Re: Fonts Mangled In Flash CC
DaftK Oct 23, 2013 1:50 PM (in response to dharmk)

Tried these workarounds and am having no success. Is it just about Arial, and if so, why? All of my legacy files are built with it, as it was assumed that this was a mostly universal font. This is costing me time and resources.

38. Re: Fonts Mangled In Flash CC
staublicht Nov 13, 2013 8:44 AM (in response to ryan546783)

I have a similar problem. I replaced the font in it with a font from an external library - and then the line spacing went haywire. I have a separate movieclip where I used the external embedded fonts from the start, which is fine. The workaround mentioned is not working for me. As soon as I uncheck "import for runtime sharing" for the externally embedded font, the leading issues disappear.
https://forums.adobe.com/message/5508663
CC-MAIN-2017-26
refinedweb
2,280
71.65
Elixir v1.6.1 Kernel.SpecialForms View Source Special forms are the basic building blocks of Elixir, and therefore cannot be overridden by the developer. We define them in this module. Some of these forms are lexical (like alias/2, case/2, etc). The macros {}/1 and <<>>/1 are also special forms used to define tuple and binary data structures respectively. This module also documents macros that return information about Elixir’s compilation environment, such as ( __ENV__/0, __MODULE__/0, __DIR__/0 and __CALLER__/0). Finally, it also documents two special forms, __block__/1 and __aliases__/1, which are not intended to be called directly by the developer but they appear in quoted contents since they are essential in Elixir’s constructs. Link to this section Summary Functions Defines a remote call, a call to an anonymous function, or an alias Used by types and bitstrings to specify types Matches the value on the right against the pattern on the left Returns the current module name as an atom or nil otherwise Internal special form to hold aliases information Internal special form for block expressions Matches the given expression against the given clauses Evaluates the expression corresponding to the first clause that evaluates to a truthy value Defines an anonymous function Comprehensions allow you to quickly build a data structure from an enumerable or a bitstring Imports functions and macros from other modules Gets the representation of any expression Checks if there is a message matching the given clauses in the current process mailbox Requires a module in order to use its macros Calls the overridden function when overriding it with Kernel.defoverridable/1 Evaluates the given expressions and handles any error, exit, or throw that may have happened Unquotes the given expression from inside a macro Used to combine matching clauses Link to this section Functions Creates a struct. 
A struct is a tagged map that allows developers to provide default values for keys, tags to be used in polymorphic dispatches and compile time assertions. Structs are usually defined with the Kernel.defstruct/1 macro: defmodule User do defstruct name: "john", age: 27 end Now a struct can be created as follows: %User{} Underneath a struct is just a map with a :__struct__ key pointing to the User module: %User{} == %{__struct__: User, name: "john", age: 27} A struct also validates that the given keys are part of the defined struct. The example below will fail because there is no key :full_name in the User struct: %User{full_name: "john doe"} An update operation specific for structs is also available: %User{user | age: 28} The syntax above will guarantee the given keys are valid at compilation time and it will guarantee at runtime the given argument is a struct, failing with BadStructError otherwise. Although structs are maps, by default structs do not implement any of the protocols implemented for maps. Check Kernel.defprotocol/2 for more information on how structs can be used with protocols for polymorphic dispatch. Also see Kernel.struct/2 and Kernel.struct!/2 for examples on how to create and update structs dynamically. Creates a map. See the Map module for more information about maps, their syntax, and ways to access and manipulate them. AST representation Regardless of whether => or the keyword syntax is used, key-value pairs in maps are always represented internally as a list of two-element tuples for simplicity: iex> quote do ...> %{"a" => :b, c: :d} ...> end {:%{}, [], [{"a", :b}, {:c, :d}]} Captures or creates an anonymous function. Capture The capture operator is most commonly used to capture a function with given name and arity from a module: iex> fun = &Kernel.is_atom/1 iex> fun.(:atom) true iex> fun.("string") false In the example above, we captured Kernel.is_atom/1 as an anonymous function and then invoked it. 
The capture operator can also be used to capture local functions, including private ones, and imported functions by omitting the module name: &local_function/1 Anonymous functions The capture operator can also be used to partially apply functions, where &1, &2 and so on can be used as value placeholders. For example: iex> double = &(&1 * 2) iex> double.(2) 4 In other words, &(&1 * 2) is equivalent to fn x -> x * 2 end. We can partially apply a remote function with a placeholder: iex> take_five = &Enum.take(&1, 5) iex> take_five.(1..10) [1, 2, 3, 4, 5] Another example, using an imported or local function: iex> first_elem = &elem(&1, 0) iex> first_elem.({0, 1}) 0 The & operator can be used with more complex expressions: iex> fun = &(&1 + &2 + &3) iex> fun.(1, 2, 3) 6 As well as with lists and tuples: iex> fun = &{&1, &2} iex> fun.(1, 2) {1, 2} iex> fun = &[&1 | &2] iex> fun.(1, [2, 3]) [1, 2, 3] The only restrictions when creating anonymous functions are that at least one placeholder must be present, i.e. it must contain at least &1, and that block expressions are not supported: # No placeholder, fails to compile. &(:foo) # Block expression, fails to compile. &(&1; &2) Defines a remote call, a call to an anonymous function, or an alias. The dot (.) in Elixir can be used for remote calls: iex> String.downcase("FOO") "foo" In the example above, we have used . to invoke downcase in the String module, passing "FOO" as argument. The dot may be used to invoke anonymous functions too: iex> (fn(n) -> n end).(7) 7 in which case there is a function on the left hand side. We can also use the dot for creating aliases: iex> Hello.World Hello.World This time, we have joined two aliases, defining the final alias Hello.World. Syntax The right side of . may be a word starting with an uppercase letter, which represents an alias, a word starting with a lowercase letter or underscore, any valid language operator or any name wrapped in single- or double-quotes.
Those are all valid examples: iex> Kernel.Sample Kernel.Sample iex> Kernel.length([1, 2, 3]) 3 iex> Kernel.+(1, 2) 3 iex> Kernel."length"([1, 2, 3]) 3 iex> Kernel.'+'(1, 2) 3 Note that Kernel."FUNCTION_NAME" will be treated as a remote call and not an alias. This choice was done so every time single- or double-quotes are used, we have a remote call regardless of the quote contents. This decision is also reflected in the quoted expressions discussed below. When the dot is used to invoke an anonymous function there is only one operand, but it is still written using a postfix notation: iex> negate = fn(n) -> -n end iex> negate.(7) -7 Quoted expression When . is used, the quoted expression may take two distinct forms. When the right side starts with a lowercase letter (or underscore): iex> quote do ...> String.downcase("FOO") ...> end {{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]} Notice we have an inner tuple, containing the atom :. representing the dot as first element: {:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]} This tuple follows the general quoted expression structure in Elixir, with the name as first argument, some keyword list as metadata as second, and the list of arguments as third. In this case, the arguments are the alias String and the atom :downcase. The second argument in a remote call is always an atom regardless of the literal used in the call: iex> quote do ...> String."downcase"("FOO") ...> end {{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]} The tuple containing :. is wrapped in another tuple, which actually represents the function call, and has "FOO" as argument. In the case of calls to anonymous functions, the inner tuple with the dot special form has only one argument, reflecting the fact that the operator is unary: iex> quote do ...> negate.(0) ...> end {{:., [], [{:negate, [], __MODULE__}]}, [], [0]} When the right side is an alias (i.e. 
starts with uppercase), we get instead: iex> quote do ...> Hello.World ...> end {:__aliases__, [alias: false], [:Hello, :World]} We go into more details about aliases in the __aliases__/1 special form documentation. Unquoting We can also use unquote to generate a remote call in a quoted expression: iex> x = :downcase iex> quote do ...> String.unquote(x)("FOO") ...> end {{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]} Similar to Kernel."FUNCTION_NAME", unquote(x) will always generate a remote call, independent of the value of x. To generate an alias via the quoted expression, one needs to rely on Module.concat/2: iex> x = Sample iex> quote do ...> Module.concat(String, unquote(x)) ...> end {{:., [], [{:__aliases__, [alias: false], [:Module]}, :concat]}, [], [{:__aliases__, [alias: false], [:String]}, Sample]} Used by types and bitstrings to specify types. This operator is used in two distinct occasions in Elixir. It is used in typespecs to specify the type of a variable, function or of a type itself: @type number :: integer | float @spec add(number, number) :: number It may also be used in bit strings to specify the type of a given bit segment: <<int::integer-little, rest::bits>> = bits Read the documentation on the Typespec page and <<>>/1 for more information on typespecs and bitstrings respectively. Defines a new bitstring. Examples iex> <<1, 2, 3>> <<1, 2, 3>> Types A bitstring is made of many segments and each segment has a type. 
There are 9 types used in bitstrings: integer float bits(alias for bitstring) bitstring binary bytes(alias for binary) utf8 utf16 utf32 When no type is specified, the default is integer: iex> <<1, 2, 3>> <<1, 2, 3>> Elixir also accepts by default the segment to be a literal string or a literal charlist, which are by default expanded to integers: iex> <<0, "foo">> <<0, 102, 111, 111>> Variables or any other type need to be explicitly tagged: iex> rest = "oo" iex> <<102, rest>> ** (ArgumentError) argument error We can solve this by explicitly tagging it as binary: iex> rest = "oo" iex> <<102, rest::binary>> "foo" The utf8, utf16, and utf32 types are for Unicode codepoints. They can also be applied to literal strings and charlists: iex> <<"foo"::utf16>> <<0, 102, 0, 111, 0, 111>> iex> <<"foo"::utf32>> <<0, 0, 0, 102, 0, 0, 0, 111, 0, 0, 0, 111>> Options Many options can be given by using - as separator. Order is arbitrary, so the following are all equivalent: <<102::integer-native, rest::binary>> <<102::native-integer, rest::binary>> <<102::unsigned-big-integer, rest::binary>> <<102::unsigned-big-integer-size(8), rest::binary>> <<102::unsigned-big-integer-8, rest::binary>> <<102::8-integer-big-unsigned, rest::binary>> <<102, rest::binary>> Unit and Size The length of the match is equal to the unit (a number of bits) times the size (the number of repeated segments of length unit). Sizes for types are a bit more nuanced. The default size for integers is 8. For floats, it is 64. For floats, size * unit must result in 32 or 64, corresponding to IEEE 754 binary32 and binary64, respectively. For binaries, the default is the size of the binary. Only the last binary in a match can use the default size. All others must have their size specified explicitly, even if the match is unambiguous. 
For example: iex> <<name::binary-size(5), " the ", species::binary>> = <<"Frank the Walrus">> "Frank the Walrus" iex> {name, species} {"Frank", "Walrus"} Failing to specify the size for any segment other than the last causes compilation to fail: <<name::binary, " the ", species::binary>> = <<"Frank the Walrus">> ** (CompileError): a binary field without size is only allowed at the end of a binary pattern Shortcut Syntax Size and unit can also be specified using a syntax shortcut when passing integer values: iex> x = 1 iex> <<x::8>> == <<x::size(8)>> true iex> <<x::8*4>> == <<x::size(8)-unit(4)>> true This syntax reflects the fact that the effective size is given by multiplying the size by the unit. Modifiers Some types have associated modifiers to clear up ambiguity in byte representation. Sign Integers can be signed or unsigned, defaulting to unsigned. iex> <<int::integer>> = <<-100>> <<156>> iex> int 156 iex> <<int::integer-signed>> = <<-100>> <<156>> iex> int -100 signed and unsigned are only used when matching binaries (see below) and apply only to integers. iex> <<-100::signed, _rest::binary>> = <<-100, "foo">> <<156, 102, 111, 111>> Endianness Elixir has three options for endianness: big, little, and native. The default is big: iex> <<number::little-integer-size(16)>> = <<0, 1>> <<0, 1>> iex> number 256 iex> <<number::big-integer-size(16)>> = <<0, 1>> <<0, 1>> iex> number 1 native is determined by the VM at startup and will depend on the host operating system. Binary/Bitstring Matching Binary matching is a powerful feature in Elixir that is useful for extracting information from binaries as well as pattern matching.
Binary matching can be used by itself to extract information from binaries: iex> <<"Hello, ", place::binary>> = "Hello, World" "Hello, World" iex> place "World" Or as part of function definitions to pattern match: defmodule ImageTyper do @png_signature <<137::size(8), 80::size(8), 78::size(8), 71::size(8), 13::size(8), 10::size(8), 26::size(8), 10::size(8)>> @jpg_signature <<255::size(8), 216::size(8)>> def type(<<@png_signature, rest::binary>>), do: :png def type(<<@jpg_signature, rest::binary>>), do: :jpg def type(_), do: :unknown end Performance & Optimizations The Erlang compiler can provide a number of optimizations on binary creation and matching. To see optimization output, set the bin_opt_info compiler option: ERL_COMPILER_OPTIONS=bin_opt_info mix compile To learn more about specific optimizations and performance considerations, check out Erlang’s Efficiency Guide on handling binaries. Matches the value on the right against the pattern on the left. Accesses an already bound variable in match clauses. Also known as the pin operator. Examples Elixir allows variables to be rebound via static single assignment: iex> x = 1 iex> x = x + 1 iex> x 2 However, in some situations, it is useful to match against an existing value, instead of rebinding. This can be done with the ^ special form, colloquially known as the pin operator: iex> x = 1 iex> ^x = List.first([1]) iex> ^x = List.first([2]) ** (MatchError) no match of right hand side value: 2 Note that ^x always refers to the value of x prior to the match. The following example will match: iex> x = 0 iex> {x, ^x} = {1, 0} iex> x 1 Returns the current calling environment as a Macro.Env struct. In the environment you can access the filename, line numbers, set up aliases, the function and others. Returns the absolute path of the directory of the current file as a binary. Although the directory can be accessed as Path.dirname(__ENV__.file), this macro is a convenient shortcut.
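As a small sketch of that shortcut in use (the module name and "fixtures" path below are hypothetical):

```elixir
defmodule MyApp.Fixtures do
  # __DIR__ expands at compile time to the directory of this source
  # file, so files can be located relative to the source rather than
  # relative to the current working directory.
  @fixtures_path Path.join(__DIR__, "fixtures")

  def fixtures_path, do: @fixtures_path
end
```

Because __DIR__ is resolved at compile time, the path is baked into the module and does not change when the application is later started from a different directory.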
Returns the current environment information as a Macro.Env struct. In the environment you can access the current filename, line numbers, set up aliases, the current function and others. Returns the current module name as an atom or nil otherwise. Although the module can be accessed via __ENV__/0, this macro is a convenient shortcut. Internal special form to hold aliases information. It is usually compiled to an atom: iex> quote do ...> Foo.Bar ...> end {:__aliases__, [alias: false], [:Foo, :Bar]} Elixir represents Foo.Bar as __aliases__ so calls can be unambiguously identified by the operator :.. For example: iex> quote do ...> Foo.bar ...> end {{:., [], [{:__aliases__, [alias: false], [:Foo]}, :bar]}, [], []} Whenever an expression iterator sees a :. as the tuple key, it can be sure that it represents a call and the second argument in the list is an atom. On the other hand, aliases hold some properties: The head element of aliases can be any term that must expand to an atom at compilation time. The tail elements of aliases are guaranteed to always be atoms. When the head element of aliases is the atom :Elixir, no expansion happens. Internal special form for block expressions. This is the special form used whenever we have a block of expressions in Elixir. This special form is private and should not be invoked directly: iex> quote do ...> 1 ...> 2 ...> 3 ...> end {:__block__, [], [1, 2, 3]} alias/2 is used to set up aliases, often useful with module names. Examples alias/2 can be used to set up an alias for any module: defmodule Math do alias MyKeyword, as: Keyword end In the example above, we have set up MyKeyword to be aliased as Keyword. So now, any reference to Keyword will be automatically replaced by MyKeyword.
In case one wants to access the original Keyword, it can be done by accessing Elixir: Keyword.values #=> uses MyKeyword.values Elixir.Keyword.values #=> uses Keyword.values Notice that calling alias without the as: option automatically sets an alias based on the last part of the module. For example: alias Foo.Bar.Baz Is the same as: alias Foo.Bar.Baz, as: Baz We can also alias multiple modules in one line: alias Foo.{Bar, Baz, Biz} Is the same as: alias Foo.Bar alias Foo.Baz alias Foo.Biz Lexical scope import/2, require/2 and alias/2 are called directives and all have lexical scope. This means you can set up aliases inside specific functions and it won’t affect the overall scope. Warnings If you alias a module and you don’t use the alias, Elixir is going to issue a warning implying the alias is not being used. In case the alias is generated automatically by a macro, Elixir won’t emit any warnings though, since the alias was not explicitly defined. Both warning behaviours could be changed by explicitly setting the :warn option to true or false. Matches the given expression against the given clauses. Examples case thing do {:selector, i, value} when is_integer(i) -> value value -> value end In the example above, we match thing against each clause “head” and execute the clause “body” corresponding to the first clause that matches. If no clause matches, an error is raised. For this reason, it may be necessary to add a final catch-all clause (like _) which will always match. x = 10 case x do 0 -> "This clause won't match" _ -> "This clause would match any value (x = #{x})" end #=> "This clause would match any value (x = 10)" Variables handling Notice that variables bound in a clause “head” do not leak to the outer context: case data do {:ok, value} -> value :error -> nil end value #=> unbound variable value However, variables explicitly bound in the clause “body” are accessible from the outer context: value = 7 case lucky? 
do false -> value = 13 true -> true end value #=> 7 or 13 In the example above, value is going to be 7 or 13 depending on the value of lucky?. In case value has no previous value before case, clauses that do not explicitly bind a value have the variable bound to nil. If you want to pattern match against an existing variable, you need to use the ^/1 operator: x = 1 case 10 do ^x -> "Won't match" _ -> "Will match" end #=> "Will match" Evaluates the expression corresponding to the first clause that evaluates to a truthy value. cond do hd([1, 2, 3]) -> "1 is considered as true" end #=> "1 is considered as true" Raises an error if all conditions evaluate to nil or false. For this reason, it may be necessary to add a final always-truthy condition (anything non- false and non- nil), which will always match. Examples cond do 1 + 1 == 1 -> "This will never match" 2 * 2 != 4 -> "Nor this" true -> "This will" end #=> "This will" Defines an anonymous function. Examples iex> add = fn a, b -> a + b end iex> add.(1, 2) 3 Anonymous functions can also have multiple clauses. All clauses should expect the same number of arguments: iex> negate = fn ...> true -> false ...> false -> true ...> end iex> negate.(false) true Comprehensions allow you to quickly build a data structure from an enumerable or a bitstring. Let’s start with an example: iex> for n <- [1, 2, 3, 4], do: n * 2 [2, 4, 6, 8] A comprehension accepts many generators and filters. 
Enumerable generators are defined using <-: # A list generator: iex> for n <- [1, 2, 3, 4], do: n * 2 [2, 4, 6, 8] # A comprehension with two generators iex> for x <- [1, 2], y <- [2, 3], do: x * y [2, 3, 4, 6] Filters can also be given: # A comprehension with a generator and a filter iex> for n <- [1, 2, 3, 4, 5, 6], rem(n, 2) == 0, do: n [2, 4, 6] Note generators can also be used to filter, as they remove any value that doesn’t match the pattern on the left side of <-: iex> users = [user: "john", admin: "meg", guest: "barbara"] iex> for {type, name} when type != :guest <- users do ...> String.upcase(name) ...> end ["JOHN", "MEG"] Bitstring generators are also supported and are very useful when you need to organize bitstring streams: iex> pixels = <<213, 45, 132, 64, 76, 32, 76, 0, 0, 234, 32, 15>> iex> for <<r::8, g::8, b::8 <- pixels>>, do: {r, g, b} [{213, 45, 132}, {64, 76, 32}, {76, 0, 0}, {234, 32, 15}] Variable assignments inside the comprehension, be it in generators, filters or inside the block, are not reflected outside of the comprehension. Into In the examples above, the result returned by the comprehension was always a list. The returned result can be configured by passing an :into option, that accepts any structure as long as it implements the Collectable protocol. For example, we can use bitstring generators with the :into option to easily remove all spaces in a string: iex> for <<c <- " hello world ">>, c != ?\s, into: "", do: <<c>> "helloworld" The IO module provides streams that are both Enumerable and Collectable; here is an upcase echo server using comprehensions: for line <- IO.stream(:stdio, :line), into: IO.stream(:stdio, :line) do String.upcase(line) end Uniq uniq: true can also be given to comprehensions to guarantee that results are only added to the collection if they were not returned before.
For example: iex> for(x <- [1, 1, 2, 3], uniq: true, do: x * 2) [2, 4, 6] iex> for(<<x <- "abcabc">>, uniq: true, into: "", do: <<x-32>>) "ABC" Imports functions and macros from other modules. import/2 allows one to easily access functions or macros from others modules without using the qualified name. Examples If you are using several functions from a given module, you can import those functions and reference them as local functions, for example: iex> import List iex> flatten([1, [2], 3]) [1, 2, 3] Selector By default, Elixir imports functions and macros from the given module, except the ones starting with underscore (which are usually callbacks): import List A developer can filter to import only macros or functions via the only option: import List, only: :functions import List, only: :macros Alternatively, Elixir allows a developer to pass pairs of name/arities to :only or :except as a fine grained control on what to import (or not): import List, only: [flatten: 1] import String, except: [split: 2] Notice that calling except is always exclusive on a previously declared import/2. If there is no previous import, then it applies to all functions and macros in the module. For example: import List, only: [flatten: 1, keyfind: 4] import List, except: [flatten: 1] After the two import calls above, only List.keyfind/4 will be imported. Underscore functions By default functions starting with _ are not imported. If you really want to import a function starting with _ you must explicitly include it in the :only selector. import File.Stream, only: [__build__: 3] Lexical scope It is important to notice that import/2 is lexical. 
This means you can import specific macros inside specific functions: defmodule Math do def some_function do # 1) Disable "if/2" from Kernel import Kernel, except: [if: 2] # 2) Require the new "if/2" macro from MyMacros import MyMacros # 3) Use the new macro if do_something, it_works end end In the example above, we imported macros from MyMacros, replacing the original if/2 implementation by our own within that specific function. All other functions in that module will still be able to use the original one. Warnings If you import a module and you don’t use any of the imported functions or macros from this module, Elixir is going to issue a warning implying the import is not being used. In case the import is generated automatically by a macro, Elixir won’t emit any warnings though, since the import was not explicitly defined. Both warning behaviours could be changed by explicitly setting the :warn option to true or false. Ambiguous function/macro names If two modules A and B are imported and they both contain a foo function with an arity of 1, an error is only emitted if an ambiguous call to foo/1 is actually made; that is, the errors are emitted lazily, not eagerly. Gets the representation of any expression. Examples iex> quote do ...> sum(1, 2, 3) ...> end {:sum, [], [1, 2, 3]} Explanation Any Elixir code can be represented using Elixir data structures. The building block of Elixir macros is a tuple with three elements, for example: {:sum, [], [1, 2, 3]} The tuple above represents a function call to sum passing 1, 2 and 3 as arguments. The tuple elements are: The first element of the tuple is always an atom or another tuple in the same representation. The second element of the tuple represents metadata. The third element of the tuple are the arguments for the function call. The third argument may be an atom, which is usually a variable (or a local call). Options :unquote- when false, disables unquoting. 
Useful when you have a quote inside another quote and want to control what quote is able to unquote. :location- when set to :keep, keeps the current line and file from quote. Read the Stacktrace information section below for more information. :line- sets the quoted expressions to have the given line. :generated- marks the given chunk as generated so it does not emit warnings. Currently it only works on special forms (for example, you can annotate a casebut not an if). :context- sets the resolution context. :bind_quoted- passes a binding to the macro. Whenever a binding is given, unquote/1is automatically disabled. Quote literals Besides the tuple described above, Elixir has a few literals that when quoted return themselves. They are: :sum #=> Atoms 1 #=> Integers 2.0 #=> Floats [1, 2] #=> Lists "strings" #=> Strings {key, value} #=> Tuples with two elements Quote and macros quote/2 is commonly used with macros for code generation. As an exercise, let’s define a macro that multiplies a number by itself (squared). Note there is no reason to define such as a macro (and it would actually be seen as a bad practice), but it is simple enough that it allows us to focus on the important aspects of quotes and macros: defmodule Math do defmacro squared(x) do quote do unquote(x) * unquote(x) end end end We can invoke it as: import Math IO.puts "Got #{squared(5)}" At first, there is nothing in this example that actually reveals it is a macro. But what is happening is that, at compilation time, squared(5) becomes 5 * 5. The argument 5 is duplicated in the produced code, we can see this behaviour in practice though because our macro actually has a bug: import Math my_number = fn -> IO.puts "Returning 5" 5 end IO.puts "Got #{squared(my_number.())}" The example above will print: Returning 5 Returning 5 Got 25 Notice how “Returning 5” was printed twice, instead of just once. 
This is because a macro receives an expression and not a value (which is what we would expect in a regular function). This means that: squared(my_number.()) Actually expands to: my_number.() * my_number.() Which invokes the function twice, explaining why we get the printed value twice! In the majority of the cases, this is actually unexpected behaviour, and that’s why one of the first things you need to keep in mind when it comes to macros is to not unquote the same value more than once. Let’s fix our macro: defmodule Math do defmacro squared(x) do quote do x = unquote(x) x * x end end end Now invoking square(my_number.()) as before will print the value just once. In fact, this pattern is so common that most of the times you will want to use the bind_quoted option with quote/2: defmodule Math do defmacro squared(x) do quote bind_quoted: [x: x] do x * x end end end :bind_quoted will translate to the same code as the example above. :bind_quoted can be used in many cases and is seen as good practice, not only because it helps prevent us from running into common mistakes, but also because it allows us to leverage other tools exposed by macros, such as unquote fragments discussed in some sections below. Before we finish this brief introduction, you will notice that, even though we defined a variable x inside our quote: quote do x = unquote(x) x * x end When we call: import Math squared(5) x #=> ** (CompileError) undefined variable x or undefined function x/0 We can see that x did not leak to the user context. This happens because Elixir macros are hygienic, a topic we will discuss at length in the next sections as well. 
Hygiene in variables Consider the following example: defmodule Hygiene do defmacro no_interference do quote do a = 1 end end end require Hygiene a = 10 Hygiene.no_interference a #=> 10 In the example above, a returns 10 even if the macro is apparently setting it to 1 because variables defined in the macro do not affect the context the macro is executed in. If you want to set or get a variable in the caller’s context, you can do it with the help of the var! macro: defmodule NoHygiene do defmacro interference do quote do var!(a) = 1 end end end require NoHygiene a = 10 NoHygiene.interference a #=> 1 Note that you cannot even access variables defined in the same module unless you explicitly give it a context: defmodule Hygiene do defmacro write do quote do a = 1 end end defmacro read do quote do a end end end Hygiene.write Hygiene.read #=> ** (RuntimeError) undefined variable a or undefined function a/0 For such, you can explicitly pass the current module scope as argument: defmodule ContextHygiene do defmacro write do quote do var!(a, ContextHygiene) = 1 end end defmacro read do quote do var!(a, ContextHygiene) end end end ContextHygiene.write ContextHygiene.read #=> 1 Hygiene in aliases Aliases inside quote are hygienic by default. Consider the following example: defmodule Hygiene do alias Map, as: M defmacro no_interference do quote do M.new end end end require Hygiene Hygiene.no_interference #=> %{} Notice that, even though the alias M is not available in the context the macro is expanded, the code above works because M still expands to Map. Similarly, even if we defined an alias with the same name before invoking a macro, it won’t affect the macro’s result: defmodule Hygiene do alias Map, as: M defmacro no_interference do quote do M.new end end end require Hygiene alias SomethingElse, as: M Hygiene.no_interference #=> %{} In some cases, you want to access an alias or a module defined in the caller. For such, you can use the alias! 
macro: defmodule Hygiene do # This will expand to Elixir.Nested.hello defmacro no_interference do quote do Nested.hello end end # This will expand to Nested.hello for # whatever is Nested in the caller defmacro interference do quote do alias!(Nested).hello end end end defmodule Parent do defmodule Nested do def hello, do: "world" end require Hygiene Hygiene.no_interference #=> ** (UndefinedFunctionError) ... Hygiene.interference #=> "world" end Hygiene in imports Similar to aliases, imports in Elixir are hygienic. Consider the following code: defmodule Hygiene do defmacrop get_length do quote do length([1, 2, 3]) end end def return_length do import Kernel, except: [length: 1] get_length end end Hygiene.return_length #=> 3 Notice how Hygiene.return_length/0 returns 3 even though the Kernel.length/1 function is not imported. In fact, even if return_length/0 imported a function with the same name and arity from another module, it wouldn’t affect the function result: def return_length do import String, only: [length: 1] get_length end Calling this new return_length/0 will still return 3 as result. Elixir is smart enough to delay the resolution to the latest possible moment. So, if you call length([1, 2, 3]) inside quote, but no length/1 function is available, it is then expanded in the caller: defmodule Lazy do defmacrop get_length do import Kernel, except: [length: 1] quote do length("hello") end end def return_length do import Kernel, except: [length: 1] import String, only: [length: 1] get_length end end Lazy.return_length #=> 5 Stacktrace information When defining functions via macros, developers have the option of choosing if runtime errors will be reported from the caller or from inside the quote. 
Let’s see an example: # adder.ex defmodule Adder do @doc "Defines a function that adds two numbers" defmacro defadd do quote location: :keep do def add(a, b), do: a + b end end end # sample.ex defmodule Sample do import Adder defadd end require Sample Sample.add(:one, :two) #=> ** (ArithmeticError) bad argument in arithmetic expression #=> adder.ex:5: Sample.add/2 When using location: :keep and invalid arguments are given to Sample.add/2, the stacktrace information will point to the file and line inside the quote. Without location: :keep, the error is reported to where defadd was invoked. Note location: :keep affects only definitions inside the quote. Binding and unquote fragments Elixir quote/unquote mechanisms provides a functionality called unquote fragments. Unquote fragments provide an easy way to generate functions on the fly. Consider this example: kv = [foo: 1, bar: 2] Enum.each kv, fn {k, v} -> def unquote(k)(), do: unquote(v) end In the example above, we have generated the functions foo/0 and bar/0 dynamically. Now, imagine that, we want to convert this functionality into a macro: defmacro defkv(kv) do Enum.map kv, fn {k, v} -> quote do def unquote(k)(), do: unquote(v) end end end We can invoke this macro as: defkv [foo: 1, bar: 2] However, we can’t invoke it as follows: kv = [foo: 1, bar: 2] defkv kv This is because the macro is expecting its arguments to be a keyword list at compilation time. Since in the example above we are passing the representation of the variable kv, our code fails. This is actually a common pitfall when developing macros. We are assuming a particular shape in the macro. We can work around it by unquoting the variable inside the quoted expression: defmacro defkv(kv) do quote do Enum.each unquote(kv), fn {k, v} -> def unquote(k)(), do: unquote(v) end end end If you try to run our new macro, you will notice it won’t even compile, complaining that the variables k and v do not exist. 
This is because of the ambiguity: unquote(k) can either be an unquote fragment, as previously, or a regular unquote as in unquote(kv). One solution to this problem is to disable unquoting in the macro, however, doing that would make it impossible to inject the kv representation into the tree. That’s when the :bind_quoted option comes to the rescue (again!). By using :bind_quoted, we can automatically disable unquoting while still injecting the desired variables into the tree: defmacro defkv(kv) do quote bind_quoted: [kv: kv] do Enum.each kv, fn {k, v} -> def unquote(k)(), do: unquote(v) end end end In fact, the :bind_quoted option is recommended every time one desires to inject a value into the quote. Checks if there is a message matching the given clauses in the current process mailbox. In case there is no such message, the current process hangs until a message arrives or waits until a given timeout value. Examples receive do {:selector, i, value} when is_integer(i) -> value value when is_atom(value) -> value _ -> IO.puts :stderr, "Unexpected message received" end An optional after clause can be given in case the message was not received after the given timeout period, specified in milliseconds: receive do {:selector, i, value} when is_integer(i) -> value value when is_atom(value) -> value _ -> IO.puts :stderr, "Unexpected message received" after 5000 -> IO.puts :stderr, "No message in 5 seconds" end The after clause can be specified even if there are no match clauses. The timeout value given to after can be any expression evaluating to one of the allowed values: :infinity- the process should wait indefinitely for a matching message, this is the same as not using a timeout 0- if there is no matching message in the mailbox, the timeout will occur immediately positive integer smaller than 4_294_967_295( 0xFFFFFFFFin hex notation) - it should be possible to represent the timeout value as an unsigned 32-bit integer. 
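As a sketch of the 0 timeout in practice, the hypothetical helper below drains every message already sitting in the mailbox without ever blocking:

```elixir
defmodule Mailbox do
  # Hypothetical helper: receive/1 pulls queued messages one at a
  # time; `after 0` fires as soon as the mailbox is empty, so the
  # call returns immediately instead of waiting for new messages.
  def flush(acc \\ []) do
    receive do
      message -> flush([message | acc])
    after
      0 -> Enum.reverse(acc)
    end
  end
end

send(self(), :hello)
send(self(), :world)
Mailbox.flush() #=> [:hello, :world]
```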
Variables handling The receive/1 special form handles variables exactly as the case/2 special macro. For more information, check the docs for case/2. Requires a module in order to use its macros. Examples Public functions in modules are globally available, but in order to use macros, you need to opt-in by requiring the module they are defined in. Let’s suppose you created your own if/2 implementation in the module MyMacros. If you want to invoke it, you need to first explicitly require the MyMacros: defmodule Math do require MyMacros MyMacros.if do_something, it_works end An attempt to call a macro that was not loaded will raise an error. Alias shortcut require/2 also accepts as: as an option so it automatically sets up an alias. Please check alias/2 for more information. Calls the overridden function when overriding it with Kernel.defoverridable/1. See Kernel.defoverridable/1 for more information and documentation. Evaluates the given expressions and handles any error, exit, or throw that may have happened. Examples try do do_something_that_may_fail(some_arg) rescue ArgumentError -> IO.puts "Invalid argument given" catch value -> IO.puts "Caught #{inspect(value)}" else value -> IO.puts "Success! The result was #{inspect(value)}" after IO.puts "This is printed regardless if it failed or succeed" end The rescue clause is used to handle exceptions while the catch clause can be used to catch thrown values and exits. The else clause can be used to control flow based on the result of the expression. catch, rescue, and else clauses work based on pattern matching (similar to the case special form). Note that calls inside try/1 are not tail recursive since the VM needs to keep the stacktrace in case an exception happens. rescue clauses Besides relying on pattern matching, rescue clauses provide some conveniences around exceptions that allow one to rescue an exception by its name. 
All the following formats are valid patterns in rescue clauses: # Rescue a single exception without binding the exception # to a variable try do UndefinedModule.undefined_function rescue UndefinedFunctionError -> nil end # Rescue any of the given exception without binding try do UndefinedModule.undefined_function rescue [UndefinedFunctionError, ArgumentError] -> nil end # Rescue and bind the exception to the variable "x" try do UndefinedModule.undefined_function rescue x in [UndefinedFunctionError] -> nil end # Rescue all kinds of exceptions and bind the rescued exception # to the variable "x" try do UndefinedModule.undefined_function rescue x -> nil end Erlang errors Erlang errors are transformed into Elixir ones when rescuing: try do :erlang.error(:badarg) rescue ArgumentError -> :ok end #=> :ok The most common Erlang errors will be transformed into their Elixir counterpart. Those which are not will be transformed into the more generic ErlangError: try do :erlang.error(:unknown) rescue ErlangError -> :ok end #=> :ok In fact, ErlangError can be used to rescue any error that is not a proper Elixir error. For example, it can be used to rescue the earlier :badarg error too, prior to transformation: try do :erlang.error(:badarg) rescue ErlangError -> :ok end #=> :ok catch clauses The catch clause can be used to catch thrown values, exits, and errors. Catching thrown values catch can be used to catch values thrown by Kernel.throw/1: try do throw(:some_value) catch thrown_value -> IO.puts "A value was thrown: #{inspect(thrown_value)}" end Catching values of any kind The catch clause also supports catching exits and errors. 
To do that, it allows matching on both the kind of the caught value as well as the value itself:

try do
  exit(:shutdown)
catch
  :exit, value ->
    IO.puts "Exited with value #{inspect(value)}"
end

try do
  exit(:shutdown)
catch
  kind, value when kind in [:exit, :throw] ->
    IO.puts "Caught exit or throw with value #{inspect(value)}"
end

The catch clause also supports :error alongside :exit and :throw as in Erlang, although this is commonly avoided in favor of raise/rescue control mechanisms. One reason for this is that when catching :error, the error is not automatically transformed into an Elixir error:

try do
  :erlang.error(:badarg)
catch
  :error, :badarg -> :ok
end
#=> :ok

after clauses

An after clause allows you to define cleanup logic that will be invoked both when the block of code passed to try/1 succeeds and also when an error is raised. Note that the process will exit as usual when receiving an exit signal that causes it to exit abruptly and so the after clause is not guaranteed to be executed. Luckily, most resources in Elixir (such as open files, ETS tables, ports, sockets, and so on) are linked to or monitor the owning process and will automatically clean themselves up if that process exits.

File.write!("tmp/story.txt", "Hello, World")
try do
  do_something_with("tmp/story.txt")
after
  File.rm("tmp/story.txt")
end

else clauses

else clauses allow the result of the body passed to try/1 to be pattern matched on:

x = 2
try do
  1 / x
rescue
  ArithmeticError -> :infinity
else
  y when y < 1 and y > -1 -> :small
  _ -> :large
end

If an else clause is not present and no exceptions are raised, the result of the expression will be returned:

x = 1
^x = try do
  1 / x
rescue
  ArithmeticError -> :infinity
end

However, when an else clause is present but the result of the expression does not match any of the patterns then an exception will be raised.
This exception will not be caught by a catch or rescue in the same try:

x = 1
try do
  try do
    1 / x
  rescue
    # The TryClauseError cannot be rescued here:
    TryClauseError -> :error_a
  else
    0 -> :small
  end
rescue
  # The TryClauseError is rescued here:
  TryClauseError -> :error_b
end

Similarly, an exception inside an else clause is not caught or rescued inside the same try:

try do
  try do
    nil
  catch
    # The exit(1) call below cannot be caught here:
    :exit, _ -> :exit_a
  else
    _ -> exit(1)
  end
catch
  # The exit is caught here:
  :exit, _ -> :exit_b
end

This means the VM no longer needs to keep the stacktrace once inside an else clause and so tail recursion is possible when using a try with a tail call as the final call inside an else clause. The same is true for rescue and catch clauses. Only the result of the tried expression falls down to the else clause. If the try ends up in the rescue or catch clauses, their result will not fall down to else:

try do
  throw(:catch_this)
catch
  :throw, :catch_this -> :it_was_caught
else
  # :it_was_caught will not fall down to this "else" clause.
  other -> {:else, other}
end

Variable handling

Since an expression inside try may not have been evaluated due to an exception, any variable created inside try cannot be accessed externally. For instance:

try do
  x = 1
  do_something_that_may_fail(some_arg)
  :ok
catch
  _, _ -> :failed
end

x #=> unbound variable "x"

In the example above, x cannot be accessed since it was defined inside the try clause. A common practice to address this issue is to return the variables defined inside try:

x = try do
  x = 1
  do_something_that_may_fail(some_arg)
  x
catch
  _, _ -> :failed
end
For this, we use unquote: iex> value = 13 iex> quote do ...> sum(1, unquote(value), 3) ...> end {:sum, [], [1, 13, 3]} Unquotes the given list expanding its arguments. Similar to unquote/1. Examples iex> values = [2, 3, 4] iex> quote do ...> sum(1, unquote_splicing(values), 5) ...> end {:sum, [], [1, 2, 3, 4, 5]} Used to combine matching clauses. Let’s start with an example: iex> opts = %{width: 10, height: 15} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> {:ok, height} <- Map.fetch(opts, :height), ...> do: {:ok, width * height} {:ok, 150} If all clauses match, the do block is executed, returning its result. Otherwise the chain is aborted and the non-matched value is returned: iex> opts = %{width: 10} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> {:ok, height} <- Map.fetch(opts, :height), ...> do: {:ok, width * height} :error Guards can be used in patterns as well: iex> users = %{"melany" => "guest", "bob" => :admin} iex> with {:ok, role} when not is_binary(role) <- Map.fetch(users, "bob"), ...> do: {:ok, to_string(role)} {:ok, "admin"} As in for/1, variables bound inside with/1 won’t leak; “bare expressions” may also be inserted between the clauses: iex> width = nil iex> opts = %{width: 10, height: 15} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> double_width = width * 2, ...> {:ok, height} <- Map.fetch(opts, :height), ...> do: {:ok, double_width * height} {:ok, 300} iex> width nil Note that if a “bare expression” fails to match, it will raise a MatchError instead of returning the non-matched value: with :foo = :bar, do: :ok #=> ** (MatchError) no match of right hand side value: :bar An else option can be given to modify what is being returned from with in the case of a failed match: iex> opts = %{width: 10} iex> with {:ok, width} <- Map.fetch(opts, :width), ...> {:ok, height} <- Map.fetch(opts, :height) do ...> {:ok, width * height} ...> else ...> :error -> ...> {:error, :wrong_data} ...> end {:error, :wrong_data} If there is no 
matching else condition, then a WithClauseError exception is raised. Creates a tuple. More information about the tuple data type and about functions to manipulate tuples can be found in the Tuple module; some functions for working with tuples are also available in Kernel (such as Kernel.elem/2 or Kernel.tuple_size/1). AST representation Only two-item tuples are considered literals in Elixir and return themselves when quoted. Therefore, all other tuples are represented in the AST as calls to the :{} special form. iex> quote do ...> {1, 2} ...> end {1, 2} iex> quote do ...> {1, 2, 3} ...> end {:{}, [], [1, 2, 3]}
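As a brief illustrative sketch (not part of the original reference text), the Kernel helpers mentioned above can be tried in IEx; note that elem/2 uses zero-based indices:

```elixir
iex> tuple = {:ok, "hello", 42}
{:ok, "hello", 42}
iex> elem(tuple, 1)
"hello"
iex> tuple_size(tuple)
3
```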
Sending custom emails with Python | Opensource.com

Customize your group emails with Mailmerge, a command-line program that can handle simple and complex emails.

Install Mailmerge

Mailmerge is packaged and available in Fedora, and you can install it from the command line with sudo dnf install python3-mailmerge. You can also install it from PyPI using pip, as the project's README explains.

Configure your Mailmerge files

Three files control how Mailmerge works. If you run mailmerge --sample, it will create template files for you. The files are:

- mailmerge_server.conf: This contains the configuration details for your SMTP host to send emails. Your password is not stored in this file.
- mailmerge_database.csv: This holds the custom data for each email, including the recipients' email addresses.
- mailmerge_template.txt: This is your email's text with placeholder fields that will be replaced using the data from mailmerge_database.csv.

Server.conf

The sample mailmerge_server.conf file includes several examples that should be familiar. If you've ever added email to your phone or set up a desktop email client, you've seen this data before. The big thing to remember is to update your username in the file, especially if you are using one of the example configurations.

Database.csv

The mailmerge_database.csv file is a bit more complicated. It must contain (at minimum) the recipients' email addresses and any other custom details necessary to replace the fields in your email. It is a good idea to write the mailmerge_template.txt file at the same time you create the fields list for this file. I find it helpful to use a spreadsheet to capture this data and export it as a CSV file when I am done. This sample file:

myself@mydomain.com,"Myself",17
bob@bobdomain.com,"Bob",42

allows you to send emails to two people, using their first name and telling them a number.
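If you assemble the CSV by hand, one way to sanity-check it is to parse it with Python's standard csv module, which follows the same quoting conventions. A small sketch using the sample rows above:

```python
import csv
import io

# The two sample rows from above, as Mailmerge would read them.
sample = 'myself@mydomain.com,"Myself",17\nbob@bobdomain.com,"Bob",42\n'

rows = list(csv.reader(io.StringIO(sample)))
for address, name, number in rows:
    print(f"{name} <{address}>: {number}")
```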
This file, while not terribly interesting, illustrates an important habit: Always make yourself the first recipient in the file. This enables you to send yourself a test email to verify everything works as expected before you email the entire list. If any of your values contain commas, you must enclose the entire value in double-quotes ("). If you need to include a double-quote in a double-quoted field, use two double-quotes in a row. Quoting rules are fun, so read about CSVs in Python 3 for specifics. Template.txt As part of my work, I get to share news about travel-funding decisions for our Fedora contributor conference, Flock. A simple email tells people they've been selected for travel funding and their specific funding details. One user-specific detail is how much money we can allocate for their airfare. Here is an abbreviated version of my template file (I've snipped out a lot of the text for brevity): $ cat mailmerge_template.txt TO: {{Email}} SUBJECT: Flock 2019 Funding Offer FROM: Brian Exelbierd <bexelbie@redhat.com> Hi {{Name}}, I am writing you on behalf of the Flock funding committee. You requested funding for your attendance at Flock. After careful consideration we are able to offer you the following funding: Travel Budget: {{Travel_Budget}} <<snip>> The top of the template specifies the recipient, sender, and subject. After the blank line, there's the body of the email. This email needs the recipients' Email, Name, and Travel_Budget from the database.csv file. Notice that those fields are surrounded by double curly braces ({{ and }}). The corresponding mailmerge_database.csv looks like this: $ cat mailmerge_database.csv Name,Email,Travel_Budget Brian,bexelbie@redhat.com,1000 PersonA,persona@fedoraproject.org,1500 PèrsonB,personb@fedoraproject.org,500 Notice that I listed myself first (for testing) and there are two other people in the file. The second person, PèrsonB, has an accented character in their name; Mailmerge will automatically encode it. 
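The placeholder substitution Mailmerge performs can be imitated with a small stdlib-only sketch (this is an illustration of the concept, not Mailmerge's actual implementation):

```python
import re

def fill(template, row):
    # Replace every {{Field}} placeholder with the matching value from the row.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(row[m.group(1)]), template)

row = {"Name": "Brian", "Travel_Budget": 1000}
print(fill("Hi {{Name}}, your travel budget is {{Travel_Budget}}.", row))
# -> Hi Brian, your travel budget is 1000.
```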
That's the whole template concept: Write your email and put placeholders in double curly braces. Then create a database that provides those values. Now let's test the email. Test and send simple email merges Do a dry-run Start by doing a dry-run that prints the emails, with the placeholder fields completed, to the screen. By default, if you run the command mailmerge, it will do a dry-run of the first email: $ mailmerge >>>:17:15 -0000 Hi Brian, I am writing you on behalf of the Flock funding committee. You requested funding for your attendance at Flock. After careful consideration we are able to offer you the following funding: Travel Budget: 1000 <<snip>> >>> sent message 0 DRY RUN >>> No attachments were sent with the emails. >>> Limit was 1 messages. To remove the limit, use the --no-limit option. >>> This was a dry run. To send messages, use the --no-dry-run option. Reviewing the first email (message 0, as counting starts from zero, like many things in computer science), you can see my name and travel budget are correct. If you want to review every email, enter mailmerge --no-limit to tell Mailmerge not to limit itself to the first email. Here's the dry-run of the third email, which shows the special character encoding: >>> message 2 TO: personb@fedoraproject.org SUBJECT: Flock 2019 Funding Offer FROM: Brian Exelbierd <bexelbie@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Date: Sat, 20 Jul 2019 18:22:48 -0000 Hi P=E8rsonB, That's not an error; P=E8rsonB is the encoded form of PèrsonB. Send a test message Now, send a test email with the command mailmerge --no-dry-run, which tells Mailmerge to send a message to the first email on the list: $ mailmerge --no-dry-run >>>:25:45 -0000 Hi Brian, I am writing you on behalf of the Flock funding committee. You requested funding for your attendance at Flock. 
After careful consideration we are able to offer you the following funding: Travel Budget: 1000 <<snip>> >>> Read SMTP server configuration from mailmerge_server.conf >>> host = smtp.gmail.com >>> port = 587 >>> username = bexelbie@redhat.com >>> security = STARTTLS >>> password for bexelbie@redhat.com on smtp.gmail.com: >>> sent message 0 >>> No attachments were sent with the emails. >>> Limit was 1 messages. To remove the limit, use the --no-limit option. On the fourth to last line, you can see it prompts you for your password. If you're using two-factor authentication or domain-managed logins, you will need to create an application password that bypasses these controls. If you're using Gmail and similar systems, you can do it directly from the interface; otherwise, contact your email system administrator. This will not compromise the security of your email system, but you should still keep the password complex and secret. When I checked my email account, I received a beautifully formatted test email. If your test email looks ready, send all the emails by entering mailmerge --no-dry-run --no-limit. Send complex emails You can really see the power of Mailmerge when you take advantage of Jinja2 templating. I've found it useful for including conditional text and sending attachments. Here is a complex template and the corresponding database: $ cat mailmerge_template.txt TO: {{Email}} SUBJECT: Flock 2019 Funding Offer FROM: Brian Exelbierd <bexelbie@redhat.com> ATTACHMENT: attachments/{{File}} Hi {{Name}}, I am writing you on behalf of the Flock funding committee. You requested funding for your attendance at Flock. 
After careful consideration we are able to offer you the following funding: Travel Budget: {{Travel_Budget}} {% if Hotel == "Yes" -%} Lodging: Lodging in the hotel Wednesday-Sunday (4 nights) {%- endif %} <<snip>> $ cat mailmerge_database.csv Name,Email,Travel_Budget,Hotel,File Brian,bexelbie@redhat.com,1000,Yes,visa_bex.pdf PersonA,persona@fedoraproject.org,1500,No,visa_person_a.pdf PèrsonB,personb@fedoraproject.org,500,Yes,visa_person_b.pdf There are two new things in this email. First, there's an attachment. I have to send visa invitation letters to international travelers to help them come to Flock, and the ATTACHMENT part of the header specifies which file to attach. To keep my directory clean, I put all of them in my Attachments subdirectory. Second, it includes conditional information about a hotel, because some people receive funding for their hotel stay, and I need to include those details for those who do. This is done with the if construction: {% if Hotel == "Yes" -%} Lodging: Lodging in the hotel Wednesday-Sunday (4 nights) {%- endif %} This works just like an if in most programming languages. Jinja2 is very expressive and can do multi-level conditions. Experiment with making your life easier by including database elements that control the contents of the email. Using whitespace is important for email readability. The minus (-) symbols in if and endif are part of how Jinja2 controls whitespace. There are lots of options, so experiment to see what looks best for you. Also note that I extended the database with two fields, Hotel and File. These are the values that control the inclusion of the hotel text and provide the name of the attachment. In my example, PèrsonB and I got hotel funding, while PersonA didn't. Doing a dry-run and sending the emails is the same whether you're using a simple or a complex template. Give it a try! You can also experiment with using conditionals (if … endif) in the header. 
You can, for example, have an attachment only if one is in the database, or maybe you need to change the sender's name for some emails but not others.

Mailmerge's advantages

The Mailmerge program provides a powerful but simple method of sending lots of customized emails. Everyone gets only the information they need, and extraneous steps and details are omitted. Even for simple group emails, I have found this method much more effective than sending one email to a bunch of people using CC or BCC. A lot of people filter their email and delay reading anything not sent directly to them. Using Mailmerge ensures that every person gets their own email. Messages will filter properly for the recipient and no one can accidentally "reply all" to the entire group.

Comments

Thank you for the in-depth article on mailmerge in Python. Very clearly explained.
HI, I was trying to learn more about I/O with files in C++ and I can't figure out why this program can't output to the file...

#include<iostream>
#include<string>
#include<fstream>
#include<algorithm>
using namespace std;

char userinp;

void print_interface()
{
    cout << "Enter E to Encrypt" << endl << "Enter D to Decrypt" << endl;
    cout << "Enter R to Erase the file" << endl << "Enter X to Exit" << endl;
}

int main()
{
    int cipher = 0, i = 0, iter = 0;
    string data[1000];
    fstream myfile;
    myfile.open("myfile.txt"); // For some reasons this gives me error : myfile.open("myfile.txt", ios::app, ios::in, ios::out);
    if(! myfile.is_open()) // It was something about overloaded function can't take 4 arguments...
    {
        cout << "Could not open the file." << endl;
    }
    do
    {
        print_interface();
        cin >> userinp;
        if (userinp == 'E' || userinp == 'e')
        {
            system("cls");
            cout << "Enter the cipher to user (Number)" << endl;
            cin >> cipher;
            while(! myfile.eof())
            {
                getline(myfile, data[iter]);
                cout << data[iter];
                for(i = 0; i < data[iter].length(); i++)
                {
                    data[iter][i] += cipher;
                }
                cout << data[iter];
                iter++;
            }
            for(i = 0; i < iter; i++)
            {
                cout << data[i] << endl;
                myfile << data[i] << endl;
            }
        }
        else if (userinp == 'D' || userinp == 'd')
        {
        }
        else if (userinp == 'R' || userinp == 'r')
        {
        }
        else if (userinp == 'X' || userinp == 'x')
        {
            break;
        }
    } while(1);
    myfile.close();
    return 0;
}

What I want this program to do is take input from myfile.txt and then in the end, completely overwrite the data read with the scrambled code. It reads in the data but it can't write to the file. I tried completely erasing the file between the reading and writing part. It didn't make any difference. This didn't work either :

for(i = 0; i < iter; i++)
{
    outfile.open("myfile.txt");
    cout << data[i] << endl;
    myfile << data[i] << endl;
    outfile.close();
}

outfile is ofstream file...
ConcurrentDictionary<TKey, TValue> is slow. Or is it?

On this year's MS Fest Jarda Jirava had a presentation about Akka.Net. I was interested in this topic, because the actor model is one way (out of many – sadly no silver bullet yet) to tackle concurrency and parallelism problems. While showing some actor demos there was a nice comparison with home-made solutions. So you don't have to go to a full framework to try thinking in actors. The demo was just storing some key and value into a hashtable. The hand-made version was first using just a plain old Dictionary<int, int> with lock/Monitor and then using ConcurrentDictionary<int, int>. To my (and Jarda's) surprise, the ConcurrentDictionary<int, int> was slower. So I started digging into it and looking for a reason, because I was confident the ConcurrentDictionary<int, int> should be faster compared to only a single Monitor.

The test code

Based on the presentation code I extracted a simple test bench to play with. I focused on having as few moving parts as possible, to be able to exactly control what my code is doing. I ended up with this code.

public class Test
{
    object _syncRoot;
    Dictionary<int, int> _monitorDictionary;
    ConcurrentDictionary<int, int> _concurrentDictionary;

    public Test()
    {
        _syncRoot = new object();
        _monitorDictionary = new Dictionary<int, int>();
        _concurrentDictionary = new ConcurrentDictionary<int, int>();
    }

    public void MonitorDictionary()
    {
        TestHelper(i =>
        {
            lock (_syncRoot)
            {
                _monitorDictionary[i] = i;
            }
        }, "Monitor");
    }

    public void ConcurrentDictionary()
    {
        TestHelper(i =>
        {
            _concurrentDictionary[i] = i;
        }, "Concurrent");
    }

    static void TestHelper(Action<int> test, string name)
    {
        Console.Write($"{name}:\t");
        var sw = Stopwatch.StartNew();
        Parallel.For(0, 40000000, test);
        Console.WriteLine(sw.Elapsed);
    }
}

I'm running it with optimizations turned on, without debugger attached and in 64-bits (this doesn't matter that much).
Depending on your machine and the number of iterations (and thus the number of items in the hashtable), you'll get different numbers. But for sure the Monitor version will be way faster.

Concurrent: 00:00:14.3312184
Monitor: 00:00:03.4833102

So why is this?

Thinking

Looking at the code, you clearly spot two interesting pieces. The code is adding unique keys to the dictionary. Thus there will be no updates of values and it will be just inserting new items. Also, there's nothing happening around. It's just adding the items as quickly as possible. Hammering the locking. That's far from a regular case. And finally, running the code shows that for the Monitor version the CPU is 100% used (all cores) while for the Concurrent it isn't.

Reason

If you look at the TryAddInternal method of ConcurrentDictionary<TKey, TValue> you'll see it's using a Node class to handle the items. So that means allocations. A second clue is in the GrowTable method. And it's doing quite a lot of locking and shuffling of locks (and of course also the resizing). It must be GC. I'm pretty sure. Let's test it. I'll use the Diagnostic Tools window in Visual Studio.

Whoa. There's a lot of GC-ing happening. Theory confirmed. Then also running a profiler shows a hot spot in the GrowTable method. As expected. We're adding a lot of items.

Solution

Well, there's really none. In this specific edge case the single, hand-crafted Monitor will beat the ConcurrentDictionary<TKey, TValue>. But is it really a problem for a real-world application? The items are unique and just added as quickly as possible. It could be that some list or bag implementations (e.g. ConcurrentBag<T>) might behave better for our case.

Closer to the real-world application?

What if I modify the code so that it's not only adding, but also updating items?
public void MonitorDictionary()
{
    TestHelper(i =>
    {
        lock (_syncRoot)
        {
            _monitorDictionary[i % 10] = i;
        }
    }, "Monitor");
}

public void ConcurrentDictionary()
{
    TestHelper(i =>
    {
        _concurrentDictionary[i % 10] = i;
    }, "Concurrent");
}

Running the code with this modification gives me comparable results for both versions. Using i % 100 as the index then makes the ConcurrentDictionary<TKey, TValue> a clear winner.

What if I do some processing around?
https://www.tabsoverspaces.com/233590-concurrentdictionary-is-slow-or-is-it
CC-MAIN-2021-31
refinedweb
843
58.38
This is mostly a fairly minor update, just a handful of bug fixes I wanted to get out there. Notably though, it does have the much-requested auto complete support in the API. Auto complete itself is now implemented via a plugin, and you can choose to either hook into that, or show a separate completions menu (via the new method view.showCompletions). A short example of hooking into the existing auto complete command: from AutoComplete import AutoCompleteCommand def AddGreeting(view, pos, prefix, completions): return "Hello!"] + completions AutoCompleteCommand.completionCallbacks'AddGreeting'] = AddGreeting This will add "Hello!" as the first available auto complete suggestion. Freaking awesome! Thanks Jon Can you elaborate on bug fixes in this release? They're listed in the changelog at: the last 4 items mentioned are bug fixes. Nice, my bad I forgot to check that page, I was on the phone when posting that Really looking forward to The changed autocompletion makes me happy as you removed the dependecy on trailing punctuation. This was sometimes annoying when programming PL/SQL. Many thanks either way you do it, make it optional. In a really LARGE project I wouldn't want a bunch of sublime-snippets files reflecting all the ctags from that project, it would easily get to the 10k snippet files in a sec lol...On the other hand I would like for the ctags to pop down and show me all the definitions and allow me to select whichever Now, if youre thinking of doing insertInlineSnippet when selecting the definition (function) from the drop down then I don't see a problem why not have the snippet dynamically generated for you. (without creating sublime-snippet files tho)
https://forum.sublimetext.com/t/20090530-beta/189
CC-MAIN-2016-07
refinedweb
281
51.99
layout: documentation title: SPEC Tutorial doc: gem5art parent: tutorial permalink: /documentation/gem5art/tutorials/spec-tutorial Authors: In this tutorial, we will demonstrate how to utilize gem5art and gem5-resources to run SPEC CPU 2017 benchmarks in gem5 full system mode. The scripts in this tutorial work with gem5art v1.3.0, gem5 20.1.0.4, and gem5-resources 20.1.0.4. The content of this tutorial is mostly for conducting SPEC CPU 2017 experiments. However, due to the similarity of SPEC 2006 and SPEC 2017 resources, this tutorial also applies to conducting SPEC 2006 experiment by using src/spec-2006 folder instead of src/spec-2017 of gem5-resources. gem5-resources is an actively maintained collections of gem5-related resources that are commonly used. The resources include scripts, binaries and disk images for full system simulation of many commonly used benchmarks. This tutorial will offer guidance in utilizing gem5-resources for full system simulation. Different from gem5 SE mode (system emulation mode), the FS mode (full system mode) uses an actual Linux kernel binary instead of emulating the responsibilities of a typical modern OS such as managing page tables and taking care of system calls. As a result, gem5 FS simulation would be more realistic compared to gem5 SE simulation, especially when the interactions between the workload and the OS are significant part of the simulation. A typical gem5 full system simulation requires a compiled Linux kernel, a disk image containing compiled benchmarks, and gem5 system configurations. gem5-resources typically provides all required all of the mentioned resources for every supported benchmark such that one could download the resources and run the experiment without much modification. However, due to license issue, gem5-resources does not provide a disk image containing SPEC CPU 2017 benchmarks. 
In this tutorial, we will provide a set of scripts that generates a disk image containing the benchmarks assuming the ISO file of the SPEC CPU 2017 benchmarks is available. spec-2017/ |___ gem5/ # gem5 folder | |___ disk-image/ | |___ shared/ | |___ spec-2017/ | |___ spec-2017-image/ | | |___ spec-2017 # the disk image will be generated here | |___ spec-2017.json # the Packer script | |___ cpu2017-1.1.0.iso # SPEC 2017 ISO (add here) | |___ configs | |___ system/ | |___ run_spec.py # gem5 run script | |___ vmlinux-4.19.83 # Linux kernel, link to download provided below | |___ README.md A visual depict of how gem5 interacts with the host system. gem5 is configured to do the following: booting the Linux kernel, running the benchmark, and copying the SPEC outputs to the host system. However, since we are interested in getting the stats only for the benchmark, we will configure gem5 to exit after the kernel is booted, and then we reset the stats before running the benchmark. We use KVM CPU model in gem5 for Linux booting process to quickly boot the system, and after the process is complete, we switch to the desired detailed CPU to run the benchmark. Similarly, after the benchmark is complete, gem5 exits to host, which allows us to get the stats at that point. After that, optionally, we switch the CPU back to KVM, which allows us to quickly write the SPEC output files to the host. Note: gem5 will output the stats again when the gem5 run is complete. Therefore, we will see two sets of stats in one file in stats.txt. The stats of the benchmark is the the first part of stats.txt, while the second part of the file contains the stats of the benchmark AND the process of writing output files back to the host. We are only interested in the first part of stats.txt. In this part, we have two concurrent tasks: setting up the resources and documenting the process using gem5art. We will structure the SPEC 2017 resources as laid out by gem5-resources. 
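As noted above, stats.txt from such a run contains two stat dumps, and only the first one covers just the benchmark. A small stdlib-only helper can split them apart before post-processing; this sketch assumes gem5's usual "Begin/End Simulation Statistics" delimiter lines and uses synthetic stat names for illustration:

```python
def split_stats_dumps(text):
    """Split the text of a gem5 stats.txt into its individual stat dumps."""
    dumps, current = [], None
    for line in text.splitlines():
        if "Begin Simulation Statistics" in line:
            current = []          # start collecting a new dump
        elif "End Simulation Statistics" in line:
            dumps.append("\n".join(current))
            current = None
        elif current is not None:
            current.append(line)
    return dumps

# Synthetic two-dump example in the shape gem5 produces:
example = (
    "---------- Begin Simulation Statistics ----------\n"
    "sim_seconds 1.0\n"
    "---------- End Simulation Statistics ----------\n"
    "---------- Begin Simulation Statistics ----------\n"
    "sim_seconds 2.5\n"
    "---------- End Simulation Statistics ----------\n"
)
first_dump = split_stats_dumps(example)[0]  # stats for the benchmark itself
```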
The script launch_spec2017_experiment.py will contain the documentation about the artifacts we create and will also serve as the Python script that launches the experiment.

First, we clone the gem5-resources repo and check out the stable branch up to the 1fe56ffc94005b7fa0ae5634c6edc5e2cb0b7357 commit, which is the most recent version of gem5-resources that is compatible with gem5 20.1.0.4 as of March 2021.

```sh
git clone
cd gem5-resources
git checkout 1fe56ffc94005b7fa0ae5634c6edc5e2cb0b7357
```

Since all resources related to the SPEC CPU 2017 benchmark suite are in the src/spec-2017 folder, and the other folders in src/ are not related to this experiment, we set the root folder of the experiment to the src/spec-2017 folder of the cloned repo.

To keep track of changes that are specific to src/spec-2017, we set up a git structure for the folder. The git remote pointing to origin should also be set up, as gem5art will use the origin information. In the gem5-resources folder,

```sh
cd src/spec-2017
git init
git remote add origin
```

We document the root folder of the experiment in launch_spec2017_experiment.py as follows,

```python
experiments_repo = Artifact.registerArtifact(
    command = '''
        git clone
        cd gem5-resources
        git checkout 1fe56ffc94005b7fa0ae5634c6edc5e2cb0b7357
        cd src/spec-2017
        git init
        git remote add origin
    ''',
    typ = 'git repo',
    name = 'spec2017 Experiment',
    path = './',
    cwd = './',
    documentation = '''
        local repo to run spec 2017 experiments with gem5 full system mode;
        resources cloned up to commit 1fe56ffc94005b7fa0ae5634c6edc5e2cb0b7357
        of the stable branch
    '''
)
```

We use a .gitignore file to ignore changes to certain files and folders.
In this experiment, we will use this .gitignore file:

```
*.pyc
m5out
.vscode
results
gem5art-env
disk-image/packer
disk-image/packer_cache
disk-image/spec-2017/spec-2017-image/spec-2017
disk-image/spec-2017/cpu2017-1.1.0.iso
gem5
vmlinux-4.19.83
```

In the file above, we ignore files and folders that we keep track of with other gem5art Artifact objects, or whose presence does not affect the experiment's results. For example, disk-image/packer is the path to the packer binary which generates the disk image, and newer versions of packer probably won't affect the content of the disk image. Another example is that we use another gem5art Artifact object to keep track of vmlinux-4.19.83, so we put the name of the file in the .gitignore file.

Note: You have probably noticed that there is more than one way of keeping track of the files in the experiment folder: either the git structure of the experiment keeps track of a file, or we create a separate gem5art Artifact object to keep track of that file. The choice between the two leads to different outcomes. The difference lies in the type of the Artifact object (specified by the typ parameter): for Artifact objects that have a typ of git repo, gem5art won't upload the files in the git structure to gem5art's database; instead, it will only keep track of the hash of the HEAD commit of the git structure. However, for Artifact's that do not have a typ of git repo, the file specified in the path parameter will be uploaded to the database. Essentially, we tend to keep small files (such as scripts and texts) in a git structure, and to keep large files (such as gem5 binaries and disk images) in Artifact's of type gem5 binary or binary. Another important difference is that gem5art does not upload the files tracked by a git Artifact, while it does upload other types of Artifact to its database.
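The typ-dependent behavior described above can be summarized in a tiny sketch (our own illustration, not gem5art code):

```python
# Illustrative sketch (not gem5art code): whether gem5art uploads the
# file at `path` to its database depends on the Artifact's typ.
def uploads_file_content(typ):
    """git repos only have their HEAD commit hash recorded; every
    other typ ('binary', 'gem5 binary', 'disk image', ...) has the
    file at `path` uploaded to the database."""
    return typ != 'git repo'
```

This is why large artifacts such as the disk image get their own Artifact objects, while small scripts stay inside the experiment's git repo.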
In this step, we download the source code and build gem5 v20.1.0.4. In the root folder of the experiment,

```sh
git clone -b v20.1.0.4
cd gem5
scons build/X86/gem5.opt -j8
```

We have two artifacts: one is the gem5 source code (the gem5 git repo), and the other is the gem5 binary (gem5.opt). In launch_spec2017_experiments.py, we document the step in Artifact objects as follows,

```python
gem5_repo = Artifact.registerArtifact(
    command = '''
        git clone -b v20.1.0.4
        cd gem5
        scons build/X86/gem5.opt -j8
    ''',
    typ = 'git repo',
    name = 'gem5',
    path = 'gem5/',
    cwd = './',
    documentation = 'cloned gem5 v20.1.0.4'
)

gem5_binary = Artifact.registerArtifact(
    command = 'scons build/X86/gem5.opt -j8',
    typ = 'gem5 binary',
    name = 'gem5-20.1.0.4',
    cwd = 'gem5/',
    path = 'gem5/build/X86/gem5.opt',
    inputs = [gem5_repo,],
    documentation = 'compiled gem5 v20.1.0.4 binary'
)
```

m5 is a binary that facilitates the communication between the host system and the guest system (gem5). The use of the m5 binary will be demonstrated in the runscripts that we will describe later. The m5 binary will be copied to the disk image so that the guest could run it during the simulation. The m5 binary should be compiled before we build the disk image.

Note: it's important to compile the m5 binary with -DM5_ADDR=0xFFFF0000 as is default in the SConscript. This address is used by the guest binary to communicate with the simulator. If you change the address in the guest binary, you also have to update the simulator to use the new address. Additionally, when running in KVM, it is required that you use the address form of guest<->simulator communication and not the pseudo instruction form (i.e., using -DM5_ADDR is required when compiling a guest binary that you want to run in KVM mode on gem5).
To compile the m5 binary, in the root folder of the experiment,

```sh
cd gem5/util/m5/
scons build/x86/out/m5
```

In launch_spec2017_experiments.py, we document the step in an Artifact object as follows,

```python
m5_binary = Artifact.registerArtifact(
    command = 'scons build/x86/out/m5',
    typ = 'binary',
    name = 'm5',
    path = 'gem5/util/m5/build/x86/out/m5',
    cwd = 'gem5/util/m5',
    inputs = [gem5_repo,],
    documentation = 'm5 utility'
)
```

In this step, we will build the disk image using packer.

Note: If you are interested in modifying the SPEC configuration file, Appendix II describes how the scripts that build the disk image work. Also, more information about using packer and building disk images can be found here.

First, we download the packer binary. The current version of packer as of December 2020 is 1.6.6.

```sh
cd disk-image/
wget
unzip packer_1.6.6_linux_amd64.zip
rm packer_1.6.6_linux_amd64.zip
```

In launch_spec2017_experiments.py, we document how we obtain the binary as follows,

```python
packer = Artifact.registerArtifact(
    command = '''
        wget
        unzip packer_1.6.6_linux_amd64.zip;
    ''',
    typ = 'binary',
    name = 'packer',
    path = 'disk-image/packer',
    cwd = 'disk-image',
    documentation = 'Program to build disk images. Downloaded from'
)
```

Second, we build the disk image. The script disk-image/spec-2017/spec-2017.json specifies how the disk image is built. In this step, we assume the SPEC 2017 ISO file is in the disk-image/spec-2017 folder and that the ISO file name is cpu2017-1.1.0.iso. The path and the name of the ISO file can be changed in the JSON file.

To build the disk image, in the root folder of the experiment,

```sh
cd disk-image/
./packer validate spec-2017/spec-2017.json  # validate the script, including checking the input files
./packer build spec-2017/spec-2017.json
```

The process should take about an hour to complete on a fairly recent machine with a cable internet speed. The disk image will be in disk-image/spec-2017/spec-2017-image/spec-2017.
Note: Packer will output a URL to a VNC server that can be connected to in order to inspect the building process.

Note: More about using packer and building disk images.

Now, in launch_spec2017_experiments.py, we make an Artifact object of the disk image.

```python
disk_image = Artifact.registerArtifact(
    command = './packer build spec-2017/spec-2017.json',
    typ = 'disk image',
    name = 'spec-2017',
    cwd = 'disk-image/',
    path = 'disk-image/spec-2017/spec-2017-image/spec-2017',
    inputs = [packer, experiments_repo, m5_binary,],
    documentation = 'Ubuntu Server with SPEC 2017 installed, m5 binary installed and root auto login'
)
```

The compiled Linux kernel binaries that are known to work with gem5 can be found here:

The Linux kernel configurations that are used to compile the Linux kernel binaries are documented and maintained in gem5-resources:

The following command downloads the compiled Linux kernel of version 4.19.83. In the root folder of the experiment,

```sh
wget
```

Now, in launch_spec2017_experiments.py, we make an Artifact object of the Linux kernel binary.

```python
linux_binary = Artifact.registerArtifact(
    name = 'vmlinux-4.19.83',
    typ = 'kernel',
    path = './vmlinux-4.19.83',
    cwd = './',
    command = '''
        wget
    ''',
    inputs = [experiments_repo,],
    documentation = "kernel binary for v4.19.83",
)
```

The gem5 system configurations can be found in the configs/ folder. The gem5 run script, located in configs/run_spec.py, takes the following parameters:

- --kernel: (required) the path to the vmlinux file.
- --disk: (required) the path to the SPEC disk image.
- --cpu: (required) the name of the detailed CPU model. Currently, we support the following CPU models: kvm, o3, atomic, timing. More CPU models could be added to getDetailedCPUModel() in run_spec.py.
- --benchmark: (required) the name of the SPEC CPU 2017 benchmark. The availability of the benchmarks can be found at the end of the tutorial.
- --size: (required) the size of the benchmark. There are three options: ref, train, test.
- --no-copy-logs: an optional parameter specifying whether the SPEC log files should be copied to the host system.
- --allow-listeners: an optional parameter specifying whether gem5 should open ports to which gdb or telnet could connect. No listeners are allowed by default.

We don't use another Artifact object to document this file. The Artifact repository object of the root folder will keep track of the changes of the script.

Note: The first two parameters of the gem5 run script for full system simulation should always be the path to the Linux binary and the path to the disk image, in that order.

gem5art code works with Python 3.5 or above. The following script will set up a python3 virtual environment named gem5art-env. In the root folder of the experiment,

```sh
virtualenv -p python3 gem5art-env
```

To activate the virtual environment, in the root folder of the experiment,

```sh
source gem5art-env/bin/activate
```

To install the gem5art dependencies (this should be done while in the virtual environment),

```sh
pip install gem5art-artifact gem5art-run gem5art-tasks
```

To exit the virtual environment,

```sh
deactivate
```

Note: the following steps should be done while using the Python virtual environment.

The following command will run the MongoDB database server in a docker container.

```sh
docker run -p 27017:27017 -v /path/in/host:/data/db --name mongo-1 -d mongo
```

The -p 27017:27017 option maps port 27017 in the container to port 27017 on the host. The -v /path/in/host:/data/db option mounts the /data/db folder in the docker container to the folder /path/in/host on the host. The path of the host folder should be an absolute path, and the database files created by MongoDB will be in that folder. The --name mongo-1 option specifies the name of the docker container; we can use this name to identify the container. The -d option will let the container run in the background. mongo is the name of the official mongo image.
This step is only necessary if you want to use Celery to manage processes. Inside the path on the host specified above,

```sh
celery -E -A gem5art.tasks.celery worker --autoscale=[number of workers],0
```

Now, we can put together the run script! In launch_spec2017_experiments.py, we import the required modules and classes at the beginning of the file,

```python
import os
import sys
from uuid import UUID

from gem5art.artifact import Artifact
from gem5art.run import gem5Run
from gem5art.tasks.tasks import run_job_pool
```

And then, we put the launch function at the end of launch_spec2017_experiments.py,

```python
if __name__ == "__main__":
    cpus = ['kvm', 'atomic', 'o3', 'timing']
    benchmark_sizes = {'kvm':    ['test', 'ref'],
                       'atomic': ['test'],
                       'o3':     ['test'],
                       'timing': ['test']
                      }
    benchmarks = ["503.bwaves_r", "507.cactuBSSN_r", "508.namd_r", "510.parest_r",
                  "511.povray_r", "519.lbm_r", "521.wrf_r", "526.blender_r",
                  "527.cam4_r", "538.imagick_r", "544.nab_r", "549.fotonik3d_r",
                  "554.roms_r", "997.specrand_fr", "603.bwaves_s", "607.cactuBSSN_s",
                  "619.lbm_s", "621.wrf_s", "627.cam4_s", "628.pop2_s",
                  "638.imagick_s", "644.nab_s", "649.fotonik3d_s", "654.roms_s",
                  "996.specrand_fs", "500.perlbench_r", "502.gcc_r", "505.mcf_r",
                  "520.omnetpp_r", "523.xalancbmk_r", "525.x264_r", "531.deepsjeng_r",
                  "541.leela_r", "548.exchange2_r", "557.xz_r", "999.specrand_ir",
                  "600.perlbench_s", "602.gcc_s", "605.mcf_s", "620.omnetpp_s",
                  "623.xalancbmk_s", "625.x264_s", "631.deepsjeng_s", "641.leela_s",
                  "648.exchange2_s", "657.xz_s", "998.specrand_is"]

    runs = []
    for cpu in cpus:
        for size in benchmark_sizes[cpu]:
            for benchmark in benchmarks:
                run = gem5Run.createFSRun(
                    'gem5 v20.1.0.4 spec 2017 experiment',           # name
                    'gem5/build/X86/gem5.opt',                       # gem5_binary
                    'configs/run_spec.py',                           # run_script
                    'results/{}/{}/{}'.format(cpu, size, benchmark), # relative_outdir
                    gem5_binary,                                     # gem5_artifact
                    gem5_repo,                                       # gem5_git_artifact
                    experiments_repo,                                # run_script_git_artifact
                    'vmlinux-4.19.83',                               # linux_binary
                    'disk-image/spec-2017/spec-2017-image/spec-2017',# disk_image
                    linux_binary,                                    # linux_binary_artifact
                    disk_image,                                      # disk_image_artifact
                    cpu, benchmark, size,                            # params
                    timeout = 10*24*60*60                            # 10 days
                )
                runs.append(run)

    run_job_pool(runs)
```

The above launch function will run all the available benchmarks with the kvm, atomic, timing, and o3 CPUs. For kvm, both the test and ref sizes will be run, while for the rest, only benchmarks of size test will be run.

Note that the line

```python
'results/{}/{}/{}'.format(cpu, size, benchmark), # relative_outdir
```

specifies how the results folder is structured. The results folder should be carefully structured so that no two gem5 runs write to the same place.

With the celery and MongoDB servers running, we can start the experiment. In the root folder of the experiment,

```sh
python3 launch_spec2017_experiment.py
```

Note: The URI of a remote database server can be specified via the environment variable GEM5ART_DB. For example, if the mongo database server is running at localhost123, the command to run the launch script would be,

```sh
GEM5ART_DB="mongodb://localhost123" python3 launch_spec2017_experiment.py
```

Not all benchmarks are compiled in the above setup as of March 2020. The working status of the SPEC 2017 workloads is available here:

- disk-image/spec-2017/install-spec2017.sh: a Bash script that will be executed on the guest machine after Ubuntu Server is installed in the disk image; this script installs dependencies to compile and run the SPEC workloads, mounts the SPEC ISO and installs the benchmark suite on the disk image, and creates a SPEC configuration from the gcc42 template.
- disk-image/spec-2017/post-installation.sh: a script that will be executed on the guest machine; this script copies the serial-getty@.service file to the systemd folder, copies the m5 binary to /sbin, and appends the content of runscript.sh to the disk image's .bashrc file, which will be executed after the booting process is done.
- disk-image/spec-2017/runscript.sh: a script that will be copied to .bashrc on the disk image so that the commands in this script will be run immediately after the booting process.
- disk-image/spec-2017/spec-2017.json: contains a configuration telling Packer how the disk image should be built.
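As a closing note, the launch script's relative_outdir line must yield a distinct directory for every run so that no two gem5 runs write to the same place. A quick sanity sketch of that property (the helper names are ours; the folder pattern mirrors the one used by the launch script):

```python
# Illustrative sketch: the results/{cpu}/{size}/{benchmark} pattern
# used by the launch script gives every run its own output directory.
def outdirs(cpus, benchmark_sizes, benchmarks):
    return ['results/{}/{}/{}'.format(cpu, size, benchmark)
            for cpu in cpus
            for size in benchmark_sizes[cpu]
            for benchmark in benchmarks]

def all_unique(paths):
    # No two runs may share an output directory.
    return len(paths) == len(set(paths))
```

Because the triple (cpu, size, benchmark) is unique per run, the generated paths are guaranteed to be unique as well.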
The Q3TextEdit widget provides a powerful single-page rich text editor. More...

#include <Q3TextEdit>

This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.

Inherits Q3ScrollView.

Inherited by Q3MultiLineEdit, Q3TextBrowser, and Q3TextView.

The Q3TextEdit widget provides a powerful single-page rich text editor.

Q3TextEdit is an advanced WYSIWYG viewer/editor supporting rich text formatting using HTML-style tags. It is optimized to handle large documents and to respond quickly to user input.

Q3TextEdit has four modes of operation:

- Plain Text Editor: setTextFormat(Qt::PlainText)
- Rich Text Editor: setTextFormat(Qt::RichText)
- Text Viewer: setReadOnly(true)
- Log Viewer: setTextFormat(Qt::LogText)

Q3TextEdit can be used as a syntax highlighting editor when used in conjunction with QSyntaxHighlighter.

We recommend that you always call setTextFormat() to set the mode you want to use. If you use Qt::AutoText then setText() and append() will try to determine whether the text they are given is plain text or rich text. If you use Qt::RichText then setText() and append() will assume that the text they are given is rich text. insert() simply inserts the text it is given.

Q3TextEdit works on paragraphs and characters. A paragraph is a formatted string which is word-wrapped to fit into the width of the widget. By default when reading plain text, one newline signifies a paragraph. A document consists of zero or more paragraphs, indexed from 0. Characters are indexed on a per-paragraph basis, also indexed from 0. The words in the paragraph are aligned in accordance with the paragraph's alignment(). Paragraphs are separated by hard line breaks. Each character within a paragraph has its own attributes, for example, font and color.

The text edit documentation uses the following concepts:

Q3TextEdit can display images (using Q3MimeSourceFactory), lists and tables. If the text is too large to view within the text edit's viewport, scroll bars will appear.
The text edit can load both plain text and HTML files (a subset of HTML 3.2 and 4). The rendering style and the set of valid tags are defined by a styleSheet(). Custom tags can be created and placed in a custom style sheet. Change the style sheet with setStyleSheet(); see Q3StyleSheet for details. The images identified by image tags are displayed if they can be interpreted using the text edit's Q3MimeSourceFactory; see setMimeSourceFactory().

If you want a text browser with more navigation use QTextBrowser. If you just need to display a small piece of rich text use QLabel or QSimpleRichText.

If you create a new Q3TextEdit, and want to allow the user to edit rich text, call setTextFormat(Qt::RichText) to ensure that the text is treated as rich text. (Rich text uses HTML tags to set text formatting attributes. See Q3StyleSheet for information on the HTML tags that are supported.) If you don't call setTextFormat() explicitly the text edit will guess from the text itself whether it is rich text or plain text. This means that if the text looks like HTML or XML it will probably be interpreted as rich text, so you should call setTextFormat(Qt::PlainText) to preserve such text.

Note that we do not intend to add a full-featured web browser widget to Qt (because that would easily double Qt's size and only a few applications would benefit from it). The rich text support in Qt is designed to provide a fast, portable and efficient way to add reasonable online help facilities to applications, and to provide a basis for rich text editors.

Q3TextEdit can display a large HTML subset, including tables and images.

The text is set or replaced using setText() which deletes any existing text and replaces it with the text passed in the setText() call. If you call setText() with legacy HTML (with setTextFormat(Qt::RichText) in force), and then call text(), the text that is returned may have different markup, but will render the same.
Text can be inserted with insert(), paste(), pasteSubType() and append(). Text that is appended does not go into the undo history; this makes append() faster and consumes less memory. Text can also be cut(). The entire text is deleted with clear() and the selected text is deleted with removeSelectedText(). Selected (marked) text can also be deleted with del() (which will delete the character to the right of the cursor if no text is selected).

Loading and saving text is achieved using setText() and text(), for example:

```cpp
QFile file(fileName); // Read the text from a file
if (file.open(IO_ReadOnly)) {
    QTextStream stream(&file);
    textEdit->setText(stream.read());
}

QFile file(fileName); // Write the text to a file
if (file.open(IO_WriteOnly)) {
    QTextStream stream(&file);
    stream << textEdit->text();
    textEdit->setModified(false);
}
```

By default the text edit wraps words at whitespace to fit within the text edit widget. The setWordWrap() function is used to specify the kind of word wrap you want, or NoWrap if you don't want any wrapping. Call setWordWrap() to set a fixed pixel width FixedPixelWidth, or character column (e.g. 80 column) FixedColumnWidth with the pixels or columns specified with setWrapColumnOrWidth(). If you use word wrap to the widget's width WidgetWidth, you can specify whether to break on whitespace or anywhere with setWrapPolicy().

The background color is set differently from other widgets, using setPaper(). You specify a brush style which could be a plain color or a complex pixmap.

Hypertext links are automatically underlined; this can be changed with setLinkUnderline(). The tab stop width is set with setTabStopWidth().

The zoomIn() and zoomOut() functions can be used to resize the text by increasing (decreasing for zoomOut()) the point size used. Images are not affected by the zoom functions.

The lines() function returns the number of lines in the text and paragraphs() returns the number of paragraphs.
The number of lines within a particular paragraph is returned by linesOfParagraph(). The length of the entire text in characters is returned by length().

You can scroll to an anchor in the text, e.g. <a name="anchor"> with scrollToAnchor(). The find() function can be used to find and select a given string within the text.

A read-only Q3TextEdit provides the same functionality as the (obsolete) QTextView. (QTextView is still supplied for compatibility with old code.)

When Q3TextEdit is used read-only the key-bindings are limited to navigation, and text may only be selected with the mouse.

The text edit may be able to provide some meta-information. For example, the documentTitle() function will return the text from within HTML <title> tags.

The text displayed in a text edit has a context. The context is a path which the text edit's Q3MimeSourceFactory uses to resolve the locations of files and images. It is passed to the mimeSourceFactory() when querying data. (See Q3TextEdit() and context().)

Setting the text format to Qt::LogText puts the widget in a special mode which is optimized for very large texts. In this mode editing and rich text support are disabled (the widget is explicitly set to read-only mode). This allows the text to be stored in a different, more memory efficient manner. However, a certain degree of text formatting is supported through the use of formatting tags. A tag is delimited by < and >. The characters <, > and & are escaped by using &lt;, &gt; and &amp;. A tag pair consists of a left and a right tag (or open/close tags). Left-tags mark the starting point for formatting, while right-tags mark the ending point. A right-tag always starts with a / before the tag keyword. For example <b> and </b> are a tag pair. Tags can be nested, but they have to be closed in the same order as they are opened. For example, <b><u></u></b> is valid, while <b><u></b></u> will output an error message.
By using tags it is possible to change the color, bold, italic and underline settings for a piece of text. A color can be specified by using the HTML font tag <font color=colorname>. The color name can be one of the color names from the X11 color database, or an RGB hex value (e.g. #00ff00). Examples of valid color tags: <font color=red>, <font color="light blue">, <font color="#223344">. Bold, italic and underline settings can be specified by the tags <b>, <i> and <u>. Note that a tag does not necessarily have to be closed. A valid example:

This is <font color=red>red</font> while <b>this</b> is <font color=blue>blue</font>. <font color=green><font color=yellow>Yellow,</font> and <u>green</u>.

Stylesheets can also be used in Qt::LogText mode. To create and use a custom tag, you could do the following:

```cpp
Q3TextEdit * log = new Q3TextEdit(this);
log->setTextFormat(Qt::LogText);
Q3StyleSheetItem * item = new Q3StyleSheetItem(log->styleSheet(), "mytag");
item->setColor("red");
item->setFontWeight(QFont::Bold);
item->setFontUnderline(true);
log->append("This is a <mytag>custom tag</mytag>!");
```

Note that only the color, bold, underline and italic attributes of a Q3StyleSheetItem are used in Qt::LogText mode.

Note that you can use setMaxLogLines() to limit the number of lines the widget can hold in Qt::LogText mode.

There are a few things that you need to be aware of when the widget is in this mode.

All the information about using Q3TextEdit as a display widget also applies here.

The current format's attributes are set with setItalic(), setBold(), setUnderline(), setFamily() (font family), setPointSize(), setColor() and setCurrentFont(). The current paragraph's alignment is set with setAlignment().

Use setSelection() to select text. The setSelectionAttributes() function is used to indicate how selected text should be displayed. Use hasSelectedText() to find out if any text is selected.
The currently selected text's position is available using getSelection() and the selected text itself is returned by selectedText(). The selection can be copied to the clipboard with copy(), or cut to the clipboard with cut(). It can be deleted with removeSelectedText(). The entire text can be selected (or unselected) using selectAll(). Q3TextEdit supports multiple selections. Most of the selection functions operate on the default selection, selection 0. If the user presses a non-selecting key, e.g. a cursor key without also holding down Shift, all selections are cleared.

Set and get the position of the cursor with setCursorPosition() and getCursorPosition() respectively. When the cursor is moved, the signals currentFontChanged(), currentColorChanged() and currentAlignmentChanged() are emitted to reflect the font, color and alignment at the new cursor position.

If the text changes, the textChanged() signal is emitted, and if the user inserts a new line by pressing Return or Enter, returnPressed() is emitted. The isModified() function will return true if the text has been modified.

Q3TextEdit provides command-based undo and redo. To set the depth of the command history use setUndoDepth(), which defaults to 100 steps. To undo or redo the last operation call undo() or redo(). The signals undoAvailable() and redoAvailable() indicate whether the undo and redo operations can be executed.

By default the text edit widget operates in insert mode so all text that the user enters is inserted into the text edit and any text to the right of the cursor is moved out of the way. The mode can be changed to overwrite, where new text overwrites any text to the right of the cursor, using setOverwriteMode().

The AutoFormatting type is a typedef for QFlags<AutoFormattingFlag>. It stores an OR combination of AutoFormattingFlag values.
This enum is used by moveCursor() to specify in which direction the cursor should be moved:

This enum is used by doKeyboardAction() to specify which action should be executed:

This enum is used to set the vertical alignment of the text.

This enum defines the Q3TextEdit's word wrap modes.

See also setWordWrap() and wordWrap().

This enum defines where text can be wrapped in word wrap mode.

See also setWrapPolicy().

This property holds the enabled set of auto formatting features.

The value can be any combination of the values in the AutoFormattingFlag enum. The default is AutoAll. Choose AutoNone to disable all automatic formatting. Currently, the only automatic formatting feature provided is AutoBulletList; future versions of Qt may offer more.

Access functions:

This property holds the title of the document parsed from the text.

For Qt::PlainText the title will be an empty string. For Qt::RichText the title will be the text between the <title> tags, if present, otherwise an empty string.

Access functions:

This property holds whether some text is selected in selection 0.

Access functions:

This property holds the number of characters in the text.

Access functions:

This property holds whether hypertext links will be underlined.

If true (the default) hypertext links will be displayed underlined. If false links will not be displayed underlined.

Access functions:

This property holds whether the document has been modified by the user.

Access functions:

This property holds the text edit's overwrite mode.

If false (the default) characters entered by the user are inserted with any characters to the right being moved out of the way. If true, the editor is in overwrite mode, i.e. characters entered by the user overwrite any characters to the right of the cursor position.

Access functions:

This property holds the background (paper) brush.

The brush that is currently used to draw the background of the text edit. The initial setting is an empty brush.
Access functions:

This property holds whether the text edit is read-only.

In a read-only text edit the user can only navigate through the text and select text; modifying the text is not possible. This property's default is false.

Access functions:

This property holds the selected text (from selection 0) or an empty string if there is no currently selected text (in selection 0).

The text is always returned as Qt::PlainText if the textFormat() is Qt::PlainText or Qt::AutoText, otherwise it is returned as HTML.

Access functions:

See also hasSelectedText.

This property holds whether TAB changes focus or is accepted as input.

In some occasions text edits should not allow the user to input tabulators or change indentation using the TAB key, as this breaks the focus chain. The default is false.

Access functions:

This property holds the tab stop width in pixels.

Access functions:

This property holds the text edit's text.

There is no default text.

On setting, any previous text is deleted. The text may be interpreted either as plain text or as rich text, depending on the textFormat(). The default setting is Qt::AutoText, i.e. the text edit auto-detects the format of the text.

For rich text, calling text() on an editable Q3TextEdit will cause the text to be regenerated from the text edit. This may mean that the QString returned may not be exactly the same as the one that was set.

Access functions:

See also textFormat.

This property holds the text format: rich text, plain text, log text or auto text.

The text format is one of the following:

Access functions:

This property holds the depth of the undo history.

The maximum number of steps in the undo/redo history. The default is 100.

Access functions:

See also undo() and redo().

This property holds whether undo/redo is enabled.

When changing this property, the undo/redo history is cleared. The default is true.

Access functions:

This property holds the word wrap mode.
The default mode is WidgetWidth which causes words to be wrapped at the right edge of the text edit. Wrapping occurs at whitespace, keeping whole words intact. If you want wrapping to occur within words use setWrapPolicy(). If you set a wrap mode of FixedPixelWidth or FixedColumnWidth you should also call setWrapColumnOrWidth() with the width you want.

Access functions:

See also WordWrap, wrapColumnOrWidth, and wrapPolicy.

Access functions:

See also wordWrap.

This property holds the word wrap policy, at whitespace or anywhere.

Defines where text can be wrapped when word wrap mode is not NoWrap. The choices are AtWordBoundary (the default), Anywhere and AtWordOrDocumentBoundary.

Access functions:

See also wordWrap.

Constructs a Q3TextEdit called name, with parent parent. The text edit will display the text text using context context.

The context is a path which the text edit's Q3MimeSourceFactory uses to resolve the locations of files and images. It is passed to the mimeSourceFactory() when querying data.

For example if the text contains an image tag, <img src="image.png">, and the context is "path/to/look/in", the Q3MimeSourceFactory will try to load the image from "path/to/look/in/image.png". If the tag was <img src="/image.png">, the context will not be used (because Q3MimeSourceFactory recognizes that we have used an absolute path) and will try to load "/image.png". The context is applied in exactly the same way to hrefs, for example, <a href="target.html">Target</a> would resolve to "path/to/look/in/target.html".

Constructs an empty Q3TextEdit called name, with parent parent.

Destructor.

Returns the alignment of the current paragraph.

See also setAlignment().

Returns the text for the attribute attr (Qt::AnchorHref by default) if there is an anchor at position pos (in contents coordinates); otherwise returns an empty string.

Appends a new paragraph with text to the end of the text edit.
Note that the undo/redo history is cleared by this function, and no undo history is kept for appends, which makes them faster than insert()s. If you want to append text which is added to the undo/redo history as well, use insertParagraph(). Returns true if the current format is bold; otherwise returns false. See also setBold(). Returns the index of the character (relative to its paragraph) at position pos (in contents coordinates). If para is not 0, *para is set to the character's paragraph. Deletes all the text in the text edit. See also cut(), removeSelectedText(), and setText(). Clears the background color of the paragraph para, so that the default color is used again. This signal is emitted when the mouse is clicked on the paragraph para at character position pos. See also doubleClicked(). Returns the color of the current format. See also setColor() and setPaper(). Returns the context of the text edit. The context is a path which the text edit's Q3MimeSourceFactory uses to resolve the locations of files and images. See also text. Copies any selected text (from selection 0) to the clipboard. See also hasSelectedText() and copyAvailable(). This function is called to create a right mouse button popup menu at the document position pos. If you want to create a custom popup menu, reimplement this function and return the created popup menu. Ownership of the popup menu is transferred to the caller. Warning: The QPopupMenu ID values 0-7 are reserved, and they map to the standard operations. When inserting items into your custom popup menu, be sure to specify ID values larger than 7. This is an overloaded member function, provided for convenience. This function is called to create a right mouse button popup menu. If you want to create a custom popup menu, reimplement this function and return the created popup menu. Ownership of the popup menu is transferred to the caller. This function is only called if createPopupMenu(const QPoint &) returns 0.
This signal is emitted if the alignment of the current paragraph has changed. The new alignment is a. See also setAlignment(). This signal is emitted if the color of the current format has changed. The new color is c. See also setColor(). Returns the font of the current format. See also setCurrentFont(), setFamily(), and setPointSize(). This signal is emitted if the font of the current format has changed. The new font is f. See also setCurrentFont(). This signal is emitted if the vertical alignment of the current format has changed. The new vertical alignment is a. This signal is emitted if the position of the cursor has changed. para contains the paragraph index and pos contains the character position within the paragraph. See also setCursorPosition(). Copies the selected text (from selection 0) to the clipboard and deletes it from the text edit. If there is no selected text (in selection 0) nothing happens. See also Q3TextEdit::copy(), paste(), and pasteSubType(). If there is some selected text (in selection 0) it is deleted. If there is no selected text (in selection 0) the character to the right of the text cursor is deleted. See also removeSelectedText() and cut(). Executes keyboard action action. This is normally called by a key event handler. This signal is emitted when the mouse is double-clicked on the paragraph para at character position pos. See also clicked(). Ensures that the cursor is visible by scrolling the text edit if necessary. See also setCursorPosition(). Returns the font family of the current format. See also setFamily(), setCurrentFont(), and setPointSize(). Finds the next occurrence of the string, expr. Returns true if expr was found; otherwise returns false. If para and index are both 0 the search begins from the current cursor position. If para and index are both not 0, the search begins from the *index character position in the *para paragraph. If cs is true the search is case sensitive, otherwise it is case insensitive. 
If wo is true the search looks for whole word matches only; otherwise it searches for any matching text. If forward is true (the default) the search works forward from the starting position to the end of the text, otherwise it works backwards to the beginning of the text. If expr is found the function returns true. If index and para are not 0, the number of the paragraph in which the first character of the match was found is put into *para, and the index position of that character within the paragraph is put into *index. If expr is not found the function returns false. If index and para are not 0 and expr is not found, *index and *para are undefined. Please note that this function will make the next occurrence of the string (if found) the current selection, and will thus modify the cursor position. Using the para and index parameters will not work correctly in case the document contains tables. Reimplemented to allow tabbing through links. If n is true the tab moves the focus to the next child; if n is false the tab moves the focus to the previous child. Returns true if the focus was moved; otherwise returns false. Reimplemented from QWidget. Returns Q3ScrollView::font(). Warning: In previous versions this function returned the font of the current format. This led to confusion. Please use currentFont() instead. This function sets the *para and *index parameters to the current cursor position. para and index must not be 0. See also setCursorPosition(). If there is a selection, *paraFrom is set to the number of the paragraph in which the selection begins and *paraTo is set to the number of the paragraph in which the selection ends. (They could be the same.) *indexFrom is set to the index at which the selection begins within *paraFrom, and *indexTo is set to the index at which the selection ends within *paraTo. If there is no selection, *paraFrom, *indexFrom, *paraTo and *indexTo are all set to -1.
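The -1 sentinel convention described for getSelection() can be illustrated with a small sketch. This is plain JavaScript, not Qt API; the names and data shapes below are illustrative stand-ins only:

```javascript
// Toy model of the getSelection() contract described above: return the
// paragraph/index bounds of selection `selNum`, or all -1s when there is
// no such selection. Not Qt code; just the documented convention.
function getSelection(selections, selNum = 0) {
  const sel = selections[selNum];
  if (!sel) {
    return { paraFrom: -1, indexFrom: -1, paraTo: -1, indexTo: -1 };
  }
  const { paraFrom, indexFrom, paraTo, indexTo } = sel;
  return { paraFrom, indexFrom, paraTo, indexTo };
}

// Selection 0 spans paragraph 2, char 5 through paragraph 3, char 0.
const selections = { 0: { paraFrom: 2, indexFrom: 5, paraTo: 3, indexTo: 0 } };
const bounds = getSelection(selections, 0);
const noBounds = getSelection(selections, 1); // no selection 1: all -1s
```

Callers are expected to check for the -1 sentinels before using the bounds, mirroring the out-parameter behavior the documentation describes.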
If paraFrom, indexFrom, paraTo or indexTo is 0 this function does nothing. The selNum is the number of the selection (multiple selections are supported). It defaults to 0 (the default selection). See also setSelection() and selectedText. Returns how many pixels high the text edit needs to be to display all the text if the text edit is w pixels wide. Reimplemented from QWidget. Inserts text at the current cursor position. The insertionFlags define how the text is inserted. If RedoIndentation is set, the paragraph is re-indented. If CheckNewLines is set, newline characters in text result in hard line breaks (i.e. new paragraphs). If checkNewLine is not set, the behavior of the editor is undefined if the text contains newlines. (It is not possible to change Q3TextEdit's newline handling behavior, but you can use QString::replace() to preprocess text before inserting it.) If RemoveSelected is set, any selected text (in selection 0) is removed before the text is inserted. The default flags are CheckNewLines | RemoveSelected. If the widget is in Qt::LogText mode this function will do nothing. See also paste() and pasteSubType(). This is an overloaded member function, provided for convenience. Inserts the given text. If indent is true the paragraph that contains the text is reindented; if checkNewLine is true the text is checked for newlines and relaid out. If removeSelected is true and there is a selection, the insertion replaces the selected text. Inserts text in the paragraph para at position index. Inserts text as a new paragraph at position para. If para is -1, the text is appended. Use append() if the append operation is performance critical. Returns true if redo is available; otherwise returns false. Returns true if undo is available; otherwise returns false. Returns true if the current format is italic; otherwise returns false. See also setItalic(). Processes the key event, e. By default key events are used to provide keyboard navigation and text editing. 
Reimplemented from QWidget. Returns the line number of the line in paragraph para in which the character at position index appears. The index position is relative to the beginning of the paragraph. If there is no such paragraph or no such character at the index position (e.g. the index is out of range) -1 is returned. Returns the number of lines in the text edit; this could be 0. Warning: This function may be slow. Lines change all the time during word wrapping, so this function has to iterate over all the paragraphs and get the number of lines from each one individually. Returns the number of lines in paragraph para, or -1 if there is no paragraph with index para. Returns the Q3MimeSourceFactory which is being used by this text edit. See also setMimeSourceFactory(). This signal is emitted when the modification status of the document has changed. If m is true, the document was modified, otherwise the modification state has been reset to unmodified. See also modified. Moves the text cursor according to action. This is normally used by some key event handler. select specifies whether the text between the current cursor position and the new position should be selected. Returns the paragraph which is at position pos (in contents coordinates). Returns the background color of the paragraph para or an invalid color if para is out of range or the paragraph has no background set. See also setParagraphBackgroundColor(). Returns the length of the paragraph para (i.e. the number of characters), or -1 if there is no paragraph with index para. This function ignores newlines. Returns the rectangle of the paragraph para in contents coordinates, or an invalid rectangle if para is out of range. Returns the number of paragraphs in the text; an empty text edit is always considered to have one paragraph, so 1 is returned in this case. Pastes the text from the clipboard into the text edit at the current cursor position. Only plain text is pasted.
If there is no text in the clipboard nothing happens. See also pasteSubType(), cut(), and Q3TextEdit::copy(). Pastes the text with format subtype from the clipboard into the text edit at the current cursor position. The subtype can be "plain" or "html". If there is no text with format subtype in the clipboard nothing happens. See also paste(), cut(), and Q3TextEdit::copy(). Places the cursor c at the character which is closest to position pos (in contents coordinates). If c is 0, the default text cursor is used. See also setCursorPosition(). Returns the point size of the font of the current format. See also setFamily(), setCurrentFont(), and setPointSize(). Redoes the last operation. If there is no operation to redo, i.e. there is no redo step in the undo/redo history, nothing happens. See also redoAvailable(), undo(), and undoDepth(). This signal is emitted when the availability of redo changes. If yes is true, then redo() will work until redoAvailable(false) is next emitted. See also redo() and undoDepth(). Removes the paragraph para. Deletes the text of selection selNum (by default, the default selection, 0). If there is no selected text nothing happens. See also selectedText and removeSelection(). Removes the selection selNum (by default 0). This does not remove the selected text. See also removeSelectedText(). Repaints any paragraphs that have changed. Although used extensively internally you shouldn't need to call this yourself. This signal is emitted if the user pressed the Return or the Enter key. Scrolls the text edit to make the text at the anchor called name visible, if it can be found in the document. If the anchor isn't found no scrolling will occur. An anchor is defined using the HTML anchor tag, e.g. <a name="target">. Scrolls to the bottom of the document and does formatting if required. If select is true (the default), all the text is selected as selection 0. If select is false any selected text is unselected, i.e. 
the default selection (selection 0) is cleared. See also selectedText. This signal is emitted whenever the selection changes. See also setSelection() and copyAvailable(). Sets the alignment of the current paragraph to a. Valid alignments are Qt::AlignLeft, Qt::AlignRight, Qt::AlignJustify and Qt::AlignCenter (which centers horizontally). See also alignment(). If b is true sets the current format to bold; otherwise sets the current format to non-bold. See also bold(). Sets the color of the current format, i.e. of the text, to c. See also color() and setPaper(). Sets the font of the current format to f. If the widget is in Qt::LogText mode this function will do nothing. Use setFont() instead. See also currentFont(), setPointSize(), and setFamily(). Sets the cursor to position index in paragraph para. See also getCursorPosition(). Sets the font family of the current format to fontFamily. See also family() and setCurrentFont(). If b is true sets the current format to italic; otherwise sets the current format to non-italic. See also italic(). Sets the text edit's mimesource factory to factory. See Q3MimeSourceFactory for further details. See also mimeSourceFactory(). Sets the background color of the paragraph para to bg. See also paragraphBackgroundColor(). Sets the point size of the current format to s. Note that if s is zero or negative, the behavior of this function is not defined. See also pointSize(), setCurrentFont(), and setFamily(). Sets a selection which starts at position indexFrom in paragraph paraFrom and ends at position indexTo in paragraph paraTo. Any existing selections which have a different id (selNum) are left alone, but if an existing selection has the same id as selNum it is removed and replaced by this selection. Uses the selection settings of selection selNum. If selNum is 0, this is the default selection. The cursor is moved to the end of the selection if selNum is 0, otherwise the cursor position remains unchanged. 
See also getSelection() and selectedText. Sets the background color of selection number selNum to back and specifies whether the text of this selection should be inverted with invertText. This only works for selNum > 0. The default selection (selNum == 0) gets its attributes from the text edit's palette(). Sets the stylesheet to use with this text edit to styleSheet. Changes will only take effect for new text added with setText() or append(). See also styleSheet(). If b is true sets the current format to underline; otherwise sets the current format to non-underline. See also underline(). Sets the vertical alignment of the current format, i.e. of the text, to a. See also verticalAlignment(), color(), and setPaper(). Returns the Q3StyleSheet which is being used by this text edit. See also setStyleSheet(). Q3TextEdit is optimized for large amounts of text. One of its optimizations is to format only the visible text, formatting the rest on demand, e.g. as the user scrolls, so you don't usually need to call this function. In some situations you may want to force the whole text to be formatted. For example, if after calling setText(), you wanted to know the height of the document (using contentsHeight()), you would call this function first. Returns the QSyntaxHighlighter set on this Q3TextEdit. 0 is returned if no syntax highlighter is set. This signal is emitted whenever the text in the text edit changes. See also setText() and append(). Returns the text edit's text cursor. Warning: Q3TextCursor is not in the public API, but in special circumstances you might wish to use it. Returns true if the current format is underlined; otherwise returns false. See also setUnderline(). Undoes the last operation. If there is no operation to undo, i.e. there is no undo step in the undo/redo history, nothing happens. See also undoAvailable(), redo(), and undoDepth(). This signal is emitted when the availability of undo changes.
If yes is true, then undo() will work until undoAvailable(false) is next emitted. See also undo() and undoDepth(). Returns the vertical alignment of the current format. See also setVerticalAlignment(). Zooms in on the text by making the base font size range points larger and recalculating all font sizes to be the new size. This does not change the size of any images. See also zoomOut(). This is an overloaded member function, provided for convenience. Zooms in on the text by making the base font size one point larger and recalculating all font sizes to be the new size. This does not change the size of any images. See also zoomOut(). Zooms out on the text by making the base font size range points smaller and recalculating all font sizes to be the new size. This does not change the size of any images. See also zoomIn(). This is an overloaded member function, provided for convenience. Zooms out on the text by making the base font size one point smaller and recalculating all font sizes to be the new size. This does not change the size of any images. See also zoomIn(). Zooms the text by making the base font size size points and recalculating all font sizes to be the new size. This does not change the size of any images.
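The zoom functions above all reduce to the same bookkeeping: adjust a base point size and rescale every derived font size proportionally, leaving images alone. A minimal sketch of that behavior (illustrative JavaScript, not Qt API):

```javascript
// Toy model of zoomTo/zoomIn/zoomOut as described in the documentation:
// change the base font size and recompute all font sizes by the same
// factor; image sizes are deliberately untouched.
function zoomTo(state, newBase) {
  const factor = newBase / state.base;
  return {
    base: newBase,
    fontSizes: state.fontSizes.map(s => s * factor),
    imageSizes: state.imageSizes, // images are not rescaled
  };
}
const zoomIn = (state, range = 1) => zoomTo(state, state.base + range);
const zoomOut = (state, range = 1) => zoomTo(state, state.base - range);

const doc = { base: 10, fontSizes: [10, 20], imageSizes: [64] };
const bigger = zoomIn(doc, 2);   // base becomes 12, fonts scale by 1.2
const smaller = zoomOut(doc);    // base becomes 9, fonts scale by 0.9
```

The one-argument zoomIn()/zoomOut() overloads are just the range-1 cases, which is exactly how the documentation distinguishes them.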
http://doc.trolltech.com/4.4/q3textedit.html
When testing React apps, there can be many ways to write a test. Yet small changes can make a big difference in readability and effectiveness. In this post I'm going to explore a common scenario: testing a component that renders some text based on a variable prop. I'll assume a basic familiarity with React and React Testing Library.

For this example I have a greeting component which accepts a name prop. This renders a welcome message customised with the provided name.

```jsx
function Greeting({name}) {
  return <h1>Welcome {name}!</h1>
}
```

Let's test this.

```jsx
import {render, screen} from '@testing-library/react'
import Greeting from './greeting'

test('it renders the given name in the greeting', () => {
  render(<Greeting name="Jane"/>)
  expect(screen.getByText(`Welcome Jane!`)).toBeInTheDocument()
})
```

We can write a test like this, and sure enough it passes. Here we're checking that the text we expect renders. But there are a few problems we can try and fix.

- First off, the name 'Jane' appears twice in our test; we can pull that out into a variable, making our test more readable.
- Second, if we change the component to render a different element rather than a heading, this test will still pass. But that's a change we would like our tests to tell us about.
- Third, if we break the component, and stop rendering the name, we don't get a great test failure message.

Use Variables in Tests

```jsx
test('it renders the given name in the greeting', () => {
  const name = 'Jane'
  render(<Greeting name={name}/>)
  expect(screen.getByText(`Welcome ${name}!`)).toBeInTheDocument()
})
```

Here we extract the name into a variable. It is now clearer that the name is the focus of the test. We could go even further and use a library like FakerJs to generate a random name. That way we can communicate that the specific name itself is not important, just that the name is rendered.
```jsx
import faker from 'faker'

test('it renders the given name in the greeting', () => {
  const name = faker.name.firstName()
  render(<Greeting name={name}/>)
  expect(screen.getByText(`Welcome ${name}!`)).toBeInTheDocument()
})
```

Test for Accessible Elements

Now we can address the element that is being rendered. Instead of only looking for the element by its text, we can check by its role, in this case heading. We provide the text we are looking for as the name property in the optional second argument to getByRole.

```jsx
expect(
  screen.getByRole('heading', {name: `Welcome ${name}!`}),
).toBeInTheDocument()
```

If we were to change the component to render a div instead of an h1 our test would fail. Our previous version would have still passed, not alerting us to this change. Checks like these are very important to preserve the semantic meaning of our rendered markup.

Improving Test Failure Message

If we break the component, and stop rendering the name, our failure message still isn't ideal. It's not terrible. Jest gives us the accessible elements that it found, and we can see that the name is missing. But if this was a larger component it may be time-consuming to search through this log to find what's wrong. We can do better.

```jsx
expect(
  screen.getByRole('heading', {name: /welcome/i}),
).toHaveTextContent(`Welcome ${name}!`)
```

We've done a couple of things here. We've extracted the static part of the text, which in this case is the word 'welcome'. Instead of searching by the full text string, we'll find the heading element that includes /welcome/i. We use a regex here instead of a plain string, so we can do a partial match on just that part of the text. Next, instead of expecting what we found toBeInTheDocument, we can use a different matcher from jest-dom. Using toHaveTextContent checks that the text in the element is what we expect. This is better for two reasons.
First, reading the test, it communicates that the text content is the thing that we are checking, not only that some element exists. Second, we get a far better test failure message. Here we see right away what the problem is; we don't have to hunt anywhere to find it.

Recap

- We have extracted variables in our test to communicate what is important data for our test.
- We used getByRole to validate the semantics of our component.
- We used toHaveTextContent to communicate what output our test is checking, and to get more useful test failure messages.

I picked up some of the techniques here from Kent C. Dodds's Epic React course. It has supercharged my understanding of all things React, even things I thought I already knew well. This guide of which query to use with React Testing Library is also very useful. The jest-dom documentation gives you an idea of all the matchers you can use to improve your tests.
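The difference between the two assertion styles can be mimicked outside the DOM with a toy model. This is plain JavaScript, not actual Testing Library code; the node shapes and helper below are invented for illustration only:

```javascript
// Toy stand-in for the pattern above: locate a "heading" node whose
// accessible name matches a pattern, then compare its full text content.
const page = [
  { role: 'heading', text: 'Welcome Jane!' },
  { role: 'button', text: 'Sign out' },
];

function getByRole(nodes, role, { name }) {
  const hit = nodes.find(n => n.role === role && name.test(n.text));
  if (!hit) throw new Error(`no ${role} matching ${name}`);
  return hit;
}

// Find by the static part only, so a missing name still finds the element...
const heading = getByRole(page, 'heading', { name: /welcome/i });
// ...then assert on the full text, which fails with the element's actual
// text rather than a generic "element not found".
const hasExpectedText = heading.text === 'Welcome Jane!';
```

The two-step shape is the point: the role query pins down which element we mean, and the text comparison carries the detailed failure information.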
https://dev.to/alexkmarshall/better-tests-for-text-content-with-react-testing-library-2mc7
Hi, I am trying to run a very simple Python tool as a geoprocessing service. As usual I created the tool as follows:

```python
import arcpy

inputLong = float(arcpy.GetParameterAsText(0))
inputLat = float(arcpy.GetParameterAsText(1))
xy = [(inputLat, inputLong)]
fc = r"Database Connections\Connection to XXX-XXXXX.sde\A.DBO.CropPoint"
cursor = arcpy.da.InsertCursor(fc, ["SHAPE@XY"])
for row in xy:
    cursor.insertRow([row])
del cursor
```

and shared it as a geoprocessing service on my ArcGIS server. In JavaScript I wrote the code as follows:

```javascript
var gpUrl = "";
var gp = new Geoprocessor(gpUrl);
var params1 = {
  Longitude: -101.55,
  Latitude: 35.6
};
gp.submitJob(params1);
```

It does not work (i.e. it does not add a point to my point map), however I can run it directly in the "ArcGIS REST Services Directory" in my browser. I also tried "gp.execute(params1)" to run the tool.

Mohammadreza, when you published the geoprocessing service did you publish it as a synchronous or asynchronous service? This will help determine if you run execute or submitJob. Second, when publishing the service what input types were selected for the parameters? Best, - Tyler

Hello, I tried both execute and submitJob correctly. The parameters are Double. Thanks

Found the problem. The rest URL is: But in the code you should use the complete URL (that you can find in the "ArcGIS REST Services Directory" in the browser) as follows:
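The fix described above boils down to pointing the Geoprocessor at the full task URL from the REST Services Directory rather than at the service root. A hedged sketch of the two URL shapes (the server, folder, service, and task names below are made up and match nothing in the original post):

```javascript
// Illustrative only: builds the two URL shapes discussed in the thread.
// A Geoprocessor must point at the task endpoint, not the service root.
function serviceRootUrl(server, folder, service) {
  return `https://${server}/arcgis/rest/services/${folder}/${service}/GPServer`;
}
function taskUrl(server, folder, service, task) {
  return `${serviceRootUrl(server, folder, service)}/${task}`;
}

// Hypothetical names for demonstration:
const rootUrl = serviceRootUrl('example.com', 'Tools', 'AddPoint');
const gpUrl = taskUrl('example.com', 'Tools', 'AddPoint', 'AddCropPoint');
```

Browsing the REST Services Directory down to the individual task page shows the complete URL of this form, which is what the Geoprocessor constructor expects.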
https://community.esri.com/t5/arcgis-api-for-javascript-questions/run-geo-processing-service/m-p/156884/highlight/true
This is part 9 of Categories for Programmers. Previously: Functoriality. See the Table of Contents. So far I’ve been glossing over the meaning of function types. A function type is different from other types. Take Integer, for instance: It’s just a set of integers. Bool is a two element set. But a function type a->b is more than that: it’s a set of morphisms between objects a and b. A set of morphisms between two objects in any category is called a hom-set. It just so happens that in the category Set every hom-set is itself an object in the same category —because it is, after all, a set. The same is not true of other categories where hom-sets are external to a category. They are even called external hom-sets. It’s the self-referential nature of the category Set that makes function types special. But there is a way, at least in some categories, to construct objects that represent hom-sets. Such objects are called internal hom-sets. Universal Construction Let’s forget for a moment that function types are sets and try to construct a function type, or more generally, an internal hom-set, from scratch. As usual, we’ll take our cues from the Set category, but carefully avoid using any properties of sets, so that the construction will automatically work for other categories. A function type may be considered a composite type because of its relationship to the argument type and the result type. We’ve already seen the constructions of composite types — those that involved relationships between objects. We used universal constructions to define a product type and a coproduct types. We can use the same trick to define a function type. We will need a pattern that involves three objects: the function type that we are constructing, the argument type, and the result type. The obvious pattern that connects these three types is called function application or evaluation. 
Given a candidate for a function type, let’s call it z (notice that, if we are not in the category Set, this is just an object like any other object), and the argument type a (an object), the application maps this pair to the result type b (an object). We have three objects, two of them fixed (the ones representing the argument type and the result type). We also have the application, which is a mapping. How do we incorporate this mapping into our pattern? If we were allowed to look inside objects, we could pair a function f (an element of z) with an argument x (an element of a) and map it to f x (the application of f to x, which is an element of b). In Set we can pick a function f from a set of functions z and we can pick an argument x from the set (type) a. We get an element f x in the set (type) b. But instead of dealing with individual pairs (f, x), we can as well talk about the whole product of the function type z and the argument type a. The product z×a is an object, and we can pick, as our application morphism, an arrow g from that object to b. In Set, g would be the function that maps every pair (f, x) to f x. So that’s the pattern: a product of two objects z and a connected to another object b by a morphism g. Is this pattern specific enough to single out the function type using a universal construction? Not in every category. But in the categories of interest to us it is. And another question: Would it be possible to define a function object without first defining a product? There are categories in which there is no product, or there isn’t a product for all pairs of objects. The answer is no: there is no function type, if there is no product type. We’ll come back to this later when we talk about exponentials. Let’s review the universal construction. We start with a pattern of objects and morphisms. That’s our imprecise query, and it usually yields lots and lots of hits. In particular, in Set, pretty much everything is connected to everything. 
We can take any object z, form its product with a, and there’s going to be a function from it to b (except when b is an empty set). That’s when we apply our secret weapon: ranking. This is usually done by requiring that there be a mapping between candidate objects — a mapping that somehow factorizes our construction. In our case, we’ll decree that z together with the morphism g from z×a to b is better than some other z' with its own application g', if and only if there is a mapping h from z' to z such that the application of g' factors through the application of g. (Hint: Read this sentence while looking at the picture.) Now here’s the tricky part, and the main reason I postponed this particular universal construction till now. Given the morphism h :: z'-> z, we want to close the diagram that has both z' and z crossed with a. What we really need, given the mapping h from z' to z, is a mapping from z'×a to z×a. And now, after discussing the functoriality of the product, we know how to do it. Because the product itself is a functor (more precisely an endo-bi-functor), it’s possible to lift pairs of morphisms. In other words, we can define not only products of objects but also products of morphisms. Since we are not touching the second component of the product z'×a, we will lift the pair of morphisms (h, id), where id is an identity on a. So, here’s how we can factor one application, g, out of another application g': g' = g ∘ (h × id) The key here is the action of the product on morphisms. The third part of the universal construction is selecting the object that is universally the best. Let’s call this object a⇒b (think of this as a symbolic name for one object, not to be confused with a Haskell typeclass constraint — I’ll discuss different ways of naming it later). This object comes with its own application — a morphism from (a⇒b)×a to b — which we will call eval. 
The object a⇒b is the best if any other candidate for a function object can be uniquely mapped to it in such a way that its application morphism g factorizes through eval. This object is better than any other object according to our ranking. The definition of the universal function object: this is the same diagram as above, but now the object a⇒b is universal. Formally: a function object from a to b is an object a⇒b together with the morphism

eval :: ((a⇒b) × a) -> b

such that for any other object z with a morphism g :: z × a -> b there is a unique morphism h :: z -> (a⇒b) that factors g through eval:

g = eval ∘ (h × id)

Of course, there is no guarantee that such an object a⇒b exists for any pair of objects a and b in a given category. But it always does in Set. Moreover, in Set, this object is isomorphic to the hom-set Set(a, b). This is why, in Haskell, we interpret the function type a->b as the categorical function object a⇒b.

Currying

Let's have a second look at all the candidates for the function object. This time, however, let's think of the morphism g as a function of two variables, z and a.

g :: z × a -> b

Being a morphism from a product comes as close as it gets to being a function of two variables. In particular, in Set, g is a function from pairs of values, one from the set z and one from the set a. On the other hand, the universal property tells us that for each such g there is a unique morphism h that maps z to a function object a⇒b.

h :: z -> (a⇒b)

In Set, this just means that h is a function that takes one variable of type z and returns a function from a to b. That makes h a higher-order function. Therefore the universal construction establishes a one-to-one correspondence between functions of two variables and functions of one variable returning functions. This correspondence is called currying, and h is called the curried version of g. This correspondence is one-to-one, because given any g there is a unique h, and given any h you can always recreate the two-argument function g using the formula:

g = eval ∘ (h × id)

The function g can be called the uncurried version of h. Currying is essentially built into the syntax of Haskell.
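The factorization g = eval ∘ (h × id) can be checked concretely in any language with first-class functions. A small JavaScript sketch, modeling objects as value sets and morphisms as functions on pairs:

```javascript
// g :: z × a -> b, modeled as a function on a two-element array (a pair).
const g = ([z, a]) => z * 10 + a;

// h :: z -> (a ⇒ b), the curried form given by the universal property.
const h = z => a => g([z, a]);

// eval :: (a ⇒ b) × a -> b, function application itself.
const evalMorph = ([f, a]) => f(a);

// (h × id) acts componentwise on the pair; then eval applies the result.
// Together they should recover g exactly.
const composed = ([z, a]) => evalMorph([h(z), a]);

// For every input pair, g and eval ∘ (h × id) agree.
const agree = [[1, 2], [3, 4], [0, 7]].every(
  ([z, a]) => g([z, a]) === composed([z, a])
);
```

Nothing here is specific to this particular g; the same two-line construction of h and composed works for any function on pairs, which is the content of the uniqueness claim.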
A function returning a function:

a -> (b -> c)

is often thought of as a function of two variables. That's how we read the un-parenthesized signature:

a -> b -> c

This interpretation is apparent in the way we define multi-argument functions. For instance:

```haskell
catstr :: String -> String -> String
catstr s s' = s ++ s'
```

The same function can be written as a one-argument function returning a function — a lambda:

```haskell
catstr' s = \s' -> s ++ s'
```

These two definitions are equivalent, and either can be partially applied to just one argument, producing a one-argument function, as in:

```haskell
greet :: String -> String
greet = catstr "Hello "
```

Strictly speaking, a function of two variables is one that takes a pair (a product type):

(a, b) -> c

It's trivial to convert between the two representations, and the two (higher-order) functions that do it are called, unsurprisingly, curry and uncurry:

```haskell
curry :: ((a, b)->c) -> (a->b->c)
curry f a b = f (a, b)
```

and

```haskell
uncurry :: (a->b->c) -> ((a, b)->c)
uncurry f (a, b) = f a b
```

Notice that curry is the factorizer for the universal construction of the function object. This is especially apparent if it's rewritten in this form:

```haskell
factorizer :: ((a, b)->c) -> (a->(b->c))
factorizer g = \a -> (\b -> g (a, b))
```

(As a reminder: A factorizer produces the factorizing function from a candidate.) In non-functional languages, like C++, currying is possible but nontrivial. You can think of multi-argument functions in C++ as corresponding to Haskell functions taking tuples (although, to confuse things even more, in C++ you can define functions that take an explicit std::tuple, as well as variadic functions, and functions taking initializer lists). You can partially apply a C++ function using the template std::bind.
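The same curry/uncurry pair translates directly into JavaScript, with pairs represented as two-element arrays:

```javascript
// curry :: ((a, b) -> c) -> (a -> b -> c)
const curry = f => a => b => f([a, b]);

// uncurry :: (a -> b -> c) -> ((a, b) -> c)
const uncurry = f => ([a, b]) => f(a)(b);

// The catstr example from above, in both shapes:
const catstrPair = ([s1, s2]) => s1 + s2;   // takes a pair
const catstrCurried = curry(catstrPair);     // takes one argument at a time

// Partial application falls out of the curried form for free:
const greet = catstrCurried('Hello ');
```

As in Haskell, curry here is exactly the factorizer: it turns a candidate morphism on pairs into the unique map into the function object.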
For instance, given a function of two strings:

std::string catstr(std::string s1, std::string s2) {
    return s1 + s2;
}

you can define a function of one string:

using namespace std::placeholders;

auto greet = std::bind(catstr, "Hello ", _1);
std::cout << greet("Haskell Curry");

Scala, which is more functional than C++ or Java, falls somewhere in between. If you anticipate that the function you're defining will be partially applied, you define it with multiple argument lists:

def catstr(s1: String)(s2: String) = s1 + s2

Of course that requires some amount of foresight or prescience on the part of a library writer.

Exponentials

In mathematical literature, the function object, or the internal hom-object between two objects a and b, is often called the exponential and denoted by b^a. Notice that the argument type is in the exponent. This notation might seem strange at first, but it makes perfect sense if you think of the relationship between functions and products. We've already seen that we have to use the product in the universal construction of the internal hom-object, but the connection goes deeper than that.

This is best seen when you consider functions between finite types — types that have a finite number of values, like Bool, Char, or even Int or Double. Such functions, at least in principle, can be fully memoized or turned into data structures to be looked up. And this is the essence of the equivalence between functions, which are morphisms, and function types, which are objects.

For instance a (pure) function from Bool is completely specified by a pair of values: one corresponding to False, and one corresponding to True. The set of all possible functions from Bool to, say, Int is the set of all pairs of Ints. This is the same as the product Int × Int or, being a little creative with notation, Int^2.

For another example, let's look at the C++ type char, which contains 256 values (Haskell Char is larger, because Haskell uses Unicode).
There are several functions in the character-classification part of the C++ Standard Library (the <cctype> header) that are usually implemented using lookups. Functions like isupper or isspace are implemented using tables, which are equivalent to tuples of 256 Boolean values. A tuple is a product type, so we are dealing with products of 256 Booleans: bool × bool × bool × ... × bool. We know from arithmetics that an iterated product defines a power. If you "multiply" bool by itself 256 (or char) times, you get bool to the power of char, or bool^char.

How many values are there in the type defined as 256-tuples of bool? Exactly 2^256. This is also the number of different functions from char to bool, each function corresponding to a unique 256-tuple. You can similarly calculate that the number of functions from bool to char is 256^2, and so on. The exponential notation for function types makes perfect sense in these cases.

We probably wouldn't want to fully memoize a function from int or double. But the equivalence between functions and data types, if not always practical, is there. There are also infinite types, for instance lists, strings, or trees. Eager memoization of functions from those types would require infinite storage. But Haskell is a lazy language, so the boundary between lazily evaluated (infinite) data structures and functions is fuzzy. This function vs. data duality explains the identification of Haskell's function type with the categorical exponential object — which corresponds more to our idea of data.

Cartesian Closed Categories

Although I will continue using the category of sets as a model for types and functions, it's worth mentioning that there is a larger family of categories that can be used for that purpose. These categories are called cartesian closed, and Set is just one example of such a category.

A cartesian closed category must contain:

- The terminal object,
- A product of any pair of objects, and
- An exponential for any pair of objects.
If you consider an exponential as an iterated product (possibly infinitely many times), then you can think of a cartesian closed category as one supporting products of an arbitrary arity. In particular, the terminal object can be thought of as a product of zero objects — or the zero-th power of an object.

What's interesting about cartesian closed categories from the perspective of computer science is that they provide models for the simply typed lambda calculus, which forms the basis of all typed programming languages.

The terminal object and the product have their duals: the initial object and the coproduct. A cartesian closed category that also supports those two, and in which product can be distributed over coproduct

a × (b + c) = a × b + a × c
(b + c) × a = b × a + c × a

is called a bicartesian closed category. We'll see in the next section that bicartesian closed categories, of which Set is a prime example, have some interesting properties.

Exponentials and Algebraic Data Types

The interpretation of function types as exponentials fits very well into the scheme of algebraic data types. It turns out that all the basic identities from high-school algebra relating numbers zero and one, sums, products, and exponentials hold pretty much unchanged in any bicartesian closed category for, respectively, initial and final objects, coproducts, products, and exponentials. We don't have the tools yet to prove them (such as adjunctions or the Yoneda lemma), but I'll list them here nevertheless as a source of valuable intuitions.

Zeroth Power

a^0 = 1

In the categorical interpretation, we replace 0 with the initial object, 1 with the final object, and equality with isomorphism. The exponential is the internal hom-object. This particular exponential represents the set of morphisms going from the initial object to an arbitrary object a. By the definition of the initial object, there is exactly one such morphism, so the hom-set C(0, a) is a singleton set.
A singleton set is the terminal object in Set, so this identity trivially works in Set. What we are saying is that it works in any bicartesian closed category.

In Haskell, we replace 0 with Void; 1 with the unit type (); and the exponential with function type. The claim is that the set of functions from Void to any type a is equivalent to the unit type — which is a singleton. In other words, there is only one function Void->a. We've seen this function before: it's called absurd.

This is a little bit tricky, for two reasons. One is that in Haskell we don't really have uninhabited types — every type contains the "result of a never ending calculation," or the bottom. The second reason is that all implementations of absurd are equivalent because, no matter what they do, nobody can ever execute them. There is no value that can be passed to absurd. (And if you manage to pass it a never ending calculation, it will never return!)

Powers of One

1^a = 1

This identity, when interpreted in Set, restates the definition of the terminal object: There is a unique morphism from any object to the terminal object. In general, the internal hom-object from a to the terminal object is isomorphic to the terminal object itself.

In Haskell, there is only one function from any type a to unit. We've seen this function before — it's called unit. You can also think of it as the function const partially applied to ().

First Power

a^1 = a

This is a restatement of the observation that morphisms from the terminal object can be used to pick "elements" of the object a. The set of such morphisms is isomorphic to the object itself. In Set, and in Haskell, the isomorphism is between elements of the set a and functions that pick those elements, ()->a.

Exponentials of Sums

a^(b+c) = a^b × a^c

Categorically, this says that the exponential from a coproduct of two objects is isomorphic to a product of two exponentials. In Haskell, this algebraic identity has a very practical interpretation.
It tells us that a function from a sum of two types is equivalent to a pair of functions from individual types. This is just the case analysis that we use when defining functions on sums. Instead of writing one function definition with a case statement, we usually split it into two (or more) functions dealing with each type constructor separately. For instance, take a function from the sum type (Either Int Double):

f :: Either Int Double -> String

It may be defined as a pair of functions from, respectively, Int and Double:

f (Left n)  = if n < 0 then "Negative int" else "Positive int"
f (Right x) = if x < 0.0 then "Negative double" else "Positive double"

Here, n is an Int and x is a Double.

Exponentials of Exponentials

(a^b)^c = a^(b×c)

This is just a way of expressing currying purely in terms of exponential objects. A function returning a function is equivalent to a function from a product (a two-argument function).

Exponentials over Products

(a × b)^c = a^c × b^c

In Haskell: A function returning a pair is equivalent to a pair of functions, each producing one element of the pair.

It's pretty incredible how those simple high-school algebraic identities can be lifted to category theory and have practical application in functional programming.

Curry-Howard Isomorphism

I have already mentioned the correspondence between logic and algebraic data types. The Void type and the unit type () correspond to false and true. Product types and sum types correspond to logical conjunction ∧ (AND) and disjunction ⋁ (OR). In this scheme, the function type we have just defined corresponds to logical implication ⇒. In other words, the type a->b can be read as "if a then b."

According to the Curry-Howard isomorphism, every type can be interpreted as a proposition — a statement or a judgment that may be true or false. Such a proposition is considered true if the type is inhabited and false if it isn't.
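The "exponentials over products" identity a few paragraphs above can also be witnessed by a pair of Haskell functions. This sketch (the names are mine, not from the text) converts between a pair-returning function and a pair of functions:

```haskell
-- (a × b)^c -> a^c × b^c: split a pair-returning function
split :: (c -> (a, b)) -> (c -> a, c -> b)
split f = (fst . f, snd . f)

-- a^c × b^c -> (a × b)^c: fuse a pair of functions
fuse :: (c -> a, c -> b) -> (c -> (a, b))
fuse (g, h) = \c -> (g c, h c)
```

Going back and forth between the two representations changes nothing, which is exactly what the isomorphism claims.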
In particular, a logical implication is true if the function type corresponding to it is inhabited, which means that there exists a function of that type. An implementation of a function is therefore a proof of a theorem. Writing programs is equivalent to proving theorems. Let's see a few examples.

Let's take the function eval we have introduced in the definition of the function object. Its signature is:

eval :: ((a -> b), a) -> b

It takes a pair consisting of a function and its argument and produces a result of the appropriate type. It's the Haskell implementation of the morphism:

eval :: (a⇒b) × a -> b

which defines the function type a⇒b (or the exponential object b^a). Let's translate this signature to a logical predicate using the Curry-Howard isomorphism:

((a ⇒ b) ∧ a) ⇒ b

Here's how you can read this statement: If it's true that b follows from a, and a is true, then b must be true. This makes perfect intuitive sense and has been known since antiquity as modus ponens. We can prove this theorem by implementing the function:

eval :: ((a -> b), a) -> b
eval (f, x) = f x

If you give me a pair consisting of a function f taking a and returning b, and a concrete value x of type a, I can produce a concrete value of type b by simply applying the function f to x. By implementing this function I have just shown that the type ((a -> b), a) -> b is inhabited. Therefore modus ponens is true in our logic.

How about a predicate that is blatantly false? For instance: if a or b is true then a must be true.

a ⋁ b ⇒ a

This is obviously wrong because you can choose an a that is false and a b that is true, and that's a counter-example. Mapping this predicate into a function signature using the Curry-Howard isomorphism, we get:

Either a b -> a

Try as you may, you can't implement this function — you can't produce a value of type a if you are called with the Right value. (Remember, we are talking about pure functions.)
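For contrast, here are two propositions that are provable; these examples are mine, not from the text. The proofs are simply the product projection and the coproduct injection:

```haskell
-- a ∧ b ⇒ a: conjunction elimination, proved by the projection
proofAnd :: (a, b) -> a
proofAnd (x, _) = x

-- a ⇒ a ∨ b: disjunction introduction, proved by the injection
proofOr :: a -> Either a b
proofOr = Left
```

Both types are inhabited, so both propositions are true; unlike Either a b -> a, the implementations write themselves.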
Finally, we come to the meaning of the absurd function:

absurd :: Void -> a

Considering that Void translates into false, we get:

false ⇒ a

Anything follows from falsehood (ex falso quodlibet). Here's one possible proof (implementation) of this statement (function) in Haskell:

absurd (Void a) = absurd a

where Void is defined as:

newtype Void = Void Void

As always, the type Void is tricky. This definition makes it impossible to construct a value because in order to construct one, you would need to provide one. Therefore, the function absurd can never be called.

These are all interesting examples, but is there a practical side to Curry-Howard isomorphism? Probably not in everyday programming. But there are programming languages like Agda or Coq, which take advantage of the Curry-Howard isomorphism to prove theorems.

Computers are not only helping mathematicians do their work — they are revolutionizing the very foundations of mathematics. The latest hot research topic in that area is called Homotopy Type Theory, and is an outgrowth of type theory. It's full of Booleans, integers, products and coproducts, function types, and so on. And, as if to dispel any doubts, the theory is being formulated in Coq and Agda. Computers are revolutionizing the world in more than one way.

Bibliography

- Ralph Hinze, Daniel W. H. James, Reason Isomorphically!. This paper contains proofs of all those high-school algebraic identities in category theory that I mentioned in this chapter.

Next: Natural Transformations.

Acknowledgments

I'd like to thank Gershom Bazerman for checking my math and logic, and André van Meulebrouck, who has been volunteering his editing help throughout this series of posts.
Category Archives: Elsewhere

Origami Yoda → With instructions and everything!

Welcome to the new school → Design by Fire is back?

Mac Attack → Comment on/Review of the "I'm a mac" ads.

Opera 9 → It's out! The new version includes bittorrent support, widgets and many other new exciting features.

Rare rainbow spotted over Idaho → Rare rainbow indeed, and awesome color in that pic!

Javascript namespaces → I code like this, do you?

Giving the X-Men a digital Facelift → Or: making actors Sir Ian McKellen and Patrick Stewart appear 25 years younger in X3

Netscape Copies Digg → And this just a few days (?) before Digg V3 is about to launch.

How to tie a tie → Wow, I only knew of 2 methods …

Cyber Detective: Case #238532 → That's what one can get from scamming on ebay! Excellent read!
At 01:44 AM 5/25/01 +0100, Jose Alberto Fernandez wrote:
>> These are my ideas
>> 1. Each library will provide a unique name based on the dns
>>
>> <antlib id="com.m64.tasks.fubar">
>> <taskdef name="foo" .../>
>> <taskdef name="bar" />
>> </antlib>
>>
>
>I like it!

me too.

>>.

Ick. I don't think I like using namespace in this way. I can handle namespace for "static" structural aspects (ie indicating task library or aspect attribute/element) but it can get confusing to use namespace to also indicate other projects.

>Something like this is what I think is the right implementation for what
>people have refer as <include>. Macro like <include> I feel is a mistake,
>here you have a structured a object being referenced and accessed. I will
>just add the ability to specify properties to use during the instantiation
>of the project reference:
>
> <projectref name="cat" location="catalina.xml" >
> <property name="port" value="8080" />
> </projectref>

nice - what happens when the project is included multiple times ... Do we assume that it shouldn't be and if it is included multiple times it counts as multiple projects ? (I like this approach). Or do we do something else?

>Now shall we use the name space for properties also?

Hopefully only those that are explicitly marked as public.

>Finally, I would suggest adding to <project>s a way to specify required
>properties to be suplied by the caller:

as long as they can also accept other ones as well.

Cheers,
Pete

*-----------------------------------------------------*
| "Faced with the choice between changing one's mind, |
| and proving that there is no need to do so - almost |
| everyone gets busy on the proof."                   |
|              - John Kenneth Galbraith               |
*-----------------------------------------------------*
Regions is an automatic resource management technique that statically ensures that all allocated resources are freed and a freed resource cannot be used. Regions also promote efficiency by helping to structure the computation so that resources will be freed soon, and en masse. Therefore, regions are particularly suitable for scarce resources such as file handles, database or network connections, etc. A lightweight monadic region library is available on Hackage.

Iteratee IO also aims, among other things, to encapsulate resources such as file handles and network connections, ensuring their safe use and prompt disposal. One may wonder how much Iteratees and Regions have in common and if that commonality can be factored out. There seem to be several ways to combine regions and iteratees. This message describes the most straightforward attempt, combining a monadic region library (mostly as it is) with an Iteratee IO library, also mostly as it is.

We use monadic regions to manage file handles or file descriptors, ensuring that file handles are always closed even in case of IO and other asynchronous exceptions. An enumerator like enumFile provided similar guarantees for its handles. (Since an enumerator keeps its handles to itself, there is no danger of iteratees' misusing them.) With the monadic region library, the enumerator code becomes simpler: we no longer have to worry about exceptions.

The main benefit of the monadic region library is to manage files opened by iteratees. The latter are passed around, and so their resources are harder to keep track of. We thus demonstrate enumFile and iterFile for incremental file reading and writing, with the same safety guarantees. All opened files are *always* closed, regardless of any (asynchronous) exceptions that may arise during opening, reading, writing or transforming. The code has many examples of throwing errors from various stages of the pipeline and at various times. All files are closed.
The commented code is available at which uses the lightweight monadic regions library from

Since the lightweight monadic library needs rank-2 types (now standard), it seemed appropriate to avail ourselves of common GHC extensions.

We can clearly see that enumerators and enumeratees unify, both being instances of a general type

forall a. Iteratee e mi a -> mo (Iteratee e mi a)

which is a Monoid. To compose enumerators or enumeratees, we use the standard mappend. An alias in the code

type R e m = Iteratee e m

suggests that Iteratees are the view from the right -- the System.IO, getChar-like view. From that point of view, Iteratee IO is hardly different from System.IO (getChar, getLine, peekChar, etc). The dual

newtype L e mi mo = L{unL :: forall a. R e mi a -> mo (R e mi a)}

is the view from the left.

Here are a few examples from the IterReg code. The first simply copies one file to another, block-by-block.

tIF1 = runSIO $ run =<< unL (enumFile "/etc/motd") (iterFile "/tmp/x")

According to the trace

  opened file /etc/motd
  iterFile: opening /tmp/x
  Closing {handle: /etc/motd}
  Closing {handle: /tmp/x}

the files are indeed closed, but _not_ in the LIFO order. That is important, so as to let an iteratee write data coming from several sources. For example:

tIF3 = runSIO $ run =<<
         unL (enumFile "/etc/motd" `mappend` enumFile "/usr/share/dict/words")
             (iterFile "/tmp/x")

  opened file /etc/motd
  iterFile: opening /tmp/x
  Closing {handle: /etc/motd}
  opened file /usr/share/dict/words
  Closing {handle: /usr/share/dict/words}
  Closing {handle: /tmp/x}

The files will be closed even in case of exceptions:

tIF4 = runSIO $ run =<<
         unL (enumFile "/etc/motd" `mappend` enumFile "/nonexistent")
             (iterFile "/tmp/x")

  opened file /etc/motd
  iterFile: opening /tmp/x
  Closing {handle: /etc/motd}
  opened file /nonexistent
  Closing {handle: /tmp/x}
  *** Exception: /nonexistent: openFile: does not exist

All monadic region monads support shCatch, so we can write our own exception-handling code.
Other examples in IterReg.hs raise errors during data transformation.

Monadic regions plus GHC extensions simplify code. For example, here are iterFile and enumFile (the signatures could be omitted; they will be inferred):

iterFile :: (SMonad1IO m, m ~ (IORT s' m')) => FilePath -> R ByteString m ()
iterFile fname = lift (newSHandle fname WriteMode) >>= loop
 where
 loop h = getChunk >>= check h
 check h (Chunk s) = lift (mapM (shPut h) s) >> loop h
 check h e         = return ()

enumFile :: (SMonadIO m) => FilePath -> L ByteString m m
enumFile filepath = L $ \iterv -> do
  newRgn $ do
    h <- newSHandle filepath ReadMode
    unL (enumHandle h) iterv
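To see why such left views form a Monoid, here is a toy model. It is my sketch, not the message's IterReg code: it drops the inner monad from the type and models an iteratee as something that is either done or waiting for a chunk, so that composing two enumerators with mappend feeds their chunks in sequence.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity

-- A stripped-down iteratee: either finished, or waiting for a chunk
data It e a = Done a | Cont ([e] -> It e a)

-- The "view from the left": transforms an iteratee inside a monad m
newtype Lv e m = Lv { unLv :: forall a. It e a -> m (It e a) }

-- Composition runs the first enumerator, then the second, on its result
instance Monad m => Semigroup (Lv e m) where
  Lv f <> Lv g = Lv (\i -> f i >>= g)

-- The identity enumerator feeds nothing
instance Monad m => Monoid (Lv e m) where
  mempty = Lv return

-- An enumerator that feeds a single chunk
feed :: Monad m => [e] -> Lv e m
feed xs = Lv $ \i -> return $ case i of
  Cont k -> k xs
  done   -> done
```

Composing feed [1,2] <> feed [3,4] feeds both chunks in order to one iteratee, mirroring how enumFile "/etc/motd" `mappend` enumFile "/usr/share/dict/words" above lets a single iterFile consume several sources.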
The Southern States of America
The History of Virginia - Chapter IV
FROM COLONY TO COMMONWEALTH, 1763-1776

The French and Indian War, which closed the issue as to whether the English, French or Spanish should dominate this continent, opened the question as to whether sovereignty over the country should be British or American. The American Revolution was less a revolt from England than the growth of instinctive forces in the life of Anglo-Saxons settled in the western wilderness. In that creative era there culminated three tendencies which sprang naturally out of the conditions of colonial life - democracy, union and independence. Hence the significance attaching to that period is the genesis of ideas, the progress of social forces, and the subtle motives that weave institutions. It was, in fact, an evolution rather than a revolution.

Virginia gladly acknowledged itself the child of England, but a child having substantive aims, and claiming as an heir the great "moral discoveries of habeas corpus and trial by jury, of a representative government and a free press." The Virginia Assembly as early as 1624 declared that it had the right "to lay taxes and impositions, and none other." When, therefore, the intention of the British Ministry as to the Stamp Act became known in 1764, the Virginia Burgesses promptly forwarded their remonstrance. Despite colonial protests, the Stamp Act was to go into effect Nov. 1, 1765. Acquiescence seemed the only course, when Patrick Henry entered the House of Burgesses on May 20 of that year. He had sprung into prominence in the famous Parsons' Cause, by upholding with rare eloquence the right of Virginia to make her own laws without the intervention of the king's veto. Nine days after Henry took his seat in the Assembly he wrote on the fly-leaf of an old copy of Coke Upon Littleton a series of resolutions against the Stamp Act, which he presented to the House, and thereby "gave the first impulse to the ball of the revolution."
Jefferson, then a student, witnessed "the bloody debate," and heard Peyton Randolph exclaim after the count, "By God, I would have given five hundred guineas for a single vote!" Another negative vote would have killed the measure. Governor Fauquier, affrighted, at once dissolved the Assembly. But the work had been done. Virginia's voice echoed in the New York Congress, and the Stamp Act was repealed. The crisis seemed past. Not so, for Townshend, in 1767, aroused anew the colonies by import duties upon glass and tea. In the choice of the courtly Botetourt as a successor to Fauquier, the Ministry hoped to detach Virginia from the side of Massachusetts. The Burgesses, however, would not desert New England at that critical moment. They embodied, in 1769, their patriotic views in energetic resolves, while sitting behind closed doors. Hardly had the vote been taken when the governor abruptly summoned them to meet him in the council chamber. With flushed face he angrily dissolved them. Turned out of the capitol, the representatives with one accord went to the Raleigh Tavern and agreed to import no more goods from Britain. It is worthy of record that at this session of the Assembly Thomas Jefferson urged a bill allowing owners to manumit their slaves. Of like import was the attempt of the Burgesses, in 1772, to put an end to the iniquitous slave trade. The king denied this appeal and thereby laid himself open to Jefferson's fierce indictment on that score in his draft of the Declaration of Independence. After 1769 there was a lull in Virginia, in spite of the unrepealed tax on tea. Upon the death of the genial Botetourt, the suspicious Dunmore took his place. Violence, however, manifested itself in other provinces. The Boston Massacre, the burning of the Gaspee in Rhode Island, and the counter coercive measures of the British Ministry, kept alive the great debate. 
To secure unity, the Assembly, in 1773, devised committees of correspondence to act as a nervous system for the colonial cause.

Virginia's Opposition to Boston Port Bill

Throughout the events that led up to the Revolution, it seemed ordained that Massachusetts was to suffer and Virginia to sympathize. Until the outbreak of actual hostilities scarcely anything of moment occurred on the soil of Virginia to incite her sons to champion the cause of freedom. Indeed, from the beginning of the controversy between the colonies and the mother country, the British Ministry seemed to have avoided any special cause of irritation to the people of the Old Dominion. The part, therefore, which Virginia took in the events of those days must be attributed to her devotion to the principles of liberty, to her interest in the common cause of the colonies, and particularly to her sympathy with Massachusetts in the suffering which that province was called upon to endure. If we lose sight of these motives as the springs of Virginia's conduct in that struggle, we shall be unable to appreciate either the nobility of her spirit or the wisdom and energy which marked her initiative.

The Port Bill, which closed the harbor of Boston as a retaliation for the famous Tea-party, reached Boston on May 10, 1774, the day of the accession of Louis XVI. Three days later the Bay patriots drafted a circular-letter, appealing to the colonists for united support and urging the cessation of all trade with Great Britain. One writing from the doomed city in New England on May 29, just before the Port Bill was to go into effect, sketches for us the situation there: "Preparations are now making for blocking up this harbor, and affairs at present bear a gloomy aspect in this metropolis. However, we are in good spirits, and if the other colonists will but stand by us we doubt not of doing well. Nothing but an union can be the salvation of America."
On the afternoon of the very Sunday on which the writer was penning these words to his friend, Boston's circular-letter arrived by special messenger in the quiet Virginia capital at Williamsburg, causing hurried consultations among the score or more members of the General Assembly that still lingered in town. On the previous Thursday the House of Burgesses had been abruptly dissolved by the irate governor on account of an active expression of sympathy with the cause of Massachusetts. The reply to Boston's proposal to break off all trade relations with Britain seemed too grave a step for the Virginia Committee of Correspondence, instituted the previous year, to take. Accordingly, at a meeting on the following morning, at which all the twenty-five remaining ex-Burgesses were present, it was decided to ask the counties to appoint deputies to a convention which should consider the question of the cessation of all trade with Great Britain and which should select delegates to a proposed Congress of the American colonies. The Revolution in Virginia had begun ; a body, deriving its mandates not from the Crown but from the people of the colonies, had been called into existence, and this democratic legislature was gradually to draw to itself all the governmental functions of the province. Boston's appeal for support was thus referred by the Committee of Correspondence in Virginia to the representatives of the sovereign people, whom royal writs did not summon nor royal governors dissolve. This call for the first Virginia convention, the original of which is in the State Library at Richmond, was evidently written by Peyton Randolph, the recent Speaker of the Burgesses, whose signature stands first in the list of signers. There follow the names of Thomas Jefferson, Henry, Lee, George Washington, etc. 
June the first, the very day on which the Boston Port Bill was to go into effect, had, by appointment of the Virginia Burgesses, been set apart "as a day of fasting, humiliation and prayer to avert the heavy calamity which threatened destruction to their civil rights" - the precise resolution that brought Lord Dunmore's wrath down upon their heads. Food was not tasted from the rising to the setting of the sun throughout the colony, and solemn services were held in the local churches. George Mason, in writing from Williamsburg to a neighbor, mentions the day of fasting appointed and adds, "please tell my dear little family that I charge them to pay strict attention to it, and that I desire my three eldest sons and my two eldest daughters may attend church in mourning."

At Bruton Church in the ancient capital Rev. Mr. Price, before whom sat Washington and his fellow Burgesses, took as the text of his discourse the words: "Be strong and of good courage; fear not nor be afraid of them, for the Lord thy God, He it is that doth go with thee. He will not fail thee nor forsake thee" - admirably chosen as suggesting divine succor and ultimate success. "The people," wrote Jefferson, "met generally with anxiety and alarm in their countenances, and the effect of the day through the whole colony was like a shock of electricity, arousing every man and placing him erect and solidly on his centre."

The First Convention, 1774

During the summer of 1774 the Revolution was organizing itself throughout the province by the appointment of local committees of correspondence as a means of promoting union and diffusing information, and by spirited county mass-meetings called to consider the crisis of public affairs and to elect delegates to the Virginia convention, in which the Burgesses were in general empowered to act as representatives of the people.
All eyes now turned to the convention which was to meet at the capital on August 1, just eleven days previous to the time set by Lord Dunmore for the session of the General Assembly. The sinister governor, by way of avoiding any pretext for the gathering of the people's representatives, began a series of six prorogations of the legislature, hoping that meantime patriotic feeling would subside. His proclamation to that effect stands on the page of the yellowed Journal just opposite to the record of the impetuous words with which he dissolved the May Assembly. Little did Lord Dunmore suspect that his act on that occasion virtually closed the labors of a legislature that dated from 1619. The first Virginia convention met at Williamsburg on Aug. 1, 1774, and remained in session six days. Peyton Randolph was made president. In support of Boston it was unanimously agreed that after November 1 following, no goods except medicines should be imported from Great Britain; that the Virginians would neither import nor purchase slaves imported, after that date, from any place whatsoever; and that, unless American grievances were redressed by Aug. 10, 1775, they would stop all exports of their product to the British Isles. Delegates were chosen to represent Virginia in a general Congress of the colonies. Provision was made for the future sessions of the convention, should the course of affairs demand. The spirit of the planters voiced itself in the words of Washington: "I will raise one thousand men, subsist them at my own expense, and march myself at their head for the relief of Boston." Following the session of the first Continental Congress which met in Philadelphia on Sept. 5, 1774, local military companies were raised in various parts of Virginia and steps were taken to arm and provision them. 
Events in Boston hastened the pace of the patriots, while Parliament, in January, 1775, declared Massachusetts in a state of rebellion and interdicted all trade on the part of the resisting colonies with Britain and the West Indies. The Second Convention, 1775. It was under such circumstances that the second Virginia convention was held at Richmond on March 20, 1775. It sat in St. John's church, which crowns an eminence overlooking the valley of the James. The historic building stands to-day amid a beautiful grove under whose shade sleep the village fathers. A hundred and nineteen delegates were present and remained in session for one week. A cleavage in parties soon appeared. The conservative members brought forward a conciliatory resolution, expressing a desire "to see a speedy return to those halcyon days when we lived a free and happy people" under British rule. There were, however, some men in the convention who favored action on the part of the colony. Seeing no reason to put their trust in papers addressed to King and Parliament-were not the royal wastebaskets full of these?-they began to rely on their muskets as the means of freedom. Were not the Virginian youth from sea to mountains already on the drill-field, but without authoritative organization? Did not a state of war then exist in Massachusetts? Moved by such considerations, Patrick Henry sprang to his feet and offered a barbed resolution to the effect "that this colony be immediately put into a state of defense." The scene that followed this proposal was a repetition of that which the House of Burgesses had witnessed ten years before in the fiery protest against the Stamp Act, when Patrick Henry, by eloquence as natural as it was overwhelming, carried all before him. Bland, Nicholas, Harrison and Pendleton fought the martial resolution, while Richard Henry Lee and Thomas Jefferson seconded the impassioned words of the son of Hanover. 
The proposition to arm the colony was carried, and the committee, including Patrick Henry, Richard Henry Lee, Benjamin Harrison, George Washington and Thomas Jefferson, at once formulated plans for executing it. Companies of infantry and horse were soon marshalled in the various counties. Trade was stagnant; government was practically suspended, and the courts closed. For instance, Patrick Henry's fee-books show that in 1765 he charged 555 fees, and in 1774 none. The convention appointed the same delegates as in the previous year to represent Virginia in the Continental Congress, adding the name of Thomas Jefferson as an alternate in case Peyton Randolph should be unable to attend. It took steps for promoting woolen, cotton and linen manufactures, salt works and the making of gunpowder, steel and paper. The delegates concluded that their labors must be submitted to the approval of the people; that future conventions would be necessary; and that delegates thereto should be elected for one year. Thus a body, which was hastily summoned to give advice on a knotty question proposed by Boston, had largely assumed the direction of affairs in Virginia. It is easy at this time to observe the parts of the patriot government taking shape; first, a committee of correspondence with advisory powers in all questions touching the patriot cause; secondly, similar committees in the counties calling forth military companies; thirdly, a representative body, at present only consultative, but soon to become legislative; fourthly, a militia made up of men trained to the use of the musket and pulsing with patriotism. The Virginians in fashioning these democratic institutions showed how well they had profited by their long political experience. Needless to say, Lord Dunmore growled his dissent at such patriot proceedings by a public proclamation, which went unheeded. 
While the sturdy New Englanders were burying the farmers who met death on the Lexington Green, an act of Lord Dunmore in removing some ammunition from the "Powderhorn" to a British man-of-war seemed, for the moment, to threaten bloodshed in Virginia. Patrick Henry headed a movement of troops against Williamsburg. Dunmore became alarmed, fortified the palace, summoned marines from the Fowey, sent his wife and children aboard this ship lying at York, and drew a full breath only after he had learned that Henry had turned back at Doncastle's ordinary upon receiving payment for the powder. The governor's threat that if injury were offered to him or his he would free the slaves and burn the town, greatly embittered the feeling of the people against him. The Last House of Burgesses. After repeated prorogations of the General Assembly Dunmore summoned it to meet on the first day of June, 1775, so as to receive Lord North's "olive branch." In order to preside over the House of Burgesses, Peyton Randolph left the session of the second Congress in Philadelphia at a time when the news of the battle of Lexington, the capture of Ticonderoga, the investment of Boston by a provincial army, and the arrival of large bodies of fresh British troops at New York and Boston had swept the public mind toward the precipice of revolution. Such was the enthusiasm in his home town for the Speaker, who had been twice honored with the presidency of the general Congress, that companies of horse and foot met him on his approach to Williamsburg and escorted him into the city. When the Burgesses assembled on that June morning, it was noted by Randolph that many of them were habited in hunting shirts and armed with rifles. This assembly marked the last rehearsal of royalty in Virginia. 
Following the report of a committee that Dunmore had declared his purpose to raise, free and arm the slaves, it was enacted that the import of slaves from the West Indies be checked by a specific duty of five pounds on the head, to which measure the governor refused his assent. "The last exercise of the veto power by the King's representative in Virginia was for the protection of the slave trade." Consideration of Lord North's conciliatory proposition was interrupted by an untoward incident. The people were uneasy lest the governor should remove the remaining guns from the "Powderhorn." When, through curiosity, a Burgess and two other men sought an entrance into the arsenal, three guns went off automatically upon the opening of the door, as had been deliberately planned. The men were all wounded; excitement ran high; the governor, upon being questioned, threw the blame upon his servants, who declared to his face that it had been done by his orders. Stricken with guilt and fear, Lord Dunmore with his family fled on June 7 to the Fowey, anchored at York. From the cabin of this man-of-war he sent repeated communications to the legislature at Williamsburg, twelve miles away; and finally, as this method proved tedious, he requested the House to meet him on shipboard - an invitation which the planters were in no way minded to accept. The Fowey sailing up the Thames with the Virginia House of Burgesses aboard would have been a sight to thrill the heart of King George. Jefferson was called upon to draft the answer to Lord North's proposal, which purposed to divide the colonies by getting them to treat separately on conciliatory terms. The import of the reply to the King is sufficiently indicated by this sentence: "We consider ourselves as bound in honor as well as interest to share one general fate with our sister colonies, and should hold ourselves base deserters of that union to which we have acceded were we to agree on any measures distinct and apart from them."
Along with Jefferson's "Summary of Rights," which was intended to be presented to the first Virginia convention, this paper marks another step in the genesis of the Declaration of Independence. "In my life," said Shelburne, "I was never more pleased with a State paper than with the Assembly of Virginia's discussion of Lord North's proposition. It is masterly." With Virginia's reply in his pocket, Jefferson hastened to Philadelphia, where he reported its passage to Congress. He was likewise requested by that body to write its report on Lord North's terms, and did so with no less cogency. When the House of Burgesses adjourned on June 24, 1775, it completed a legislative career that extended over 156 years. As the members strolled out of the House, Richard Henry Lee, standing with two colleagues on the portico of the capitol, inscribed with his pencil on a pillar these lines: "When shall we three meet again In thunder, lightning and in rain? When the hurlyburly's done, When the battle's lost and won." True, there were three other attempts to hold sessions, but in each case a quorum did not appear. The last entry on the manuscript Journal stands thus: "Monday, the sixth of May, 16 George III., 1776. Several members met, but did neither proceed to business nor adjourn as a House of Burgesses. Finis." The Third Convention, 1775. While the House of Burgesses must decrease, the convention must increase. The third session of this Revolutionary body was held at "Richmond town" from July 17 to Aug. 26, 1775. Fifteen days before the planters came together on the James, George Washington had taken command, under the old elm at Cambridge, of the American armies. Both the circumstances of the colony and the movement of thought strengthened the hands of the delegates and forced the convention to assume responsibilities undreamt of by those who suggested in the previous year calling it for the first time.
Lord Dunmore had not only abandoned the capital, but he was also threatening to make war on the colony. The royal government was dissolved. The convention tried to meet this new turn in affairs. No longer content with resolutions and recommendations, it followed legislative methods and gave to its acts the forms of law, terming them ordinances. The chief measures adopted by this convention were to organize the forces for the defense of the colony, to create an executive to act during the recess of the convention, to raise adequate revenue for the provisional government, to establish executive county committees, to regulate the election of delegates to the convention, and to elect new representatives to Congress. As the bare enumeration shows, these were tasks of no little difficulty, and we find the members laboring at hours early and late to solve them. The chaplain was "desired to read prayers every morning at eight o'clock." Patrick Henry was made colonel of the first regiment, and as such acted as commander-in-chief of the Virginia forces. Fortunately there is extant the little slip of paper on which the tellers made their report to the convention as to the balloting for representatives in Congress: "Peyton Randolph 89, Richard Henry Lee 88, Thomas Jefferson 85, Benjamin Harrison 83, Thomas Nelson 66, Richard Bland 61, George Wythe 58, Carter Braxton 24, George Washington 22, George Mason 19, etc." It will be seen that twenty-two members insisted upon honoring Washington again with a seat in Congress in spite of his military commission. The formation of a temporary executive was a subject of much discussion. There existed the committee of correspondence, originally a kind of bureau of agitation. Now, however, agitation had done its perfect work; war was at hand. It seemed expedient, therefore, to create a Committee of Safety, consisting of eleven members, of whom Edmund Pendleton was made president.
This committee piloted the colony during the trying time from Aug. 17, 1775, until July 5, 1776, when Patrick Henry took the oath as governor of the commonwealth of Virginia. During this era of political excitement, religious dissent increased rapidly. The spirit of patriotism which animated all classes of citizens finds expression in a petition from the Baptists to the convention, asking for four of their brethren to be granted liberty of preaching, at convenient times, to the troops of that religious persuasion, without molestation or abuse. The petition was granted "for the ease of such scrupulous consciences." War with Dunmore. Toward the close of the summer of 1775 the fugitive governor had gathered a flotilla in the Chesapeake, troubling merchant ships and threatening a descent on the coast towns. In October one of his landing parties seized, at Norfolk, and carried on shipboard the press of a newspaper imbued with the patriotic sentiments of the day. On this press was printed Dunmore's proclamation of November 7, in which he proclaimed martial law, declared traitors all persons capable of bearing arms who did not resort to his standard, and offered freedom to "all indentured servants, negroes, or others appertaining to rebels." A messenger was even despatched to the western border to incite the savages against the Virginians. The war in Virginia really began at Hampton, at the very place where occurred the first encounters of the early settlers with the Indians. In a severe storm in September, 1775, one of Dunmore's ships was beached near Hampton and subsequently captured and fired by the inhabitants of the little seaside town. To avenge this act the governor blockaded and attempted to burn the village. The British assault made on October 26 was bravely repulsed by the citizens, reinforced by the Culpeper riflemen. On December 8 the battle of Great Bridge took place, where the regulars were again routed, losing over sixty killed and wounded. On Jan.
1, 1776, after a severe cannonade from sixty guns, Dunmore fired Norfolk, the chief town of the colony with a population of 6,000. Fourth Convention, 1775. The fourth Virginia convention was sitting almost within hearing distance of the cannon at the battle of Great Bridge. It had met at Richmond on Dec. 1, 1775, but, after organizing, adjourned to meet at Williamsburg. The chief matters that engaged the attention of this convention were the increase of the troops, which were straightway merged into the continental army; the establishment of an admiralty court; the appointment of a commission of five men in each county to try the causes of those deemed enemies of America; the authorization of county courts to elect severally a sheriff for one year; and instructions to the Virginia delegates in Congress to urge the opening of the ports of the colonies to the commerce of the world, excepting Britain and the British West Indies. After the harrowing assaults of Lord Dunmore, it is not surprising that the demand for independence of British rule echoed in every quarter of Virginia. We find, accordingly, during that spring, the several county committees instructing their delegates "to cause a total and final separation from Great Britain to take place as soon as possible." Meantime the prime question in the minds of the Virginian statesmen was how to bridge the chasm from royalty to republicanism, from colony to commonwealth. There was a brisk correspondence between the leading men in the province with a view to the declaration of independence and the taking up of government. The Fifth Convention, 1776 - Adoption of a Constitution. The fifth convention met at Williamsburg on May 6, 1776, sixty counties and corporations being represented by 131 delegates. Edmund Pendleton was elected president.
The three constructive measures which it formulated were: first, the instructions to the Virginia delegates in Congress to propose independence of Great Britain; second, the Bill of Rights; and third, the constitution of the new Commonwealth. After the passage, on May 15, of the resolution instructing their delegates in Congress to propose independence, the British flag on the capitol was at once struck and the colonial colors hoisted in its stead. At night the town was illuminated in celebration of that epochal event. On June 12 the convention adopted the Bill of Rights. This summary of liberties, at once so comprehensive and concise, we owe to George Mason, whose original draft was afterwards presented to the state. The only serious amendment made to this celebrated paper was that urged by the youthful James Madison, substituting religious liberty for toleration. The air was rife with political theories. Seven different plans of government came before the convention. From these, guided by political sagacity of rare order, they wrought out a republican constitution which, though conceived in the midst of war and framed in a brief space of time, met admirably the needs of the people and presided for more than half a century over the rapidly expanding state. The constitution was adopted finally on June 29, 1776 - the natal day of the Commonwealth of Virginia. S. C. MITCHELL, President of the University.
http://www.electricscotland.com/history/america/south/south5.htm
XmListDeleteItemsPos man page

XmListDeleteItemsPos — A List function that deletes items from the list starting at the given position

Synopsis

#include <Xm/List.h>

void XmListDeleteItemsPos(
        Widget widget,
        int item_count,
        int position);

Description

XmListDeleteItemsPos deletes the specified number of items from the list starting at the specified position.

- widget
  Specifies the ID of the List from whose list an item is deleted.
- item_count
  Specifies the number of items to be deleted. This number must be nonnegative.
- position
  Specifies the position in the list of the first item to be deleted. A value of 1 indicates that the first deleted item is the first item in the list; a value of 2 indicates that it is the second item; and so on.

For a complete definition of List and its associated resources, see XmList(3).

Related

XmList(3).

Referenced By

XmList(3).
https://www.mankier.com/3/XmListDeleteItemsPos
What is Stack in C#?

The stack is a special case collection which represents a last in first out (LIFO) concept. To first understand LIFO, let's take an example. Imagine a stack of books with each book kept on top of the other. The concept of last in first out in the case of books means that only the topmost book can be removed from the stack of books. It is not possible to remove a book from the middle, because that would disturb the setting of the stack. Hence in C#, the stack works in the same way. Elements are added to the stack, one on top of the other. The process of adding an element to the stack is called a push operation. To remove an element from a stack, you remove the topmost element of the stack. This operation is known as pop. Let's look at the operations available for the Stack collection in more detail.

Declaration of the stack

A stack is created with the help of the Stack data type. The keyword "new" is used to create an object of a Stack. The object is then assigned to the variable st.

Stack st = new Stack();

Adding elements to the stack

The Push method is used to add an element onto the stack. The general syntax of the statement is given below.

Stack.Push(element)

Removing elements from the stack

The Pop method is used to remove an element from the stack. The Pop operation will return the topmost element of the stack. The general syntax of the statement is given below.

Stack.Pop()

Count

This property is used to get the number of items in the Stack. Below is the general syntax of this statement.

Stack.Count

Contains

This method is used to see if an element is present in the Stack. Below is the general syntax of this statement. The statement will return true if the element exists, else it will return the value false.

Stack.Contains(element)

Now let's see this working at a code level. All of the below-mentioned code will be written to our Console application. The code will be written to our Program.cs file.
In the below program, we will write the code to see how we can use the above-mentioned methods.

Example 1

In this example, we will see
- How a stack gets created.
- How to display the elements of the stack, and use the Count and Contains methods.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Text;

namespace DemoApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            Stack st = new Stack();
            st.Push(1);
            st.Push(2);
            st.Push(3);

            foreach (Object obj in st)
            {
                Console.WriteLine(obj);
            }
            Console.WriteLine();
            Console.WriteLine("Does the stack contain the element 3 " + st.Contains(3));
            Console.WriteLine("The number of elements in the stack " + st.Count);
            Console.ReadKey();
        }
    }
}

Code Explanation:-

- The first step is used to declare the Stack. Here we are declaring "st" as a variable to hold the elements of our stack.
- Next, we add 3 elements to our stack. Each element is added via the Push method.
- Now since the stack elements cannot be accessed via an index position like the array list, we need to use a different approach to display the elements of the stack. The Object (obj) is a temporary variable, which is declared for holding each element of the stack. We then use the foreach statement to go through each element of the stack. For each stack element, the value is assigned to the obj variable. We then use the Console.WriteLine command to display the value to the console.
- We are using the Count property (st.Count) to get the number of items in the stack. This property will return a number. We then display this value to the console.
- We then use the Contains method to see if the value of 3 is present in our stack. This will return either a true or false value. We then display this return value to the console.

If the above code is entered properly and the program is run, the following output will be displayed.
Output:

From the output, we can see that the elements of the stack are displayed. Also, the value of True is displayed to say that the value of 3 is defined on the stack.

Note: You may have noticed that the last element pushed onto the stack is displayed first. This is the topmost element of the stack. The count of stack elements is also shown in the output.

Example 2

Now let's look at the "remove" functionality. We will see the code required to remove the topmost element from the stack.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Text;

namespace DemoApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            Stack st = new Stack();
            st.Push(1);
            st.Push(2);
            st.Push(3);
            st.Pop();

            foreach (Object obj in st)
            {
                Console.WriteLine(obj);
            }
            Console.ReadKey();
        }
    }
}

Code Explanation:-

- Here we just issue the Pop method, which is used to remove an element from the stack.

If the above code is entered properly and the program is run, the following output will be displayed.

Output:

We can see that the element 3 was removed from the stack.

Summary

- A Stack is based on the last in first out concept. The operation of adding an element to the stack is called the push operation. The operation of removing an element from the stack is called the pop operation.
https://www.viastudy.com/2018/10/c-stack-with-example.html
Let's take a look at Java's magic sauce — or should I say Java’s Magic Source? If we go all the way back to JDK 1.0, there were 211 classes in the public API. Out of interest, I created a graph showing the growth in public classes over time. To extract my data I used the API documentation and copied the list of all classes. I stopped at JDK 8; both JDK 9 and 10 have the same number of public classes as JDK 8 (there are a few changes in available methods but not classes). Next, I took the rt.jar file for each JDK and extracted a list of all the classes it contained. Since rt.jar was only introduced in JDK 1.2, I used the classes.zip file from JDK 1.0 and JDK 1.1. After removing all inner classes to simplify things a bit, I took the difference between the total number of classes and those in the public API. The graph below shows this data. As you can see, back in JDK 1.0, there were only thirteen internal classes. This peaked in JDK 8 at a whopping 8709. I did start trying to get numbers for JDK 9 and 10, but this proved too complicated due to the elimination of the rt.jar and tools.jar files, being replaced with 97 modules. The interesting thing about this is that from JDK 1.1 onwards there was a lot of functionality buried in the JDK. Right from the very start, developers have been warned by Sun and then Oracle that the internal APIs are not intended for use in application development. Along with that, they are not documented and may be removed from the JDK without notice. Despite these warnings, there are plenty of developers who have used these classes. Oracle conducted an analysis of a large amount of their own code written in Java and found that the top three internal classes used were: - sun.misc.BASE64Encoder - sun.misc.Unsafe - sun.misc.BASE64Decoder Apparently, their code does more encoding than decoding. The use of these two classes was driven, primarily, by the lack of a public BASE64 API until JDK 8. 
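JDK 8 finally closed that gap with a public API, java.util.Base64, which replaces both of those internal classes. A minimal sketch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        // Encode and decode with the public JDK 8 API instead of
        // sun.misc.BASE64Encoder / sun.misc.BASE64Decoder.
        String encoded = Base64.getEncoder()
                .encodeToString("Hello".getBytes(StandardCharsets.UTF_8));
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);                                  // SGVsbG8=
        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // Hello
    }
}
```

Code that still imports the sun.misc encoder can usually be migrated to this API with a one-line change.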
This brings us to the most notorious internal API in the JDK: sun.misc.Unsafe. The clue is, most definitely, in the name: this is a class that will let you do things well outside the defined boundaries of regular Java code. I was surprised to find that the Unsafe class was only introduced in JDK 1.4. According to a presentation given by Mark Reinhold at the JVM Language Summit, the introduction of Unsafe was part of a major rewrite of reflection and serialization, as well as being required to support direct buffers and NIO.

Is It Safe?

Despite being undocumented, the source code for Unsafe is readily available thanks to the OpenJDK project. You can run Javadoc on Unsafe.java and get at least a minimal set of documentation for all 113 (in JDK 8) available methods. The theme of this post is not to discuss the details of Unsafe functionality. It suffices to say that obtaining an instance of Unsafe requires more work than for a typical class. The constructor is private and, although there is a static method, getUnsafe(), it can only be called successfully by code loaded via the bootclasspath. The most common way to access Unsafe is therefore through reflection, which lets you reach the internal instance. Once you have access to Unsafe, the gloves are off. You had better know what you are doing and be careful. As the name of the class suggests, many of the things you can do are inherently unsafe. For example, you can allocate and free native memory (analogous to malloc and free in C). You can also manipulate memory using addresses, as you would using pointers in C. It's not just memory that you get access to: you can do things like allocating a new instance of an object without running its constructor. Just think about what might happen if you use that object.
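The reflective route described above typically looks like the following. This is a hedged sketch: it relies on the internal field name theUnsafe, triggers illegal-access warnings on recent JDKs, and may stop working without notice - exactly the kind of fragility the warnings about internal APIs are about.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeAccess {
    public static void main(String[] args) throws Exception {
        // Grab the private static singleton via reflection.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // A small taste of what Unsafe allows: allocate 8 bytes of native
        // memory, write and read a long, then free it (malloc/free style).
        long address = unsafe.allocateMemory(8);
        unsafe.putLong(address, 42L);
        System.out.println(unsafe.getLong(address)); // 42
        unsafe.freeMemory(address);
    }
}
```

Forget the freeMemory call and you have a native memory leak the garbage collector will never reclaim - a good illustration of why the class carries the name it does.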
There are many ways you can use these methods that will return a result that is meaningless (if you're lucky) or cause the JVM to stop abruptly (if you're not). The Java language was designed to be safe. This is the whole rationale behind eliminating the use of explicit pointers and manual memory management, as well as numerous other basic features. Java was never intended to be used as a systems programming language, which is where C initially started life (thanks to good ol' UNIX). To write systems code, you need these types of low-level, potentially dangerous interfaces to let you implement what's required. Java was always intended to be used for developing application code that would not need this kind of access directly. The concept is for developers to trust the developers of the JVM and the core libraries to guarantee the safety of the application code. The Unsafe class was introduced to allow the developers of the public classes to use it to deliver better performance or to make use of low-level features (like memory fencing). This would not be possible with standard Java code. The issue, which became quickly apparent when it was proposed to encapsulate all internal APIs in JDK 9, was that many people had used these classes. There was a fascinating study that analyzed 74 GB of compiled Java code from Maven Central, looking only at the use of sun.misc.Unsafe. The results showed that 25 percent of the code relied in some way on this internal class. This heavy reliance, especially from popular open-source libraries and frameworks, is a large part of the reason that JEP 260 was included in JDK 9. This provides a module, jdk.unsupported, which exports the internal packages that are deemed critical to the JDK. (Interestingly, the module API documentation still does not provide any information on this.) This module exports the com.sun.nio.file package and exports and opens (for reflective access) the sun.misc and sun.reflect packages.
Oracle, to its credit, has performed a cleanup of the Unsafe class in JDK 9. I particularly liked the comment about the link between the over-use of extern and the increased mortality rate of kittens. The reason Unsafe has been heavily used is that library developers need the same enhanced level of performance and extended capabilities that the core Java API developers use. Without it, many enterprise applications would run a lot slower than they currently do. This brings me to the crux of my post. Java is a hugely successful application development platform, and the reasons for that are endless. The very safety that the language provides is essential to why developers find it so appealing. Not having to worry about incorrect pointer manipulation, forgetting to free memory, and memory leaks (although you can still have these in Java) makes reliable, fast code much easier to write. However, having the secret sauce has also been vitally important to the success of Java, as it has allowed powerful, high-performing libraries and frameworks to be developed, providing developers with a wealth of functionality to build on. Already, we've seen parts of Unsafe being implemented in a way that makes them accessible through a public API. Variable handles, introduced in JDK 9, are an excellent example of this. Developers can use these to fence memory access operations and perform atomic operations directly on variables without the need to create instances of classes in the java.util.concurrent.atomic package. I'll leave you with a question to ponder: would Java have been as successful as it has been if sun.misc.Unsafe had not been hidden in the library code but still made accessible?
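The variable handles mentioned above look roughly like this - a minimal sketch of the JDK 9+ java.lang.invoke API, showing the kind of atomic update and memory fence that previously required either Unsafe or the java.util.concurrent.atomic classes:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class VarHandleDemo {
    public static void main(String[] args) {
        // A VarHandle over int[] elements lets us perform atomic operations
        // directly on a plain array - no AtomicIntegerArray needed.
        VarHandle handle = MethodHandles.arrayElementVarHandle(int[].class);
        int[] counters = new int[4];

        int previous = (int) handle.getAndAdd(counters, 0, 5); // atomic += 5
        System.out.println(previous);     // 0
        System.out.println(counters[0]);  // 5

        // A full memory fence, as Unsafe.fullFence() used to provide.
        VarHandle.fullFence();
    }
}
```

The same lookup machinery also covers instance and static fields, making it a public, supported replacement for a large slice of what libraries reached into Unsafe for.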
https://dzone.com/articles/javas-magic-sauce
How to declare an array in Python In this article, we will learn about arrays and how to declare arrays in Python. We will use some custom codes as well to understand the arrays. Let's first have a quick look over what is an array in Python. Note: Array does not exist as a built-in data structure in Python. Python uses list type instead of the array. What is Array An array is like storage containers where multiple items of the same type are stored. Like other data structures, arrays can also be accessed using indexes. Elements stored in an array have a location as a numerical index. Each index starts from 0 and ends with the length of the array-1. Arrays use contiguous memory locations to store data. Arrays look similar to the Python Lists but have different properties and different declarations. Array Example array1 = [0, 0, 0, 1, 2] array2 = ["cap", "bat", "rat"] Let us look at the different ways to declare arrays in Python. We will use simple approaches, an array module supported by Python, NumPy module, and also a direct method to initialize an array. Declare an Array using array module Array does not exist as a built-in data structure in Python. However, Python provides an array module to declare a set of data as an array. Syntax arrayName = array(typecode, [Initializers]) Parameters typecode - the codes that are used to define the type of value the array will hold. Initializers - a set of similar type of data Example: Creating Array using array Module The below example imports the Python array module. It declares an array of a set of signed integers and prints the elements. from array import * array1 = array('i', [10,20,30,40,50]) for x in array1: print(x) 10 20 30 40 50 Example: Creating array like list in Python Here, we declare an empty array. Python for loop and range() function is used to initialize an array with a default value. You might get confused between lists and arrays but lists are dynamic arrays. 
Also, arrays store a similar type of data, while lists can store different types of data. The below example has an empty array that is then initialized with 5 elements carrying a default value (0).

arr = []
arr = [0 for i in range(5)]
print(arr)

[0, 0, 0, 0, 0]

Example: Python NumPy module to create an array

Python has a module, numpy, that can be used to declare an array. It creates arrays and manipulates the data in them efficiently. The numpy.empty() function is used to create the array; with dtype=object, it is filled with None.

import numpy as np

arr = np.empty(10, dtype=object)
print(arr)

[None None None None None None None None None None]

Example: Create an Array using an initializer

This method creates an array of the specified size, filled with the default value given in the initializer. See the example below.

arr_num = [0] * 2
print(arr_num)
arr_str = ['P'] * 5
print(arr_str)

[0, 0]
['P', 'P', 'P', 'P', 'P']

Conclusion

In this article, we learned to declare an array in Python using various methods such as the numpy module and the array module. We also created empty arrays using a for loop and range() and discussed a simple approach as well. We also read about the difference between lists and arrays.
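One property of the array module worth demonstrating beyond the examples above: the typecode is enforced, so an array rejects values of the wrong type where a plain list would silently accept them. A small sketch:

```python
from array import array

# 'i' restricts the array to signed integers.
nums = array('i', [1, 2, 3])
nums.append(4)           # fine: 4 is an int

try:
    nums.append(2.5)     # a float violates the 'i' typecode
except TypeError as err:
    print("rejected:", err)

print(list(nums))        # [1, 2, 3, 4] - the bad value was never added
```

This is the practical difference between the two: a list is a container of anything, while an array.array is a typed, compact buffer.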
https://www.studytonight.com/python-howtos/how-to-declare-an-array-in-python
CC-MAIN-2022-21
refinedweb
550
63.59
In the last lesson, we mapped out a plan for the structure of our online store application: The application will contain multiple routes. Depending on the route to which the user navigates, the URL in the browser and the child component displayed in the root component (between the <router-outlet> tags) will change.

In this lesson, we'll walk through the process of creating a router and implementing it in an Angular application. We'll create a route to display the WelcomeComponent we described in the last lesson.

In most MVCs, including Angular, we can create multiple-page applications by using a router. In terms of the MVC architecture, a router is responsible for navigating between different views in an application. As the user clicks navigational links within our app or uses the 'back' and 'forward' browser buttons in our application, a new URL is produced. This invokes the router, which matches the path of the URL with the defined route that matches that path. It then loads the components and other content that route requires.

We'll need to have a special tag called a base tag in our index.html file. Angular CLI has already added this base tag for us. Look below the <title> tags to find the line:

<base href="/">

The base tag enables an HTML 5 feature called pushState routing, which Angular's own routing depends on. This allows us to make our in-app URL paths look the way we choose (ie: localhost:4200/about). You're not required to know the details of this for our course. If you'd like to explore this concept further, check out the Mozilla Developer Network entry on pushState() and the Angular Documentation on Base Tags.

Next we need to create the router itself. It will reside in a file known as the routes or router file. We'll create this now. In the app directory, create a file named app.routing.ts. This file name is special. All route files should be named app.routing.ts.
(Note: At the time of this writing, the Angular CLI tool has disabled route creation directly from the command line. Check the Generating a Route section of their documentation in the future for updates regarding when/whether this feature will be available again.)

Within our router file, we'll import two important pieces of the Angular framework:

import { ModuleWithProviders } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

The ModuleWithProviders package from the Angular core helps provide our router to the rest of the application. We'll learn about "providers" in detail in an upcoming lesson when we discuss something called dependency injection. Routes and RouterModule contain code that will help us render specific components when specific URLs are provided to the router. They're not part of the Angular core by default, so we must import them here.

appRoutes

Next, we'll define an array called appRoutes. It will contain the master list of all available routes in our application:

import { ModuleWithProviders } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const appRoutes: Routes = [
];

Let's walk through what's going on here: Notice appRoutes has been declared as the Routes data type. This file has access to Routes because we've imported { Routes, RouterModule }.

Also note that the keyword const precedes our new appRoutes array. Including const before declaring a property or variable makes it a constant. A constant is a value that other code in our application cannot change. It's a read-only reference that cannot be redefined. Check out the Mozilla Developer Network's const entry for more details. We don't want to risk any other portion of our application altering our appRoutes array, so we declare it a constant.

Additionally, keep in mind that all routes must be included in the appRoutes list. That means we'll have to manually update appRoutes to include each new route we create.
Next, our file needs to export our routes to the rest of the application. We do this by passing our appRoutes variable into the forRoot() method of the RouterModule we imported, like this:

import { ModuleWithProviders } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const appRoutes: Routes = [
];

export const routing: ModuleWithProviders = RouterModule.forRoot(appRoutes);

The return value of the forRoot() method is passed into the new variable called routing. Our routing object is a ModuleWithProviders data type. This is simply a special type of module that includes something called providers to help make information (like our routes) available to the rest of the application. Again, we'll learn about providers in an upcoming lesson on dependency injection.

Notice that routing is being exported with the export keyword and that it's a constant. This makes our appRoutes list of routes available to our root module in app.module.ts. We'll also update our root module before the end of this lesson.

At the end of the lesson, our application structure will look like this:

Let's create a WelcomeComponent and a route to it. First, generate the new component:

ng g component welcome

Don't worry about adding content to this component yet.

Next, let's add the route. Each route in an Angular application is actually a special type of object with code that looks like this:

{ path: '', component: WelcomeComponent },

Note: The code above is only for demonstration purposes. Don't worry about adding this to your application yet!

Every route has path and component properties. path refers to the URL segment that should correspond with this route. For instance, if we wanted to create a route users could navigate to by visiting localhost:4200/super-crazy-route, the path property in our route object would read super-crazy-route. If a route has a blank string as its path property, as seen above, that means it's the index path located at localhost:4200.
component refers to the primary component for a route. That is, the component that should be rendered when the user navigates to this route. The route object above will ensure the WelcomeComponent is displayed when the user visits the application's root path URL at localhost:4200/.

Let's add this first Route object into our app.routing.ts router file. First, we'll need to import any components our new route will need:

...
import { WelcomeComponent } from './welcome/welcome.component';
...

We'll also add the new route object to our appRoutes array:

import { ModuleWithProviders } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { WelcomeComponent } from './welcome/welcome.component';

const appRoutes: Routes = [
  { path: '', component: WelcomeComponent }
];

export const routing: ModuleWithProviders = RouterModule.forRoot(appRoutes);

Next, we need to add our routes file and any components it loads to our root module. (Angular CLI should have imported our WelcomeComponent when we generated it, but always make sure to double-check.) First, we import them:

...
import { WelcomeComponent } from './welcome/welcome.component';
import { routing } from './app.routing';
...

routing refers to the constant being exported at the bottom of our routes file:

...
export const routing: ModuleWithProviders = RouterModule.forRoot(appRoutes);

We'll also need to add the routing constant to our root module's imports array:

...
imports: [
  BrowserModule,
  FormsModule,
  HttpModule,
  routing
],
...

The imports array is used to import other modules into the current module. Note that this differs from the import statements at the top of our files (the ones that look like import { WelcomeComponent } from './welcome/welcome.component';). Import statements simply import other modules' code into a single file.

Finally, we need to update our root AppComponent. We must designate where the router should load content for our different routes.
For instance, we told our router to load the WelcomeComponent when the user visits the root path at localhost:4200/. We need to tell Angular where the WelcomeComponent should be rendered. We do this with a special tag called <router-outlet>. This tag denotes exactly where a route's components will be rendered. Let's place <router-outlet></router-outlet> tags in our root component's template now.

<h1>{{title}}</h1>
<router-outlet></router-outlet>

Let's go ahead and add a div and page title to our root component:

<div class="container">
  <h1>{{title}}</h1>
  <router-outlet></router-outlet>
</div>

Let's change the title property in our AppComponent class:

...
export class AppComponent {
  title = 'Epicodus Tunes';
}

And the template of our WelcomeComponent:

<h2>Welcome to our store!</h2>

Launch the application with the $ ng serve command and the WelcomeComponent will be displayed at the default root path.

Now that we have a router and our very first route, we'll discuss how to navigate between routes and manage multiple routes in an Angular application in the next lesson. We'll also create our remaining About, Marketplace and Album Detail pages.
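The matching step described earlier (the router comparing the URL's path segment against each route's path property) can be sketched in plain TypeScript with no Angular involved. The names below are illustrative only and are not Angular's internals:

```typescript
// A toy route table and matcher, mirroring the appRoutes idea.
interface Route {
  path: string;
  component: string;
}

const appRoutes: Route[] = [
  { path: "", component: "WelcomeComponent" },
  { path: "about", component: "AboutComponent" },
];

// Strip the leading slash and return the first matching route's component.
function resolve(url: string): string | undefined {
  const segment = url.replace(/^\//, "");
  const match = appRoutes.find(r => r.path === segment);
  return match ? match.component : undefined;
}

console.log(resolve("/"));       // WelcomeComponent
console.log(resolve("/about"));  // AboutComponent
```

The real router does much more (wildcards, parameters, child routes), but the lookup-by-path idea is the same.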
https://www.learnhowtoprogram.com/javascript/angular-extended/implementing-a-router
CC-MAIN-2018-43
refinedweb
1,455
55.84
At 06:13 10/05/2002, [EMAIL PROTECTED] wrote:

>'else' is tricky within the block oriented structure of anything XML-ish,
>because of the concept of 'well-formedness'. The 'if' statement would have
>to be singly wrapped, and the else block wrapped separately, looking at
>least somewhat awkward any way you go about it. The best I can come up with
>in my mind is this, in order to have the 'else' pick up on the condition
>expressed in its surrounding container. But, yuck:
>
><if ...>
>  true stuff
>  <else>
>    false stuff
>  </else>
></if>
>
>A good page template way is something like this:
>
><tal:if > truth </tal:if>
><tal:else > false </tal:else>
>
>The 'not' TALES namespace is valuable. The downside is that you evaluate
>the expression twice. A good way to work within this is something that I
>did earlier today, outside of this conversation, where I evaluate an
>expression earlier and assign it to a variable:
>
><div id="edit-area" tal: >
>  <h3>Edit Menu Items</h3>
>  <form action="Delete" method="post" name="actForm" tal: >
>    ... (form and table elements, and a loop over editItems
>    contained in here if there were results) ...
>  </form>
>  <div class="emph" tal: >
>    No menu items available
>  </div>
></div>
>
>This is something I did a lot in DTML too, setting a search result to either
>a global variable, or inside of a large <dtml-let> namespace

It is maybe not clear that the above is really usable and allowed. I do not know how and where to stress that the tal: marked tags are underdocumented, what a pity. I have already tried :

Sorry that I am not able to explain it better...

--
_______________________________________________
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
(Related lists - )
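The tal: attribute values in the quoted markup were stripped somewhere in the archiving, which makes the example hard to follow. The pattern being described, evaluating an expression once into a variable and then branching on it and its negation with the 'not' prefix, is conventionally written roughly like this (a reconstruction under Zope's usual TAL/TALES syntax, not the original message's markup; the getEditItems name is hypothetical):

```html
<div id="edit-area" tal:define="editItems here/getEditItems">
  <h3>Edit Menu Items</h3>
  <form action="Delete" method="post" name="actForm"
        tal:condition="editItems">
    <!-- form and table elements, looping over editItems -->
  </form>
  <div class="emph" tal:condition="not:editItems">
    No menu items available
  </div>
</div>
```

The point is only that tal:define evaluates the expression a single time, and the two tal:condition attributes test it and its negation, which is the if/else workaround the thread discusses.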
https://www.mail-archive.com/zope-dev@zope.org/msg10423.html
CC-MAIN-2016-50
refinedweb
288
65.66
A few days ago, it was announced on the Wolfram Blog that a 13-year-old had made a record calculating 458 million terms for the continued fraction of pi. In the spirit of that, I thought I would show how to solve a question that sometimes gets asked at interviews:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

Using Factor, we can calculate the nth approximation of pi using vector arithmetic:

: approximate-pi ( n -- approx )
    [1,b] 2 v*n 1 v-n 1 swap n/v
    [ odd? [ neg ] when ] map-index sum 4 * ;

This isn't ideal if we want to try an increasing number of terms (looking for a particular accuracy), since a lot of the work would be redone unnecessarily. Instead, we can write a word that adds successive terms until the difference between the previous approximation and the current approximation is less than our requested accuracy.

: next-term ( approx i -- approx' )
    [ 2 * 1 + ] [ odd? [ neg ] when ] bi 4.0 swap / + ; inline

:: find-pi-to ( accuracy -- n approx )
    1 4.0
    [ dup pick next-term [ - ] keep swap abs accuracy >= [ 1 + ] 2dip ] loop ;

To show its performance, we can time it:

( scratchpad ) [ 0.00001 find-pi-to ] time .
Running time: 0.026030341 seconds
3.141597653564762

An equivalent function in Python might look like this:

def find_pi_to(accuracy):
    i = 1
    approx = 4.0
    while 1:
        term = (2 * i) + 1
        if i % 2 == 1:
            term = -term
        new = approx + 4.0 / term
        if abs(new - approx) < accuracy:
            approx = new
            break
        i += 1
        approx = new
    return approx

But, if we time this version (not counting startup or compile time), it takes 0.134 seconds. Doing the math shows that Factor is 5 times faster than Python in this case. Not bad.
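For comparison outside of Factor, the fixed-term approach of approximate-pi also translates directly into Python (my sketch, not from the original post):

```python
def approximate_pi(n):
    # Sum the first n terms of 4 * (1 - 1/3 + 1/5 - 1/7 + ...),
    # mirroring the Factor word approximate-pi.
    total = 0.0
    for i in range(n):
        term = 1.0 / (2 * i + 1)
        if i % 2 == 1:
            term = -term
        total += term
    return 4.0 * total

print(approximate_pi(1000))  # roughly 3.1406, within about 0.001 of pi
```

The slow convergence is visible here: this alternating (Leibniz) series needs on the order of 1/accuracy terms, which is exactly why the incremental find-pi-to stops as soon as consecutive approximations agree.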
https://re-factor.blogspot.com/2011_09_01_archive.html
CC-MAIN-2017-13
refinedweb
319
62.58
MP4SetTrackTimeScale - Set the time scale of a track

SYNOPSIS

#include <mp4.h>

bool MP4SetTrackTimeScale(
    MP4FileHandle hFile,
    MP4TrackId trackId,
    u_int32_t timeScale )

RETURN VALUES

Upon success, true (1). Upon an error, false (0).

DESCRIPTION

MP4SetTrackTimeScale sets the time scale of the specified track in the mp4 file. The time scale determines the number of clock ticks per second for this track. Typically this value is set once when the track is created. However, this call can be used to modify the value if that is desired. Since track sample durations are expressed in units of the track time scale, any change to the time scale value will affect the real-time duration of the samples.

SEE ALSO

MP4(3)
http://www.makelinux.net/man/3/M/MP4SetTrackTimeScale
CC-MAIN-2014-41
refinedweb
111
74.59
# This is a package that is used to generate vectors suitable
# to draw relatively accurate arcs using TCL's canvas item
# -smooth raw option (which uses cubic Beziers).
#
# The approach taken here is to generate the Bezier vectors for
# arcs by (linearly) scaling the so-called "magic number" used
# to draw 90 degree arcs. I am not sure if this is a standard
# approach, but it seems to (visually) work reasonably well.

package require Tcl 8.5
package require Trans2D

set NS BezierArc
package provide $NS 0.0

namespace eval $NS {
    namespace export Arc

    # The derivation of this number (and slight variations to it)
    # can be found from various online sources such as
    # This is the number for a unit circle, 90 degree arcs.
    variable MAGICNUMBER 0.551915024494

    variable PI
    set PI [expr {acos(-1)}]
}

proc ${NS}::Arc {arccenter arcradius arcstartangle arcstopangle direction} {
    # Returns a vector intended for use in canvas line and polygon item
    # creation with the "-smooth raw" option. Angles are in radians.
    variable MAGICNUMBER
    variable PI

    # Ensure the arc angles are positive.
    set arcstartangle [MakePosAngle $arcstartangle]
    set arcstopangle [MakePosAngle $arcstopangle]

    # Determine arc magnitude.
    switch -exact -- [string tolower $direction] {
        cw {
            set arcmagnitude [MakePosAngle [expr {$arcstartangle - $arcstopangle}]]
        }
        ccw {
            set arcmagnitude [MakePosAngle [expr {$arcstopangle - $arcstartangle}]]
        }
        default {
            error "Invalid arc direction, $direction. Limited to cw|ccw."
        }
    }

    # First generate an arc from 0 degrees.
    set nfullquadrants [expr int(floor($arcmagnitude * 2.0 / $PI))]
    set partialquadrant [expr {$arcmagnitude - ($nfullquadrants * $PI)/2}]

    # ...starting with the partial quadrant. This should work even if
    # there is no partial. A close-to-zero angle here could likely be
    # filtered out, but that is left to the calling scope.
    set m_adj [expr {$MAGICNUMBER * $partialquadrant * 2.0 / $PI}]
    set fixedcontrolpoints [list 1.0 0.0 1.0 $m_adj]
    set adjcontrolpoints [list 1.0 -$m_adj 1.0 0.0]

    # Rotate adjcontrolpoints to match the desired angle.
    set R [Trans2D::Rotation $partialquadrant]
    set unitarc [list {*}$fixedcontrolpoints {*}[Trans2D::ApplyTransform $R $adjcontrolpoints]]

    # ...and then rotating this by 90 degrees and prepending with
    # a full-quadrant arc, deleting the intermediary "knot" point.
    set R [Trans2D::Rotation [expr {$PI / 2.0}]]
    for {set i 0} {$i < $nfullquadrants} {incr i} {
        set unitarc [Trans2D::ApplyTransform $R $unitarc]
        set unitarc [list 1.0 0.0 1.0 $MAGICNUMBER $MAGICNUMBER 1.0 {*}$unitarc]
    }

    # unitarc now contains a ccw circular vector of the appropriate angle.
    # Rotate to make this match the arcstartangle, scale to the appropriate
    # radius and translate to the arc center, in that order.
    set R [Trans2D::Rotation $arcstartangle]
    set S [Trans2D::Scale $arcradius]
    set T [Trans2D::Position $arccenter]

    # A clockwise direction requires flipping the data y coordinate since the arc
    # was generated in a counter-clockwise fashion.
    if {[string match cw [string tolower $direction]]} {
        set F [Trans2D::Reflection y]
        set Tnet [Trans2D::CompoundTransforms $T $S $R $F]
    } else {
        # CCW arc.
        set Tnet [Trans2D::CompoundTransforms $T $S $R]
    }

    set unitarc [Trans2D::ApplyTransform $Tnet $unitarc]

    return $unitarc
}

proc ${NS}::MakePosAngle { angle_rad } {
    # Ensures/converts a negative angle to positive by adding
    # 360 degrees.
    variable PI
    while {$angle_rad < 0.0} {
        set angle_rad [expr {$angle_rad + 2.0 * $PI}]
    }
    return $angle_rad
}

The following demonstration code creates a canvas that is replicated in the animated gif at the top of this page.
#!/bin/sh
# the next line restarts using tclsh \
exec wish "$0" ${1+"$@"}

package require Trans2D
package require BezierArc

set PI [expr {acos(-1)}]

set c [canvas .c -width 150 -height 150 -background black]
pack .c

# Right-handed coordinate system with origin at the canvas centre.
set cTw [Trans2D::CompoundTransforms [Trans2D::Position 75 75] [Trans2D::Reflection y]]

# Create a static circle with a slightly smaller radius than the arc
# we are about to draw - for comparison purposes.
$c create oval {*}[Trans2D::ApplyTransform $cTw {-45 -45 45 45}] -fill purple

# Create a dummy line item type that will be used to draw the arc.
# Note the required smooth "raw" option.
set arcID [$c create line 0 0 0 0 0 0 0 0 -smooth raw -fill orange -arrow last]

# ... and another one that will be used to show the control points.
set lineID [$c create line 0 0 0 0 0 0 0 0 -fill grey -dash .]

set ang_step [expr {(2.0 * $PI/75)}]
for {set ang_rad 0.0} {$ang_rad < (2.0 * $PI)} {set ang_rad [expr {$ang_rad + $ang_step}]} {
    set coords [Trans2D::ApplyTransform $cTw [BezierArc::Arc {0.0 0.0} 50.0 0.0 $ang_rad ccw]]
    $c coords $arcID {*}$coords
    $c coords $lineID {*}$coords
    update
    after 100
}
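The accuracy of the magic number the package relies on is easy to sanity-check numerically. The sketch below (Python, independent of the Tcl code) evaluates the cubic Bezier quarter-circle built from k = 0.551915024494 and measures how far its radius drifts from the unit circle:

```python
k = 0.551915024494

# Control points for a unit quarter circle from (1, 0) to (0, 1).
p0, p1, p2, p3 = (1.0, 0.0), (1.0, k), (k, 1.0), (0.0, 1.0)

def bezier(t):
    # Standard cubic Bezier evaluation.
    u = 1.0 - t
    x = u**3 * p0[0] + 3*u*u*t * p1[0] + 3*u*t*t * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3*u*u*t * p1[1] + 3*u*t*t * p2[1] + t**3 * p3[1]
    return x, y

def radial_error(t):
    x, y = bezier(t)
    return abs((x * x + y * y) ** 0.5 - 1.0)

# Worst deviation over the whole quarter arc: on the order of 1e-4,
# which is why the approximation "visually works reasonably well".
worst = max(radial_error(i / 1000.0) for i in range(1001))
print(worst)
```

Scaling that number linearly for partial quadrants, as the Arc proc does, is the heuristic part; the check above only confirms that the 90-degree building block itself is accurate to well under a pixel at typical canvas sizes.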
http://wiki.tcl.tk/41289
CC-MAIN-2017-30
refinedweb
732
57.87
Provided by: freebsd-manpages_10.1~RC1-1_all

NAME

ng_deflate — Deflate PPP compression (RFC 1979) netgraph node type

SYNOPSIS

#include <sys/types.h>
#include <netgraph/ng_deflate.h>

DESCRIPTION

The deflate node type implements Deflate PPP compression (RFC 1979). The corresponding ng_ppp(4) node hook must be switched to NG_PPP_DECOMPRESS_FULL mode to permit sending uncompressed frames.

HOOKS

This node type supports the following hooks:

comp    Connection to the ng_ppp(4) comp hook. Incoming frames are compressed (if possible) and sent back out the same hook.

decomp  Connection to the ng_ppp(4) decomp hook. Incoming frames are decompressed (if they are compressed), and sent back out the same hook.

Only one hook can be connected at the same time, specifying the node's operation.

NGM_DEFLATE_RESETREQ (resetreq)
This message contains no arguments, and is bi-directional. If an error is detected during decompression, this message is sent by the node to the originator of the NGM_DEFLATE_

NGM_DEFLATE_GET_STATS (getstats)
This control message obtains statistics for a given hook. The statistics..
https://manpages.ubuntu.com/manpages/xenial/man4/ng_deflate.4freebsd.html
CC-MAIN-2021-49
refinedweb
157
50.84
Date: Mon, 19 Feb 2001 20:27:40 -0500
From: ethan fremen mindlace@digicool.com
To: zope-announce@zope.org
Subject: February 19th Zope News

The Content Management Framework formerly known as PTK, Zope 2.3.1 beta is out, Presentation Templates will save us, Zope.org stabilized, all in this week's Zope News.

The opinions expressed in Zope News are solely the authors', and not the opinions of Digital Creations, The Zope Community at-large, or the Spanish Inquisition.

If you or your company are doing something cool with zope, "submit it to Zope News", mailto:zope-web@zope.org for possible inclusion.

And Now For Something Completely Different:

----

Zope Presentation Templates -- by Ethan Fremen

"Zope Presentation Templates", now in an alpha release, have the promise of both replacing DTML and integrating well with WYSIWYG tools, like Adobe GoLive. I'm really excited about this project: it should go a long way towards us being able to have real separation of logic, presentation, and data.

The basic idea is that attributes in a namespace will control manipulation of nodes in a native XML namespace: the most common example will be XHTML. This means that you'll no longer have a page so saturated with additional tags that you cannot identify the underlying XHTML: instead, you'll have a well formed document that your editor can play nicely with.

A crude example is:

<p tal: >Dummy Text</p>

Assuming the document had a property called "saying", with a value of "Hello World", the result would be:

<p>Hello World</p>

There are "more examples", including basic loop constructs. After you download and try out the software, please give some feedback to the "development project."

Zope Status by Brian Lloyd

Summary: Zope 2.3.1 beta 1 released

Recent News

This week's not-so-weekly update: Thursday Feb. 15 we released beta 1 of Zope 2.3.1. 2.3.1 will follow in a week or so barring any problems.
The 2.3.1 release contains quite a number of bug fixes, both issues found during the upgrade cycle to 2.3 and quite a few older issues that had been languishing in the Collector. We anticipate that 2.3.1 will be quite solid, so we will be able to turn some attention to other things and start talking about some new longer-term initiatives for Zope.

The Content Management Framework Formerly Known as the PTK -- by Ethan Fremen

The CMF has been on the fast track for the past month or so. Tres reports that we should have a 1.0 release by the end of this month. You can read all about it at the "CMF dogbowl", so today I'll just highlight one of the exciting things in the CMF: access-based filtered searching.

An article that has just been written but not approved might appear in a list of to-be-approved items for an editor. This list is generated by a catalog query; however, a regular user, searching for articles, will not see the not-yet-approved article. Perhaps once this article is published, it is in a members-only area. Then, members will see it both in a catalog-driven list of content and also in raw searches, yet it will remain invisible to the "Anonymous User".

I reviewed quite a few indexing engines a while ago. Finding one that stayed up-to-date with the changes made to the site was hard enough (all CMF content is automatically catalogued): finding one that gave a different set of results to different kinds of viewers was impossible.

Zope Web

ZopeOrgBlowsUp

For the past few weeks, Zope.org has been rather unstable. I'm pretty confident in the zope software, and so I found this rather unnerving. I put some work into tracking it down, but Shane Hathaway really found the culprit, and Ken Manheimer finally put it to rest. The new wikis had an infinite-recursion loop that was causing zope to segfault. Luckily, this has been corrected, and my pager has been silent since (yay!).
Cluster Grows

Zope.org's cluster is also getting a bit bigger: we're adding a new storage server so that the mail can have a machine to itself. Other benefits of the new server will be support for files greater than 2GB, so that zope.org will stop coming to a writing halt every time the database passes that point. We're also adding another ZEO client box to the cluster that will allow us to have a place to do things (like load in all of the mailing list messages) that are a bit ... resource intensive.

Zope.org grows up

Since the CMF is nearing release status, it's high time that Zope.org starts moving in that direction. We're going to be building a new Zope.org off of this one-dot-oh release, so stay tuned for a lot of activity next month.

-EOT-
http://www.linuxtoday.com/developer/2001022000104OSSW
CC-MAIN-2017-04
refinedweb
849
63.7
SYNOPSIS

#include <aio.h>

int aio_read(struct aiocb *aiocbp);

Link with -lrt.

DESCRIPTION

The aio_read() function requests an asynchronous "n = read(fd, buf, count)" with fd, buf, count given by aiocbp->aio_fildes, aiocbp->aio_buf, aiocbp->aio_nbytes, respectively. The return status n can be retrieved upon completion using aio_return(3).

The data is read starting at the absolute file offset aiocbp->aio_offset, regardless of the current file position. After this request, the value of the current file position is unspecified.

The "asynchronous" means that this call returns as soon as the request has been enqueued; the read may or may not have completed when the call returns.

ERRORS

EBADF aio_fildes is not a valid file descriptor open for reading.

EINVAL One or more of aio_offset, aio_reqprio, aio_nbytes are invalid.

COLOPHON

This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://www.linux-directory.com/man3/aio_read.shtml
crawl-003
refinedweb
105
69.79
I'm having trouble with a module in my program that is supposed to read in the files in a directory and return those with the .xml extension. When I import the module from the command line it works perfectly and will return a different list as soon as I change the contents of the directory. On the other hand, this is not the case when I call it from a cgi script module that displays a web page. I get the right list at first but if I change the contents of the directory there is no change in the list as I refresh the page.

Here is what the module looks like:

import os
import re

def findXML():
    xML = []
    os.system("ls data > xml")
    file = open("xml", 'r')
    lines = file.readlines()
    for l in lines:
        if re.search("\.xml", l):
            xML.append(l.rstrip(".xml\n"))
    return xML

if __name__ == '__main__':
    pass

Any ideas would be appreciated :)
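A side note on the module itself, separate from the caching question: shelling out to ls and parsing a temp file is fragile, and l.rstrip(".xml\n") strips a set of trailing characters rather than a suffix, so a name like "box.xml" comes back as "bo". The directory can be read directly instead; a sketch of the same function with an illustrative directory argument:

```python
import os

def findXML(directory="data"):
    # List the names of .xml files in `directory`, without the
    # extension, by reading the directory instead of running `ls`.
    names = []
    for name in sorted(os.listdir(directory)):
        if name.endswith(".xml"):
            names.append(name[:-len(".xml")])
    return names
```

Because this re-reads the directory on every call, it also avoids any stale state from a leftover "xml" temp file.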
https://www.daniweb.com/programming/software-development/threads/47637/call-to-module-won-t-refresh
CC-MAIN-2018-30
refinedweb
160
78.28
Multithreaded issues

Here it is, the first blog post which has been requested by one of you, my fellow readers! Today, we dive into a multi-threaded queue and discover nasty concurrency problems.

The problem

The problem we want to look at is a multi-thread ready queue. It shall support the following functions:

- Insert: A non-blocking insert which returns immediately.
- Get: A get which blocks until there is something to return.
- Wait: A blocking wait which - who would believe it - waits until the queue is empty again.

So here we go. We'll take a single Mutex to guard access to the local variables, and two conditions. One condition will check if the queue is empty; we will use it to block the wait call. The other condition will be used to block the get call.

Implementation

We'll implement it in Python, so we don't have to implement threading primitives on our own.

from threading import *

class MT_Queue:
    def __init__ (self):
        self.queue = list ()
        self.mainLock = Lock ()
        self.queueEmptyCondition = Condition (self.mainLock)
        self.queueFullCondition = Condition (self.mainLock)

    def get (self):
        self.mainLock.acquire ()
        # straightforward?
        if (len (self.queue) == 0):
            self.queueFullCondition.wait ()
        item = self.queue.pop (0)
        if (len (self.queue) == 0):
            self.queueEmptyCondition.notifyAll ()
        self.mainLock.release ()
        return item

    def insert (self, item):
        self.mainLock.acquire ()
        self.queue.append (item)
        self.mainLock.release ()

    def wait (self):
        self.mainLock.acquire ()
        if (len (self.queue) == 0):
            self.mainLock.release ()
            return
        else:
            self.queueEmptyCondition.wait ()
        self.mainLock.release ()

Seems pretty simple, yet we have a real big problem as soon as we have one thread blocked on the get call and another which is running somewhere else while a third thread inserts a new item. (Probably you say this is kinda unlikely, but try for yourself; most probably you'll run into this problem immediately.)

The error

The error we will get is that a thread will try to pop an empty list.
How is that, you might ask? Let me show you a picture which illustrates the problem. The start situation is like this:

- Thread #1 is blocked inside get on the wait condition.
- Thread #2 is working somewhere.
- Thread #3 is about to insert an item.

Shame on us, we've just written a race condition! If thread #2 is faster than thread #1, everything fails! Let's see how to fix this problem:

def get (self):
    self.mainLock.acquire ()
    # fixed
    while True:
        if (len (self.queue) == 0):
            self.queueFullCondition.wait ()
        if (len (self.queue) == 0):
            continue
        else:
            break
    item = self.queue.pop (0)
    if (len (self.queue) == 0):
        self.queueEmptyCondition.notifyAll ()
    self.mainLock.release ()
    return item

In case we've been waiting but someone else was faster, we'll simply run once more into the wait condition. This way, nothing bad can happen.

Conclusion

Multithreading has many very subtle points one has to be aware of. When using a condition, always take a look at the order in which threads may wake up, and be aware that two threads waiting on the same mutex have the same chance to get it. There is no preference for the thread which waits on a condition. If you have questions, feel free to ask in the comments!
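The "wait in a loop, re-check the predicate" idiom shown in the fix is the standard pattern for condition variables. One more detail worth flagging: the insert shown earlier never calls notify on the condition, so a blocked get would not actually be woken; any working version needs that call. A compact restatement of the queue with both fixes, using context managers (my sketch, not the author's code):

```python
import threading

class MTQueue:
    def __init__(self):
        self.items = []
        self.lock = threading.Lock()
        self.not_empty = threading.Condition(self.lock)
        self.empty = threading.Condition(self.lock)

    def insert(self, item):
        with self.lock:
            self.items.append(item)
            self.not_empty.notify()      # wake one blocked get()

    def get(self):
        with self.lock:
            # Re-check the predicate after every wakeup: another
            # consumer may have emptied the queue first.
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            if not self.items:
                self.empty.notify_all()  # release anyone in wait_empty()
            return item

    def wait_empty(self):
        with self.lock:
            while self.items:
                self.empty.wait()
```

Writing every wait inside a while over its predicate makes the code immune both to the race described above and to spurious wakeups.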
https://anteru.net/blog/2006/multithreaded-issues/
CC-MAIN-2019-13
refinedweb
537
70.39
Posts11 Joined Last visited Contact Methods - Website URL Profile Information - GenderMale - LocationKingston, ON Jeff S's Achievements Newbie (1/14) 4 Reputation - That worked really great! - I also want to note, that it is the combination of reordering the elements AND executing an animation that causes the undesirable behaviour. You will notice that you can reorder the elements 20 times and then animate and it looks pretty good. You can then animate 20 times and the behaviour is still consistent. You have to reorder, animate, reorder, animate .... several times to see the behaviour compound like I am talking about. - Hi Jonathan, I have changed my example and codepen code so that "Z-Index +" now says "Order +" and "Z-Index -" now says "Order -" to reduce confusion. You correctly observed that I wasn't using CSS z-index properties and I can see how this could cause confusion. Hopefully this alteration will make it less confusing. My goal with GreenSock is to create vector illustrations and make slight modifications to those illustrations which will allow me to animate from one variation to another extremely rapidly. The codepen example uses a process extremely similar to how I am going about things. In particular, there is no CSS involved. I am working on a process that will allow very complex animations in as little time as it takes for an illustrator to move some vectors around and upload some files. If I were to add CSS into the mix, it would make the process a lot more tedious and essentially defeat the purpose of what I am working towards. So I am really hoping for a CSS-free solution to this problem. I am looking for something along the lines of the fixMatrix (javascript solution) which will keep the workflow to a minimum. If you refer to this thread, you can see that I am doing some weird things that were never anticipated to be done with these libraries and explains better the necessity for the fixMatrix function. 
I understand your point with the jQuery clone function. In my example, I am removing the element first and then putting it in a different position in the DOM, so there should never be any namespace collision. When you click the "Order +" and "Order -" buttons and observe the image, you will notice the stacking order of the SVG elements change. It is basically just changing

<g id="layer1"> <g id="humerous" ...></g> <path id="forearm-4" ...></path> </g>

to

<g id="layer1"> <path id="forearm-4" ...></path> <g id="humerous" ...></g> </g>

and back. There seem to be some transformations under the hood (like we had with fixMatrix) causing some undesirable behaviour for me. And the more the elements are reordered, the more compounded the undesirable behaviour becomes. Let me know if that clears things up, Jonathan, or if you have any other questions.

Reordering SVG elements creates inconsistent animation

Jeff S posted a topic in GSAP

Use case: As a user, I want to be able to modify the order of the SVG elements and be able to animate predictably afterwards. Codepen: Steps to reproduce:

1. Hit the "Go Left" and "Go Right" buttons; this is the correct baseline animation I want to preserve.
2. Restack the elements by hitting "Order +"; it will remove the element and reposition it above.
3. When you hit "Go Left" and "Go Right" again, the animation is slightly changed.
4. Now if you hit "Order -", the element will reposition below.
5. Hit "Go Left" and "Go Right" again; the animation is different.
6. Repeating steps 2 through 5 will make the animation progressively worse.

In the fixMatrix function, I log the transform element. As you can see, the xOrigin and yOrigin values change each time the stacking order is modified. At this stage, I don't know what to propose for a solution, so I am just opening the dialog about this issue.

It was working without the beginning +/- because it was only replacing the value after. I do think the +/- at the beginning does make it more readable.
From the values I saw in my example (Inkscape-generated SVGs), there were never any + signs, but that is not to say some other program wouldn't produce them. I think this is good. I think Jack might be right about the escaping, because you can do stuff like [a-z] in the [] and that means a to z. If you wanted to include the hyphen I think you would need to use [a-z\-]. Also, I just remembered, Jack, in the original code I made it return n, not m. So this:

function clean(selectionText) {
  var e = document.querySelector(selectionText);
  e.setAttribute("d", e.getAttribute("d").replace(/\d+e[\-\+]\d+/ig, function(m) {
    var n = +m;
    return (n < 0.0001 && n > -0.0001) ? 0 : m;
  }));
}

became

function clean(selectionText) {
  var e = document.querySelector(selectionText);
  e.setAttribute("d", e.getAttribute("d").replace(/\d+e[\-\+]\d+/ig, function(m) {
    var n = +m;
    return (n < 0.0001 && n > -0.0001) ? 0 : n;
  }));
}

Otherwise it was just giving back the scientific notation. Not sure if you used that exact code in the library or not, but just a heads up. So this is what I have now:

function clean(selectionText) {
  var e = document.querySelector(selectionText);
  e.setAttribute("d", e.getAttribute("d").replace(/[\-\+]?\d*\.?\d+e[\-\+]?\d+/ig, function(m) {
    var n = +m;
    return (n < 0.0001 && n > -0.0001) ? 0 : n;
  }));
}

My intention with the new regex was that sometimes the scientific notation will take on the form 2.6e-4 but other times it may be 6e-4. So the integer followed by the decimal point may or may not exist. But rather than close this off right now, I am going to do some more testing. I will provide a large list of values that are getting replaced so that we can examine them for an exhaustive and minified regex.

I had to make one minor adjustment to the regex: /\d+e[\-\+]\d+/ig needs to be /(\d\.)?\d+e[\-\+]\d+/ig. This is because sometimes the scientific notation was taking the form 2.6e-4. Hope you can add this in to the next release. Thanks Jack!
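Since the clean() function above needs a live DOM, here is a DOM-free sketch of the same replacement (cleanPathData is a made-up helper name, not part of GSAP or the forum code) showing what the final regex actually does to an SVG path string:

```javascript
// Hypothetical DOM-free helper (not part of GSAP): collapses tiny
// scientific-notation values in an SVG path "d" string, exactly like
// the clean() function above but operating on a plain string.
function cleanPathData(d) {
  return d.replace(/[\-\+]?\d*\.?\d+e[\-\+]?\d+/ig, function (m) {
    var n = +m; // coerce the matched scientific-notation token to a number
    // values within ±0.0001 are snapped to 0; everything else is
    // rewritten in plain decimal notation
    return (n < 0.0001 && n > -0.0001) ? 0 : n;
  });
}

// Tiny values become 0, larger ones just lose the e-notation:
console.log(cleanPathData("M 6e-5 2.6e-4 L 1e+2 5"));
// → "M 0 0.00026 L 100 5"
```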
Transforming already transformed matrix

Jeff S replied to Jeff S's topic in GSAP

It helps a lot actually. Thanks Jack!

Thank you to Jack and Carl. I did play around with SVGOMG before I posted. But basically, I want to morph one SVG shape to another by saving them in separate files, which could potentially lead to thousands of SVG files. So adding a step to resave the files to filter them into good form would be tedious. I have taken Jack's code and applied it in my own Javascript class that loads SVGs. You saved me writing the regex, Jack, so thank you very much for that! Thanks for the great libraries and great support. I am very happy with my purchase!
webpack-isomorphic-tools is a small helper module providing very basic support for isomorphic (universal) rendering when using Webpack. It was created a long time ago when Webpack was v1 and the whole movement was just starting. Therefore webpack-isomorphic-tools is a hacky solution. It allowed many projects to set up basic isomorphic (universal) rendering in the early days but is now considered deprecated and new projects shouldn't use it. This library can still be found in legacy projects. For new projects use either universal-webpack or all-in-one frameworks like Next.js. Suppose you have an application which is built using Webpack. It works in the web browser. Should it be "isomorphic" ("universal")? It's better if it is. One reason is that search engines will be able to index your page. The other reason is that we live in a realtime mobile age which has declared war on network latency, and so it's always better to fetch already rendered content than to first fetch the application code and only then fetch the content to render the page. Every time you release a client-side-only website to the internet someone writes a frustrated blog post. So, it's obvious then that web applications should be "isomorphic" ("universal"), i.e. be able to render both on the client and the server, depending on circumstances. And it is perfectly possible nowadays since javascript runs everywhere: both in web browsers and on servers. Ok, then one can just go ahead and run the web application in Node.js and it's done. But, there's one gotcha: a Webpack application will usually crash when run in Node.js straight ahead (you'll get a lot of SyntaxErrors with Unexpected tokens). The reason is that Webpack introduces its own layer above the standard javascript. This extra layer handles all require() calls, magically resolving them to whatever it is configured to.
For example, Webpack is perfectly fine with the code require()ing CSS styles or SVG images. Bare Node.js doesn't come with such trickery up its sleeve. Maybe it can be somehow enhanced to be able to do such things? Turned out that it can, and that's what webpack-isomorphic-tools do: they inject that require() magic layer above the standard javascript in Node.js. Still it's a hacky solution, and a better way would be to compile server-side code with Webpack the same way it already compiles the client-side code. This is achieved via target: "node" configuration option, and that's what universal-webpack library does. However, webpack-isomorphic-tools happened to be a bit simpler to set up, so they made their way into many now-legacy projects, so some people still use this library. It's not being maintained anymore though, and in case of any issues people should just migrate to universal-webpack or something similar. webpack-isomorphic-tools mimics (to a certain extent) Webpack's require() magic when running application code on a Node.js server without Webpack. It basically fixes all those require()s of assets and makes them work instead of throwing SyntaxErrors. It doesn't provide all the capabilities of Webpack (for example, plugins won't work), but for the basic stuff, it works. For example, consider images. Images are require()d in React components and then used like this: // alternatively one can use `import`, // but with `import`s hot reloading won't work // import imagePath from '../image.png' // Just `src` the image inside the `render()` method class Photo extends React.Component { render() { // When Webpack url-loader finds this `require()` call // it will copy `image.png` to the build folder // and name it something like `9059f094ddb49c2b0fa6a254a6ebf2ad.png`, // because Webpack is set up to use the `[hash]` file naming feature // which makes browser asset caching work correctly. 
return <img src={ require('../image.png') }/> } } It works on the client-side because Webpack intelligently replaces all the require() calls with a bit of magic. But it wouldn't work on the server-side because Node.js only knows how to require() javascript modules. It would just throw a SyntaxError. To solve this issue one can use webpack-isomorphic-tools. With the help of webpack-isomorphic-tools in this particular case the require() call will return the real path to the image on the disk. It would be something like ../../build/9059f094ddb49c2b0fa6a254a6ebf2ad.png. How did webpack-isomorphic-tools figure out this weird real file path? It's just a bit of magic. webpack-isomorphic-tools is extensible, and finding the real paths for assets is the simplest example of what it can do inside require() calls. Using custom configuration one can make require() calls (on the server) return anything (not just a String; it may be a JSON object, for example). For example, if one is using Webpack css-loader modules feature (also referred to as "local styles") one can make require(*.css) calls return JSON objects with generated CSS class names maps like they do in este and react-redux-universal-hot-example. webpack-isomorphic-tools are required both for development and production $ npm install webpack-isomorphic-tools --save First you add webpack-isomorphic-tools plugin to your Webpack configuration. var WebpackIsomorphicToolsPlugin = require('webpack-isomorphic-tools/plugin') var webpackIsomorphicToolsPlugin = // webpack-isomorphic-tools settings reside in a separate .js file // (because they will be used in the web server code too). 
new WebpackIsomorphicToolsPlugin(require('./webpack-isomorphic-tools-configuration')) // also enter development mode since it's a development webpack configuration // (see below for explanation) .development() // usual Webpack configuration module.exports = { context: '(required) your project path here', module: { loaders: [ ..., { test: webpackIsomorphicToolsPlugin.regularExpression('images'), loader: 'url-loader?limit=10240', // any image below or equal to 10K will be converted to inline base64 instead } ] }, plugins: [ ..., webpackIsomorphicToolsPlugin ] ... } What does .development() method do? It enables development mode. In short, when in development mode, it disables asset caching (and enables asset hot reload), and optionally runs its own "dev server" utility (see port configuration setting). Call it in development webpack build configuration, and, conversely, don't call it in production webpack build configuration. For each asset type managed by webpack-isomorphic-tools there should be a corresponding loader in your Webpack configuration. For this reason webpack-isomorphic-tools/plugin provides a .regularExpression(assetType) method. The assetType parameter is taken from your webpack-isomorphic-tools configuration: import WebpackIsomorphicToolsPlugin from 'webpack-isomorphic-tools/plugin' export default { assets: { images: { extensions: ['png', 'jpg', 'gif', 'ico', 'svg'] } } } That's it for the client side. Next, the server side. 
You create your server side instance of webpack-isomorphic-tools in the very main server javascript file (and your web application code will reside in some server.js file which is require()d in the bottom) var WebpackIsomorphicTools = require('webpack-isomorphic-tools') // this must be equal to your Webpack configuration "context" parameter var projectBasePath = require('path').resolve(__dirname, '..') // this global variable will be used later in express middleware global.webpackIsomorphicTools = new WebpackIsomorphicTools(require('./webpack-isomorphic-tools-configuration')) // initializes a server-side instance of webpack-isomorphic-tools // (the first parameter is the base path for your project // and is equal to the "context" parameter of you Webpack configuration) // (if you prefer Promises over callbacks // you can omit the callback parameter // and then it will return a Promise instead) .server(projectBasePath, function() { // webpack-isomorphic-tools is all set now. // here goes all your web application code: // (it must reside in a separate *.js file // in order for the whole thing to work) require('./server') }) Then you, for example, create an express middleware to render your pages on the server import React from 'react' // html page markup import Html from './html' // will be used in express_application.use(...) 
export function pageRenderingMiddleware(request, response) { // clear require() cache if in development mode // (makes asset hot reloading work) if (process.env.NODE_ENV !== 'production') { webpackIsomorphicTools.refresh() } // for react-router example of determining current page by URL take a look at this: // const pageComponent = [determine your page component here using request.path] // for a Redux Flux store implementation you can see the same example: // const fluxStore = [initialize and populate your flux store depending on the page being shown] // render the page to string and send it to the browser as text/html response.send('<!doctype html>\n' + React.renderToString(<Html assets={webpackIsomorphicTools.assets()} component={pageComponent} store={fluxStore}/>)) } And finally you use the assets inside the Html component's render() method import React, {Component, PropTypes} from 'react' import serialize from 'serialize-javascript' export default class Html extends Component { static propTypes = { assets : PropTypes.object, component : PropTypes.object, store : PropTypes.object } // a sidenote for "advanced" users: // (you may skip this) // // this file is usually not included in your Webpack build // because this React component is only needed for server side React rendering. // // so, if this React component is not `require()`d from anywhere in your client code, // then Webpack won't ever get here // which means Webpack won't detect and parse any of the `require()` calls here, // which in turn means that if you `require()` any unique assets here // you should also `require()` those assets somewhere in your client code, // otherwise those assets won't be present in your Webpack bundle and won't be found. 
// render() { const { assets, component, store } = this.props // "import" will work here too // but if you want hot reloading to work while developing your project // then you need to use require() // because import will only be executed a single time // (when the application launches) // you can refer to the "Require() vs import" section for more explanation const picture = require('../assets/images/cat.jpg') // favicon const icon = require('../assets/images/icon/32x32.png') const html = ( <html lang="en-us"> <head> <meta charSet="utf-8"/> <title>Example</title> {/* favicon */} <link rel="shortcut icon" href={icon} /> {/* styles (will be present only in production with webpack extract text plugin) */} {Object.keys(assets.styles).map((style, i) => <link rel="stylesheet" href={assets.styles[style]} key={i}/>)} {/* resolves the initial style flash (flicker) on page load in development mode */} { Object.keys(assets.styles).length === 0 ? <style dangerouslySetInnerHTML={{__html: require('../assets/styles/main_style.css')}}/> : null } </head> <body> {/* image requiring demonstration */} <img src={picture}/> {/* rendered React page */} <div id="content" dangerouslySetInnerHTML={{__html: React.renderToString(component)}}/> {/* Flux store data will be reloaded into the store on the client */} <script dangerouslySetInnerHTML={{__html: `window._flux_store_data=${serialize(store.getState())};`}} /> {/* javascripts */} {/* (usually one for each "entry" in webpack configuration) */} {/* (for more information on "entries" see) */} {Object.keys(assets.javascript).map((script, i) => <script src={assets.javascript[script]} key={i}/> )} </body> </html> ) return html } } assets in the code above are simply the contents of webpack-assets.json which is created by webpack-isomorphic-tools in your project base folder. webpack-assets.json (in the simplest case) keeps track of the real paths to your assets, e.g.
{ "javascript": { "main": "/assets/main-d8c29e9b2a4623f696e8.js" }, "styles": { "main": "/assets/main-d8c29e9b2a4623f696e8.css" }, "assets": { "./assets/images/cat.jpg": "", "./assets/images/icon/32x32.png": "" } }

That's it, now you can require() your assets "isomorphically" (both on client and server).

A working example

webpack-isomorphic-tools are featured in react-redux-universal-hot-example. There it is used to require() images and CSS styles (in the form of CSS modules). Also you may look at this sample project. There it is used to require() images and CSS styles (without using the CSS modules feature). Some source code guidance for the aforementioned project:

- webpack-isomorphic-tools configuration
- webpack-isomorphic-tools plugin
- webpack-isomorphic-tools server-side initialization

Configuration

Available configuration parameters:

{ // debug mode. // when set to true, lets you see debugging messages in the console. // debug: true, // is false by default // (optional) // (recommended) // // when `port` is set, then this `port` is used // to run an HTTP server serving Webpack assets. // (`express` npm package must be installed in order for this to work) // // this way, in development mode, `webpack-assets.json` won't ever // be written to disk and instead will always reside in memory // and be served from memory (just as `webpack-dev-server` does). // // this `port` setting will take effect only in development mode. // // port: 8888, // is false by default // verbosity. // // when set to 'no webpack stats', // outputs no Webpack stats to the console in development mode. // this also means no Webpack errors or warnings will be output to the console. // // when set to 'webpack stats for each build', // outputs Webpack stats to the console // in development mode on each incremental build. // (i guess no one is gonna ever use this setting) // // when not set (default), outputs Webpack stats to the console // in development mode for the first build only.
// // verbosity: ..., // is `undefined` by default // enables support for `require.context()` and `require.ensure()` functions. // is turned off by default // to skip unnecessary code instrumentation // because not everyone uses it. // // patch_require: true, // is false by default // By default it creates 'webpack-assets.json' file at // webpackConfiguration.context (which is your project folder). // You can change the assets file path as you wish // (therefore changing both folder and filename). // // (relative to webpackConfiguration.context which is your project folder) // webpack_assets_file_path: 'webpack-assets.json', // By default, when running in debug mode, it creates 'webpack-stats.json' file at // webpack_configuration.context (which is your project folder). // You can change the stats file path as you wish // (therefore changing both folder and filename). // // (relative to webpack_configuration.context which is your project folder) // webpack_stats_file_path: 'webpack-stats.json', // Makes `webpack-isomorphic-tools` aware of Webpack aliasing feature // (if you use it) // // // The `alias` parameter corresponds to `resolve.alias` // in your Webpack configuration. // alias: webpackConfiguration.resolve.alias, // is {} by default // if you're using Webpack's `resolve.modulesDirectories` // then you should also put them here. // // modulesDirectories: webpackConfiguration.resolve.modulesDirectories // is ['node_modules'] by default // here you can define all your asset types // assets: { // keys of this object will appear in: // * webpack-assets.json // * .assets() method call result // * .regularExpression(key) method call // pngImages: { // which file types belong to this asset type // extension: 'png', // or extensions: ['png', 'jpg', ...], // [optional] // // here you are able to add some file paths // for which the require() call will bypass webpack-isomorphic-tools // (relative to the project base folder, e.g. 
./sources/server/kitten.jpg.js) // (also supports regular expressions, e.g. /^\.\/node_modules\/*/, // and functions(path) { return true / false }) // // exclude: [], // [optional] // // here you can specify manually the paths // for which the require() call will be processed by webpack-isomorphic-tools // (relative to the project base folder, e.g. ./sources/server/kitten.jpg.js) // (also supports regular expressions, e.g. /^\.\/node_modules\/*/, // and functions(path) { return true / false }). // in case of `include` only included paths will be processed by webpack-isomorphic-tools. // // include: [], // [optional] // // determines which webpack stats modules // belong to this asset type // // arguments: // // module - a webpack stats module // // (to understand what a "module" is // read the "What's a "module"?" section of this readme) // // regularExpression - a regular expression // composed of this asset type's extensions // e.g. /\.scss$/, /\.(ico|gif)$/ // // options - various options // (development mode flag, // debug mode flag, // assets base url, // project base folder, // regular_expressions{} for each asset type (by name), // webpack stats json object) // // log // // returns: a Boolean // // by default is: "return regularExpression.test(module.name)" // // premade utility filters: // // WebpackIsomorphicToolsPlugin.styleLoaderFilter // (for use with style-loader + css-loader) // filter: function(module, regularExpression, options, log) { return regularExpression.test(module.name) }, // [optional] // // transforms a webpack stats module name // to an asset path (usually is the same thing) // //: a String // // by default is: "return module.name" // // premade utility path extractors: // // WebpackIsomorphicToolsPlugin.styleLoaderPathExtractor // (for use with style-loader + css-loader) // path: function(module, options, log) { return module.name }, // [optional] // // parses a webpack stats module object // for an asset of this asset type // to 
whatever you need to get // when you require() these assets // in your code later on. // // this is what you'll see as the asset value in webpack-assets.json: // { ..., path(): compile(parser()), ... } // // can be a CommonJS module source code: // module.exports = ...what you export here is // what you get when you require() this asset... // // if the returned value is not a CommonJS module source code // (it may be a string, a JSON object, whatever) // then it will be transformed into a CommonJS module source code. // // in other words: // // // making of webpack-assets.json // for each type of configuration.assets // modules.filter(type.filter).for_each (module) // assets[type.path()] = compile(type.parser(module)) // // // requiring assets in your code // require(path) = (path) => return assets[path] // //: whatever (could be a filename, could be a JSON object, etc) // // by default is: "return module.source" // // premade utility parsers: // // WebpackIsomorphicToolsPlugin.urlLoaderParser // (for use with url-loader or file-loader) // require() will return file URL // (is equal to the default parser, i.e. no parser) // // WebpackIsomorphicToolsPlugin.cssLoaderParser // (for use with css-loader when not using "modules" feature) // require() will return CSS style text // // WebpackIsomorphicToolsPlugin.cssModulesLoaderParser // (for use with css-loader when using "modules" feature) // require() will return a JSON object map of style class names // which will also have a `_style` key containing CSS style text // parser: function(module, options, log) { log.info('# module name', module.name) log.info('# module source', module.source) log.info('# debug mode', options.debug) log.info('# development mode', options.development) log.info('# webpack version', options.webpackVersion) log.debug('debugging') log.warning('warning') log.error('error') } }, ... }, ...] 
} Configuration examples

url-loader / file-loader (images, fonts, etc)

url-loader and file-loader are supported with no additional configuration { assets: { images: { extensions: ['png', 'jpg'] }, fonts: { extensions: ['woff', 'ttf'] } } }

style-loader (standard CSS stylesheets)

If you aren't using the "CSS modules" feature of Webpack, and if in your production Webpack config you use ExtractTextPlugin for CSS styles, then you can set it up like this { assets: { styles: { extensions: ['less', 'scss'], // which `module`s to parse CSS from: filter: function(module, regularExpression, options, log) { if (options.development) { return WebpackIsomorphicToolsPlugin.styleLoaderFilter(module, regularExpression, options, log) } // In production mode there will be no CSS text at all // because all styles will be extracted by Webpack Extract Text Plugin // into a .css file (as per Webpack configuration). // // Therefore in production mode `filter` function always returns non-`true`. }, // How to correctly transform kinda weird `module.name` // of the `module` created by Webpack "css-loader" // into the correct asset path: path: WebpackIsomorphicToolsPlugin.styleLoaderPathExtractor, // How to extract these Webpack `module`s' javascript `source` code. // basically takes `module.source` and modifies `module.exports` a little.
parser: WebpackIsomorphicToolsPlugin.cssLoaderParser } } }

style-loader (CSS stylesheets with "CSS modules" feature)

If you are using the "CSS modules" feature of Webpack, and if in your production Webpack config you use ExtractTextPlugin for CSS styles, then you can set it up like this { assets: { styleModules: { extensions: ['less', 'scss'], // which `module`s to parse CSS style class name maps from: filter: function(module, regex, options, log) { if (options.development) { return WebpackIsomorphicToolsPlugin.styleLoaderFilter(module, regex, options, log) } // In production mode there's no Webpack "style-loader", // so `module.name`s of the `module`s created by Webpack "css-loader" // (those which contain CSS text) // will be simply equal to the correct asset path return regex.test(module.name) }, // How to correctly transform `module.name`s // into correct asset paths path: function(module, options, log) { if (options.development) { // In development mode there's Webpack "style-loader", // so `module.name`s of the `module`s created by Webpack "css-loader" // (those picked by the `filter` function above) // will be kinda weird, and this path extractor extracts // the correct asset paths from these kinda weird `module.name`s return WebpackIsomorphicToolsPlugin.styleLoaderPathExtractor(module, options, log); } // in production mode there's no Webpack "style-loader", // so `module.name`s will be equal to correct asset paths return module.name }, // How to extract these Webpack `module`s' javascript `source` code. // Basically takes `module.source` and modifies its `module.exports` a little. parser: function(module, options, log) { if (options.development) { // In development mode it adds an extra `_style` entry // to the CSS style class name map, containing the CSS text return WebpackIsomorphicToolsPlugin.cssModulesLoaderParser(module, options, log); } // In production mode there's Webpack Extract Text Plugin // which extracts all CSS text away, so there's // only CSS style class name map left.
return module.source } } } }

svg-react-loader

{ assets: { svg: { extension: 'svg', runtime: true } } } { module: { rules: [{ test: /\.svg$/, use: [{ loader: 'babel-loader' }, { loader: 'svg-react-loader' }] }] } }

What is webpack-assets.json?

This file is needed for webpack-isomorphic-tools operation on the server. It is created by a custom Webpack plugin and is then read from the filesystem by the webpack-isomorphic-tools server instance. When you require(pathToAnAsset) an asset on the server, then what you get is simply what's there in this file corresponding to this pathToAnAsset key (under the assets section). Pseudocode: // requiring assets in your code require(path) = (path) => return assets[path]

Therefore, if you get such a message in the console: [webpack-isomorphic-tools] [error] asset not found: ./~/react-toolbox/lib/font_icon/style.scss then it means that the asset you requested (require()d) is absent from your webpack-assets.json, which in turn means that you haven't placed this asset into your webpack-assets.json in the first place. How to place an asset into webpack-assets.json? Pseudocode: // making of webpack-assets.json inside the Webpack plugin for each type of configuration.assets modules.filter(type.filter).for_each (module) assets[type.path()] = compile(type.parser(module))

Therefore, if you get the "asset not found" error, first check your webpack-assets.json and second check your webpack-isomorphic-tools configuration section for this asset type: are your filter, path and parser functions correct?

What are Webpack stats?

Webpack stats are a description of all the modules in a Webpack build. When running in debug mode, Webpack stats are output to a file named webpack-stats.json in the same folder as your webpack-assets.json file. One may be interested in the contents of this file when writing custom filter, path or parser functions. This file is not needed for operation, it's just some debugging information.

What's a "module"?
This is an advanced topic on Webpack internals. A "module" is a Webpack entity. One of the main features of Webpack is code splitting. When Webpack builds your code it splits it into "chunks" - large portions of code which can be downloaded separately later on (if needed), therefore reducing the initial page load time for your website visitor. These big "chunks" aren't monolithic and in their turn are composed of "modules", which are: standard CommonJS javascript modules you require() every day, pictures, stylesheets, etc. Every time you require() something (it could be anything: an npm module, a javascript file, a css style, or an image) a module entry is created by Webpack. And the file where this require() call originated is called a reason for this require()d module. Each module entry has a name and a source code, along with a list of chunks it's in and a bunch of other miscellaneous irrelevant properties. For example, here's a piece of an example webpack-stats.json file (which is generated along with webpack-assets.json in debug mode). Here you can see a random module entry created by Webpack. { ...
}, { "id": 1, "name": "./~/fbjs/lib/invariant.js", "source": "module.exports = global[\"undefined\"] = require(\"-!G:\\\\work\\\\isomorphic-demo\\\\node_modules\\\\fbjs\\\\lib\\\\invariant.js\");", // the rest of the fields are irrelevant "chunks": [ 0 ], "identifier": "G:\\work\\isomorphic-demo\\node_modules\\expose-loader\\index.js?undefined!G:\\work\\isomorphic-demo\\node_modules\\fbjs\\lib\\invariant.js", "index": 27, "index2": 7, "size": 117, "cacheable": true, "built": true, "optional": false, "prefetched": false, "assets": [], "issuer": "G:\\work\\isomorphic-demo\\node_modules\\react\\lib\\ReactInstanceHandles.js", "failed": false, "errors": 0, "warnings": 0, "reasons": [ { "moduleId": 418, "moduleIdentifier": "G:\\work\\isomorphic-demo\\node_modules\\react\\lib\\ReactInstanceHandles.js", "module": "./~/react/lib/ReactInstanceHandles.js", "moduleName": "./~/react/lib/ReactInstanceHandles.js", "type": "cjs require", "userRequest": "fbjs/lib/invariant", "loc": "17:16-45" }, ... { "moduleId": 483, "moduleIdentifier": "G:\\work\\isomorphic-demo\\node_modules\\react\\lib\\traverseAllChildren.js", "module": "./~/react/lib/traverseAllChildren.js", "moduleName": "./~/react/lib/traverseAllChildren.js", "type": "cjs require", "userRequest": "fbjs/lib/invariant", "loc": "19:16-45" } ] }, ... ] } Judging by its reasonsand their userRequests one can deduce that this moduleis require()d by many other modules in this project and the code triggering this moduleentry creation could look something like this var invariant = require('fbjs/lib/invariant') Every time you require()anything in your code, Webpack detects it during build process and the require()d moduleis "loaded" (decorated, transformed, replaced, etc) by a corresponding module "loader" (or loaders) specified in Webpack configuration file ( webpack.conf.js) under the "module.loaders" path. 
For example, say, all JPG images in a project are configured to be loaded with a "url-loader":

// Webpack configuration
module.exports = {
  ...
  module: {
    loaders: [
      ...
      {
        test   : /\.jpg$/,
        loader : 'url-loader'
      }
    ]
  },
  ...
}

This works on the client: require() calls will return URLs for JPG images. The next step is to make require() calls to these JPG images behave the same way when this code is run on the server, with the help of webpack-isomorphic-tools.

So, the fields of interest of the module object would be name and source: first you find the modules of interest by their names (in this case, the module names would end in ".jpg") and then you parse the sources of those modules to extract the information you need (in this case that would be the real path to an image). The module object for an image would look like this:

{
  ...
  "name": "./assets/images/husky.jpg",
  "source": "module.exports = __webpack_public_path__ + \"9059f094ddb49c2b0fa6a254a6ebf2ad.jpg\""
}

Therefore, in this simple case, in the webpack-isomorphic-tools configuration file we create an "images" asset type with extension "jpg" and these parameters:

- the filter function would be module => module.name.endsWith('.jpg') (and it's the default filter if no filter is specified)
- the path parser function would be module => module.name (and it's the default path parser if no path parser is specified)
- the parser function would be module => module.source (and it's the default parser if no parser is specified)

When the javascript source code returned by this parser function gets compiled by webpack-isomorphic-tools it will yield a valid CommonJS javascript module which will return the URL for this image, resulting in the following piece of webpack-assets.json:

{
  ...
  assets: {
    "./assets/images/husky.jpg": "/assets/9059f094ddb49c2b0fa6a254a6ebf2ad.jpg",
    ...
  }
}

And so when you later require("./assets/images/husky.jpg") in your server code it will return "/assets/9059f094ddb49c2b0fa6a254a6ebf2ad.jpg" and that's it.
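To make the mapping concrete, here is a tiny runnable sketch of that pipeline. It is not the library's actual implementation - just the default filter/path/parser functions applied to the husky.jpg module entry, with __webpack_public_path__ assumed to be "/assets/":

```javascript
// Minimal sketch of how the default `filter`, `path` and `parser` functions
// described above turn a `module` entry into a `webpack-assets.json` entry.
const moduleEntry = {
  name: './assets/images/husky.jpg',
  source: 'module.exports = __webpack_public_path__ + "9059f094ddb49c2b0fa6a254a6ebf2ad.jpg"'
}

const filter = (m) => m.name.endsWith('.jpg')
const path   = (m) => m.name
const parser = (m) => m.source

const assets = {}
if (filter(moduleEntry)) {
  // compile the parsed source as a tiny CommonJS module
  const compiled = new Function('module', '__webpack_public_path__', parser(moduleEntry))
  const fakeModule = { exports: undefined }
  compiled(fakeModule, '/assets/')
  assets[path(moduleEntry)] = fakeModule.exports
}

console.log(assets['./assets/images/husky.jpg'])
// /assets/9059f094ddb49c2b0fa6a254a6ebf2ad.jpg
```

Running this in Node prints the same URL that ends up in webpack-assets.json above.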
API

Note: all exported functions and public methods have camelCase aliases.

Constructor (both Webpack plugin and server tools)

Takes an object with options (see the Configuration section above).

process.env.NODE_ENV (server tools instance only)

The process.env.NODE_ENV variable is examined to determine if it's production mode or development mode. Any value for process.env.NODE_ENV other than production will indicate development mode. For example, in development mode, assets aren't cached, and therefore support hot reloading (if anyone would ever need that). Also the development variable is passed to an asset type's filter, path and parser functions. The previously available .development() method for the server-side instance is now deprecated and has no effect.

.development(true or false, or undefined -> true) (Webpack plugin instance only)

Is it development mode or is it production mode? By default it's production mode. But if you're instantiating webpack-isomorphic-tools/plugin for use in the Webpack development configuration, then you should call this method to enable asset hot reloading (and disable asset caching), and optionally to run its own "dev server" utility (see the port configuration setting). It should be called right after the constructor.

.regularExpression(assetType) (aka .regexp(pathToAnAsset)) (Webpack plugin instance)

Returns the regular expression for this asset type (based on this asset type's extension (or extensions)).

WebpackIsomorphicToolsPlugin.urlLoaderParser (Webpack plugin)

A parser (see the Configuration section above) for the Webpack url-loader; also works for the Webpack file-loader. Use it for your images, fonts, etc.

.server(projectPath, [callback]) (server tools instance)

Initializes a server-side instance of webpack-isomorphic-tools with the base path for your project and makes all the server-side require() calls work.
The projectPath parameter must be identical to the context parameter of your Webpack configuration and is needed to locate webpack-assets.json (contains the assets info) which is output by the Webpack process.

When you're running your project in development mode for the very first time the webpack-assets.json file doesn't exist yet, because in development mode webpack-dev-server and your application server are run concurrently, and by the time the application server starts the webpack-assets.json file hasn't yet been generated by Webpack, so require() calls for your assets would return undefined.

To fix this you can put your application server code into a callback and pass it as a second parameter; it will be called as soon as the webpack-assets.json file is detected. If not given a callback this method will return a Promise which is fulfilled as soon as the webpack-assets.json file is detected (in case you prefer Promises over callbacks). When choosing the Promise way you won't be able to get the webpack-isomorphic-tools instance variable reference out of the .server() method call result, so your code can be a bit more verbose in this case.

.refresh() (server tools instance)

Refreshes your assets info (re-reads webpack-assets.json from disk) and also flushes the cache for all the previously require()d assets.

.assets() (server tools instance)

Returns the contents of webpack-assets.json, which is created by webpack-isomorphic-tools in your project base folder.

Troubleshooting

Cannot find module

If encountered when run on the server, this error means that the require()d path doesn't exist in the filesystem (all the require()d assets must exist in the filesystem when run on the server). If encountered during a Webpack build, this error means that the require()d path is absent from webpack-stats.json.

As an illustration, consider an example where a developer transpiles all his ES6 code using Babel into a single compiled file ./build/server-bundle-es5.js.
Because all the assets still remain in the ./src directory, a Cannot find module error will be thrown when trying to run the compiled bundle. As a workaround use babel-register instead. Or copy all assets to the ./build folder (keeping the file tree structure) and point the Webpack context to the ./src folder.

SyntaxError: Unexpected token ILLEGAL

This probably means that in some asset module source there's a require() call to some file extension that isn't specified in the webpack-isomorphic-tools configuration.

"TypeError: require.context is not a function" or "TypeError: require.ensure is not a function"

You should enable the patch_require: true flag in your webpack-isomorphic-tools configuration file. The reason is that the support for require.context() and require.ensure() is hacky at the moment. It works and does its thing but the solution is not elegant enough, if you know what I mean.

Infinite "(waiting for the first Webpack build to finish)"

If you're getting this message infinitely then it means that webpack-assets.json is never generated by Webpack. It can happen, for example, in any of these cases:

- you forgot to add the webpack-isomorphic-tools plugin to your Webpack configuration
- you aren't running your Webpack build either in parallel with your app or prior to running your app
- you're using webpack-dev-middleware inside your main server code, which you shouldn't
- your Webpack configuration's context path doesn't point to the project base directory

If none of those is your case, enable the debug: true flag in the webpack-isomorphic-tools configuration to get debugging info.

Miscellaneous

Webpack 2 System.import

Instead of implementing System.import in this library I think that it would be more rational to use existing tools for transforming System.import() calls into require() calls. See this stackoverflow answer for a list of such tools.
.gitignore

Make sure you add this to your .gitignore so that you don't commit these unnecessary files to your repo:

# webpack-isomorphic-tools
/webpack-stats.json
/webpack-assets.json

Require() vs import

In the image requiring examples above we could have written it like this:

import picture from './cat.jpg'

That would surely work. Much simpler and more modern. But the disadvantage of the new ES6 module importing is that by design it's static, as opposed to the dynamic nature of require(). Such a design decision was done on purpose and it's surely the right one:

- it's static so it can be optimized by the compiler, and you don't need to know which module depends on which and manually reorder them in the right order because the compiler does it for you
- it's smart enough to resolve cyclic dependencies
- it can load modules both synchronously and asynchronously if it wants to, and you'll never know because it can do it all by itself behind the scenes without your supervision
- the exports are static, which means that your IDE can know exactly what each module is gonna export without compiling the code (and therefore it can autocomplete names, detect syntax errors, check types, etc); the compiler too has some benefits such as improved lookup speed and syntax and type checking
- it's simple, it's transparent, it's sane

If you wrote your code with just imports it would work fine. But imagine you're developing your website, so you're changing files constantly, and you would like it all to refresh automagically when you reload your webpage (in development mode). webpack-isomorphic-tools gives you that. Remember this code in the express middleware example above?

if (process.env.NODE_ENV !== 'production') {
  webpackIsomorphicTools.refresh()
}

It does exactly as it says: it refreshes everything on page reload when you're in development mode. And to leverage this feature you need to use dynamic module loading as opposed to static loading through imports.
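The timing difference can be sketched with a toy stand-in for the require() hook (the assets map and requireAsset below are hypothetical, not the real library API):

```javascript
// Toy model: a top-level lookup is evaluated once at module load time, while
// a lookup inside a function is re-evaluated on every call - which is what
// lets a refresh()ed webpack-assets.json take effect without a restart.
const assets = { './cat.jpg': '/assets/cat-v1.jpg' }
const requireAsset = (path) => assets[path]

const eagerUrl = requireAsset('./cat.jpg')      // frozen at load time
const render = () => requireAsset('./cat.jpg')  // re-resolved on each call

assets['./cat.jpg'] = '/assets/cat-v2.jpg'      // simulate a refresh()

console.log(eagerUrl)  // /assets/cat-v1.jpg
console.log(render())  // /assets/cat-v2.jpg
```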
This can be done by require()ing your assets, and not at the top of the file where all require()s usually go but, say, inside the render() method for React components.

I also read on the internets that ES6 supports dynamic module loading too, and it looks something like this:

System.import('module')
  .then((module) => {
    // Use `module`
  })
  .catch(error => { ... })

I'm currently unfamiliar with the ES6 dynamic module loading system because I didn't research this question. Anyway, it's still a draft specification so I guess good old require() is just fine for the time being.

Also it's good to know that all this require('./asset.whatever_extension') magic is based on Node.js require hooks, and it works with imports only when your ES6 code is transpiled by Babel, which simply replaces all the imports with require()s. For now, everyone out there uses Babel, both on client and server. But when the time comes for ES6 to be widely natively adopted, and when a good enough ES6 module loading specification is released, then I (or someone else) will port this "require hook" to ES6 to work with imports.

References

Initially based on the code from react-redux-universal-hot-example by Erik Rasmussen

Also the same codebase (as in the project mentioned above) can be found in isomorphic500 by Giampaolo Bellavite

Also uses require() hooking techniques from node-hook by Gleb Bahmutov

Contributing

After cloning this repo, ensure dependencies are installed by running:

npm install

This module is written in ES6 and uses Babel for ES5 transpilation. Widely consumable JavaScript can be produced by running:

npm run build

Once npm run build has run, you may import or require() directly from node.
After developing, the full test suite can be evaluated by running:

npm test

When you're ready to test your new functionality on a real project, you can run:

npm pack

It will build, test and then create a .tgz archive which you can then install in your project folder:

npm install [module name with version].tar.gz

To do

- Implement the require.context(folder, include_subdirectories, regular_expression) and require.ensure Webpack helper functions properly
- Proper testing for log (output to a variable rather than the console)
- Proper testing for notify_stats (output to a log variable)
- Proper testing for parsers (using eval() CommonJS module compilation)
- Proper testing for the require('./node_modules/whatever.jpg') test case

License
:( one more stupid misstake, pls help me !

Hiep Nguyen
Ranch Hand
Joined: Oct 26, 2001
Posts: 46
posted Feb 19, 2003 20:41:00

this is my real code:

public class Stupid {

    public void closeInternalFrame(int cmdID){
        switch(cmdID){
            case Define.CMD_ADD_GROUP:
                break;
            /*
            case Define.CMD_DETAIL_GROUP:
                break;
            */
            case Define.CMD_REM_GROUP:
                break;
            case Define.CMD_LIST_GROUP:
                break;
        }
    }

    public void doCommand(int cmdID){
        switch(cmdID){
            case Define.CMD_ADD_GROUP:
                break;
            /*
            case Define.CMD_DETAIL_GROUP:
                break;
            */
            case Define.CMD_REM_GROUP:
                break;
            case Define.CMD_LIST_GROUP:
                break;
        }
    }

    public interface Define {
        // id commands
        int CMD_ADD_GROUP = 10;
        int CMD_REM_GROUP = 11;
        int CMD_LIST_GROUP = 12;
        int CMD_DETAIL_GROUP = 10;
    }
}

if i open one of the two comments, or open both, when i compile it has these errors:

gui/Stupid.java [21:1] duplicate case label
case Define.CMD_DETAIL_GROUP:
^
gui/Stupid.java [34:1] duplicate case label
case Define.CMD_DETAIL_GROUP:
^
2 errors
Errors compiling Stupid.

i already check, but i don't know why it has the error. do you have the error? my code stupid or my JVM is stupid? thanks !

David Weitzman
Ranch Hand
Joined: Jul 27, 2001
Posts: 1365
posted Feb 19, 2003 21:56:00

Any field defined in an interface must be static and final. Since CMD_ADD_GROUP and CMD_DETAIL_GROUP are declared in the 'Define' interface, they are therefore implicitly static and final (even though you didn't have to write out the keywords). Java compilers inline static final constants. As an illustration:

static final double PI = 3.14;

void doSomething() {
    double area = PI*radius*radius;
    // The compiler changes that statement to read
    // "double area = 3.14*radius*radius"
    // in the bytecode.
    double area2 = 3.14*radius*radius;
    // If you disassembled the first statement
    // above, you would see that the bytecode
    // is exactly the same as the bytecode
    // for this second statement.
}

Here's a slightly off-topic digression that you may read or ignore as you like: constant inlining is actually something to be careful of when you're writing libraries for other developers to use from their code. Suppose you distributed fancyLogger-1.0.jar with this declaration:

interface ErrorLevelConstants {
    public static final int WARN = 0;
    public static final int ERROR = 1;
}

I compiled against that version. Then you write a new version, fancyLogger-2.0.jar:

interface ErrorLevelConstants {
    // the constant values have been changed
    public static final int WARN = 1;
    public static final int ERROR = 2;
}

If I drop fancyLogger-2.0.jar next to my code (compiled against fancyLogger-1.0.jar), the logging will be messed up. My compiled code treats WARN as 0 and ERROR as 1 -- the only way to change that is to recompile. When I try to run the incompatible versions, where I wanted ERROR I'll get WARN. Where I wanted WARN, I'll probably get an error claiming there is no such constant as 0.

That was a long-winded way of explaining something that isn't too complex. Back on topic: this is probably a mistake, but CMD_ADD_GROUP and CMD_DETAIL_GROUP are both assigned the value 10.

switch(cmdID){
    case Define.CMD_ADD_GROUP:
        break;
    case Define.CMD_DETAIL_GROUP:
        break;
    case Define.CMD_REM_GROUP:
        break;
    case Define.CMD_LIST_GROUP:
        break;
}

is treated by the compiler as

switch(cmdID){
    case 10:
        break;
    case 10: /* <-- Illegal: 10 appears twice! */
        break;
    case 11:
        break;
    case 12:
        break;
}

You probably wanted CMD_DETAIL_GROUP to equal 13, not 10.

[ February 19, 2003: Message edited by: David Weitzman ]

Hiep Nguyen
Ranch Hand
Joined: Oct 26, 2001
Posts: 46
posted Feb 20, 2003 01:45:00

oh! i'm really stupid, because my code so long so i have a misstake, and i also don't careful when post the short code. thank you so much!
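For reference, here is a compilable version of the example with CMD_DETAIL_GROUP given its own value, as David suggests (the class name Fixed and the return values are illustrative):

```java
// Giving CMD_DETAIL_GROUP its own value (13 instead of 10) removes the
// duplicate case label, so the switch compiles with the previously
// commented-out case restored.
public class Fixed {

    interface Define {
        int CMD_ADD_GROUP = 10;
        int CMD_REM_GROUP = 11;
        int CMD_LIST_GROUP = 12;
        int CMD_DETAIL_GROUP = 13; // was 10 - the cause of "duplicate case label"
    }

    static String doCommand(int cmdID) {
        switch (cmdID) {
            case Define.CMD_ADD_GROUP:    return "add";
            case Define.CMD_DETAIL_GROUP: return "detail";
            case Define.CMD_REM_GROUP:    return "remove";
            case Define.CMD_LIST_GROUP:   return "list";
            default:                      return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(doCommand(Define.CMD_DETAIL_GROUP)); // detail
    }
}
```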
Proposed features/Description

Key Name: description

A "description=" tag is proposed for putting information meant for the general public. On interactive maps, this can appear as a rollover pop-up. The existing "note=" tag would then be reserved for notes to self and fellow mappers that you don't really want to see on a final map.

The value would be a short human-readable description of the parent node/segment/way/area. For example: "This park includes children's play area and boat-hire", "This path is flat and well graded, suitable for push chairs and young children". Multiple language versions can be supported using a language namespace prefix.

Notes:

Primary purpose:
- to save surveyed information and observation in a directly readable form in a uniform across-the-board tag that can be used directly in a secondary map mouse-over pop-up, in text listings and for text search-engines;
- sometimes it unnecessarily clutters a general map to use the name tag; for example, every footway that crosses a road could be marked description=footbridge in preference to name=footbridge.

Secondary purpose:
- would also save a rapid increase in the tag namespace;
- a place to put unstructured information until there is an appropriate custom tag (but see also the 'note' annotation tag).

May also be used with a language namespace prefix. For example, description:en=Here be Dragons or description:es=Aquí, hay dragones

description is part of the Dublin Core Metadata Element Set for describing resources in XML data (ISO Standard 15836).

Examples:

May apply to:
- <node>s: Yes
- Lines <Segment>s: Yes
- <way>s: Yes
- <area>s: Yes

Proposed by Ewmjc 02:23, 19 September 2006 (BST)

Opinion

- This has been here for ages, and no interest. All I can say is, is "notes=" not enough? If there are additional advantages with your proposal that I have missed then I have nothing against it. Ben. 03:06 3rd February 2007 (UTC)
- Really the key advantage is that it is indeed NOT the "note=" [sic] tag. "description=" is for the general public and "note=" for oneself and fellow mappers. MikeCollinson 01:28, 15 February 2007 (UTC)
- Ok, I see. If it's just for information that isn't for mapping, can't it be "info=___"? It might not be a description just because it's not notes for mappers. It could be anything. Ben. 03:01 15th February 2007 (UTC)
- Indeed it could be "info=". I slightly prefer "description=" because of the Dublin Core tie-up (and, to be honest, because I have many such tags already entered informally). MikeCollinson 02:19, 18 February 2007 (UTC)
- Language should be a suffix, as with other tags, so description:en, but I'm sure that's just a typo above. Other than that, seems like a good idea. Rjmunro 10:34, 8 May 2007 (BST)
- Corrected, thanks. MikeCollinson 23:16, 9 May 2007 (BST)

Tagwatch

Obviously this tag is in use:

Votes

- I approve this proposal. MikeCollinson 02:19, 18 February 2007 (UTC)
- I approve this proposal. Ben. 00:41 20th February 2007 (UTC)
- I approve this proposal. Rjmunro 10:35, 8 May 2007 (BST)
- I approve this proposal. Rammer 12:05, 6 November 2008 (UTC)
- I approve this proposal. MapFlea 12:31, 10 November 2008 (UTC)
- I approve this proposal. --Wanderer 19:28, 23 November 2008 (UTC)
- I approve this proposal. --Meme 05:28, 18 December 2008 (UTC)
- I approve this proposal. --Daniel27 20:02, 17 January 2009 (UTC)
- I approve this proposal. --Lulu-Ann 13:28, 19 January 2009 (UTC)
- I approve this proposal. --Andre68 12:37, 20 January 2009 (UTC)
- I approve this proposal. --Liber 18:07, 20 January 2009 (UTC)
- I approve this proposal. --Riechfield 14:43, 22 January 2009 (UTC)
- I approve this proposal. --@themis 00:40, 1 February 2009 (UTC)
- I approve this proposal. --go2sh 12:56, 12 February 2009 (UTC)
- I approve this proposal; would like to see this tag in other languages as well, but what language is default, English or local? --Skippern 20:22, 17 February 2009 (UTC)
- I approve this proposal. --Willem1 19:31, 1 March 2009 (UTC)
- I approve this proposal. -- fatbozz 09:26, 07 April 2009 (UTC)
The best way to invoke methods in Python class declarations?

Say I am declaring a class C and a few of the declarations are very similar. I'd like to use a function f to reduce code repetition for these declarations. It's possible to just declare and use f as usual:

>>> class C(object):
...     def f(num):
...         return '<' + str(num) + '>'
...     v = f(9)
...     w = f(42)
...
>>> C.v
'<9>'
>>> C.w
'<42>'
>>> C.f(4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unbound method f() must be called with C instance as first argument (got int instance instead)

Oops! I've inadvertently exposed f to the outside world, but it doesn't take a self argument (and can't for obvious reasons). One possibility would be to del the function after I use it:

>>> class C(object):
...     def f(num):
...         return '<' + str(num) + '>'
...     v = f(9)
...     del f
...
>>> C.v
'<9>'
>>> C.f
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object 'C' has no attribute 'f'

But what if I want to use f again later, after the declaration? It won't do to delete the function. I could make it "private" (i.e., prefix its name with __) and give it the @staticmethod treatment, but invoking staticmethod objects through abnormal channels gets very funky:

>>> class C(object):
...     @staticmethod
...     def __f(num):
...         return '<' + str(num) + '>'
...     v = __f.__get__(1)(9)  # argument to __get__ is ignored...
...
>>> C.v
'<9>'

I have to use the above craziness because staticmethod objects, which are descriptors, are not themselves callable. I need to recover the function wrapped by the staticmethod object before I can call it. There has got to be a better way to do this. How can I cleanly declare a function in a class, use it during its declaration, and also use it later from within the class? Should I even be doing this?

Answers

Quite simply, the solution is that f does not need to be a member of the class.
I am assuming that your thought-process has gone through a Javaish language filter causing the mental block. It goes a little something like this:

def f(num):
    return '<' + str(num) + '>'

class C(object):
    v = f(9)
    w = f(42)

Then when you want to use f again, just use it:

>>> f(4)
'<4>'

I think the moral of the tale is "In Python, you don't have to force everything into a class".
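A self-contained sketch combining both patterns from the thread - the module-level helper, plus __func__ as a less cryptic way to unwrap a staticmethod than the __get__(1) trick in the question (class names here are illustrative):

```python
# Two ways to reuse a helper during class body execution.
def f(num):
    return '<' + str(num) + '>'

class C(object):
    # the helper lives at module level, so it never becomes a broken method
    v = f(9)
    w = f(42)

class D(object):
    @staticmethod
    def _f(num):
        return '<' + str(num) + '>'
    # __func__ recovers the plain function wrapped by the staticmethod,
    # avoiding the __get__(1) descriptor trick shown in the question
    v = _f.__func__(9)

print(C.v, C.w, D.v, f(4))  # <9> <42> <9> <4>
```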
Subject: Re: [Boost-build] [boost-build] QT4 build from source question
From: Juergen Hunold (juergen.hunold_at_[hidden])
Date: 2009-11-19 15:14:25

Hi Brian!

On Wednesday 18 November 2009, you wrote:

> Is there a set of Jamfiles that need to be put in the qt tree so that my app can reference /qt//QtCore etc? This all works just great if I use the system installed QT4 and qmake to build things, but I have to cross-compile now and would like to have it all self-contained for multiple targets.

First, did you take a look at qt4.jam? A simple

import qt ;
using qt : <where your qt is> ;

in user-config.jam (or Jamroot) should get you started using a pre-built qt. You can then use the targets like /qt//QtSvg as usual. Please scan qt4.jam for more options, including target requirements, and feel free to ask any remaining questions here or on IRC.

Second, I've tried to get Qt built by Boost.Build, but have given up due to lack of time and the missing 3rdparty library configure support. Qt's configure system allows a *lot* of different configs, and emulating that (several image libs, X libraries, database plugins (or not), you name it) is a huge task. If you're interested, I could see if I still can produce a tarball of it ;-) It did build for my frozen Linux and Windows company setup at least. But using a precompiled Qt is much better, as adding _thousands_ of files to a Boost.Build tree will slow things down noticeably...
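A minimal sketch of the setup suggested above; the Qt path, executable name, and source file are placeholders, and exact usage may vary by Boost.Build version:

```
# user-config.jam (or Jamroot) - path is a placeholder
import qt ;
using qt : /usr/local/qt-4.5 ;

# Jamfile - link an application against the pre-built Qt targets
exe viewer
    : viewer.cpp
      /qt//QtCore
      /qt//QtGui
    ;
```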
This tool further creates diversity in the training data by rotating some of your training images. The fraction of the images rotated depends on the probability of the rotation: a higher probability rotates more of the images. All images aren't rotated to the same angle; for each image that is to be rotated, the angle is picked at random from a range specified by the user. The unit for the angle is degrees. A negative angle specifies a clockwise direction, whereas a positive angle is for a counter-clockwise direction.

import albumentations as albu

# rotate with probability p = 0.5, picking the angle uniformly from the
# range [-90, 90] degrees; `figure` is an image as a NumPy array
transform = albu.Rotate(limit=(-90, 90), p=0.5)
augmented_image = transform(image=figure)['image']
# we have our required rotated image in augmented_image
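The sampling logic described above can be sketched in plain Python; the function name and defaults are illustrative, not the tool's actual API:

```python
import random

def sample_rotation_angle(p=0.5, limit=(-90, 90), rng=random):
    """Return a rotation angle in degrees, or 0.0 when the image is left as-is.

    With probability `p` the angle is drawn uniformly from `limit`;
    negative angles mean clockwise, positive mean counter-clockwise.
    """
    if rng.random() < p:
        return rng.uniform(limit[0], limit[1])
    return 0.0

angles = [sample_rotation_angle(p=1.0) for _ in range(5)]
print(all(-90 <= a <= 90 for a in angles))  # True
```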
BacDive-API - Programmatic Access to the BacDive Database

Project description

BacDive API

Using the BacDive API requires registration. Registration is free but the usage of BacDive data is only permitted when in compliance with the BacDive terms of use. See About BacDive for details.

The Python package can be initialized using your login credentials:

import bacdive

client = bacdive.BacdiveClient('name@mail.example', 'password')

# the search method fetches all BacDive-IDs matching your query
# and returns the number of IDs found
count = client.search(taxonomy='Bacillus subtilis subtilis')
print(count, 'strains found.')

# the retrieve method lets you iterate over all strains
# and returns the full entry as dict.
# Entries can be further filtered using a list of keys (e.g. ['keywords'])
for strain in client.retrieve():
    print(strain)

Example queries:

# Search by BacDive-IDs (either semicolon separated or as list):
query = {"id": 24493}
query = {"id": "24493;12;132485"}
query = {"id": [24493, 12, 132485]}

# Search by culture collection number
query = {"culturecolno": "DSM 26640"}

# Search by taxonomy (either as full name or as list):
# With genus name, species epithet (optional), and subspecies (optional).
query = {"taxonomy": "Bacillus subtilis subsp. subtilis"}
query = {"taxonomy": ("Escherichia", "coli")}

# Search by sequence accession numbers:
query = {"16s": "AF000162"}          # 16S sequence
query = {"genome": "GCA_006094295"}  # genome sequence

# run query
client.search(**query)

Filtering

Results from the retrieve method of both clients can be further
The result contains a list of matched keyword dicts: filter=['keywords', 'culture collection no.'] result = client.retrieve(filter) print({k:v for x in result for k,v in x.items()}) The printed result will look like this: {'1161': [{'keywords': ['human pathogen', 'Bacteria']}, {'culture collection no.': 'DSM 4393, pC194, SB202'}], '1162': [{'keywords': ['human pathogen', 'Bacteria']}, {'culture collection no.': 'DSM 4514, ATCC 37015, BD170, NCIB 11624, ' 'pUB110'}], '1163': [{'keywords': ['human pathogen', 'Bacteria']}, {'culture collection no.': 'DSM 4554, ATCC 37128, BGSC 1E18, pE194'}], '1164': [{'keywords': 'Bacteria'}, {'culture collection no.': 'DSM 4750, 1E7, BGSC 1E7, pE194-cop6'}], ... Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages. Source Distribution bacdive-0.2.tar.gz (5.7 kB view hashes) Built Distribution bacdive-0.2-py3-none-any.whl (6.1 kB view hashes)
Taming Dynamic Data in TypeScript

I really like static types. A lot. When I would otherwise be using JavaScript, I've now fully embraced TypeScript. Fully utilizing static types, with all the safety they provide, can be a bit tricky when dealing with dynamic data - like JSON from an API call. This problem is not unique to TypeScript, but TypeScript does have some fairly unique considerations.

Let's consider some JSON that we want to parse:

const rawJson = `{ "name": "Alice", "age": 31 }`
const parsed = JSON.parse(rawJson) // 'parsed' is type 'any'

This is how we would do it in JavaScript, too. Indeed, if you hover over parsed in your IDE, you'll see that its type is any. This means TypeScript will let us do anything with parsed without giving us any static type checking. For example, we might make a typo:

console.log(parsed.nam) // prints 'undefined'

TypeScript doesn't catch the error; it will happily print out undefined.

Avoiding any

To get the most from TypeScript, you really should avoid using any whenever possible. It's hard to trust your static types when you have places in your code that bypass the type system via any. In cases where you really don't know the type (like after parsing some raw JSON), use unknown, a type-safe counterpart to any. For example, we could define a safer parse function like this:

const parseJson = (str: string): unknown => JSON.parse(str)

The body is just a pass-through, but the unknown return type annotation makes the type much narrower. Now if we try to access any properties off of our parsed JSON, we'll get a type error:

const parsed = parseJson(rawJson)
console.log(parsed.nam)  // type error: Object is of type 'unknown'.
console.log(parsed.name) // also a type error - we'll come back to this one :)

Super safe, but not very useful yet. That's OK - it will force us to be explicit about the type, as we'll see below.
Type Assertions

First, let's define a type that matches the JSON:

```typescript
type User = {
  name: string
  age: number
}
```

With this, we can now use a type assertion of User on the parsed value to get our static typing:

```typescript
const parsed = parseJson(rawJson) as User
console.log(parsed.nam) //type error: Property 'nam' does not exist on type
console.log(parsed.name) //works
```

This effectively tells TypeScript, "I know something you don't; trust me here."

Taking it Further

Type assertions are simple and effective, but there is one problem: TypeScript doesn't do any validation at runtime to make sure your assertion is correct. If the data is in an unexpected shape, or you declared the type incorrectly, you will likely get errors, but they may occur far from where you initially asserted the type. This can make it hard to track down the exact problem. It may be feasible to do your own validation for simple objects, but this gets tedious fast, especially as your objects get bigger or have any sort of nesting.

How do Other Languages Handle This?

To fully understand the quandary that we're in, it's helpful to look at how other static languages turn dynamic data into typed objects. Many languages — such as Java, C#, and Go — have type information at runtime that can be accessed via reflection. These languages can use the type information from classes to deserialize JSON into well-typed objects. Languages like Rust have macros that can automatically generate decoders for a given struct at build-time. Languages that have neither reflection, nor macros, typically have libraries to manually construct these decoders. Elm is a great example. TypeScript falls into this latter camp of a language without reflection or macros, so we have to go the manual route.

Manual Decoding

The two major libraries I've seen for writing these decoders in TypeScript are io-ts and runtypes. If you come from a functional programming background, you'll probably like io-ts. Otherwise, you may find runtypes more approachable.
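Before looking at those libraries, it helps to see what the "do your own validation" route mentioned above looks like in plain TypeScript - a hand-rolled type-guard predicate for the User shape. This is illustrative only; the runtypes guard shown next effectively generates this kind of check for you:

```typescript
type User = { name: string; age: number }

// A hand-written guard: the `value is User` return type tells the
// compiler that a `true` result narrows `value` to `User`.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false
  const rec = value as Record<string, unknown>
  return typeof rec.name === "string" && typeof rec.age === "number"
}
```

Each new field, nested object, or array multiplies the number of checks you have to write and keep in sync with the type, which is why decoder libraries exist.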
Let’s take a brief look at how to construct decoders in runtypes: import { Record, String, Number } from 'runtypes' const UserRuntype = Record({ name: String, age: Number }) That’s it. It’s nearly as easy as declaring a TypeScript type, and it will provide us with methods to validate our data: import { Record, String, Number } from 'runtypes' const UserRuntype = Record({ name: String, age: Number }) type User = { name: string age: number } const rawJson = `{ "name": "Alice", "age": 31 }` const user = parseJson(rawJson) const printUser = (user: User) => { console.log(`User ${user.name} is ${user.age} years old`) } if (UserRuntype.guard(user)) printUser(user) The guard method used at the end is a type guard for safely checking whether an object conforms to our type. Within the if statement, the type is refined to be of type { name: string, age: number } — essentially the User type that we defined above. Seeing Double You probably noticed that we basically defined the same type twice: const UserRuntype = Record({ name: String, age: Number }) type User = { name: string age: number } Having to define both a TypeScript type and a corresponding runtype is not ideal. Luckily, runtypes is able to derive a TypeScript type from our runtype like this: import { Record, String, Number, Static } from 'runtypes' const UserRuntype = Record({ name: String, age: Number }) type User = Static<typeof UserRuntype> //equivalent to: type User = { name: string, age: number } There you have it. You do need to learn the library’s DSL, but at least you don’t have to define the type twice! 
Complete Example Let’s put it all together: import { Record, String, Number, Static } from 'runtypes' const parseJson = (str: string): unknown => JSON.parse(str) //expell 'any' const UserRuntype = Record({ //create a runtype name: String, age: Number }) type User = Static<typeof UserRuntype> //derive a TypeScript type from our runtype const printUser = (user: User) => { console.log(`User ${user.name} is ${user.age} years old`) } const rawJson = `{ "name": "Alice", "age": 31 }` const user = parseJson(rawJson) //the 'user' type is 'unknown' if (UserRuntype.guard(user)) printUser(user) //'user' is refined by our guard to type 'User' This Seems Hard! It may seem easier to just use a dynamic language, like JavaScript, but that just defers possible type errors to runtime. You still need to be aware of the structure of your data. Time that you spend upfront being explicit about that structure with the type system will pay dividends, both in initial development and beyond. The type assertion approach to typing your dynamic data is low-cost and certainly better than falling back to dynamic typing.2 Then layer on some runtime verification for even more confidence. Happy typing! - Also make sure you enable noImplicitAnyin your tsconfig.json. (Or, better yet, just use strict, which will get you this and a bunch of other safer defaults.) Unfortunately, this won’t catch all cases of any, such as when data is explicitly annotated with any(like JSON.parse). [return] - If you decide to just go the type assertion route, make sure you run the code to verify you asserted the type correctly. Better yet, write a test! [return]
https://www.pluralsight.com/tech-blog/taming-dynamic-data-in-typescript/
CC-MAIN-2020-16
refinedweb
1,186
61.46
For one of our client's requirements, I have to develop an application which should be able to process huge CSV files. File sizes could be in the range of 10 MB - 2 GB. Depending on the size, the module decides whether to read the file using a multiprocessing pool or a plain CSV reader:

def set_file_processing_mode(self, fpath):
    """ """
    fsize = self.get_file_size(fpath)
    if fsize > FILE_SIZE_200MB:
        self.read_in_async_mode = True
    else:
        self.read_in_async_mode = False

def read_line_by_line(self, filepath):
    """Reads CSV line by line"""
    with open(filepath, 'rb') as csvin:
        csvin = csv.reader(csvin, delimiter=',')
        for row in iter(csvin):
            yield row

def read_huge_file(self, filepath):
    """Read file in chunks"""
    pool = mp.Pool(1)
    for chunk_number in range(self.chunks): #self.chunks = 20
        proc = pool.apply_async(read_chunk_by_chunk,
                                args=[filepath, self.chunks, chunk_number])
        reader = proc.get()
        yield reader
    pool.close()
    pool.join()

def iterate_chunks(self, filepath):
    """Read huge file rows"""
    for chunklist in self.read_huge_file(filepath):
        for row in chunklist:
            yield row

@timeit #-- custom decorator
def read_csv_rows(self, filepath):
    """Read CSV rows and pass it to processing"""
    if self.read_in_async_mode:
        print("Reading in async mode")
        for row in self.iterate_chunks(filepath):
            self.process(row)
    else:
        print("Reading in sync mode")
        for row in self.read_line_by_line(filepath):
            self.process(row)

def process(self, formatted_row):
    """Just prints the line"""
    self.log(formatted_row)

def read_chunk_by_chunk(filename, number_of_blocks, block):
    '''
    A generator that splits a file into blocks and
    iterates over the lines of one of the blocks.
    '''
    results = []
    assert 0 <= block and block < number_of_blocks
    assert 0 < number_of_blocks
    with open(filename) as fp:
        fp.seek(0, 2)
        file_size = fp.tell()
        ini = file_size * block / number_of_blocks
        end = file_size * (1 + block) / number_of_blocks
        if ini <= 0:
            fp.seek(0)
        else:
            fp.seek(ini - 1)
            fp.readline()
        while fp.tell() < end:
            results.append(fp.readline())
    return results

if __name__ == '__main__':
    classobj.read_csv_rows(sys.argv[1])

$ python csv_utils.py "input.csv"
Reading in async mode
FINISHED IN 3.75 sec
$ python csv_utils.py "input.csv"
Reading in sync mode
FINISHED IN 0.96 sec

Is this correct behaviour?

Yes - it may not be what you expect, but it is consistent with the way you implemented it and how multiprocessing works.

Why is async mode taking longer?

The way your example works is perhaps best illustrated by a parable - bear with me please:

Let's say you ask your friend to engage in an experiment. You want him to go through a book and mark each page with a pen, as fast as he can. There are two rounds with a distinct setup, and you are going to time each round and then compare which one was faster:

1. Open the book on the first page, mark it, then flip the page and mark the following pages as they come up. Pure sequential processing.

2. Process the book in chunks. For this he should run through the book's pages chunk by chunk. That is, he should first make a list of page numbers as starting points, say 1, 10, 20, 30, 40, etc. Then for each chunk, he should close the book, open it on the page for the starting point, process all pages before the next starting point comes up, close the book, then start all over again for the next chunk.

Which of these approaches will be faster?

Am I doing something wrong?

You decide both approaches take too long. What you really want to do is ask multiple people (processes) to do the marking in parallel.
Now with a book (as with a file) that's difficult because, well, only one person (process) can access the book (file) at any one point. Still it can be done if the order of processing doesn't matter and it is the marking itself - not the accessing - that should run in parallel. So the new approach is like this:

1. Tear the book apart into, say, 10 chunks of pages. Only one person can do this, and it takes some time.
2. Hand each chunk to a different person, and have all of them mark their pages at the same time.

This approach will most certainly speed up the whole process. Perhaps surprisingly though the speed up will be less than a factor of 10 because step 1 takes some time, and only one person can do it. That's called Amdahl's law [wikipedia]: the (theoretical) speed-up of a process whose parallelizable fraction is p, run on n workers, is S(n) = 1 / ((1 - p) + p/n). Intuitively, the speed-up can only come from the part of the task that is processed in parallel; all the sequential parts are not affected and take the same amount of time, whether p is processed in parallel or not, so the sequential fraction (1 - p) puts a hard ceiling on the overall gain. That said, in our example, obviously the speed-up can only come from step 2 (marking pages in parallel by multiple people), as step 1 (tearing up the book) is clearly sequential.

"develop an application which should be able to process huge CSV files"

Here's how to approach this:

- Read the file in chunks, sequentially, in the main process - this is the cheap, unavoidable sequential part.
- Hand each chunk to a pool of worker processes, so the actual row processing runs in parallel.

Something like this:

def process(rows):
    # do all the processing
    ...
    return result

if __name__ == '__main__':
    pool = mp.Pool(N)  # N > 1
    chunks = get_chunks(...)
    async_results = [pool.apply_async(process, (rows,)) for rows in chunks]
    pool.close()
    pool.join()
    results = [r.get() for r in async_results]

I'm not defining get_chunks here because there are several documented approaches to doing this e.g. here or here.

Conclusion

Depending on the kind of processing required for each file, it may well be that the sequential approach to processing any one file is the fastest possible approach, simply because the processing parts don't gain much from being done in parallel. You may still end up processing it chunk by chunk due to e.g. memory constraints.
If that is the case, you probably don't need multiprocessing. If you have multiple files that can be processed in parallel, multiprocessing is a very good approach. It works the same way as shown above, where the chunks are not rows but filenames.
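The answer above deliberately leaves get_chunks undefined. For completeness, here is one possible sketch of it (an assumption on my part, not the answer's actual implementation): it groups an iterable of lines - an open file object works - into lists of whole lines totalling roughly chunk_size bytes, so no line is ever split across chunks:

```python
def get_chunks(lines, chunk_size=1024 * 1024):
    """Yield lists of complete lines totalling roughly chunk_size bytes each."""
    chunk, size = [], 0
    for line in lines:
        chunk.append(line)
        size += len(line)
        if size >= chunk_size:
            yield chunk
            chunk, size = [], 0
    if chunk:  # trailing partial chunk
        yield chunk
```

With a file, `get_chunks(open(filepath))` then feeds each `rows` list to `pool.apply_async(process, (rows,))` exactly as in the sketch above; chunking by line count instead of byte size would work just as well.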
https://codedump.io/share/7FKdDVh35isT/1/reading-csv-with-multiprocessing-pool-is-taking-longer-than-csv-reader
CC-MAIN-2017-30
refinedweb
964
74.39
The Samba-Bugzilla – Bug 10790 fchown with posix extensions to Samba fails with access denied Last modified: 2014-10-13 10:52:38 UTC Created attachment 10238 [details] wireshark trace of running the script above to Samba 4.2 master The following works locally but fails remotely over cifs to Samba 4.2 master (similar to what causes xfstest generic/088 to fail) #!/usr/bin/env python # # Test chmod import os, grp, pwd, sys filename = raw_input("Enter a file name to create: ") uid = pwd.getpwnam("nobody").pw_uid gid = grp.getgrnam("nogroup").gr_gid # create a new file with us as owner fd = os.open(filename, os.O_RDWR | os.O_CREAT) os.fchown(fd, uid, gid) os.close(fd) xfstest generic/088 fails as a result of the equivalent problem. See below QA output created by 088 fchown: Permission denied User error - my userid was not correctly mapped to root This test case must be run as root
https://bugzilla.samba.org/show_bug.cgi?id=10790
CC-MAIN-2016-44
refinedweb
155
56.96
LiveView has given us the ability to implement flexible and responsive UX almost entirely with server-side code. But what happens when our need for a responsive UI surpasses what LiveView seemingly offers? When the demands of a particular feature have us reaching for JavaScript? It is possible to incorporate custom JS into the LiveView life cycle with the help of a custom LiveView channel and a Registry. Keep reading to see how we did it! The Problem In a recent post, we built a straightforward chatting application backed by LiveView, PubSub and Presence. We implemented nearly all of the necessary features (live updates as users type in new messages, a list that keeps track of users in the chat room and who is typing!) with only 90 lines of LiveView code. But then we ran into a blocker. When new chat messages were appended to the chat window, they appeared just out of frame. The chat window needed to scroll down to accommodate and display the new message. This is easy enough to do with just one or two lines of JavaScript: grab the height of the chat window, and set the scrollTop accordingly. If you’re familiar with Phoenix Channels, you might reach for something like this: channel.on("new_message", (msg) => { const targetNode = document.getElementsByClassName("messages")[0] targetNode.scrollTop = targetNode.scrollHeight }) But wait! The LiveView client-side library only responds to one event from the LiveView process running on the server––the diff event. This event isn’t granular enough to tell us what changed on the page. It merely forces the appropriate portions of the page to re-render. So, how can we get our LiveView to emit an event that our front-end can respond to in order to fire our scrollTop-adjusting JS? The Solution We need to do a few things in order to get this working: - Extend the LiveView socket with a custom channel - Teach our LiveView processes to send messages to that channel, so that the channel can push them to the client. 
It’s worth noting here that the responsibility of a custom LiveView channel should be narrowly scoped. LiveView can and should handle almost all of the updates to the LiveView template. That’s the beauty of LiveView! We don’t need to write a set of custom client-side functions for updating the page based on specific events like we’ve become used to doing when working with Phoenix Channels. However, when we need to trigger a client-side interaction, like our scrollTop adjustment, that the LiveView client isn’t capable of handling, we can reach for a custom channel. Now that we have a basic understanding of the problem we’re trying to solve, and the tools we’ll use to solve it, let’s get started! The Process Before we start writing code, let’s walk through the desired code flow of this feature, one step at a time. - User visits /chats/:id - Controller mounts the live view and renders the static template - Client connects to the Live View socket and joins a custom channel on this same socket Later… - User submits new chat message, sending an event to the live view - The live view responds to the message by updating state, re-rendering the page and broadcasting the event to the other live view processes subscribing to that chat room topic - The other live views receive the broadcast, update their own state and re-render the template - The live views send the message to their “associated” channel (i.e. the channel joined on the live view’s socket) - The channel receives the message and pushes it out to the front-end - Front-end receives the message and responds by triggering our scrollTopadjustment JavaScript There is a lot of code to get through, so we’ve organized our approach into the following parts: I. Establishing the Socket and Channel II. Handling Events in the LiveView III. Communicating from the LiveView to the Channel IV. 
Sending Messages From the Channel to the Front-End Getting Started If you’d like to follow along with this tutorial, we recommend reading and completing the tutorial in our previous post here first. This will get your code into the correct starting state. You can also clone down the repo here to get the starting code. Otherwise, you can checkout the completed code here. Part I: Establishing the Socket and Channel In order to guarantee that the live view process can send a message to the right channel at the right time, we need to have the live view share a socket with that channel. Let’s start by focusing on this portion of the code flow: - User visits /chats/:id - Controller mounts the live view and renders the static template - Client connects to the Live View socket and joins the channel on this same socket Here’s a closer look at how this procedure works: Let’s dive in and write some code! Extending the LiveView Socket In order to define a custom channel that will share a socket with our LiveView process, we need to extend the LiveView socket that the LiveView library provides us. LiveView doesn’t (yet) provide a way for us to extend this module programmatically, so we’ll define our own socket with everything it needs to support our LiveView and our custom channel: # lib/phat_web/channels/live_socket.ex defmodule PhatWeb.LiveSocket do @moduledoc """ The LiveView socket for Phoenix Endpoints. """ use Phoenix.Socket defstruct id: nil, endpoint: nil, parent_pid: nil, assigns: %{}, changed: %{}, fingerprints: {nil, %{}}, private: %{}, stopped: nil, connected?: false channel "lv:*", Phoenix.LiveView.Channel channel "event_bus:*", PhatWeb.ChatChannel @doc """ Connects the Phoenix.Socket for a LiveView client. """ @impl Phoenix.Socket def connect(_params, socket, _connect_info) do {:ok, socket} end @doc """ Identifies the Phoenix.Socket for a LiveView client. 
""" @impl Phoenix.Socket def id(_socket), do: nil end The only line we need to add in addition to what we’ve copied from the LiveView source code is the channel definition in which we map the topic, "event_bus:*" to our soon-to-be-defined custom channel. channel "event_bus:*", PhatWeb.ChatChannel Next we’ll tell our app’s Endpoint module to map the socket mounted at the "/live" endpoint to the socket we just defined: # lib/phat_web/endpoint.ex defmodule PhatWeb.Endpoint do use Phoenix.Endpoint, otp_app: :phat # socket "/live", Phoenix.LiveView.Socket socket "/live", PhatWeb.LiveSocket ... end Defining the Custom Channel Now we’re ready to define our ChatChannel: # lib/phat_web/channels/chat_channel.ex defmodule PhatWeb.ChatChannel do use Phoenix.Channel def join("event_bus:" <> _chat_id, _message, socket) do {:ok, socket} end end Connecting to the Socket and Joining the Channel With our socket and our channel defined, we can tell the front-end client to join the channel after connecting to the LiveView socket: // assets/js/app.js import LiveSocket from "phoenix_live_view" let chatId = window.location.pathname.split("/")[2] // just a hack to get the chatId from the route, there are definitely better ways to do this! const liveSocket = new LiveSocket("/live") liveSocket.connect() let channel = liveSocket.channel("event_bus:" + chatId, {}) Now, when the page loads, we will: - Connect to and start the LiveView process running over the socket - Join a channel over that same socket Later, we can write some code on the front-end to respond to a specific event by changing the chat box’s scroll height: channel.on("new_message", (msg) => { targetNode = document.getElementsByClassName("messages")[0] targetNode.scrollTop = targetNode.scrollHeight }) So, how can we get our channel to send the "new_message" event to the front-end? Let’s find out! 
Part II: Handling Events in the LiveView In this section, we’ll dive into the following portion of the process: - User submits a new chat message, sending an event to the live view; The live view updates its state and re-renders the template - The live view broadcasts the event to the other live view processes subscribing to that chat room topic which then update their own state and re-render their templates - The live views send a message to themselves, instructing them to in turn send a message to their “associated” channel (i.e. the channel joined on the live view’s socket). This ensures that the live view will finish re-rendering before telling the channel to push a message to the front-end. Here’s a closer look at this flow: Receiving Events in the LiveView When a user submits a new message via the chat form, it will send the "new_message" event to the LiveView process, over the socket. Our live view process already responds to this message by: - Updating its own state and re-rendering the template to display the new message. - Broadcasting the message to the other running live view processes subscribed to the same topic so that everyone gets the new message and subsequent re-render. To get a refresher on how this works, check out our earlier post here. 
In this post, we’ll just take a brief look at that code: # lib/phat_web/live/chat_live_view.ex # this function fires when we receive the "new_message" event from the front-end def handle_event("new_message", %{"message" => message_params}, socket) do chat = Chats.create_message(message_params) PhatWeb.Endpoint.broadcast(topic(chat.id), "new_message", %{chat: chat}) {:noreply, assign(socket, chat: chat, message: Chats.change_message())} end # this function fires when all of the subscribing live view processes receive the broadcast from above def handle_info(%{event: "new_message", payload: state}, socket) do {:noreply, assign(socket, state)} end Its important to note that the live view is broadcasting the message to all of the LiveView processes subscribed to the chat room’s topic, including itself. However, LiveView is smart enough not to re-render a page for which there are no diffs, so this isn’t an expensive operation. Sending Messages from the LiveView to the Channel We need to ensure that the page has a chance to re-render before we have the channel send the message to the front-end. Otherwise the JavaScript function to adjust scrollTop might run before the new message is present on the page, thereby failing to actually make an adjustment to the chat window. After this handle_info/2 function returns is the point at which we can be sure all LiveView templates are re-rendered: def handle_info(%{event: "new_message", payload: state}, socket) do {:noreply, assign(socket, state)} end So, how can we make sure each LiveView process handling this message will only send a message to the channel after this function finishes working? We can use send/2 to have the live view send a message to itself! Since a process can only do one thing at a time, the live view process will finish the the current work in the handle_info/2 processing the "new_message" event before acting on the message it receives from itself. 
def handle_info(%{event: "new_message", payload: state}, socket) do send(self(), {:send_to_event_bus, "new_message"}) {:noreply, assign(socket, state)} end def handle_info({:send_to_event_bus, msg}, socket) do # send a message to the channel here! {:noreply, socket} end Now we’ve captured the moment in time at which to send a message from the LiveView process to the Channel process. But wait! How can we send a message to a process whose PID we don’t know? The LiveView process, in its current form, doesn’t know about the channel process with which it shares a socket. In order to fix this, we’ll need to leverage a Registry. Part III: Communicating from the LiveView to the Channel In this section, we’ll register our channel process so that the live view can look up and send a message to the appropriate channel PID. Then, we’ll teach the live view how to perform this lookup and send a message to the right channel PID. Here’s the code flow we’re aiming for: - The LiveView is mounted from the controller and stores a unique identifier of a “session UUID” in its own state; it renders the template with a hidden element that contains the session UUID encoded in a Phoenix.Token - The channel’s socket is connected with this token; the socket stores it in state. - The channel is joined; it takes the session UUID from its socket’s state and registers its PID under a key of that UUID. Later… - When the user submits a new chat message, the LiveView processes that received the message broadcast will look up the channel PID under the session UUID in the registry - Each live view will then send the message to the PID they looked up Defining the Channel Registry We’ll use a process registry, implemented with Elixir’s native Registry module, to keep track of the channel PID so that the LiveView can look up its associated channel in order to send it a message. 
Its important to note that Elixir’s Registry module isn’t distribution friendly––if you look up a given PID created on one server on a totally different server, there’s no guarantee that it will refer to the same process. But! Since our channel shares a socket with the LiveView process, it is guaranteed that the live view and the channel are running on the same server. We’ll tell Elixir’s Registry supervisor to start supervising a named registry called SessionRegistry when our app starts up: # application.ex def start(_type, _args) do children = [ Phat.Repo, PhatWeb.Endpoint, PhatWeb.Presence, {Registry, [keys: :unique, name: Registry.SessionRegistry]} ] opts = [strategy: :one_for_one, name: Phat.Supervisor] Supervisor.start_link(children, opts) end We want to register our channel PID when the channel is joined. But we need to store the PID under a unique key that the live view can use to look it up by later. So, we need to create such an identifier and find a way to make it available to both the live view and the channel. Sharing the Session UUID When the LiveView first mounts via the controller, we’ll create a unique identifier––a session UUID––to store in the live view’s state: # lib/phat_web/controllers/chat_controller.ex def show(conn, %{"id" => chat_id}) do chat = Chats.get_chat(chat_id) session_uuid = Ecto.UUID.generate() LiveView.Controller.live_render( conn, ChatLiveView, session: %{ chat: chat, current_user: conn.assigns.current_user, session_uuid: session_uuid } ) end # lib/phat_web/live/chat_live_view.ex def mount(%{chat: chat, current_user: current_user, session_uuid: session_uuid}, socket) do ... 
{:ok, assign(socket, chat: chat, message: Chats.change_message(), current_user: current_user, users: Presence.list_presences(topic(chat.id)), username_colors: username_colors(chat), session_uuid: session_uuid, token: Phoenix.Token.sign(PhatWeb.Endpoint, "user salt", session_uuid) )} end In the mount/2 function of our live view, we store the session UUID in the socket’s state so that we can use it to look up the channel PID later. We also encode the session UUID into a signed Phoenix.Token so that we can put it on the page and use it when we join the channel from the client-side. # lib/phat_web/templates/chat/show.html.leex <%= tag :meta, name: "channel_token", content: @token %> Let’s take a look at how we will give our channel access to this token. When we send the socket connection request from the browser, we hit the connect/3 function of our extended Live View socket, PhatWeb.LiveSocket. At this time, we don’t have access to the Live View process’s representation of the socket, but we do have access to the channel’s representation of the socket. We need to give the channel awareness of the session UUID. So, we’ll include the signed token from the page in the socket connection request and use connect/3 to store the session UUID in the channel’s socket state. 
We’ll include the token in our socket connection request on the front-end: // assets/js/app.js const channelToken = document.getElementsByTagName('meta')[3].content const liveSocket = new LiveSocket("/live", {params: {channel_token: channelToken}}) liveSocket.connect() And we’ll have the PhatWeb.LiveSocket.connect/3 function verify the token, extract the session UUID and store it in the channel socket’s state: # lib/phat_web/channels/live_socket.ex def connect(params, socket, _connect_info) do case Phoenix.Token.verify(socket, "user salt", params["channel_token"], max_age: 86400) do {:ok, session_uuid} -> socket = assign(socket, :session_uuid, session_uuid) {:ok, socket} {:error, _} -> :error end end Registering The Channel Process Now, when we join the channel, we can look up the :session_uuid in the channel socket’s state and use it to register the channel’s PID in the SessionRegistry under a key of this UUID: # lib/phat_web/channels/chat_channel.ex defmodule PhatWeb.ChatChannel do use Phoenix.Channel def join("event_bus:" <> _chat_id, _message, socket) do Registry.register(Registry.SessionRegistry, socket.assigns.session_uuid, self()) {:ok, socket} end end Now our registry is up and running, and we’re registering a given channel PID under a unique identifier (session UUID) that live view with which the channel shares a socket connection is aware of. We’re ready to have the live view send a message to its channel! Sending Messages to the Channel Let’s recap the “new chat message” process so far: - A user submits the “new message” form and sends a "new_message"event to the live view - The live view responds to this event by updating its own socket’s state, re-rendering and broadcasting the "new_message"event to all the live view processes subscribing to the topic for this chat room, i.e. the processes that represent the other users in the chat room. 
- The live view processes receive this message broadcast and respond to it by updating their own state and re-rendering. They also senda message to themselves that they will process once they finish re-rendering. - The live view processes responds to the message they sent themselves, telling themselves to send a message to the channel with which they share a socket. Now our live views have what they need to look up their associated channel. They are storing the same session UUID in state that the channel used to register its PID in the SessionRegistry. So, our live views can look up the channel PID and send a message to that PID. # lib/phat_web/live/chat_live_view.ex # handle the broadcast of the "new_message" event from the live view that received it from the user def handle_info(%{event: "new_message", payload: state}, socket) do send(self(), {:send_to_event_bus, "new_message"}) {:noreply, assign(socket, state)} end # handle the message sent above, after re-rendering the template def handle_info({:send_to_event_bus, msg}, socket = %{assigns: %{session_uuid: session_uuid}}) do [{_pid, channel_pid}] = Registry.lookup(Registry.SessionRegistry, session_uuid) send(channel_pid, msg) {:noreply, socket} end Each live view process shares a session UUID with the channel that was joined on its socket. In this sense, each live view has an “associated” channel. By registering the channel PID under this session UUID, the given live view can look up its associated channel’s PID and send a message to that channel and that channel only. Next up, we need to teach our channel to respond to this message. 
Part IV: Sending Messages from the Channel to the Front End In this section, we’ll focus on the following portion of our process: - The channel receives the message from the live view and pushes it out to the front-end - The front-end receives the message and responds by triggering our scrollTopadjustment JavaScript Here’s a closer look: Receiving Messages in the Channel We need to define a handle_info/ in the ChatChannel that knows how to respond to "new_message" messages by pushing them down the socket to the front-end. # channel def handle_info("new_message", socket) do push(socket, msg, %{}) {:noreply, socket} end Responding to Messages on the Front-End On the front-end, our channel JS is ready and waiting to fire: // assets/js/app.js channel.on("new_message", function() { const targetNode = document.getElementsByClassName("messages")[0] targetNode.scrollTop = targetNode.scrollHeight }) Now, right after the page re-renders, the channel will receive the "new_message" message and push it to the client which is listening for just this event. The client reacts by firing our scrollTop adjustment JS and the user experiences a responsive UI––a chat window that automatically and seamlessly scrolls down to accommodate new messages in real-time. Conclusion We’ve seen that a seeming “limit” of LiveView can be surpassed by incorporating available Phoenix real-time tools––in this case Phoenix Channels. The work in this post raises the question: “What should LiveView be capable of?” Is the extension of LiveView with a custom Phoenix Channel a violation of the “purpose” of LiveView? Does such a use-case mean we should eschew LiveView in favor of Channels? I think there are still distinctive advantages to using LiveView to back a feature like our chat app. Almost all of the chat functionality is handled in less than 100 lines of LiveView code. This is as opposed to all of the Channel back and front-end code that you would otherwise write. 
So, I would like to see LiveView become more extensible and configurable, making it easier to incorporate custom channels out-of-the-box.
Let's implement the IList properties first. Two of them, IsReadOnly and IsFixedSize, are already done because the IDE has set them to return false. Since we want the list to grow and shrink with new values, this default setting is fine. But we do need to implement the Contains method, which should return true if any node in the list contains the value that is passed in as a parameter:

public bool Contains(object value)
{
    bool containsNode = false;
    if (head != null)
    {
        Node tempNode = head;
        if (Equals(tempNode.Item, value))  // compare by value, not by reference
        {
            containsNode = true;
        }
        else
        {
            // walk the remaining count - 1 nodes
            for (int i = 0; i < count - 1; ++i)
            {
                tempNode = tempNode.Next;
                if (Equals(tempNode.Item, value))
                {
                    containsNode = true;
                }
            }
        }
    }
    return containsNode;
}

We also have to implement an indexer so that elements can be accessed in the list by their index position:

public object this[int index]
{
    get
    {
        if (index < 0 || index >= count)
        {
            throw new ArgumentOutOfRangeException("Index is out of range.");
        }
        Node temp = head;
        for (int i = 0; i < index; ++i)
        {
            temp = temp.Next;
        }
        return temp.Item;
    }
    set
    {
        if (index < 0 || index >= count)
        {
            throw new ArgumentOutOfRangeException("Index is out of range.");
        }
        Node temp = head;
        for (int i = 0; i < index; ++i)
        {
            temp = temp.Next;
        }
        temp.Item = value;
    }
}

That takes care of the properties and the indexer for IList. Now let's implement its methods. The first method is Add, which accepts any object as its value and will insert a new node at the end of the list to store that value.
It should return the index position of the new node:

public int Add(object value)
{
    if (IsFixedSize)
    {
        throw new NotSupportedException("List is a fixed size.");
    }
    Node newNode = new Node();
    newNode.Item = value;
    if (head != null)
    {
        // walk to the last node and append the new one
        Node tempNode = head;
        for (int i = 0; i < count - 1; ++i)
        {
            tempNode = tempNode.Next;
        }
        tempNode.Next = newNode;
    }
    else
    {
        head = newNode;
    }
    count++;
    return count - 1;  // index of the node we just added
}

The Clear method removes all nodes from the list:

public void Clear()
{
    if (IsReadOnly)
    {
        throw new NotSupportedException("List is read-only.");
    }
    // dropping the head reference releases the whole chain to the garbage collector
    head = null;
    count = 0;
}

The IndexOf method takes a value as a parameter and searches the list for that value. If found, it returns the index number of the first matching node, otherwise it returns -1:

public int IndexOf(object value)
{
    Node temp = head;
    for (int i = 0; i < count; ++i)
    {
        if (Equals(temp.Item, value))  // compare by value, not by reference
        {
            return i;  // first match wins
        }
        temp = temp.Next;
    }
    return -1;
}

The Insert method is similar to the Add method except that it can insert a new node anywhere in the list.
That requires a little more work to implement:

void System.Collections.IList.Insert(int index, object value)
{
    if ((IsReadOnly) || (IsFixedSize))
    {
        throw new NotSupportedException("List is either " +
            "read-only or a fixed size.");
    }
    // index == count is allowed: it appends to the end of the list
    if (index > -1 && index <= count)
    {
        Node newNode = new Node();
        newNode.Item = value;
        if (index == 0)
        {
            // the new node becomes the head
            newNode.Next = head;
            head = newNode;
        }
        else
        {
            // walk to the node just before the insertion point
            Node currNode = head;
            for (int i = 0; i < index - 1; ++i)
            {
                currNode = currNode.Next;
            }
            // splice the new node in between currNode and its successor
            newNode.Next = currNode.Next;
            currNode.Next = newNode;
        }
        count++;
    }
    else
    {
        throw new ArgumentOutOfRangeException("Index is out of range.");
    }
}

The last two methods in the IList interface are Remove and RemoveAt, which are similar in that both remove a node from the list. The difference is that Remove will search the list for a passed-in value and remove the first node that matches that value:

public void Remove(object value)
{
    if (head == null)
    {
        throw new Exception("List is empty.");
    }
    if (Equals(head.Item, value))
    {
        // removing the first node: the head simply moves forward
        head = head.Next;
        count--;
        return;
    }
    Node prevNode = head;
    Node tempNode = head.Next;
    for (int i = 1; i < count; ++i)
    {
        if (Equals(tempNode.Item, value))
        {
            // point the previous node past the one being removed
            prevNode.Next = tempNode.Next;
            count--;
            return;
        }
        prevNode = tempNode;
        tempNode = tempNode.Next;
    }
}

RemoveAt merely gets the node from the list at the location specified by the passed-in index position. In both cases, if the node to be removed is somewhere in the middle of the list, then we must link the previous node to the node after the one to be removed.
If we didn't do that we'd break the chain and lose the integrity of our list:

public void RemoveAt(int index)
{
    if ((IsReadOnly) || (IsFixedSize))
    {
        throw new NotSupportedException("List " +
            "is either read-only or a fixed size.");
    }
    if (index > -1 && index < count)
    {
        if (head != null)
        {
            // get to index position
            Node prevNode = head;
            Node tempNode = head;
            if (index != 0)
            {
                for (int i = 0; i < index; ++i)
                {
                    prevNode = tempNode;
                    tempNode = tempNode.Next;
                }
                prevNode.Next = tempNode.Next;
                tempNode = null;
            }
            else
            {
                head = tempNode.Next;
            }
            count--;
        }
        else
        {
            throw new Exception("List is empty.");
        }
    }
    else
    {
        throw new ArgumentOutOfRangeException("Index is out of range.");
    }
}

Now all of the members of IList have been implemented, so let's turn our attention to the remaining members of the ICollection interface. Because thread safety is beyond the scope of this article, just leave IsSynchronized and SyncRoot alone. The Count property is easy enough to code:

public int Count
{
    get
    {
        return count;
    }
}

That leaves us with the CopyTo method, which takes a one-dimensional array passed in by reference and loads it up with values from the linked list. The method expects an index parameter for the starting position, which must be within the bounds of the collection or else an ArgumentOutOfRangeException is thrown.
Also, if the array passed in is not large enough to hold the range of elements in the collection, then an ArgumentException is thrown:

public void CopyTo(Array array, int index)
{
    Node tempNode = head;
    if (index < 0 || index >= count)
    {
        throw new ArgumentOutOfRangeException("The index is out of range.");
    }
    if (array.Length < this.Count - index)
    {
        throw new ArgumentException("Array " +
            "cannot hold all values.");
    }
    // advance to starting index position
    for (int i = 0; i < index; ++i)
    {
        tempNode = tempNode.Next;
    }
    // iterate through the rest of the list, adding to the array
    int j = 0;
    for (int i = index; i < count; ++i)
    {
        array.SetValue(tempNode.Item, j);
        tempNode = tempNode.Next;
        j++;
    }
}

The last thing we have to do is implement the GetEnumerator method. I won't devote the space to explain it here since the second article in this series covered it in depth. Basically, you just have to return an IEnumerator instance:

public IEnumerator GetEnumerator()
{
    return new ListEnumerator(this);
}

public class ListEnumerator : IEnumerator
{
    private int idx = -1;
    private LinkedList linkedList;

    public ListEnumerator(LinkedList linkedList)
    {
        this.linkedList = linkedList;
    }

    public void Reset()
    {
        idx = -1;
    }

    public object Current
    {
        get
        {
            if (idx > -1)
                return linkedList[idx];
            else
                return -1;
        }
    }

    public bool MoveNext()
    {
        idx++;
        if (idx < linkedList.Count)
            return true;
        else
            return false;
    }
}

The singly linked list now fully implements the IList interface. It supports enumeration of its elements with the foreach statement. It allows for complex data binding with .NET iterative or list controls such as a DataGrid, a Repeater or a DropDownList. And best of all, by implementing against IList, our linked list class has now become a data source that can be used predictably throughout the project. Let's stress-test the new class.
In a console application that references the LinkedList class, instantiate it and populate it with five string values:

LinkedList myList = new LinkedList();
myList.Add("Alpha");
myList.Add("Beta");
myList.Add("Gamma");
myList.Add("Delta");
myList.Add("Epsilon");
Console.WriteLine("Loaded " + myList.Count.ToString() + " items into list.");

If you run the program you should see the message "Loaded 5 items into list." Since our class now supports the foreach statement we can easily enumerate the elements in the collection and write them out to the console:

foreach (object o in myList)
{
    string s = (string)o;
    Console.WriteLine(s);
}

If we want to remove an item from the middle (the "Gamma" string value at index position 2) we would call:

myList.RemoveAt(2);

Now let's remove the first and last elements from the collection and print out the remaining elements to the console:

myList.RemoveAt(0);
myList.Remove("Epsilon");
foreach (object o in myList)
{
    string s = (string)o;
    Console.WriteLine(s);
}

The two remaining strings "Beta" and "Delta" print out to the console. In this article we took a simple singly linked list and turned it into a robust .NET collection class. We did that by implementing against the IList interface, which requires us to implement certain methods and properties that guarantee our class will behave the way a collection class should in the .NET Framework. You now have everything you need to implement IList in your own projects.

James Still
James Still is an experienced software developer in Portland, Oregon. He collaborated on "Visual C# .NET" (Wrox) and has immersed himself in .NET since the Beta 1 version was released.
Difference between revisions of "Python"

Revision as of 01:51, 20 March 2014

Related articles

Python "is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Python is often compared to Tcl, Perl, Ruby, Scheme or Java."

Contents

- 1 Installation
- 2 Dealing with version problem in build scripts
- 3 Integrated development environments
- 4 Getting easy_install
- 5 Getting completion in Python shell
- 6 Widget bindings
- 7 Old versions
- 8 Tips and tricks
- 9 See also
- 10 For Fun

Installation

There are currently two versions of Python: Python 3 (which is the default) and the older Python 2.

Python 3

Python 3 is the latest version of the language, and is incompatible with Python 2. The language is mostly the same, but many details, especially how built-in objects like dictionaries and strings work, have changed considerably, and a lot of deprecated features have finally been removed. Also, the standard library has been reorganized in a few prominent places. For an overview of the differences, visit Python2orPython3 and the relevant chapter in Dive into Python 3. To install the latest version of Python 3, install the python package from the official repositories. If you would like to build the latest RC/betas from source, visit Python Downloads. The Arch User Repository also contains good PKGBUILDs. If you do decide to build the RC, note that the binary (by default) installs to /usr/local/bin/python3.x.

Python 2

To run a program or script with Python 2 rather than the python command, which points to Python 3, change its shebang line. To do so, open the program or script in a text editor and change the first line. The line will show one of the following:

#!/usr/bin/env python

or

#!/usr/bin/python

In both cases, just change python to python2 and the program will then use Python 2 instead of Python 3. Another way to force the use of python2 without altering the scripts is to install a wrapper script as /usr/local/bin/python that picks the interpreter based on the path of the script being run, where /path/to/project2/* is a list of patterns, separated by |, matching all project trees.
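A minimal sketch of such a dispatching wrapper (the project paths are the placeholder patterns from the text, and the exact script body is an assumption; it is written to a temporary directory here so the sketch is self-contained, whereas on a real system it would be saved as /usr/local/bin/python):

```shell
# Create the wrapper. Scripts whose resolved path matches one of the
# listed project trees are run with python2; everything else gets python3.
bindir=$(mktemp -d)
cat > "$bindir/python" <<'EOF'
#!/bin/bash
# Resolve the real path of the script being invoked.
script=$(readlink -f -- "$1")
case "$script" in
    /path/to/project2/*|/path/to/otherproject2/*)
        exec python2 "$@"
        ;;
    *)
        exec python3 "$@"
        ;;
esac
EOF
chmod +x "$bindir/python"
```

With that directory on PATH ahead of /usr/bin, running python myscript.py then selects the interpreter according to where myscript.py lives.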
Don't forget to make it executable:

# chmod +x /usr/local/bin/python

Afterwards, scripts within the specified project trees will be run with Python 2.

Integrated development environments

There are some Python-specific IDEs.

IEP

IEP is an interactive Python IDE (in the spirit of e.g. MATLAB) with basic debugging capabilities and is especially suitable for scientific computing. It is provided by the package iepAUR.

PyCharm

PyCharm 3 — The intelligent Python IDE with unique code assistance and analysis, for productive Python development on all levels. The community edition is available for free as pycharm-communityAUR.

Getting easy_install

The easy_install tool is available in the package python-setuptools.

Getting completion in Python shell

Copy this into Python's interactive shell (/usr/bin/python):

import rlcompleter
import readline
readline.parse_and_bind("tab: complete")

Widget bindings

The following widget toolkit bindings are available:

- TkInter — Tk bindings (standard module)
- PyQt — Qt bindings (python2-pyqt4, python2-pyqt5, python-pyqt4, python-pyqt5)
- PyGObject — GObject/GTK bindings (python2-gobject2, python2-gobject, python-gobject2, python-gobject)

Old versions

As of February 2014, Python upstream only supports Python 2.7 and the 3.x series.

Tips and tricks

IPython is an enhanced Python command line available in the official repositories as ipython and ipython2.

See also
python 2.7

is sage compatible with python 2.7.x or only 2.6.x?

"Compatible" might be the wrong word, because Sage uses its own Python. Currently, as of Sage 4.8, the internal Python version is 2.6.4. However, this should change (assuming everything works out!) in Sage 5.0, which is currently in beta. 5.0 should have Python 2.7.2, and you can verify this at test.sagenb.org, if you like: log in and run "import sys; print sys.version". It'll be nice to have dict comprehensions and collections.Counter.

answered 2012-02-23 01:18:29 -0500

DSM's answer is the best. However, just in case this is what you are asking, some people do use Sage with system components, including Python, with things like Sage on Gentoo and the related lmonade. There also used to be a Debian version.
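The two 2.7 features the answer looks forward to can be shown in a couple of lines (a generic Python illustration, not Sage-specific; it behaves the same on 2.7 and 3):

```python
from collections import Counter

# collections.Counter and dict comprehensions both arrived in Python 2.7,
# so neither is available in the Python 2.6.4 bundled with Sage 4.8.
words = ["sage", "python", "sage", "math"]
counts = Counter(words)                  # tallies: "sage" appears twice
squares = {n: n * n for n in range(4)}   # dict comprehension

print(counts["sage"])    # 2
print(squares[3])        # 9
```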
I need to create a random code that uses "abcde" and that does not repeat any letters, but I have no idea how to do that. I know how to make random numbers, but not chars. Any help?

This is what I need them for:

Code:
#include <iostream>
#include <cstring>
#include <ctime>
using namespace std;

int main()
{
    char code[6] = "abcde"; // here I want the random code
    char icode[6];
    char letter;
    char opcion;
    bool playagain = false;
    int i = 0;
    int correct = 0;
    do
    {
        cout << "\t\t*RULES*\n\n";
        cout << "1.- You do NOT speak about Fight Club.... lol.\n";
        cout << "2.- You do NOT write in CAPS.\n";
        cout << "3.- You do NOT write more than 5 letters per guess.\n\n\n";
        cout << "Try to guess a code I have thought of:\n";
        do
        {
            icode[5] = '\0';
            for (i = 0; i < 5; i++)
            {
                cin >> letter;
                icode[i] = letter;
                if (icode[i] == code[i])
                {
                    correct++;
                }
            }
            cout << "You had " << correct << " correct letters!\n";
            if (!strcmp(code, icode))
            {
                cout << "You guessed the code!\n";
            }
            correct = 0;
        } while (strcmp(code, icode));
        cout << "Do you want to play again? (y/n)\n";
        cin >> opcion;
        playagain = (opcion == 'y' ? true : false);
    } while (playagain);
    return 0;
}
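One way to build that random code is a Fisher–Yates shuffle of the fixed letters, so each of "abcde" still appears exactly once (a sketch; the function name is mine, not from the thread):

```cpp
#include <cstdlib>
#include <cstring>

// Shuffle the letters of "abcde" in place; every letter still appears
// exactly once, so the code never repeats a letter.
void make_code(char *code)
{
    strcpy(code, "abcde");
    for (int i = 4; i > 0; --i)
    {
        int j = rand() % (i + 1);   // pick one of the letters not yet fixed
        char tmp = code[i];
        code[i] = code[j];
        code[j] = tmp;
    }
}
```

Seed once with srand(time(0)) at the top of main(), then call make_code(code) instead of using the fixed initializer.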
Kubernetes - Volumes

In Kubernetes, a volume can be thought of as a directory which is accessible to the containers in a pod. We have different types of volumes in Kubernetes, and the type defines how the volume is created and what it contains.

The concept of a volume was present with Docker, however the only issue was that the volume was very much limited to a particular container. As soon as the life of the container ended, the volume was also lost.

On the other hand, the volumes that are created through Kubernetes are not limited to any one container. They support any or all of the containers deployed inside a pod of Kubernetes.

A key advantage of Kubernetes volumes is that they support different kinds of storage, and a pod can use multiple of them at the same time.

Types of Kubernetes Volume

Here is a list of some popular Kubernetes Volumes −

emptyDir − It is a type of volume that is created when a Pod is first assigned to a Node. It remains active as long as the Pod is running on that node. The volume is initially empty and the containers in the pod can read and write the files in the emptyDir volume. Once the Pod is removed from the node, the data in the emptyDir is erased.

hostPath − This type of volume mounts a file or directory from the host node's filesystem into your pod.

gcePersistentDisk − This type of volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod. The data in a gcePersistentDisk remains intact when the Pod is removed from the node.

awsElasticBlockStore − This type of volume mounts an Amazon Web Services (AWS) Elastic Block Store into your Pod. Just like gcePersistentDisk, the data in an awsElasticBlockStore remains intact when the Pod is removed from the node.

nfs − An nfs volume allows an existing NFS (Network File System) to be mounted into your pod. The data in an nfs volume is not erased when the Pod is removed from the node. The volume is only unmounted.
iscsi − An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your pod. flocker − It is an open-source clustered container data volume manager. It is used for managing data volumes. A flocker volume allows a Flocker dataset to be mounted into a pod. If the dataset does not exist in Flocker, then you first need to create it by using the Flocker API. glusterfs − Glusterfs is an open-source networked filesystem. A glusterfs volume allows a glusterfs volume to be mounted into your pod. rbd − RBD stands for Rados Block Device. An rbd volume allows a Rados Block Device volume to be mounted into your pod. Data remains preserved after the Pod is removed from the node. cephfs − A cephfs volume allows an existing CephFS volume to be mounted into your pod. Data remains intact after the Pod is removed from the node. gitRepo − A gitRepo volume mounts an empty directory and clones a git repository into it for your pod to use. secret − A secret volume is used to pass sensitive information, such as passwords, to pods. persistentVolumeClaim − A persistentVolumeClaim volume is used to mount a PersistentVolume into a pod. PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. downwardAPI − A downwardAPI volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files. azureDiskVolume − An AzureDiskVolume is used to mount a Microsoft Azure Data Disk into a Pod. Persistent Volume and Persistent Volume Claim Persistent Volume (PV) − It’s a piece of network storage that has been provisioned by the administrator. It’s a resource in the cluster which is independent of any individual pod that uses the PV. Persistent Volume Claim (PVC) − The storage requested by Kubernetes for its pods is known as PVC. The user does not need to know the underlying provisioning. 
The claims must be created in the same namespace where the pod is created.

Creating Persistent Volume

kind: PersistentVolume ---------> 1
apiVersion: v1
metadata:
   name: pv0001 ------------------> 2
   labels:
      type: local
spec:
   capacity: -----------------------> 3
      storage: 10Gi ----------------------> 4
   accessModes:
      - ReadWriteOnce -------------------> 5
   hostPath:
      path: "/tmp/data01" --------------------------> 6

In the above code, we have defined −

kind: PersistentVolume → We have defined the kind as PersistentVolume, which tells Kubernetes that the yaml file being used is to create a Persistent Volume.

name: pv0001 → Name of the PersistentVolume that we are creating.

capacity: → This spec will define the capacity of the PV that we are trying to create.

storage: 10Gi → This tells the underlying infrastructure that we are trying to claim 10Gi of space on the defined path.

ReadWriteOnce → This specifies the access mode of the volume that we are creating.

path: "/tmp/data01" → This definition tells the machine that we are trying to create the volume under this path on the underlying infrastructure.

Creating PV

$ kubectl create -f local-01.yaml
persistentvolume "pv0001" created

Checking PV

$ kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv0001 10Gi RWO Available 14s

Describing PV

$ kubectl describe pv pv0001

Creating Persistent Volume Claim

kind: PersistentVolumeClaim --------------> 1
apiVersion: v1
metadata:
   name: myclaim-1 --------------------> 2
spec:
   accessModes:
      - ReadWriteOnce ------------------------> 3
   resources:
      requests:
         storage: 3Gi ---------------------> 4

In the above code, we have defined −

kind: PersistentVolumeClaim → It instructs the underlying infrastructure that we are trying to claim a specified amount of space.

name: myclaim-1 → Name of the claim that we are trying to create.

ReadWriteOnce → This specifies the mode of the claim that we are trying to create.

storage: 3Gi → This will tell kubernetes about the amount of space we are trying to claim.
Creating PVC

$ kubectl create -f myclaim-1
persistentvolumeclaim "myclaim-1" created

Getting Details About PVC

$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
myclaim-1 Bound pv0001 10Gi RWO 7s

Describe PVC

$ kubectl describe pvc myclaim-1

Using PV and PVC with POD

kind: Pod
apiVersion: v1
metadata:
   name: mypod
   labels:
      name: frontendhttp
spec:
   containers:
      - name: myfrontend
        image: nginx
        ports:
           - containerPort: 80
             name: "http-server"
        volumeMounts: ----------------------------> 1
           - mountPath: "/usr/share/tomcat/html"
             name: mypd
   volumes: -----------------------> 2
      - name: mypd
        persistentVolumeClaim: ------------------------->3
           claimName: myclaim-1

In the above code, we have defined −

volumeMounts: → This is the path in the container on which the mounting will take place.

volumes: → This section declares the volumes that the pod will use.

persistentVolumeClaim: → Under this, we reference, via claimName, the claim whose storage we are going to use in the defined pod.
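For comparison, the simplest type from the list above, emptyDir, needs no PV or PVC at all. A minimal sketch (the pod and volume names here are illustrative, not from the text):

```yaml
kind: Pod
apiVersion: v1
metadata:
   name: cache-pod
spec:
   containers:
      - name: app
        image: nginx
        volumeMounts:
           - mountPath: /cache
             name: scratch
   volumes:
      - name: scratch
        emptyDir: {}
```

The scratch volume is created empty when the Pod is assigned to a node, and its contents are erased when the Pod is removed from the node, exactly as described for emptyDir above.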
In Apache 1.x modules hooked into the appropriate "phases" of the main server by putting functions into appropriate slots in the module structure. This process is known as "hooking." This has been revised in Apache 2.0 — instead a single function is called at startup in each module, and this registers the functions that need to be called. The registration process also permits the module to specify how it should be ordered relative to other modules for each hook. (In Apache 1.x this was only possible for all hooks in a module instead of individually and also had to be done in the configuration file, rather than being done by the module itself.) This approach has various advantages. First, the list of hooks can be extended arbitrarily without causing each function to have a huge unwieldy list of NULL entries. Second, optional modules can export their own hooks, which are only invoked when the module is present, but can be registered regardless — and this can be done without modification of the core code. Another feature of hooks that we think is pretty cool is that, although they are dynamic, they are still typesafe — that is, the compiler will complain if the type of the function registered for a hook doesn't match the hook (and each hook can use a different type of function).[69] They are also extremely efficient.

[69]We'll admit to bias here — Ben designed and implemented the hooking mechanisms in Apache 2.0.

So, what exactly is a hook? It's a point at which a module can request to be called. So, each hook specifies a function prototype, and each module can specify one (or more in 2.0) function that gets called at the appropriate moment. When the moment arrives, the provider of the hook calls all the functions in order.[70] It may terminate when particular values are returned — the hook functions can return either "declined" or "ok" or an error.
In the first case all are called until an error is returned (if one is, of course); in the second, functions are called until either an error or "ok" is returned. A slight complication in Apache 2.0 is that because each hook function can define the return type, it must also define how "ok," "decline," and errors are returned (in 1.x, the return type was fixed, so this was easier).

[70]Note that the order is determined at runtime in Apache 2.0.

Although you are unlikely to want to define a hook, it is useful to know how to go about it, so you can understand them when you come across them (plus, advanced module writers may wish to define optional hooks or optional functions). Before we get started, it is worth noting that Apache hooks are defined in terms of APR hooks — but the only reason for that is to provide namespace separation between Apache and some other package linked into Apache that also uses hooks. A hook comes in five parts: a declaration (in a header, of course), a hook structure, an implementation (where the hooked functions get called), a call to the implementation, and a hooked function. The first four parts are all provided by the author of the hook, and the last by its user. They are documented in .../include/ap_config.h. Let's cover them in order. First, the declaration. This consists of the return type, the name of the hook, and an argument list. Notionally, it's just a function declaration with commas in strange places. So, for example, if a hook is going to call a function that looks like:

int some_hook(int,char *,struct x);

then the hook would be declared like this:

AP_DECLARE_HOOK(int,some_hook,(int,char *,struct x))

Note that you really do have to put brackets around the arguments (even if there's only one) and no semicolon at the end (there's only so much we can do with macros!).
This declares everything a module using a hook needs, and so it would normally live in an appropriate header. The next thing you need is the hook structure. This is really just a place that the hook machinery uses to store stuff. You only need one for a module that provides hooks, even if it provides more than one hook. In the hook structure you provide a link for each hook: APR_HOOK_STRUCT( APR_HOOK_LINK(some_hook) APR_HOOK_LINK(some_other_hook) ) Once you have the declaration and the hook structure, you need an implementation for the hook — this calls all the functions registered for the hook and handles their return values. The implementation is actually provided for you by a macro, so all you have to do is invoke the macro somewhere in your source (it can't be implemented generically because each hook can have different arguments and return types). Currently, there are three different ways a hook can be implemented — all of them, however, implement a function called ap_run_name( ). If it returns no value (i.e., it is a void function), then implement it as follows: AP_IMPLEMENT_HOOK_VOID(some_hook,(char *a,int b),(a,b)) The first argument is the name of the hook, and the second is the declaration of the hook's arguments. The third is how those arguments are used to call a function (that is, the hook function looks like void some_hook(char *a,int b) and calling it looks like some_hook(a,b)). This implementation will call all functions registered for the hook. If the hook returns a value, there are two variants on the implementation — one calls all functions until one returns something other than "ok" or "decline" (returning something else normally signifies an error, which is why we stop at that point). The second runs functions until one of them returns something other than "decline." Note that the actual values of "ok" and "decline" are defined by the implementor and will, of course, have values appropriate to the return type of the hook. 
Most functions return ints and use the standard values OK and DECLINED as their return values. Many return an HTTP error value if they have an error. An example of the first variant is as follows:

AP_IMPLEMENT_HOOK_RUN_ALL(int,some_hook,(int x),(x),OK,DECLINED)

The arguments are, respectively, the return type of the hook, the hook's name, the arguments it takes, the way the arguments are used in a function call, the "ok" value, and the "decline" value. By the way, the reason this is described as "run all" rather than "run until the first thing that does something other than OK or DECLINED" is that the normal (i.e., nonerror) case will run all the registered functions. The second variant looks like this:

AP_IMPLEMENT_HOOK_RUN_FIRST(char *,some_hook,(int k,const char *s),(k,s),NULL)

The arguments are the return type of the hook, the hook name, the hook's arguments, the way the arguments are used, and the "decline" value. The final part is the way you register a function to be called by the hook. The declaration of the hook defines a function that does the registration, called ap_hook_name( ). This is normally called by a module from its hook-registration function, which, in turn, is pointed at by an element of the module structure. This function always takes four arguments, as follows:

ap_hook_some_hook(my_hook_function,pre,succ,APR_HOOK_MIDDLE);

Note that since this is not a macro, it actually has a semicolon at the end! The first argument is the function the module wants called by the hook. One of the pieces of magic that the hook implementation does is to ensure that the compiler knows the type of this function, so if it has the wrong arguments or return type, you should get an error. The second and third arguments are NULL-terminated arrays of module names that must precede or follow (respectively) this module in the order of registered hook functions.
This is to provide fine-grained control of execution order (which, in Apache 1.x could only be done in a very ham-fisted way). If there are no such constraints, then NULL can be passed instead of a pointer to an empty array. The final argument provides a coarser mechanism for ordering — the possibilities being APR_HOOK_FIRST, APR_HOOK_MIDDLE, and APR_HOOK_LAST. Most modules should use APR_HOOK_MIDDLE. Note that this ordering is always overridden by the finer-grained mechanism provided by pre and succ. You might wonder what kind of hooks are available. Well, a list can be created by running the Perl script .../support/list_hooks.pl. Each hook should be documented in the online Apache documentation. Optional hooks are almost exactly like standard hooks, except that they have the property that they do not actually have to be implemented — that sounds a little confusing, so let's start with what optional hooks are used for, and all will be clear. Consider an optional module — it may want to export a hook, but what happens if some other module uses that hook and the one that exports it is not present? With a standard hook Apache would just fail to build. Optional hooks allow you to export hooks that may not actually be there at runtime. Modules that use the hooks work fine even when the hook isn't there — they simply don't get called. There is a small runtime penalty incurred by optional hooks, which is the main reason all hooks are not optional. An optional hook is declared in exactly the same way as a standard hook, using AP_DECLARE_HOOK as shown earlier. There is no hook structure at all; it is maintained dynamically by the core. This is less efficient than maintaining the structure, but is required to make the hooks optional. The implementation differs from a standard hook implementation, but only slightly — instead of using AP_IMPLEMENT_HOOK_RUN_ALL and friends, you use AP_IMPLEMENT_OPTIONAL_HOOK_RUN_ALL and so on. 
Registering to use an optional hook is again almost identical to a standard hook, except you use a macro to do it: instead of ap_hook_name(...) you use AP_OPTIONAL_HOOK(name,...). Again, this is because of their dynamic nature. The call to your hook function from an optional hook is the same as from a standard one — except that it may not happen at all, of course! Here's a complete example of an optional hook (with comments following after the lines to which they refer). This can be found in .../modules/experimental. It comprises three files, mod_optional_hook_export.h, mod_optional_hook_export.c, and mod_optional_hook_import.c. What it actually does is call the hook, at logging time, with the request string as an argument. First we start with the header, mod_optional_hook_export.h.

#include "ap_config.h"

This header declares the various macros needed for hooks.

AP_DECLARE_HOOK(int,optional_hook_test,(const char *))

Declare the optional hook (i.e., a function that looks like int optional_hook_test(const char *)). And that's all that's needed in the header. Next is the implementation file, mod_optional_hook_export.c.

#include "httpd.h"
#include "http_config.h"
#include "mod_optional_hook_export.h"
#include "http_protocol.h"

Start with the standard includes — but we also include our own declaration header (although this is always a good idea, in this case it is a requirement, or other things won't work).

AP_IMPLEMENT_OPTIONAL_HOOK_RUN_ALL(int,optional_hook_test,(const char *szStr),
                                   (szStr),OK,DECLINED)

Then we go to the implementation of the optional hook — in this case it makes sense to call all the hooked functions, since the hook we are implementing is essentially a logging hook. We could have declared it void, but even logging can go wrong, so we give the opportunity to say so.
static int ExportLogTransaction(request_rec *r)
{
    return ap_run_optional_hook_test(r->the_request);
}

This is the function that will actually run the hook implementation, passing the request string as its argument.

static void ExportRegisterHooks(apr_pool_t *p)
{
    ap_hook_log_transaction(ExportLogTransaction,NULL,NULL,APR_HOOK_MIDDLE);
}

Here we hook the log_transaction hook to get hold of the request string in the logging phase (this is, of course, an example of the use of a standard hook).

module optional_hook_export_module =
{
    STANDARD20_MODULE_STUFF,
    NULL,
    NULL,
    NULL,
    NULL,
    NULL,
    ExportRegisterHooks
};

Finally, the module structure — the only thing we do in this module structure is to add hook registration. Finally, an example module that uses the optional hook, optional_hook_import.c.

#include "httpd.h"
#include "http_config.h"
#include "http_log.h"
#include "mod_optional_hook_export.h"

Again, the standard stuff, but also the optional hooks declaration (note that you always have to have the code available for the optional hook, or at least its header, to build with).

static int ImportOptionalHookTestHook(const char *szStr)
{
    ap_log_error(APLOG_MARK,APLOG_ERR,OK,NULL,"Optional hook test said: %s",
                 szStr);
    return OK;
}

This is the function that gets called by the hook. Since this is just a test, we simply log whatever we're given. If optional_hook_export.c isn't linked in, then we'll log nothing, of course.

static void ImportRegisterHooks(apr_pool_t *p)
{
    AP_OPTIONAL_HOOK(optional_hook_test,ImportOptionalHookTestHook,NULL,
                     NULL,APR_HOOK_MIDDLE);
}

Here's where we register our function with the optional hook.

module optional_hook_import_module=
{
    STANDARD20_MODULE_STUFF,
    NULL,
    NULL,
    NULL,
    NULL,
    NULL,
    ImportRegisterHooks
};

And finally, the module structure, once more with only the hook registration function in it. For much the same reason as optional hooks are desirable, it is also nice to be able to call a function that may not be there.
You might think that DSOs provide the answer,[71] and you'd be half right. But they don't quite, for two reasons — first, not every platform supports DSOs, and second, when the function is not missing, it may be statically linked. Forcing everyone to use DSOs for all modules just to support optional functions is going too far. Particularly since we have a better plan!

[71]Dynamic Shared Objects — i.e., shared libraries, or DLLs in Windows parlance.

An optional function is pretty much what it sounds like. It is a function that may turn out, at runtime, not to be implemented (or not to exist at all, more to the point). So, there are five parts to an optional function: a declaration, an implementation, a registration, a retrieval, and a call. The export of the optional function declares it:

APR_DECLARE_OPTIONAL_FN(int,some_fn,(const char *thing))

This is pretty much like a hook declaration: you have the return type, the name of the function, and the argument declaration. Like a hook declaration, it would normally appear in a header. Next it has to be implemented:

int some_fn(const char *thing)
{
    /* do stuff */
}

Note that the function name must be the same as in the declaration. The next step is to register the function (note that optional functions are a bit like optional hooks in a distorting mirror — some parts switch role from the exporter of the function to the importer, and this is one of them):

APR_REGISTER_OPTIONAL_FN(some_fn);

Again, the function name must be the same as the declaration. This is normally called in the hook registration process.[72]

[72]There is an argument that says it should be called before then, so it can be retrieved during hook registration, but the problem is that there is no "earlier" — that would require a hook!
Next, the user of the function must retrieve it. Because it is registered during hook registration, it can't be reliably retrieved at that point. However, there is a hook for retrieving optional functions (called, obviously enough, optional_fn_retrieve). Or it can be done by keeping a flag that says whether it has been retrieved and retrieving it when it is needed. (Although it is tempting to use the pointer to function as the flag, it is a bad idea — if it is not registered, then you will attempt to retrieve it every time instead of just once). In either case, the actual retrieval looks like this:

APR_OPTIONAL_FN_TYPE(some_fn) *pfn;

pfn=APR_RETRIEVE_OPTIONAL_FN(some_fn);

From there on in, pfn gets used just like any other pointer to a function. Remember that it may be NULL, of course! As with optional hooks, this example consists of three files which can be found in .../modules/experimental: mod_optional_fn_export.c, mod_optional_fn_export.h and mod_optional_fn_import.c. (Note that comments for this example follow the code line(s) to which they refer.) First the header, mod_optional_fn_export.h:

#include "apr_optional.h"

Get the optional function support from APR.

APR_DECLARE_OPTIONAL_FN(int,TestOptionalFn,(const char *));

And declare our optional function, which really looks like int TestOptionalFn(const char *). Now the exporting file, mod_optional_fn_export.c:

#include "httpd.h"
#include "http_config.h"
#include "http_log.h"
#include "mod_optional_fn_export.h"

As always, we start with the headers, including our own.

static int TestOptionalFn(const char *szStr)
{
    ap_log_error(APLOG_MARK,APLOG_ERR,OK,NULL,
                 "Optional function test said: %s",szStr);
    return OK;
}

This is the optional function — all it does is log the fact that it was called.
static void ExportRegisterHooks(apr_pool_t *p)
{
    APR_REGISTER_OPTIONAL_FN(TestOptionalFn);
}

During hook registration we register the optional function.

module optional_fn_export_module=
{
    STANDARD20_MODULE_STUFF,
    NULL,
    NULL,
    NULL,
    NULL,
    NULL,
    ExportRegisterHooks
};

And finally, we see the module structure containing just the hook registration function. Now the module that uses the optional function, mod_optional_fn_import.c:

#include "httpd.h"
#include "http_config.h"
#include "mod_optional_fn_export.h"
#include "http_protocol.h"

These are the headers. Of course, we have to include the header that declares the optional function.

static APR_OPTIONAL_FN_TYPE(TestOptionalFn) *pfn;

We declare a pointer to the optional function — note that the macro APR_OPTIONAL_FN_TYPE gets us the type of the function from its name.

static int ImportLogTransaction(request_rec *r)
{
    if(pfn)
        return pfn(r->the_request);
    return DECLINED;
}

Further down we will hook the log_transaction hook, and when it gets called we'll then call the optional function — but only if it is present, of course!

static void ImportFnRetrieve(void)
{
    pfn=APR_RETRIEVE_OPTIONAL_FN(TestOptionalFn);
}

We retrieve the function here — this function is called by the optional_fn_retrieve hook (also registered later), which happens at the earliest possible moment after hook registration.

static void ImportRegisterHooks(apr_pool_t *p)
{
    ap_hook_log_transaction(ImportLogTransaction,NULL,NULL,APR_HOOK_MIDDLE);
    ap_hook_optional_fn_retrieve(ImportFnRetrieve,NULL,NULL,APR_HOOK_MIDDLE);
}

And here's where we register our hooks.

module optional_fn_import_module =
{
    STANDARD20_MODULE_STUFF,
    NULL,
    NULL,
    NULL,
    NULL,
    NULL,
    ImportRegisterHooks
};

And, once more, the familiar module structure.
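To see the registry-and-retrieve idea behind optional functions in miniature, here is a stand-alone sketch. The names and the fixed-size table are invented for illustration (this is not APR's code), but it shows why a retrieved pointer must always be checked against NULL:

```c
#include <string.h>

/* Toy optional-function registry: providers register a function under a
 * string name; users look it up at runtime and must cope with NULL when
 * the provider was never linked in. */
#define MAX_FNS 8

typedef int (*opt_fn)(const char *);

static struct { const char *name; opt_fn fn; } registry[MAX_FNS];
static int nfns;

static void register_optional_fn(const char *name, opt_fn fn)
{
    if (nfns < MAX_FNS) {
        registry[nfns].name = name;
        registry[nfns].fn = fn;
        nfns++;
    }
}

static opt_fn retrieve_optional_fn(const char *name)
{
    int i;
    for (i = 0; i < nfns; i++)
        if (strcmp(registry[i].name, name) == 0)
            return registry[i].fn;
    return NULL;                /* provider not present: caller must check */
}

/* An example "provider" function. */
static int hello_len(const char *s) { return (int)strlen(s); }
```

The caller-side pattern is then exactly the one the text describes: retrieve once, keep the pointer, and guard every call with a NULL check.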
https://docstore.mik.ua/orelly/weblinux2/apache/ch20_08.htm
Here is a listing of C programming questions on “Sizeof” along with answers, explanations and/or solutions:

1. What is sizeof(char) in a 32-bit C compiler?
a. 1 bit
b. 2 bits
c. 1 Byte
d. 2 Bytes

2. What is the output of this C code?

#include <stdio.h>
int main()
{
    printf("%d", sizeof('a'));
    return 0;
}

a. 1
b. 2
c. 4
d. None of the mentioned

3. Which of the following correctly gives the size of the entire array? (Assuming array declaration int a[10];)
a. sizeof(a);
b. sizeof(*a);
c. sizeof(a[10]);
d. 10 * sizeof(a);

4. What is the output of this C code?

#include <stdio.h>
union temp
{
    char a;
    char b;
    int c;
} t;
int main()
{
    printf("%d", sizeof(t));
    return 0;
}

a. 1
b. 2
c. 4
d. 6

5. Which of the following is not an operator in C?
a. ,
b. sizeof()
c. ~
d. None of the mentioned

6. Which among the following has the highest precedence?
a. &
b. <<
c. sizeof()
d. &&

7.
a. 0
b. 1
c. 2
d. 4

8. What type of value does sizeof return?
a. char
b. short
c. unsigned int
d. long

Sanfoundry Global Education & Learning Series – C Programming Language.
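The facts these questions turn on are easy to verify in code. A quick sketch (the helper names are mine, not from the quiz):

```c
#include <stddef.h>

/* sizeof(char) is 1 by definition in C. */
size_t char_size(void)    { return sizeof(char); }

/* A character literal like 'a' has type int in C (not char, as in C++). */
size_t char_literal(void) { return sizeof('a'); }

/* A union is as large as its largest member (the int here). */
size_t union_size(void)
{
    union temp { char a; char b; int c; } t;
    return sizeof(t);
}

/* sizeof applied to an array name yields the whole array's size,
 * not the size of a pointer. */
size_t whole_array(void)
{
    int a[10];
    return sizeof(a);
}
```

Note that sizeof yields a value of the unsigned type size_t, which is why the quiz's printf("%d", ...) calls are technically sloppy even though they usually work.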
http://www.sanfoundry.com/c-programming-questions-answers-sizeof-keyword-1/
A pure python poker hand evaluator for 5, 6, 7 cards

In pure python

27 January 2011, Alvin Liang

Introduction

This is a pure python library to calculate the rank of the best poker hand out of 5, 6, or 7 cards. It does not run the board for you, or calculate winning percentage, EV, or anything like that. But if you give it two hands and the same board, you will be able to tell which hand wins. It is nowhere near as fast as pypoker-eval, but it works if you can’t use C for some reason (the early stages of the first MIT pokerbot competition come to mind). The core algorithm is slower, and you obviously don’t have the speed of C.

Quick Start

from pokereval.card import Card
from pokereval.hand_evaluator import HandEvaluator

hole = [Card(2, 1), Card(2, 2)]
board = []
score = HandEvaluator.evaluate_hand(hole, board)

Rank is 2-14 representing 2-A, while suit is 1-4 representing spades, hearts, diamonds, clubs. The Card constructor accepts two arguments, rank and suit.

aceOfSpades = Card(14, 1)
twoOfDiamonds = Card(2, 3)

Algorithm

The algorithm for 5 cards is just a port of the algorithm that used to be at the following URL. (I purposely broke the link because it now hosts a malware site.)

httx://wwx.suffecool.net/poker/evaluator.html

I came up with the 6 and 7 card evaluators myself, using a very similar card representation and applying some of the same ideas with prime numbers. The idea was to strike a balance between lookup table size and speed. Also, I haven’t included the code I used to generate the lookup tables, but you should be able to do that with a simpler, slower algorithm. Maybe I’ll add that later as well.

There is also a two-card ranking/percentile algorithm that is unrelated to the rest and may get cleaned up later. We used it at one point for some pre-flop evaluation. Credit to Zach Wissner-Gross for developing this.

Documentation is sparse at the moment, sorry about that, and obviously I did not really bother to package it or clean it up. I may or may not work on this in the future. Basically, I made it, so why not release it?

Contributors

- Me! Go me!
- Zach Wissner-Gross (2-card algorithm)
- arslr (Fixes for other Python versions)
- Jim Kelly (Help with packaging, additional documentation)

Download Files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
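As a taste of the prime-number idea mentioned in the Algorithm section: assign each rank a prime, and the product over a hand is an order-independent key for its rank multiset. A stand-alone sketch, not pokereval's actual code:

```python
from functools import reduce

# One prime per rank 2..14 (ace high), in the spirit of the evaluator
# this library ports. Multiplying a hand's rank primes yields a key
# that identifies the rank multiset regardless of card order.
RANK_PRIMES = dict(zip(range(2, 15),
                       [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]))

def rank_key(ranks):
    """Order-independent key for a list of card ranks (2-14)."""
    return reduce(lambda acc, r: acc * RANK_PRIMES[r], ranks, 1)
```

Because multiplication is commutative, rank_key([14, 14, 2]) and rank_key([2, 14, 14]) collide by construction, which is what makes the product usable as a lookup-table index.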
https://pypi.org/project/pokereval/
13 December 2007 17:07 [Source: ICIS news] By Joseph Chang NEW YORK (ICIS news)--Dow Chemical will no longer pursue ethylene and polyethylene (PE) projects as part of its “asset light” strategy once its 50/50 joint venture with Kuwait Petroleum Corp (PIC) is set up, Dow’s CEO said on Thursday. “Dow will not pursue ethylene and PE deals on its own once this deal is concluded,” said CEO Andrew Liveris in a conference call with analysts and investors. “That will be the domain of this venture.” As part of its asset light strategy, Dow will put much of its commodity businesses in the venture, including PE, polypropylene, polycarbonate, ethylene amines and ethanolamines. Dow values the business it is putting in the JV at $19bn (€13bn). The company will receive a $9.5bn payment from PIC in exchange for its 50% stake. “This joint venture will mitigate our cyclicality,” said Liveris. “We are placing about half our earnings elsewhere, and this JV will create hugely competitive projects from advantaged feedstocks going forward.” The venture also inherits a number of Dow’s US and European facilities, some of which could be shut down in the future. “We have made our assets competitive in the And while the petrochemical and plastics joint venture itself will still be cyclical, Liveris pointed out that the returns will be higher. “With a 40% return, I can take cyclicality,” he said. “We have created the global petrochemical company of note,” said Liveris. “Stay tuned - Dow Chemical is not done yet.”
http://www.icis.com/Articles/2007/12/13/9086691/dow-to-no-longer-pursue-ethylenepe-liveris.html
We had a couple of customers facing an issue where TFS backup schedule wizard was failing with an error "Provider load failure" when they tried to schedule a backup plan, which included SSRS encryption key. It failed at the readiness checks with a stack similar to below +-+-+-+-+-| Verifying Reporting Services encryption key can be backed up |+-+-+-+-+- Starting Node: ENCRYPTIONKEY NodePath : Container/Progress/Conditional/ENCRYPTIONKEY Node returned: Error Provider load failure Completed VerifyBackupEncryptionKeyOperation: Error The error stack pointed to a WMI provider load failure. WMI is uses by TFS tool to take a backup of the SSRS key. However the backup of SSRS key succeeded from the SSRS configuration manager. Also the backup plan scheduling succeeded, if the reporting databases were skipped. We did some investigation and found the reason for the failure. We found that this issue started happening after an upgrade from SQL 2008 R2 to SQL 2012. Due to some reason there were two namespaces for the same SSRS instance. One corresponded to the level of SQL 2008 R2 (level 10) and another corresponded to the SQL 2012(level 11). As there were two WMI name spaces for the same instance the backup tool threw an error as the name space for the older version can no longer be used. Here is how we found the issue (how you can see if you are facing the same issue) - We launched wbemtest.exe on the SSRS server. - Used connect option to select the namespace root\microsoft\sqlserver\reportserver\rs_<instance> - Ran the query "select * from __NameSpace" - It showed up 2 versions, v10 and v11, with v10 being for 2008 and v11 being for 2012. To fix the issue we took a backup of the v10 namespace and then deleted it. The backup plan scheduling succeeded after that. Content created by – Venkat & Aparna Content Reviewed by – Romit Gulati
https://blogs.msdn.microsoft.com/tfssetup/2014/02/17/provider-load-failure-when-scheduling-a-backup-plan/
CC-MAIN-2018-34
refinedweb
308
61.06
Results 1 to 3 of 3 I'm reading arch/arm/boot/compressed/head.s, and have a question about the following codes: ARM( mov r0, r0 ) ARM( b 1f ) THUMB( adr r12, BSYM(1f) ) THUMB( bx r12 ) what ... - Join Date - Dec 2011 - 2 Ask a simple question ARM( mov r0, r0 ) ARM( b 1f ) THUMB( adr r12, BSYM(1f) ) THUMB( bx r12 ) what do "ARM()" and "THUMB()" mean? Are they ARM directives or macro? where can I find the definition of them? Thanks. This has to do with the fact that the ARM chip has two modes, one is the normal ARM mode and one is THUMB mode. My guess is that those instructions are macros to make the mode switch. - Join Date - Dec 2011 - 2 Thanks for your reply. I know ".arm" and ".thumb" are assembly directives for ARM and THUMB mode, but they are capital letters. according to your remind, I searched all .h file, and I found where they are: In arch/arm/include/asm/unified.h, there are several declaration. In short, if the cpu is Thumb2, use THUMB, otherwise ARM. #ifdef CONFIG_THUMB2_KERNEL #if __GNUC__ < 4 #error Thumb-2 kernel requires gcc >= 4 #endif /* The CPSR bit describing the instruction set (Thumb) */ #define PSR_ISETSTATE PSR_T_BIT #define ARM(x...) #define THUMB(x...) x #ifdef __ASSEMBLY__ #define W(instr) instr.w #endif #define BSYM(sym) sym + 1 #else /* !CONFIG_THUMB2_KERNEL */ /* The CPSR bit describing the instruction set (ARM) */ #define PSR_ISETSTATE 0 #define ARM(x...) x #define THUMB(x...) #ifdef __ASSEMBLY__ #define W(instr) instr #endif #define BSYM(sym) sym #endif /* CONFIG_THUMB2_KERNEL */
http://www.linuxforums.org/forum/kernel/185689-ask-simple-question.html
Two years later, frustration with Generics continues

Laird Nelson describes his frustrations with understanding Java Generics. While clear in the simple case, as he works through a more complicated scenario, he ends up throwing them away because they're so complicated. In the two years since Java SE 5 was released, there have been many articles and tutorials posted about Generics. In the simple case (using generified collections) they appear to be well used. However, when digging deeper, developers easily get in over their heads. Issues such as self-bounding generics, wildcard types, or type erasure make generics harder to wrap your head around. A new book is out which might help developers, but at this point the question is whether generics are just too complicated for the average developer, and they'll stick with 1.4-style casts. When Nelson reaches this code:

public class BaseObjectAdapter<T extends BaseObject<T>> implements BaseObject<T> {
    /* various instance fields...*/
    private Reference canonicalReference;
    // with the usual getters and setters
}

he gives up, both because of how ugly it is and because it is so complicated that even after writing it he cannot get his brain wrapped around it. One of the commenters to his post notes that when migrating to generics he would just try different combinations until the code compiled. If generics are not understandable by most developers, how much trouble are we getting ourselves into when using them?

Thumbs up for use of generics. by Tomas Varaneckas

Re: Thumbs up for use of generics. by Jason Carreira

Re: Thumbs up for use of generics. by Ricky Clarkson

I often change 5,000 lines of code a week, most of those through automated refactoring. I don't worry about the amount of change I need to make, any more than I worry about molecules of water crossing each others' path when I turn on the tap. Generics is no different there. IDEs can help in generification too.
Can you think of an actual situation that's hard to get the type qualifiers to match up with?

Re: Thumbs up for use of generics. by Jason Carreira

Here's a fun challenge for you: create a hierarchy of interfaces, base classes, and subclasses that pass around type-specific Arrays as params and return types.

Re: Thumbs up for use of generics. by Taylor Gautier

I have found that Generics help me find where my design is flawed. If I can't genericize my code, it tends to show me where I've incorrectly placed dependencies, IOTW problems with Generics means tight coupling.

Re: Thumbs up for use of generics. by Mark Richards

Generics is a great feature, if you don't overuse it. Very well stated, and I agree. My first priority from a coding and design perspective is simplicity. If Generics helps simplify my code, then I will use it. However, as Nelson pointed out through his code example, simplicity is *not* always equal to how concise or generic the code is. When talking about Generics I always refer back to the J2EE design pattern days. Design patterns alone are ok, but not of much use. However, combine the various design patterns, and you have a great model. The same is with Generics and the other features of Java 1.5. Used alone, they don't seem all that useful. However, combine the features, and they can be quite powerful and significantly simplify your code.

Draft copy of Naftalin & Wadler's generics book available for download by Jim Bethancourt

java-generics-book.dev.java.net/files/documents...

However, it looks like the amount of content has increased since the draft (posted in Oct. 2005) -- the page count went from 212 to 240, and some of the code examples may have been improved upon too. Cheers, Jim
You know you've gone too far when you need to copy the type of an object out of the debugger watch window and paste it into an editor so you can format and indent all the templates just to figure out what the heck you've
http://www.infoq.com/news/2006/11/generics-frustration
NAME

ggLockCreate, ggLockDestroy, ggLock, ggUnlock, ggTryLock - Lowest common denominator locking facilities

SYNOPSIS

#include <ggi/gg.h>

void *ggLockCreate(void);
int ggLockDestroy(void *lock);
void ggLock(void *lock);
void ggUnlock(void *lock);
int ggTryLock(void *lock);

DESCRIPTION

These functions allow sensitive resource protection to prevent simultaneous or interleaved access to resources. For developers accustomed to POSIX-like threading environments it is important to differentiate a gglock from a "mutex". A gglock fills *both* the role of a "mutex" and a "condition" (a.k.a. an "event" or "waitqueue") through a simplified API, and as such there is no such thing as a gglock "owner". A LibGG lock is just locked or unlocked, it does not matter by what or when as long as the application takes care never to create a deadlock that never gets broken. The locking mechanisms are fully functional even in single-threaded, uninterrupted-flow-of-control environments. They must still be used as described below even in these environments; they are never reduced to non-operations. The locking mechanisms are threadsafe, and are also safe to call from inside LibGG task handlers. However, they are not safe to use in a thread that may be cancelled during their execution, and they are not guaranteed to be safe to use in any special context other than a LibGG task, such as a signal handler or asynchronous procedure call. Though the LibGG API does provide ample functionality for threaded environments, do note that LibGG does not itself define any sort of threading support, and does not require or guarantee that threads are available. As such, if the aim of an application developer is to remain as portable as possible, they should keep in mind that when coding for both environments, there are only two situations where locks are appropriate to use. These two situations are described in the examples below.
Cleanup handlers created with ggRegisterCleanup(3) should not call any of these functions. LibGG must be compiled with threading support if multiple threads that call any of these functions are to be used in the program. When LibGG is compiled with threading support, the ggLock, ggUnlock, and ggTryLock functions are guaranteed memory barriers for the purpose of multiprocessor data access synchronization. (When LibGG is not compiled with threading support, it does not matter, since separate threads should not be using these functions in the first place.) ggLockCreate creates a new lock. The new lock is initially unlocked. ggLockDestroy destroys a lock, and should only be called when lock is unlocked, otherwise the results are undefined and probably undesirable. ggLock will lock the lock and return immediately, but only if the lock is unlocked. If the lock is locked, ggLock will not return until the lock gets unlocked by a later call to ggUnlock. In either case lock will be locked when ggLock returns. ggLock is "atomic," such that only one waiting call to ggLock will return (or one call to ggTryLock will return successfully) each time lock is unlocked. Order is *not* guaranteed by LibGG -- if two calls to ggLock are made at different times on the same lock, either one may return when the lock is unlocked regardless of which call was made first. (It is even possible for a call to ggTryLock to grab the lock right after it is unlocked, even though a call to ggLock was already waiting on the lock.) ggTryLock attempts to lock the lock, but unlike ggLock it always returns immediately whether or not the lock was locked to begin with. The return value indicates whether the lock was locked at the time ggTryLock was invoked. In either case lock will be locked when ggTryLock returns. ggUnlock unlocks the lock. If any calls to ggLock or ggTryLock are subsequently invoked, or have previously been invoked on the lock, one of the calls will lock lock and return. 
As noted above, which ggLock call returns is not specified by LibGG and any observed behavior should not be relied upon. Immediacy is also *not* guaranteed; a waiting call to ggLock may take some time to return. ggUnlock may be called, successfully, even if lock is already unlocked, in which case, nothing will happen (other than a memory barrier.) In all the above functions, where required, the lock parameter *must* be a valid lock, or the results are undefined, may contradict what is written here, and, in general, bad and unexpected things might happen to you and your entire extended family. The functions do *not* validate the lock; It is the responsibility of the calling code to ensure it is valid before it is used. Remember, locking is a complicated issue (at least, when coding for multiple environments) and should be a last resort. RETURN VALUE ggLockCreate returns a non-NULL opaque pointer to a mutex, hiding its internal implementation. On failure, ggLockCreate returns NULL. ggTryLock returns GGI_OK if the lock was unlocked, or GGI_EBUSY if the lock was already locked. ggLockDestroy returns GGI_OK on success or GGI_EBUSY if the lock is locked. EXAMPLES One use of gglocks is to protect a critical section, for example access to a global variable, such that the critical section is never entered by more than one thread when a function is called in a multi-threaded environment. It is important for developers working in a single- threaded environment to consider the needs of multi-threaded environments when they provide a function for use by others. static int foo = 0; static gglock *l; void increment_foo(void) { ggLock(l); foo++; ggUnlock(l); } In the above example, it is assumed that gglock is initialized using ggLockCreate before any calls to increment_foo are made. 
Also note that in the above example, when writing for maximum portability, increment_foo should not be called directly or indirectly by a task handler which was registered via ggAddTask because a deadlock may result (unless it is somehow known that increment_foo is not being executed by any code outside the task handler.) Another use of gglocks is to delay or skip execution of a task handler registered with ggAddTask(3). It is important for developers working in a multi-threaded environment to consider this when they use tasks, because in single-threaded environments tasks interrupt the flow of control and may in fact themselves be immune to interruption. As such they cannot wait for a locked lock to become unlocked -- that would create a deadlock. static gglock *t, *l, *s; int misscnt = 0; void do_foo (void) { ggLock(t); /* prevent reentry */ ggLock(l); /* keep task out */ do_something(); ggUnlock(l); /* task OK to run again */ if (!ggTryLock(s)) { /* run task if it was missed */ if (misscnt) while (misscnt--) do_something_else(); ggUnlock(s); } ggUnlock(t); /* end of critical section */ } /* This is called at intervals by the LibGG scheduler */ static int task_handler(struct gg_task *task) { int do_one; /* We know the main application never locks s and l at the * same time. We also know it never locks either of the * two more than once (e.g. from more than one thread.) */ if (!ggTryLock(s)) { /* Tell the main application to run our code for us * in case we get locked out and cannot run it ourselves. */ misscnt++; ggUnlock(s); if (ggTryLock(l)) return; /* We got locked out. */ } else { /* The main application is currently running old missed * tasks. But it is using misscnt, so we can’t just ask * it to do one more. * * If this is a threaded environment, we may spin here for * while in the rare case that the main application * unlocked s and locked l between the above ggTryLock(s) * and the below ggLock(l). However we will get control * back eventually. 
* * In a non-threaded environment, the below ggLock cannot * wedge, because the main application is stuck inside the * section where s is locked, so we know l is unlocked. */ ggLock(l); do_something_else(); ggUnlock(l); return; } /* now we know it is safe to run do_something_else() as * do_something() cannot be run until we unlock l. * However, in threaded environments, the main application may * have just started running do_something_else() for us already. * If so, we are done, since we already incremented misscnt. * Otherwise we must run it ourselves, and decrement misscnt * so it won’t get run an extra time when we unlock s. */ if (ggTryLock(s)) return; if (misscnt) while (misscnt--) do_something_else(); ggUnlock(s); ggUnlock(l); } In the above example, the lock t prevents reentry into the dofoo subroutine the same as the last example. The lock l prevents do_something_else() from being called while do_something() is running. The lock s is being used to protect the misscnt variable and also acts as a memory barrier to guarantee that the value seen in misscnt is up- to-date. The code in function dofoo will run do_something_else() after do_something() if the task happened while do_something() was running. The above code will work in multi-threaded-single-processor, multi-threaded-multi-processor, and single-threaded environments. Note: The above code assumes do_something_else() is reentrant. SEE ALSO pthread_mutex_init(3)
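On POSIX systems, the ownerless locked/unlocked semantics described above can be approximated with a mutex plus a condition variable. This is only a sketch (type and function names invented, not LibGG's actual implementation):

```c
#include <pthread.h>
#include <stdlib.h>

/* A gglock-like binary state: locked/unlocked with no owner, so any
 * thread may unlock it, matching the man page's description. */
typedef struct {
    pthread_mutex_t mtx;
    pthread_cond_t  cond;
    int             locked;
} gglock;

gglock *gg_lock_create(void)
{
    gglock *l = malloc(sizeof *l);
    if (!l) return NULL;
    pthread_mutex_init(&l->mtx, NULL);
    pthread_cond_init(&l->cond, NULL);
    l->locked = 0;
    return l;
}

void gg_lock(gglock *l)
{
    pthread_mutex_lock(&l->mtx);
    while (l->locked)                      /* block until unlocked */
        pthread_cond_wait(&l->cond, &l->mtx);
    l->locked = 1;                         /* atomically grab it */
    pthread_mutex_unlock(&l->mtx);
}

int gg_trylock(gglock *l)                  /* 0 on success, -1 if busy */
{
    int rc;
    pthread_mutex_lock(&l->mtx);
    rc = l->locked ? -1 : (l->locked = 1, 0);
    pthread_mutex_unlock(&l->mtx);
    return rc;
}

void gg_unlock(gglock *l)
{
    pthread_mutex_lock(&l->mtx);
    l->locked = 0;
    pthread_cond_signal(&l->cond);         /* wake one waiter, if any */
    pthread_mutex_unlock(&l->mtx);
}
```

Note the separation of roles: the pthread mutex only protects the locked flag for an instant, while the flag itself is the long-lived "lock" that any thread may clear, which is why this is not equivalent to a plain pthread_mutex.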
http://manpages.ubuntu.com/manpages/intrepid/man3/ggTryLock.3.html
11 March 2010 23:10 [Source: ICIS news] HOUSTON (ICIS news)--Dow Chemical is minimising its exposure to the volatility of feedstock oil and gas prices through an emphasis on alternative energy sources and petrochemical joint ventures with government-run companies, the chief executive of the largest US producer said on Thursday. “Two-thirds of our company is much more independent from oil and gas input – biology and material sciences, solar shingles, these new businesses are a long way from that volatility,” chief executive Andrew Liveris said. “The one third of our company that still buys 1m bbl/day equivalent of crude is our last physical hedge,” he added. Liveris said Dow Chemical was focused on joint ventures with nation-states looking to grow their countries’ petrochemical industries. “We want partners who really believe that we will provide great jobs for their nation - through an education in science, learning the right skills, and allowing our franchise, in turn, to grow,” Liveris said. Liveris spoke at the CERAWeek 2010 energy conference in The chief executive cited several Dow investments in “You need government policy, but you also need intelligent intervention,” Liveris said in explaining his company’s draw to joint ventures with state-run institutions. Liveris said the strategy originated in 2001 and 2002, when the company closed on its acquisition of Union Carbide and more heavily exposed it to the volatility of natural gas and ethylene prices. He said 2003 and 2004 “were interesting years, from shutting down plants to resizing all of our operations”. Since 2004, however, the company has sought to distance itself as much as possible from such feedstock volatility - allowing it to close on its acquisition of Rohm and Haas in 2009, even in the midst of the global economic recession. “We at Dow don’t count on the recovery,” Liveris said. “And we’re running 2010 like we ran 2009.
We don’t rely on the rhetoric.” Even so, Liveris said he did expect 2010 to be better than 2009. “It already is,” he said. “We’re still in the middle of restocking, I think. True demand will come back, but we don’t know when. I would say we see signs of progress everywhere except the Liveris noted that the That GDP outlook was largely shared by a panel of economists in another afternoon session, which cautioned, however, that expectations had slightly lowered in recent months. Elsewhere, the Dow chief said he was excited by the recent surge in US natural gas supplies, but said both pricing conditions and government policies would need to be proven successful over a longer time frame before Dow would consider increasing its reliance on those supplies. “Until that, we’ll build plants in The Liveris said Dow was a leader in understanding efficiency issues, having reduced its energy use by 25% from 1995 to 2005 even while doubling production. The CERAWeek conference lasts through Friday.
15 March 2010 20:16 [Source: ICIS news]

HOUSTON (ICIS news)--The growth of US natural gas capacity may make natural gas production in western Canada less competitive, raising the possibility of a Canadian ethane shortage, NOVA Chemicals said on Monday.

Specifically, the structural cost differential between Alberta-based ethane relative to US Gulf-based supplies is very important to the company, Woelfel said. Ethane, which is extracted from natural gas, is used as a feedstock for chemicals produced by NOVA, including ethylene and polyethylene (PE).

“We are clearly seeing some degree of decline in the west,” Woelfel said.

Woelfel said NOVA had expressed its concerns. While additional natural gas and ethane supplies would be available in the US, proposed natural gas pipelines from several locations were not economically feasible at present due to lower natural gas pricing, NOVA said.

In addition, NOVA recently joined a fight against a toll surcharge on a US pipeline system that supplies its chemical production in southern Ontario province with feedstock.

NOVA noted, however, that Canada's ethane availability in 2010 was actually slightly better thus far than its pre-year projection.

Earlier on Monday, NOVA reported a fourth-quarter net income of $17m (€12.4m), up from a $212m net loss in the year-earlier period. Sales slipped 2.5% to $1.12bn. During the earnings conference call, Woelfel said the company’s business results had improved steadily through 2009, with the fourth quarter as its only quarterly profit.

NOVA is a wholly-owned subsidiary of Abu Dhabi’s International Petroleum Investment Company (IPIC). Woelfel took over as NOVA's chief executive on 1 January.

($1 = €0.73)

For more on NOVA Chemicals visit ICIS company intelligence. For more on PE.
JSF: UIComponent.getAttributes() -- good, bad or ugly?

Back in September Allen Holub said that accessors are evil, and there was also a long thread on TSS about Allen's article. Then very recently Cedric Beust posted that he thought accessors are here to stay and that Allen Holub was all wrong (I tend to agree with Cedric on accessors, but Allen did make a couple of good points). Which all seems to have very little to do with JSF, I know, but...

Consider the UIComponent.getAttributes() : Map method, which is new in the latest beta release. This method returns a map that gives you get and put access to the JavaBean attributes on your component (all those evil accessors :). This fascinates me and I think it's a cool thing to do. Tools have a very simple (java.util.Map) API to use, and we as programmers have the accessors for the stuff we need typed access to (it's likely to be a little quicker too, but probably not called often enough to matter much). So for example:

public class MyComponent extends UIComponentBase {
    ...
    public static final String FOO = "foo";
    private String foo;

    public void setFoo(String foo) { this.foo = foo; }
    public String getFoo() { return foo; }
    ...
}

public class MyRenderer extends Renderer {
    public static final String BAR = "bar";
    ...
    public void encodeBegin(FacesContext context, UIComponent component) {
        // instead of casting component to MyComponent I can use the attributes
        // map to get at 'foo', or I could cast and call getFoo(); either way
        // it works the same (the map is of course a bit slower)
        String foo = (String) component.getAttributes().get(MyComponent.FOO);

        // even more interesting, I can get at the renderer-specific
        // values put into the component by the creator of the component
        // (usually a JSP tag) with the same logic.
        // Typically I'd be setting the value of bar in a JSP tag,
        // also using the map interface, like this:
        // component.getAttributes().put(MyRenderer.BAR, valueSpecifiedInJSP);
        String bar = (String) component.getAttributes().get(MyRenderer.BAR);
        ...
    }
}

The spec calls this 'attribute-property transparency'. I like this a lot better than the way that the API was laid out in the EA4 version of JSF. In that world you had to keep everything in a Map, and if you wanted accessors they had to delegate to the Map. This is a lot cleaner in my opinion.

Another side effect is that you can just assume that put(Object, Object) will 'just work' and will call the underlying set method when it can, and put the value into a map otherwise. Thus we will be able to iterate through properties in a very straightforward way. This comes in very handy in the way JSP custom actions (tags) are implemented. The first example in the spec in section 9.3 shows how this could/should work.

With this API, components could conceivably be written without any accessors and no fields. All the instance data would be stored in the map. I think there is a certain danger of abuse like this. On the positive side, tools can access the whole component without having to know anything about it. This is a key aspect of the tooling requirements behind JSF and was possible in EA4, but uglier. I've seen this sort of thing done on a limited scale on other projects, but I've never seen something like this in a project like JSF (that will be used by lots and lots of people).

What are your thoughts on doing this? Will it scale up in terms of usability? Will it get abused so much that we will end up with lots of nasty code? What do you think?
What you lose in the syntactic sugar of model inheritance, you gain a bit in query speed. The downside of multi-table inheritance is that if these are large tables and/or your queries are complex, queries against it could be noticeably slower.

You can define a reusable core app which includes base.py with abstract models and models.py with models extending the abstract ones and inheriting all the features:

class F(models.Model): pass
class C1(F): pass
class C2(F): pass
class D(F):
    pid = models.ForeignKey(F)

Or use a GenericForeignKey, which is a bit more complicated, especially if you need to limit the related models. However, if you use this approach, you shouldn't declare your service model as abstract as you do in the example. The InheritanceManager class from django-model-utils is probably the easiest to use. This way, Answer_Risk would work without modification. (Thanks for the suggestions, but answers will have 1..M risks as well.)

An example: say I would like a queryset containing all the Content objects that are associated with a specific object of Child1. My problem is that I have no idea how to represent different types of services in my database. I also have other models that need to reference an 'Answer' regardless of its sub-type.

I like this approach because no matter who the author is, I can easily build the list of comments just by iterating over the BlogPostComments set and calling display_name() for each of them. To work around this problem, when you are using related_name in an abstract base class (only), part of the name should be the string %(class)s.

Multi-table inheritance? If you know what child type these will have beforehand, you can just access the child class in the following way:

from django.core.exceptions import ObjectDoesNotExist
try:
    telnet_service = service.telnetservice
except (AttributeError, ObjectDoesNotExist):
    ...

The only difficulty with this approach is that when you do something like the following:

node = Node.objects.get(pk=node_id)
for service in node.services.all():
    # Do something with the service

the 'service' objects come back as instances of the base class rather than of the child classes.

Either make F a concrete class; there are some downsides to this, but it's not a bad way to go. That might be done by saving final non-abstract classes in a dictionary and referencing them by names (let's say, defined in the settings).

If through this overriding a subclass contains no more abstract methods, that class is concrete (and we can construct objects directly from it). As we described above, although we cannot construct a new object from the class Shape, we can call the constructor for this class inside the constructor for a subclass. The PositionalShape subclass extends the abstract Shape superclass. We can also easily define a similar subclass for rectangles. Finally, it adds one additional method that detects whether two shapes "may overlap" by checking for intersection in their bounding boxes: if the bounding boxes don't intersect, there is no possibility of overlap.

So, this class must be abstract because it contains two abstract methods: it specifies getBoundingBox and also inherits (and doesn't override) getArea. It WOULD NOT make sense to specify getBoundingBox or mayOverlap in either individual interface, because the concepts of bounding boxes and overlapping shapes don't make sense when applied to just shapes. If we make a formerly abstract class concrete without overriding its stub methods, we can construct objects from it, but when calling their stub methods, bad results are returned. Leave unchanged any methods in a subclass that override a formerly abstract method. Finally, final also acts as an access modifier for classes and for methods in classes.

The Liskov substitution rule: if for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.

Now assume that we want to find the two shapes that have the most similar area:

Arrays.sort(allShapes, new Comparator() {
    public int compare(Object o1, Object o2) {
        double areaDiff = ((Shape) o1).getArea() - ((Shape) o2).getArea();
        if (areaDiff < 0) return -1;
        else if (areaDiff > 0) return +1;
        else return 0;
    }
});

This is called delegation: one object uses another to implement a method. The delegation mechanism is known as the HAS-A mechanism. One could, for example, define the Circle and Ellipse classes separately, and then implement the Circle class by delegating its behavior to an Ellipse stored as an instance variable. As stated above, the second design is a bit more symmetrical.

You can, however, imitate this behaviour with one-to-one relationships:

class F(models.Model):
    pass  # stuff here

class C1(models.Model):
    f = models.OneToOneField(F)

class C2(models.Model):
    f = models.OneToOneField(F)

class D(F):
    pid = models.ForeignKey(F)

public class Circle extends PositionalShapeBasics {
    public Circle(String name, int centerX, int centerY, double r) {
        super(name, centerX, centerY);
        radius = r;
    }
    // Implement the getArea method, specified in the Shape
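The delegation ("HAS-A") idea mentioned above can be sketched in a few lines of Python; the class and method names here are illustrative, not taken from the original lecture:

```python
import math

class Ellipse:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def area(self):
        return math.pi * self.a * self.b

class Circle:
    """HAS-A: a Circle holds an Ellipse with equal semi-axes and
    delegates its behavior to it instead of inheriting from it."""
    def __init__(self, r):
        self._ellipse = Ellipse(r, r)

    def area(self):
        return self._ellipse.area()  # delegation

print(abs(Circle(2).area() - math.pi * 4) < 1e-9)  # True
```

The trade-off is the one the lecture notes describe: delegation keeps the two classes independent, at the cost of writing forwarding methods by hand.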
How should I log while using multiprocessing in Python?

I just now wrote a log handler of my own that just feeds everything to the parent process via a pipe. I've only been testing it for ten minutes but it seems to work pretty well. (Note: This is hardcoded to RotatingFileHandler, which is my own use case.)

Update: @javier now maintains this approach as a package available on PyPI - see multiprocessing-logging on PyPI.

Update: This now uses a queue for correct handling of concurrency, and also recovers from errors correctly. I've now been using this in production for several months, and the current version below works without issue.

from logging.handlers import RotatingFileHandler
import multiprocessing, threading, logging, sys, traceback

class MultiProcessingLog(logging.Handler):
    def __init__(self, name, mode, maxsize, rotate):
        logging.Handler.__init__(self)

        self._handler = RotatingFileHandler(name, mode, maxsize, rotate)
        self.queue = multiprocessing.Queue(-1)

        t = threading.Thread(target=self.receive)
        t.daemon = True
        t.start()

    def setFormatter(self, fmt):
        logging.Handler.setFormatter(self, fmt)
        self._handler.setFormatter(fmt)

    def receive(self):
        while True:
            try:
                record = self.queue.get()
                self._handler.emit(record)
            except (KeyboardInterrupt, SystemExit):
                raise
            except EOFError:
                break
            except:
                traceback.print_exc(file=sys.stderr)

    def send(self, s):
        self.queue.put_nowait(s)

    def _format_record(self, record):
        # ensure that exc_info and args have been stringified. Removes any
        # chance of unpickleable things inside and possibly reduces
        # message size sent over the pipe
        if record.args:
            record.msg = record.msg % record.args
            record.args = None
        if record.exc_info:
            dummy = self.format(record)
            record.exc_info = None
        return record

    def emit(self, record):
        try:
            s = self._format_record(record)
            self.send(s)
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)

    def close(self):
        self._handler.close()
        logging.Handler.close(self)

The only way to deal with this non-intrusively is to:

- Spawn each worker process such that its log goes to a different file descriptor (to disk or to pipe). Ideally, all log entries should be timestamped.
- Your controller process can then do one of the following:
  - If using disk files: coalesce the log files at the end of the run, sorted by timestamp.
  - If using pipes (recommended): coalesce log entries on-the-fly from all pipes, into a central log file. (E.g., periodically select from the pipes' file descriptors, perform merge-sort on the available log entries, and flush to the centralized log. Repeat.)

QueueHandler is native in Python 3.2+, and does exactly this. It is easily replicated in previous versions. Python docs have two complete examples: Logging to a single file from multiple processes.

For those using Python < 3.2, just copy QueueHandler into your own code, or alternatively import logutils.

Each process (including the parent process) puts its logging on the Queue, and then a listener thread or process (one example is provided for each) picks those up and writes them all to a file - no risk of corruption or garbling.
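A minimal, self-contained sketch of the QueueHandler/QueueListener pattern described above. A thread stands in for the worker processes so the example runs anywhere; with real multiprocessing you would pass a multiprocessing.Queue to the workers instead, and the final handler would typically be a file handler rather than the in-memory list used here:

```python
import logging
import logging.handlers
import queue
import threading

log_queue = queue.Queue(-1)

# final destination handler; an in-memory list stands in for a file handler
records = []
class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

# the listener drains the queue and hands records to the real handler
listener = logging.handlers.QueueListener(log_queue, ListHandler())
listener.start()

# every "worker" logs through a QueueHandler pointed at the shared queue
logger = logging.getLogger("worker")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

def work(n):
    logger.info("message from worker %d", n)

threads = [threading.Thread(target=work, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
listener.stop()  # flushes anything still queued

print(sorted(records))
```

Because only the listener ever touches the destination handler, there is a single writer and therefore no interleaving, which is the whole point of the pattern.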
#include <Adafruit_GPS.h>
#include <SoftwareSerial.h>

SoftwareSerial mySerial(8, 7);
Adafruit_GPS GPS(&mySerial);

void setup() {
  // 9600 NMEA is the default baud rate for Adafruit MTK GPS's - some use 4800
  GPS.begin(9600);

  // GPS.sendCommand(PMTK_SET_BAUD_115200); // set baud rate to 115200, hopefully!
  GPS.sendCommand("$PMTK251,57600*2C");     // set baud rate to 57600
  // GPS.sendCommand(PMTK_SET_BAUD_57600);  // set baud rate to 57600
  // GPS.sendCommand("$PMTK251,38400*27");  // set baud rate to 38400
  // GPS.sendCommand("$PMTK251,19200*22");  // set baud rate to 19200
  // GPS.sendCommand("$PMTK251,9600*17");   // set baud rate to 9600

  mySerial.end();
  delay(500);
  GPS.begin(57600);
}

void loop() {
}

I'd like to put the baud change code into the setup of my actual code though, and I'm unsure how to do so with different libraries to those above. How could I change the baud rate using just NMEAGPS.h and GPSPort.h?

The NMEAGPS header file defines a class that knows how to read data from, and parse the data from, some specified stream. By itself, you can not do anything to set up the stream. The GPSPort header file is a complete mystery, since you failed to post a link. But it is unlikely that you can make the header file do anything to make the GPS output data at a different baud rate. YOU will need to write the code to call the appropriate methods, to do that.

just as it is, get rid of the commented lines as they are useless and confusing

---- EDIT - misread what you said. you basically need to open up a Serial port set up at 9600 bauds, send the $PMTK251,57600*2C command to your device, then terminate the connection and reopen at 57600

#include <NMEAGPS.h>
#include <GPSPort.h>

NMEAGPS gps;

void setup() {
  gpsPort.begin(9600);
  gps.send_P(&gpsPort, F("$PMTK251,57600*2C"));
  delay(200);
  gpsPort.begin(57600);
}

void loop() {
}

#include <NMEAGPS.h>
#include <GPSPort.h>

NMEAGPS gps;

void setup() {
  gpsPort.begin(9600);
  gpsPort.print("$PMTK251,57600*2C");
  gpsPort.end();
  delay(500);
  gpsPort.begin(57600);
}

void loop() {
}

How did you test it?
On Tuesday, 30 September 2008 at 15:49 +0200, Tarek Ziadé wrote:
> The "Obsoletes" info could be used maybe. But the main problem I can
> see is that in any case several versions of the same module can be
> needed to build one application.

This is indeed a problem, and when it happens, it needs fixing instead of trying to work with it.

> And this is not a problem, but something that is desired.

No, the problem we have today is that some developers are providing modules without API stability, which means you cannot simply depend on a module, you need a specific version. Again, when a C library changes its ABI, we do not allow it to keep the same name. It’s as simple as that.

> The setuptools project has partly improved this by providing a way to
> install several versions of the same package in Python and give a way
> to select which one is active.

This is not an improvement, it is a nightmare for the sysadmin. You cannot install things as simple (and as critical) as security updates if you allow several versions to be installed together.

> From your point of view, how could we solve it at Debian level? To
> kind of isolate a group of packages that fit the needs of one given
> application?

I think we need to enforce even more the habit to move unstable and private-use modules to private directories. It is not viable to add them to public directories. This is something that is done punctually in some Debian packages, but it should become mandatory for all cases where there is no API stability. A tool that eases installation and use of modules in private directories would certainly encourage developers to do so and improve the situation in this matter.

> (btw A recent change in Python has allowed us to define per-user
> site-packages)

This is definitely a nice improvement for those on multi-user systems without administrative rights, and for those who wish to install a more recent version of a specific module. However, I don’t think we should rely on it as the normal way of installing Python modules. And especially, we should not rely on on-demand download/installation of modules like setuptools does.

> ?

Two conflicting versions must not use the same module namespace. The real, fundamental issue, that generates even more brokenness when you accept it and work around it, is here. It is a nightmare for the developer (who can’t rely on a defined API after "import foo"), a nightmare for the distributor (who has to use broken-by-design selection methods), and a nightmare for the system administrator (who cannot easily track what is installed on the system).

Forbid that strictly, and you’ll see that methods that work today for a Linux distribution (where we already forbid it) will work just as nicely for all other distribution mechanisms.
solutions to problems using computer vision. In fact, what makes this project so special is that we are going to combine the techniques from many previous blog posts, including building a document scanner, contour sorting, and perspective transforms.

Bubble sheet scanner and test grader using OMR, Python, and OpenCV

In the remainder of this blog post, I’ll discuss what exactly Optical Mark Recognition (OMR) is. I’ll then demonstrate how to implement a bubble sheet test scanner and grader using strictly computer vision and image processing techniques, along with the OpenCV library.

Once we have our OMR system implemented, I’ll provide sample results of our test grader on a few example exams, including ones that were filled out with nefarious intent. Finally, I’ll discuss some of the shortcomings of this current bubble sheet scanner system and how we can improve it in future iterations.

What is Optical Mark Recognition (OMR)?

Optical Mark Recognition, or OMR for short, is the process of automatically analyzing human-marked documents and interpreting their results.

Arguably, the most famous, easily recognizable form of OMR is the bubble sheet multiple choice test, not unlike the ones you took in elementary school, middle school, or even high school. If you’re unfamiliar with “bubble sheet tests” or the trademark/corporate name of “Scantron tests”, they are simply multiple-choice tests that you take as a student. Each question on the exam is multiple choice — and you use a #2 pencil to mark the “bubble” that corresponds to the correct answer.

The most notable bubble sheet test you experienced (at least in the United States) was taking the SATs during high school, prior to filling out college admission applications. I believe that the SATs use the software provided by Scantron to perform OMR and grade student exams, but I could easily be wrong there.
I only make note of this because Scantron is used in over 98% of all US school districts. In short, what I’m trying to say is that there is a massive market for Optical Mark Recognition and the ability to grade and interpret human-marked forms and exams.

Implementing a bubble sheet scanner and grader using OMR, Python, and OpenCV

Now that we understand the basics of OMR, let’s build a computer vision system using Python and OpenCV that can read and grade bubble sheet tests. Of course, I’ll be providing lots of visual example images along the way so you can understand exactly what techniques I’m applying and why I’m using them.

Below I have included an example filled-in bubble sheet exam that I have put together for this project:

Figure 1: The example, filled in bubble sheet we are going to use when developing our test scanner software.

We’ll be using this as our example image as we work through the steps of building our test grader. Later in this lesson, you’ll also find additional sample exams. I have also included a blank exam template as a .PSD (Photoshop) file so you can modify it as you see fit. You can use the “Downloads” section at the bottom of this post to download the code, example images, and template file.

The 7 steps to build a bubble sheet scanner and grader

The goal of this blog post is to build a bubble sheet scanner and test grader using Python and OpenCV. To accomplish this, our implementation will need to satisfy the following 7 steps:

- Step #1: Detect the exam in an image.
- Step #2: Apply a perspective transform to extract the top-down, birds-eye-view of the exam.
- Step #3: Extract the set of bubbles (i.e., the possible answer choices) from the perspective transformed exam.
- Step #4: Sort the questions/bubbles into rows.
- Step #5: Determine the marked (i.e., “bubbled in”) answer for each row.
- Step #6: Lookup the correct answer in our answer key to determine if the user was correct in their choice.
- Step #7: Repeat for all questions in the exam.

The next section of this tutorial will cover the actual implementation of our algorithm.

The bubble sheet scanner implementation with Python and OpenCV

To get started, open up a new file, name it test_grader.py, and let’s get to work:

On Lines 2-7 we import our required Python packages. You should already have OpenCV and NumPy installed on your system, but you might not have the most recent version of imutils, my set of convenience functions to make performing basic image processing operations easier. To install imutils (or upgrade to the latest version), just execute the following command:

    $ pip install --upgrade imutils

Lines 10-12 parse our command line arguments. We only need a single switch here, --image, which is the path to the input bubble sheet test image that we are going to grade for correctness.

Line 17 then defines our ANSWER_KEY. As the name of the variable suggests, the ANSWER_KEY provides integer mappings of the question numbers to the index of the correct bubble. In this case, a key of 0 indicates the first question, while a value of 1 signifies “B” as the correct answer (since “B” is index 1 in the string “ABCDE”). As a second example, consider a key of 1 that maps to a value of 4 — this would indicate that the answer to the second question is “E”.

As a matter of convenience, I have written the entire answer key in plain English here:

- Question #1: B
- Question #2: E
- Question #3: A
- Question #4: D
- Question #5: B
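The original code listing did not survive extraction, so here is a rough reconstruction of the setup just described: the single --image switch and the ANSWER_KEY mapping. The filename passed to parse_args is only a stand-in so the sketch runs on its own; in the real script parse_args() reads sys.argv:

```python
import argparse

# command line argument: path to the input bubble sheet image
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to the input image")
# stand-in argv so the sketch is self-contained
args = vars(ap.parse_args(["--image", "example_test.png"]))

# question number -> index of the correct bubble in "ABCDE"
ANSWER_KEY = {0: 1, 1: 4, 2: 0, 3: 3, 4: 1}

# the same key spelled out as letters
letters = {q: "ABCDE"[idx] for q, idx in ANSWER_KEY.items()}
print(letters)  # {0: 'B', 1: 'E', 2: 'A', 3: 'D', 4: 'B'}
```

Note how the dictionary values line up with the plain-English answer key above.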
Notice how the edges of the document are clearly defined, with all four vertices of the exam being present in the image. Obtaining this silhouette of the document is extremely important in our next step as we will use it as a marker to apply a perspective transform to the exam, obtaining a top-down, birds-eye-view of the document: Now that we have the outline of our exam, we apply the cv2.findContours function to find the lines that correspond to the exam itself. We do this by sorting our contours by their area (from largest to smallest) on Line 37 (after making sure at least one contour was found on Line 34, of course). This implies that larger contours will be placed at the front of the list, while smaller contours will appear farther back in the list. We make the assumption that our exam will be the main focal point of the image, and thus be larger than other objects in the image. This assumption allows us to “filter” our contours, simply by investigating their area and knowing that the contour that corresponds to the exam should be near the front of the list. However, contour area and size is not enough — we should also check the number of vertices on the contour. To do, this, we loop over each of our (sorted) contours on Line 40. For each of them, we approximate the contour, which in essence means we simplify the number of points in the contour, making it a “more basic” geometric shape. You can read more about contour approximation in this post on building a mobile document scanner. On Line 47 we make a check to see if our approximated contour has four points, and if it does, we assume that we have found the exam. Below I have included an example image that demonstrates the docCnt variable being drawn on the original image: Figure 3: An example of drawing the contour associated with the exam on our original image, indicating that we have successfully found the exam. Sure enough, this area corresponds to the outline of the exam. 
Now that we have used contours to find the outline of the exam, we can apply a perspective transform to obtain a top-down, birds-eye-view of the document:

In this case, we’ll be using my implementation of the four_point_transform function which:

- Orders the (x, y)-coordinates of our contours in a specific, reproducible manner.
- Applies a perspective transform to the region.

You can learn more about the perspective transform in this post as well as this updated one on coordinate ordering, but for the time being, simply understand that this function handles taking the “skewed” exam and transforms it, returning a top-down view of the document:

Figure 4: Obtaining a top-down, birds-eye view of both the original image (left) along with the grayscale version (right).

Alright, so now we’re getting somewhere. We found our exam in the original image. We applied a perspective transform to obtain a 90 degree viewing angle of the document. But how do we go about actually grading the document?

This step starts with binarization, or the process of thresholding/segmenting the foreground from the background of the image:

After applying Otsu’s thresholding method, our exam is now a binary image:

Figure 5: Using Otsu’s thresholding allows us to segment the foreground from the background of the image.

Notice how the background of the image is black, while the foreground is white. This binarization will allow us to once again apply contour extraction techniques to find each of the bubbles in the exam:

Lines 64-67 handle finding contours on our thresh binary image, followed by initializing questionCnts, a list of contours that correspond to the questions/bubbles on the exam.

To determine which regions of the image are bubbles, we first loop over each of the individual contours (Line 70). For each of these contours, we compute the bounding box (Line 73), which also allows us to compute the aspect ratio, or more simply, the ratio of the width to the height (Line 74).
In order for a contour area to be considered a bubble, the region should:

- Be sufficiently wide and tall (in this case, at least 20 pixels in both dimensions).
- Have an aspect ratio that is approximately equal to 1.

As long as these checks hold, we can update our questionCnts list and mark the region as a bubble. Below I have included a screenshot that has drawn the output of questionCnts on our image:

Figure 6: Using contour filtering allows us to find all the question bubbles in our bubble sheet exam recognition software.

Notice how only the question regions of the exam are highlighted and nothing else. We can now move on to the "grading" portion of our OMR system:

First, we must sort our questionCnts from top-to-bottom. This will ensure that rows of questions that are closer to the top of the exam will appear first in the sorted list. We also initialize a bookkeeper variable to keep track of the number of correct answers.

On Line 90 we start looping over our questions. Since each question has 5 possible answers, we'll apply NumPy array slicing and contour sorting to sort the current set of contours from left to right.

This methodology works because we have already sorted our contours from top-to-bottom. We know that the 5 bubbles for each question will appear sequentially in our list — but we do not know whether these bubbles will be sorted from left-to-right. The contour sorting call on Line 94 takes care of this issue and ensures the contours in each row are sorted from left-to-right.

To visualize this concept, I have included a screenshot below that depicts each row of questions as a separate color:

Figure 7: By sorting our contours from top-to-bottom, followed by left-to-right, we can extract each row of bubbles. Therefore, each row is equal to the bubbles for one question.

Given a row of bubbles, the next step is to determine which bubble is filled in.
We can accomplish this by using our thresh image and counting the number of non-zero pixels (i.e., foreground pixels) in each bubble region:

Line 98 handles looping over each of the sorted bubbles in the row. We then construct a mask for the current bubble on Line 101 and then count the number of non-zero pixels in the masked region (Lines 107 and 108). The more non-zero pixels we count, the more foreground pixels there are, and therefore the bubble with the maximum non-zero count is the one the test taker has bubbled in (Lines 113 and 114).

Below I have included an example of creating and applying a mask to each bubble associated with a question:

Figure 8: An example of constructing a mask for each bubble in a row.

Clearly, the bubble associated with "B" has the most thresholded pixels, and is therefore the bubble that the user has marked on their exam.

This next code block handles looking up the correct answer in the ANSWER_KEY, updating any relevant bookkeeper variables, and finally drawing the marked bubble on our image:

Whether the test taker was correct or incorrect determines which color is drawn on the exam. If the test taker is correct, we'll highlight their answer in green. However, if the test taker made a mistake and marked an incorrect answer, we'll let them know by highlighting the correct answer in red:

Figure 9: Drawing a "green" circle to mark "correct" or a "red" circle to mark "incorrect".

Finally, our last code block handles scoring the exam and displaying the results to our screen:

Below you can see the output of our fully graded example image:

Figure 10: Finishing our OMR system for grading human-taken exams.

In this case, the reader obtained an 80% on the exam. The only question they missed was #4, where they incorrectly marked "C" as the correct answer ("D" was the correct choice).

Why not use circle detection?
After going through this tutorial, you might be wondering: "Hey Adrian, an answer bubble is a circle. So why did you extract contours instead of applying Hough circles to find the circles in the image?"

Great question. To start, tuning the parameters to Hough circles on an image-to-image basis can be a real pain. But that's only a minor reason. The real reason is: User error.

How many times, whether purposely or not, have you filled in outside the lines on your bubble sheet? I'm no expert, but I'd have to guess that at least 1 in every 20 marks a test taker fills in is "slightly" outside the lines. And guess what? Hough circles don't handle deformations in their outlines very well — your circle detection would totally fail in that case.

Because of this, I instead recommend using contours and contour properties to help you filter the bubbles and answers. The cv2.findContours function doesn't care if the bubble is "round", "perfectly round", or "oh my god, what the hell is that?". Instead, the cv2.findContours function will return a set of blobs to you, which will be the foreground regions in your image. You can then take these regions, process and filter them to find your questions (as we did in this tutorial), and go about your way.

Our bubble sheet test scanner and grader results

To see our bubble sheet test grader in action, be sure to download the source code and example images to this post using the "Downloads" section at the bottom of the tutorial. We've already seen test_01.png as our example earlier in this post, so let's try test_02.png:

Here we can see that a particularly nefarious user took our exam. They were not happy with the test, writing "#yourtestsux" across the front of it along with an anarchy-inspiring "#breakthesystem". They also marked "A" for all answers.
Perhaps it comes as no surprise that the user scored a pitiful 20% on the exam, based entirely on luck:

Figure 11: By using contour filtering, we are able to ignore the regions of the exam that would have otherwise compromised its integrity.

Let's try another image: This time the reader did a little better, scoring a 60%:

Figure 12: Building a bubble sheet scanner and test grader using Python and OpenCV.

In this particular example, the reader simply marked all answers along a diagonal:

Figure 13: Optical Mark Recognition for test scoring using Python and OpenCV.

Unfortunately for the test taker, this strategy didn't pay off very well. Let's look at one final example:

Figure 14: Recognizing bubble sheet exams using computer vision.

This student clearly studied ahead of time, earning a perfect 100% on the exam.

Extending the OMR and test scanner

Admittedly, this past summer/early autumn has been one of the busiest periods of my life, so I needed to timebox the development of the OMR and test scanner software into a single, shortened afternoon last Friday. While I was able to get the barebones of a working bubble sheet test scanner implemented, there are certainly a few areas that need improvement.

The most obvious area for improvement is the logic to handle non-filled-in bubbles. In the current implementation, we (naively) assume that a reader has filled in one and only one bubble per question row. However, since we determine if a particular bubble is "filled in" simply by counting the number of thresholded pixels in a row and then sorting in descending order, this can lead to two problems:

- What happens if a user does not bubble in an answer for a particular question?
- What if the user is nefarious and marks multiple bubbles as "correct" in the same row?

Luckily, detecting and handling these issues isn't terribly challenging; we just need to insert a bit of logic.
For issue #1, if a reader chooses not to bubble in an answer for a particular row, then we can place a minimum threshold on Line 108 where we compute cv2.countNonZero:

Figure 15: Detecting if a user has marked zero bubbles on the exam.

If this value is sufficiently large, then we can mark the bubble as "filled in". Conversely, if total is too small, then we can skip that particular bubble. If at the end of the row there are no bubbles with sufficiently large threshold counts, we can mark the question as "skipped" by the test taker.

A similar set of steps can be applied to issue #2, where a user marks multiple bubbles as correct for a single question:

Figure 16: Detecting if a user has marked multiple bubbles for a given question.

Again, all we need to do is apply our thresholding and count step, this time keeping track of whether there are multiple bubbles whose totals exceed some pre-defined value. If so, we can invalidate the question and mark it as incorrect.

Summary

In this blog post, I demonstrated how to build a bubble sheet scanner and test grader using computer vision and image processing techniques. Specifically, we implemented Optical Mark Recognition (OMR) methods that facilitated our ability to capture human-marked documents and automatically analyze the results.

Finally, I provided a Python and OpenCV implementation that you can use for building your own bubble sheet test grading systems.

If you have any questions, please feel free to leave a comment in the comments section! But before you go, be sure to enter your email address in the form below to be notified when future tutorials are published on the PyImageSearch blog!

what if the candidate marked one bubble, realised it's wrong, crossed it out, and marked another? will this system still work?

When taking a "bubble sheet" exam like this you wouldn't "cross out" your previous answer — you would erase it. The assumption is that you always use pencils for these types of exams.
Made this a long time ago for android, when I used to give a cocktail party on every Friday the 13th. We always had a quiz, which took more than an hour to grade, so I made an app for it! Has long since been taken offline because I was banned from the Google Play Store. Was pure Java, no libraries. Could do max 40 questions, on 2 a4 papers (detected if first or second sheet) Very nice, thanks for sharing Jurriaan! Can I get you source code ? You do you have to go to java code help me with. Thanhk can i get the source code pls sirrrrrr Great article! One question though. ideally (assuming the input image was already a birds-eye view), won’t the loop in lines 26-49 be sufficient to detect the circle contours too? If the image is already a birds-eye-view, then yes, you can use the same contours that were extracted previously — but again, you would have to make the assumption that you already have a birds-eye-view of the image. Hi Adrian, I am trying to run this code and am getting an error on running this code: from imutils.perspective import four_point_transform ImportError: No module named scipy.spatial I have installed imutils successfully and am not sure why I am getting this error. It would be great if you could help me here Thanks, Madhup Make sure you install NumPy and SciPy: Wonderful Tut! I was wondering how to handle such OMR sheets. any idea or algorithm please? Thanks!! I would suggest using more contour filtering. You can use contours to find each of the “boxes” in the sheet. Sort the contours from left-to-right and top-to-bottom. Then extract each of the boxes and process the bubbles in each box. Thanks! What can I do to detect the four anchor points and transform the paper incase it rotates? As long as you can detect the border of the paper, it doesn’t matter how the paper is oriented. The four_point_transformfunction will take care of the point ordering and transformation for you. 
I understand, but what If the paper is cropped being rotated without border of the paper? What technique shall I use to detect the four anchor points please? If you do not have the four corners of the paper (such as the corners being cropped out) then you cannot apply this perspective transform. @King It looks like the marks on the right side of the paper are aligned with the target areas. You could threshold the image, findContours and filter contours in the leftmost 10% of the image to find the rows and sort them by y-position. Then you could look for contours in the rest of the area. The index of the closest alignment mark for y-direction gives row, the x position as percentage of the page width gives column. Once you have the column and row of each mark, you just need “normal code” to interpret which question and answer this represents. Watch out for smudges, though! 😉 you see this project: project.auto-multiple-choice.net it’s free and opensource. you can design any form in the world. example Nice to see your implementation of this. I started a similar project earlier this year but I ended up putting it on parking for now. My main concern was the amount of work it goes into making one work right without errors and the demand didn’t seem to be there. Seems like scantron has a monopoly on this. What are your thoughts on that? There are a lot of companies in this space actually. I would suggest reading this thread on reddit to learn more about the companies involved and what they are doing for OMR. This is indeed a very cool post! Well explained 🙂 Thank you Linus, I’m glad you enjoyed it 🙂 please, send me your code! You can download the code + example images to this post by using the “Downloads” form above. Hi thank you very much.. But cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) giving me only one counter as a result only one question is being identified. 
Could you pls help I’m not sure what you mean by “giving me only one contour as a result”. Can you please elaborate? Hi adrian, In my case, Image contain 4 reference rectangle which is base for image deskewing. Assume, Image contain some other information like text, circle and rectangle. Now, I want to write a script to straighten the image based on four rectangle.my resultant image should be straighten. So i can extract some information after deskewing it.How can be possible it? When i used for my perspective transformation, it only detects highest rectangle contour. my image is like output image must be like So your question is on deskewing? I don’t have any tutorials on how to deskew an image, but I’ll certainly add it to my queue. I am waiting for that tutorial. i am not getting proper reference for deskewing of image in my scenario. In image there is barcode as well as that 4 small rectangle. i am not able to deskew it because of barcode in side that. As i am building commercial s/w, i can not provide real images here only. In side image, i have to extract person’s unique id and age, DOB which are in terms of optical mark. Once i scan form which is based on OMR i need to extract those information. Is there any approach which can help me to achieve this goals? I am very thankful to your guidance. As I mentioned, I’ve added it to my queue. I’ll try to bump it up, but please keep in mind that I am very busy and cannot accommodate every single tutorial request in a timely manner. Thank you for your patience. Dear Adrian thank you i upgraded the code: – the code now capturing the image from laptop camera. – added dropdown to selected answer key. – added the date in the name of the result (result+date+.png). can i send the cod to you, and is this code opensource, free. best regards Hi Silver — feel free to send me the code or you can release it on your own GitHub page. 
$ workon your_virtualenv
$ pip install imutils
If you don’t mind, I would appreciate a link back to the PyImageSearch site, but that’s not necessary if you don’t want to. AoA…My final Year project is Mobile OMR system for recognition of filled bubbles…but using Matlab will you please provide me Matlab code… 🙁 Hi adrian i made gui for this projects and add the program in sourceforge.net best regards Hi adrian, I am facing below issues while making bounding rectangle on bubbles: 1. In image, bubbles are somewhere near to rectangle where student can write manually their roll number because after thresolding bubbles get touched to rectangle. so, it can’t find circle. 2. If bubble filled out of the boundary, again it can’t be detectable. 3. False detection of circle because of similar height and width. Best Regards, Sanna If you’re running into issues where the bubbles are touching other important parts of the image, applying an “opening” morphological operation to disconnect them. What about second and third issue? Is there any rough idea which can help me to sort out it? It’s hard to say without seeing examples of what you’re working with. I’m not sure what you mean by if the bubble is filled in outside the circle it being impossible to detect — the code in this post actually helps prevent that by alleviating the need for Hough circles which can be hard to tune the parameters to. Again, I get the impression that you’re using Hough circles instead of following the techniques in this post. Dear sir I have install imutils but I am still facing “ImportError: No module named ‘imutils'” kingly guide me. You can install imutils using pip: $ pip install imutils If you are using a Python virtual environment, access it first and then install imutils via pip: How to convert py to android Can this also work with many items in the exam like 50 or 100? Yes. As long as you can detect and extract the rows of bubbles this approach can work. Adrian, do you have the android version of this application? 
You will need to port the Python code to Java if you would like to use it as an Android application. Hello Adrian, Do you have a code for this in java? I am planning a project similar to this one, I am having problems especially since this program was created in python and using many plugin modules which is not available in java. I hope you can consider my request since this is related for my school work. Thank you Hey Nic — I only provide Python and OpenCV code on this blog. If you are doing this for a school project I would really suggest you struggle and fight your way through the Python to Java conversion. You will learn a lot more that way. Hi again adrian, thanks for the reply on my previous comments. Can you provide a code that can allow this code to run directly on a python compiler rather than running the program on cmd. I would like to focus on python for developing a project same on this one, I’ve ask many experts and python was the first thing they recommended since it can create many projects and provides many support on many platforms unlike java. Hey Nic — while I’m happy to help point readers like yourself in the write direction, I cannot write code for you. I would suggest you taking the time to learn and study the language. If you need help learning OpenCV and computer vision, take a look at Practical Python and OpenCV. Can this process of computation be possible in a mobile devices alone using openCV and python? If yes, In what way can it be done? Most mobile devices won’t run native Python + OpenCV code. If you’re building for iOS, you would want to use Swift/Objective-C + OpenCV. For Android, Java + OpenCV. hi Adrain could u please tell the code about how did u draw the output of questionCnts on image I’m not sure what you mean Pawan. Can you please elaborate? 
I’m capturing image through USB web camera and executing this program but that image not giving any answers and it’s shows multiple errors Without knowing what your errors are, it’s impossible to point you in the right direction. In fact, try the same. 1 – Take a picture with mobile phone => test_mobile.jpeg 2 – python test_grader.py –image images/test_mobile.jpeg It seems that it comes from edged = cv2.Canny(blurred, 100, 200) 3 – add instruction to check it: cv2.imshow(“edged”, edged) 4 – In our program : # then we can assume we have found the paper => no ‘paper’ found … – – try different values for 2nd & 3rd arguments of cv2.Canny – – ( ) —-> same result, edged dont have paper contour well defined as your image. Must we have to convert jpeg to png ? when i show your edged image with test_01.png, we have a high quality of contour. Could you please explain how you get a so well defined contour ? Best regards The best way to ensure you have a well defined contour is to ensure there is contrast between your paper and your background. For example, if you place a white piece of paper on a white countertop, there is going to be very little contrast and it will be hard for the edge detection algorithm to find the outlines of the paper. Place the paper on a background with higher contrast and the edges will easily be found. Hi Adrian, I’m getting a really weird error and was hoping if you could provide some guidance. First of all, I’m using my own omr that is a bit different from yours. I did up to finding thresh and got a really nice clear picture. thresh = cv2.threshold(warp, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] However, when I try to find the bubbles, it fails to find them. The only contour it gives is the outermost boundary. On the other hand, if I do some additional steps on “warp” before finding thresh, I get almost all the bubbles. Here are the additional steps I performed on “warp”. 
(cnts, _) = cv2.findContours(warp.copy(),cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) cv2.drawContours(warp, cnts, -1, (255,0,0), 3) I was wondering if you have any insights on this matter. Why is it not finding contours for the bubbles if I don’t do some steps on “warp”? The picture of “thresh” seems to have very clear boundaries of bubble. If this is confusing, I’d love to share with you my code and some images I generated to give you a more clear picture of what’s going on. The best way to diagnose this issue would be to see what your threshimage looks like. Also, is there any reason why you are using cv2.RETR_TREE? Usually for this application you would use cv2.RETR_EXTERNAL, but again, that is really dependent on what your threshimage looks like. Thanks for the reply! Here is the link to my “thresh” image And for using RETR_EXTERNAL instead of RETR_TREE when processing warp, thank you for the suggestion. I was able to detect few more bubbles using RETR_EXTERNAL. I used RETR_TREE previously for cropping my omr image. (when I used RETR_EXTERNAL, I wasn’t able to crop the image properly for some reason) Here is the link to my original image for your information. Thank you so much in advance. I think I see the issue. The thresholded image contains white pixels along the borders. That is what is causing the contour detection to return strange results. Remove the white pixels along the boundaries and your script should work okay. just use adaptive threshold Hi Jason and Adrian, could you share how you managed to overcome this problem? I have a similar thresh image, and am unable to remove the white borders. Any help would be greatly valued. if i have only four circles , then what should i modify to the given codes?help me with figures, You’ll want to change Line 90 so that the code only looks for 4 circles along the row instead of 5. Hey Adrian, Thanks for a really good post. This has helped me a lot. I have doubt but. 
I can’t seem to figure out the use of this line cnts = cnts[0] if imutils.is_cv2() else cnts[1] I was able to detect the contour of my paper only after removing this line. What is imutils.is_cv2()? The cv2.findContoursreturn signature changed between OpenCV 2.4 and OpenCV 3. You can read more about this change (and the associated function call) here. How to divide circles if options are not equally divide? cnts = contours.sort_contours(questionCnts[i:i + 5])[0]. In my case, somewhere, it is like 4 option or 3 option. How to resolve uneven division issue? hello.. in need of your help. the system gives me a traceback error while installing imutils. why is this happening and how can i get over it.? What is the error you are getting when installing imutils? Without knowing the error, neither myself nor any other PyImageSearch reader can help. usage: test_grader.py [-h] -i IMAGE test_grader.py: error: argument -i/–image is required Can you help me with this error? Please read up on command line arguments before continuing. please tell me how sorting is working?? Are you referring to contour sorting? If so, I cover contour sorting in detail in this blog post. Adrian, it is a great post for learners like me. I had this problem in detecting circle contours, for non filled circles it is detecting inside as well as outside edge of the circle. any method by which i can make it to detect only outside edges ?? I would suggest working with your pre-processing methods (edge detection and thresholding) to create a nicer edge map. Secondly, make sure you are passing the cv2.RETR_EXTERNALinto cv2.findContours. Thanks!! it worked. Hey Adrian, Is there any specific blog or tutorial about porting python code to java? I am developing Application in android and struggling with few points. Please help me Sorry, I am pretty far removed from the Java + OpenCV world so no recommendations come to mind. 
Hi Adrian, I had to add “break” in line 125 and de-indent line 127 to get proper score and still draw the circles after the break. Otherwise (1) score was increased with every iteration after finding a filled bubble (resulting in final 280%) and (2) contours were drawn also repetitively. Maybe I made a mistake somewhere else in the code but the above fixed it. I’m glad you added a section about dealing with unexpected input (e.g. no bubbles filled or more than 1 filled), I wish more of your tutorials had such critical analysis. Cheers, Tom Hi Tom — always make sure you use the “Downloads” section of a post when running the code. Don’t try to copy and paste it as that will likely introduce indentation errors/bugs. Hi Adrian, I’m trying to identify crosses in the check-boxes using similar approach, the problem I’m having is the mask becomes hollow box instead of the type we obtain here which is just the outline of the bubble (The reason for this is the bubble have alphabet inside and checkbox is completely blank). Thereafter, when I try to calculate total pixels (using bit-wise and logic between mask and thresh checkbox), the totals are somehow incorrect. As a result, the box that gets highlighted at the end is the one that is not crossed? Any suggestion how to modify the mask on plain boxes and check boxes without any background serialization? Thanks, Pjay Hi Pjay — do you have any example images of the crosses and check-boxes you are using? That might help me point you in the right direction. Hi. do you guys have links to tutorials or blogs that I can follow to develop the same bubble sheet scanner for android? I have this project and I really don’t know where to start. I would appreciate any suggestions. Thank you. Good day, My name is Otis Olifant from an agency in South Africa called SPACE Grow Media. The company is hosting an event in Munich next year. 
There will be an exam during the event which will require a third party to assist us in marking the exam papers. I would like to ask if you will be able to assist me in marking the exam papers written at the event. If you are able to assist, may I please request that you reply to this mail with a formal high level price quotation for marking the papers. Please get back to me as soon as you can. Kind regards

Hi Otis — while I'm happy to help point you in the right direction, I cannot write code for you. I hope you understand. I would suggest you post your project on PyImageJobs, a computer vision and OpenCV jobs board I created to help connect employers with computer vision developers. Please take a look and consider posting your project — I know you would get a number of applicants.

Hi, I want to know how can orientation be considered in the algorithm. In your examples the photos are well taken, but what if an user takes a photo upside down? would the algorithm still work but with wrong results? In some example sheets I have seen that 4 squares in the corners are used, but I don't understand how could that determine the right orientation. Thank you in advance. Best regards

Hi Daniel — this particular example does not consider orientation. You can modify it to consider orientation by detecting markers on each of the four corners. One marker should be different than the rest. If you can detect that marker you can rotate it such that the orientation is the same for all input images.

Hello Adrian, The code giving me error " gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) error: C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:10638: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor " How to solve this?

Double-check the path to your input image. It sounds like cv2.imread is reading the path to an invalid image and returning "None".

Hi, I have a question !
I took my exam yesterday and I’m quite concerned with my answers because I got to shade ¼ of each the circles only. I just tried to put mark inside the circles while I was answering. After I finished my exam the proctor didn’t anymore allow me to shade so some of the circles were left ¼ shaded only. Will the scantron be able to read them? ? If you don’t mind, could you send me your reply in my gmail account? (email removed) The Scantron software is normally pretty good at recognizing partially filled in bubbles, but you should really take the time to fill in bubbles fully when you take an exam. Hello Adrian, Thanks a lot for such a nice article, I coded the same and found your blog very helpful. I have a major doubt, why some of major bubble sheet providers have a vertical strip of black markers. I searched it a lot, I just came to know that it will improve accuracy. but how the vertical strip of black markersis helpful to detect circles/elipses in the image. example of such a sheet is here Searching a lot, but could not find the reason of black strip of vertical markers on the right of the sheet. I was looking for an answer to the same doubt for long. Strangely, found it in a OMR software guide! Quoting the snip: “In the old pattern, machine read sheets, were additional black marks placed at equal increments running throughout the length of the sheet on either side or on both sides. This strip of black marks is called a timeline which helps the OMR machine to identify the next row of bubbles” Ref link: Hello Adrian I wanted to ask you how can you expand this to include more questions per page? Since in the expample it only showed 5 questions, I want to expand it to say 15 to 20 per page Yes, it would still work. Is it possible to get output in .xls or .csv file? Can we scan multiple pages bubble sheets? Can we scan multiple students multi pages bubble sheets? Sure, you can absolutely save the output to a CSV file. A CSV file is just a comma separated file. 
If you’re new to CSV files I would encourage you to do your research on them. They are very easy to work with. Traceback (most recent call last): File “test_grader.py”, line 55, in paper = four_point_transform(image, docCnt.reshape(4, 2)) AttributeError: ‘NoneType’ object has no attribute ‘reshape’ It sounds like your document contour region was not properly found. This method assumes that the document has 4 vertices. Double-check the output of the contour approximation. The picture is perfect i do not why it is not working. The image i have given as input have 4 corners visible. Okay, but what does OpenCV report? Does OpenCV report four corners after the contour approximation? Hi, I would like to ask if you have a test image when the examinee circles 2 answers for 1 question. Thank you. I don’t have a test image for this but you can update the code to sort the bubbles based on how filled in they are and keep the two bubbles with the largest filled in values. Hi, When my input is the original image of yours, which is without any bubble has been filled, it doesn’t work. How come I solve this problem? Please show me. Thank you. Can you be more descriptive of what you mean by “it doesn’t work”? I’m happy to help but keep in mind that I cannot help if you do not provide more details on what specifically isn’t working. What if student leave some question blank? Any way I can be notified once empty answer is detected? I address this in the blog post. See the “Extending the OMR and test scanner” section. Hello Adrian, I not see your adress. Please shared me your adress. And I ask for your support in a matter. You can use the PyImageSearch contact form to get in contact with me. Hi Adrian. Thanks so much for this tutorial – I’ve found it immensely helpful in learning OpenCV. A few questions: 1) If I were to add additional questions to include a total of 20 questions and only have 4 answer options, where would I edit the code? 
2) Is it possible to include fill-in-the-blank questions on the same test paper and use an OCR-like solution for those? If so, where would I edit the code to disregard these fill-in-the-blank questions? Any recommended OCR solutions that would work well with this? Again, thanks so much. Love the site & newsletter!

1. On Line 90 you would change the "5" to a "4". Same goes for Line 94. These should be the main lines to edit. 2. You may want to try Tesseract for OCR and then look at the Google Vision API.

Thank you sir! Very much appreciated.

Hi, can anyone tell me whether this code is patented? Can we use it commercially, with further development? I also have some questions about Indian software like OMR Home and Admin OMR: they are selling their solutions, but Scantron OMR scanners and even scanning techniques are patented by their respective owners. How do they get around this? Are they just selling their software without any permission, or do they work under fair use? Please help me in this regard, thanks.

Are you asking about my code on this blog post? It's MIT license. You can use it on your own projects but I would appreciate an attribution or a link back.

The predictions are the exact replica of the data with which it learned from. It doesn't seem to matter what is "shown" to it. The answers are always the same.

I'm not sure what you mean by "data with which it learned from". The algorithm in this post does not use any machine learning. Could you elaborate?

Good morning. I proposed this as our design project/thesis project, but our instructor asked why I would build this system if there is a machine in stores. Can I try this on Windows 10 OS? I want to make this project for our thesis so that it can handle 60 questions and save the score via MySQL.

Provided you have OpenCV + Python installed on your system this code will run on Windows.
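Earlier in the thread, sorting the bubbles by how filled-in they are was suggested as a way to handle questions with two marked answers. Here is a plain-Python sketch of just that selection logic (in the real script the counts would come from cv2.countNonZero() on each masked bubble; the 450-pixel threshold is a made-up number that would need tuning):

```python
def top_marked_bubbles(fill_counts, min_filled=450, keep=2):
    """Return indices of the `keep` most filled bubbles that clear a
    minimum fill threshold, highest fill count first."""
    ranked = sorted(enumerate(fill_counts), key=lambda p: p[1], reverse=True)
    return [idx for idx, count in ranked[:keep] if count >= min_filled]

# Question with two clearly marked bubbles (indices 0 and 3):
print(top_marked_bubbles([900, 120, 80, 700, 95]))  # [0, 3]
```

With a single marked bubble only one index survives the threshold, so the same helper also distinguishes "one answer" from "two answers".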
Hi sir, I partially filled a wrong bubble and then completely filled the correct bubble in my state staff selection commission exam. Will it be counted as a correct response? Please tell me, sir.

Hey Manas, I would recommend that you try the code and see. I've discussed extensions to handle partially filled in bubbles or no bubbles filled in either.

After I execute the file, I get this error: usage: [-h] -i IMAGE : error: the following arguments are required: -i/--image How can I fix it please?

You need to provide the "--image" command line argument to the script. If you're new to command line arguments you need to read this post first.

This tutorial is awesome from every angle. Thanks man, very informative.

Thank you for the kind words 🙂

Code for the last two problems (no bubble marked and multiple bubbles marked)

Thanks for sharing, Lokesh! 🙂

I am getting an error at the following line: ap.add_argument("-i", "--image", required=True, help="path to the input image") I am replacing --image with the image name, for ex. "1.png", and "path to the input image" with the path where my image is stored, for ex. "C:\\Users\\SAURABH\\Desktop".

You are not supplying the command line arguments correctly. Read my tutorial on command line arguments and you'll be all set 😉

Hello sir, how do I match images on OMR sheet paper? In this project you set the answer key in the source code, right? Please give me the solution, thanks.

Could you possibly help me utilize this code for a more traditional "scantron" style document? I've noticed a few issues with them already: If I blur the document almost nothing is left when examining it. I plan on making it to the end of the tutorial without changing too much but I may require assistance and I would be delighted if you would be able to help.

Hey Robert — this blog post is really meant to be just an example of what you can accomplish with computer vision. It's not meant to work with Scantron-style cards right out-of-the-box.
I'm happy to provide this tutorial for free, and if you have any specific questions related to the problems you're having updating the code I'd be happy to provide suggestions.

Hi Adrian, I added the logic for skipped questions and multiple answers and finally got it to work. During the process I noticed that the code is EXTREMELY sensitive to input conditions (input being the picture of the test) such as:

- If the paper isn't perfectly flat, or close to it, the thresholding/masking produces weird shadows and results in additional contours that mess with the for-loop sequence
- If I photograph the test with the flash on, the reflective properties of the graphite result in higher pixel values which in turn break the thresholding step and result in some of the filled in circles being interpreted as empty, especially the circles located directly under the lens of the camera, because much of the light bounces directly back
- If I photograph the test without flash, unless I have a light source that isn't directly above (which was problematic because at the time I was working in an area that only had ceiling lighting), the resulting image has shadows from my hand/camera which again mess with the thresholding/masking

I finally got it to work by taking a very good photo of the test. I was wondering if you encountered similar problems while writing the tutorial, and if there are any additional steps, aside from fine-tuning the thresholding/masking, to deal with this kind of noise that generalize well to different images, i.e. with flash, without, wrinkled paper, etc.? For a commercial application I imagine the tests would be scanned, and then these artifacts wouldn't really be an issue, but I'm just curious what you think.

If you are trying to build a commercial grade application you would certainly need to apply more advanced algorithms.
This tutorial is meant to be a proof of concept and show you what's possible with a little bit of computer vision, OpenCV, and a few hours of hacking around. It's meant to be educational, not necessarily a professional application.

Hi Adrian! Just want to ask, will it work if I use squares instead of circles?

That is absolutely doable, just follow this guide, specifically the steps on contour approximation and detecting squares.

Any idea how to use multiple columns and read them correctly?

I have rectangular boxes inside of circles. How can I detect them?

Is that a type of bubble sheet/OMR system? Or is that just a general computer vision question?

If the background is white, will it not work?

Adrian, is it possible to port this tutorial to use TensorFlow? I need a mobile version of this.

You don't need TensorFlow for this project, just OpenCV.

Hi Adrian, can you explain how we can use the Hough Circle Transform in OpenCV to detect circles instead of the method you have used?

You can follow this post on Hough circles and then replace the contour method with the Hough circles method.

Hi, I was wondering about your thoughts on this paper I found whilst looking for an OMR-type solution: It's from 2018 in what seems like an academic journal of some sort. The pictures look quite similar. Did you work on this together? As otherwise it's very dubiously used.

Ooof. That's just pure plagiarism. Shame on the paper authors and journal publishers 🙁 Thank you for reporting it.

Hi Adrian, how are you? How can I use ovals instead of circles on an OMR sheet and detect them?

Take a look at the scikit-image library — it includes a method for detecting ovals/ellipses.
https://www.pyimagesearch.com/2016/10/03/bubble-sheet-multiple-choice-scanner-and-test-grader-using-omr-python-and-opencv/
In this tutorial we will learn how to perform a software reset on the Micro:bit, using MicroPython.

Introduction

For this tutorial, we will use the microbit module, which we have also been using in some previous tutorials. Amongst other features, the microbit module exposes a function that allows us to perform a software reset of our device.

The code

The code for this tutorial will be very simple and short, since we just need to import a module and call a function. So, as mentioned, the first thing we will do is to import the microbit module, so we have access to the function that will allow us to reset the device.

import microbit

After importing the microbit module, we will have access to the reset function. This function takes no arguments and resets the device.

microbit.reset()

The full code for this script can be seen below.

import microbit
microbit.reset()

Testing the code

To test the code, simply run the previous two lines of code on your Micro:bit device, using a tool of your choice. In my case, I'll be using uPyCraft, a MicroPython IDE.

Upon running the code, you should get a result similar to figure 1. As can be seen, after executing the call to the reset function, the device was reset. After the reset, the board is restarted, an informative message regarding the software/hardware is printed, and the prompt becomes available again.
https://techtutorialsx.com/2019/03/24/microbit-micropython-software-reset/
Package Details: hugo 0.17-1

Dependencies (4)
- glibc
- git (git-git) (make)
- go (go-bin, go-cross, go-cross-all-platforms, go-cross-major-platforms, go-git) (make)
- pygmentize (optional) – syntax-highlight code snippets.

Required by (0)
Sources (1)

Latest Comments

fusion809 commented on 2016-11-09 12:55
Good question, fixed in my latest commit as I saw this issue. I also tried building without that makedepend and it worked fine.

ogarcia commented on 2016-11-09 12:24
Why 'mercurial' as makedepend? You can make hugo without mercurial.

fusion809 commented on 2016-11-04 00:27
What request would I have to make? Merge request? As there are only merge, orphan and deletion requests available.

neitsab commented on 2016-11-03 19:32
Hi fusion809, I just wanted to let you know that I took over the former "hugo" package and requested its move to "hugo-bin" so as to follow package naming guidelines. As a consequence, the "hugo" namespace is yours to take! Feel free to submit a request to have your package renamed (adjustments to the PKGBUILD and .SRCINFO will be necessary). Cheers!
https://aur.archlinux.org/packages/hugo/?comments=all
solution seems awkward in comparison to yours. I would be grateful if you could give me some advice.

Hi! - Initialize your variables with brace initializers. - Line 13: I don't think this can be true. - Line 22 doesn't do anything. Once you remove lines 13 and 22, your solution is identical to Alex's, apart from the center point calculation.

It appears everything is not so bad after all. Thank you!

Hello, I have a stupid question. How would you design algorithms like this so that they do not overflow? Algorithms like this are difficult to understand! Are there any ways to design and understand them easily?

That's not an algorithm, it's a simple calculation. To prevent an overflow, you'll have to use a larger type.

It could be a part of an algorithm!

You can't re-assign a reference. Line 9 is changing the value of @arr[0]. You can use a pointer instead.

Thanks a lot, I think I should recheck some chapters.

Hi, I think the "default parameter" in the chapter summary should be corrected to "default argument".

Hi, I get this error every time I try to build my program: "error: control reaches end of non-void function [-Werror=return-type]". I compiled and ran my program from the command line in order to avoid this error and my program worked successfully, but how can I solve this error? I'm using Code::Blocks by the way. Thanks in advance

@binarySearch doesn't return anything in the case array[midPoint] != target. The return value of @binarySearch is undefined. In your case, it happens to be the return value of lines 14 and 19. This might not be the case when using a different compiler. Add a return to lines 14 and 19. Compiler warnings and errors are there for a reason. Don't disable them.

Hey Alex, Thanks for all of this great material! -Max

Someone might want to know the way to return multiple values.
#include <iostream>
#include <string>
#include <tuple>

std::tuple<double, std::string, int> get_multi_values()
{
    return std::make_tuple(2.9, "ahaa", 10000);
}

int main()
{
    auto [grade, name, money] = get_multi_values(); // C++17 structured bindings
    return 0;
}

Using uniform initialization, @std::make_tuple can be omitted.

Hi! - It says "return { 2.9, "ahaa", 10000 };" with list-initialization doesn't work with c++17 standard. - What is the benefit of using uniform initialization with structured bindings?

> with list-initialization doesn't work with c++17 standard
It does work since C++17. It didn't work before.
> What is the benefit of using uniform initialization [...]?
It works almost everywhere (hence, "uniform"). It solves the most vexing parse. It prevents implicit casts.

Ohh, "error until C++17" means it didn't work before. I misunderstood this. Thank you!

Pedantic grammar answer to 2)e): users should be user's ;-)

I'm okay with pedantic corrections. Accuracy is accuracy. :) Thanks.

mySoln Iterative... Recursive...

Nice trick in preventing overflow, I never thought about that when writing these.

Alex, I think quiz number 2.e at the 5th row is missing a semicolon too.

Fixed. Thanks!

How does this prevent an overflow? And why might it cause an overflow? It might be better if you could give an example. Thanks!

For simplicity, let's say the maximum value an int can store is 1000. Let min be 400 and max 700. Version 1 Version 2

Wouldn't that store 1100/2, i.e. 550?

It can't do all the math at once, it has to happen step after step.

Oh.. got it! Thank you! Should be 550 in version 1, right?

Right

Hi Alex and Nascardriver, I have a question, which is: "When should we use the iterative version and when should we use the recursive version?"

Use an iterative approach whenever possible without making your program overly complex. Unless you need high performance, don't use recursion at all then.
I understand this, but I'm not sure if I'm thinking correctly: in my binarySearch I have "target" and it's replaced with testValues[count] (which I'm guessing is taking the number under testValues[0]=0, because we have a loop and loop through all of them). So basically the output will be all the numbers of testValues[] { 0, 3, 12, 13, 22, 26, 43, 44, 49 }, because the expectedValues[] = { -1, 0, 3, -1, -1, 8, -1, 13, -1 } are equal to the return values of the binarySearch function that are tested with this (to output them): Correct?

Yep, we're testing the algorithm with each value in testValues and making sure the answer matches what's in our expectedValues array. If so, then we can be pretty confident it works as desired.

Thank you! In binarySearch we always know the target, no? Like in your example with one single array, or like in the quiz where we relatively know the targets. Binary search is more about an efficient search algorithm and memory use, no?

Binary search is an extremely fast way to determine whether a value exists in a sorted array. The assumption is that you know what value you're searching for. Otherwise, how will you know whether you've found it or not?

Thank you.

Hello there! I'm having an issue with my code in the sense that I can't understand why changing a certain piece of code made it work (this is the solution to 3b). The code is: My issue is the else-if at line 15: when I used only an if, the program gave me fails for all numbers after 12, but adding the else meant a pass on all test values. I don't see how changing the if to an else-if could have affected my program so drastically, and that's what I want to understand. All help appreciated!

Hello, my answer to 3b (the recursive solution to binary search) was very similar to the solution posted by Alex. The only difference was that Alex's solution returned the recursive call to the binarySearch function, whereas mine just called it.
The relevant snippet from Alex's solution: And mine: Is there any difference between returning the recursive function call and simply calling it there and then? Or are they identical in the end result?

Hi Matt! Your function doesn't return; the return value is undefined. Your compiler should've told you, read the warnings. The reason why your program works is that @binarySearch is the last function you call in @binarySearch, so the outer call will return the value that the recursive call returned (because of how function calls work on a low level). You might not get that lucky on other architectures.

Ahh I see. Yes, on closer inspection the compiler did give me a warning that I originally missed. Thanks again Nascardriver!

For Q3, what does this do: int binarySearch(int *&array, int target, int min, int max) { ... } What exactly am I passing as an argument in the case of array? Thank you

A reference to an int-pointer.

Hi, the solution to question 3a) is wrong. It's because when you return -1 and it checks against expectedValues[count]: in the "expectedValues" array you added -1 for wrong numbers; you must always return midpoint for correction. Thanks

Can you show an example where the current algorithm doesn't work?

Hmmmm, no I can't! I was wrong about understanding the purpose of the code :]. The current algo works :)

1. The solution given for 3a does not check to see if the argument for the array is nullptr. The program will crash with such an argument unless a check is made. 2. The solution given for 3a does not include const in parameter specifications. That might represent an improvement.

1) Added assert to 3a and 3b to check for existence of array. 2) Pass-by-value parameters are most often not consted even if they could be, because the value from doing so is typically so low. You can certainly const yours if you like, but it's such a widespread practice not to that I don't consider it incorrect not to.

3.
When the midpoint is calculated and used as a new bound, we can choose the properties we want for that bound. In the given solution, it is assumed that the bound we want might be the index for the value we are searching. (This is analogous to closed intervals in mathematics.) The choice made in the given solution is obvious when one is either added or subtracted from the midpoint of the range being examined. That isn’t always the most efficient way to search, because it doesn’t take advantage of the fact that if the bounds exclude the target index (analogous to an open interval in mathematics), we can stop checking earlier in the process. Let’s call the bounds normalized if they act like open intervals, i.e., that they exclude the possibility that they can be an index for the target array element. Every time we move min or max, they have this property. Note, however, that we might not move one or the other during a search, which means that one of these unmoved bounds may be un-normalized. Therefore, to address this issue, you have to either keep track of whether the bounds have been moved or normalize the bounds initially. If they are initially normalized (either by convention or at runtime by the code) we do not have to worry about them afterwards. The timesaving isn’t a black and white issue though, because we gain a little bit by decreasing the range if we use closed, un-normalized bounds. In terms of time, the un-normalized approach also has to add and subtract 1 through each iteration. The normalized version has an extra add one as part of the while loop condition. To determine the benefits of an approach, we can put in a counter to see how many times we have to go through our search loop or put in timing code to test for overall efficiency. For now, I just put in a counter and discovered that I could get about a 39% decrease in iterations using the normalized approach. 
Maybe that is a little biased though because the normalization process eliminates any loops that occur at the locations where a binary search has to work the hardest. Still, the results are encouraging. Note that, in spite of the comments in the given solution to 3a, with normalization, we do not test any elements of the array more than once. Here is a binary search routine that implements this normalization approach: I updated the routine to make it faster. One egregious mistake was the test order within the while loop. Here is my latest version: Quiz 2e Also there seems to be a problem within the for loop. Probably the intent is to output the arguments rather than the numbers used to index the arguments. Addressed all of the issues in your prior comments. Thanks! Pass by value parameters are typically not made const because the value from doing so is almost zero. quiz 2c Add - there is also a potential divide by 0 issue. Regarding quiz 1a: Although not necessary, would it be better to write: ? Hi Peter! const isn't usually use for pass-by-value parameters, because it doesn't have a real purpose. Good point nascardriver. Thanks. 5th paragraph: "The return value is not considered a parameter." When was the return value of a function ever considered a parameter? 4th paragraph: "You should not need to use this." I would add "because the compiler will usually make such choices for you." In the second paragraph it says "Use pass by (const) reference for structs, classes, or when you need the function to modify an argument." That will be confusing, because the following won't compile because of const: A question related to 1c, this is the code so far: I then tried to declare @a as a pointer to an int instead just to get a bit more confident with pointers and refs. So as far i understand when declaring @a as ptr i need to assign the address of the returned value because the returned value comes with an implicit dereference right? 
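(The routine itself was lost when the page was scraped, so the following is a hedged reconstruction from the description above, not the commenter's original. The bounds are kept "normalized", i.e. exclusive, so neither the +1/-1 adjustments nor repeated tests of the same element are needed.)

```cpp
#include <cassert>

// Open-interval ("normalized") binary search: min and max are exclusive
// bounds, so the target index, if it exists, lies strictly between them.
int binarySearch(const int* array, int target, int size)
{
    assert(array); // per the earlier comment about nullptr arguments

    int min{ -1 };    // one before the first element
    int max{ size };  // one past the last element

    while (max - min > 1) // at least one index strictly between the bounds
    {
        int mid{ min + (max - min) / 2 }; // overflow-safe midpoint

        if (array[mid] == target)
            return mid;
        else if (array[mid] < target)
            min = mid; // mid is already excluded, so no +1 needed
        else
            max = mid; // likewise, no -1 needed
    }

    return -1; // not found
}
```

Because the bounds exclude their own indices from the start, no element is ever compared against the target twice during a search.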
This somehow confused me a bit with the previous assignment using the reference How does @a know that the right hand side is a reference to an int, since on the right hand side the result is implicitly dereferenced. Does @a recognize that because @a itself is declared as a reference and therefore storing the address of the right hand side instead of the dereferenced value? Hi Donlod! A reference cannot store a value, so there's no much of a choice to be made when initializing a reference. Hi! This is my solution open for your suggestions. Hi Cumhur! * Initialize your variables with uniform initialization. * Mathematically ((min + max) / 2) = (min + (max - min) / 2), but the left side of the equation is more prone to an overflow, because (min + max) might result in quite a large number. * Line 5: Should return @average * Line 6: You already know array[average] != target. There's no point in checking it again. * @binarysearch is causing a warning "control reaches end of non-void function". This isn't actually true, but it's an indicator for bad code. Replace the second inner if-statement with an else-if-statement and return -2 at the end of the outer if-statement. * Your while loop should've been inside @binarysearch. Here is my solution to 3a. Any comments for improvement? Edit: Here is my solution to 3b. It took less time than I thought it would. Hi J! Good job on solving the quiz! Suggestions: * Use uniform initialization. * Mathematically ((min + max) / 2) = (min + (max - min) / 2), but the left side of the equation is more prone to an overflow, because (min + max) might result in quite a large number. * In your iterative function you have a repetition of (center = (max + min) / 2), this should be avoided. * I would've split line 10 in your recursive function into separate lines for readability. Thanks for the suggestions, I've updated 3b accordingly. 
I was thinking about temporaries, because your approach could be undone by different formatting options of the editor. Yeah I like this better, looks cleaner and safer. Thanks! Alex, I'm curious how overflow as result of integer addition would be of concern in this example considering that the "stack" would probably overflow from creating an array large enough to provide overflow-capable values of min and max in the first place. Assuming a 32 bit integer, it could hold values up to +/- 2147483647 (if I'm not mistaken). Given the following: (min + max) would have to be larger than 2147483647...I just don't see how we could be dealing with an array large enough to have indices that large anyways (and certainly not in the example code). Regardless, your method of calculating the input makes sense, as the value of the integer (during calculation) never exceeds its original value...but am I misinterpreting your intention? A couple of thoughts here: 1) int could be 2 bytes on some machines, in which case (min + max) would only have to be greater than 32,767, which is much more doable. 2) The array passed in could have been dynamically allocated, which doesn't use the stack. It's always better to program defensively, even if you don't think something is possible. Save my name, email, and website in this browser for the next time I comment.
https://www.learncpp.com/cpp-tutorial/7-x-chapter-7-comprehensive-quiz/comment-page-2/
CC-MAIN-2020-29
refinedweb
2,612
64.71
I have a file that has the layout: 0 0 0 1 1 0 1 1 I want to read it into a multidimensional array and use the values. The thing is I don't know how to read it in.. I can read the amount of columns and rows the file has.. but I don't know how to read the values. Also I want it to make a Multidimensional array depending on the amount of columns and rows and then read them into their respective cells.. If the file only contains numbers.. How do I make it read the values in as Integers? How do I do it? My Attempt.. It's not homework but it's nice to know how.. #include <iostream> #include <fstream> #include <string> #include <windows.h> #include <sstream> using namespace std; int main() { int col, row; row = 0; col = 0; fstream InFile; InFile.open("test.txt", ios::in | ios::out); string line; while(getline(InFile, line)) { row++; stringstream s(line); string token; while(getline(s, token, ' ')) { col++; table[row][col]; } } //table [row][col]; string table [row][col]; //Create A MultiDimensional Array depending on the amount of columns and rows.. cout <<"Rows: "<< row << endl; cout<<"Columns: "<< col/row <<endl; cout<<table[1][2]; InFile.close(); }
https://www.daniweb.com/programming/software-development/threads/393750/file-to-multidimensional-arrays
CC-MAIN-2018-47
refinedweb
210
73.98
I recently wanted to install a python package via pip, specifically presto-python-client. Then I installed python-pip. Then I tried installing the package: ``` pip install presto-python-client Traceback (most recent call last): File "/usr/bin/pip", line 6, in <module> from pkg_resources import load_entry_point ModuleNotFoundError: No module named 'pkg_resources' ``` And also the same error via sudo. Then I read elsewhere that pkg_resources was part of setup_tools so installed python-setuptools. However I still got the same error. Offline You may have messed up your Python installation. Try to reinstall all Python packages, or at least python, python-setuptools and python-pip. Offline It should be entirely impossible to install python-pip without having setuptools be automatically installed as a dependency... Did you do a partial update? Do you have part of your system installed as python3.7 and part of your system held back on python3.6? Managing AUR repos The Right Way -- aurpublish (now a standalone tool) Offline
https://bbs.archlinux.org/viewtopic.php?id=240425
CC-MAIN-2019-09
refinedweb
162
58.89
To way behind Europe; so has our poor. Why has more than their fair share of the growth gone to the wealthiest people in America? Is it because 38% of Americans who pay income taxes (that is 95% of the poorest people minus the famous 47% (most of whom work or are retired or disabled) that don't pay income taxes) sit on their arses and contribute nothing to this nations productivity? Based on the popular rhetoric, the 47% obviously deserve nothing since they contribute nothing toward growth. (For those who missed it, that is satire.)

Define "fair share", please, without interjecting personal opinions into it?

I am glad you asked; it needed defining. If economic growth (GDP) increases 3% plus inflation, then the median income of each quintile except the bottom ought to increase by about 3% plus inflation as well. I say except the bottom 20% because that is the stratification that contains the largest number of minimum wage workers, retirees, disabled, unemployed, and yes, the welfare cheats as well. The first of the bottom 20% are caught politically and depend on Congress for raises, as no company will give them one, and the next two (and similar) of those groups in the bottom quintile should grow by at least inflation but not necessarily at GDP (again determined by the vagaries of the stock market, Congress, union contracts, or other such entities that control their retirement income) since they weren't productive in the growth itself other than creating demand.

It is demonstrable (in other hubs of mine) that this has not been the case since the 1980s, and between 2007 and 2010 the middle class median income fell dramatically in the Great Recession and its aftermath, while the long-term rich (meaning they were rich prior to 2004) only took a one-year hit in 2009, I believe. (Just to be clear, I exclude those who got rich flipping, selling, brokering, or mortgaging houses, or got rich on paper because of the housing bubble, who went bust when the bubble did.)
Would you accept value, in terms of, say, 1950, of goods that can be bought rather than income? If GDP has doubled since 1950, should we be able to have a car priced at double what we could have had in 1950? A home priced at double the cost of the 1950 one? Or should it be based on value, where the home doubles in size, regardless of cost? (Potential problem with value, as people assign different values to things.) And at the very upper end, the very rich, how do you justify that as competition doubles and triples, their salary doesn't (by your method)? (Population doubles, but the number of top CEOs remains fairly static.)

I think you are talking apples and oranges, @Wilderness. I am not sure how "value" or population plays into the question you asked me. Are you suggesting alternatives to GDP to measure what is "fair"?

Yes. I look at the middle (or bottom) class and see that they have far, far more in the way of luxuries than they did when I grew up. Are they poor? And the rich probably "deserve" more than a simple increase of GDP would indicate. When top salaries are tied directly to profits/stock price (whichever seems more important to the BOD), then the top earners would seem to earn their astronomical pay, regardless of what you think is "fair".

I read your post to be a condemnation of the rich, and our current capitalistic structure that seems rigged to make them richer - at the expense of everyone else. Did I get it right? With that perspective I will jump in, although Wilderness posed the most pertinent question for you first... what do you think is their "fair share?" But I will follow by asking if you really meant to say what you said... Removing the parentheses that explain how you came up with the 38% figure, your sentence then reads: "Is it because 38% of Americans who pay income taxes.......... sit on their arses and contribute nothing to this nations productivity?" That's not what you really meant, is it?
I know, I know, it's picky, but if you are going to slam someone, at least get the someone right.

Yes, the system is rigged. Success is rewarded. Risk taking is rewarded. Yes, the system is rigged as I take your inference to be. There are too many tax breaks that benefit only the rich. What's so wrong with a luxury tax on those multimillion dollar yachts and super cars? Why shouldn't capital gains be taxed just like a wage earner's paycheck; why do the rich get a lower rate just because their money doesn't come from an hourly wage? Why is it fair if a rich man's $5 million dollar tax payment was only 15% of the money he made that year, when Joe the Plumber's $3300 tax payment was 21% of his adjusted gross income? It is the percentages that matter, and not the actual dollars into the treasury - right? [satire?]

That was fun. To offer a second, realistic comment on what I perceived to be your point... yes, monied interests (rich people, rich companies, rich organizations) have bought favorable tax and business rulings - but that isn't the whole picture. Is it fair to blame them, or should you be blaming the politicians they bought? GA

Have to head up to Virginia in a moment, @GA, but no, you didn't get it right. Between 1950 and 1980, when we had the same capitalistic structure, the benefits of growth were much more evenly distributed among the income groups. While the rich still got richer, it was at a much slower rate. The rate was low enough to allow the middle class, in terms of numbers of them, to keep on expanding; such is not the case today. I have nothing against the rich, I am one, barely; what I do condemn is having the game rigged to keep everybody else from attaining that goal, and that is the way it is today. We need to go back to the way it was (a nice conservative idea) in the 1960s and 1970s, covering both Democratic and Republican administrations.

A funny thing with going back is that it never quite looks the same.
When the growth of the '50s and '60s was taking place there was a housing boom. We are decreasing the birth rate, so the engine that created many of these jobs has subsided as well. Manufacturing was booming as well, but much of that has been shipped offshore. Some manufacturing is returning, but that is stymied by mechanization, which cuts back on jobs as well. So what are we to do to increase the income of the poor and middle class? Just take what we need from those who foresaw and directed their efforts accordingly? The disconnect between those that invest and risk and those that physically produce is what is at the heart of the issue. Should there be such a disparity because capitalism makes it possible, or is it that capitalism lends itself to this divide because it wants to? Who could regulate or mediate such a thing? Since unions seem to be the dirty word to many in this situation, who is to effect a change to get it back on track more equally? An interesting dilemma if ever there was one.

Correct me if I am wrong, @Rhamson, but you seem to be saying, with your birth rate comment, that population growth in the U.S. has gone negative, for you are absolutely right that population growth is the engine for economic growth. Or, are you saying the decline in manufacturing jobs has led to a decline of overall job availability?

It is a combination of both, but not in conjunction with each other. That is a startling thing, as you would expect the decline in population growth to affect job availability. But the massive exodus of jobs due to NAFTA and now the TPP has steadily eroded the job market.

The problem is, @Rhamson, the labor force has increased from 104.9 million (64% of those eligible to work; some, like housewives and housedads, choose not to work outside the home) in 1979 to 155.5 million (63%) in 2013. Are these the numbers you are thinking of, or have I missed the mark?
Those eligible to work, which is a proportion of population growth, has grown from 164.9 million to 245.7 million over the same period.

Eligibility is a funny benchmark on which to base your numbers, one reason being that "eligibility" is a subjective term. Mixing in eligibility implies that in all two-adult households both spouses should be working, while childcare and the types of jobs that would merely pay for the childcare make it a wash. So if one spouse chooses to work because they can bring in more income, and the couple saves on the childcare, is that computed into your calculations? How about those that are disabled but require government assistance to pay for their disability, where having a job would greatly impact their benefits, or where losing their entire check to pay for healthcare would also be a wash? The other thing to consider is what types of jobs are being considered available. Are they fast food? I know several people employed at two or three of these jobs to make at least a livable income. So many variables are not captured by an "eligible" label when determining how much of an increase in jobs there has been.

"Eligible" is my term for the Bureau of Labor Statistics' "Civilian noninstitutional population: Persons 16 years of age and older residing in the 50 states and the District of Columbia, who are not inmates of institutions (e.g., penal and mental facilities, homes for the aged), and who are not on active duty in the Armed Forces"; it is shorter to write, in my opinion. From that population comes the "Civilian Labor Force" (which I called "labor force"), defined as "All persons in the civilian noninstitutional population classified as either employed or unemployed." The calculations are simple and straightforward and do not involve any of the complexities you brought up. Your point was that population and the workforce have declined over time (unless I misunderstood you); my point is that population and workforce have increased over time.
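The parenthetical percentages quoted two posts up are just the BLS participation-rate calculation; they can be verified directly from the figures given in the thread:

```python
# Labor force participation rate = civilian labor force / civilian
# noninstitutional population, using the figures quoted in the posts
# above (millions of persons).
labor_force_1979, eligible_1979 = 104.9, 164.9
labor_force_2013, eligible_2013 = 155.5, 245.7

rate_1979 = labor_force_1979 / eligible_1979
rate_2013 = labor_force_2013 / eligible_2013

print(f"1979 participation: {rate_1979:.1%}")  # ~63.6%, the "64%" quoted
print(f"2013 participation: {rate_2013:.1%}")  # ~63.3%, the "63%" quoted
```

Both the labor force and the eligible population grew substantially over the period, while the participation *rate* stayed nearly flat, which is the point being argued.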
Based on the information you have supplied, and notwithstanding the exceptions I mentioned, you are correct. Oh, if it were that simple. Let the bean counters take over and machinate the data so that we all fall into their columns of information. If we were to follow a strict regimen of data with no extenuating circumstances, we could boil the whole problem down to one of work or die. Eligibility is the crux of the matter when calculating those that "choose" not to work. Even with the statistics you mention, productivity is sky high while wages remain low and good-paying jobs are at a premium. So if you want to spit out raw statistics without any analysis that takes into account all possibilities, I guess you are right.

Am I to take it from your response, @Rhamson, that you are one who does not rely on empirical evidence to ascertain the truth of such statements as "the population of the United States is declining" or "the number of jobs is increasing in the United States"? If my assumption is correct, what is it then that you do rely on to back up those assertions?

No, just your data. Looking it up from several sources, there is reported a decline in the birth rate attributed to the Great Recession, and fertility numbers are on the decline, as well as the general world population.
- … tes-slows/
- … e/1880231/
As far as jobs, there are plenty, but the ones that you can live off of are harder to find.
- … s/2613483/
I don't discount surveys and data, but I don't totally rely on them to make an educated guess at what the heart of the issue is. The US is losing good-paying jobs, and to get to the next economic tier you have to spend a ton of money on education. That is not a sure thing as well. So if you wish to make me the fool somehow, go ahead, but relying strictly on your data I would say you win, if that is what you are looking for.

You have made two good points there.
The first, "there are plenty of jobs," is often ridiculed by pointing at the number of unemployed Americans (although "plenty" is validly subjective), and the second, "but the ones that you can live off of are harder to find...", is often ignored in these types of conversations - even when the caveat "underemployed" is included in the rationalization. I think the problem with discussions about minimum-wage-type jobs is one of compassion. Folks value a good work ethic, and a person working a full-time, or even two part-time, minimum wage jobs is certainly demonstrating a good work ethic. So compassion kicks in and folks say it isn't fair that someone holding a full-time job can't make enough to live on. I feel that compassion too, but I also realize that minimum wage jobs are such because that is the value of the task(s) involved. Demanding an artificial value be assigned to them so they can support a family is not a realistic demand. Need should be an achievement motivator, not a yardstick. GA

+1 It is astounding how many people seem to think a job is worth whatever it costs to maintain a good lifestyle for the worker and his or her family.

It is also surprising how many employers think a job is worth however little you can get away with paying for it. Neither viewpoint gets at what a given job is worth, which is the value that person's function adds to the bottom line. The trick, of course, is trying to figure that out, which often can't be done. Consequently, it falls back on the law of supply and demand; and if the supply of labor far exceeds the demand for it (which it normally does), then employers can, have, and will pay slave wages if they are allowed to. That, of course, leads up to the two sides of our political debate; the Right side wants to allow the employers to do just that and the Left side does not.

Applause. The mythical "living wage" then is no more than a myth when trying to argue that all jobs should be paid that.
That the left continually tries to force employers to pay that mythical figure just indicates ignorance of how things work. The next hurdle is to understand that if employers all paid what every job adds to the bottom line, there would be no profits left. Salaries MUST be less than the profit from that job or the company is better off without it. That the left calls this "slave labor" just indicates ignorance on their part; without profits there ARE no jobs.

@Wilderness, "bottom line" includes the firm's internal rate of return (which includes profit expectations, among other things). Consequently, my comment stands as is.

In smaller companies, maybe, but big corporations like Walmart literally have no excuse. In Walmart's case, if you were to have even their lowest employee make $10.10/hour (rather than the current ~$8.25) and pass all the costs to the consumer, it would increase the price of a few goods by about a cent. You can't seriously say there's any reason not to have a wage hike in that kind of scenario--you're intentionally keeping people in abject poverty just so you'll save a few cents on a copy of Transformers 5.

OMG! Another Walmart-is-Satan post. And all those 1.4 million (+/-) jobs they provide are just devil bait to trick us into letting them into our midst. But why just Walmart? Why not use the same accusation against all those mom-and-pop businesses? After all, right is right; why should affordability have anything to do with it? GA

Didja read my post? I said smaller businesses couldn't get away with paying higher wages, but big businesses like Walmart easily could. The point is that those huge companies are affluent enough to be able to treat their workers right, yet they don't. No living wage, benefits, or stock/investment/401k/whatever options for you (never mind the fact that we'd never notice the difference in our wallets if we did offer those things)!

Yep, I read your post.
From it I got that because Walmart is big and successful they should do what you see as "only right," but a successful mom-and-pop doesn't have to do what is right because they don't have Walmart's bucks. Isn't that the same as saying doing right depends on the size of your wallet? And isn't it the same as saying mom-and-pops are paying slave wages, but it's OK for them to do it? There have been plenty of evil-Walmart/good-Walmart threads already, so I won't hijack this one just to begin another one. But I will say there have been plenty of reliable rebuttals to the "facts" as your linked article presented them. So, we all get to pick the version that suits our perspective. GA

Not necessarily, because a mom-and-pop store really has no choice about what wages it can offer, whereas Walmart does. The other big difference involved is what kind of work atmosphere both kinds of stores provide. I can attest to this, because I've worked for both Walmart and Riverside (which used to be a local grocery store chain). At Riverside, business is usually pretty slow and steady, so even if your pay is low, so is your workload. I worked in the deli, so my job consisted of bringing out some meat or cheese, cutting and weighing for customers, washing a few dishes here and there, and setting up the display. It was very low-intensity work, and if it weren't for damn workplace politics and, well, the fact that the store went out of business due to the addition of a second Walmart and a Martin's (another grocery store chain) within walking distance, I'd probably still be there today. The pay sucked, but the work was easy, so I liked it.

At Walmart, it was a different story. There were no workplace politics, my co-workers were great, and I got to work in Electronics (movies, video games, and computers are my specialty!), but the workload suuuuucked.
Even if there were four of us on the floor, there was always a backlog of crap that needed to be done (and RIGHT NOW), leaving almost no time for zoning and customer service (you know, kinda the most important things for a clerk to be doing?). I was doing a job that was much, much more stressful than when I worked at Riverside, yet my pay was almost identical ($8.65/hour vs. $8.90/hour, though keep in mind that my job at Riverside preceded my job at Walmart by about 8 years). And on those occasions at Walmart where I'd get called to run register up front, it somehow got even more stressful. Long lines of people with carts full of stuff, and Corporate demands that cashiers be able to scan and bag ~45-55 items per minute (which sounds easy on paper, but is RIDICULOUS in practice), mean you're in for a bad time. And God help you if you take a minute or two to catch your breath.

That egregiously long-winded hogwash was necessary just so I could say this: From my own experience, local/small businesses often can't really afford to pay high wages to their employees, but they're often low-intensity jobs that are more lenient towards the worker and encourage cautious, methodical performance. Larger businesses like Walmart, on the other hand, tend to be high-intensity with little room for caution or methodical work (which many like to blame on managers, but it's actually mostly because a bunch of suits who've never worked on the sales floor in their lives push idiotic standards on the poor managers who have to adhere to them). Thus, a higher wage is not only logical because it can be afforded, but also because the workload is often far more egregious than in smaller stores. Trust me, if there were still a Jubilee's or Riverside open around here, I would've chosen to work there over Walmart in a heartbeat--I mean, what kind of moron would willingly choose to do twice as much work for the same low wage?

Wonderful response, @Zelkiiro,
and I bet you gave better customer service at Riverside as well. In my company, with only a few exceptions, we start at $10 - $12/hr, depending on location, for entry-level jobs. We don't have to (except in Douglas, WY, where the oil boom is driving wages through the roof), but we choose to. We owners make less money, but we adjust our business model accordingly, the company makes a nice profit, and we like doing the right thing.

Then why does my small business get away with paying starting wages 33% higher than Walmart, while my partners and I still make a very nice salary, and the company a profit, when gross profits run around 40%? Go figure.

I think my response to your response to my response regarding the "Walmart is Satan" response answers your question in this response too. Or at least that is my response. GA

hehehee! (btw, regarding my CFO, we both seem to be able to separate our politics from our business activities, although we both shake our heads wondering what planet the other came from.)

While WalMart may indeed be able to spread that cost out over millions of SKUs, what is to become of the small mom-and-pop store that sells specific things like camping gear, or fishing tackle, in a small town? If they do not have millions of SKUs to spread the extra $2.00 an hour over, they will have to raise their prices, giving even more business to the evil big-box store... (who happen to employ millions in this country, who have made it possible for the shrinking dollar to go as far as it still does for most of the middle class, not to mention the hundreds of thousands of management-level folks they employ and pay extremely well, and who keep the economy going by buying homes, cars, and furniture - but, yes, screw them for all they have done).

@GA, I would offer that your perspective is backwards. Walmart needs those 1.4 million employees to put money in the shareholders' pockets. Let's make sure we understand that Walmart is not being altruistic.
Their motive is solely to make money; unfortunately, the most efficient way to do that is to offer the lowest wages and the cheapest (price-quality) products that the market will bear. They are amoral when it comes to employees and customers; each is merely a means to an end. They will give just enough to maximize profit. On the other hand, small to medium-size companies are often just the opposite. My answer to @Zelkiiro regarding my company is not unique; in fact, at least in my observations, it is more the norm. I think it is safe to say that most large corporations are virtuous at making money, but not when it comes to the moral virtues, while small to medium-sized companies may be a bit less virtuous at making money, but the trade-off is being much more morally virtuous. I find it rather ironic that moral virtues are high on the Conservatives' list of important things when it comes to individuals, but disappear when applied to the groups of individuals which comprise a corporation.

I don't think I have it backwards - I just disagree with the stated premise and posed a "why not sauce for the gander too" perspective. I agree with your response. The anecdotal experiences in my lifetime agree with your thoughts too. So is the next appropriate question why this is so? I venture that it has to do with the business structure. Small businesses, whether sole proprietors or S-corps, or whatever - small businesses is the point - typically have one, or maybe just a few, owners that are typically hands-on and more focused on making the business successful - with the profits following success. On the other hand, large corps, again I venture, are already (mostly) successful, and have a lot of shareholder owners that are such just for the profits - not to be the parent of a successful business. Hence, the duty of a corp to its shareholders is to maximize profits, not to be a moral entity. Of course there are exceptions.
In the great Walmart debate, folks like to point to Costco as a model to follow. So is a large corporation wrong (not doing what's right, by some folks' standards) for following its implied (and sometimes specifically stated) mandate? Is it really a virtues question? To beat up on corps like Walmart seems (to me) like criticizing the car for a DWI offense instead of the driver. Generally speaking, of course. Don't hit me with the Enron club. GA

You are right, @GA, "plenty" is a relative term. A few days ago, I heard that there are now, on average, .38 (that is, point 38) job openings for each person looking for one. (That, I believe, is actually almost getting back to normal.) A year ago, it was .25 jobs for each person looking; and four years ago it was .025 jobs per person looking.

"I feel that compassion too, but I also realize that minimum wage jobs are such because that is the value of the task(s) involved. Demanding an artificial value be assigned to them so they can support a family is not a realistic demand."

I agree with this statement completely and wonder why it is that so many of these minimum wage jobs are fast becoming the norm. Is it that they are low-skilled and therefore easily filled? Does that, coupled with their abundance, make them notoriously subject to the low wage? There are many jobs out there for college-educated people, but with the cost of student loans eating into the aggregate wage you could earn, is it cost prohibitive, especially when you don't know for sure you can get that job? Is it fair to make employers make up the difference by taking less in profits and paying people more? With the great exodus of manufacturing jobs to foreign labor markets, are the people whose skills are now dormant left out in the cold, or do they just earn less at a fast food or retailer job? Some of the jobs are coming back, but we now find that robotic replacement is still putting these skilled laborers out of work.
So what is the answer is the only question I ask, and how do we get there without years of recessions and poverty?

Your last question is the nut that no one can crack - yet. And it might make a good forum topic by itself. Rather than having conflicting ideologies butting heads, maybe they could be discussing their views on solutions.

You start it and I will jump right in.

I will even give you a head start:
I do not think it is a jobs tax break/incentive solution.
I do not think it is a more business deregulation solution - generally. (I am sure there may be specific deregulation issues worthy of consideration.)
I do not think it is an increase in government welfare/safety net programs solution. (And certainly not MMT's JG (Jobs Guarantee) idea.)
I do think it is going to require some type of government/private sector program (I can't believe I am thinking this), along the concept of FDR's CWA (not CCC or WPA) program.

There, just start your thread with this response and I bet you could charge admission. Hmm... never mind, if I can charge admission, I'll start the thread myself. GA

Now, @Rhamson, I don't doubt the "lower birthrate" assertion; that has been true for a long time, but that, in and of itself, doesn't mean a declining population. It can be, and has been, offset by a declining death rate and an increasing average age of the population, as well as an increase in the age at which people can still be effectively employed; I am almost 67 and don't plan to quit working at something productive until I drop dead. The same is true of "jobs"; there are simply more of them year after year, when you aren't in a Great Recession. However, when you qualify it with "... ones that you can live off of ...", then you have a whole new ballgame, and that gets at the heart of my question.

Yes, I knew I did not get the intent of your statement about the 38% right - it was satire and I was playing.
But I do think I got it right that your inference was a condemnation of our current capitalistic system. And since I have not put in the research that you have (i.e., your hubs on the topic), I am just "shooting from the hip" with reactions based on perceptions formed by past observations and discussions. I think rhamson's response is leading in the right direction. I think the "old" middle class was composed of a majority of folks that produced something - either via physical labor (factory and production workers) or management of physical labor and business activities (managers, supervisors, foremen). The world is changing. The workforce needs have changed (and continue to change), and our economy has changed. History also shows us corruption and money buying privilege have been with us from the beginning. In human activity it is a constant - not cyclic. What I think has changed is that this time the mechanical and technological changes of the cycle are more acute than any past changes. Our economy has moved, and is continuing to move, away from a manufacturing and producing economy to a service economy. Less physical work and fewer physical bodies are needed. Both reductions hit the middle class more than any other segment. What to do, what to do.... GA

Of course I was being sarcastic again with the 38% comment. The point of that comment is that it isn't only the wealthy who create growth in this country; it is everybody who works and is productive. As such, anybody who contributes to growth ought to share in that growth roughly equally, for if any one part fails to be productive, then there is no growth. As to the value of the work that is performed, the theory is that the wages and bonuses an employee receives are equal to the contribution they make to a firm's success, and the profit the firm makes is the reward for the owner's investment. Now you know and I know that is only theory.
The only employees who get paid their worth are generally those who are mobile and can switch jobs if needed. Those who are locked into their jobs are often paid less than what they are worth, while those who run the company use that money (at least in the last few decades) to pay themselves "rent" wages, which are wages over and above the wages that represent the value they actually add to the company's bottom line. That is one example of how the game is rigged; being rewarded for success is not considered rigging the game by anyone I know, nor is being fairly rewarded for the risk you take. But discrimination, race or gender, is. Unfair labor practices are. Anti-competition practices are. I hope you really don't believe these and a myriad of other methods don't go on in this country in a very big way, especially as the regulations designed to prevent them are gutted or repealed. As to who to blame, you blame both for their total lack of morals.

And about your satire, it all depends on what kind of country you want to live in. One that was somewhat equitable although highly racist in the '50s, '60s, and '70s, or one more like the 1870s, when there were no taxes, people lived in company towns, and you had a small wealthy class, a small middle class, and a huge lower class. The 1870s model is the logical end result of the tax system you seem to favor because, mathematically, it can end up no other way under unregulated capitalism. You have, to a large degree, the kind of flat tax system where the effective tax rate of everybody, except those near the poverty line, is around 17% when all is said and done; so what is happening? We are moving quickly to the 1870s scenario of a small upper class and a very large lower class with not much in the middle.

Your response shows that we have different perspectives that probably preclude either of us agreeing with the other.
Or else maybe we are in semi-agreement and just talking past each other because we see different culprits. *shrug*

"...As such, anybody who contributes to growth ought to share in that growth roughly equally for if any one part fails to be productive, then there is no growth."

Roughly equal to whom? Their co-worker peers? The company owners? For simplicity, let's use a company as an example:
- The owner risks life savings to start the venture.
- A key employee in sales risks health and family harmony working a grueling schedule - 18-hour days - to make the sales that make the company successful.
- A diligent salaried shipping dept. mgr. keeps the product flowing to the customers.
- Several packers earning above minimum wage do a good job preparing the product for shipping.
- And a minimum wage janitor keeps the trash cans empty and the floors swept.

Here's what I see as an equitable sharing of the rewards of contributing to the increased productivity (financial success) of the company:
- The janitor continues to earn his minimum wage - because that is all the job is worth, regardless of the success of the company.
- The packers continue to earn their above-minimum-wage rate - because they are doing the job they were hired for and paid to do. They are paid above minimum wage because the owner sees the value of their job as being worth more than minimum wage. Although, a nice Christmas or "I appreciate your good work" bonus would be beneficial to both the owner and the packers. This is not an unheard-of practice.
- The shipping mgr. would probably get a raise or a bonus. His job is the most crucial to the success of the company - of those discussed so far.
- The sales guy would have earned a rewarding amount of commissions, and probably some new company perks - new office, car, trip, etc. - but his commissions are his real reward, because that is the way his productivity compensation was established between him and his employer.
The owner reaps the rest as his reward for the initial risk. I would also include the caveat that a smart owner would "spread the wealth" a little to key employees, because he is successful because of them.

That example seems like a fair, if simplistic, description of the components of a company/economy. Yet there are no "roughly equal" benefits of success. How do you see that crew sharing equally in the success of the company, the benefits of their productivity? If an employee is "locked" into a position, wouldn't the reason for being locked in have a bearing on their compensation? And why do you state they automatically get paid less than what they are worth? Of course my "rigged" was a tongue-in-cheek inference to geared-for.

"... But discrimination, race or gender, is. Unfair labor practices are. Anti-competition practices are."

Now where the hell did that enter the conversation as being part of the rigged system you spoke of? Are we to consider illegalities as "unfair rigging" now, or as the law-breaking acts they are?

"I hope you really don't believe these and a myriad of other methods don't go on in this country in a very big way, especially as the regulations designed to prevent them are gutted or repealed."

OK, just one soup per bowl please. You initially spoke of our system as being rigged for the rich, and now you are lumping illegal acts in with ones that you think are "just unfair." As for the "gutted regulations".... ahem... wasn't that my point about bought politicians and purchased special favors?

"As to who to blame, you blame both for their total lack of morals."

Was that a generic "you" or a me-specifically "you"? And who is the both that I/we are blaming? If it is the monied interests and the bought politicians - I blame the bought politicians more.

"...or one more like the 1870s when there were no taxes,"

Geez Louise! Now you want to bring the 1870s into the conversation? And how do you know what tax system I prefer?
I also hope your "flat" reference was to flat as in one-size-fits-all, because if you are referring to the frequently proposed "Flat Tax" system, then you either misspoke or are ill-informed. Do you really consider our current capitalistic system unregulated - as you stated - or just under-regulated, in your view? It is good talking with you again, but you really should be less jumpy with the assumptions about what I think or prefer. Sometimes you get the bull, sometimes you get the horns. GA

Equal as in back to my original point: 3% growth translates into roughly a 3% increase in median/mean income for the top four quintiles. All of those other things you mention are theoretically taken care of by paying people a wage equal to their contribution to the company's success, no more, no less. Those who risk their life savings get a return on investment equal to the degree of risk assumed. That is the way it is supposed to work in the perfect world. In the real world, those at the top get paid more than their contribution, and those at the bottom get paid less than their contribution. Large corporations often receive a larger ROI than the risk assumed, while small firms, like mine, receive smaller ROIs (at least for my partners) than the risk they assumed.

Twice as many cars are made, raising GDP by 3%. Why are the janitors entitled to more pay for doing the same job they were doing? Because more cars are made? Doesn't make sense to me.

OK, so it is a difference in perspectives. My world does not see the logic of percentage of growth as having any bearing on "fairness" - relative to this topic. But just for kicks... how do you determine the relative payoffs of those you mentioned? How do you peg their contribution? Their compensation? Surely you are not advocating an ROI pegged to GDP increase percentages? Risk your life savings, or a company's future, or your reputation, for 3% (or whatever the percentage)?
If GDP is negative, do wage earners (those that don't lose their jobs) lose a percentage of their hourly wage? The owners will probably lose a portion of the value of their risked investment. Is it "fair" that they alone should suffer a loss? A corp. will probably lose sales, which in most cases will mean reduced profits, which also in most cases will mean a drop in stock price, which will also probably mean a loss of income to shareholders - so is it "only fair" that the wage earners of that corp. also lose a proportionate share of their hourly wage? Bottom line, for "fairness," do wage earners take the same reduction hit that the risk takers suffer? And why do you feel there should be ROI parity between large and small companies/corps? GA

To your bottom-line question, just a quick glance at the results of the Great Recession of 2008 should answer it; wage earners took orders-of-magnitude greater reductions than the risk takers suffered. A few hundred non-financial businesses went BK, yet over nine million Americans lost their jobs. The long-term wealthy took a one-year decline in their income while everybody else was stuck with a five-year decline. When things go south in a business, the managers and owners take a hit in their income at worst, or see the growth slow down for a while, but the general labor force sees a total loss of their income as they are laid off; yeah, I would say they lose more than a proportionate share of their hourly wage, assuming you consider 100% a lot. If GDP goes negative, you betcha wage earners take a big hit; median income declines as they get laid off. Wage earners get a 100% loss while owners and managers may only get 50%. Nope, ROI and GDP are disconnected. But tell me why this example should be the case.
Assume the following:

GDP = $1 trillion
Total Payroll = GDI = $1 trillion, broken out as follows:
- Top 20% = $700 billion
- 2nd 20% = $125 billion
- 3rd 20% = $75 billion
- 4th 20% = $60 billion
- Bottom 20% = $40 billion

Now if GDP grew at 3% or $30 billion, then payroll probably grew around 3% as well since GDP = GDI (Gross Domestic Income). That means payroll grew $30 billion as well. What I am suggesting is that total payroll, at the end of the day and assuming the bottom 20% gets a 3% growth as well (to keep the math simple), should look like this:

Total Payroll = GDI = $1 trillion + $.03 trillion = $1.03 trillion, broken out as follows:
- Top 20% = $700 billion + $21 billion = $721 billion
- 2nd 20% = $125 billion + $3.75 billion = $128.8 billion
- 3rd 20% = $75 billion + $2.25 billion = $77.25 billion
- 4th 20% = $60 billion + $1.8 billion = $61.8 billion
- Bottom 20% = $40 billion + $1.2 billion = $41.2 billion

What you suggest would be just fine and perfectly fair is as follows:

Total Payroll = GDI = $1 trillion + $.03 trillion = $1.03 trillion, broken out as follows:
- Top 20% = $700 billion + $30 billion = $730 billion
- 2nd 20% = $125 billion
- 3rd 20% = $75 billion
- 4th 20% = $60 billion
- Bottom 20% = $40 billion

Somehow, that doesn't seem "fair" to me; it may to you, but it doesn't to me.

You certainly provide a lot to chew on... and I hope you won't think my editing is to obscure your context. These were relative to my response about a negative GDP... (I think)

".....results of the Great Recession of 2008 should answer that question; wage earners took orders of magnitude greater reductions than the risk takers suffered. .........the long-term wealthy took a one-year decline in their income while everybody else was stuck with a five-year decline.
......the managers and owners take a hit in their income at worst, or see the growth slow down for awhile, but the general labor force sees a total loss of their income as they are laid off; .....you betcha wage earners take a big hit, median income declines as they get laid off. Wage earners get a 100% loss while owners and managers may only get 50%.

Of course it may just be me, but it does appear you have a bias against folks with money or businesses. Is it the long-term wealthy's fault that their financial situation allows them to rebound quicker? Obviously I don't think so. An owner provides multiple jobs, pick a number, 1 to 1000. If he loses the business, all the jobs are lost too. If he takes a "hit" to income (profit), is it hard to understand he cannot afford to pay for all those same jobs now? Are the lay-offs his fault? It sounds like you think if a janitor or a manager loses their job - then the job providers should lose theirs too. What is that, an eye for an eye? Justice?

"If GDP goes negative, you betcha wage earners take a big hit, median income declines as they get laid off. Wage earners get a 100% loss while owners and managers may only get 50%"

Damn, life just isn't fair, is it.
Without applying any tint of malfeasance in your explanation, it sounds like you are lamenting that a manager that has worked for years to improve their job skills and lot in life (or the owner that risks their life's investment - yes, I know that is simplistic) doesn't suffer the same financial effects on their life as your fifth quintile (the janitors and minimum wagers).

To your example, and I think the folks in my first example will fit your quintiles nicely for this illustration:

"What you suggest would be just fine and perfectly fair is as follows:" *that means me

"Total Payroll = GDI = $1 trillion + $.03 trillion = $1.03 trillion broken out as follows:
- Top 20% = $700 billion + $30 billion = $730 billion
- 2nd 20% = $125 billion
- 3rd 20% = $75 billion
- 4th 20% = $60 billion
- Bottom 20% = $40 billion"

From the bottom 20%; the janitor in my example

These would be entry-level or minimum-wage earners - people without valuable or improved job skills - generally (of course there will be exceptions), doing jobs that only provide minimum-wage-type value. Without an improvement that makes their labor more valuable, worth more, why should it be considered unfair if they do not receive an income increase comparable to the coming examples that do offer more value?

The 4th-level 20% - the product packers

These folks are above the bottom 20%, they have more valuable job skills, and they earn more. Although they may (and frequently do) receive something more - even if it is a token amount or perk - if they have provided no additional value, or have not improved their job skills (if they do they will migrate up to the next higher quintile potential - to manager or supervisor), why do you think them to be entitled to an increase in income? They are being paid for what they were hired to do.
If they do nothing to improve their value (yes, their work is valuable, and they are hired, agreed to, and are being paid that value), is "just because it's only fair" a valid reason to insist on parity?

3rd, 2nd, and top, again generally speaking, are where the job skills or personal abilities are more valuable, where their efforts have more of an impact on the company's success or failure, and where the people in them are more financially stable. I think they would see a piece of that 3% increase.

So - you can see where I am going with this. While I agree there are inequities - some through corrupt influence (I am back to the purchased politicians) - most are just because "that's life." It's not unfair. It just is. And that is where our perspectives differ - you appear to think share and share alike is the only fair way to look at things, the only fair method of compensation. Obviously I don't. I think value is the fair way to determine compensation. So if GDP increases 3%, yet an earner's value has stayed the same, their contribution has stayed the same - it appears you think it is only fair they get the same 3% increase just because they are still there.

Of course I realize you are speaking to the macro level, and my example and explanations are of a micro level - but I think that realistically both should be a reflection of the other - so I am talking about the same apples you are.

On the other hand... if you wanted to direct your observations and evaluations and sense of "fairness" at specific guilty parties (plenty of targets in the financial markets and rich-people circles), and not at the general groupings of "The Rich" or "Big Business" - then you will hear quite a different tune from me. But as for an equal distribution of a percentage "just because it is fair" - that is not something I think is "fair." GA

I have read this stream and have enjoyed the give and take very much.
I would offer one concern that I did not see addressed... I have heard from several places that 10,000 baby boomers are retiring every day for the next 10-15 years. My first thought on this was the utter collapse of the SSI system. There were 64 Americans working for every 1 who got a check when SSI started; this number is closer to 3/1 now.

My other thought was that 300,000 jobs should be opening up each month for the conceivable future, and that alone should cause an incredible decline in both the unemployment rate and the long-term, no-longer-reported, unemployment rate. Seems to me any President in office should, in the very near future, appear to be a genius when it comes to job growth. It can't be that all these open jobs are being eliminated through attrition, can it?

As these are retirees that are leaving the work force, retirees who historically are not in a buying mode of their life (they often times have their homes paid for or their finances structured to where a mortgage is about all they have, not a lot of lavish buying going on), it is young people taking those jobs (the same jobs that don't seem to show up on any new job reports, mind you) who are buying first homes, better cars, financing everything they get. This should be showing up in economic growth reports but it is not...

My point is that this will not be job creation but job replacement, but either way it should reduce these numbers in unemployment and increase economic growth regardless of who is in the White House. It should also start a new migration of economic mobility as people start careers. Could this ebb and flow of life cycles now produce a new 1950's-style movement of both income and mobility? 10,000 a day is a huge number and a heck of a lot of financial change... I believe we will continue to see the top money makers increase the gap, but I also believe that comparatively, we will see a second 1950's-style economic boom, and I hope we do, as 1/5th of the nation will be of SSI age by 2050.
The size of the gap isn't as important so long as those in the middle are gaining ground as well, in my mind.

That seems a logical progression, but my first thoughts are that I don't think the logic works in this case. Following your premise that the baby boomers are leaving the workforce; think of the point from which they are leaving - probably from the middle to upper rungs of the career ladder. And these jobs are probably much more likely to be filled by their juniors moving up, if they are filled at all. Think of why companies have early retirement plans... Leaving whatever new jobs available being those at the bottom - entry level, or even minimum wage level. Not the impact to the shrinking middle class My Esoteric spoke of. Plus, the baby boomer retirement wave would have started around 2001 - 2005, and the jobs picture is what it is. Sooooo.... GA

@Average American. I am guessing you saw that headline from Newsmax. The problem is, that is the only place they mention it. If you read the article, they contradict it, as does the Pew Research study it was based on; … rs-retire/ What is true is 10,000 BBs will reach retirement age a day, but only about 40% of those may actually retire; the rest either don't want to (like me) or can't afford to. As to economic mobility, that depends on opportunity, and in your scenario that is certainly possible, but it all depends if the benefits of economic growth are distributed to those who created it or kept by those who control the power and wealth.

GA wrote: I don't think there is a historical precedent for what we are (beginning) to see today. In the past, there has always been a shortage of labor, even as the industrial revolution changed the makeup of that labor. That is no longer true today, at least in advanced economies. Once that demand for labor is gone, it renders our current method of distributing wealth unfair. And that method is capitalism, the give and take between ownership and labor.
The bottom is falling out, and it's bringing the wages of the middle class down with it. Without a sufficient demand for labor, capitalism devolves into a system of "sultans and fanners." Europe's middle class is overtaking ours because Europeans are not so caught up in the mentality that considers every interference by government to be creeping socialism, and hence a bad thing. It's going to take some government interference, probably a lot of it, to keep any semblance of a middle class intact, because there isn't much of a mechanism built into capitalism that will do it anymore.

Today must be my "shooting from the hip" day, since I don't think the above quote is correct, but without knowing how you mean it to apply, I can't be sure.

Always a shortage of labor... During wartime, yes, the labor was occupied elsewhere. During economic boom times, yes, plenty of jobs to go around... But what about the times of depression, recession, "normal" gradual economic growth - where is the labor shortage there? Wasn't the Industrial Revolution an economic boom time for jobs (putting aside the discussion of the "Robber Baron" inequities)? So of course there would have been a labor shortage. And yes, I certainly agree that time frame changed the makeup of labor. Or are you saying there was always a shortage of qualified labor? How does advanced economics change the labor discussion?

As for the historical precedent part - well, as proof of the cyclic nature of the problem, I still think if you look at America's economic history - the proof is in the pudding - labor shortages during boom times, labor surplus during the busts. But if you are speaking of the severity of the challenge to the labor situation, I agree. I don't think the American worker has ever been faced with a future jobs prospect that we have agreed looks to be likely. Just less need for labor... period.
GA

@GA, I would offer that during the bust period, the reason there is a labor surplus is because there was a job deficit caused by the bust. On the other hand, I might argue that during the Industrial Revolution there was a surplus of labor. I say that for two reasons: 1) that was the period of urbanization, when everyone was leaving the farm and moving to the city, and 2) firms weren't competing for labor, they were able to name their own subsistence wages.

Ok, so "shortage" means different things? And to your latter... there was no shortage because folks were flocking to the jobs? Well, normally it is rude, but I really don't mean it to be... it's just the simplest response... Duh! I hope you noticed that I did offer the caveat that the "Robber Baron" practices were a different topic from the "shortage" discussion. Even though you seem to want to combine them to add to the negativity of the situation as you stated it originally. GA

No, I caught the Robber Baron exception, but they weren't the only ones who were guilty of the abuse of labor in those days. Having said that, even though few in number, they monopolized various major industries and therefore controlled a sizable portion of the labor force. But even without them, manufacturing was having its heyday, making money hand over fist, because they could and did pay labor next to nothing.

The middle class has seen the underclass percentage rise to 47%, more or less, from 37%, more or less, under global capitalism. Before unregulated capitalism was allowed to go global, there was a middle class in America. That was when most anything worth having was made in America, not China.
Today I came across a bit of a problem converting an old SQL select statement into JPQL. Essentially the problem was that the original query used a join but my two entities weren't mapped to each other. As an example, let's say we have the following two entities.

@Entity
@Table(name="Animal", schema="test")
public class Animal {

    @Id
    @Column(name="ID", nullable=false)
    private Long animalID;

    @Column(name="COLOUR")
    private String colour;

    ...
}

Let's say the parent entity is a generic Animal class. In this example we also have a Fish class which captures any extra attributes that a fish has in addition to an animal. Maybe the number of fins, for example.

@Entity
@Table(name="Fish", schema="test")
public class Fish {

    @Id
    @Column(name="ID", nullable=false)
    private Long animalID;

    @Column(name="NUMOFFINS")
    private Integer numberOfFins;

    ...
}

Given these entities, with no inheritance relationship or JPA mappings, I needed to convert the following SQL select statement to select all fish that are grey in colour.

SELECT f.*
FROM test.Fish f
INNER JOIN test.Animal a ON f.ID = a.ID
WHERE a.COLOUR = 'Grey'

My first stab at converting this to JPQL looked like this:

SELECT f
FROM Fish f
INNER JOIN Animal a
WHERE f.animalID = a.animalID
AND a.COLOUR = 'Grey'

This is where I hit problems. In JPQL you're not joining tables but entities, and as such you can't join two entities that don't have a mapped relationship. Ideally I would want to make Fish subclass Animal and map the relationship properly, but for now this is out of scope. I managed to solve this by replacing the join with a sub-select.

SELECT f
FROM Fish f
WHERE f.animalID IN (SELECT a.animalID FROM Animal a WHERE a.COLOUR = 'Grey')

JPA doesn't seem to mind this, and you could even change it to use EXISTS.

SELECT f
FROM Fish f
WHERE EXISTS (SELECT a FROM Animal a WHERE f.animalID = a.animalID AND a.COLOUR = 'Grey')

It takes some practice to stop thinking of tables and start thinking of entities!
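As a side note, it is easy to sanity-check that a sub-select of this shape really does return the same rows as the original join. The sketch below uses an in-memory SQLite database with made-up sample data; SQLite here is just a stand-in for the real schema, and the row values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Animal (ID INTEGER PRIMARY KEY, COLOUR TEXT);
    CREATE TABLE Fish   (ID INTEGER PRIMARY KEY, NUMOFFINS INTEGER);
    INSERT INTO Animal VALUES (1, 'Grey'), (2, 'Gold'), (3, 'Grey');
    -- animals 1 and 2 are fish; animal 3 is not
    INSERT INTO Fish VALUES (1, 4), (2, 2);
""")

# The original join-based query.
join_rows = conn.execute("""
    SELECT f.ID FROM Fish f
    INNER JOIN Animal a ON f.ID = a.ID
    WHERE a.COLOUR = 'Grey'
""").fetchall()

# The sub-select rewrite that JPQL is happy with.
sub_rows = conn.execute("""
    SELECT f.ID FROM Fish f
    WHERE f.ID IN (SELECT a.ID FROM Animal a WHERE a.COLOUR = 'Grey')
""").fetchall()

print(join_rows == sub_rows, join_rows)  # True [(1,)]
```

Only fish 1 is grey in this sample data, and both formulations agree on that.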
I will walk through the steps to combine an ASP.NET MVC5 application with an App for Office to allow the app to authenticate using a Microsoft Account or using Facebook. Similar steps could also be followed to authenticate using Google.

Step 1 (Create an App for Office)

Using Visual Studio 2013 or Visual Studio 2012 with MVC5, create a new App for Office application. Note that the web application project which is automatically created and added to the solution includes an html page, and the default Source Location of the app is that html page. We will change this later to reference our ASP.NET MVC5 web application instead.

Step 2 (Create an ASP.NET MVC5 Application)

Add a new ASP.NET MVC project to the solution and click the button to Change Authentication. By default authentication is set to No Authentication, so change that to Individual Accounts.

Step 3 (Copy CSS and JavaScript assets to your MVC project)

Copy the css files from the App and App\Home folders and paste into the Content folder of your MVC project. Copy the js files from the App and App\Home folders and paste into a folder within the Scripts folder of your MVC project. Copy the stylesheet references from the auto-generated Home.html into _Layout.cshtml in your MVC project. Copy the script references from Home.html into either _Layout.cshtml or the scripts section of Index.cshtml.

Step 4 (Remove the HTML Web project)

Delete the auto-generated Web project from your solution.

Step 5 (Configure authentication and register with each provider)

You will edit Startup.Auth.cs with the login providers you wish to support. The class that is created for you by MVC contains commented-out code that can be filled in to automatically add support for each provider. This code makes use of OWIN middleware and requires a reference to each of the providers you plan to use.
using Microsoft.Owin.Security.Facebook;
using Microsoft.Owin.Security.Google;
using Microsoft.Owin.Security.MicrosoftAccount;
using Microsoft.Owin.Security.Twitter;

In order to obtain the properties that this authentication code needs, you must first register your app with each of the authentication service providers. And this requires that you create a developer account for each one.

MICROSOFT ACCOUNT:

The Client ID and Client Secret will be listed under App Settings and should be copied and pasted into Startup.Auth.cs.

Startup.Auth.cs

Uncomment the Microsoft Account section and enter the ClientId and ClientSecret. If your app requires access to particular properties (such as the email address in this code example), that field may be requested.

FACEBOOK:

Startup.Auth.cs

Uncomment the Facebook Authentication section and enter the AppId and AppSecret. Again, this example shows the syntax to request the email address.

AccountController.cs

If your authentication code requests additional properties such as the email address, modify ExternalLoginCallback() to retrieve the claim attached to the identity.

Step 6 (Update the App manifest file)

YourAppForOffice.xml

The Source location should contain the url for the root of your MVC web application. In addition, you need to include the domains that provide the authentication in the app manifest.

Step 7 (Enable SSL)

Before testing your app you will need to make sure your web project has the SSL Enabled property set to True. And your App for Office will not be satisfied with a self-signed certificate. This problem is quickly solved by deploying your web app to Microsoft Azure. If you are using the *.azurewebsites.net domain assigned to your web site by Azure, then your site is already secured by a certificate provided by Microsoft. You may access your free ASP.NET web sites through MSDN or here for those without an MSDN subscription. The Source location in the app manifest should now be updated to contain the url of your Azure web site.
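For reference, the Startup.Auth.cs edits described in Step 5 end up looking roughly like the sketch below. This is a configuration sketch, not the exact template code: the IDs and secrets are placeholders you obtain during provider registration, and the option-object form shown here is one of several equivalent overloads in the Katana middleware.

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        // ... cookie configuration generated by the template ...

        var msOptions = new MicrosoftAccountAuthenticationOptions
        {
            ClientId = "your-client-id",        // placeholder
            ClientSecret = "your-client-secret" // placeholder
        };
        msOptions.Scope.Add("wl.emails");       // request the email address
        app.UseMicrosoftAccountAuthentication(msOptions);

        var fbOptions = new FacebookAuthenticationOptions
        {
            AppId = "your-app-id",              // placeholder
            AppSecret = "your-app-secret"       // placeholder
        };
        fbOptions.Scope.Add("email");           // request the email address
        app.UseFacebookAuthentication(fbOptions);
    }
}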
This is a quick post to demonstrate a very useful way of programmatically populating the models (i.e. database) of a Django application. The canonical way to accomplish this is fixtures - the loaddata and dumpdata commands, but these seem to be more useful when you already have some data in the DB. Alternatively, you could generate the JSON information loadable by loaddata programmatically, but this would require following its format exactly (which means observing how real dumps are structured). One could, for the very first entries, just laboriously hammer them in through the admin interface. As programmers, however, we have a natural resentment for such methods.

Since Django apps are just Python modules, there's a much easier way. The very first chapter of the Django tutorial hints at the approach by test-driving the shell management command, which opens a Python shell in which the application is accessible, so the model classes can be imported and through them data can be both examined and created. The same tutorial also mentions that you can bypass manage.py by pointing DJANGO_SETTINGS_MODULE to your project's settings and then calling django.setup(). This provides a clue on how the same steps can be done from a script, but in fact there's an even easier way. There's no need to bypass manage.py, since it's a wonderful convenience wrapper around the Django project administration tools. It can be used to create custom management commands - e.g. your own commands parallel to shell, dumpdata, and so on. Not only does creating such commands give you a very succinct, boilerplate-free way of writing custom management scripts, it also gives you a natural location to house them, per application. Here's some simple code that adds a couple of tags into a blog-like model.
Let's say the application is named blogapp:

from django.core.management.base import BaseCommand
from blogapp.models import Post, Tag

class Command(BaseCommand):
    args = '<foo bar ...>'
    help = 'our help string comes here'

    def _create_tags(self):
        tlisp = Tag(name='Lisp')
        tlisp.save()
        tjava = Tag(name='Java')
        tjava.save()

    def handle(self, *args, **options):
        self._create_tags()

This code has to be placed in a file within the blogapp/management/commands directory in your project. If that directory doesn't exist, create it. The name of the script is the name of the custom command, so let's call it populate_db.py. Another thing that has to be done is creating __init__.py files in both the management and commands directories, because these have to be Python packages. The directory tree will look like this:

blogapp
├── admin.py
├── __init__.py
├── management
│   ├── commands
│   │   ├── __init__.py
│   │   └── populate_db.py
│   └── __init__.py
├── models.py
... other files

That's it. Now you should be able to invoke this command with:

$ python manage.py populate_db

All the facilities of manage.py are available, such as help:

$ python manage.py help populate_db
Usage: manage.py populate_db [options] <foo bar ...>

our help string comes here

Options:
...

Note how help and args are taken from the Command class we defined. manage.py will also pass custom positional arguments and keyword options to our command, if needed. More details on writing custom management commands are available in this Django howto.

Once you start playing with such a custom data entry script, some of the existing Django management commands may come in very useful. You can see the full list by running manage.py help, but here's a list of those I found handy in the context of this post. For dumping, dumpdata is great. Once your data grows a bit, you may find it useful only to dump specific models, or even specific rows by specifying primary keys with --pks.
I also find the --indent=2 option to be essential when doing the default JSON dumps. The flush command will clear the DB for you. A handy "undo" for those very first forays into entering data. Be careful with this command once you have real data in the DB. Finally, the sqlall command is very useful when you're trying to figure out the structure of your models and the connections between them. IMHO model problems are important to detect early in the development of an application. To conclude, I just want to mention that while custom management commands live within applications, nothing ties them to a specific app. It is customary for Django management commands to accept app and model names as arguments. While a data entry command is naturally tied to some application and model, this doesn't necessarily have to be the case in general. You can even envision an "app" named my_custom_commands which you can add to projects and reuse its functionality between them.
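As an aside on the fixture route mentioned at the top: the JSON that loaddata expects is simple enough to emit by hand. A minimal sketch, assuming the app is labeled blogapp and the model is the Tag model from above:

```python
import json

# Each fixture entry names the model, a primary key, and the field values.
tags = ["Lisp", "Java"]
fixture = [
    {"model": "blogapp.tag", "pk": i, "fields": {"name": name}}
    for i, name in enumerate(tags, start=1)
]

# Written to e.g. blogapp/fixtures/tags.json, this would be loadable
# with `python manage.py loaddata tags`.
print(json.dumps(fixture, indent=2))
```

This produces the same shape of output that dumpdata emits, which is a good way to double-check your hand-built fixtures against a real dump.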
Accepting unicode for most strings

The same issue as #45 exists for any value that is expected to be str. If unicode is required on Python 3, unicode should work on Python 2 too (even though the type is named differently.)

Such parameters are at least in ffi.cdef() (triggered later on usage with TypeError: enumerators must be a list of strings), ffi.dlopen(), ffi.cast(), ffi.new(), and maybe ffi.verify() (untested).

To test, run a program on Python 2 with from __future__ import unicode_literals

Fixed (76bef4539c42). I hope I didn't miss a place, but I seem to have got all places that test for the 'str' type.
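The kind of check involved looks roughly like the following sketch. This is a common Python 2/3 compatibility pattern, not cffi's actual code; check_name is a hypothetical helper standing in for the places that validate string arguments.

```python
import sys

if sys.version_info < (3,):
    text_types = (str, unicode)  # noqa: F821 - `unicode` exists on Python 2 only
else:
    text_types = (str,)

def check_name(name):
    """Accept any text type for identifiers passed to cdef()-like APIs."""
    if not isinstance(name, text_types):
        raise TypeError("expected a string, got %r" % (type(name).__name__,))
    return str(name)

print(check_name(u"int"))  # works whether the literal is str or unicode
```

Testing for isinstance against a tuple of text types, instead of type(x) is str, is what lets `from __future__ import unicode_literals` programs pass unicode where str was historically expected.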
Hey there! In this tutorial, we will study what arrays are in C programming and the types of arrays (single-dimensional, two-dimensional, and multi-dimensional) with the help of a suitable example.

Arrays in C Programming

An array is a primitive and linear data structure that is a group of identical data items, i.e. it will store data of only one type, for example only integers or only floating-point values. An array is a static data structure, that is, its memory is allocated at compile time and cannot be changed at run time.

Array live example

Assume you are creating a program that is to store the names of the students of a college on the computer, and the number of those students is 500. So how will you store the names of these 500 students? You must be thinking of creating 500 variables, but doing so will make the program very complex, this method will take a lot of time, the program will become very large, and there will also be wastage of computer memory space. To solve this type of problem, the C language provides us with arrays, with the help of which we solve this type of problem.

Types of Array

Single-Dimensional Array

An array that contains only one subscript is called a one-dimensional array. It is used to store data in linear form. A one-dimensional array is also called a 1-D array.

Two-Dimensional Array

An array containing two subscripts is called a two-dimensional array. A two-dimensional array is also called a 2-D array.

Multi-Dimensional Array

An array containing more than two subscripts is called a multi-dimensional array. A multi-dimensional array is also called a 3-D array.

A basic program using an array

#include <stdio.h>

int main()
{
    int arr[5];

    arr[0] = 5;
    arr[1] = -10;
    arr[2] = 2;
    arr[3] = 6;

    printf("%d %d %d %d", arr[0], arr[1], arr[2], arr[3]);
    return 0;
}

Output

5 -10 2 6

Originally posted on - Arrays in C Programming
[Solved] Problem with svg pictures and transparent background

Dear experts,

I'm trying to code something which paints a widget using svg images. I have read that this is rather slow, and that the pictures had better be pre-rendered first. I found this thread and thought that it would be straightforward to achieve what I wanted... My problem is that my pictures are diamond-shaped, and the QPixmap I use is of course rectangular. I want therefore the 4 corners of the QPixmap to be transparent. I have thus the following piece of code, mostly inspired from q8phantom's snippet:

Converter -> load( fileName ); // Converter is a QSvgRenderer object pointer
// checks that the loaded file is valid, extract the source size, etc.
QPixmap image( destSize );
QPainter painter( &image );
Converter -> render( &painter );
image.fill( Qt::transparent );

In order to test that, I created a stupid little widget which displays 5 pictures converted from svg on a white background. The result, shown here below, is clearly buggy to me, as there is this strange black background around the diamonds, a background which even contains some text.

Example picture

I also checked that the obtained QPixmap is not transparent at all: if I put two diamonds with neighbouring edges, part of the one drawn first is hidden by the background from the second one. What am I missing?

Some information: I'm using Qt 4.7.4 with gcc 4.6.1 on a 64 bit Mandriva 2011.

Thanks in advance, Johan

Edit: I have the same strange behaviour using Qt 4.8.0 with gcc 4.6.2 on a 64 bit Fedora 16. I think the problem comes from the svg rendering: at some point I had a bug in my code and the svg was never rendered, which resulted in an "empty" QPixmap, i.e. totally transparent. As soon as I added the svg rendering, the background was buggy.
If you miss any kind of information, please let me know. I put here a minimalist example:

buggywidget.hpp:

#ifndef BUGGYWIDGET_HPP
#define BUGGYWIDGET_HPP

#include <QtGui/QWidget>

class QPaintEvent;

class Widget : public QWidget
{
    Q_OBJECT

public:
    Widget(QWidget *parent = 0);
    ~Widget();
    void paintEvent(QPaintEvent *);
};

#endif // BUGGYWIDGET_HPP

buggywidget.cpp:

#include "buggywidget.hpp"
#include <QtGui/QPainter>
#include <QtSvg/QSvgRenderer>
#include <QtGui/QPixmap>

Widget::Widget(QWidget *parent) : QWidget(parent)
{
    setPalette( QPalette( Qt::white ) );
    setGeometry( 0, 0, 800, 600 );
    update();
}

Widget::~Widget()
{
}

void Widget::paintEvent(QPaintEvent *)
{
    QSvgRenderer * Converter = new QSvgRenderer( this );
    Converter -> load( QString( ":/VectorRedTile" ) );
    QPixmap image( 800, 240 );
    QPainter painter( &image );
    Converter -> render( &painter );
    image.fill( Qt::transparent );
    QPainter p( this );
    p.drawPixmap( 0, 180, image );
}

main.cpp:

#include <QtGui/QApplication>
#include "buggywidget.hpp"

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    Q_INIT_RESOURCE( TestResources );
    Widget w;
    w.show();
    return a.exec();
}

and BuggyTest.pro:

#-------------------------------------------------
#
# Project created by QtCreator 2012-03-13T07:27:52
#
#-------------------------------------------------

QT += core gui svg

TARGET = BuggyTest.exe
TEMPLATE = app

MOC_DIR = "./bin/moc/"
OBJECTS_DIR = "./bin/objects/"
DESTDIR = "./bin/"

RESOURCES += TestResources.qrc

SOURCES += main.cpp\
        buggywidget.cpp

HEADERS += buggywidget.hpp

With any svg file I use, the diamond is painted, now with a pure black background (I guess this background is just random). My problem is 100% reproducible with the given example. I see however something I didn't see before (maybe I just didn't notice it before): there's a warning message saying "QPixmap::fill: Cannot fill while pixmap is being painted on". Is this related to my problem?

Thanks, Johan

Edit: problem solved!
I just had to write

```cpp
QPixmap image( 800, 240 );
image.fill( Qt::transparent );
QPainter painter( &image );
Converter -> render( &painter );
```

instead of

```cpp
QPixmap image( 800, 240 );
QPainter painter( &image );
Converter -> render( &painter );
image.fill( Qt::transparent );
```

i.e. fill the pixmap with transparency before opening a QPainter on it and rendering the svg. Sorry for the noise!

mlong (Moderators):

In your paintEvent() method, you should do the transparent fill before you do the painting.

[Edit: I should finish reading your post before I write mine. I see you figured it out! :) ]

Thanks anyway!
https://forum.qt.io/topic/14762/solved-problem-with-svg-pictures-and-transparent-background
Building a class browser with Microsoft Ajax 4.0 Preview 5

Microsoft Ajax 4.0 Preview 5 is out, so I thought I'd write a little something to celebrate the new preview. The new features include recursive templates, which is pretty much begging us to implement a treeview, and we'll do just that in this post. There is also an intriguing capability which enables you to dynamically set what template to render for each data item, and where to render it. At first, this doesn't look like the most useful thing in the world, but it actually opens up some very interesting possibilities, which we'll also show in this post.

The sample code that I'm going to write for this post is a rudimentary class browser. It will render a treeview representing the hierarchical structure of namespaces and classes in Microsoft Ajax, and clicking one of the tree nodes will render a details view for it: a list of classes and subnamespaces for namespaces, and a grouped list of members for classes.

Let's start with the tree. It will be rendered as nested unordered lists by a simple recursive DataView:

```html
<ul id="tree" class="tree" sys:attach="dataview"
    dataview:data="{{ Type.getRootNamespaces() }}"
    dataview:itemtemplate="nodeTemplate"
    dataview:oncommand="{{ onCommand }}">
</ul>

<ul id="nodeTemplate" class="sys-template">
    <li>
        <a href="#" onclick="return false;" sys:command="select">
            {{ getSimpleName($dataItem.getName()) }}
        </a>
        <ul sys:attach="dataview"
            dataview:data="{{ getChildren($dataItem) }}"
            dataview:itemtemplate="nodeTemplate"></ul>
    </li>
</ul>
```

On the first UL, which is the outer DataView for the tree, you can see that we set the data property to Type.getRootNamespaces(), which returns the set of root namespaces currently defined. We also set the template property to point to the "nodeTemplate" element, which has to be outside the DataView itself when doing recursive templates. Note that the outer node of the template, the UL, won't actually get rendered into the target UL ("tree"); it is only a container.

The command event of the DataView is hooked to the onCommand function, and we'll get back to that when we couple the tree with the detail view. In the template itself, you can see we have a link with the select command, so that clicking it will trigger the nearest onCommand event up the DOM.
The text of that link is the result of a call to getSimpleName, which will extract the last part of the fully-qualified name of the namespace or class.

After that link, we find another DataView control. The data property of that control points to an array of namespaces and classes under the current object. But the nice part here is that the template property points to "nodeTemplate", its own parent, enabling the recursive nature of the tree. In other words, we've morphed a simple DataView control into a tree, with minimal effort and code.

There is just one thing missing from the tree, and that is the +/- buttons that will collapse and expand the tree nodes. This is actually very easy to set up using CSS and some simple script. First, let's collapse the tree by default. This is done by defining the style of the tree as follows in our stylesheet:

```css
.tree ul {
    padding: 0;
    display: none;
}
```

This has the effect of collapsing all unordered list nodes under the tree. The +/- button is created by adding the following to the template, right before the existing link:

```html
<a class="toggleButton" href="#" sys:if="{{ Type.isNamespace($dataItem) }}"
   onclick="return toggleVisibility(this);">+</a>
```

The button is a simple link whose rendering is conditioned by whether the current data item is a namespace: only namespaces can be expanded, classes are leaf nodes. The toggling function itself is fairly simple:

```javascript
function toggleVisibility(element) {
    var childList = element.parentNode
            .getElementsByTagName("ul")[0],
        isClosed = element.innerHTML === "+";
    childList.style.display = isClosed ? "block" : "none";
    element.innerHTML = isClosed ? "-" : "+";
    return false;
}
```

This just toggles the display style of the first child UL between none and block, and the text of the link between + and -.

So there it is: we have built a simple tree by simply making use of the recursive capabilities of DataView and some very simple script.
Before we look at the details view, let's look at the code that gets called when the user selects a node in the tree:

```javascript
function onCommand(sender, args) {
    var dataItem = sender.findContext(
        args.get_commandSource()).dataItem;
    $find("details").set_data(dataItem);
}
```

That code gets a reference to the data item for the selected node from the template context, which we can get from the sender of the event (the inner DataView that contains the selected node) using the command source as provided by the event arguments (that source is the element that triggered the command). We can then set the data of the details DataView to that data item, which will trigger that view to re-render.

Now let's build the details view. The details view will display the child namespaces and classes if a namespace is selected in the tree, and the properties, events and methods (instance and static) in the case of a class. For each case, we'll use a different template: "namespaceTemplate" for namespaces, and "classTemplate" for classes, but we'll do so from the same DataView. This dynamic template switching is done by handling the onItemRendering event of the DataView:

```javascript
function onDetailsRendering(sender, args) {
    var dataItem = args.get_dataItem();
    args.set_itemTemplate(Type.isNamespace(dataItem) ?
        "namespaceTemplate" : "classTemplate");
}
```

This code gets the data item from the event arguments and sets the itemTemplate property depending on its type.

Each of these two templates will have to display the contents of the selected object. But, and that will be the tricky part, we want all those to be neatly grouped into separate lists. One way to do that would be to have one DataView per list, but where would the fun be in that? Here, we are going to enumerate only once through the data items to display, and dispatch them dynamically to this or that placeholder depending on their nature.
Once more, the key to doing that will be handling the onItemRendering event:

```javascript
function onNamespaceChildRendering(sender, args) {
    if (Type.isClass(args.get_dataItem())) {
        args.set_itemPlaceholder("classPlaceHolder");
    }
}
```

This code simply changes the rendering placeholder for the current item from the default (the DataView's element) to "classPlaceHolder" if the current data item is a class (instead of a namespace). The template itself looks like this:

```html
<div id="namespaceTemplate" class="sys-template">
    <h1>{{ $dataItem.getName() }}</h1>
    <div class="column">
        <h2>Namespaces:</h2>
        <ul sys:id="namespacePlaceHolder" sys:attach="dataview"
            dataview:data="{{ getChildren($dataItem) }}"
            dataview:itemtemplate="namespaceChildTemplate"
            dataview:onitemrendering="{{ onNamespaceChildRendering }}">
        </ul>
    </div>
    <div class="column">
        <h2>Classes:</h2>
        <ul><li sys:id="classPlaceHolder"></li></ul>
    </div>
</div>

<ul id="namespaceChildTemplate" class="sys-template">
    <li>{{ $dataItem.getName() }}</li>
</ul>
```

As you can see, there really is only one DataView in there, and thanks to the code above, it can dispatch its rendering to different places if necessary. The template for the items of that DataView happens to be the same in all cases (namespaceChildTemplate), but it could easily be different, as it was for the parent details view.

The template for displaying classes is essentially the same thing, but with four placeholders instead of two.

So here's what it looks like in the end:

Key takeaways of this post are that it's now super-easy to render hierarchical data structures with DataView, and that you can do some interesting grouping of data on the fly by handling the item rendering event.

You can play with the class browser live here:

And you can download the code here:

Microsoft Ajax 4.0 Preview 5:

Jim and Dave's posts on Preview 5:
http://weblogs.asp.net/bleroy/building-a-class-browser-with-microsoft-ajax-4-0-preview-5
Import Header Files in the .pro file

Michael.R.Legako (L-3com.com):

I've been bitten a couple of times now by a pretty subtle mistake that fortunately is quite easy to avoid. The issue occurs when you have a custom Qt widget of some sort that you want to share among different applications, and when you put the custom widget code in its own library.

The mistake is to add the header for the widget library to the HEADERS list in the application. Doing this (for Windows applications) will cause the compilation of the application to fail with:

error C2491: '<WidgetLibraryName>::staticMetaObject' : definition of dllimport static data member not allowed

Instead, I've learned the correct thing to do is just make sure INCLUDEPATH has the correct paths so that the widget library's external header file(s) can be located.

Now all that is fine, but it sure is handy to be able to see a list of those import library headers as part of my project, so my question is: is there any qmake variable one can use to list import library header files upon which an application (or library) is dependent?
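An aside not from the thread itself: the HEADERS approach fails because qmake runs moc over any Q_OBJECT header listed in HEADERS, and the generated moc_*.cpp then tries to *define* the staticMetaObject member that the dllimport'ed class declaration forbids defining — hence C2491. As for listing the external headers in the project, there is no dedicated qmake variable for "imported headers", but one common workaround is OTHER_FILES (DISTFILES on Qt 5), which makes files visible in Qt Creator's project tree without handing them to moc or the compiler. A sketch, with hypothetical paths and file names:

```qmake
# Hypothetical layout: the widget library lives in ../WidgetLib.
INCLUDEPATH += ../WidgetLib/include

# Show the library's public headers in the IDE's project tree
# without compiling or moc'ing them (use DISTFILES on Qt 5):
OTHER_FILES += \
    ../WidgetLib/include/mywidget.h \
    ../WidgetLib/include/mywidget_global.h
```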
Typically, you end up doing this sort of thing to many classes in a single library, so rather than having the #ifdef ... #define ... etcin each header file, it often ends up being written once in a "global" project header of some kind. For example, Q_CORE_EXPORT gets defined in qglobal.h, then used throughout all of the exported core Qt class headers. Also see: - When creating a library with Qt/Windows and using it in an application, then there is a link error regarding symbols which are in the library Cheers.
https://forum.qt.io/topic/73259/import-header-files-in-the-pro-file
CC-MAIN-2021-25
refinedweb
408
60.35