22 December 2008 16:10 [Source: ICIS news] By John Richardson

SINGAPORE (ICIS news)--The global economic crisis – at the very least the worst since the early 1980s – will have huge implications for Asia's chemical producers.

The danger is that the vicious cycle of declining consumer spending in the west, job losses and even further declines in consumption is only just beginning. We could also be entering a global deflationary spiral like the one that kept Japan stagnant through the 1990s. Why buy anything today when it could be cheaper tomorrow – and why buy at all when you are in danger of losing your job and the real value of your debt is increasing rather than decreasing because prices are falling?

Heavily export-based Asian economies, such as China's, are particularly exposed. More factory closures will surely follow among export-focused manufacturers in the southern and eastern provinces if the vicious circle mentioned above continues. This will inflict a great deal more pain on those who ship chemicals to China.

Letters of credit have become very hard to obtain, especially for small and medium-sized companies. You need to ration credit, if you have it. Ironically, much of the money that governments across Asia were counting on to fund infrastructure is no longer available, creating the fear that growth – even if it stays reasonably healthy in the short term despite the global crisis – will eventually slow as a result of inadequate roads, railways, ports and airports. And to make a virtue out of a lack of exposure to manufacturing is a little perverse, given that manufacturing is at the core of the region's growth model.

Other economies face their own problems, including Thailand's. Despite the appointment of another new prime minister, the battle between the "red shirts" (mainly the working class and the rural communities) and the "yellow shirts" (the urban middle class) looks set to continue. Another military coup seems possible, and with it an end to fully representative democracy.

Even if the world economy hadn't been in major crisis, Thailand would have struggled to absorb its new cracker capacity. Those two crackers, one by PTT Chemical and the other by Siam Cement and Dow Chemical, are due to start up over the next few years. So too are new ExxonMobil and Shell Chemicals crackers in Singapore. It is hard to see how these volumes will be placed in markets where demand will be much weaker than anyone had forecast. Any chemicals demand-growth predictions drawn up before September this year (in other words, all the forecasts used in the feasibility studies that justified the current wave of investment) are likely to be way off the mark.

The problem with demand is two-fold. Firstly, the lack of credit and the volatility in energy prices are making every company at every stage in each production chain very unwilling to buy or sell anywhere near the quantities seen before September. The risk is that you stock up on polypropylene (PP) resin if you are a converter, for example, only to see the oil price fall the next day. This will almost instantly translate into lower PP prices. Your hard-pressed customers, even your closest customers who you've been doing business with for many years, will not be in a position to do you any favours by paying over the market odds for their packaging material.

Secondly, there is a great deal of uncertainty over the state of fundamental demand. Nobody knows the full extent of the damage to economies and how much worse it will become. Sales volumes could remain far below expectations throughout 2009 and beyond.

Those saddled with debt from ill-timed capacity expansions (isn't hindsight a wonderful gift?) could in theory be subject to mergers and acquisitions. For many years, the South Koreans have been singled out as perhaps the most vulnerable to further consolidation because they expanded aggressively in 2004-05, when they had the cash, in order to maintain economies of scale. But government support might enable companies to limp through the crisis intact. The Japanese might also be vulnerable as they struggle to compete with the Middle East and China.

One advantage of this crisis is that it has reduced energy costs, even though oil prices remain erratic as they rise and fall in almost perfect alignment with stock markets. But if you believe the International Energy Agency, we are heading for another major supply crunch once the world economy recovers, because of the decline in investment in new conventional and unconventional oil fields. This could mean that future economic cycles are much shorter, with recoveries frequently nipped in the bud by soaring crude prices – bad news for chemicals producers everywhere.

When you are struggling to keep your company going in perhaps the worst economic crisis since the Great Depression, the danger is that you slash research and development (R&D) spending. Good people might leave because they become disillusioned, or, more likely, they will be laid off. Ultimately, the chemical companies that fail to innovate effectively in the face of growing environmental pressures are likely to fail anyway, even if they get through the next few years.

($1 = CNY6.85)

To discuss issues facing the chemical industry go to ICIS connect. Read John Richardson's Asian Chemical Connections blog and Paul Hodges' thoughts on Chemicals and the Economy.
http://www.icis.com/Articles/2008/12/22/9180936/insight-global-economic-crisis-will-hit-asia-chemicals-hard.html
Julian Elischer wrote:
> We are aware of this. You are of course also welcome to make suggestions
> as to what the correct behavior in these situations should be.

When an interface is moved from a parent to a child vnet, a check is done. I tried to copy that behavior. Does it look correct?

--- sys/net/if.c.orig	2009-08-24 15:52:05.000000000 +0300
+++ sys/net/if.c	2009-08-25 23:55:26.000000000 +0300
@@ -992,6 +992,13 @@
 	prison_hold_locked(pr);
 	mtx_unlock(&pr->pr_mtx);
 
+	/* Make sure the named iface does not exist in the dst. prison/vnet. */
+	ifp = ifunit(ifname);
+	if (ifp != NULL) {
+		prison_free(pr);
+		return (EEXIST);
+	}
+
 	/* Make sure the named iface exists in the source prison/vnet. */
 	CURVNET_SET(pr->pr_vnet);
 	ifp = ifunit(ifname);		/* XXX Lock to avoid races. */

> Thank you for trying out our new little toy!

Well, thanks for creating this "little toy" :)

Nikos
https://www.mail-archive.com/freebsd-virtualization@freebsd.org/msg00125.html
A final variable behaves like a constant, as its value never changes: once a final variable is assigned, it always contains the same value. If a final variable holds a reference to an object, the state of the object may still be changed by operations on the object, but the variable will always refer to the same object. This also applies to arrays, because arrays are objects: if a final variable holds a reference to an array, the components of the array may be changed by operations on the array, but the variable will always refer to the same array.

public class ExFinal {
    private final String name;
    private final int age;
    private final Object country = "India";

    public ExFinal(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String toString() {
        return name + ", " + age + " from " + country;
    }
}
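The array case can be seen in a short standalone example (the class name FinalArrayDemo is illustrative, not from the original page): the final variable always points at the same array, while the array's contents remain mutable.

```java
public class FinalArrayDemo {
    public static void main(String[] args) {
        final int[] scores = {10, 20, 30};

        // Allowed: mutating the object that the final variable refers to.
        scores[0] = 99;
        System.out.println(scores[0]); // prints 99

        // Not allowed: re-assigning the final variable itself.
        // scores = new int[] {1, 2, 3}; // compile error: cannot assign to final variable
    }
}
```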
http://it.toolbox.com/wiki/index.php/Final%5Fvariable
So we were in class today and my professor said that, while iterating through a linked list, you can use the ++ operator to get the next node. Here's an example linked list...

#include <iostream>
using namespace std;

class Node {
public:
    int data;
    Node *next;
};

class List {
public:
    List();
    void append(int i);
    void print();
    void test();
private:
    Node *front;
    Node *back;
};

List::List() {
    front = back = NULL;
}

void List::append(int i) {
    Node *n = new Node;
    n->data = i;
    n->next = NULL;
    if (front == NULL) {
        front = back = n;
    } else {
        back->next = n;
        back = n;
    }
}

void List::print() {
    cout << "printing" << endl;
    for (Node *n = front; n != NULL; n = n->next) {
        cout << n->data << endl;
    }
}

int main() {
    List l;
    l.append(1);
    l.append(2);
    l.append(3);
    l.print();
    return 0;
}

The line of interest is in print():

for (Node *n = front; n != NULL; n = n->next)

He claimed you can just do n++ instead of n = n->next. I disagree. While it may work sometimes - if the nodes grab memory quickly enough that they end up at contiguous addresses - it would fail whenever the addresses are not contiguous. This led me to wanting to overload the ++ operator to achieve the behavior he described. I ran into some problems... n isn't a Node, it's a pointer to a Node, so we'd have to overload the operator with the pointer as an argument instead of a Node, and I couldn't really see a way to do this as a member. The other issue is that we'd have to reassign the address of this. We'd be saying this = this->next, and I'm pretty sure you can't do that because this isn't an lvalue. So does anyone have an elegant way of overloading the ++ operator so the following is valid?

for (Node *n = front; n != NULL; n++)
https://www.daniweb.com/programming/software-development/threads/199787/linked-list-overload-operator
How to use res/values folder

In this lesson we will work with resources stored in the res/values folder. In the res folder, subfolders for different application resources are stored. We already have a good knowledge of layout files in the res/layout folder. I also mentioned the res/drawable folders with density suffixes - images are stored there. Now pay attention to the res/values folder. It is intended for storing resources (constants) of different types. We will look at the String and Color types.

Create a project:

Project name: P0111_ResValues
Build Target: Android 2.3.3
Application name: ResValues
Package name: ru.startandroid.develop.resvalues
Create Activity: MainActivity

Open the res/values/strings.xml file. We can see two elements of the String type:

hello - is used by default in the Text property of the TextView in main.xml; correspondingly, the TextView displays the value of this element.
app_name - is used by default as a title for your application and Activity. This constant is used in the manifest file, but we are not familiar with it yet.

You can click on these elements and see what they represent on the right: Name and Value. Name is an ID. It must be unique, and a constant is generated for it in R.java so we can access this String element from code. If we have a look at the XML inside the strings.xml file (tab at the bottom - the same as for main.xml), we can see that everything is clear and simple there.

Let's try using resources. To begin with, create this layout in main.xml (the exact attribute values here are approximate - the original markup was lost - but the IDs match those used later in the lesson):

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical">
    <LinearLayout
        android:id="@+id/llTop"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_weight="1"
        android:orientation="vertical">
        <TextView
            android:id="@+id/tvTop"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content">
        </TextView>
        <Button
            android:id="@+id/btnTop"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content">
        </Button>
    </LinearLayout>
    <LinearLayout
        android:id="@+id/llBottom"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_weight="1"
        android:orientation="vertical">
        <TextView
            android:id="@+id/tvBottom"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content">
        </TextView>
        <Button
            android:id="@+id/btnBottom"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content">
        </Button>
    </LinearLayout>
</LinearLayout>

The screen is divided into two equal parts, each containing a LinearLayout with a TextView and a Button. We will specify a background color for each LinearLayout and change the text of the TextViews and Buttons. Let's implement this using resources.
And we will configure the View elements in the top part manually using properties, and the bottom part programmatically.

Let's create our own resource file in the values folder and name it myres. After creating the file, the editor opens. Adding an element is simple - click the Add button, choose the type, and write the name and value on the right. We will create 4 String elements and 2 Color elements:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="tvTopText">Upper text</string>
    <string name="btnTopText">Upper button</string>
    <string name="tvBottomText">Bottom text</string>
    <string name="btnBottomText">Bottom button</string>
    <color name="llTopColor">#336699</color>
    <color name="llBottomColor">#339966</color>
</resources>

You can input this text manually for practice, or you can just copy it into myres.xml. Don't forget to save the file. Have a look into R.java and make sure that everything appeared there.

Now that all the resources are created, let's configure the View elements to use them. The elements in the top part first:

llTop - find the Background property in Properties and click the selection button (the ellipsis); in the Color branch, highlight llTopColor and click OK.
tvTop - for the Text property, open the selection window and find tvTopText there.
btnTop - for the Text property, open the selection window and find btnTopText there.

The color of the top part of the screen has changed, and the captions have changed to those we specified in myres.xml.

To change the bottom part of the screen, we will write some code. First we find the elements, then set their values.

public class MainActivity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        LinearLayout llBottom = (LinearLayout) findViewById(R.id.llBottom);
        TextView tvBottom = (TextView) findViewById(R.id.tvBottom);
        Button btnBottom = (Button) findViewById(R.id.btnBottom);

        llBottom.setBackgroundResource(R.color.llBottomColor);
        tvBottom.setText(R.string.tvBottomText);
        btnBottom.setText(R.string.btnBottomText);
    }
}

Note that the setText method is used for changing text, but it is not the same setText that we used when setting text directly. This overload takes a resource ID as a parameter and uses R.java, which stores all our resource IDs. As you can see, the methods have the same name but different parameters. This is common in Java.

Save, run and check. Now the texts and colors are taken from the resource file. You can change the contents of myres.xml (the text of the upper button, for instance), save, run the application, and you will see your changes.

Sometimes you need the value of a resource in code, not its ID. It is obtained this way:

getResources().getString(R.string.tvBottomText);

This expression will return the text "Bottom text", which corresponds to the String resource with name = tvBottomText.

In conclusion, a few words about organizing files for storing resources. We have just created String and Color resources in a single file, myres.xml, but it is recommended to split them into different files (e.g. strings.xml, colors.xml, ...) and I will follow this recommendation from now on. There are reasons for that, and we will see the evidence in the future.

Resource names are global across all the files in the res/values folder, so you cannot create a resource with the same name and type in different files. You can name resource files as you wish and create as many files as you need. All the constants for resources from these files will be generated in R.java.
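The recommended split might look like this (a sketch - the file split is purely conventional; any file names work as long as resource names stay unique within res/values):

```xml
<!-- res/values/strings.xml -->
<resources>
    <string name="tvTopText">Upper text</string>
    <string name="btnTopText">Upper button</string>
    <string name="tvBottomText">Bottom text</string>
    <string name="btnBottomText">Bottom button</string>
</resources>

<!-- res/values/colors.xml -->
<resources>
    <color name="llTopColor">#336699</color>
    <color name="llBottomColor">#339966</color>
</resources>
```

Both files produce entries in the same R.string and R.color classes, so the Java code above does not change at all after the split.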
http://www.chupamobile.com/tutorial-android/lesson-11-res-values-folder-using-application-resources-187
#include "secp256k1.h"
#include "secp256k1_extrakeys.h"

This module implements a variant of Schnorr signatures compliant with Bitcoin Improvement Proposal 340, "Schnorr Signatures for secp256k1".

A pointer to a function to deterministically generate a nonce. Same as secp256k1_nonce_function, with the exception of accepting an additional pubkey argument and not requiring an attempt argument. The pubkey argument can protect signature schemes with key-prefixed challenge hash inputs against reusing the nonce when signing with the wrong precomputed pubkey.

Returns: 1 if a nonce was successfully generated. 0 will cause signing to return an error.
Out: nonce32: pointer to a 32-byte array to be filled by the function.
In: msg32: the 32-byte message hash being verified (will not be NULL)
    key32: pointer to a 32-byte secret key (will not be NULL)
    xonly_pk32: the 32-byte serialized xonly pubkey corresponding to key32 (will not be NULL)
    algo16: pointer to a 16-byte array describing the signature algorithm (will not be NULL)
    data: arbitrary data pointer that is passed through.

Except for test cases, this function should compute some cryptographic hash of the message, the key, the pubkey, the algorithm description, and data.

Definition at line 38 of file secp256k1_schnorrsig.h.

Create a Schnorr signature. Does not strictly follow BIP-340 because it does not verify the resulting signature. Instead, you can manually use secp256k1_schnorrsig_verify and abort if it fails. Otherwise BIP-340 compliant if the noncefp argument is NULL or secp256k1_nonce_function_bip340 and the ndata argument is 32-byte auxiliary randomness.

noncefp: pointer to a nonce generation function. If NULL, secp256k1_nonce_function_bip340 is used.
ndata: pointer to arbitrary data used by the nonce generation function (can be NULL). If it is non-NULL and secp256k1_nonce_function_bip340 is used, then ndata must be a pointer to 32-byte auxiliary randomness as per BIP-340.

Definition at line 127 of file main_impl.h.

Verify a Schnorr signature.

Returns: 1: correct signature; 0: incorrect signature.
Args: ctx: a secp256k1 context object, initialized for verification.
In: sig64: pointer to the 64-byte signature to verify (cannot be NULL)
    msg32: the 32-byte message being verified (cannot be NULL)
    pubkey: pointer to an x-only public key to verify with (cannot be NULL)

Definition at line 190 of file main_impl.h.

With any other algo16, schnorrsig_sign does not produce BIP-340 compliant signatures. The algo16 argument must be non-NULL, otherwise the function will fail and return 0. The hash will be tagged with algo16 after removing all terminating null bytes. Therefore, to create BIP-340 compliant signatures, algo16 must be set to "BIP0340/nonce\0\0\0".

Definition at line 94 of file main_impl.h.
https://doxygen.bitcoincore.org/secp256k1__schnorrsig_8h.html
If we do not specify a return type, the compiler assumes an implicit return type of int. Yet this compiles and works with char:

#include <stdio.h>

char fun(char ch)
{
    return ch;
}

int main()
{
    char ch = fun('A');
    printf("ch : %c\n", ch);
}

while this version, whose fun has no return statement, does not behave correctly:

#include <stdio.h>

char fun(char ch)
{
}

int main()
{
    char ch = fun('A');
    printf("ch : %c\n", ch);
}

As has been pointed out, it seems that the goal of a return value has been misunderstood. When declaring a function you are not forced to give it a body; you are telling the compiler that somewhere in the full program there will be such a function. This:

char fun(char ch);

int main()
{
    char ch = fun('A');
}

is still valid C, and compiles (to an object file). By doing this you are telling the compiler that the function does not exist now but will exist later. This is quite helpful when using libraries or other object files to be linked to form your final program.

Unfortunately, C compilers are still not smart enough to infer exactly what your intention was. If you write

int main()
{
    char ch = fun('A');
}

the compiler will make a best guess by defaulting to a mostly wrong type for your fun, or just stop, because guessing would be wrong.

One practical example of this: put

char fun(char ch)
{
    return ch;
}

in fun.c. Compiling with cc -c fun.c will produce a fun.o object file. Then, in main.c:

#include <stdio.h>

char fun(char ch); /* fun exists in fun.c but not in main.c; this declaration is a way to tell the compiler that the function exists */

int main()
{
    char ch = fun('A');
    printf("ch : %c\n", ch);
}

Compiling with cc -c main.c will produce a main.o object file. Then link the two files with cc fun.o main.o -o main; it will link the two object files into the program main.

The return type declaration serves two goals. In your function prototype you have to declare what your function will return, which seems like extra code when you have only one return statement per function - but what else would make sure you are keeping your promise? What if you have two return statements in your function, returning different types? What should the inferred type of your function be? What if you are returning an int - should the computer store it in the char? What if the value is too big? What if you return a struct instead?
https://codedump.io/share/jfdwO3wp7svl/1/return-type-in-c
Fatal error RC1015: cannot open include file 'afxres.h'

I am trying to compile a samurize plugin for T-Balancer, and whatever I do I receive the message from the subject. I followed "Using Visual C++ 2005 Express Edition with the Microsoft Platform SDK" and it did not help. I have read somewhere that the problem can be related to paths that are too long; even so, I have no idea how to make them shorter. Also, the information that 'afxres.h' is related to MFC does not help. I just want to compile some examples, and every second attempt ends up with that very annoying message. Saturday, April 22, 2006 2:06 AM

That will only give you the include files. If you actually call anything from MFC, then you will not have MFC LIB files to link to, because the Platform SDK only comes with 64-bit versions of the libs. In fact, Visual C++ Express is not meant to be used with MFC (they don't ship MFC at all with it). To get around this, the MFC source code that comes with the Platform SDK must be rebuilt to generate library files compatible with Visual C++ 2005 Express (there is a way to do this, but it is by no means trivial). Wednesday, April 26, 2006 1:26 AM

I must be missing something painfully obvious. I have entered the Platform SDK path into the given screen, including the \mfc part, but still I get:

c:\documents and settings\monty\my documents\visual studio 2005\projects\nettest2\nettest2\stdafx.h(45) : fatal error C1083: Cannot open include file: 'afxwin.h': No such file or directory

All I need is a simple HTTP client that can send my file data to a PHP script. I'm running my software in XP and Server '03 environments and don't mind using MFC and/or .NET 2.0; I just need to tell this ()*&)&*)(&* compiler that the include files are right there! Tuesday, September 19, 2006 7:25 PM

The version of MFC that ships with the PSDK is a 64-bit version (it dates from the days when the PSDK was the only way to get 64-bit tools).
Even if you manage to include the header files and link, you'll probably discover that the code won't execute. Tuesday, September 19, 2006 8:24 PM

I struggled with this for a whole day and all the internet postings are useless. I know this probably isn't the best solution, but it works. In Solution Explorer, under resource files, right-click your .rc file and select Properties. Under Resource > General > Additional include directories, add the following:

;"$(VCInstallDir)atlmfc\include";"$(VCInstallDir)PlatformSDK\Include"

Friday, March 02, 2007 11:00 PM

Hi there. I have struggled with this a few times now, and what always helped perfectly was copying the atlmfc directory as-is from my full VS 2003 installation (from my student times) into the VC dir of Express, and linking it in the directory options. I have always done this under time pressure, so it could well be that I'm not actually using the lib code and header files are enough for me - but not for everybody... Cheers. Wednesday, July 04, 2007 12:24 AM

I had the same problem when trying to compile a resource file from VC++ 7.1. Adding "C:\Program Files\Microsoft Platform SDK\include\mfc" to the VC++ directories' include files fixed the issue. I wasn't even using MFC, but for some reason it needs the afxres.h file (I think because it includes windows.h), even though it doesn't need any MFC stuff. This "fix" should be mentioned in this document, along with the rest of the required changes to the VC++ directories. Of course, if the resource files from VC++ 7.1 didn't wrongly include an MFC file, and simply included windows.h as they should, this wouldn't be a problem. Wednesday, July 04, 2007 7:28 PM

It doesn't "wrongly" include afxres.h; under the VS IDE this is something useful. The afxres.h file has some additional definitions for common control types, like IDOK, IDCANCEL and more.
There are two more things to remember, too. First of all, Microsoft seems to have assumed that when you create a project you will keep editing it on the same machine or the same IDE type. Secondly, there is no built-in native resource handling in EE, meaning you edit resources manually yourself. In the situation where these resource files were initially created there is no problem including afxres.h, but when you move them, that is when problems can start occurring. Wednesday, July 04, 2007 10:06 PM

crescens2k, I assume you mean to replace every instance of afxres.h with windows.h? It doesn't just appear in one location for me:

/////////////////////////////////////////////////////////////////////////////
//
// Generated from the TEXTINCLUDE 2 resource.
//
#include "afxres.h"
.
.
.
2 TEXTINCLUDE
BEGIN
    "#include ""afxres.h""\r\n"
    "\0"
END

And there's also this line of code which seems to be related:

#if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_ENU)

Thursday, July 05, 2007 5:28 PM
The version of the MFC in the Platform SDK is for 64 bit windows only. It was only intended as a supplement for VS.net and VS.net 2003. If you look it only has 64 bit compilers too. If you look in the latest version of the PSDK you will actually notice that there is no MFC or ATL libraries there at all. This shows that since they are now in VS properly there is no need to distribute them with the PSDK. There is only two ways are supposed to get the MFC, through VS or the WDK. In all, the Express edition was not designed to work with MFC or ATL. It doesn't have the resource editor, which relies on MFC to work. Yes, if you get hold of the MFC then you can hook it up to VC but it is not intended for that.Thursday, July 05, 2007 7:25 PM Yes, replace all instances of it with windows.h. As far as the #if, hmm, it looks like it is there for MFC resource DLLs so it probably won't hurt you, since it would hit that case normally. Although, if you want to be safe you can just remove it.Thursday, July 05, 2007 7:29 PM Ok, so the VC++7.1 IDE needs afxres.h for some defines for its resource editor. And VC++8.0 Express, since it has no resource editor, doesn't need it. That's why removing the reference to afxres.h, and replacing it with windows.h, is fine in VC++8.0 Express, where it presumably wouldn't be in VC++7.1, since windows.h doesn't have the definitons that afxres.h has that its resource editor needs? I hope I got that right.Thursday, July 05, 2007 7:38 PM - Ok, thanks for your help, crescens2k.Thursday, July 05, 2007 7:39 PM So, Why the Microsoft in it's examples for DirectX dinput joystick for MVC ++ use afxres.h. And moust important question is how they manage to make it worksTuesday, November 20, 2007 8:39 PM Simple and very effective. Many thanks!Thursday, August 28, 2008 8:23 PM - Friday, November 14, 2008 5:17 PM - Where exactly would one edit this #include "afxres.h" for #include "windows.h" at? 
I'm a novice user at best. Thursday, December 04, 2008 8:55 AM

I had the same problem, but I was using Visual Studio 2008 Professional Edition, so the VC Express problem did not apply. However, in my case the RC1015 error happened when the project was configured in the Release config but worked in Debug. I checked the include directories but kept missing the option to include "standard include files". The Debug config had it checked, but the Release config did not. This one drove me insane for a few days. The problem happened because the version of Visual Studio I used had the standard include files disabled by default. Friday, April 10, 2009 1:10 AM

Hi everyone, I solved this issue by editing the resource file and replacing "afxres.h" with "windows.h", as was mentioned here, and it worked. Maybe it's because my code didn't need the MFC libs to run, so it may be useful to others. Cheers. Sunday, December 20, 2009 3:06 PM

Thanks, replacing "afxres.h" with "windows.h" did the trick. Wednesday, January 12, 2011 6:25 PM

The better choice for this is to edit the .rc file and replace afxres.h with windows.h. This is only for VC Express. I had the problem with standard Visual Studio, where this is not a good idea, because the .rc file will be auto-edited upon any resource change and the afxres.h will return. "I'll be back - hasta la vista, hombre!" ;) Thursday, February 24, 2011 10:08 AM

I also had this problem (C++, Visual Studio 2008, 64-bit); the #include was in a resource file (.rc). Apparently a resource file doesn't look at your additional include directories. I had to set the full path, then everything worked: #include "C:\Program Files (x86)\dummy dir\dummyFile.h" Thursday, May 19, 2011 9:12 AM
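The fix most posters converged on can be sketched as a small edit to the generated .rc file (this fragment is illustrative, not a complete resource script):

```c
// Before (as generated by the Visual Studio resource editor; needs MFC headers):
//   #include "afxres.h"

// After (for VC++ Express builds without MFC):
#include "windows.h"

// The TEXTINCLUDE block embeds the same include directive as a string,
// so it must be changed too:
2 TEXTINCLUDE
BEGIN
    "#include ""windows.h""\r\n"
    "\0"
END
```

As noted in the thread, this is only safe under Express: the full Visual Studio resource editor regenerates the file and will put afxres.h back.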
https://social.msdn.microsoft.com/Forums/en-US/d3480ee3-d880-4431-9264-b110a69c8a07/fatal-error-rc1015-cannot-open-include-file-afxresh?forum=Vsexpressvc
System Calls                                            vfork(2)

NAME
     vfork - spawn new process in a virtual memory efficient way

SYNOPSIS
     #include <unistd.h>

     pid_t vfork(void);

DESCRIPTION
     The vfork() function creates new processes without fully copying the address space of the old process. This function is useful in instances where the purpose of a fork(2) operation would be to create a new system context for an execve() operation (see exec(2)). The parent process is suspended while the child is using its resources.

     In a multithreaded application, vfork() borrows only the thread of control that called vfork() in the parent; that is, the child contains only one thread. In that sense, vfork() behaves like fork().

     The vfork() function can normally be used the same way as fork(). The procedure that called vfork(), however, should not return while running in the child's context, since the eventual return from vfork() would be to a stack frame that no longer exists. The _exit() function should be used in favor of exit(3C) if unable to perform an execve() operation, since exit() will flush and close standard I/O channels, and thereby corrupt the parent process's standard I/O data structures. The _exit() function should be used even with fork() to avoid flushing the buffered data twice.

ERRORS
     EAGAIN    The system-imposed limit on the total number of processes under execution would be exceeded. This limit is determined when the system is generated.

     ENOMEM    There is insufficient swap space for the new process.

SEE ALSO
     exec(2), exit(2), fork(2), ioctl(2), wait(2), exit(3C)

NOTES
     The use of vfork() for any purpose other than as a prelude to an immediate call to a function from the exec family or to _exit() is not advised.

     The vfork() function is unsafe in multithreaded applications. This function will be eliminated in a future release. The memory sharing semantics of vfork() can be obtained through other mechanisms.

     To avoid a possible deadlock situation, processes that are children in the middle of a vfork() are never sent SIGTTOU or SIGTTIN signals; rather, output or ioctls are allowed and input attempts result in an EOF indication.

     On some systems, the implementation of vfork() causes the parent to inherit register values from the child. This can create problems for certain optimizing compilers if <unistd.h> is not included in the source calling vfork().

SunOS 5.8                 Last change: 22 May 1996
http://www.manpages.info/sunos/vfork.2.html
Microcontroller Programming » make: *** [portplay.hex] Error 1

I get this when I try to compile this program. When I remove the while loop or substitute a literal for the second variable, it compiles and runs fine. I've installed WinAVR on my PC with Windows Vista. Here's my program:

#define F_CPU 14745600
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
#include "../libnerdkits/io_328p.h"
#include "../libnerdkits/delay.h"
#include "../libnerdkits/lcd.h"

// PORTS
// PB1, pin 15: Output for LED blink.

int main() {
  lcd_init();
  DDRB |= (1<<PB1); // Input from shift register

  uint8_t idx, tik, itr, pr = 0;
  lcd_write_string(PSTR("Test ripple and ring"));
  //delay_ms(5000);
  itr = 0;
  while (1) {
    tik = 0;
    while (tik < itr) {
      for (idx = 0; idx < 2; idx++) { // 10 to get all the way to all the col high
        PORTB |= (1<<PB1);
        delay_ms(2);
        PORTB &= ~(1<<PB1);
        delay_ms(2);
      }
      tik++;
    }
    for (idx = 0; idx < 2; idx++) { // 10 to get all the way to all the col high
      PORTB |= (1<<PB1);
      delay_ms(500);
      PORTB &= ~(1<<PB1);
      delay_ms(500);
    }
    delay_ms(3000);
    itr = pow(2, pr);
    pr += 2;
  }
  return 0;
}

and the error when I attempt to compile:

C:\328ATMega_Projects\ripple>make
make -C ../libnerdkits
make[1]: Entering directory `C:/328ATMega_Projects/libnerdkits'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `C:/328ATMega_Projects/libnerdkits'
avr-gcc -g -Os -Wall -mmcu=atmega328p -Wl,-u,vfprintf -lprintf_flt -Wl,-u,vfscanf -lscanf_flt -lm -o portplay.o portplay.c ../libnerdkits/delay.o ../libnerdkits/lcd.o ../libnerdkits/uart.o ../libnerdkits/delayns.o
c:/winavr-20090313/bin/../lib/gcc/avr/4.3.2/../../../../avr/lib/avr5\libc.a(modf.o): In function `modf':
(.text.fplib+0x3e): relocation truncated to fit: R_AVR_13_PCREL against symbol `__subsf3' defined in .text section in c:/winavr-20090313/bin/../lib/gcc/avr/4.3.2/avr5\libgcc.a(_addsub_sf.o)
make: *** [portplay.hex] Error 1

All the .c and .h files are in the libnerdkits folder. This has only started this evening after almost a year of trouble-free programming. Thanks.

You have the -lm in your makefile, but it needs to follow your source/object references, as shown here:

avr-gcc -g -Os -Wall -mmcu=atmega328p -mcall-prologues -std=c99 \
  -I/home/paul/AVR \
  -Wl,-Map=Test_AVR.o.map \
  Test_AVR.c -o Test_AVR.o \
  -lm -Wl,-u,vfprintf -lprintf_flt -Wl,-u,vfscanf -lscanf_flt

Also, as a matter of practice, the return 0 at the end of your source is useless. It follows an endless loop, so it will never be executed; even if it were, there is nothing to return to.

Hi, I've tried your suggestions but still get the same problem. In my program, portplay, if I change line 42 to itr = idx; it works, but not as it's given here, even with the changes to the makefile. I've also tried the original makefile from NerdKits that was included in the libnerdkits folder I downloaded from NerdKits when I first bought my NerdKit with the ATmega168 MCU. I made the necessary changes in my program before doing so. Next I changed that original makefile for the 168 according to your suggestion, and this too didn't work. I've even tried changing the arguments in pow(2, pr) from integer to double types, and this didn't have an effect either. Here is the makefile for the ATmega328 with your advised changes.
GCCFLAGS=-g -Os -Wall -mmcu=atmega328p
LINKFLAGS=-lm -Wl,-u,vfprintf -lprintf_flt -Wl,-u,vfscanf -lscanf_flt
AVRDUDEFLAGS=-c avr109 -p m328p -b 115200 -P COM4
LINKOBJECTS=../libnerdkits/delay.o ../libnerdkits/lcd.o ../libnerdkits/uart.o ../libnerdkits/delayns.o

all: portplay-upload

portplay.hex: portplay.c
	make -C ../libnerdkits
	avr-gcc ${GCCFLAGS} ${LINKFLAGS} -o portplay.o portplay.c ${LINKOBJECTS}
	avr-objcopy -j .text -j .data -O ihex portplay.o portplay.hex

portplay.ass: portplay.hex
	avr-objdump -S -d portplay.o > portplay.ass

portplay-upload: portplay.hex
	avrdude ${AVRDUDEFLAGS} -U flash:w:portplay.hex:a

Please let me know if I can fix this. I've had similar problems in other programs that involved the use of the pow(x,y) function. When I assign it to a variable and try to pass the value to another function or to a loop, I get the same Error 1. I've even tried to pass the reference to a function or loop, to no avail.

I'm not sure what you've tried, but if you want to use pow() like that then you have to move the link flags to follow the .c and .o in your makefile, like this:

	avr-gcc ${GCCFLAGS} -o portplay.o portplay.c ${LINKFLAGS} ${LINKOBJECTS}

Working now. Many thanks.
http://www.nerdkits.com/forum/thread/2626/
On Thu, Aug 10, 2006 at 02:12:26PM +0100, Daniel P. Berrange wrote:
> On Thu, Aug 10, 2006 at 08:02:20AM -0400, Jeremy Katz wrote:
> >
> > But read-only isn't all that you want -- think about giving access to a
> > CD-R drive. It's not read-only, but we still need to have it exposed as
> > a CD device. And with things like the bios for qemu and HVM guests, if
> > a device is a CD-ROM or a hard drive makes a large difference.
> >
> > Thinking out loud, what if we went with something like
> >   <cdrom type='file'>
> >     <source file='/root/boot.iso'/>
> >     <target dev='hdc'/>
> >   </cdrom>
> > for CDs and then similarly <floppy .../> for floppies
>
> I wouldn't do this for CDROMs, since they basically share the same device
> namespace as disks already - with versions of Xen / QEMU any hda -> hdd
> can be labelled as a cdrom by appending :cdrom - so they're best handled
> under the same XML tag as disks
>
> For floppy disks though we could certainly have a separate <floppy>
> tag name instead of <disk> - it would be clearer than distinguishing
> based on the value of the 'dev' attribute.

Actually I take that back. There is a potentially never-ending list of
different disk interfaces (IDE, FD, SCSI, XVDA) - I don't think we really
need to distinguish between them by having separate <floppy>, <disk> and
<cdrom> tags, since the value of the 'dev' attribute is always
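For reference, the representation libvirt eventually settled on follows the reasoning in this thread: one <disk> element for everything, with the media kind carried in a device attribute rather than in separate tags. A sketch based on current libvirt domain XML conventions, not on this 2006 thread itself:

```xml
<disk type='file' device='cdrom'>
  <source file='/root/boot.iso'/>
  <target dev='hdc'/>
  <readonly/>
</disk>
```

Floppies are handled the same way with device='floppy', so the 'dev' attribute on <target> never has to encode the media type by itself.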
https://www.redhat.com/archives/libvir-list/2006-August/msg00053.html
On a form used in our hospitals for pre-admitting patients, I'm trying to calculate BMI from height and weight. I've put the formula in the script editor and it correctly calculates BMI, but when the form is first opened the following error message appears:

Script failed (language is formcalc; context is xfa[0].form[0].F[0].#subform[1].BMI[0])
script=(Weight*703)/(Height*Height)
Error: arithmetic over/underflow.

I suspect this is because the denominator (Height*Height) is zero before a value has been entered into Height. Is this correct, and regardless of whether it is or not, how can I avoid this error message? The users are nurses, and they tend to come unwound when they encounter technical error messages.

Found the answer myself. It was a problem with a zero denominator. By using an if/then/endif you can control when it tries to run the calculation. The original formula was (weight*703)/(height*height). The solution was:

if (height > 0) then (weight*703)/(height*height) endif

That prevents it from trying to run the calculation before a value has been entered in 'height'.
https://forums.adobe.com/thread/680260
Regardless of the language's usefulness, which language fails the beauty contest the worst?

Top comments (236)

Does BASH count? :|

:| Sorry, but it's ugly only because it's wrong (there should have been additional spaces before and after the square brackets) ;) BASH is powerful and its syntax allows you to write easy-to-read code. It just needs some rules to follow. For instance, there is quite a good Shell Style Guide from Google. In this particular example, it could be written even more simply, which is even harder to read.

That's the funny thing about these questions (and variations of them can be found in developer forums going back to the early 80's and usenet groups): they are highly subjective. This should not be surprising, as language to a large degree encapsulates methods of thinking, and psychologists and linguists will tell you quickly that there are vastly many different ways of thinking. The fluidity of spoken language's ability to relay information comes not from its terseness; it comes from its interpretability. Programming languages are syntactically terse means of conveying information about how to shift bits in low-level computer memory, which in their abstractions from that low-level task must necessarily replicate the possible interpretability of the syntax, and that will vary with the mode of thinking of the reader. It's a paradox of information transfer between beings with vastly different ways of thinking about the particular ingredients of language revealing itself.
To me, a person who prefers syntax that is explicit over implicit, the first version looks more interpretable than the second. (My main criteria for this are twofold: (a) my personal preference, and (b) how I think most other engineers would think about it; in particular, to ensure that as many as possible would have as little effort as possible in capturing the meaning of the code.) Everything beyond that is subjective bickering....

That is going to be very helpful! Thank you.

IMHO the worst bash quirk is not being able to put whitespace around the assignment operator, ie.:

foo = bar # syntax error
foo=bar # works as expected

*cough* set -o to turn an option on and set +o to turn it off *cough*

Bash counts! It's beautiful, IMO. <3

That would be Bourne Shell's fault, not BASH.

Java's verbosity gets old pretty fast. Especially after seeing what's possible with other JVM languages. Kotlin is a joy to use, in part because of how expressive yet succinct it is. Going back and forth between the two makes Java's flaws much more visible, unfortunately.

Java syntax has the weird ability to direct the attention to the less important parts of the code first.

To each his own haha, I love Java for how verbose it is and really dislike Kotlin. Operator overrides are a really neat idea, but I can't get over the fun / func craze of using it instead of just saying function. Swift, Go, Rust, they all do it. What's the benefit of typing out function? There is none, it just adds more characters.

Then we should use f because "un" would just add a few more characters 🤔. I think there's a fine line between short and too short. There's a reason why we don't all prefer our Shakespeare or our technical documentation in SMS/text speak :). Writing code Shakespeare style would be awful.

f already stands for function in mathematics: it's just what you start with, the next one would be g(x)

Rust goes like: fn...
And I don't think there's anything wrong with writing fun instead of function; with good syntax highlighting there's barely a difference. I don't think code becomes any more readable by writing longer words. And Java's verbosity doesn't increase the readability at all. "final public static void"

Btw, you don't even have a function keyword in Java and nobody tends to make a problem of it. The modifiers are necessary to differentiate non-final private/protected non-static methods with a return type ... it makes total sense 😅

I agree with you on the fun part. I've been getting more into JavaScript recently and I can see the use of the full word (function) being better for beginners overall.

"fun is short for function" "Oh ok"

Yeah, but I still haven't seen the first person who doesn't question what fun is the first time they see it. Sometimes being clear and being short don't go hand in hand. There's a readability factor that I feel is added when you use the whole word function.

Especially for compiled languages, the "less characters" argument has no weight at all.

It's actually a "fewer characters" argument.

Actually in JavaScript ES6 you don't have to type any word to create a function, just () => {}, which is the best notation IMHO.

Objective C. Or should I say [[[[[[[[[objective c]]]]]]]]]]

PHP

Oldschool-wise, I can never forgive Pascal its := operator.

Can you tell me more about the := operator?

In Pascal, a := 33 would assign the value 33 to variable a. The is-equal operator was just =. It was less error-prone than what finally stuck, thanks to C and Java ---> = to assign values, == to compare them.

I strongly disagree with the Pascal remark, but I'm a Pascal lover, so I'm probably biased the other way. I'd say := vs. = is much better and a lot less error-prone and ugly than = vs. ==, let alone === in some languages. Also, := was not even a Pascal invention, but much older. It comes from ALGOL actually, IIRC.

=== and !== look rad in Fira Code

And there was me thinking it was from ADA ...
it's like archaeology, this, just stripping layers away. Historical linguistics is a real discipline in natural languages. I love the analogies between natural languages and computer languages.

Yes, and there's definitely a "thing" with code quality being correlated to natural readability too. I come from an ORACLE background originally, and it would drive me insane to see people uppercasing their SQL, like it was an act of kindness to the compiler, and us humans have to suck up the inconvenience. I used to send people photos of major highway direction signs and ask "WHY DO YOU THINK WE DO NOT PUT INFORMATION THAT HAS TO BE QUICKLY ABSORBED IN UPPER CASE? Which do you find easier to read?". I was also once instructed that all text in a BI system's reports should be in fixed-pitch uppercase – got out of it by showing how wide that made every text-heavy report. TL;DR: People are sometimes quite dumb.

All very fair points. I never minded Pascal as a language. Just that assignment operator. I had forgotten it came from ALGOL. But you agree PHP is hideous? :D

I believe there's worse than PHP, but that alone doesn't make any PHP better, for sure. :)

BTW, I always think about replacing : with be and = with equal in my mind when reading Pascal code. So a := 1; is "let a be equal 1". The same works with the colon elsewhere, for example type definitions like var x: integer; can be read as "let var x be integer;" ... It makes the whole syntax quite readable for me. But I'm not a native English speaker, so sorry if this just makes it even worse. :P

No, it's a nice thought. It does work, as far as I can remember. Maybe it's what Pascal's creators were thinking.

PHP: it's not a bad syntax, it's just not what some people like, and they've formed opinionated debates over it. Especially if you look at more modern PHP. I really like Laravel's code style. For non-PHP developers, a few syntax niceties: variables get a $ prefix, so $post = getPost($id). $request->has('name'). :: (by far my favorite).
Route::get('/api/posts', function() {});

I also like PHP namespacing, and accessing the namespace:

As someone who's used PHP in the past, all of your niceties are among the many reasons I don't care for the language 😆 to each their own though! 🤷🏻♂️

Haha, I can understand that. I actually liked the namespacing so much, I wrote a babel plugin that rewrites import paths in JS to work similarly. Not EXACTLY the same, but it's very similar.

I really dislike PHP namespacing.

I agree. There's nothing bad about the syntax. True, the language could be better. There are 99 reasons to dislike PHP, but syntax is definitely not one of them. PHP has been my main language for 10 years.

-> instead of . is rather unfortunate. Much harder to type :/ $var instead of var is also not ideal if you ask me, though at least not as silly as needing to hit 3 keys. That does not make the syntax as a whole ugly though.

Lol. It helps. :D

True enough. The syntax isn't bad as in defective or overly verbose. It does look inelegant to me, though.

You might have to blame ADA for that. PL/SQL also uses it – it is very closely related to ADA.

I personally find JavaScript to be absolutely hideous. JavaScript is very close to starting to look like a bunch of emojis: ()=> :}

()=> :} isn't a valid span of JS in any non-string context, but there is bound to be a Haskell library that introduces a ()=>:} binary operator/infix function. (This is a joke; all of ( ) : } are reserved characters.)

HA! You are 100% right! It's right behind Java, IMHO. It's much easier to read obfuscated C than JS.

Right, I mean... so many parentheses. I know that's not an aesthetic issue, but trying to figure out where everything begins and ends makes my brain cry.

Sounds like you need Bracket Pair Colorizer™

This is by far the most useful comment I've read here, thanks Mihail

Why would anybody care to read obfuscated code in the first place without the help of a formatter? Unless you are the developer of a bundler...
Definitely, but there are some small signs of it becoming more manageable.

Same here. Objective-C has a special place in the darkest places of my heart. "Wait, how many [ do I have? [[[[[ fuck this shit!"

And yet there are people who insist Objective-C is somehow a great language, ahead of its time. Like Javascript and C++. LOL

[[[alloc Shit] init] intercourseWith:Penguin]]]

Reminds me of ClojureScript.

I vote for Objective C as well.

I always hate dealing with our VB code base. You thought parentheses and curly braces were bad; what about a whole statement to terminate a block: End If, End Case, End For. Then there is having to continue a multi-line statement with an ending underscore _. Then there's the Sub vs. Func keywords. If I use Sub and later decide I need to return a value, I have to change the keyword to Func in order to do so or it will not compile. There are probably more I'm not thinking of.

VB.net was my first language, and I have a special hatred of it. (What sort of lunatic thought Dim made even the SLIGHTEST bit of sense for initializing a variable???)

HAHAHAHA That's literally the first thing all VB.net developers, who generally tend to hate VB.net, say. Even I was like "Hey, what's this Dim.. oh wait, that's a variable being declared."

I believe Dim is short for dimension. Not sure why the syntax is so array-focused. Technically a scalar value is just a special case where an array only has one dimension and one element... I remember it from QBasic in the 90s, and apparently it goes back to Fortran.

True point. Although, technically, declaring a value is also called instantiation, but that doesn't make Instant a good keyword for declaring a variable. :-P

Dim goes back to VB's BASIC roots. You didn't have to declare most scalar variables, but you had to use 'Dim' to 'Dim'ension arrays. In some dialects, strings were treated as scalars and didn't require declaration, but other dialects required you to 'Dim' them as arrays.
As was mentioned, it may even go further back to FORTRAN, but that's a bit before my time. /grognard hat off.

Be glad ReDim Preserve isn't a thing any more :P

DIM was short for dimension. In early BASIC forms, it was used to declare an array by indicating its size. Later it was repurposed to do any sort of declaration.

I suddenly remembered my very first programming language back in 1989 (?). GW-BASIC. And line numbers. 😐

GW-BASIC was my very first language, too! :D

Back in the day, I worked on a VB Embedded program but didn't have the devices yet, so I worked in VB 6. All well before .NET. When I started to port it over, I found that VB differed. Like, VBe and VB6 had incompatible for loops. Or maybe while. I didn't like VB much either way, but it worked. It's the incompatibility that's ugly to me. .NET ftw

Explicit is better than implicit, right? Doesn't get more explicit than that :D

this is terrifying yet taught as the "standard" for the first two years of every university that starts with java (the beginner language, not the uni name LOL) ;_;

Unpopular opinion, but Python. Personally I find it difficult to read. It's supposed to be really easy to learn and write, but in the process it lost any structure in longer files. Other languages that avoid braces, like Ruby or Lua, have clear endings to blocks. Python just drops out of the block without a clear indication.

We're not alone! I think Python is deceptively ugly, because at first you expect it to be sensible, but when you actually try to understand it you find the ugliness in its whitespace. At least stuff like Lisp and Objective-C have the courage to show their ugliness loud and proud. (And oh goodness! Those are ugly!)

I grew up on 6500-family and Z-80 assembler, and I find those very easy to read, despite some of the cramping from the mnemonics and addressing modes. I wrote a lot of RPG for IBM systems during the 80's and 90's. RPG is still (IMHO) the best business language in terms of clean syntax.
It's essentially a hopped-up macro assembler language.

Java's syntax has always made my head hurt. It has way too many rules and doesn't trust that devs have the capacity for making the right decision. Python's syntax doesn't cure all of Java's sins, but it has the decided benefit of not getting in the way of problem solving. If you follow the guidelines in PEP 8, make good decisions, discuss those decisions openly, and respect the analytical capacity of the Programmer of the Future, you can write not just syntactically clean and readable Python, but code that's maintainable. These are all big ifs, I grant you, but I'd rather solve problems and get work done than try to make Java happy.

I see the opposite problem. The lack of syntax is a bigger annoyance than how verbose Java is. This is why there are so many different languages; everyone has different opinions on how to solve problems. :)

As someone who has never worked with Python professionally, it felt like I was fighting the formatter any time I wanted to quickly jot down temporary code while working through a problem. The WHITESPACE is exactly the problem.

Lisp, "ugly"? Best oxymoron ever.

Oxymoron? Best misnomer ever. An oxymoron is when the antonym meets its base word.

Absolutely. My pet peeves with it are:

Python lacks punctuation, in the logical sense of the word. If punctuation were an unnecessary lexical tool, people wouldn't use it in English. It helps convey meaning.

Abbreviations. What the hell do "t", "p", "x", "y", "pd", "plt" and "sns" stand for? The community has a horrible notion of naming conventions. The most popular Stack Overflow posts have this style of naming, and it's making the language hard to learn.

Underscores. Who decided that __init__, with its two sets of double underscores, should be one of the things developers write most often? It's not readable.

All of these things make Python hard to understand.

I'm personally a big fan of Python, but I still totally agree with #3.
Even though I write Python pretty regularly, 95% of the time I just copy and paste the whole: (copy & pasted)

Oh yeah, despite complaining so much, I still like it overall as a language.

I hate Python too. The lack of {} makes it difficult to read, debug and write, and the whitespace is a nightmare. It is also a nightmare to copy and paste snippets.

The greatness of Python is proportional to its ugliness.

Basically every language that's actually paid my bills (which means I can't hate them too much):

Java: so much boilerplate private static final void getter arghh
C++: Ever had<to const<try to > parse> this nonsense const?
Perl: my $head has @had enough of this %_#&
Obj-C: Did no one [get theMessage:message] that sometimes you have to actually @Read code?
Obj-C: what if C++ but everything is different just because
...and then there's Objective-C++. I want to tell whoever made it: "What did source code ever do to you?!?"

I like this game.

Ugly syntax:
Verbose, unwieldy, and/or awkward syntax... but not what I'd quite call truly ugly:
I-just-don't-like-it syntax:
Nice syntax:
Syntax that I'm not sufficiently familiar with to find the flaws, but I bet I'd find it nice, or at least not ugly:
https://dev.to/ben/which-mainstream-programming-language-has-the-ugliest-syntax--31l
The latest version of this document is always available at. This FAQ tries to answer specific questions concerning GCC. For general information regarding C, C++, and Fortran, please check the comp.lang.c FAQ, comp.lang.c++ FAQ, comp.std.c++ FAQ, and the Fortran Information page.

In April 1999 the Free Software Foundation officially halted development on the gcc2 compiler and appointed the EGCS project as the official GCC maintainers. We are in the process of merging GCC and EGCS, which will take some time. The net result will be a single project which will carry forward GCC development under the ultimate control of the GCC Steering Committee.

It is a common misconception that Cygnus controls GCC, either directly or indirectly. While Cygnus does donate hardware, network connections, code and developer time to GCC development, Cygnus does not control GCC. Overall control of GCC is in the hands of the GCC Steering Committee, which includes people from a variety of different organizations and backgrounds. The purpose of the steering committee is to make decisions in the best interest of GCC and to help ensure that no individual or company has control over the project. To summarize, Cygnus contributes to the GCC project, but does not exert a controlling influence over GCC. With GCC, we are going to try a bazaar style of development.

There are complete instructions in the gcc info manual, section Bugs. The manual can also be read using `M-x info' in Emacs, or, if the GNU info program is installed on your system, by `info --node "(gcc)Bugs"'. Or see the file BUGS included with the GCC source code. Before you report a bug for the C++ compiler, please check the list of well-known bugs. If you want to report a bug with egcs 1.0.x or egcs 1.1.x, we strongly recommend upgrading to the current release first.
In short, if GCC says Internal compiler error (or any other error that you'd like us to be able to reproduce, for that matter), please mail a bug report to gcc-bugs@gcc.gnu.org or bug-gcc@gnu.org including the items below. All this can normally be accomplished by mailing the command line, the output of the command, and the resulting `your-file.i' for C, or `your-file.ii' for C++, corresponding to:

gcc -v --save-temps all-your-options your-file.c

Typically the CPP output (extension .i for C or .ii for C++) will be large, so please compress the resulting file with one of the popular compression programs such as bzip2, gzip, zip, pkzip or compress (in decreasing order of preference). Use maximum compression (-9) if available. Please include the compressed CPP output in your bug report. Since we're supposed to be able to re-create the assembly output (extension .s), you usually don't have to include it in the bug report, although you may want to post parts of it to point out assembly code you consider to be wrong. Large reports can be broken into pieces (with split, for example) and posted in separate messages, but we prefer to have self-contained bug reports in single messages.

The Fortran front end can not be built with most vendor compilers; it must be built with gcc. As a result, you may get an error if you do not follow the install instructions carefully. In particular, instead of using "make" to build GCC, you should use "make bootstrap" if you are building a native compiler or "make cross" if you are building a cross compiler.

It has also been reported that the Fortran compiler can not be built on Red Hat 4.X GNU/Linux for the Alpha. Fixing this may require upgrading binutils or moving to Red Hat 5.0; we'll provide more information as it becomes available.

To ensure that GCC finds the GNU assembler (the GNU loader).

The GCC testsuite is not included in the GCC 2.95 release due to the uncertain copyright status of some tests.
It is believed that only a few tests have uncertain copyright status and thus only a few tests will need to be removed from the testsuite. A dejagnu snapshot is available until a new version of dejagnu can be released.

Please read the host/target specific installation notes, too.

If you are using the GNU assembler (aka gas) on an x86 platform and exception handling is not working correctly, then odds are you're using a buggy assembler. Releases of binutils prior to 2.9 are known to assemble exception handling code incorrectly. We recommend binutils-2.9.1 or newer. Some post-2.9.1 snapshots of binutils fix some subtle bugs, particularly on x86 and alpha. They are available at. The 2.9.1.0.15 snapshot is known to work fine on those platforms; other than that, be aware that snapshots are in general untested and may not work (or even build). Use them at your own risk.

For the general case, there is no way to tell whether a specified clobber is intended to overlap with a specific (input) operand or is a program error, where the choice of actual register for operands failed to avoid the clobbered register. Such unavoidable overlap is detected by GCC 2.95 and newer versions, and flagged as an error rather than accepted. An error message is given, such as:

foo.c: In function `foo':
foo.c:7: Invalid `asm' statement:
foo.c:7: fixed or forbidden register 0 (ax) was spilled for class AREG.

Unfortunately, a lot of existing software, for example the Linux kernel version 2.0.35 for the Intel x86, has constructs where input operands are marked as clobbered. The manual now describes how to write constructs with operands that are modified by the construct, but not actually used. To write an asm which modifies an input operand but does not output anything usable, specify that operand as an output operand outputting to an unused dummy variable.
In the following example for the x86 architecture (taken from the Linux 2.0.35 kernel -- include/asm-i386/delay.h), the register-class constraint "a" denotes a register class containing the single register "ax" (aka "eax"). It is therefore invalid to clobber "ax"; this operand has to be specified as an output as well as an input. The following code is therefore invalid:

extern __inline__ void __delay (int loops)
{
  __asm__ __volatile__ (".align 2,0x90\n1:\tdecl %0\n\tjns 1b"
                        : /* no outputs */
                        : "a" (loops)
                        : "ax");
}

It could be argued that since the register class for "a" contains only a single register, the intended overlap is unambiguous; GCC nonetheless rejects the construct. This corrected, clobber-less version is valid for GCC 2.95 as well as for previous versions of GCC and EGCS:

extern __inline__ void __delay (int loops)
{
  int dummy;
  __asm__ __volatile__ (".align 2,0x90\n1:\tdecl %0\n\tjns 1b"
                        : "=a" (dummy)
                        : "0" (loops));
}

Note that the asm construct now has an output operand, but it is unused. Normally asm constructs with only unused output operands may be removed by gcc, unless marked volatile as above.

The Linux kernel violates certain aliasing rules specified in the ANSI/ISO standard. Starting with GCC 2.95, the gcc optimizer by default relies on these rules to produce more efficient code and thus will produce malfunctioning kernels. To work around this problem, the flag -fno-strict-aliasing must be added to the CFLAGS variable in the main kernel Makefile.

If you try to build a 2.0.x kernel for Intel machines with any compiler other than GCC 2.7.2, then you are on your own. The 2.0.x kernels are to be built only with gcc 2.7.2. They use certain asm constructs which are incorrect, but (by accident) happen to work with gcc 2.7.2. If you insist on building 2.0.x kernels with egcs, you may be interested in this patch which fixes some of the asm problems. You will also want to change asm constructs to avoid clobbering their input operands.

Finally, you may get errors with the X driver of the form _X11TransSocketUNIXConnect: Can't connect: errno = 111.
When compiling X11 headers with GCC 2.95 or newer, g++ will complain that types are missing. These headers assume that omitting the type means 'int'; this assumption is wrong for C++. g++ accepts such (illegal) constructs with the option -fpermissive; it will assume that the missing type is 'int' (as defined by the C89 standard). Since the upcoming C99 standard also obsoletes the implicit type assumptions, the X11 headers have to get fixed eventually.

Building cross compilers is a rather complex undertaking because they usually need additional software (cross assembler, cross linker, target libraries, target include files, etc). We recommend reading the crossgcc FAQ for information about building cross compilers. If you have all the pieces available, then `make cross' should build a cross compiler. `make LANGUAGES="c c++" install' will install the cross compiler.

Unfortunately, improvements in tools that are widely used are sooner or later bound to break something. Sometimes the code that breaks was wrong, and then that code should be fixed, even if it works for earlier versions of gcc or other compilers. The following problems with some releases of widely used packages have been identified. There is a separate list of well-known bugs describing known deficiencies. Naturally we'd like that list to be of zero length. To report a bug, see How to report bugs.

The FD_ZERO macro in (e.g.) libc-5.4.46 is incorrect. It uses invalid asm clobbers. The following rewrite by Ulrich Drepper <drepper@cygnus.com> should fix this (for glibc 2).

Apparently Octave 2.0.13 uses some C++ features which have been obsoleted and thus fails to build with EGCS 1.1 and later. This patch to Octave should fix this. Octave 2.0.13.96, a test release, has been compiled without patches by egcs 1.1.2. It is available at.

This has nothing to do with gcc, but people ask us about it a lot.
Code like this: #include <stdio.h> FILE *yyin = stdin; will not compile with GNU libc (Linux libc6), because stdin is not a constant. This was done deliberately, in order for there to be no limit on the number of open FILE objects. The appropriate place to ask questions relating to GNU libc is libc-alpha@sourceware.cygnus.com.

Let me guess... you wrote `#' The problem, simply put, is that GCC's preprocessor does not allow you to put #ifdef (or any other directive) inside the arguments of a macro. Your C library's string.h happens to define memcpy as a macro - this is perfectly legitimate. The code therefore will not compile. Compilers differ in how they handle memcpy(dest, src, 1224); from the above example; a very few might do what you expected it to. We therefore feel it is most useful for GCC to reject this construct immediately so that it is found and fixed. Second, it is extraordinarily difficult to implement the preprocessor such that it does what you would expect for every possible directive found inside a macro argument. The best example is perhaps #define foo(arg) ... arg ... foo(blah #undef foo blah) which.

This error means your system ran out of memory; this can happen for large files, particularly when optimizing. If you're getting this error you should consider trying to simplify your files or reducing the optimization level.

We make snapshots of the GCC sources about once a week; there is no predetermined schedule.

Many folks have been asking where to find libg++ for GCC. First we should point out that few programs actually need libg++; most only need libstdc++/libio, which are included in the GCC distribution. If you do need libg++ you can get a libg++ release that works with GCC from. Note that the 2.8.2 snapshot pre-dates the 2.8.1.2 release. If you're using diffs updated from one snapshot to the next, or if you're using the CVS. Autoconf is available from; have a look at for the other packages.
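Returning to the preprocessor item above: the fix it implies can be sketched in plain C. The macro name BIG_COPY and the buffer sizes below are invented for illustration; the point is to hoist the directive out of the macro's argument list so each branch contains a complete call.

```c
#include <string.h>

/* BIG_COPY is a made-up configuration macro for this sketch. */
#define BIG_COPY 1

/* Wrong (GCC rejects it): a directive inside a macro's argument list:
 *
 *     memcpy(dest, src,
 *     #ifdef BIG_COPY
 *            n
 *     #else
 *            5
 *     #endif
 *           );
 *
 * Right: wrap complete calls in the conditional instead.
 */
static char *conditional_copy(char *dest, const char *src)
{
#if BIG_COPY
    memcpy(dest, src, strlen(src) + 1);  /* whole string, incl. NUL */
#else
    memcpy(dest, src, 5);                /* a fixed prefix only */
    dest[5] = '\0';
#endif
    return dest;
}
```

Because each #if branch now contains a syntactically complete statement, the preprocessor never has to interpret a directive mid-argument-list.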
It is not uncommon to get CVS conflict messages for some generated files when updating your local sources from the CVS repository. Typically such conflicts occur with bison or autoconf generated files. As long as you haven't been making modifications to the generated files or the generator files, it is safe to delete the offending file, then run cvs update again to get a new copy.

On some systems GCC will produce dwarf debug records by default; however, the gdb-4.16 release may not be able to read such debug records. You can either use the argument "-gstabs" instead of "-g" or pick up a copy of gdb-4.17 to work around the problem.

The GNU Ada front-end is not currently supported by GCC; however, it is possible to build the GNAT compiler with a little work. First, retrieve the gnat-3.10p sources. The sources for the Ada front end and runtime all live in the "ada" subdirectory. Move that subdirectory to egcs/gcc/ada. Second, apply the patch found in egcs/gcc/README.gnat. Finally, rebuild per the GNAT build instructions.

The GNU Pascal front-end does work with EGCS 1.1. It does not work with EGCS 1.0.x and the main branch of the CVS repository. A tarball can be found at.

It is possible to check out specific snapshots with CVS or to check out the latest snapshot. We use CVS tags to identify each snapshot we make. Snapshot tags have the form "egcs_ss_YYYYMMDD". In addition, the latest official snapshot always has the tag "gcc_latest_snapshot".

The current version of gperf (v2.7) does not support the -F flag which is used when building egcs from CVS sources. You will need to obtain a patch for gperf and rebuild the program; this patch is available at. Patches for other tools, particularly autoconf, may also be necessary if you're building from CVS sources. Please see the FAQ entry regarding these tools to determine if anything else is needed. These patched utilities should only.
From the libstdc++-FAQ: "The EGCS Standard C++ Library v3, or libstdc++-2.90.x, is an ongoing project to implement the ISO 14882 Standard C++ library as described in chapters 17 through 27 and annex D." At the moment libstdc++-v3 is not a "drop-in replacement" for GCC's libstdc++. The best way to use it is as follows: Please note that libstdc++-v3 is not yet complete and should only be used by experienced programmers. For more information please refer to the libstdc++-v3 homepage. Return to the GCC home page. Last modified: October 19, 1999
https://opensource.apple.com/source/gcc_legacy/gcc_legacy-938/faq.html
score:16 I restricted the textbox to accept only digits and format the value in US mobile-number format. Use the code below. handleChange(e) { const onlyNums = e.target.value.replace(/[^0-9]/g, ''); if (onlyNums.length < 10) { this.setState({ value: onlyNums }); } else if (onlyNums.length === 10) { const number = onlyNums.replace( /(\d{3})(\d{3})(\d{4})/, '($1) $2-$3' ); this.setState({ value: number }); } } score:-1 Set a class on your TextField: <TextField type="number" inputProps={{className:'digitsOnly'}} floatingLabelText="mobileNumber" value={this.state.value} onChange={this.handleChange} id={"mobileNumber"} name={"mobileNumber"} /> You can apply a class to your TextField that allows only numbers as follows: $(".digitsOnly").on('keypress',function (event) { var keynum if(window.event) {// IE8 and earlier keynum = event.keyCode; } else if(event.which) { // IE9/Firefox/Chrome/Opera/Safari keynum = event.which; } else { keynum = 0; } if(keynum === 8 || keynum === 0 || keynum === 9) { return; } if(keynum < 46 || keynum > 57 || keynum === 47) { event.preventDefault(); } // prevent all special characters except decimal point } Restrict paste and drag-drop on your TextField: $(".digitsOnly").on('paste drop',function (event) { let temp='' if(event.type==='drop') { temp =$("#mobileNumber").val()+event.originalEvent.dataTransfer.getData('text'); var regex = new RegExp(/(^100(\.0{1,2})?$)|(^([1-9]([0-9])?|0)(\.[0-9]{1,2})?$)/g); //Allows only digits to be drag and dropped if (!regex.test(temp)) { event.preventDefault(); return false; } } else if(event.type==='paste') { temp=$("#mobileNumber").val()+event.originalEvent.clipboardData.getData('Text') var regex = new RegExp(/(^100(\.0{1,2})?$)|(^([1-9]([0-9])?|0)(\.[0-9]{1,2})?$)/g); //Allows only digits to be pasted if (!regex.test(temp)) { event.preventDefault(); return false; } } } Call these handlers in componentDidMount() so the class is applied as soon as the page loads.
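The strip-and-group logic in the handleChange above can also be factored into a plain function (the function name here is mine, not from the answer), which keeps the formatting rule independent of React state and easy to unit-test:

```javascript
// Strip non-digits, cap at 10 digits, and format a complete number
// as (AAA) BBB-CCCC; partial input is returned as bare digits.
function formatUSPhone(input) {
  const digits = String(input).replace(/[^0-9]/g, '').slice(0, 10);
  if (digits.length < 10) {
    return digits;
  }
  return digits.replace(/(\d{3})(\d{3})(\d{4})/, '($1) $2-$3');
}
```

A component's change handler then reduces to `this.setState({ value: formatUSPhone(e.target.value) })`.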
score:3 In case you are using a form-validation library like react-hook-form, you can validate your TextField like this: <TextField type="number" {...register("phonenum",{ required: { value: true, message: 'Please fill this field', }, pattern: { value: /^[1-9]\d*(\d+)?$/i, message: 'Please enter an integer', }, min: { value: 1, message: 'Value should be atleast 1', }, })} error={errors?.index ? true : false} helperText={errors?.index?.message} /> In case you want to input phone numbers only, I would highly recommend considering react-phone-input-2. score:3 It's really simple in MUI; just add type='number': <TextField type='number' // add this line onChange={(e) => setCode(e.target.value)} score:12 import Input from '@material-ui/core/Input'; Use <Input type="number" /> instead. Source: stackoverflow.com
https://www.appsloveworld.com/reactjs/100/11/how-to-allow-only-numbers-in-textbox-and-format-as-us-mobile-number-format-in-rea
Software Development Kit (SDK) and API Discussions Dear all, (Please let me know the right place to discuss this kind of topic since I'm new to this community site, thank you) I'm trying to use the Ruby API NaServer.rb (and others), which are provided by NetApp Inc, to control a NetApp server from our Ruby-based application. When integrating this API into our app, some strange behavior happens in our unit tests. After removing the portion which uses this API, there is no problem (but of course the app cannot work with the NetApp server). This is apparently a NaServer.rb side effect. After further investigation, we found that there is an issue in the NaServer.rb code as follows: ... require 'rexml/streamlistener' include REXML # Here require 'stringio' include StreamListener # and Here! These two 'include' lines are apparently wrong usage of Ruby, because 'include' in Ruby mixes the module into the current namespace, so the two lines above cause global methods to be overwritten by all of the REXML and StreamListener methods. That's quite dangerous, I think. Actually, an ActionMailer (or RSpec?, sorry, I forget right now) "comment" method is overwritten by this, so our unit test failed. Actually, in this case, 'include' is not necessary. It should be OK to remove the 'include' lines and rewrite the lines which use REXML, for example: Document.parse_stream(xml_response, MyListener.new) | V REXML::Document.parse_stream(xml_response, MyListener.new) (NaServer.rb line 284, as one example) Could anyone update NaServer.rb? Best Regards, Hi, Thanks for your suggestion. We will check if we can incorporate these changes in the upcoming releases of NMSDK (after thorough evaluation). Regards, Sen. Hi Sen, Thank you for your prompt reply. I'm looking forward to hearing the update.
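The namespace hazard described in the post can be demonstrated in a few lines of Ruby. The module and class names below are invented for illustration (this is not NaServer.rb code): a top-level `include` mixes the module into `Object`, so every object in the program suddenly gains its methods.

```ruby
module Listener
  # Stands in for REXML::StreamListener's callback methods.
  def comment(text)
    "listener saw: #{text}"
  end
end

class Mailer
  # Mailer defines no #comment of its own; other code may rely on
  # respond_to?(:comment) being false here.
end

before = Mailer.new.respond_to?(:comment)   # false: no such method yet

include Listener   # top-level include, like NaServer.rb's `include StreamListener`

after = Mailer.new.respond_to?(:comment)    # true: every Object now has #comment
```

This is why qualifying the constant instead, as the poster suggests (`REXML::Document.parse_stream(...)`), avoids polluting every class in the application.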
https://community.netapp.com/t5/Software-Development-Kit-SDK-and-API-Discussions/NaServer-rb-ruby-API-issue/td-p/70796
On 9/15/19 10:04 AM, Richard W.M. Jones wrote: > On Sun, Sep 15, 2019 at 03:55:41PM +0100, Richard W.M. Jones wrote: >> - Plugins could change content based on client. (The fourth patch in >> the series is a PoC of this implemented in the new reflection >> plugin.) Be cautious about combining this feature with multi-conn >> as it's not obviously always safe to do. > > Given this commit, I guess we should squash in the following to the > 4th patch: > > diff --git a/plugins/reflection/reflection.c b/plugins/reflection/reflection.c > index a0d7c60..f765557 100644 > --- a/plugins/reflection/reflection.c > +++ b/plugins/reflection/reflection.c > @@ -303,11 +303,22 @@ reflection_get_size (void *handle) > return (int64_t) h->len; > } > > -/* Read-only plugin so multi-conn is safe. */ > static int > reflection_can_multi_conn (void *handle) > { > - return 1; > + switch (mode) { > + /* Safe for exportname modes since clients should only request > + * multi-conn with the same export name. > + */ > + case MODE_EXPORTNAME: > + case MODE_BASE64EXPORTNAME: > + return 1; > + /* Unsafe for mode=address because all multi-conn connections > + * won't necessarily originate from the same client address. > + */ > + case MODE_ADDRESS: > + return 0; Correct - any two simultaneous clients over TCP will necessarily have different content even if they have requested the same export name, so you do need this patch squashed in. Unix sockets (currently) get the same content, but it's not worth trying to distinguish TCP vs. Unix sockets here. -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org
https://listman.redhat.com/archives/libguestfs/2019-September/msg00116.html
CS 638 Graphics Instructor: Dr. Gleicher TA: Richard Yu Gu Updated September 2000, Michael Gleicher Last modified: 17:12 Sep 7, 2001 1 Introduction The Fast Light Tool Kit ("FLTK", pronounced "fulltick") is an LGPL'd C++ graphical user interface toolkit for X (UNIX®), OpenGL®, and Microsoft® Windows® NT 4.0, 95, 98, or 2000. This tutorial shows how to construct GUI applications using FLTK with Microsoft Visual C++ 6.0. This tutorial documentation is intended for the reader to quickly get started with the FLTK library under MSVC++ for the Graphics class. Note: This is neither a detailed nor a complete documentation of the FLTK library or MSVC++. Instead, it walks you through the process of building a program with FLTK in the U. Wisconsin CSL software environment. For the complete documentation please check at, or locally at S:\fltk\i386_win2k\documentation\. Since this documentation is intended for the students in the Graphics class only, we assume that you are using the instructional Windows 2000 stations in the CSL labs. 2 Creating A New FLTK Work Space In Visual C++ 6.0 Note: One big advantage of FLTK over some of the other UI toolkits is that it can be built into a console application. That means that all of the C++ standard I/O stuff you used under Unix (like printf or cout) can be used in your programs. 3 Setting Up the FLTK Project Now that the new project is created, we are going to configure it to be an FLTK project. 4 Create and Add Source Files After you have all the options set up, the project is FLTK ready. All you need is to add your FLTK source files to the project, then build and run. To add source files to the project, follow the instructions below. There is a working example project here. You can use the settings of this project as a template for your programs. For more FLTK code examples please refer to the FAQ of this tutorial. 5 Build And Run Choose Build->Build [projectname].exe.
Now choose Build->Execute [projectname].exe. This should compile and link the project, and launch the new executable. If error messages are generated during the build process, there might be a problem with either your code or some of the settings we made. For more detail please refer to Trouble Shooting. If the errors occur at other places, most likely the problem is in your program code itself. Note: You can choose to build and run the project in either the Debug or Release configuration through the Build->Set Active Configuration... menu. 6 Reopen A Project The easiest way to do so is to go to the project directory and double-click on the [projectname].dsw file. Or you can choose File->Open WorkSpace within the MSVC application, and open the [projectname].dsw file. 7 FLTK Basics For example, if you need to use the Fl_Window object, then you should have this line in your code: #include <FL/Fl_Window.H>
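As a quick sanity check that the project settings are correct, the classic FLTK "hello world" (adapted from the standard FLTK documentation) can be used as the project's only source file. It needs a windowing environment and the FLTK library to run, so treat it as a build-verification sketch:

```cpp
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Box.H>

int main(int argc, char **argv) {
    // A 340x180 top-level window containing a single label widget.
    Fl_Window *window = new Fl_Window(340, 180, "Hello FLTK");
    Fl_Box *box = new Fl_Box(20, 40, 300, 100, "Hello, World!");
    box->labelsize(24);
    window->end();           // stop adding child widgets to the window
    window->show(argc, argv);
    return Fl::run();        // enter the event loop
}
```

If this builds, links, and shows a window, the include paths, library paths, and runtime settings described above are all correct.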
http://www.cs.wisc.edu/graphics/Courses/559-f2001/Notes/fltktut.html
Jeff) > > Almost: Assuming that OGRGeometryH is some sort of pointer, e.g. typedef struct OGRGeometry *OGRGeometryH you could write: > data OGRGeometry > type OGRGeometryH = Ptr OGRGeometry where OGRGeometryH is the type that you API exports. > foreign import ccall > oGR_G_CreateFromWkb :: ...whatever... -> Ptr OGRGeometryH -> OGRErr Note that Ptr OGRGeometryH is in fact a pointer to a pointer, just as the C type demands. When calling such a procedure you must first allocate space. This is most elegantly done using alloca (I have added some basic error handling, otherwise the value returned might be undefined): > createFromWbk :: ...argtypes... -> Either OGRErr OGRGeometryH > createFromWbk ...args... = alloca $ \ptr -> do > -- in this code block you can peek and poke the ptr; > -- space will be deallocated after block exits > res <- oGR_G_CreateFromWkb ...args... ptr > if checkResultOK res > then do > -- this can be shortened to: liftM Right $ peek ptr > -- or: peek ptr >>= return . Right > h <- peek ptr > return (Right h) > else return (Left res) (Instead of returning an Either type you could throw exceptions, but you didn't ask about error handling, did you ?-) Cheers Ben
http://www.haskell.org/pipermail/haskell-cafe/2008-October/049539.html
This SVVDecayer class implements the decay of a scalar to 2 vector bosons using either the tree level VVSVertex or the loop vertex. More... #include <SVVDecayer.h> This SVVDecayer class implements the decay of a scalar to 2 vector bosons using either the tree level VVSVertex or the loop vertex. It inherits from GeneralTwoBodyDecayer and implements the virtual member functions me2() and partialWidth(). It also stores a pointer to the VVSVertex. Definition at line 36 of file SVVDecayer.h. Make a simple clone of this object. Implements ThePEG::InterfacedBase. Initialize this object after the setup phase before saving an EventGenerator to disk. Reimplemented from Herwig::GeneralTwoBodyDecayer. Initialize this object. Called in the run phase just before a run begins. Definition at line 135 of file SVVDecayer.h.
https://herwig.hepforge.org/doxygen/classHerwig_1_1SVVDecayer.html
03 January 2012 15:21 [Source: ICIS news] HOUSTON (ICIS)--Williams has completed the separation of its exploration and production (E&P) business into a stand-alone listed company. Williams' E&P business, WPX Energy, will begin trading on the New York Stock Exchange under the symbol WPX on Tuesday. "We've now fully executed on our plans to create two separate and strong companies, each with a clear focus," said Williams' CEO, Alan Armstrong. "This effort has been all about unlocking value for shareholders and creating the best possible growth prospects for our businesses," Armstrong said. In addition to its core energy infrastructure and natural gas liquids (NGL) business, Williams has olefins plants in the … The Tulsa, Oklahoma-based Williams employs a staff of about 4,100 in the … For more on Williams
http://www.icis.com/Articles/2012/01/03/9519948/us-williams-completes-separation-of-ep-business.html
Good afternoon. I have a Viewport with accordion menu, How do I get the click event of each menu item? My Controller Ext.define('aplicacao.controller.Usuarios', { extend :... Type: Posts; User: vanderbill Good afternoon. I have a Viewport with accordion menu, How do I get the click event of each menu item? My Controller Ext.define('aplicacao.controller.Usuarios', { extend :... tks for answer, but its not a checkbox, is a CheckBoxSelectionModel in a grid...:) Public Events Event Defined By beforerowselect : (... wich event o need implment to know??? tks :D:D sorry i haved tested, im wrong here.. i have implemented in a wrong event! sry :">:"> ok, my list is = List<EmpresaData> mylist; but, if i use public void loaderLoad(LoadEvent le) { getGridEmpresa().getSelectionModel().select( this is a bug??? because i used the same code but iwth select(index) method getGridEmpresa().getSelectionModel().select(1); and work, but i need select my list!!!! ty for all guys!:D Hello guys, im tryng select my list, when a load my page(pagingtoolbar) my grid private Grid<EmpresaData> getGridEmpresa() { if (gridEmpresa == null) { gridEmpresa = new... hello guys, im trying implements basic login but.... the responseText is returning all code look. /** * @author Vander */ Ext.onReady(function(){ ok i will look on net for gzipping, but have any tutorial how i can active this.. ty for answer :D Hello guys, i have an aplication on gxt, then i wll acess first time later deployed in tom cat it load a lot, between 15 and 30 seconds, any have the same issue??? tks guys sorry my bad english! look the code in examples, there are server side code paging implementation :D:D:D:D:D I have same problem can any1 help??? Hello guys im trying migrating, but not all project. I have any problems with datefield... The trigger(Data picker i think ) dont show!! [CODE]public... 
i get it :D:D:D:D:D private void refreshChk(final List<ModelData> list) { if (!getGrid().isRendered()) return; if (list == null) { if... Hello :)) I checkboxSelectionModel how i can take select/deselect checkbox??? sm = new CheckBoxSelectionModel<RamoData>(); sm ... Hello guys. I making a test here and to select rows onload...but nothing happens. public BasePagingLoader getLoader() { if (loader == null) { loader = new... sorry i dont understand, can gimme a example??? tks for help... but i think dont have a solution for Ext gwt yet...but tks for all!!! im looking yet!!:D:D:D Im trying override getEditor, but dont work :((:(( ColumnConfig colResposta = new ColumnConfig("resposta", "Resposta", 150) { @Override ... Hello guys. How i can have more than one widget in the same column in a editorGrid??? example: row Column A 1 TextField 2 ComboBox(options 1, 2) 3 CheckBox 4 ... Ty so much its Work =D Hello guys. I have a query where have 12.000 rows, but i iwant paging it. Im tryng it In Client side: My model: public class ConhecimentoModel extends BaseModelData { private String... hello guys.....the problem happen only in linux....in Windows its works :D:D:D:D ty so much for the great work....cya!!!! hello this key dont work too when i hold delete or backspace the mask disconfigure...i need making any stuff??? the key '~' is not validate i think the key '
https://www.sencha.com/forum/search.php?s=fdac6c807665bd0dfeb55c16be55e689&searchid=18955953
Hi people, Let me try to explain a subject about which I already get a lot of emails asking how it works: - Data Server - Physical Schema - Logical Schema - Context It's very common for doubts to arise from this combination because it is based on 3 concepts that I call "The three bases". The most important point is: one doesn't exist without the others. It's good to remember that the Data Server can be understood as the higher level of the Physical Schema and that one Physical Schema is linked to one, and no more than one, Data Server. For a better understanding, take a look at the following flow: How does this flow work? - Data Server - Object that defines the connection to the database. It stores the IP, user and password, for instance - Physical Schema - Defines 2 database schemas (Oracle definition), one to read the data and the other for ODI to work in (the work area where the C$ and I$ tables can be created if necessary) - Context: - Defines an "environment", a particular instance for code execution. The most common example is Development, Test and Production environments, but there are several other possibilities. - Logical Schema - It is an alias to a "logical structure", I mean, when code is developed in a Development environment (a single interface in ODI, for example) it is expected (and necessary) that any database table and column used by it must exist in any new environment where this code could be deployed because, if not, a database error will be raised. The Logical Schema is the final dot to understand the flow. The idea behind its existence is to allow the same code to be used in any environment, since it is an alias. But this alias, the Logical Schema, cannot work alone, since it only represents the logical structure, not the connection itself, I mean, user, IP, etc… For that, the Physical Schema exists. It will complete the Logical Schema with the physical characteristics of the connection. Because of that, one Logical Schema is linked to one, and just one, Physical Schema.
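The resolution rule just described, where a (logical schema, context) pair resolves to exactly one physical schema, can be pictured as a simple lookup table. This is a toy model of my own, not ODI code; all names are invented:

```python
# Each (logical schema, context) pair maps to exactly one physical
# schema: the connection details the agent will actually use.
TOPOLOGY = {
    ("ORACLE_SALES", "Development"): {"host": "dev-db", "user": "odi_dev", "schema": "SALES_DEV"},
    ("ORACLE_SALES", "Production"): {"host": "prd-db", "user": "odi_prd", "schema": "SALES"},
}


def resolve(logical_schema, context):
    """Return the physical connection for a logical schema in a context."""
    try:
        return TOPOLOGY[(logical_schema, context)]
    except KeyError:
        raise LookupError(
            f"No physical schema mapped for {logical_schema!r} in context {context!r}"
        )


# The same logical name lands on different hardware per context:
print(resolve("ORACLE_SALES", "Development")["host"])  # dev-db
print(resolve("ORACLE_SALES", "Production")["host"])   # prd-db
```

The scenario only ever stores the logical name, which is why a password or IP change never forces a scenario regeneration: only the table on the right-hand side changes.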
But why have an alias to the user, IP, password??? Because then there is no need to include these physical characteristics in the scenario ("compiled code"), meaning that if, for instance, a password is changed there is no need for a scenario regeneration!!! Well, after understanding the link between Physical and Logical Schema, how does the Context fit into this equation? It determines, at execution time, on which hardware the Logical Schema that points to a Physical Schema will be executed. Hardware here can be understood as Development, Test or Production. If you take a look at the users and schemas used in the figure you will see differences between the environments. I will explain those differences in another post, about "Connection Architecture". Friends, I hope this helps in the understanding of all these concepts! See you soon. Cezar Santos October 11, 2016 at 1:13 PM Thanks a lot for the Information!! I have a doubt that how we can connect two different technology's like Oracle,Teradata in Physical Schema to Logical Schema that we can only have one logical Schema. How Can I map these two Physical Schema's to one Logical Schema?
Yes, but it is mandatory to specify a different user to log in. C. It is possible,but it is better to avoid doing so because it is more difficult to define a logical schema this way. D. No, ODI Topology does not allow defining more than one physical schema for a data server because the associated logical schema would be ambiguous. May 26, 2014 at 1:57 PM Between the options that you sent, it’s A Where this come from? May 14, 2014 at 7:04 PM Hi Cezar, Can I hardcode a context in a procedure and then use that code to override the context passed at runtime? Thanks, Anmol Kaushik May 16, 2014 at 1:11 AM Hi, Yes, it’s possible if you generate a scenario, when the scenario is called you can call it with distinct contexts… Make sense? March 28, 2014 at 9:56 AM Hi, when I execute a scenario in test environment, I am getting the error, unable to find logical schema in master repository. but the issue does not occur in development. I read from other posts to check for the existence of logical schema and I have done it. it exists. And also, to change the technology to undefined in the command on source tab in proceudres. but I have only interfaces and there is only one proc which is used across all the scenarios. so I don’t want to touch that. can anybody help how to resolve this issue? March 28, 2014 at 3:04 PM Hi Raji, Did you associate the Physical Schema with Logical Schema to the Test environment context? 
September 14, 2010 at 12:02 AM hi i want to run a pkg in more than 1 context.i hav used a procedure with following steps 1.create connection-> srcConn=odiRef.getJDBCConnection(“SRC”) stmt=srcConn.createStatement(); …………… 2.Retrieve and Store File List> import glob,os mydir=”; mypattern=”; filepattern=’%s%s%s%s’ % (mydir,os.sep,os.sep,mypattern); mydirlist=glob.glob(filepattern); mysession=”; myfilestable=’F%s’ % mysession; mystatus=’ARRIVED’; stringtoprint= ‘mydir’ + mydir + ‘ patten:’ + mypattern + ‘filepattern :’ + filepattern for file in mydirlist: print os.path.basename(file); mystmt=”insert into FILESLIST values(‘%s’,’%s’,’%s’,’%s’)” % (mysession,os.path.basename(file),mydir,mystatus); stmt.execute(mystmt); stmt.execute(“commit”); stmt.close(); …………………… 3.FileDispatcher-> (Target) OdiStartScen -SCEN_NAME= -CONTEXT= -SCEN_VERSION=001 -“ORACLE.FileToBeProcessed=#FILENAME” (Source) Select FILENAME from fileslist where status=’ARRIVED’ ……….. 4.close connection-:> srcConn.close(); ITS GIVING ERROR IN 3RD STEP..scenerio did not end properly.. plz suggest. September 14, 2010 at 4:51 AM although it will be difficult for me to suggest the correct error. Please check for this. Have you make the command on Target with Sunopsis API . Next can you copy the generated scenario code and call it from the oracledi/bin command prompt using startscen ,replacing variable FILENAME with hardcoded value. Iam sure you will be able to figure out then what mistake you are doing in syntax and later reflecting the same in the Procedure. Also just want to add , i am not sure if you know this, if you use SCEN_VERSION= -1 , it will always fetch the most recent scenario version. October 23, 2009 at 8:36. July 2, 2014 at 7:03 PM I have the same issue, can you tell me how you fixed it. October 23, 2009 at 7:54. October 16, 2009 at 9:31 AM Good Idea Sam, I will publish a new post to complete it… Thank you! October 13, 2009 at 5:58 PM This post is really informative!!!! 
Adding 'Agents' to this equation would have made this even more informative. October 13, 2009 at 5:32 AM Thanks a ton for the post!! I guess it would be clearer/more helpful if you could provide a set of examples (maybe moving from one environment to another environment, where the use of different contexts and physical and logical schemas comes into play)
http://odiexperts.com/context-logical-and-physical-schema-how-does-it-work/
Hi, I don't want to waste my time typing some long paragraph because you're not going to read it, so let's go directly into the topic: you can use Core Data as a framework to save, monitor, alter, and filter data within an app. In this blog you will learn how to use Core Data to save and fetch data, delete a single item, and delete all the data, and to create your own section in place (in an iOS view). Hey beginners, you may know a little about the Container in Flutter (if not: Container is a convenience widget that combines common child widgets). Let's see a little more. Inside the container, you can keep any widget you want. Widget build(BuildContext context) { return Container( color: Colors.red, ); } This is a simple example of a container. Let's make an expandable table view and play with a tableview cell without sections and with a stackView. I'm skipping the tableView setup steps and going straight into the cell content view :) Simple: just drag a stackView and put two views inside it (TopView and BottomView) contentview | -----StackView | ------ TopView | ------ BottomView Don't forget to select Axis: Vertical and Distribution: Fill in the StackView, then while creating the outlet just hide the bottomView like I did: @IBOutlet weak var bottomView: UIView! { didSet { bottomView.isHidden = true } } That's all, guys, just put the below code at didSelectRowAt: func tableView(_ tableView: UITableView… This blog will cover all the errors you may hit while making a WebView app. Get the package's updated version here. Add the package to the pubspec.yaml file: dependencies: flutter: sdk: flutter webview_flutter: ^0.3.22+1 import 'package:webview_flutter/webview_flutter.dart'; Add this line to the AndroidManifest.xml file (file path: android/app/src/main/AndroidManifest.xml): <uses-permission android: Add <key>io.flutter.embedded_views_preview</key> <string>YES</string> (file path: ios/Runner/Info.plist) return Scaffold( appBar: AppBar( title: const Text('Flutter WebView example'), ), body: const WebView( initialUrl: '', javascriptMode: JavascriptMode.unrestricted, ), ); JavascriptMode is restricted by default.
Unrestrict it by adding javascriptMode: JavascriptMode.unrestricted, What is a Swipe Detector? Just something to detect swipes on the screen (left, right, up and down), and it is very simple in Flutter when we add the swipedetector package. Add the package to the pubspec.yaml file: dependencies: flutter: sdk: flutter swipedetector: 1.2.0 import 'package:swipedetector/swipedetector.dart'; body: SwipeDetector( onSwipeRight: () { setState(() { print("Swiped right"); }); }, ) Just declare SwipeDetector in the body or as a widget. That's it: we can use onSwipeRight, onSwipeLeft, onSwipeUp or onSwipeDown, or all at the same time, to detect the swipe you want. SwipeDetector( child: ... //Your widget tree here… Basically, what is a Shared Preference? Just a way to save a user's small-sized data in the app, like some kinds of settings and some data (not too much; if it is much, you need to go with some database). In Flutter, shared preferences are stored in XML format on Android, and the package supports both iOS and Android. Add the package to the pubspec.yaml file: dependencies: flutter: sdk: flutter shared_preferences: "<get updated version>" import 'package:shared_preferences/shared_preferences.dart'; Yes… Let's have some fun by adding multicolor for text. This blog is going to cover how to add multiple colors to your UILabel, UITextView, and UIButton, and I created some extensions for it: CodeLink. A string that has associated attributes (such as visual style, hyperlinks, or accessibility data) for portions of its text: that is why we go with NSAttributedString. So this is the code for making your text multicolor. So this blog will be about setting up OSRM (Open Source Routing Machine), with a demo video and an explanation of the HTTP requests. OSRM is the Open Source Routing Machine. In simple words, you can create your own map application, like Google Maps, using OSRM (FOR FREE!!)
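The swipedetector package described above essentially classifies a drag gesture by its horizontal and vertical displacement. The idea is language-agnostic, so here is a minimal sketch in Python rather than Dart; the threshold value is an arbitrary assumption for illustration, not the package's actual default:

```python
def classify_swipe(dx, dy, threshold=20):
    """Classify a drag by its displacement in pixels.

    Returns "left", "right", "up" or "down", or None when the movement
    is too small to count as a swipe. The dominant axis wins, with ties
    going to the horizontal direction.
    """
    if abs(dx) < threshold and abs(dy) < threshold:
        return None
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

In the Flutter package the same decision is wired to callbacks (onSwipeRight and friends) instead of a return value.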
So say bye to Google Maps and create your own map company by using OSRM. I tried the OSRM set-up without Docker and ended up with frustration; Ubuntu users, just go with… With a simple example: imagine your friend copying your work, and he finds some mistake in it. If he corrects that mistake in his copy and also in your work, it's a reference type. If he changes only his own work, then he is selfish (value type). The real meaning is: if you change the copied value and it affects the original, then it's a reference type. If it affects only the copied value, then it's a value type (the selfish one). At this time your mind should have a question: "Why the hell is the original affected after copying?" … Extensions are like magic in Swift. An example: there are two types of persons in your company, one gets a salary per minute and another per hour. So for the hours calculation you have code, but not for minutes. Insane || iOS developer intern @ivyMobility
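The copying analogy above is language-agnostic, so here is a minimal sketch in Python rather than Swift: a plain assignment of a list aliases the same underlying object (reference behaviour), while an explicit copy behaves like the "selfish" value type.

```python
import copy

# Reference behaviour: both names point at the same list object,
# so the "friend's" change shows up in the original too.
original = [1, 2, 3]
alias = original
alias.append(4)
assert original == [1, 2, 3, 4]

# Value-like behaviour: a real copy, so changes stay local ("selfish").
independent = copy.copy(original)
independent.append(5)
assert original == [1, 2, 3, 4]
assert independent == [1, 2, 3, 4, 5]
```

In Swift the split falls along type lines: classes are reference types, while structs, enums, and the standard collections are value types.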
https://tonywillson.medium.com/?source=post_internal_links---------7----------------------------
Hi! I had a question some time ago (isaacavila MK22DX128VLK5 definitions) about how I could make an MK21DA5-based project I developed on TWR-K21D50M run on MK22DX128VLK5. I changed all the features as requested; however, during startup the program is stuck in the Reset Handler code, as it always gets a watchdog interrupt just before it tries to disable the watchdog timers. I managed to generate an empty project with MK22D5.h, system_MK22D5.h, system_MK22D5.c and startup_MK22D5.S (which are all almost identical to MK21DA5) and it runs OK into main(). Could you suggest anything I am doing wrong with the MK21DA5 project so that it does not run on MK22DX128VLK5? Thank you very much, Viktor Hi, Viktor, Are you sure that the watchdog leads to an interrupt or a reset event? After the code enters the interrupt or reset, I suggest you check the IPSR register in the debugger; the low 9 bits reflect the interrupt vector number, and in this way you can know the interrupt source. If it is a Reset, you can check RCM_SRSx so that you can know what led to the Reset event. If it is exactly the watchdog that leads to the reset/interrupt, you can put the following watchdog-disabling code ahead of all other code. As you know, the watchdog initial value after Reset is 0x4C0000; it is generally enough to reach the code that sets up the watchdog. Hope it can give you a clue. BR XiangJun Rong #if (DISABLE_WDOG) /* Disable the WDOG module */ /* WDOG_UNLOCK: WDOGUNLOCK=0xC520 */ WDOG->UNLOCK = (uint16_t)0xC520u; /* Key 1 */ /* WDOG_UNLOCK: WDOGUNLOCK=0xD928 */ WDOG->UNLOCK = (uint16_t)0xD928u; /* Key 2 */ /* WDOG_STCTRLH: ??=0,DISTESTWDOG=0,BYTESEL=0,TESTSEL=0,TESTWDOG=0,??=0,STNDBYEN=1,WAITEN=1,STOPEN=1,DBGEN=0,ALLOWUPDATE=1,WINEN=0,IRQRSTEN=0,CLKSRC=1,WDOGEN=0 */ WDOG->STCTRLH = (uint16_t)0x01D2u; #endif /* (DISABLE_WDOG) */
https://community.nxp.com/thread/389674
Linux Mint Developer Forks Gnome 3 Long-Term? (Score:5, Insightful) Anyway, I'm typing this on Arch Linux 64-bit with GNOME 3.2.1 and a few (needed!) shell extensions. I find it fine and I thought I would be a GNOME 3 hater but I'm actually not. Re:Long-Term? (Score:3) Which makes perfect sense. Gtk3 and other developer-side stuff is not broken, and so long as they keep it as is, apps written for Gnome will work. The problem with Gnome 3 is the UI design of the desktop itself. Re:Long-Term? (Score:5, Informative) Unlike what the summary suggests, it's not a Gnome 3 fork but just a Gnome Shell fork. With the whole back end untouched, they should be able to keep compatibility issues to a minimum. Re: (Score:3) Absolutely. I always thought that only an idiot would use gnome-terminal and gedit when the vastly superior konsole and kate were only an apt-get or yum install away, even under gnome. Re:Long-Term? (Score:5, Insightful) You would do well to try the latest Mint. It has a hacked version of GNOME Shell 3 and still works pretty well. It's very usable, the top-right hot-corner thing works amazingly, minimize works (mostly), multiple desktops work, you also have a traditional 'start' menu, and it's much, much faster than Unity. Oh, and I can't seem to find any obvious bugs either. There are plenty of customization options too. One thing I completely hate is the custom icon for Firefox; it's really irritating, but it can be changed by a simple Cool (Score:2) Cinna-Mint, anyone? Excellent (Score:2) Re:Excellent (Score:5, Funny) My mother overheard a conversation I was having about a certain Linux distribution. After the conversation she asked "who is Debbie and why are you talking about her open sores?" Re: (Score:2) Only you can answer that question. Try 'em both. You can install both on Mint 12 and log into either one.
Just be aware that MATE is nowhere near mature yet. Keep away the UI "designers"! (Score:3, Insightful) Re: (Score:2) Firefox is at least copying an interface that hundreds of millions of people use in Chrome; maybe it's not for you, but it certainly seems to work for a lot of people. I'm using Chrome on a 24" screen right now and I can't say I miss any of the things you mention much. Mozilla is quite deluded if they think that's why I use Chrome, though. GNOME on the other hand chose to go their own way, really their own way. Which wouldn't be so bad if they didn't constantly collapse the path behind them; if you liked it Re:Keep away the UI "designers"! (Score:4, Insightful) I'd have no problem with self-proclaimed UI designers as long as they'd respect the following very basic "rules of thumb": * Every command can have a keyboard shortcut. * Issuing a command immediately provides visual feedback (always and with absolutely no visible delay; even menu items should blink). * While a command is issued or visual feedback is given, other commands can be issued without delay, provided that processing has not become very slow and the queue has not become long (the latter must be avoided at all costs by using suitable programming techniques and data structures, but of course sometimes a machine is just doing too much work). * Important commands are no more than one mouse click away, less important ones 2, or a maximum of 3. There is really no need for a UI where you need to click or open 3 different menus/views/buttons/windows to get anywhere. * All visible GUI elements such as toolbars, panels, and buttons are freely configurable, both in their content and in their spacing and placement. * All interface elements can be selected and used with the keyboard, or there are equivalent keyboard commands. * Windows and interface elements always remember their settings such as position, size, etc. * Modal dialogs are avoided as much as possible.
* Instant/live update of the results of search fields is welcome, but then it must be instant: no delay. Voila! A working GUI... at least in my opinion. Re:Keep away the UI "designers"! (Score:5, Insightful) Or perhaps you're talking about the status bar. Again, something I can't believe anyone would notice or care about. A largely blank, useless bar that was practically only good for previewing link URLs was removed from the GUI and replaced with something smarter. Again, how is this a major change? Useless to you, perhaps. But the replacement is a kludge for tiny screens that's a horrible mess on a desktop with a decently sized monitor. I find it's continually covering up things I want to click on, all for the sake of not 'wasting' a few pixels on a 1920x1080 monitor; it's annoying, it's ugly and it provides no benefit over the old status bar. Re: (Score:3, Insightful) Re: (Score:3) Maybe I can help. I actually think Gnome 3 looks nice; I think it would be pretty good on a touchscreen, but dumb, unforgivable desktop changes include: - Disabling desktop icons by default (you can only have a bare wallpaper unless you modify the registry, which average users won't do) - If you do have desktop icons enabled, you can see them at the same time as the applications selector thing. - No minimise button; instead you have to right-click, select minimise, press button. 3 steps instead o Re: (Score:3) Yep, knowing that is kind of relevant. But worse is that the text on the bar is different from the text you type, different from the text at the favorites, different from the text you get if you copy it... Not that huge a problem, but still a problem. I thought Gnome (Score:5, Funny) Awesome. (Score:5, Insightful) Re: (Score:5, Funny) "the thing feels like it's designed for tablets," YOU WILL WORSHIP TABLETS DAMMIT and YOU WILL WANT A TABLETACEOUS INTERFACE on everything which is not a tablet!!!! Yours in Unity, The GNOME Foundation.
I don't care any more (Score:5, Interesting) I have converted all of my systems to XFCE. It feels like an older, simpler and leaner Gnome to me, and some of the applets even have better functionality. Re: (Score:2) XFCE is good, but I'd prefer an environment that's cross-platform. They've gotten Linux-centric in the last few releases. Why doesn't Gnome get it? (Score:5, Insightful) The community has been beating Gnome over the head for months now. But Gnome stubbornly refuses to go back to their less FUBAR interface. What the hell is wrong with them? Oh well, at least there's forking. Re:Why doesn't Gnome get it? (Score:5, Insightful) Oh well, at least there's forking. Forking is the answer to borking. Re:You're... (Score:5, Insightful) GNOME has always been fucked up. (Score:3, Interesting) Re: (Score:3) I use GObject all the time. It's not that bad, especially if used from high-level languages. No, if you want to complain about Gnome's libraries I can give you some places to start; GObject isn't one of them. Re:GNOME has always been fucked up. (Score:4, Interesting) Re: (Score:3, Insightful) QT was amazingly good for C++; Gnome couldn't compete. But the idea was to make an *open source* desktop environment. I am sure a lot of C++ programmers would have helped an effort to make an open-source version of QT. Re:GNOME has always been fucked up. (Score:4, Informative) There was a project to make an LGPL QT clone called Harmony. It didn't attract a ton of developers. Strategically, the FSF's position (and Harmony was on board) was that the desktop needed to go first. Otherwise, Harmony would be chasing Trolltech and the free Harmony-based desktop would be years behind the proprietary QT-based desktop. The free version would be a poor-quality knock-off of the original. That is essentially the situation that GnuSTEP has always found itself in: they can't lead, they have to follow.
So yes, what you are proposing was in fact what they were doing. C is good. (Score:4, Interesting) GObject has features C++ doesn't include natively, like type introspection. Besides, what's wrong with C for a low-level API? You can connect just about any C-based API to a higher-level language. Re:GNOME has always been fucked up. (Score:5, Insightful) I can't disagree with your take on the politics. I do take issue with the technology. Very odd. After making that statement you go on to validate just about everything the GP said. You get modded up for starting an argument, but before you've written 2 paragraphs you've agreed with the other guy by just using different words. Are you guys brothers? My brothers used to fight against each other but on the same side of the argument a lot too. Seems like the arguments always ended with "Ok then". To which the other replied "Fine". Agreeing with every point here, except one... (Score:5, Insightful) :) (Sorry for sounding so angry. I don't mean to say this in an attacking way. I'm just a bit beside myself right now for completely unrelated reasons, and can't switch it off. Your post is still 95% in harmony with my... (Score:5, Informative)). When the superuser can't access all files on a system, something is wrong. Backup programs and automated root "find" commands fail because of ~loggedinuser/.gvfs which they can't access. Good job. And no, it's not all the other well-established tools that should change to accommodate gnome. It's gnome being stupid and breaking things. Re: (Score:3) Yep, that's pretty bad. Windows 7's "Libraries" are up there too: suddenly, there's a whole new filesystem-level directory-like abstraction which doesn't exist at the filesystem level, but is only a fiction created by Explorer.exe. And it can't be addressed by any kind of path. How am I supposed to point a user to their library? "Go to C:\...
oh bugger. Um, point at the... clicky thing... no the other... um.... well, I can't see your screen over the phone, so, er... good luck!" Re:Agreeing with every point here, except one... (Score:5, Interesting) Are you joking? They absolutely are a mess, and I say that as someone who uses them every day. I'm just not fooling myself into thinking they're not a mess. UNIX shells and utils have had a chaotic development history and are chock full of bad design. Most of the good utils do a lot more than one thing, and they are usually far less than excellent, just good enough to suffice if you fight them long enough to get them to do what you want. And don't get me started on the gaping abyss of existential Lovecraftian horror that is shell code, or (shudder) Perl. (And I even like Perl! But it's also an eldritch tool of the Many-Angled Ones.) The only reason the entire lot hasn't been incinerated and replaced by saner tools with better and more consistent design is that there's far too much legacy code out there which depends on the behavior of existing UNIX shells, tools, and scripting languages. Just look at Plan 9. Even though it was very much a UNIX-philosophy OS, only more so, and better designed than the original, by the same people who designed UNIX in the first place, it failed to gain any traction because it came far too late. UNIX already had unstoppable critical mass. You're falling into the trap of believing that the ideal is the practice. UNIX started out as a very hacky OS because squeezing advanced features into a PDP-7 was Hard. The subsequent 40 years of continuous and divergent development, little of it done by people primarily concerned with "do-one-thing-and-do-it-well", have left that ethos in tatters. You're also falling into the trap of believing without rational reason that one philosophy of software design is best for everything.
One-thing-and-do-it-well is a fine idea for a software environment intended to filter text through independently written programs, but it might not work so great for easy-to-learn, easy-to-use GUIs. Okay, so you're a crazy guy. This is not even on the same planet as right and wrong. Re:Agreeing with every point here, except one... (Score:4, Insightful) That said, there is no denying how "organic" some tools are. There is no consistent syntax between tools, and some tools are arcane or implement arcane default settings. I also have a love/hate relationship with bash, gawk and perl and constantly have to relearn these bastards when I need to write a script, because they're almost write-only languages and virtually unmaintainable once they grow beyond a certain size. I once had to port a 5000-line CGI Perl script which could generate 6 disparate web pages into Java. It took six months to unpick and reimplement. Re:Agreeing with every point here, except one... (Score:5, Insightful) YES. THIS. EXACTLY THIS. I really, really want a Unix desktop which actually implements the Unix philosophy, and very much want my windowspace to be exposed as a file system (or rather, as a VERY loosely / completely untyped object system. And no, sadly JSON objects don't quite cut it, which is a problem since we're baking JSON into the Web. Lua objects would probably work though; they're pretty nice, and Lua interoperates well with C.) I want the ability to, just as you say, reuse objects and components from one "application" inside another. In fact, I want to completely erase the concept of "application"; I just want a robust store of data, as a set of fine-grained untyped objects/collections, and then various views or functions over that data. And then yes, publish any part of my data/function/object hierarchy in a safe, standard way to a net-wide repository as a sort of mini-distribution, and safely import subsets of other people's stuff into mine.
See, there's a whole lot of nonsense busywork we're currently doing in the system administration space which duplicates and triplicates stuff we've almost solved in the programming space - only badly, and without interoperability. For example, what is a zipfile but an untyped object containing other files? What's a directory but an almost-but-not-quite-the-same object? What's a filesystem but, again, an almost-but-not-quite-the-same thing? What's a version control "commit" but the same thing as an RPM/DEB patch, except implemented differently? What's a "distribution" but something that ought to just be an RPM of RPMs? And what are SQL "databases" and "tables" but, again, objects containing sets of data, and why do I need multiple different incompatible formats for each one? So we have, at the OS/system level, these various different implementations of the idea of "structured object", but not really done sensibly; for one thing, there's this very archaic concept of a single shared filesystem which is very much like the old pre-1960s (FORTRAN and COBOL) programming concept of global variables. In programming languages, we moved past global variables toward structured sets of local variables when C came along; but we didn't at the filesystem level. This leads inevitably to easy corruption of a system: run one installer with root privileges, and it has access to your entire root namespace on your hard drive. Our systems shouldn't really, in 2011, be structured in such an old-fashioned way. So at the OS layer we have "files and directories as objects". Then we have a process-management layer over the top: libraries, processes, threads.
Then we reimplement the idea of "object" AGAIN (but in a non-interoperable way) as various "software component" frameworks (which of course install into a global per-system namespace, stomping all over each other to create DLL hell): COM objects, XPCOM objects (Mozilla), UNO objects (OpenOffice), CORBA objects, KParts objects, GObject objects, .NET assemblies, Java namespaces. And a "registry", which reimplements the filesystem, to handle invocation of the software componentry. Then we reimplement the idea of "object" a fifth time as non-persistent programming-language objects which only exist in a single running process space: C++ objects, PHP objects, JSON/Javascript objects, .NET objects, Java Re:Agreeing with every point here, except one... (Score:4, Insightful) We are moving away from container-based storage units to metadata-based storage, precisely because the notion that everything is a file is quite limited. And these limitations aren't even new - symbolic links are in some ways a hack that breaks that base approach - you can refer to the same object from multiple different containers, which, by itself, is a rudimentary relation mechanism. I won't even mention ACLs - you access a file, but the system actually opens (at least) 2 files in many implementations, because the "file" notion doesn't comprehend accountability or complex ownership. The big players (Apple and Microsoft) have been moving away from file-based storage for years, and on to a metadata-based storage approach. And no, afaik this isn't something you can easily slap over an existing filesystem. Also, the same concept you praise is contrary to the integration you preach - each vendor should implement the functionality they need over the archaic "file" concept, as there is no "one size fits all" when it comes to content decoding, and for the base libraries to actually be useful, they would have to be generic (think of the file API right now).
We have huge bloated frameworks because different people have different needs, and processing power is cheap - cheaper than development time. That's what having a programmable device is all about - being able to write your own bloat how you think it should be implemented, instead of eating the other person's bloat. Re: (Score:3, Interesting) What would the API for this "single persistent, networked, yet not completely shared-global" object concept look like? If nothing is typed, how - for example - does an image manipulation program know which objects represent data, which objects represent manipulations, and how to actually fit the two together? What if I come up with a new and useful way of manipulating data which doesn't fit into the existing API? If what used to be a large database in an optimised binary format is now millions of indiv Re: (Score:3) There wasn't any technical need for GNOME. Most people were quite pleased with KDE and its abilities I failed to follow you here. Most people are not all the people. If not all the people are pleased with something, then we have a valid technical need for an alternative. Re:GNOME has always been fucked up. (Score:5, Insightful) The world has moved on from the 1990s desktop. Windows 8 is replacing the Start menu with a radical new touch interface, and OS X Lion has adopted iOS features. Mobile operating systems that look and feel nothing like desktop operating systems are a huge hit. The 1990s desktop paradigm is dead in the mainstream, and Linux software developers are trying to progress computing by appealing to people outside of tech forums. That means reducing the insane amount of configurability and feature-itis that often ails Linux desktop software. You may consider it abhorrent, but you are a minority. The rest of the world uses computers simply as a tool, not as a hobby. Re:GNOME has always been fucked up. (Score:4, Insightful) PLEASE MOD PARENT UP. Abusive moderation detected.
I do not agree with what it's saying, but the opinion is valid and the guy didn't cause any offense! Re:GNOME has always been fucked up. (Score:5, Insightful) I use my computer as a tool. And as such it must be efficient. If you make a hammer from soft rubber, justifying it by saying "common users might get hurt when using metal hammers", then you are making a goddamn toy, not a tool. And that is exactly what the Gnome crew is doing. I get that they are trying to make an impressive interface for tablet PCs, but why should regular PC users suffer? Would it hurt them to go play in a "Gnome-Tablet" version of the interface without crippling the desktop version? Those that do not need customizability and would be happy with all the bling could play with the new interface all they want, and those that do any actual work could continue to do so without switching DEs. Re:GNOME has always been fucked up. (Score:4, Interesting) Re:GNOME has always been fucked up. (Score:5, Insightful) Yes and no. It's the "Silicon Valley effect". That is, some guys change stuff in the UI and declare it's where the innovation is. It's not that it's good. It's just different, and backed with strong advertising (Apple, etc). Then others feel they *have* to follow a similar path to survive and "have something new to announce". Otherwise you're not newsworthy, etc, etc. I also call it being blind :P It's fine to copy concepts, but copy the ones that are actually good. Re:GNOME has always been fucked up. (Score:5, Insightful) The rest of the world uses computers simply as a tool, not as a hobby. The rest of the world uses Windows or OS X. I find it odd that FOSS advocates are willing to reject much of what they find useful in desktop environments to make them more appealing to people who will likely never use them... Re:GNOME has always been fucked up. (Score:4, Informative) Selecting a single coherent user interface experience as the default makes a lot of sense.
Blocking users from changing the settings makes no sense, especially for an open source project. Not wrong ... RIGHT (Score:2) Re: (Score:3) Linux as a whole (kernels, UIs...) has turned into a developers' dick-size contest. Everybody wags their own; nobody debugs/documents/supports appropriately for end users. Re: (Score:5, Insightful) what the hell went wrong? My theory is that everyone who is in any way involved in UI development now thinks they're the next Steve Jobs and that they are justified in imposing their brilliant and unparalleled vision on everyone. Re: (Score:3) A year or two ago everybody was happy with Gnome Clearly, not everyone was as happy as you thought. Otherwise there wouldn't be so many people working on so many alternatives. and now another kid on the block... what the hell went wrong? Not a damn thing. You can use Gnome 2.x until MATE is working well enough to replace it. It's really the same thing. I for one don't understand why people get all emotionally attached to their old UI. I've used fvwm, twm, WindowMaker, Enlightenment, KDE, Gnome 1, Gnome 2, XFCE, Unity, and Gnome Shell (with extensions). Honestly I think these things just keep improving over time. But Re:You're... (Score:5, Insightful) I for one don't understand why people get all emotionally attached to their old UI. Muscle memory. Once you've gotten used to using a specific UI for years on end, the commands are basically hard-coded into your body. Changing this takes a lot of time and effort and you will often find yourself automatically doing things the old way. Re:You're... (Score:5, Interesting) Re: (Score:2) Actually I was more referring to the rip-offs of Spotlight/Windows Search, Launchpad, etc., but good point. A new Windows feature is no excuse for Linux being broken.
Haven't tried it recently, but when the composited desktop was first implemented, it broke openG Re: (Score:2) Linux/Unix desktop environments at the moment appear to be all about the colour of the bicycle shed [wikipedia.org], rather than things that ACTUALLY matter to end users / developers, such as a stable ABI. A stable ABI is the reason why Windows is such a crock of crap and Microsoft can't fix poor decisions made twenty years ago. It's also pointless when most software people run on Linux is open source. Willingness to break backward compatibility in order to improve features or fix poor design choices is one of Linux's strengths, not a weakness. Re: (Score:3) I'm not sure if you've used Windows recently, but it's actually quite a way away from being a crock of crap. Resource intensive? Yes. RAM is cheap. All my hardware works properly, virtually all of my apps work properly, and I'm not having to go track down old versions of library X to recompile, simply to find myself mired in dependency hell. Don't get me wrong: Windows is no shining example of desktop design. But in terms of getting shit done with a minimum of fucking around fixing broken shit - we're Re: (Score:3) Must... resist... must... resist.... AAGGHH!! REMUNERATION Re: (Score:2) There's a number of users making it do what they want [linuxmint.com] but they're running up against nonsense, like having to edit files in a specific order. And we care about what you like because ..? (Score:2) You're free to carry on using whatever you like, but the rest of us want a usable desktop. Re:And I care because ..? (Score:5, Insightful) Nobody cares that you don't care. Get over yourself. Seriously. Re: (Score:2) Good for you (sincerely). But for the VAST majority this is wonderful news. In the end, we can both be happy.
Re: (Score:3) When did last time GNOME get new amazing functionality? Really? Somewhere in the last year or so it got one of those Windows-style 'Program Load of Bollocks is not responding, do you really want to shut down?' dialog boxes that made me want to uninstall it almost overnight. One of the things I've always liked about Linux is that I could tell it to shut down and walk away, knowing that when I came back in two weeks it would actually have shut down, unlike Windows where it would be sitting there at some stupid dialog box waiting for a response it would never get. And then Re: (Score:2) Nope; wrong. Gnome2 had as many panels as you wanted. Some distros defaulted to one; some defaulted to two. Re: (Score:2) Re: (Score:3) The Mint developers have removed the engines from their cars and attached teams of mules. The next release to be known as Borax. yeah, and the Gnome designers designed a car that has only one stick, nothing else, no weel, no pedals, no buttons no nothing, but a single stick ... then you to take it to the freeway, and everything is fine until you get in an intersection and it starts to rain, and you need to steer, change gears and start the windshield wipers in the same time ...
http://tech.slashdot.org/story/11/12/21/2223251/linux-mint-developer-forks-gnome-3
> -----Original Message----- > From: Xavier Hanin [mailto:xavier.hanin@gmail.com] > Sent: Friday, January 18, 2008 5:09 PM > To: ivy-user@ant.apache.org > Subject: Re: Ivy triggers and scope of properties > <target name="test"> > <antcall target="foo"> > <param name="dep.module" value="module1"/> > </antcall> > <antcall target="foo"> > <param name="dep.module" value="module2"/> > </antcall> > </target> > > If this works, then it's Ivy that doesn't do what expected. > If it doesn't, then Ivy has nothing to do with your problem, > and you should look for another solution, like using a script > or a custom Ant task (in Java) and a true variable (which > would BTW make things much more readable IMHO). I'm not sure You know that <antcall> is making a new ant project, which means a new namespace in particular. Any changes to properties done by targets called with <antcall> are not visible in the project from which <antcall> is called. Mayby You should check <runtarget>, <antcallback> and <antfetch> from ant-contib? -- Wszebor
http://mail-archives.apache.org/mod_mbox/ant-ivy-user/200801.mbox/%3C000301c85cf0$50b3db90$eb1a840a@pckurek%3E
Sorry, I didn't see "Next Page"; I'll continue reading. I've read it, but I don't know how this helps me, nor how it can help me. :/ I really REALLY am confused by what you're trying to say to me. As I stated, I am VERY VERY new to Java. I really don't understand the problem, and I don't understand what you're trying to show or tell... Is this the correct thing you're looking for? Here it is; hope you can help! Sorry, I really am a newbie in Java. Like this? If not, please correct it for me! So how can I fix this? Sorry, I don't understand; could you make it clear what the problem is? My compiler: stickemu is the folder they are in, and then Tools, Game, Lobby are folders again. Thank you. I also just tried removing the following, but now I get the following errors in the compiler:

    Main.java:15: error: package stickemu.Tools does not exist
    import stickemu.Tools.*;
    ^...

Sorry if I'm bothering anyone; I'm very new to Java and I need some help with the code in my Java file.

    class, interface, or enum expected

On these lines: package stickemu.Tools; package...
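For what it's worth, the two compiler errors quoted above usually point at the same rule: a .java file may contain at most one `package` declaration, it must be the first statement in the file, and the file must live in a folder path matching the package. A minimal sketch follows; the class and method names here are made up, and only the package name comes from the post:

```java
// Hypothetical file; it must live at stickemu/Tools/Helper.java under the
// source root, because the package name and the folder path have to match.
package stickemu.Tools; // exactly one package statement, first in the file

public class Helper {
    public static String greet() {
        return "hello from stickemu.Tools";
    }

    public static void main(String[] args) {
        System.out.println(greet());
    }
}
```

A second `package` line anywhere later in a file produces exactly the "class, interface, or enum expected" error, and compiling from a directory where `stickemu/Tools` cannot be found produces the "package stickemu.Tools does not exist" error on the import.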
http://www.javaprogrammingforums.com/search.php?s=8a203cb10184b5ba07b3ab3941c1343d&searchid=1929447
pythonscript: saying that encoding is not defined. I have the following line in my python script:

    #coding=utf8

The script runs, but there is a message at the console:

    SyntaxError: Non-ASCII character '\xe0' in file C:\Users\ilLUSIon\AppData\Roaming\Notepad++\plugins\Config\PythonScript\scripts\Repl_nuqta.py on line 4, but no encoding declared; see for details

This script was running well in w8.1 64-bit on 32-bit npp. Recently I switched to w10 64-bit and installed 64-bit npp, and now it is giving this error. Please resolve. Thanks.

Edit: I had installed Python Script from within the npp plugin manager, so I guess that the 1.3.0 version is 64-bit, matching the npp bitness.

Which version of Notepad++ are you using? Not just 32-bit vs 64-bit, but the version number. My examples below are in v7.5.8 (with 32-bit or 64-bit indicated). Assuming that the unicode character (☺ in my example) is inside a string, you may need to use unicode string notation (u'☺') rather than normal string notation ('☺'). PythonScript currently uses Python 2.7. I don't know enough about the intricacies of Python 2.7 to know whether #coding=utf8 was always sufficient; to my minimal understanding, all unicodish strings in 2.7 (and thus in PythonScript) should use u'' notation. I didn't know whether the PythonScript python2.7.dll was enabled with that option or not.

Ok, I was wrong. Some experimenting (though this is 32-bit NPP 7.5.8 with PythonScript 1.3.0.0):

    from Npp import *

    def forum_post16899_FunctionName():
        console.show()
        console.clear()
        console.write(u'SMILE: ☺\n')

    if __name__ == '__main__':
        forum_post16899_FunctionName()

will give me the error.

    # encoding=utf-8
    from Npp import *

    def forum_post16899_FunctionName():
        console.show()
        console.clear()
        console.write(u'SMILE: ☺\n')

    if __name__ == '__main__':
        forum_post16899_FunctionName()

does not give me the error.
So the u'SMILE: ☺\n' notation is not sufficient, and the # coding=utf-8 does work. Let's see if I can get PythonScript working in my 7.5.8 64-bit portable. Yes, those two scripts have the same behavior in both 32-bit and 64-bit NPP v7.5.8.

Some more experiments, since PEP 263 doesn't show any utf-8 examples:

    # encoding=utf-8   # worked
    # encoding=utf8    # no hyphen: worked
    #encoding=utf8     # no space before `encoding`: worked
    # encoding= utf8   # space after equal, not before: worked
    # encoding = utf8  # space before and after equal: gave your error message
    # encoding =utf8   # space before equal, but not after: gave your error message

So it appears you cannot have a space between "encoding" and the equals sign.

Is the #coding=utf8 line that you showed an exact quote, or was it modified by the forum?

Rendering help below
-----
You can get it to render exactly in this forum by surrounding it with the ` mark, like `#coding=utf8`, or by putting it on a line by itself, prefixed with four spaces, so that #coding=utf8 becomes

    #coding=utf8

or you can use ```z on a line before, and ``` on a line after (with blank lines surrounding), like:

```z
#coding=utf8
```

to render like

    #coding=utf8

This help-with-markdown post will give more details on how to mark up for this forum to successfully communicate.

The thing that strikes me in this thread is that sometimes "encoding" is used and sometimes "coding"… are these supposed to be interchangeable?

Whoops. PEP 263 said coding, but I had encoding.

    # coding=utf8   # worked
    # coding= utf8  # worked
    # coding =utf8  # error
    # coding = utf8 # error

But I get the same results for coding or encoding: the space between the "g" and the "=" is the critical part. Ah… it needs to match this regex:

    ^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)

Since "encoding" and "coding" both will match that, both are acceptable.

Ah, yeah, I hadn't read down far enough to notice the regular expression. That was nice of them to include.
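That regex can also be exercised directly; here is a short Python sketch (the helper name is invented) that checks the same candidate lines as the experiments above:

```python
# Sketch: the PEP 263 magic-comment pattern quoted above, applied to
# candidate first lines from the experiments in this thread.
import re

PEP263 = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)')

def declared_encoding(line):
    """Return the declared source encoding, or None if the line doesn't match."""
    m = PEP263.match(line)
    return m.group(1) if m else None

print(declared_encoding('# coding=utf-8'))    # utf-8
print(declared_encoding('# encoding: utf8'))  # utf8 ('encoding' ends in 'coding')
print(declared_encoding('# coding = utf8'))   # None: space before '=' breaks it
```

The lazy `.*?` explains why "encoding" also works: the regex only demands that "coding" appear somewhere after the `#`, immediately followed by `:` or `=`.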
:-) So, now it just remains for @V-S-Rawat to confirm whether his line actually matches that regex, and/or paste it in the forum without the forum mangling it. :-)

"Which version of Notepad++ are you using?" I am on npp 7.6.2 64-bit on w10 64-bit and w8.1 64-bit (multiboot; I work in both OSes with the same npp). Thanks.

This is my entire script:

    editor.beginUndoAction()
    #coding=utf8
    #This replaces separate-nuqta + letter with nuqta-containing letters
    editor.replace(u"क़", u"क़")
    editor.replace(u"ख़", u"ख़")
    editor.replace(u"ग़", u"ग़")
    editor.replace(u"ज़", u"ज़")
    editor.replace(u"ड़", u"ड़")
    editor.replace(u"ढ़", u"ढ़")
    editor.replace(u"फ़", u"फ़")
    #removes ZERO WIDTH SPACE
    editor.replace(unichr(8203),"")
    #removes ZERO WIDTH NON JOINER
    editor.replace(unichr(8204),"")
    #removes ZERO WIDTH JOINER
    editor.replace(unichr(8205),"")
    #trim leading trailing space
    notepad.menuCommand(42043)
    #remove empty lines (containing blank characters)
    notepad.menuCommand(42056)
    editor.endUndoAction()

"you may need to use unicode strings notation (u'☺')" I am using double quotes, not single, like editor.replace(u"क़", u"क़"). This script had been working ever since npp 7.6.1 32-bit, I guess; then I switched to npp 7.6.2 64-bit and noticed that these unicode char replacements have stopped working. Thanks.

It doesn't even give a notice in the npp window while running. It only shows on the console, which is not always open. So I had processed several files wrongly while this was not working, and then I noticed it in one file and checked the message on the console to find out. If a script has some problem or error, is there any method to stop the script right there, instead of it going ahead and skipping processing of the incorrect part, without the user getting to know? Thanks.

#coding=utf8 is not working. coding=utf8 is not working. #encoding=utf8 is not working. encoding=utf8 is not working. You had mentioned (u'☺') with single quotes, so I tried that also, but single quotes as well as double quotes are not working. Thanks.
Searching on the net, I found another suggestion, so I used # -*- coding: utf-8 -*- but this is also not working. Thanks.

- Alan Kilborn last edited by Alan Kilborn

@V-S-Rawat said: "If script has some problem or error, is there any method to stop script right there, instead of it going ahead and skipping processing of the incorrect part, without the user getting to know?"

If there is a Python-level error, scripts stop dead and the reason is reported in the PythonScript console. Of course, to see it you have to have the console opened. It would be nice, and I think it has been requested in the past, if in such a case the console would be opened (if not open) or made visible to the user (if not the active tab on a multi-tabbed docked window) when such a thing occurs.

BTW, please put your code in proper markdown form. Those huge lines were really jarring. Or should I say jarring?

- PeterJones last edited by

Please, if you want our help, format your posts so that code looks like code, rather than getting interpreted by the forum. As we've already pointed out, the help can be found by clicking the ? in the COMPOSE window, or by following the link I posted above, which gives an excellent summary of how to use markdown in the forum.

"I am using double quotes, not single" Python does not distinguish between single and double quotes, unlike some other languages, so that's irrelevant (as you discovered later).

"this script was working ever since in npp 7.6.1 32-bit I guess, then I switched to npp 7.6.2 64 bit and I noticed that this unicode chars replacements have stopped working."

Ah, this is useful information: you not only changed between 32-bit and 64-bit, you also changed version. This is likely the culprit. I will have to find time to download portable editions of those, install PythonScript, and see if I can reproduce your problem.
In the meantime, your example script (even if the source wasn't clobbered by the forum) is way longer than it needs to be in order to debug the problem. The issue at hand is only trying to set the encoding, so we just need a minimal script that shows the issue.

In 7.5.8 32-bit, this exact text (copy/paste from the box into a new PythonScript file, then run the script) will run successfully:

    # encoding=utf-8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

where the output is

    SMILE: ☺
    क़

(That was an even-more-simplified version of the script I had shown earlier, which removes the namespace-protecting function names, but this time I used your double-quoted u-string.)

And this version of the script (the exact text shown) will give the error:

    # encoding =utf-8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

where the error I see is

    File "C:\Users\peter.jones\AppData\Roaming\Notepad++\plugins\Config\PythonScript\scripts\NppForumPythonScripts\16899-encoding-sscce.py", line 5
    SyntaxError: Non-ASCII character '\xe2' in file C:\Users\peter.jones\AppData\Roaming\Notepad++\plugins\Config\PythonScript\scripts\NppForumPythonScripts\16899-encoding-sscce.py on line 5, but no encoding declared; see for details

Please try those two exact scripts in your installation(s) of Notepad++ and describe your results.

I downloaded portable editions of 7.6.1-32, 7.6.1-64, 7.6.2-32, and 7.6.2-64. I manually installed PythonScript 1.3.0.0 into all four portable installations. I ran the two scripts I just showed in all four instances. In all four, the version with # encoding=utf-8 worked, and the version with # encoding =utf-8 failed.
In 7.6.2-64, I then edited the two scripts to use coding instead of encoding:

    # coding =utf-8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

This version failed with the same error. And the "correct" version:

    # coding=utf-8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

passed, as it did with encoding.

Using the encoding lines that are coming through your forum markdown badly formatted, I cannot reproduce your problem. The only ways I can reproduce your error message are to put a space between coding and =, or to not have the encoding line at all. It is not a problem with PythonScript 1.3.0.0. It is not a problem with my portable versions of 7.6.2 for either 32-bit or 64-bit. Either the text you are quoting is getting mangled, in which case you will have to correctly use markdown to avoid it getting mangled, or you are doing something else wrong, or there is something else unique about your setup that I cannot reproduce in my portable setup.

----
Complete ? > Debug Info for 7.6.2 64-bit:

    Notepad++ v7.6.2 (64-bit)
    Build time : Jan 1 2019 - 00:02:38
    Path : C:\usr\local\apps\npp64.7.6.2\notepad++.exe
    Admin mode : OFF
    Local Conf mode : ON
    OS : Windows 10 (64-bit)
    Plugins : DSpellCheck.dll mimeTools.dll NppConverter.dll PythonScript.dll

I see that you were also trying without the hyphen in utf8 and with a colon instead of an equals sign:

    # coding:utf8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

That one with # coding:utf8 (no space before the colon) passed.

    # coding :utf8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

The one with # coding :utf8 (with the space before the colon) failed.
Try again with no space between # and coding, and a space after the colon:

    #coding : utf8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

With space-colon-space, it fails.

    #coding: utf8
    from Npp import console
    console.show()
    console.clear()
    console.write( u'SMILE: ☺\n' )
    console.write( u"क़" )

With no-space-colon-space, it passes. I cannot get the error with the lines you say you are trying. Sorry. [These four attempts were still with 7.6.2 64-bit portable, as above.]

Your last script gave this output on the console:

    SMILE: ☺
    क़

It is correct, so it seems that encoding is not the problem. Thanks for putting in so much time and effort.

It worked! I had put

    editor.beginUndoAction()
    #coding:utf8

in my file, meaning coding was not on the first line. Now I have put coding on the first line:

    #coding:utf8
    editor.beginUndoAction()

The error stopped, and it made the required change in my text file. I can still say that the previous version was working ever since, but stopped working after I switched to 64-bit and the new version. Maybe that changed some Python version or something that had been causing the error. Thanks a lot for guiding me step by step to the solution.

- PeterJones last edited by

I'm glad you found the problem. Per PEP 263, "To define a source code encoding, a magic comment must be placed into the source files either as first or second line" (emphasis added). That has been true since Python 2.3 in 2001, so it wasn't a recent change in the Python library. (Besides, since I started using PythonScript a few years ago, they haven't changed from Python 2.7.) I am not sure how it ever would have worked on the third line for you. But the important thing is that you now know it needs to go on the first or second line of your file.
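The first-or-second-line rule quoted from PEP 263 can be checked without Notepad++ at all. Here is a small sketch using the standard library's tokenize module, which applies the same cookie lookup as the interpreter; the encoding name cp1252 is an arbitrary example chosen because it is distinct from the utf-8 default:

```python
# Sketch: PEP 263 only honors a coding comment on line 1 or 2 of a file.
# tokenize.detect_encoding implements the same rule the compiler uses.
import io
import tokenize

on_line_two   = b"# a comment\n# coding: cp1252\nx = 1\n"
on_line_three = b"# one\n# two\n# coding: cp1252\nx = 1\n"

enc2, _ = tokenize.detect_encoding(io.BytesIO(on_line_two).readline)
enc3, _ = tokenize.detect_encoding(io.BytesIO(on_line_three).readline)

print(enc2)  # cp1252  (declaration on line 2 is honored)
print(enc3)  # utf-8   (declaration on line 3 is ignored; the default wins)
```

This is exactly the situation in the thread: the `#coding:utf8` comment sitting on line 2 after `editor.beginUndoAction()` would still have worked, but anywhere later it is treated as an ordinary comment.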
@V-S-Rawat said: "This is my entire script editor.beginUndoAction() #coding=utf8 …"

The real problem likely could have been found in about 3 seconds if you had ever learned how to present code via correct markdown on this forum. Scrolling back quickly through all of the postings shows that only Peter's code replies use the black-box markdown.
https://community.notepad-plus-plus.org/topic/16899/pythonscript-saying-that-encoding-is-not-defined/?lang=en-US&page=1
Excellent news, everyone! We’ve just released Rider 2018.3 with lots of features you’ve been waiting for and all kinds of improvements to make coding easier still. Let’s dive into this ocean of improvements and features:

- Code Vision: try this new way of seeing useful information about your code right in the code editor. Usages, version control info, derived types, extension methods, and exposing APIs are now in plain sight, with no additional actions required!
- Remote Debugging via SSH: need to debug an application on a remote machine? No problem, just call the action “Attach to Remote Process…” to attach to the process. .NET Full/Core and Mono debuggers are supported.
- Rename Project refactoring: no more manual renaming of all usages of a project. Call up this refactoring and you’ll be done before you know it.
- Assembly Explorer now allows opening multiple assemblies, as well as traversing and exploring all open assemblies.
- Zero-latency typing: even for smart typing assists that perform time-consuming tasks like reformatting code, Rider’s UI doesn’t get blocked and the editor stays smooth and responsive.
- Launch Settings: support for launchSettings.json files is now available in ASP.NET Core projects.
- We’ve redesigned the Search Everywhere popup and the Plugins pages in Settings.
- Updated C# and VB.NET support includes improved C# 7 deconstruction support and language support for VB.NET 15.3 and 15.5.
- An integrated performance profiler. Note that for now, the profiling features are only available on Windows.
- Type Hierarchy tool window: explore the inheritance hierarchy of types via Navigate | Type Hierarchy.
- Web development: from improved Angular support and better auto imports in JavaScript to debugging Node.js worker threads and support for TypeScript 3.1 features, and more.
- VCS support: manage GitHub Pull Requests right in the IDE, and work with Git Submodules.
- Android Development: lots of embedded tools and features from Android Studio are available, including a WYSIWYG designer, code completion for AXML/Android Manifest files, SDK/AVD Manager, and more.
- Inline parameter name hints for C# and VB.NET are here.
- Updated NuGet support offers Find this type on nuget.org, a new quick-fix to Install missing .NET Core Adapter, and better and faster search in the NuGet window.
- New language injections: SQL and other IntelliJ IDEA language injections now cover C# string literals.
- Updated F# support; brand new F# lexers work for both IntelliJ IDEA’s frontend and ReSharper’s backend; Rename refactoring works for local symbols; and there are new grouping types in Find Usages.
- Updated Database support delivers one more NoSQL database, Cassandra, and several improvements in SQL code completion.
- Unity support updated: new inspections keep a tight watch on the performance of your Unity code; collect method and class usages from scene/prefab/asset files; Unity Explorer is now shown on Linux; and more!
- Other features: there’s a brand new engine under the hood of the expression evaluator; Ctrl+Click on an entity declaration now shows the usages; the refactorings Move to Resource, Inline Resource, Move Resource, and Rename Resource are now available; you can now ‘Build only this project without dependencies’; and, as we love saying, there’s even more!

Visit What’s new in Rider 2018.3 on the product website to find more details about these enhancements. To see the whole list of fixes in this release, check out the issue tracker page. We’d love to hear your feedback on your experience with Rider 2018.3. Speak your mind – we’re listening! Thanks!

2019 is probably the year I switch from Visual Studio to Rider. Given that ReSharper stopped getting new features and Visual Studio is catching up fast, we might have an actual choice of real IDEs in the future! I am leaning towards Microsoft since they own the stack I am using.
What I dislike about Rider is the horrible reused UI of their other tools; it really depresses me (/s). It clearly shows how Visual Studio has spent millions of dollars on UX while Rider doesn’t. VS is super clean, way less clunky, and with the help of dainty.site/vs, it looks fabulous. Rider, on the other hand, feels like using Eclipse, which has to be the worst IDE ever.

Weird. Rider’s/IntelliJ’s overall aesthetic (navigation/context/UI) is one of the main reasons it was so easy to drop VS. VS for me is a nightmare for navigation; it’s just a classic Microsoft horrorscape of contextless garbage.

What? Visual Studio is clean? It is really messy and over-bloated, while Rider looks more lightweight.

What?? Do you mean VS Code?? Visual Studio is a clunky legacy third-rate IDE that was great in the C++ days, sucked in the .NET Framework era when we had zero choices, and now that .NET Core is in full flight, is irrelevant and probably actively harmful for the people still using it. Visual Studio users are bringing a pea-shooter to a fight where everyone else has tactical nukes. It’s so bad that MS saw fit to green-light VS Code.

Reading the full release notes, this sounds like a HUGE Rider update; can’t wait to test.

Sometimes, code completion seems to be broken. For example, when typing:

    Console.WriteLine($"{ben}");

the cursor is at "ben|" and the IDE suggests completing "ben" to "benchmark". After hitting the enter key, the whole line is changed to:

    Console.WriteLine($"{}")chmark;

Restarting the IDE helps. My version is: JetBrains Rider 2018.3 Build #RD-183.5047.86, built on December 17, 2018 … JRE: 1.8.0_152-release-1343-b26 amd64 JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o Linux 4.4.0-98-generic

Sigh, still no love for cg/shader completion/navigation in Unity? 2 years, c’mon.

Hi Brad! Any chance you can make a request in our issue tracker for Unity support?

Unity is just an IoC container. Did you mean Unity3D?
Unity3D is now called Unity.

Yesterday I downloaded and set up Rider 2018.3 and tried to use it on my Unity project. When I comment out a few lines of code, the file is saved, which triggers Unity to refresh the Assets. I cannot find the setting for this behavior… Is that a new feature in 2018.3? JetBrains Rider 2018.3 Build #RD-183.5047.86, built on December 18, 2018.

If you want to – for whatever reason – turn it off, navigate to: Preferences/Languages & Frameworks/Unity Engine. You’ll find the option under General.

Indexing got really SLOW for Unity projects in Rider 2018.3. We have a project with 90 GB of assets, and starting with this version Rider indexes ALL files. Also, Find References gets stuck from time to time looking into all the files… Had no issues like this with 2018.2.

Hello! Please try disabling the checkbox “Preferences | Languages & Frameworks | Unity Engine | [ ] Parse text based assets files for implicit script usages” and check if the problem persists.

Hi! I have the same problem, I think. Unity is saying “editor is compiling scripts”, and it takes a long time before I can deploy. Otherwise I’m loving it.

In contrast to 2018.2.3, version 2018.3 freezes for 3-4 minutes after starting a large application in debug. This was in one of the EAPs.

Today I installed Rider 2018.3 but was not able to import the (exported) settings from Rider 2018.2.3. Rider 2018.3 shows a blank import screen: no components to select, nothing. Any idea what’s going wrong here? Or is there another option to import my painstakingly configured settings? My machine is a Mac. Thanks for some support.

Hello Robert! We’re sorry about that. This bug is fixed and is going to be in the first bugfix (2018.3.1).

Find Usages in 2018.3 is significantly slower than in 2018.2. Anyone feel the same?

Hi! Could you please profile Rider as described in the following article and create a new support request ()?

Really great additions to an already amazing IDE!
I absolutely love Rider; can’t imagine coding C# without it anymore. However, it would be great if you could somehow refine your release process in the direction of continuous delivery. As a user I would like to get features faster via many small releases. For example, maybe Code Vision could have been released individually? Releasing three times a year is a bit scary from the user perspective, as each release has so many new features that there is almost certainly going to be a handful of bad bugs. That’s why I’m a little hesitant to install new releases right away.

I have loved Rider since its first release. I test every release, and I see a lot of new things and fixes in Rider 2018.3.1. Nice work!!! Even though it feels a lot heavier than 2018.2. My machine is low on resources, so I will have to stay with the previous version or Visual Studio 2015 until I upgrade the hardware. In fact… memory usage opening the same solution on both 2018.[2|3] went from about 1.2 GB to almost 2 GB. I have 8 GB of RAM and also need memory for other things… The memory readings only take into account rider64.exe and Jetbrains.ReSharper.host64.exe.

JetBrains, I love you guys, but 2018.3 is the most unstable release of Rider since the pre-GA days. It’s the first time I had to roll back to a previous release. Fortunately 2018.2 is solid. 2018.3 is feature-packed (project renaming w/ namespace adjustment is insanely great), but Rider is mature enough now that stability is everything.
https://blog.jetbrains.com/dotnet/2018/12/18/rider-2018-3-released/?replytocom=538199
JET is one of the 21 projects bundled in the Eclipse Europa release. More precisely, it belongs to the Eclipse project Model To Text (M2T), which provides ready-to-use engines that perform model-to-text transformations. You can use JET to perform the following tasks: Before you get started with your first transformation, you must install JET into your Eclipse IDE from the update manager (See Sidebar 2. Starting with JET in Your Eclipse IDE). JET Basics At the heart of JET code generation resides a template, a plain text file that drives the JET engine to transform an input model into a software artifact. The input model and output artifact do not have to match any constraint (as you will see shortly). You can use JET to generate Java source files from an XML model, as well as C (or PHP or Ruby or whatever) code from an Eclipse EMF model. The template is a mixture of static sections, which JET will reproduce unmodified in the generated output, and XML-like directives, which perform transformations on the input model. If you are familiar with Java Server Pages (JSP), PHP, ASP, or any other templating engine, these concepts should be familiar. The other important concept to grasp is JET's use of XPath. By default, JET expects models to be represented by XML structures or EMF models. Therefore, its processing directives rely on XPath selectors and functions to identify and isolate the parts of the model upon which it will act. See Sidebar 3. Essential XPath for a quick intro to the language. However, you are not limited to XML or EMF input models. Since JET is packaged and distributed as an Eclipse plugin, it offers various extension points to augment its capabilities, including the definition of additional input formats. The documentation bundled with JET includes full specifications for the available extension points. 
JET processing directives can be expressed in various forms. Much as in JSP syntax, XML tags are provided to the engine in the form of tag libraries. A number of them, covering the most common tasks, are bundled with the engine, but you can create additional custom tag libraries for your specific needs (again, note the similarity with the contribution mechanism for custom tag libraries in the JSP world).

Given the sample XML model in Listing 1, you can easily understand the JET template in Listing 2, which transforms the model into the usual HelloWorld class.

Listing 1. Sample XML Model That Describes a Phrase

    <class name="HelloClass">
        <phrase>Hello,World!</phrase>
    </class>

Listing 2. JET Template That Converts the Model into a Working Java Source File

    public class <c:get select="/class/@name"/>
    {
        public static void main(String[] args) {
            System.out.println("<c:get select="/class/phrase"/>");
        }
    }

You can identify both the static sections and the XML directives. In particular, <c:get /> prints the result of an XPath selector passed as a parameter (such as /class/@name, which isolates the name attribute of the class tag).

You are not limited to generating Java files, either. For example, the following template transforms the same model into an equivalent Ruby class:

    class <c:get select="/class/@name"/>
        def sayPhrase
            puts "<c:get select="/class/phrase"/>"
        end
    end

These examples offer a glimpse of the power behind code generation and MDD: if the model is sufficiently robust, it is easy to adapt the final product to different environments and to migrate it to a new software architecture.
http://www.devx.com/opensource/Article/34929
Re: Programming

In article <4bandr$cqu at dragonfly.wri.com> Preston Nichols <nichols at godel.math.cmu.edu> writes:

> If you want your extension of BuiltIn to be more invisible,
>
> Unprotect[BuiltIn];
> BuiltIn[args___] :=
>   If[ Head[ resultOfBuiltIn=BuiltIn[args] ] =!= BuiltIn,
>       resultOfBuiltIn,
>       MyOperation[args] ]
> Protect[BuiltIn];
>
> Modifying built-in commands is generally rather risky, but I think
> something like this should be pretty safe.

This simple overlay for BuiltIn will not work, because it traps absolutely every call to BuiltIn, even the one inside your own code where you want to evaluate BuiltIn normally using the kernel's built-in definitions:

    Head[ resultOfBuiltIn=BuiltIn[args] ] =!= BuiltIn

So your code calls itself, which calls itself, _ad infinitum_. Example:

    In[1]:= Unprotect[Integrate];

    In[2]:= Integrate[args___] :=
              If[ Head[ resultOfIntegrate=Integrate[args] ] =!= Integrate,
                  resultOfIntegrate,
                  MyOperation[args] ]

    In[3]:= Protect[Integrate];

    In[4]:= Integrate[x^3, x]
    $RecursionLimit::reclim: Recursion depth of 256 exceeded.
    Out[4]= resultOfIntegrate

    In[5]:= Integrate[f[x], x]
    $RecursionLimit::reclim: Recursion depth of 256 exceeded.
    Out[5]= resultOfIntegrate

That's why you need something to prevent your own rule from firing inside your code. The method I chose uses a flag that is reset as soon as your overlay of BuiltIn is called. It remains set for the duration of your program, and effectively announces "My overlay has already been called and is currently in progress, so don't call it again." Once your overlaid program is finished and has decided what result to return (the built-in one or MyOperation), the flag reverts to its usual value. The next time the user uses BuiltIn, your program will be called again. There are other ways to do it, but this way is one of the simpler ones.
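For readers more at home outside Mathematica, the same in-progress-flag idea can be transcribed into Python. This is only an illustrative sketch, not part of the original post; the names `overlay` and `extra` are invented:

```python
# Sketch of the "flag to prevent re-entry" technique: wrap a function so
# extra post-processing runs once per top-level call, while re-entrant
# calls fall through to the original, unmodified behavior.
import functools

def overlay(func, extra):
    state = {"in_progress": False}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if state["in_progress"]:
            # Re-entrant call from inside our own overlay: use the original
            # rules, like the flag announcing "don't call it again".
            return func(*args, **kwargs)
        state["in_progress"] = True
        try:
            result = func(*args, **kwargs)  # the "built-in" evaluation
            return extra(result)            # post-process the result once
        finally:
            state["in_progress"] = False    # flag reverts for the next use

    return wrapper
```

Rebinding a recursive function to `overlay(f, extra)` makes the point visible: the recursive inner calls re-enter the wrapper, see the flag, and skip the extra step, so `extra` fires exactly once per outer call.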
As Todd Gayley pointed out to me a while back, though, this method fails to invoke your overlay if there are nested uses of the function in the original input. For instance, the technique I gave in response to Jack Goldberg will not trap all three calls to f in this expression:

    f[ f[1, 2], f[3, 4] ]

It will trap the outermost call, then evaluate the whole thing using normal rules for f, never again calling your overlay. I just thought of a possible solution to that weakness, and the new method isn't far from the original in structure:

Possible General Technique even for Nested Calls
=======================================================
Unprotect[SomeFunction]

$SomeFunctionStatus = 0

SomeFunction[args___] :=
  MyExtraCode[ SomeFunction[args] ] /; EvenQ[$SomeFunctionStatus++]
=======================================================

As an example, let's have Module report every call to itself by printing to the screen.

In[1]:= Unprotect[Module]

Out[1]= {Module}

In[2]:= $ModuleStatus = 0

Out[2]= 0

In[3]:= Module[args___] :=
          (Print[HoldForm[Module[args]], " was called"];
           Module[args]) /; EvenQ[$ModuleStatus++]

(* A couple tests: *)

In[4]:= Module[{x, y}, x^2 - y^2]

Module[{x, y}, x^2 - y^2] was called

Out[4]= x$2^2 - y$2^2

In[5]:= Module[{x, y}, {Module[{u}, u], Module[{v}, v]} ]

Module[{x, y}, {Module[{u}, u], Module[{v}, v]}] was called

Module[{u}, u] was called

Module[{v}, v] was called

Out[5]= {u$4, v$5}

The basis for this trick is that we want to trap every call to a function, such as Module or Integrate, and do something, then call that function normally using the built-in rules that would have been used if we hadn't attached any baggage to the function. So calls to the function occur in pairs: first, our overlay program; second, the "real" call. The theory is that every other call to the function will be a "real" one, so we call our overlay if the counter is even, but otherwise let the function fall through to its normal rules. I haven't tested this since I just thought of it, so I don't know how well it works.
The theory sounds right, so maybe it will endure some more exercise. Let me know how you break it... Robby Villegas
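For readers more comfortable outside Mathematica: the flag method translates to any language where a function can rebind or shadow its own name. Below is an illustrative Python sketch (the names make_overlay, fib, and calls are invented for the example): a busy flag traps the outer call, runs some extra code, re-invokes the same name, and lets every call made while the flag is set fall through to the original definition, just like the overlay described in the post.

```python
calls = []  # record of outer calls trapped by the overlay

def make_overlay():
    busy = False  # the flag: "my overlay is currently in progress"

    def fib(n):
        nonlocal busy
        if not busy:
            # Outer call: run the extra code, then call the same name again.
            busy = True
            try:
                calls.append(n)   # stand-in for MyOperation / Print[...]
                return fib(n)     # falls through to the original rules below
            finally:
                busy = False      # flag reverts once the overlay finishes
        # "Built-in" definition: reached whenever the flag is set, so the
        # recursive calls below are NOT trapped by the overlay.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    return fib

fib = make_overlay()
result = fib(5)
# Only the outer call was trapped, not the internal recursive calls.
print(result, calls)
```

As in the post, the flag intercepts the first entry and stays out of the way for the duration of the "real" evaluation; the parity-counter variant would replace the boolean with an incrementing counter.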
http://forums.wolfram.com/mathgroup/archive/1995/Dec/msg00139.html
01 November 2011 10:11 [Source: ICIS news]

SINGAPORE (ICIS)--The January LLDPE futures contract closed at CNY9,385/tonne ($1,476/tonne) on Tuesday afternoon, down CNY15/tonne from Monday's settlement, according to data from the Dalian Commodity Exchange (DCE).

Investors retreated because of weaker crude prices, local futures brokers said.

However, earlier on Tuesday, a decline in the purchasing managers' index (PMI) for October had pushed the January LLDPE futures contract on the DCE to as high as CNY9,625/tonne. The lower PMI is seen as an indication that monetary policy could be loosened. "Investor sentiment was strengthened by the prospect of more liquidity," the broker said.

($1 = CNY6.36)
http://www.icis.com/Articles/2011/11/01/9504295/china-lldpe-futures-fall-0.16-on-weaker-crude-prices.html
In this part of the series, we're going to scrape the contents of a webpage and then process the text to display word counts.

Updates:

- 03/22/2016: Upgraded to Python version 3.5.1 as well as the latest versions of requests, BeautifulSoup, and nltk. (current)

Tools used:

- requests (2.9.1) - a library for sending HTTP requests
- BeautifulSoup (4.4.1) - a tool used for scraping and parsing documents from the web
- Natural Language Toolkit (3.2) - a natural language processing library

Navigate into the project directory to activate the virtual environment, via autoenv, and then install the requirements:

$ cd flask-by-example
$ pip install requests==2.9.1 beautifulsoup4==4.4.1 nltk==3.2
$ pip freeze > requirements.txt

Refactor the Index Route

To get started, let's get rid of the "hello world" part of the index route in our app.py file and set up the route to render a form to accept URLs. First, add a templates folder to hold our templates and add an index.html file to it:

$ mkdir templates
$ touch templates/index.html

Set up a very basic HTML page:

<!DOCTYPE html>
<html>
<head>
  <title>Wordcount</title>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <link href="//netdna.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet" media="screen">
  <style>
    .container { max-width: 1000px; }
  </style>
</head>
<body>
  <div class="container">
    <h1>Wordcount 3000</h1>
    <form role="form" method='POST' action='/'>
      <div class="form-group">
        <input type="text" name="url" class="form-control" id="url-box" placeholder="Enter URL..." style="max-width: 300px;" autofocus required>
      </div>
      <button type="submit" class="btn btn-default">Submit</button>
    </form>
    <br>
    {% for error in errors %}
      <h4>{{ error }}</h4>
    {% endfor %}
  </div>
  <script src="//code.jquery.com/jquery-2.2.1.min.js"></script>
  <script src="//netdna.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
</body>
</html>

We used Bootstrap to add a bit of style so our page isn't completely hideous.
Then we added a form with a text input box for users to enter a URL into. Additionally, we utilized a Jinja for loop to iterate through a list of errors, displaying each one.

Update app.py to serve the template:

import os
from flask import Flask, render_template
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config.from_object(os.environ['APP_SETTINGS'])
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)

from models import Result


@app.route('/', methods=['GET', 'POST'])
def index():
    return render_template('index.html')


if __name__ == '__main__':
    app.run()

Why both HTTP methods, methods=['GET', 'POST']? Well, we will eventually use that same route for both GET and POST requests - to serve the index.html page and handle form submissions, respectively.

Fire up the app to test it out:

$ python manage.py runserver

Navigate to the app and you should see the form staring back at you.

Requests

Now let's use the requests library to grab the HTML page from the submitted URL. Change your index route like so:

@app.route('/', methods=['GET', 'POST'])
def index():
    errors = []
    results = {}
    if request.method == "POST":
        # get url that the user has entered
        try:
            url = request.form['url']
            r = requests.get(url)
            print(r.text)
        except:
            errors.append(
                "Unable to get URL. Please make sure it's valid and try again."
            )
    return render_template('index.html', errors=errors, results=results)

Make sure to update the imports as well:

import os
import requests
from flask import Flask, render_template, request
from flask.ext.sqlalchemy import SQLAlchemy

- Here, we imported the requests library as well as the request object from Flask. The former is used to send external HTTP GET requests to grab the specific user-provided URL, while the latter is used to handle GET and POST requests within the Flask app.
- Next, we added variables to capture both errors and results, which are passed into the template.
Within the view itself, we checked if the request is a GET or POST:

- If POST: We grabbed the value (URL) from the form and assigned it to the url variable. Then we added an exception to handle any errors and, if necessary, appended a generic error message to the errors list. Finally, we rendered the template, including the errors list and results dictionary.
- If GET: We simply rendered the template.

Let's test this out:

$ python manage.py runserver

You should be able to type in a valid webpage and in the terminal you'll see the text of that page returned.

Text Processing

With the HTML in hand, let's now count the frequency of the words that are on the page and display them to the end user. Update your code in app.py to the following and we'll walk through what's happening:

import os
import requests
import operator
import re
import nltk
from flask import Flask, render_template, request
from flask.ext.sqlalchemy import SQLAlchemy
from stop_words import stops
from collections import Counter
from bs4 import BeautifulSoup

app = Flask(__name__)
app.config.from_object(os.environ['APP_SETTINGS'])
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
db = SQLAlchemy(app)

from models import Result


@app.route('/', methods=['GET', 'POST'])
def index():
    errors = []
    results = {}
    if request.method == "POST":
        # get url that the person has entered
        try:
            url = request.form['url']
            r = requests.get(url)
        except:
            errors.append(
                "Unable to get URL. Please make sure it's valid and try again."
            )
            return render_template('index.html', errors=errors)
        if r:
            # text processing
            raw = BeautifulSoup(r.text, 'html.parser').get_text()
            nltk.data.path.append('./nltk_data/')  # set the path
            tokens = nltk.word_tokenize(raw)
            text = nltk.Text(tokens)
            # remove punctuation, count raw words
            nonPunct = re.compile('.*[A-Za-z].*')
            raw_words = [w for w in text if nonPunct.match(w)]
            raw_word_count = Counter(raw_words)
            # stop words
            no_stop_words = [w for w in raw_words if w.lower() not in stops]
            no_stop_words_count = Counter(no_stop_words)
            # save the results
            results = sorted(
                no_stop_words_count.items(),
                key=operator.itemgetter(1),
                reverse=True
            )
            try:
                result = Result(
                    url=url,
                    result_all=raw_word_count,
                    result_no_stop_words=no_stop_words_count
                )
                db.session.add(result)
                db.session.commit()
            except:
                errors.append("Unable to add item to database.")
    return render_template('index.html', errors=errors, results=results)


if __name__ == '__main__':
    app.run()

Create a new file called stop_words.py and add the following list:

stops = [
    'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you',
    'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his',
    'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself',
    'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which',
    'who', 'whom', 'this', 'that',
    ...,
    'should', 'now', 'id', 'var', 'function', 'js', 'd', 'script',
    '\'script', 'fjs', 'document', 'r', 'b', 'g', 'e', '\'s', 'c', 'f',
    'h', 'l', 'k'
]

What's happening?

Text Processing

In our index route we used BeautifulSoup to clean the text that we got back from the URL, by removing the HTML tags, as well as nltk to:

- Tokenize the raw text (break up the text into individual words), and
- Turn the tokens into an nltk text object.

In order for nltk to work properly, you need to download the correct tokenizers. First, create a new directory - mkdir nltk_data - then run python -m nltk.downloader. When the installation window appears, update the 'Download Directory' to whatever_the_absolute_path_to_your_app_is/nltk_data/. Then click the 'Models' tab and select 'punkt' under the 'Identifier' column. Click 'Download'. Check the official documentation for more information.

Remove Punctuation, Count Raw Words

- Since we don't want punctuation counted in the final results, we created a regular expression that matched anything not in the standard alphabet.
- Then, using a list comprehension, we created a list of words without punctuation or numbers.
- Finally, we tallied the number of times each word appeared in the list using Counter.

Stop Words

Our current output contains a lot of words that we likely don't want to count - i.e., "I", "me", "the", and so forth. These are called stop words.

- With the stops list, we again used a list comprehension to create a final list of words that do not include those stop words.
- Next, we created a dictionary with the words (as keys) and their associated counts (as values).
- And finally we used the sorted method to get a sorted representation of our dictionary. Now we can use the sorted data to display the words with the highest count at the top of the list, which means that we won't have to do that sorting in our Jinja template.

For a more robust stop word list, use the NLTK stopwords corpus.

Save the Results

Finally, we used a try/except to save the results of our search and the subsequent counts to the database.

Display Results

Let's update index.html in order to display the results:

<!DOCTYPE html>
<html>
<head>
  <title>Wordcount</title>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <link href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet" media="screen">
  <style>
    .container { max-width: 1000px; }
  </style>
</head>
<body>
  <div class="container">
    <div class="row">
      <div class="col-sm-5 col-sm-offset-1">
        <h1>Wordcount 3000</h1>
        <br>
        <form role="form" method="POST" action="/">
          <div class="form-group">
            <input type="text" name="url" class="form-control" id="url-box" placeholder="Enter URL..."
                   style="max-width: 300px;">
          </div>
          <button type="submit" class="btn btn-default">Submit</button>
        </form>
        <br>
        {% for error in errors %}
          <h4>{{ error }}</h4>
        {% endfor %}
        <br>
      </div>
      <div class="col-sm-5 col-sm-offset-1">
        {% if results %}
          <h2>Frequencies</h2>
          <br>
          <div id="results">
            <table class="table table-striped" style="max-width: 300px;">
              <thead>
                <tr>
                  <th>Word</th>
                  <th>Count</th>
                </tr>
              </thead>
              {% for result in results %}
                <tr>
                  <td>{{ result[0] }}</td>
                  <td>{{ result[1] }}</td>
                </tr>
              {% endfor %}
            </table>
          </div>
        {% endif %}
      </div>
    </div>
  </div>
  <br><br>
  <script src="//code.jquery.com/jquery-1.11.0.min.js"></script>
  <script src="//netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
</body>
</html>

Here, we added an if statement to see if our results dictionary has anything in it and then added a for loop to iterate over the results and display them in a table. Run your app and you should be able to enter a URL and get back the count of the words on the page.

$ python manage.py runserver

What if we wanted to display only the first ten keywords?

results = sorted(
    no_stop_words_count.items(),
    key=operator.itemgetter(1),
    reverse=True
)[:10]

Test it out.

Summary

Okay great. Given a URL we can count the words that are on the page. If you use a site without a massive amount of words, the processing should happen fairly quickly. What happens if the site has a lot of words, though? You'll notice that a longer page takes longer to process. If you have a number of users all hitting your site at once to get word counts, and some of them are trying to count larger pages, this can become a problem. Or perhaps you decide to change the functionality so that when a user inputs a URL, we recursively scrape the entire web site and calculate word frequencies based on each individual page. With enough traffic, this will significantly slow down the site. What's the solution?
Instead of counting the words after each user makes a request, we need to use a queue to process this in the backend - which is exactly where we will start next time in Part 4.

For now, commit your code, but before you push to Heroku, you should remove all language tokenizers except for English along with the zip file. This will significantly reduce the size of the commit. Keep in mind though that if you do process a non-English site, it will only process English words.

└── nltk_data
    └── tokenizers
        └── punkt
            ├── PY3
            │   └── english.pickle
            └── english.pickle

Push it up to the staging environment only since this new text processing feature is only half finished:

$ git push stage master

Test it out on staging. Comment if you have questions. See you next time!

Free Bonus: Click here to get access to a free Flask + Python video tutorial that shows you how to build a Flask web app, step-by-step.
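The text-processing pipeline described above can also be sketched standalone, without Flask, the database, or NLTK's tokenizer. This is an illustrative condensation, not the tutorial's code: a plain regex stands in for nltk.word_tokenize and a tiny invented stop-word set stands in for the stops list, so results will differ slightly.

```python
import re
from collections import Counter

# A tiny stand-in stop-word set; the tutorial's stops list is much longer.
stops = {'the', 'and', 'is', 'a', 'of', 'to'}

def word_counts(html_text, top=10):
    # Strip tags crudely (BeautifulSoup does this properly in the tutorial).
    text = re.sub(r'<[^>]+>', ' ', html_text)
    # Tokenize: keep runs of letters, lowercased, so punctuation drops out.
    words = re.findall(r'[A-Za-z]+', text.lower())
    # Drop stop words and tally the rest.
    counts = Counter(w for w in words if w not in stops)
    # Highest count first, top N only, like the sorted(...)[:10] in the view.
    return counts.most_common(top)

print(word_counts("<p>the cat and the other cat sat</p>"))
```

Each line maps onto one stage of the route: tag removal, tokenization, punctuation filtering, stop-word filtering, counting, and sorting.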
https://realpython.com/flask-by-example-part-3-text-processing-with-requests-beautifulsoup-nltk/
Each piece of data in a C# program has a type. Several types have been introduced: int for integers, and double for numbers allowing a fractional part, approximating more general real numbers. There are many other numeric types and also non-numeric types, but we can use int and double for examples now.

Data gets stored in computer memory. Specific locations in memory are associated with the type of data stored there and with a name to refer to it. A program allocates a named storage spot for a particular type of data with a declaration statement, like:

int width;

Each declaration must specify a type for the data to be stored and give a name to refer to it. These names associated with a data storage location are called variables. The declaration statement above sets aside a location to store an int, and names the location width. Several variables of the same type can be listed together, like:

double x, y, z;

identifying three storage locations for variables of type double. To be useful, data needs to be stored in these locations. This is done with an assignment statement. For example:

width = 5;

Imagine a simple schematic diagram with a name labeling a location in memory (a box) that holds the value.

Although we are used to reading left to right, an assignment statement works right to left. The value on the right side of the equal sign is calculated and then placed in the memory location associated with the variable on the left side of the equal sign, either giving an initial value or overwriting any previous value stored there.

Variables can also be initialized as they are declared:

int width = 5;
double x = 12.5, y = 27, z = 0.5;

or initializations and plain declarations can be mixed:

int width = 5, height, area;
height = 7;

Stylistically the example above is inconsistent, but it illustrates what is possible. Technically an initialization is not an assignment. We will see some syntax that is legal in initializers, but not in assignment statements.
We could continue with a further assignment statement:

area = width * height;

Look at this in detail. The assignment statement starts by evaluating the expression on the right-hand side: width * height. When variables are used in an expression, their current values are substituted, like in evaluating an expression in math, so the value is the same as 5 * 7, which finally evaluates to 35. In the last step of the assignment statement, the value 35 is then assigned to the variable on the left, area.

Warning: You want one spot in memory prepared for each variable. This happens with declaration, not assignment: assignment just changes the value at the current location. Do not declare the same variable more than once. You will get an error. More on the fine points around that in Local Scope.

We continue introducing csharp: Remember that in csharp you can just give an expression, and csharp responds with a value. That syntax and reaction is special to csharp. In csharp you can also test regular C# statements, like declarations and assignments. The most recent versions of csharp do not require you to end a statement with a semicolon, though we tend to put semicolons after statements in our illustrations (and no semicolon for just an expression). As in a regular program, statements do not give an immediate visible response in csharp. Still, in csharp you can display a variable value easily:

csharp> int width = 5, height, area;
csharp> height = 7;
csharp> area = width * height;
csharp> area
35

In the last line, area is an expression, and csharp will give back its value, which is just the current value of the variable.

At this point you should be able to make sense of some more features of csharp. You can start with the csharp special command help;. A lot of this is still beyond us, but these parts are useful:

ShowVars (); - Shows defined local variables.
quit; - You'll never believe it - this quits the repl!
help; - This help text

We can continue the csharp session above and illustrate ShowVars():

csharp> ShowVars();
int width = 5
int height = 7
int area = 35

displaying all the variables currently known to csharp, plus their current values.

We refer to "current values". An important distinction between variables in math and variables in C# is that C# values can change. Follow this csharp sequence:

csharp> int n = 3;
csharp> n
3
csharp> n = 7;
csharp> n
7

showing we can change the value of a variable. The most recent assignment is remembered (until the next assignment....) We can imagine a schematic diagram: the box labeled n now holds 7.

We can carry this csharp session one step further, illustrating a difference between C# and math:

csharp> n = n + 1;
csharp> n
8

Clearly n = n + 1 is not a true mathematical equation: it is a C# assignment, executing with a specific sequence of steps:

1. Evaluate the right-hand side, n + 1. We last set n to 7, so the expression is the same as 7 + 1, which is 8.
2. Assign that value to n again. So n is 8, replacing the old 7.

There are many occasions in which such an operation will be useful. Assignment syntax does have two strikes against it: it looks like a math equation but is not one, and it must be read right to left. Still this usage is common to many programming languages.

Warning: Remember in an assignment that the sides of the equal sign have totally different meanings. You assign to a variable on the left side after evaluating the expression on the right. We can illustrate a likely mistake in csharp:

csharp> 3 = n;
{interactive}(1,2): error CS0131: The left-hand side of an assignment must be a variable, a property or an indexer

Students commonly try to assign left to right. At least in this case you get an error message so you see a mistake. If you mean to assign the value of x to y, and write:

x = y;

you get the opposite effect, changing x rather than y, with no error statement. Be careful! There is some weirdness in csharp because it adds special syntax for expressions which does not appear in regular programs, but it also wants to allow syntax of regular programs.
Some conflict can occur when trying to display an expression, sometimes leading to csharp giving a strange error for apparently no reason. In that case, try putting parentheses around the expression, which is always legal for an expression, but would never start a regular statement:

csharp> int width = 3;
csharp> int height = 5;
csharp> width * height
{interactive}(1,2): error CS0246: The type or namespace name 'width' could not be found. Are you missing a using directive or an assembly reference?
csharp> (width * height)
15

Expressions like 27 or 32.5 or "hello" are called literals, coming from the fact that they literally mean exactly what they say. They are distinguished from variables, whose value the compiler cannot infer directly from the name alone.

The sequence of characters used to form a variable name (and names for other C# entities later) is called an identifier. It identifies a C# variable or other entity. There are some restrictions on the character sequence that makes up an identifier: it may contain only letters, digits, and the underscore character _, and it must start with a letter. In particular, punctuation and blanks are not allowed, and an identifier cannot be one of C#'s reserved keywords. We will only discuss a small fraction of the keywords in this course, but the curious may look at the full list.

C# is case sensitive: the identifiers last, LAST, and LaSt are all different. Be sure to be consistent. The compiler can usually catch these errors, since it is the version used in the declaration that matters.

Multi-word names can be written with the words separated by underscores, like price_at_opening, or in "camel case", with each later word capitalized, like priceAtOpening. Use the choice that fits your taste (or the taste or convention of the people you are working with). We will tend to use camel case for variables inside programs, while we use underscores in program file names (since different operating systems deal with case differently).

Think what the result would be in csharp:

int x = 1;
x = x + 1;
x = x * 3;
x = x * 5;
x

Write your prediction. Then test. Can you explain it if you got it wrong?
http://books.cs.luc.edu/introcs-csharp/data/variables.html
As with all known Haskell systems, GHC implements some extensions to the language. They are all enabled by options; by default GHC understands only plain Haskell 98.

Language options

The language option flags control what variation of the language is permitted. Leaving out all of them gives you standard Haskell 98. Names containing a # character require the -XMagicHash extension (see below). The primops make extensive use of unboxed types and unboxed tuples, which we briefly summarise here.

Unboxed types

Unboxed types are a Glasgow extension. It is a convention (but only a convention) that primitive types, values, and operations have a # suffix. There are some restrictions on the use of primitive types; the main restriction is that you can't pass a primitive value to a polymorphic function or store one in a polymorphic data type.

Unboxed tuples

Unboxed tuples aren't really exported by GHC.Exts; they're available by default with -fglasgow-exts. An unboxed tuple looks like this:

(# e_1, ..., e_n #)

The language extension -XMagicHash allows "#" as a postfix modifier to identifiers. Thus, "x#" is a valid variable, and "T#" is a valid type constructor or data constructor. The hash sign does not change semantics; the -XMagicHash extension simply allows you to refer to the Int# that is now in scope.

-XMagicHash also enables some new forms of literals:

- 'x'# has type Char#
- "foo"# has type Addr#
- 3# has type Int#. In general, any Haskell 98 integer lexeme followed by a # is an Int# literal, e.g. -0x3A# as well as 32#.
- 3## has type Word#. In general, any non-negative Haskell 98 integer lexeme followed by ## is a Word#.
- 3.2# has type Float#.
- 3.2## has type Double#.

New qualified operator syntax

A new syntax for referencing qualified operators is planned to be introduced by Haskell', and is enabled in GHC with the -XNewQualifiedOperators option.
In the new syntax, the prefix form of a qualified operator is written module.(symbol) (in Haskell 98 this would be (module.symbol)), and the infix form is written `module.(symbol)` (in Haskell 98 this would be `module.symbol`). For example:

add x y = Prelude.(+) x y
subtract y = (`Prelude.(-)` y)

The new form of qualified operators is intended to regularise the syntax by eliminating odd cases like Prelude... For example, when NewQualifiedOperators is on, it is possible to write the enumerated sequence [Monday..] without spaces, whereas in Haskell 98 this would be a reference to the operator '.' from module Monday.

When -XNewQualifiedOperators is on, the old Haskell 98 syntax for qualified operators is not accepted, so this option may cause existing Haskell 98 code to break.

View patterns

View patterns are enabled by the flag -XViewPatterns. Efficiency: when the same view function is applied in multiple branches of a function definition or a case expression (e.g., in size above), GHC makes an attempt to collect these applications into a single nested case expression, so that the view function is only applied once.

n+k patterns

n+k pattern support is enabled by default. To disable it, you can use the -XNoNPlusKPatterns flag.

The recursive do-notation

The -XDoRec flag provides the necessary syntactic support. Here is a simple (albeit contrived) example:

{-# LANGUAGE DoRec #-}

justOnes = do { rec { xs <- Just (1:xs) }
              ; return (map negate xs) }

As you can guess, justOnes will evaluate to Just [-1,-1,-1,...].

The background and motivation for recursive do-notation is described in A recursive do for Haskell, by Levent Erkok and John Launchbury, Haskell Workshop 2002, pages 29-37, Pittsburgh, Pennsylvania. The theory behind monadic value recursion is explained further in Erkok's thesis Value Recursion in Monadic Computations. However, note that GHC uses a different syntax than the one described in these documents.

Details of recursive do-notation

The recursive do-notation is enabled with the flag -XDoRec or, equivalently, the LANGUAGE pragma DoRec.
It introduces the single new keyword "rec", which wraps a mutually-recursive group of monadic statements, producing a single statement. Similar to a let statement, the static and dynamic semantics of rec can be described as follows:

First, similar to let-bindings, the rec is broken into minimal recursive groups, a process known as segmentation. For example:

rec { a <- getChar      ===>      a <- getChar
    ; b <- f a c                  rec { b <- f a c
    ; c <- f b a                      ; c <- f b a }
    ; putChar c }                 putChar c

The details of segmentation are described in Section 3.2 of A recursive do for Haskell. Segmentation improves polymorphism, reduces the size of the recursive "knot", and, as the paper describes, also has a semantic effect (unless the monad satisfies the right-shrinking law).

Then each resulting rec is desugared, using a call to Control.Monad.Fix.mfix. For example, the rec group in the preceding example is desugared like this:

rec { b <- f a c      ===>      (b,c) <- mfix (\~(b,c) -> do { b <- f a c
    ; c <- f b a }                                           ; c <- f b a
                                                             ; return (b,c) })

In general, the statement rec ss is desugared to the statement

vs <- mfix (\~vs -> do { ss; return vs })

where vs is a tuple of the variables bound by ss. The original rec typechecks exactly when the above desugared version would do so. For example, this means that the variables vs are all monomorphic in the statements following the rec, because they are bound by a lambda.

The mfix function is defined in the MonadFix class, in Control.Monad.Fix, thus:

class Monad m => MonadFix m where
  mfix :: (a -> m a) -> m a

Here are some other important points in using the recursive-do notation:

- It is enabled with the flag -XDoRec, which is in turn implied by -fglasgow-exts.
- All the names bound in a single rec must be distinct (Section 3.3 of the paper).
- It supports rebindable syntax (see below).
Mdo-notation (deprecated)

GHC used to support the flag -XRecursiveDo, which enabled the keyword mdo, precisely as described in A recursive do for Haskell, but this is now deprecated. Instead of mdo { Q; e }, write do { rec Q; e }.

Historical note: The old implementation of the mdo-notation (and most of the existing documents) used the name MonadRec for the class and the corresponding library. This name is not supported by GHC.

Parallel List Comprehensions

Generalised (SQL-Like) List Comprehensions

Generalised list comprehensions are enabled by the flag -XTransformListComp. Here is an example:

employees = [ ("Simon", "MS", 80)
            , ("Erik", "MS", 100)
            , ("Phil", "Ed", 40)
            , ("Gordon", "Ed", 45)
            , ("Paul", "Yale", 60) ]

output = [ (the dept, sum salary)
         | (name, dept, salary) <- employees
         , then group by dept
         , then sortWith by (sum salary)
         , then take 5 ]

In this example, the list output would take on the value:

[("Yale", 60), ("Ed", 85), ("MS", 180)]

There are three new keywords: group, by, and using. (The function sortWith is not a keyword; it is an ordinary function that is exported by GHC.Exts.)

then group by e: This form of grouping is essentially the same as the one described above. However, since no function to use for the grouping has been supplied, it will fall back on the groupWith function defined in GHC.Exts. This is the form of the group statement that we made use of in the opening example.

Rebindable syntax and the implicit Prelude import

The -XNoImplicitPrelude option suppresses the implicit import of the Prelude. The -XRebindableSyntax flag causes the following pieces of built-in syntax to refer to whatever is in scope, not the Prelude versions. -XRebindableSyntax implies -XNoImplicitPrelude.

Postfix operators

The -XPostfixOperators flag enables a small extension to the syntax of left operator sections, which allows you to define postfix operators.

Tuple sections

The -XTupleSections flag enables Python-style partially applied tuple constructors.

Record field disambiguation

Record field disambiguation is enabled by the -XDisambiguateRecordFields flag.
For example:

module Foo where

  import M

  x = True

  ok3 (MkS { x }) = x + 1   -- Uses both disambiguation and punning

With -XDisambiguateRecordFields you can use unqualified field names even if the corresponding selector is only in scope qualified. For example, assuming the same module M as in our earlier example, this is legal:

module Foo where

  import qualified M    -- Note qualified

  ok4 (M.MkS { x = n }) = n + 1   -- Unambiguous

Since the constructor MkS is only in scope qualified, you must name it M.MkS, but the field x does not need to be qualified even though M.x is in scope but x is not. (In effect, it is qualified by the constructor.)

Record puns

Record puns are enabled by the flag -XNamedFieldPuns.

Record wildcards

Record wildcards are enabled by the flag -XRecordWildCards. This flag implies -XDisambiguateRecordFields. Wildcards can be mixed with other patterns, including expressions. The ".." expands to the missing in-scope record fields, where "in scope" includes both unqualified and qualified-only. Any fields that are not in scope are not filled in. For example:

module M where
  data R = R { a,b,c :: Int }

module X where
  import qualified M( R(a,b) )
  f a b = R { .. }

The {..} expands to {M.a=a, M.b=b}, omitting c since it is not in scope at all.

Local Fixity Declarations

A flag is necessary to enable them.

Package-qualified imports

With the -XPackageImports flag, GHC allows import declarations to be qualified by the package name that the module is intended to be imported from.

Data types with no constructors

With the -fglasgow-exts flag, GHC lets you declare a data type with no constructors. Such data types have only one value, namely bottom. Nevertheless, they can be useful when defining "phantom types".

Data type contexts

Haskell allows datatypes to be given contexts, e.g.

data Eq a => Set a = NilSet | ConsSet a (Set a)

giving constructors with types:

NilSet :: Set a
ConsSet :: Eq a => a -> Set a -> Set a

In GHC this feature is an extension called DatatypeContexts, and is on by default.

Infix type constructors, classes, and type variables

A type constructor can be used infix, and similarly for :*:. A type variable can also be used infix, as in Int `a` Bool. Function arrow is infixr with fixity 0. (This might change; I'm not sure what it should be.)
Liberalised type synonyms Type synonyms are like macros at the type level, but Haskell 98 imposes many rules on individual synonym declarations. With the -XLiberalTypeSynonyms flag, GHC lifts these restrictions. With -XUnboxedTuples, foo has the legal (in GHC) type: foo :: forall x. x -> . So, for example, this will be rejected: type Pr = (# Int, Int #) h :: Pr -> Int h x = ... because GHC does not allow unboxed tuples on the left of a function arrow. Existentially quantified data constructors. Why existential?. Existentials and type classes. Record Constructors) Restrictions is the result of f1. One way to see why this is wrong is to ask what type f1 has: Declaring data types with explicit constructor signatures , can only be declared using this form. Notice that GADT-style syntax generalises existential types (). are distinct type variables, then the data type is ordinary; otherwise is a generalised data type (). As with other type signatures, you can give a single signature for several data constructors. In this example we give a single signature for T1 and T2. In the declaration of Foo, the type variable a does not appear in the result type of either constructor. Although it is universally quantified in the type of the constructor, such a type variable is often called "existential". Indeed, the above declaration declares precisely the same type as the data Foo in . must be the same (modulo alpha conversion). The Child constructor. Generalised Algebraic Data Types (GADTs) -XGADTs. The -XGADTs flag also sets -XRelaxedPolyRec. A GADT can only be declared using GADT-style syntax (); data type above, the type of each constructor must end with Term ty, but the ty need not be a type variable (e.g. the Lit constructor). clause for a GADT; only for an ordinary data type. As mentioned, ). declaration attached to a data declaration, is a GADT, but you can generate the instance declaration using stand-alone deriving. The TypeableX class, whose kind suits that of the data type constructor, and then writing the data type instance by hand.
GHC now permits such instances to be derived instead, using the flag -XGeneralizedNewtypeDeriving,. built-in derivation applies (section 4.3.3. of the Haskell Report). (For the standard classes Eq, Ord, Ix, and Bounded it is immaterial whether the standard method is used or the one described here.) Class and instances declarations Class declarations This section, and the next one, documents GHC's type-class extensions. There's lots of background in the paper Type classes: exploring the design space (Simon Peyton Jones, Mark Jones, Erik Meijer). All the extensions are enabled by the -fglasgow-exts flag.). Functional dependencies. class (Monad m) => MonadState s m | m -> s where ... class Foo a b c | a b -> c where ... There should be more documentation, but there isn't (yet). Yell if you need it. Rules for functional dependencies In a class declaration, all of the class type variables must be reachable (in the sense mentioned [implparam], where they are identified as one point in a general design space for systems of implicit parameter in two ways. The -XFlexibleInstances flag allows the head of the instance declaration to mention arbitrary nested types. For example, this becomes a legal instance declaration instance C (Maybe Int) where ... See also the rules on overlap. With the -XTypeSynonymInstances flag, instance heads may use type synonyms. As always, using a type synonym is just shorthand for writing the RHS of the type synonym definition. For example: type Point = (Int,Int) instance C Point where ... instance C [Point] where ... is legal. However, if you added instance C (Int,Int) where ... as well, then the compiler will complain about the overlapping (actually, identical) instance declarations. As always, type synonyms must be fully applied. 
You cannot, for example, write: type P a = [[a]] instance Monad P where ... GHC must instead give f the type: f :: C Int [b] => [b] -> [b] That postpones the question of which instance to pick to the call site for f, by which time more is known about the type b. You can write this type signature yourself if you use the -XFlexibleContexts flag. -XFlexibleInstances to do this.) Warning: overlapping instances must be used with care. They can give rise to incoherence (i.e. different instance choices are made in different parts of the program) even without -XIncoherentInstances.) The willingness to be overlapped or incoherent is a property of the instance declaration itself, controlled by the presence or otherwise of the -XOverlappingInstances and -XIncoherentInstances flags when that module is being defined. Neither flag is required in a module that imports and uses the instance declaration. Specifically, during the lookup process: An instance declaration is ignored during the lookup process if (a) a more specific match is found, and (b) the instance declaration was compiled with -XOverlappingInstances. The flag setting for the more-specific instance does not matter. If an instance declaration is compiled without -XOverlappingInstances, then that instance can never be overlapped. This could perhaps be inconvenient. Perhaps the rule should instead say that the overlapping instance declaration should be compiled in this way, rather than the overlapped one. Perhaps overlap at a usage site should be permitted regardless of how the instance declarations are compiled, if the -XOverlappingInstances flag is used at the usage site. (Mind you, the exact usage site can occasionally be hard to pin down.) We are interested to receive feedback on these points. The -XIncoherentInstances flag implies the -XOverlappingInstances flag, but not vice versa. Overloaded string literals GHC supports overloaded string literals.
Normally a string literal has type String, but with overloaded string literals enabled (with -XOverloadedStrings) a string literal has type (IsString a) => a. This means that the usual string syntax can be used, e.g., for packed strings and other variations of string like types. String literals behave very much like integer literals, i.e., they can be used in both expressions and patterns. If used in a pattern, the literal is replaced by an equality test, in the same way as an integer literal is. Type families are enabled by the flag -XTypeFamilies. Additional information on the use of type families in GHC is available on the Haskell wiki page on type families. Data instances are permitted only when an appropriate family declaration is in scope - just as a class instance declaration requires the class declaration to be in scope. -- WRONG: These two equations together... foo B = 2 -- ...will produce a type error. Instead, you would have to write foo as a class operation, thus: class C data instances When an associated data of data instances. Type Associated type. Type instance declarations of type synonym instances of type synonym do not contain any type family constructors, the total number of symbols (data type constructors and type variables) in s1 .. sm is strictly smaller than in t1 .. tn, and for every type variable a, a occurs in s1 .. sm GHC 6.10. Type families and instance declarations Type families require us to extend the rules for the form of instance heads, which are given in . Specifically: Data type families may appear in an instance head Type synonym families may not appear (at all) in an instance head The reason for the latter restriction is that there is no way to check for. Other type system extensions Explicit universal quantification (forall). sort :: (?cmp :: a -> a -> Bool) => [a] -> [a]: sortBy :: (a -> a -> Bool) -> [a] -> [a] sort :: (?cmp :: a -> a -> Bool) => [a] -> [a] sort = sortBy ?cmp Implicit-parameter type constraints:'s call site is quite unambiguous, and fixes the type a.
Implicit-parameter bindings An implicit parameter is bound using the standard let or where binding forms. For example, we define the min function by binding cmp. min :: [a] -> a min = let ?cmp = (<=) in least. f t = let { ?x = t; ?y = ?x+(1::Int) } in ?x + ?y The use of ?x in the binding for ?y does not "see" the binding for ?x, so the type of f is f :: (?x::Int) => Int -> Int Implicit parameters and polymorphic recursion. Implicit parameters and monomorphism. Explicitly-kinded quantification: you can give explicit kinds in type signatures: f :: forall (cxt :: * -> *). Set cxt Int: f :: (Int :: *) -> Int g :: forall a. a -> (a :: *) The syntax is atype ::= '(' ctype '::' kind ')' The parentheses are required. Arbitrary-rank polymorphism :: (Ord a => [a] -> [a]) -> Swizzle: data T a = MkT (Either a b) (b -> b) it's just as if you had written this: data T a = MkT (forall b. Either a b) (forall b. b -> b). Type inference In general, type inference for arbitrary-rank types is undecidable. (). Implicit quantification: f :: a -> a f :: forall a. a -> a g (x::a) = let h :: a -> b -> b h x y = y in ... g (x::a) = let h :: forall b. a -> b -> b h x y = y in ... Notice that GHC NOTE: the impredicative-polymorphism feature is deprecated in GHC 6.12, and will be removed or replaced in GHC 6.14. GHC supports impredicative polymorphism, enabled with -XImpredicativeTypes. This means that you can]). The technical details of this extension are described in the paper Boxy types: type inference for higher-rank types and impredicativity, which appeared at ICFP 2006. Lexically scoped type variables. Lexically-scoped type variables are enabled by -XScopedTypeVariables. This flag implies -XRelaxedPolyRec.) Declaration type signatures: the type variables of a signature scope over the definition only when the quantification is explicit. For example: g :: [a] -> [a] g (x:xs) = xs ++ [ x :: a ] This program will be rejected, because "a" does not scope over the definition of g. Expression type signatures a.
ST s Bool brings the type variable s into scope, in the annotated expression (op >>= \(x :: STRef s Int) -> g x). Pattern type signatures. If -XRelaxedPolyRec is specified:). With -XRelaxedPolyRec in an expression context. ''T has type Name, and names the type constructor T. That is, ''thing interprets thing in a type context. These Names can be used to construct Template Haskell expressions, patterns, declarations etc. They may also be given as an argument to the reify function.. (Compared to the original paper, there are many differences of detail. The syntax for a declaration splice uses "$" not "splice". The type of the enclosed expression must be Q [Dec], not [Q Dec]. Pattern splices and quotations are not implemented.) Using Template Haskell.) You can only run a function at compile time if it is imported from another module that is not part of a mutually-recursive group of modules that includes the module currently being compiled. Furthermore, all of the modules of the mutually-recursive group must be reachable by non-SOURCE imports from the module where the splice is to be run. For example, when compiling module A, you can only run Template Haskell functions imported from B if B does not import A (directly or indirectly). The reason should be clear: to run B we must compile and run A, but we are currently type-checking A.. A Template Haskell Worked Example To help you get over the confidence barrier, try out this skeletal worked example. First cut and paste the two modules below into "Main.hs" and "Printf.hs": {- calts are like altsChoice class includes a combinator ArrowChoice) at least for strict k. (This should be automatic if you're not using seq.) This ensures that environments seen by the subcommands are environments of the whole command, and also allows the translation to safely trim these environments. The operator must also not use any variable defined within the current arrow abstraction. We could define our own operator, this stack was empty. 
In the second argument to handleA, this stack consists of one value, the value of the exception. The command form of lambda merely gives this value a name. More concretely, the values on the stack are paired to the right of the environment. So e is a polymorphic variable (representing the environment) and ti are the types of the values on the stack, with t1 being the top. The polymorphic variable e must not occur in a, ti or t. However the arrows involved need not be the same. Here are some more examples of suitable operators. Pragmas GHC supports several pragmas, or instructions to the compiler placed in the source code. Pragmas don't normally affect the meaning of the program, but they might affect the efficiency of the generated code. Pragmas all take the form {-# word ... #-} where word indicates the type of pragma, and is followed optionally by information specific to that type of pragma. Case is ignored in word. The various values for word that GHC understands are described in the following sections; any pragma encountered with an unrecognised word is (silently) ignored. ANN pragmas GHC offers the ability to annotate various code constructs with additional data by using three pragmas. This data can then be inspected at a later date by using GHC-as-a-library. Annotating values ANN and Data instances = ... Annotating types ANN type ANN You can annotate types with the ANN pragma by using the type keyword. For example: {-# ANN type Foo (Just "A `Maybe String' annotation") #-} data Foo = ... Annotating modules ANN module ANN You can annotate modules with the ANN pragma by using the module keyword. For example: {-# ANN module (Just "A `Maybe String' annotation") #-}. A SPECIALIZE pragma has the effect of generating (a) a specialised version of the function and (b) a rewrite rule (see ) that rewrites a call to the un-specialised function into a call to the specialised one, which is then used subsequently. (The obsolete SPECIALIZE syntax is not described in detail here.) GHC keeps trying to apply the rules as it optimises the program.
For example, consider: let s = map f t = map g in s (t xs). Here you might want to mark f NOINLINE, because it really only makes sense to match f on the LHS of a rule if you are sure that f is not going to be inlined before the rule has a chance to fire. List fusion take, filter iterate, repeat zip, zipWith The following are good consumers: List comprehensions array (on its second argument) ++ (on its first argument) foldr map take. unsafeCoerce# allows you to fool the type checker. Generic classes The ideas behind this extension are described in detail in "Derivable type classes", Ralf Hinze and Simon Peyton Jones, Haskell Workshop, Montreal Sept 2000, pp94-105. An example will give the idea: import Generics This class declaration explains how toBin and fromBin work for arbitrary data types. They do so by giving cases for unit, product, and sum, which are defined thus in the library module Generics: data Unit = Unit data a :+: b = Inl a | Inr b data a :*: b = a :*: b Now you can make a data type into an instance of Bin like this: instance (Bin a, Bin b) => Bin (a,b) instance Bin a => Bin [a] That is, just leave off the "where" clause. Of course, you can put in the where clause and over-ride whichever methods you please. Using generics To use generics you need to use the flags -fglasgow-exts (to enable the extra syntax) and -XGenerics (to generate extra per-data-type code). Changes wrt the paper and restrictions: class Foo a where op :: a -> (a, Bool) op {| Unit |} Unit = (Unit, True) op x = (x, False). class Foo a where op :: a -> Bool op {| p :*: q |} (x :*: y) = op (x :: p) ... The type patterns in a generic default method must take one of the forms: a :+: b a :*: b Unit where "a" and "b" are type variables. Furthermore, all the type patterns for a single type constructor (:*:, say) must be identical; they must use the same type variables.
So this is illegal: class Foo a where op :: a -> Bool op {| a :+: b |} (Inl x) = True op {| p :+: q |} (Inr y) = False The type patterns must be identical, even in equations for different methods of the class. So this too is illegal: class Foo a where op1 :: a -> Bool op1 {| a :*: b |} (x :*: y) = True op2 :: a -> Bool op2 {| p :*: q |} (x :*: y) = False (The reason for this restriction is that we gather all the equations for a particular type constructor into a single generic instance declaration.) A generic method declaration must give a case for each of the three type constructors. The type for a generic method can be built only from: Function arrows Type variables Tuples Arbitrary types not involving type variables Here are some example type signatures for generic methods: op1 :: a -> Bool op2 :: Bool -> (a,Bool) op3 :: [Int] -> a -> a op4 :: [a] -> Bool Here, op1, op2, op3 are OK, but op4 is rejected, because it has a type variable inside a list. This restriction is an implementation restriction: we just haven. (Of course, these things can only arise if you are already using GHC extensions.) However, you can still give an instance declarations for types which break these rules, provided you give explicit code to override any generic default methods. The option -ddump-deriv dumps incomprehensible stuff giving details of what the compiler does with generic declarations. Another example Just to finish with, here's another example I rather like: class Tag a where nCons :: a -> Int nCons {| Unit |} _ = 1 nCons {| a :*: b |} _ = 1 nCons {| a :+: b |} _ = nCons (bot::a) + nCons (bot::b) tag :: a -> Int tag {| Unit |} _ = 1 tag {| a :*: b |} _ = 1 tag {| a :+: b |} (Inl x) = tag x tag {| a :+: b |} (Inr y) = nCons (bot::a) + tag y.
https://gitlab.haskell.org/nineonine/ghc/-/raw/22a25aa92bdbdcd4b29ada4cea187496b44bc53b/docs/users_guide/glasgow_exts.xml?inline=false
public class Test {
    int i[] = {0};

    public static void change_i(int i[]) {
        int j[] = {2};
        i = j;
    }

    static public void main(String args[]) {
        int i[] = {1};
        change_i(i);
        System.out.println(i[0]);
    }
}
---------------------------------------------- You created a local copy of i inside change_i(). At the start of the method, both the local i and the i in main are pointing to the same array object. After i = j; the local copy is now pointing to the {2} array. At the end of the method, the local i goes out of scope. The local variable i[] in change_i() hides the instance variable i[] in the Test object. Try changing the local variable's name from "i" to "z" and you'll understand better what's going on here. 2) I understand that arrays are treated as objects and objects are passed as reference and hence changes made to the reference or object itself in a method, will carry through after the end of that method.
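The renaming suggested above can be made concrete with a small self-contained demo (the class name ArrayRefDemo and the split into two helper methods are illustrative, not from the thread):

```java
public class ArrayRefDemo {

    // z is a copy of the caller's reference: mutating the array it
    // points at is visible to the caller...
    public static void changeElement(int[] z) {
        z[0] = 2;
    }

    // ...but rebinding z to a new array only changes the local copy.
    public static void reassign(int[] z) {
        z = new int[]{3};
    }

    public static void main(String[] args) {
        int[] i = {1};
        changeElement(i);
        System.out.println(i[0]); // prints 2: the mutation came through
        reassign(i);
        System.out.println(i[0]); // still prints 2: the reassignment did not
    }
}
```

This is exactly "pass the reference by value": both calls copy the reference, so mutation through the copy is shared, while reassignment of the copy is not.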
http://www.coderanch.com/t/238829/java-programmer-SCJP/certification/Array
The class implements the random forest predictor. #include <opencv2/ml.hpp> Creates the empty model. Use StatModel::train to train the model, or Algorithm::load to load a pre-trained model. The size of the randomly selected subset of features at each tree node that is used to find the best split(s). If you set it to 0 then the size will be set to the square root of the total number of features. Default value is 0. If true then variable importance will be calculated and then it can be retrieved by RTrees::getVarImportance. Default value is false. The termination criteria that specifies when the training algorithm stops. Either when the specified number of trees is trained and added to the ensemble or when sufficient accuracy (measured as OOB error) is achieved. Typically the more trees you have the better the accuracy. However, the improvement in accuracy generally diminishes and asymptotes past a certain number of trees. Also keep in mind that the number of trees increases the prediction time linearly. Default value is TermCriteria(TermCriteria::MAX_ITERS + TermCriteria::EPS, 50, 0.1) Returns the variable importance array. The method returns the variable importance vector, computed at the training stage when CalculateVarImportance is set to true. If this flag was set to false, the empty matrix is returned. Returns the result of each individual tree in the forest. In case the model is a regression problem, the method will return each of the trees' results for each of the sample cases. If the model is a classifier, it will return a Mat with samples + 1 rows, where the first row gives the class number and the following rows return the votes each class had for each sample. Loads and creates a serialized RTree from a file. Use RTree::save to serialize and store an RTree to disk.
Load the RTree from this file again by calling this function with the path to the file. Optionally specify the node for the file containing the classifier.
https://docs.opencv.org/trunk/d0/d65/classcv_1_1ml_1_1RTrees.html
NAME swapon, swapoff - start/stop swapping to file/device SYNOPSIS #include <unistd.h> #include <asm/page.h> /* to find PAGE_SIZE */ RETURN VALUE On success, zero is returned. On error, -1 is returned, and errno is set appropriately. ERRORS EBUSY (for swapon()) The specified path is already being used as a swap area. EINVAL The file path exists, but refers neither to a regular file nor to a block device; or, for swapon(), the indicated path does not contain a valid swap signature or resides on an in-memory file system like tmpfs. COLOPHON This page is part of release .27 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/oneiric/man2/swapon.2.html
Running doctests from TextMate for Google App Engine modules I love Python's doctests. Basically, you test out your functions in the interactive shell and copy the results into the function's docstring. That's it! So simple. Example: def http_request(self, url, data, method=urlfetch.POST): """ Makes an API call to Triggermail and returns the response. >>> client = TriggerMail() >>> client.http_request('email', {'email':'EMAIL_REMOVED'}, urlfetch.GET) {u'blacklist': u'0', u'templates': {u'test2': 0}, u'verified': u'0', u'vars': {u'first_name': u'Aral', u'last_name': u'Balkan'}, u'optout': u'0'} """ To run the doctests, you just need a main method in your module that looks like this: if __name__ == "__main__": import doctest doctest.testmod() And, if you're working with TextMate, you can run the current script and its doctests by pressing ⌘ R. Sweet! However, when working with Google App Engine, this doesn't work out of the box. If you try it, you'll get an error similar to the following: ImportError: No module named google.appengine.api This is because the local GAE environment isn't set up properly. The same goes when trying to test your apps from the Python interactive shell. (If you're using Django for your app, you're in luck: all you have to do is ./manage.py shell and you're up and running with an interactive shell that's configured for your GAE project.) Thankfully, Duncan over at the GAE forums went to the trouble of finding out exactly which imports are necessary to get you up and running. His code listing actually goes beyond setting up the environment to finding your modules and running the tests. For my purposes, I just want to be able to hit ⌘ R in TextMate and run the tests for my current module while developing it, so I took the top bit of his code and put it into a module called gae_doctests.py. It looks like this:
# From: import sys import os sys.path = sys.path + ['/usr/local/google_appengine', '/usr/local/google_appengine/lib/django', '/usr/local/google_appengine/lib/webob', '/usr/local/google_appengine/lib/yaml/lib', '/usr/local/google_appengine/google/appengine','/Users/aral/singularity/'] from google.appengine.api import apiproxy_stub_map from google.appengine.api import datastore_file_stub from google.appengine.api import mail_stub from google.appengine.api import urlfetch_stub from google.appengine.api import user_service_stub APP_ID = u'test_app' AUTH_DOMAIN = 'gmail.com' LOGGED_IN_USER = 't...@example.com' # set to '' for no logged in user # Start with a fresh api proxy. apiproxy_stub_map.apiproxy = apiproxy_stub_map.APIProxyStubMap() # Use a fresh stub datastore. stub = datastore_file_stub.DatastoreFileStub(APP_ID, '/dev/null', '/dev/null') apiproxy_stub_map.apiproxy.RegisterStub('datastore_v3', stub) # Use a fresh stub UserService. apiproxy_stub_map.apiproxy.RegisterStub('user', user_service_stub.UserServiceStub()) os.environ['AUTH_DOMAIN'] = AUTH_DOMAIN os.environ['USER_EMAIL'] = LOGGED_IN_USER # Use a fresh urlfetch stub. apiproxy_stub_map.apiproxy.RegisterStub( 'urlfetch', urlfetch_stub.URLFetchServiceStub()) # Use a fresh mail stub. apiproxy_stub_map.apiproxy.RegisterStub( 'mail', mail_stub.MailServiceStub()) (Either replace the /usr/local/ bit with the actual path to your GAE install or use Duncan's code, which is neater -- I was lazy and copied the contents of the sys.path list from the Django interactive shell.) To use it, simply: import gae_doctests And hit ⌘ R in TextMate. Sweet! Once I'm done hacking away on a module, I simply comment out the import. Joeles I tried it with TextMate and it kind of works, but I still get a 404 error. July 16th, 2008 at 12:17 am I think it has something to do with the google webapp framework for some reason probably “/” is not fetched? any hints on how to solve this while debugging or even setting which handler to call?
Beech Horn A brilliant aid to those of any IDE, thank you. You may want to add:November 15th, 2008 at 5:57 pm >>> from google.appengine.api.memcache import memcache_stub to your import section, along with: >>> # Use a fresh memcache stub. >>> apiproxy_stub_map.apiproxy.RegisterStub( >>> ‘memcache’, memcache_stub.MemcacheServiceStub()) at the end of the file, negating memcache errors. Stuart Grimshaw This seems like a daft question, but where did you put the gae_doctest module?January 6th, 2009 at 3:51 pm
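For readers without App Engine, the doctest mechanics the article starts from can be tried in a completely standalone module; a minimal sketch (the slugify function is illustrative, not from the article):

```python
import doctest

def slugify(title):
    """Lower-case a title and join its words with hyphens.

    >>> slugify("Hello World")
    'hello-world'
    >>> slugify("  Doctests are simple  ")
    'doctests-are-simple'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    # Run every doctest in this module; pass -v on the command line
    # (or hit Cmd-R in TextMate) to see each example as it executes.
    doctest.testmod()
```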
http://aralbalkan.com/1358
You can implement this as follows: First, add an authentication filter to authenticate a user. Refer this: If you don't install the filter then this won't work. Next, enable Access Control Lists like this: val sc = new SparkContext(new SparkConf()) ./bin/spark-submit <all your existing options> --conf spark.acls.enable=true For a user to have modification access ...
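The same properties can also be set persistently rather than on the submit command line; a hypothetical conf/spark-defaults.conf fragment (the user names alice and bob are illustrative):

```
spark.acls.enable    true
spark.ui.view.acls   alice,bob
spark.modify.acls    alice
```

Here spark.ui.view.acls lists the users allowed to view the application web UI, and spark.modify.acls lists the users allowed to modify (e.g. kill) the running application.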
https://www.edureka.co/community/41206/how-to-check-if-user-has-permission-in-web-ui
How to compile MySensors on Platformio for Blue Pill Hello. I am trying to change to PlatformIO, which has many advantages over Arduino IDE. I want to compile for the STM32 Blue Pill module but have not been able to get the right configuration. I have found many suggestions but have not gotten any to work. At one point I got a compile, but it ignored the MySensors code. Can someone please post their working platformio.ini, and are there any detailed instructions anywhere for the exact configuration, such as which STM32 library to use? I have spent days researching and tried things like: board_build.core = maple Using a Python script to get rid of the multiple main/premain error Other Platform & Board options (such as genericSTM, etc.) But I can't seem to get the right mix. Thank you to anyone who can help. Also, I have been using MySensors for over 5 years: this is a great system, a great website, has great admin, and a fantastic group of contributors. Thank you all very much for MySensors! ++++++++++++++++++++++++++++++++ [ Below is a sample of the various options I have tried in an experimental platformio.ini. None worked.] ;[env:bluepill_f103c8] ;platform = ststm32 ;board = bluepill_f103c8 ;framework = arduino ;upload_protocol = stlink ; NOTE - Above lines are not acceptable to MySensors on Blue Pill. Must use maple as shown below. ; NOTE - In PlatformIO docs, Maple is a different processor ;[env:maple] ;platform = ststm32 ;board = maple ;framework = arduino ;upload_protocol = stlink ; NOTE - The below was found when searching the net. It appears to be an alternative set of ENV entries.
[env:genericSTM32F103C8] platform = ststm32 board = genericSTM32F103C8 framework = arduino board_build.core = maple ;lib_extra_dirs = D:\Dropbox\Elektronik\platformio\libraries ;lib_deps = upload_protocol = stlink ; Serial Monitor options monitor_speed = 115200 Hi @novicit , I was just trying to test a BluePill adding a LoRa chip (SX1276) as a new type of MySensors node but got blocked in the first step (no compiling when I include MySensors library). In PlatoformIO I installed the default platform (STSTM32). Sorry I cannot give you a solution,... Did you get MySensors to compile in the Arduino environment with STM32? (planning to give it a try if it works). Which boards definition?. If you are interested I could post the compiler output, but I believe is of no use to you since I have not digged much into the issue. I have used ESP32 and ATmega328 variants in the past with MySensors but wanted to give a try to STM32 architecture as it seems quite promissing when dealing with Analog signals (ADC specs), so I am still at the begginin of the journey... @CarloMagno If you are successful to implement MySensors on the Blue Pill, please post your method. Thank you! I have spent 2 weeks on this and remain unable to compile MySensors for the Blue Pill. I can compile and download programs to the BP on both PlatformIO and Arduino IDE, but as soon as I put #include <mysensors.h> in the code, errors appear. I have found these errors on some forum entries, but a clear step-by-step solution method which works has not yet been posted (that I have found). I am no expert on coding, so I assume I am missing something. Well, I have tried the Arduino IDE both with the current stable version of MySensors and with the Development version. 
In both cases i get a HAL error: In file included from C:\Users\xxxxx\Documents\Arduino\STM32_MySensors_Test1_v0\STM32_MySensors_Test1_v0.ino:56: C:\Users\xxxxx\Documents\Arduino\libraries\MySensors-development/MySensors.h:83:2: error: #error Hardware abstraction not defined (unsupported platform) 83 | #error Hardware abstraction not defined (unsupported platform) | ^~~~~ exit status 1 Error compiling for board Generic STM32F1 series. Line 56 of the sketch is #include <MySensors.h> So it seems it is related to the compatibility of the official STM32 core provided by ST and the MySensors library. I believe I have read somewhere that the successful implementation of MySensors and STM32 was using another core... I have to dig a little bit more about it. Nevertheless. I'd rather use the ST "official" core as it seems it supports a great number of board and is in continuous development, so new boards that appear could be ready to use... (Cube Cell looks really nice but integrates another type of LoRa radio that I believe will require quite more work from MySensors to get it supported). I will answer to myself... forcing the architecture in the sketch including this line before MySensors (it should be picked automatically but it seems it's not): #define ARDUINO_ARCH_STM32F1 #include <MySensors.h> The error now goes to the hardware definition where indeed it seems it expects to have the maple core: In file included from C:\Users\xxxxx\Documents\Arduino\libraries\MySensors-development/hal/architecture/STM32F1/MyHwSTM32F1.cpp:20, from C:\Users\xxxxx\Documents\Arduino\libraries\MySensors-development/MySensors.h:71, from C:\Users\xxxxx\Documents\Arduino\STM32_MySensors_Test1_v0\STM32_MySensors_Test1_v0.ino:56: C:\Users\xxxxx\Documents\Arduino\libraries\MySensors-development/hal/architecture/STM32F1/MyHwSTM32F1.h:23:10: fatal error: libmaple/iwdg.h: No such file or directory 23 | #include <libmaple/iwdg.h> | ^~~~~~~~~~~~~~~~~ compilation terminated. 
exit status 1 Error compiling for board Generic STM32F1 series. I quick search pointed to the Roger Clark github repository (it seems it is the origin of STM32 support in Arduino IDE) I downloaded the zip file with the repository, created a hardware folder in MyDocuments/arduino folder (it did't exist), and decompressed the zip there. The folder structure is as follows: C:\Users\xxxxx\Documents\Arduino\hardware\Arduino_STM32-master I closed the Arduino IDE and opened it again so the new boards definition would be available, and in the type of board, under the new group "STM32F1 Boards (STM32duino.com)" I selected the board type "Generic STM32F103C series" and everything compiles OK, (with or without the line #define ARDUINO_ARCH_STM32F1 Now I have to check if following the indication in, including the option "board_build.core = maple" in platformio.ini the compilation will force to use the maple core that seems the one working with MySensors. .. still no luck in Platformio. platformio.ini: [env:genericSTM32F103CB] platform = ststm32 board = genericSTM32F103CB framework = arduino board_build.core = maple monitor_speed = 115200 monitor_filters = time, default [env:Sensor-Serial-Windows] upload_port = COM7 monitor_port = COM7 [env:Sensor-Serial-Linux] ; any port that starts with /dev/ttyUSB upload_port = /dev/ttyUSB* monitor_port = /dev/ttyUSB* After some warnings regarding definition of SPI2 and not been used, we get to the error when MySensors is included: Compiling .pio\build\genericSTM32F103CB\lib5ec\MySensors_ID548\MyASM.S.o Archiving .pio\build\genericSTM32F103CB\lib473\libEEPROM.a Archiving .pio\build\genericSTM32F103CB\libd15\libSPI.a Archiving .pio\build\genericSTM32F103CB\lib5ec\libMySensors_ID548.a Linking .pio\build\genericSTM32F103CB\firmware.elf .pio\build\genericSTM32F103CB\src\main.cpp.o: In function `premain()': main.cpp:(.text.startup._Z7premainv+0x0): multiple definition of `premain()' 
.pio\build\genericSTM32F103CB\FrameworkArduino\main.cpp.o:main.cpp:(.text.startup._Z7premainv+0x0): first defined here
.pio\build\genericSTM32F103CB\src\main.cpp.o: In function `main':
main.cpp:(.text.startup.main+0x0): multiple definition of `main'
.pio\build\genericSTM32F103CB\FrameworkArduino\main.cpp.o:main.cpp:(.text.startup.main+0x0): first defined here
collect2.exe: error: ld returned 1 exit status
*** [.pio\build\genericSTM32F103CB\firmware.elf] Error 1
================================================ [FAILED] Took 19.66 seconds ================================================

For what I understand of the error... (nothing)... It seems it will take me quite some time to get any further... At least the Arduino IDE is working and I can start testing the radio with that.

@CarloMagno Thank you for writing up your experience and success with the Arduino IDE. It appears we did many of the same steps. I also ran into the HAL error, tried the #define ARDUINO_ARCH..., and failed. I also tried the Roger Clark version and did not get that to run. Today I took an older PC, clean-installed the IDE and the Roger Clark core as you described, and SUCCESS: I was able to compile MySensors. There must be something wrong with my configuration on my main PC. The "official" core is also my preference, as it is available on PlatformIO, and there are libraries available for it that may not run on Roger's. For example, I have read there is a sleep/low-power library for the Blue Pill that only runs on the "official" STM32 core (I have not tested that). I might take another try at getting PlatformIO to work with Roger Clark's core, but we are getting to the fringes of my technical skills. In either case, I am very happy to be able to compile MySensors for the Blue Pill. Essentially the same cost and size as the Pro Mini, but so much more capable. I have a number of applications it will fit perfectly. If I make any progress on PlatformIO, I will post here.
You need to either use the unofficial STM32 core, which is what the MySensors port is built upon, or make a new port which will be compatible with the official core. I looked into this and it seems feasible for me to accomplish. I am now working on a node with STM32, so I will try to make it work.

@monte have you seen ? Maybe that is a good way forward for stm32 support.

@novicit said in How to compile MySensors on Platformio for Blue Pill: ...It could be because I used the ZIP file to install the Arduino IDE instead of the Windows installer... Good to point out in case someone else faces the same problems.

@monte, @mfalkvidd... Fantastic! It is good to know that there is development to support the official ST core; for sure it will provide much better future stability with new hardware from ST.
https://forum.mysensors.org/topic/11151/how-to-compile-mysensors-on-platformio-for-blue-pill/20
import image
Alexito, Aug 16, 2014 8:19 PM

Hello,

I am going to implement a dynamic legend using JavaScript in Adobe Acrobat. The document contains a lot of layers, and every layer has its own legend. The original idea is to implement the legend so that it contains, in a dialog box, the images for the visible layers. I can only hide/show the layers by setting their state to false or true (this.getOCGs()[i].state = false;) at document level.

Question 1: Can I extract data from a layer somehow to build the legend? I think not, as we only have these functions on layers: getIntent(), setIntent() and setAction(). Right?

Therefore I decided to arrange it so that all the needed icons for every layer are saved in a folder with corresponding names. JavaScript should import the icons, and I build a dialog window with the icons of the visible layers and place a text description for each icon. I tried all the possibilities of image import described here:. Only one way worked for me (converting the icons to hexadecimal strings). This way isn't good, as it is too much work to create a hexadecimal string from an image with another tool and place it into the JavaScript code. Unfortunately, I cannot import an image using the other methods :(. Since the security settings in Adobe were changed after version 7 or so, it is not possible to use functions like app.newDoc, app.openDoc, or even app.getPath at document level. I decided to implement the import at folder level using trusted functions, like this:

Variant 1:

var importImg = app.trustedFunction(function() {
    app.beginPriv();
    var myDoc = app.newDoc({ nWidth: 20, nHeight: 20 });
    var img = myDoc.importIcon("icon", "/icon.png", 0);
    app.endPriv();
    return img;
});
var oIcon = importImg();

NotAllowedError: Security settings prevent access to this property or method.
App.newDoc:109:Folder-Level:User:acrobat.js

Variant 2:

var importImg = app.trustedFunction(function() {
    var phPath = app.getPath({ cCategory: "user", cFolder: "javascript" });
    try {
        app.beginPriv();
        var doc = app.openDoc({ cPath: phPath + "/icon.png", bHidden: true });
        app.endPriv();
    } catch (e) {
        console.println("Could not open icon file: " + e);
        return;
    }
    var oIcon = util.iconStreamFromIcon(doc.getIcon("icon"));
    return oIcon;
});
var oIcon = importImg();

Error: Could not open icon file: NotAllowedError: Security settings prevent access to this property or method.

The settings in Preferences -> JavaScript -> JavaScript Security are disabled (Enable menu item JS execution privileges, enable global object security policy).

Question 2: Is it not allowed, or should I change some other settings or use the import in some other way? I tried all these possibilities with .jpg, .png and .pdf, with different sizes (big images and 20x20 px); it doesn't work. Could somebody help me, as I have spent a lot of time trying different possibilities?

It would actually be better to implement the main goal described above at document level. Are there other possibilities to access images, maybe using XML or something else (Question 3)?

Thank you and kind regards,
Alex

1. Re: import image (try67, Aug 17, 2014 1:37 AM, in response to Alexito)

I tried the first code and it worked fine, in the sense that it generated a new file, but it couldn't import the image because the file path you specified is partial. You need to specify the full file path using the right syntax. Also, the third parameter of that method should only be used when importing a PDF file.

2. Re: import image (Alexito, Aug 17, 2014 2:53 AM, in response to try67)

Thanks! There was no problem with the path, as I changed it before posting. I printed it out and checked every time. I installed Acrobat and it works now. Before, I tried it in Reader. OK, if this code works, why does similar code not work at document level?
function importImg() {
    var shortPath = "/Macintosh HD/tmpAcrobat/";
    console.println(shortPath + "icon.png");
    // I tried different image types (png, jpg, pdf) and sizes (16x16, 20x20), with and without the third parameter.
    var img = this.importIcon("icon", shortPath + "ico.png", 0);
    oIcon = util.iconStreamFromIcon(this.getIcon("icon"));
    console.println(oIcon.width);  // returns the icon's width
    console.println(oIcon.height); // returns the icon's height
    return oIcon;
};

console.println("call importImg");
var oIcon = importImg();

Error: RangeError: Invalid argument value.
Doc.importIcon:5:Document-Level:init

Do you have any idea how to get it to work?

3. Re: import image (try67, Aug 17, 2014 5:04 AM, in response to Alexito)

It seems I was wrong about the path. You can specify a relative one, but you can't specify it at all from a doc-level script, only from a trusted context. Also, you should remove the nPage parameter, as I said earlier.

4. Re: import image (Alexito, Aug 17, 2014 5:55 AM, in response to try67)

I removed the nPage parameter, thanks! So it looks like this now:

function importImg(targetImg) {
    var shortPath = "/Macintosh HD/Users/tmpAcrobat/";
    console.println(shortPath + "icon.jpeg");
    var img = this.importIcon("icon", shortPath + "icon.png");
    oIcon = util.iconStreamFromIcon(this.getIcon("icon"));
    console.println(oIcon.width);  // returns the icon's width
    console.println(oIcon.height); // returns the icon's height
    return oIcon;
};

var oIcon = importImg("Button1");

>> You can specify a relative one, but you can't specify it at all from a doc-level script, only from a trusted context.

What do you mean here, that it is not allowed to specify the path like "/Macintosh HD/Users/tmpAcrobat/icon.png"? Is there no possibility to specify the path from a trusted context at doc level?

5. Re: import image (try67, Aug 17, 2014 6:04 AM, in response to Alexito)

I already explained: you can't specify the cDIPath parameter from a doc-level script.

6.
Re: import image (Alexito, Aug 17, 2014 7:29 AM, in response to try67)

I got a bit confused about "can", "cannot" and trusted context. All right, thank you!
https://forums.adobe.com/message/6647804
This application explains basic concepts of image processing, including brightening, converting to gray, and adding notes to an image, using the System.Drawing.Imaging namespace classes. The following code shows how to display an image in a picture box:

private Bitmap b =

The screenshot above shows an image before and after increasing brightness. This is the main function, where we do the main processing. If you look into the code you will notice I am using unsafe code for the main processing. This is really important, because unsafe mode allows that piece of code to run without the CLR's checks, which makes it run much faster. That matters here because we care about the speed of the routine.

public

©2014 C# Corner. All contents are copyright of their authors.
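The brightening step the article above describes amounts to adding a constant to every channel value and clamping the result to the valid byte range. The article's actual implementation is C# with unsafe pointer access over the Bitmap data; the idea can be sketched in plain Python for illustration:

```python
def brighten(pixels, delta):
    """Return a new pixel buffer with `delta` added to every channel,
    clamped to the valid 0-255 byte range."""
    return [min(255, max(0, p + delta)) for p in pixels]

# A tiny grayscale buffer: one dark pixel and one near-white pixel.
print(brighten([10, 250], 20))  # -> [30, 255]
```

The clamp is the important part: without it, brightening a near-white pixel would wrap around and produce a dark artifact.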
http://www.c-sharpcorner.com/UploadFile/ShrutiShrivastava/ImageProcessing12192005061519AM/ImageProcessing.aspx
LSB Core Tests

This article describes how to run the official LSB Core test suite (OLVER) against the local development version of glibc. The test suite covers 1193 glibc functions (see LSB 12.3. "Interfaces for libc") and checks them for conformance with the LSB, SUSv2, SUSv3, SUSv4, LFS and SVID.4 standards. The test suite is part of the LSB certification tests for Linux distributions.

1. Installation

The test suite requires the LSB SDK, the LSB support package (e.g. redhat-lsb in Fedora, lsb in Debian, etc.), GCC, Perl 5, JDK 6, zlib, ncurses, Linux-PAM (development files), GNU Automake and Libtool to be installed on your host.

By default the test suite checks the glibc installation in the /usr system directory. That is not convenient for testers, because installing the latest glibc to /usr is a very difficult and dangerous operation for the system. Unfortunately, the test suite is not yet configurable enough to easily switch the path of the tested glibc installation. This section describes a workaround that allows you to run the test suite against a local glibc install tree.

Install glibc to a local directory (/home/user/glibc-2.17 for example):

$ wget
$ tar -xzf glibc-2.17.tar.gz
$ cd glibc-2.17
$ mkdir build
$ cd build
$ ../configure --prefix=/usr
$ make
$ make install install_root=/home/user/glibc-2.17

Install the LSB SDK:

$ wget
$ tar -xzf lsb-sdk-4.1.5-1.ia32.tar.gz
$ cd lsb-sdk
$ sudo ./install.sh

Install the lsbcc compiler to a local directory (/home/user/local-sdk for example):

$ bzr branch
$ cd build_env/lsbdev-cc
$ LSBCC_LSBVERSION=4.1 INSTALL_ROOT=/home/user/local-sdk make install

Configure the LSB SDK:

$ sudo mv /opt/lsb/bin/lsbcc /opt/lsb/bin/lsbcc-orig
$ sudo vim /opt/lsb/bin/lsbcc
#!/bin/bash
/home/user/local-sdk/lsbcc -Wl,-rpath=/home/user/glibc-2.17/lib $@
$ sudo chmod +x /opt/lsb/bin/lsbcc
$ sudo ln -sf /home/user/glibc-2.17/lib/ld-linux.so.2 /lib/ld-lsb.so.3
$ bzr branch

Build the test suite:

$ cd olver-core-tests
$ sudo ./configure.sh
$ ./build_conf_tests.sh
$ sudo ./install.sh

2.
Usage

2.1. Run tests

The list of scenarios to run is bin/testplan. Some non-glibc scenarios can be removed from the testplan: util_pam_scenario (libpam), util_compress_scenario (libz), ncurses_* (libncurses). Some scenarios are disabled by default; you can enable them back by removing the leading '#'. To execute the tests, run (this may take several hours):

$ ./bin/olver_run_tests

2.2. Reports

Summary report: shows the number of executed scenarios, tested functions and failed requirements, and divides found problems into groups of known problems and new ones.
/var/opt/lsb/test/olver-core/DATE/summary.htm

Execution report: shows information on the executed scenarios.
/var/opt/lsb/test/olver-core/DATE/report/index.html

Requirements coverage: shows the covered requirements of the standards.
/var/opt/lsb/test/olver-core/DATE/result.htm

TET report: machine-readable report in the TET format.
/var/opt/lsb/test/olver-core/DATE/tet/nice_tet.log

See the detailed tutorial on OLVER reports in "OLVER Reports: Reference Guide".

2.3. Debugging

The test suite has a model-based architecture. Each target function is called by the agent program (./bin/agent). The model program (./bin/olver) tells the agent program which arguments should be passed to the function and receives its return value(s) back. The model then checks constraints on the return value, the parameters and the state of the function. Click on a failure to see the description of the failed requirement (Requirement failed: {localtime.01} The localtime() function shall convert the time in seconds into a local time, for example), the name of the scenario (time_conversion_scenario, for example) and the name of the specification function (localtime_spec(), for example). You can find the implementation of the specification function under the ./src/model directory (*_model.sec file) and the code of the requirement by its id.
To debug the agent program, run the model first in one terminal:

$ ./bin/olver -c ./etc/olver.conf SCENARIO

SCENARIO is one of the scenarios from the testplan (e.g. math_integer_scenario). Then run the agent program under the debugger in a second terminal:

$ gdb ./bin/agent
$ b FUNCTION
$ run
$ ...

FUNCTION is one of the tested functions in the target SCENARIO (e.g. abs, div, etc.).

3. Current results

This section describes current test results for various glibc versions on a Fedora host. The test suite already has its own database of known problems, so in the summary report it automatically divides found problems into a group of known problems and a group of new problems.

3.1. x86

2.18: Fixed problems bug543(hypotl), bug512(tgamma), bug511_1(initstate) and bug803(getdate). Fixed requirements: math.exp.hypotl.08, math.gamma.tgamma.11.01, math.rand.initstate.11 and util.format.time.getdate.{01, 03, 04, 05, 07, 13, 18.01}.

3.2. Known problems

TODO: new problems should be analyzed by maintainers and the result should be recorded here.

util_dl_scenario

[dlsym.03] Requirement failed: {dlsym.03} correct symbol not found

#include <dlfcn.h>
int main() {
    void* h = dlopen(NULL, RTLD_NOW | RTLD_GLOBAL);
    void* r = dlsym(h, "authnone_create");
    return 0;
}

The issue is that dlsym cannot find the symbol authnone_create in libc. There is only the authnone_create@GLIBC_2.0 symbol. The problem is not reproducible on glibc versions older than 2.14.

util_crypt_scenario

[crypt.09] Requirement failed: Unexpected error code returned: [EINVAL] code 0x16

#include <unistd.h>
#include <errno.h>
int main() {
    int e1 = 0;
    int e2 = 0;
    char *r = 0;
    e1 = errno;
    r = crypt("Key", "a");
    e2 = errno;
    return 0;
}

The issue is that the new implementation of the crypt function in glibc 2.17 requires the salt parameter to be longer than two characters and returns the error code EINVAL (22). This error code is not described in the standard for this function.

4.
Samples

Test report for glibc 2.17 in Fedora 18 on x86 (Linux 3.6.10-4.fc18.i686.PAE)
Test report for glibc 2.17 in ROSA 2012 on x86 (Linux 3.0.28-nrj-desktop-2rosa.lts)
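The dlsym lookup from the known-problems section above can also be reproduced from Python with ctypes, which is handy for quickly checking whether a given glibc build exports a symbol. This is an illustrative sketch, not part of the OLVER suite; `strlen` is used here because it is guaranteed to exist, and the `CDLL(None)` call mirrors the `dlopen(NULL, ...)` in the C snippet:

```python
import ctypes

# dlopen(NULL) returns a handle covering the main program and the libraries
# already loaded into it; attribute lookup on the handle performs dlsym().
libc = ctypes.CDLL(None)
strlen = libc.strlen
strlen.restype = ctypes.c_size_t
strlen.argtypes = [ctypes.c_char_p]
print(strlen(b"glibc"))  # -> 5
```

A symbol that is only exported with a version suffix (like authnone_create@GLIBC_2.0 in the bug report) would instead raise an AttributeError here, just as the C program's dlsym call returned NULL.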
http://sourceware.org/glibc/wiki/Testing/LSB_Core?action=diff&rev1=8&rev2=7
How to bring a running python program under debugger control

Of course pdb already has functions to start a debugger in the middle of your program, most notably pdb.set_trace(). This however requires you to know where you want to start debugging; it also means you can't leave it in for production code. But I've always been envious of what I can do with GDB: just interrupt a running program and start to poke around with a debugger. This can be handy in some situations, e.g. you're stuck in a loop and want to investigate. And today it suddenly occurred to me: just register a signal handler that sets the trace function! Here is the proof-of-concept code:

import os
import signal
import sys
import time

def handle_pdb(sig, frame):
    import pdb
    pdb.Pdb().set_trace(frame)

def loop():
    while True:
        x = 'foo'
        time.sleep(0.2)

if __name__ == '__main__':
    signal.signal(signal.SIGUSR1, handle_pdb)
    print(os.getpid())
    loop()

Now I can send SIGUSR1 to the running application and get a debugger. Lovely! I imagine you could spice this up by using Winpdb to allow remote debugging in case your application is no longer attached to a terminal. And the other problem the above code has is that it can't seem to resume the program after pdb got invoked: after exiting pdb you just get a traceback and are done (but since this is only bdb raising the bdb.BdbQuit exception, I guess this could be solved in a few ways). The last immediate issue is running this on Windows; I don't know much about Windows, but I know they don't have signals, so I'm not sure how you could do this there.

4 comments:

Marius Gedminas said...

(I hate blogger. Every time I start commenting, it immediately reloads the page and loses the half-sentence of text I've already typed.)

Neat idea! I long wished for gdb-like attachability for Python. This is very close to that ideal, but still requires a bit of foresight.
It should be possible to use gdb to attach to the Python interpreter and then invoke some API calls to effectively set a breakpoint after the next bytecode. I haven't seen anyone do that yet, though.

fumanchu said...

The Windows version might use the 'break' event and look something like this (although more Pythonic than this, since I had to move things around to get the point across in Blogger):

import sys
import pdb
import win32api
import win32con

def handle(event):
    """Handle console control events (like Ctrl-C)."""
    if event != win32con.CTRL_BREAK_EVENT:
        return 0
    pdb.set_trace()
    # First handler to return True stops the calls to further handlers
    return 1

result = win32api.SetConsoleCtrlHandler(handle, 1)
if result == 0:
    sys.exit('Could not SetConsoleCtrlHandler (error %r)' % win32api.GetLastError())

David Glick said...

I did something similar several months ago, for use with Zope 2. The essential difference is that I subclassed Pdb to make it possible to set a breakpoint on a particular file and line, rather than breaking immediately. The file and line are passed in via a file with a predefined name that you create before triggering the signal. With the help of a command in my editor that creates that file and then triggers the signal, debugging has become much more efficient! (It does get a bit complicated because Zope is multi-threaded, and pdb can only set a trace in the current thread. So mr.freeze patches the main loop processing HTTP requests to set a trace in *each* thread once the signal has been received, then does some bookkeeping to make sure that it only actually breaks in the first thread where the breakpoint is hit.)

Anonymous said...

"And the other problem the above code has is that it can't seem to resume the program after pdb got invoked, after exiting pdb you just get a traceback and are done"

"c" in pdb will allow you to continue execution as normal without quitting.

New comments are not allowed.
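A lighter-weight variant of the trick from the post above, when a full pdb session is overkill: have the signal handler record or print the interrupted stack instead of starting the debugger. Nothing raises bdb.BdbQuit, so execution simply resumes after the handler returns. This is an illustrative sketch (it sends the signal to itself so the effect is visible without a second terminal):

```python
import os
import signal
import traceback

captured = []

def stack_handler(sig, frame):
    # Record where we were interrupted; execution resumes right afterwards.
    captured.append(traceback.extract_stack(frame))

signal.signal(signal.SIGUSR1, stack_handler)

def busy():
    # Stand-in for an external `kill -USR1 <pid>` arriving while we are stuck here.
    os.kill(os.getpid(), signal.SIGUSR1)

busy()
# The innermost recorded frame is the function that was interrupted.
print(captured[0][-1].name)
```

In a real stuck process you would replace `captured.append(...)` with `traceback.print_stack(frame)` and send the signal from another shell, exactly as the post does with SIGUSR1.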
http://blog.devork.be/2009/07/how-to-bring-running-python-program.html
Opened 3 years ago
Closed 23 months ago
Last modified 17 months ago

#19031 closed New feature (fixed)

Add a warning that @override_settings may not work with DATABASES

Description

I'm aware of the fact that SQLite's :memory: mode doesn't work with threads, so I wanted to override TEST_NAME using override_settings to make SQLite use a filesystem DB for a single test:

from django.test import TransactionTestCase
from django.db import models
from django.test.utils import override_settings
from threading import Thread

class OverrideDATABASESTest(TransactionTestCase):
    @override_settings(DATABASES={'default': {'BACKEND': 'django.db.backends.sqlite3', 'TEST_NAME': 'test-db'}})
    def test_override_DATABASES(self):
        t = Thread(target=SomeModel.objects.get)
        t.start()
        t.join()

This doesn't work with threads; it fails with a DatabaseError: no such table: app_modelname exception in the thread. The test works if I set TEST_NAME in my settings.py, though, so this seems to be an issue with override_settings.

Change History (18)

comment:1 Changed 3 years ago by akaariai
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to wontfix
- Status changed from new to closed
I don't think we should claim that you can switch the used database in-flight and expect everything to work. My guess is that the problem here is that no syncdb ran for the DB. You could try whether things work if you run syncdb manually in setUp(). Wontfixing this, as ensuring that the database is set up correctly, and is also torn down correctly, seems hard to do.

comment:2 Changed 3 years ago by jonash
In that case, it might be useful to have some sort of warning or exception if someone tries to override the DATABASES setting. Or a note in the docs, along with all the other settings that shouldn't/can't be overridden (if any exist).

comment:3 Changed 3 years ago by claudep
- Component changed from Testing framework to Documentation
- Easy pickings set
- Resolution wontfix deleted
- Status changed from closed to reopened
- Triage Stage changed from Unreviewed to Accepted
I do agree with Anssi that overriding DATABASES is too much for override_settings. However, I also agree that this could be documented.

comment:4 Changed 3 years ago by Architekt
I don't think that overriding DATABASES settings is too much. It depends on what you expect from it; I can imagine proper use cases for it. The docs describe that the test database is created before running the tests, and every time you run the tests you see in the output: Creating test database... -> running tests -> Destroying test database... This should be enough to understand that overriding DATABASES will not change the test database in the middle of a test run. Overriding settings is always tricky; the situation described in this ticket is just one case out of many others, and I think this particular situation does not need any special documentation. jonash: the right solution for your problem is skipping the test for SQLite's :memory: database.

comment:5 Changed 3 years ago by jonash
It's not, since then I either need to run the tests twice (with :memory: and a real database) or my test is not run at all.

comment:6 Changed 2 years ago by aaugustin
- Status changed from reopened to new

comment:7 Changed 2 years ago by joeri
- Owner changed from nobody to joeri
- Status changed from new to assigned

comment:8 Changed 2 years ago by joeri
- Cc joeri added
- Has patch set
Just added a pull request. override_settings provides a way to change settings in your Django settings. However, some settings are only accessed during the initialization of your project. For example, the DATABASES and CACHES settings are read and cached by Django, and Django internals only use these cached objects (they do not read your settings file over and over again). Still, overriding the DATABASES setting with override_settings does change the DATABASES setting when accessed via django.conf.settings. This override, however, has no effect in practice. I call this unexpected behaviour because you probably expected that your database backend actually changed. In addition to documenting these settings that show unexpected behaviour, I also added a UserWarning when people try to change these settings.

I specifically do not raise an exception, because you might want to test something not related to what Django does internally (although you probably do). Note that this also introduces the interesting fact that the Django test suite shows this warning. This is actually related to another ticket, #20075, where this problem is described.

comment:9 Changed 2 years ago by aaugustin
The setting_changed signal is designed to catch such changes and clear whatever caches may exist in Django. I would prefer to raise the warning in a setting_changed listener, for consistency.

comment:10 Changed 2 years ago by joeri
Sounds good. However, this will trigger the warning twice, since this signal is also sent when leaving the context. Unless there is a way to check whether it is the change or the change-back signal.

comment:11 Changed 2 years ago by aaugustin
As discussed together, it would make sense to add a boolean argument to the setting_changed signal to tell entering the block from exiting it.

comment:12 Changed 2 years ago by joeri
Changed the pull request as per our discussion.

comment:13 Changed 2 years ago by timo
- Component changed from Documentation to Testing framework
Updating component since the patch is more than just documentation.

comment:14 Changed 2 years ago by timo
- Cc timograham@… added
- Patch needs improvement set
- Summary changed from "Using @override_settings doesn't override DATABASES in threads + SQLite" to "Add a warning that @override_settings may not work with DATABASES and CACHES"
- Type changed from Bug to New feature
Left some comments on the pull request.

comment:15 Changed 2 years ago by timo
Was looking into editing the pull request with my comments and noticed this causes the Django test suite to throw a warning, because django/contrib/sessions/tests.py calls @override_settings with CACHES. Not sure how we should handle this, since it appears CACHES can apparently be overridden successfully, at least in some cases?

comment:16 Changed 23 months ago by timo
- Patch needs improvement unset
- Summary changed from "Add a warning that @override_settings may not work with DATABASES and CACHES" to "Add a warning that @override_settings may not work with DATABASES"
- Version changed from 1.4 to master

comment:17 Changed 23 months ago by Tim Graham <timograham@…>
- Resolution set to fixed
- Status changed from assigned to closed
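The mechanism the ticket converges on (comment:9 and comment:11) is a listener on the setting_changed signal that warns when a cached setting is overridden, with a boolean telling entry from exit so the warning fires only once. Stripped of Django internals, with hypothetical names standing in for the real dispatch machinery, the pattern looks roughly like this:

```python
import warnings

# Tiny stand-in for Django's setting_changed signal dispatch (hypothetical names).
listeners = []

def setting_changed(setting, value, enter):
    for listener in listeners:
        listener(setting=setting, value=value, enter=enter)

def warn_on_unsupported(setting, value, enter, **kwargs):
    # Warn only when entering the override, not when restoring on exit
    # (the boolean argument from comment:11), so the warning fires once.
    if enter and setting in {"DATABASES", "CACHES"}:
        warnings.warn("Overriding setting %s can lead to unexpected behavior." % setting)

listeners.append(warn_on_unsupported)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    setting_changed("DATABASES", {}, enter=True)     # entering the override: warns
    setting_changed("DATABASES", None, enter=False)  # restoring on exit: silent

print(len(caught))  # -> 1
```

The real receiver lives in Django's test signal handling and is wired to override_settings; this sketch only shows why the enter/exit flag was needed to avoid a duplicate warning.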
https://code.djangoproject.com/ticket/19031
Object vSphereClient is already established and logged in.

using VMware.Vim;

private SessionManager GetSessionManager()
{
    SessionManager sessionManager = null;
    try
    {
        ManagedObjectReference _svcRef = new ManagedObjectReference();
        _svcRef.Type = "ServiceInstance";
        _svcRef.Value = "ServiceInstance";
        ServiceInstance _service = new ServiceInstance(vSphereClient, _svcRef);
        ServiceContent _sic = _service.RetrieveServiceContent();
        sessionManager = (SessionManager)vSphereClient.GetView(_sic.SessionManager, null);
    }
    catch (Exception e)
    {
        throw (e);
    }
    return sessionManager;
}

public SessionManagerLocalTicket GetLocalTicket(string userName)
{
    return GetSessionManager().AcquireLocalTicket(userName);
}

Here you see that in the method GetLocalTicket I can acquire a local ticket. It creates a file with a one-time username and password on the host, for instance:

/var/run/vmware-hostd-ticket/52b65cfa-d0d1-0dc6-ffc1-c8428d10e973

This file is available for a few seconds only (4 or 5 seconds or so); after that it disappears. I can use its contents to authenticate, for instance against a vSphere host, one time. The first problem is:

1. How do I read it from the host or use it for authentication directly?

I tried displaying it with "more /var/run/vmware-hostd-ticket/52b65cfa-d0d1-0dc6-ffc1-c8428d10e973" and I only get data similar to this:

"52 1f 94 41 69 db 7c 67-ee d9 3a e4 dc 2d 6e b9~"

That doesn't make any sense to me, so the second problem is:

2. When acquired from the host, how do I use it? Is this the password string, maybe?

And my third question:

3. How do I set the expiration time for the ticket to be more than a few seconds? Or in other words, the server-determined expiration time? According to the expiration time is set somewhere on the host: "The local ticket that is returned becomes invalid either after it is used or after a server-determined ticket expiration time passes."

I have answers for two of three questions now. Missing answer to question number three.

1.
AcquireLocalTicket is meant to be used locally on the host where you can read the local ticket. If I want to send it somewhere, I will have to code something smart that runs on the host and makes the LocalTicket available as soon as it shows up. For instance a file watcher that copies the file to a tftp location where I run a tftp server. 2. When acquired from the host, the file contains the password that can be used for authentication as long as the ticket has not timed out. 3. ? I would ask what is the reason you are trying to use a local ticket? What exactly is it that you are trying to accomplish because there probably is a better way. Local tickets are meant to be used in the session, id: I use acquirecloneticket() (slightly different method) to open a remote html5 console to a vm, but I don't leave it there and save it for a long time to use it later. I would be happy to discuss with you what your requirements are so we can come up with a better solution. Josh
https://communities.vmware.com/t5/vSphere-Management-SDK/How-do-I-use-a-local-ticket-acquired-via-VMware-Vim/m-p/943462
CC-MAIN-2020-50
refinedweb
495
55.44
Starting Panda3D¶ Creating a New Panda3D Application¶ To start Panda3D, create a text file and save it with a .cxx extension. Any text editor will work. Enter the following text into your C++ file: #include "pandaFramework.h" #include "pandaSystem.h" int main(int argc, char *argv[]) { // Open a new window framework PandaFramework framework; framework.open_framework(argc, argv); // Set the window title and open the window framework.set_window_title("My Panda3D Window"); WindowFramework *window = framework.open_window(); // Here is room for your own code // Do the main loop, equal to run() in python framework.main_loop(); framework.close_framework(); return (0); } For information about the Window Framework to open a window, click here. pandaFramework.h and pandaSystem.h load most of the Panda3D modules. The main_loop() subroutine. Running the Program¶ The steps required to build and run your program were already explained in a previous page. If Panda3D has been installed properly, a gray window titled My Panda3D Window will appear when you run your program. There is nothing we can do with this window, but that will change shortly.
https://docs.panda3d.org/1.10/cpp/introduction/tutorial/starting-panda3d
CC-MAIN-2020-10
refinedweb
175
68.87
COCI 2016/2017 round 5

Join us this Saturday on the 5th round of this year's COCI! Click to see where the coding begins in your timezone. We can discuss the problems here after the contest.

I love solving hsin.hr problems. The quality of the problems is high. Thank you.

How often do you have a contest there? Once a year or once a month?

Usually 7 rounds in one season and a special, more difficult final round. So, roughly one contest a month.

But it clashes with my birthday :(

Just a reminder: the contest starts in 3 minutes. Good luck everyone.

How to solve F? BTW, what was the purpose of this E?

Seemed fairly easy, I only spent about 1.5 hours coding and I think I got all. My solutions:

a_ij = a0_ij ^ x_i ^ x_j, where x_i is xor'd every time city i is selected, a_ij is the final adjacency matrix, a0_ij the initial adjacency matrix

offline with map<> and a sum-BIT with range updates

reducible to: you have a tree with N vertices, block exactly K vertices so that each route from a leaf to the root contains exactly 1 blocked vertex and the root isn't blocked; solution: tree DP, straightforward in O(NK^2), but for K < 64, O(NK) with bitsets is possible

UPD: Damnit, my last problem failed just on a triviality.

Or you could simply use long long instead of bitset (in the last problem).

I always use long long *as* bitset.

Can you tell more about your p4 and p5 solutions, please?

Notice that for each edge, it would be redundant to switch both endpoints. This means we can simply go through the vertices one by one (denote this u) such that u is not marked, and if there is any other vertex v such that there is no edge u-v, then we will switch all edges from u. We will then mark all vertices v such that there is no edge u-v (the same edges as mentioned above).

Can you explain how you can reduce it to a tree, since the graph may have cycles?

The cycles are irrelevant. You can block what's on the cycles in any way you want.

Could you explain problem 5's solution using BIT in more detail? Tnx.
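The xor observation in the first solution above can be checked by brute force: toggling all edges of a vertex flips a per-vertex parity bit, and an edge's final state is its initial state xored with both endpoints' bits. A quick illustrative check in Python (the 3-vertex graph and operation sequence are arbitrary):

```python
from itertools import product

def final_edge(a0, x, i, j):
    # Edge (i, j) after all toggles: initial state xor both endpoints' flip bits.
    return a0[i][j] ^ x[i] ^ x[j]

def simulate(a0, ops):
    """Literally apply each operation: flip every edge incident to vertex v."""
    a = [row[:] for row in a0]
    n = len(a)
    for v in ops:
        for u in range(n):
            if u != v:
                a[v][u] ^= 1
                a[u][v] ^= 1
    return a

# Small check on a 3-vertex graph with an arbitrary operation sequence.
a0 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
ops = [0, 2, 0, 1]
x = [0, 0, 0]
for v in ops:
    x[v] ^= 1  # x[v] records the parity of toggles applied to vertex v

a = simulate(a0, ops)
ok = all(a[i][j] == final_edge(a0, x, i, j)
         for i, j in product(range(3), repeat=2) if i != j)
print(ok)  # -> True
```

This is exactly why only the parity vector x matters, not the order or count of the operations themselves.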
:) As the contest has ended, how long should we wait for the system testing to finish?
Edit: The ranklist can be seen now.

How did Mo's algorithm TLE on E? I thought 5 seconds was enough for 500,000*700 = 350,000,000.

I used Mo's algorithm and got full points. Implementation details are often important in Mo's algorithm. My implementation is here:

Oh, I see why now. I used map instead of unordered_map. Silly me XD. Thanks for the code.

Actually, with unordered_map it still TLEs. [Code]

Don't use unordered_map.

Why not? Because he is getting TLE?

I mean, why is it slower than a map? I never told him to use map.

You are absolutely right.

My solution (Mo's) passed. I thought log(N) ≠ 700.

Sqrt(N) = 700 ;)

Yup, Mo's definitely worked for me; it might be an error in the code or a high constant.

Same here. I thought my solution with Mo's algo would pass all test cases, but it gave TLE on the last 4 test cases. Here is my solution.

You need O(1) access to cnt[v[st]], not O(log n) with map.

Did coordinate compression so that the values fit inside an array. Accepted code.

Thanks :)

I got TLE with Mo's too :(

Perhaps you got TLE with map/unordered_map, too? Could you show your code?

Can you please see my code? I also used map with Mo's algo but got TLE in the last 4 cases.

Don't use map.

Oh wow, I'm so dumb. I had solved the exact same problem a few weeks ago, and I just copied the code and modified 2 lines, but I didn't see that the limits for the numbers are bigger here, so I didn't switch from array to unordered_map. Well, I guess that's what I get for copying the solution.

Actually it's 500,000*700*log(N) when you update the index with a map; use an array instead of a map and it becomes 500,000*700.

Seems I got 105/120 on D with a seemingly wrong solution. I checked whether there exists an edge such that both of its nodes are in the same connected component. If so, return no. I couldn't find a counterexample during the contest; can maybe someone give me one?
EDIT: Seems I only failed one test case, so the solution might even be correct; I'd still like your opinion, since I can't prove it.
EDIT 2: Seems the only test case I failed was 3 0.

I didn't understand your algorithm. Aren't the endpoints of an edge always in the same connected component?

He meant that all connected components in the graph are cliques. The solution is indeed correct (I got it accepted; probably he had a bug in his code). The solution is easily provable to be correct. If we denote the number of clicks (where a click toggles the edges and their complement) on a node as A_node, then for any 3 nodes x, y, z, if there is an edge already present, A_x + A_y should be even, otherwise odd (similarly for A_x + A_z and A_y + A_z). It is easy to see this is only satisfied when either A_x + A_y, A_x + A_z, A_y + A_z are all even, or 1 of them is even while the other two are odd. This corresponds to the clique condition.

```cpp
#include <iostream>
#include <cstdio>
#include <vector>
using namespace std;

const int maxn = int(1e3) + 5;
int n, in[maxn];
bool G[maxn][maxn];

void dfs(int node, int comp) {
    in[node] = comp;
    for (int i = 0; i < n; i++) {
        if (!in[i] && G[node][i]) {
            dfs(i, comp);
        }
    }
}

int main(void) {
    int m, u, v;
    scanf("%d%d", &n, &m);
    for (int i = 0; i < n; i++) G[i][i] = 1;
    if (!m) {
        printf("NE\n");
        return 0;
    }
    while (m--) {
        scanf("%d%d", &u, &v);
        u--; v--;
        G[u][v] = G[v][u] = 1;
    }
    int ctr = 1;
    for (int i = 0; i < n; i++) {
        if (!in[i]) {
            dfs(i, ctr++);
        }
    }
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (in[i] == in[j] && !G[i][j]) {
                printf("NE\n");
                return 0;
            }
        }
    }
    printf("DA\n");
}
```

EDIT: sorry for errors, I didn't pay attention while typing.

With his wording, it sounds like he didn't think cliques of size two were okay. Or perhaps he missed the case where all of A_x + A_y, A_x + A_z, A_y + A_z are odd. That is, there are no edges in the graph originally. It is easy to miss it.

The case I missed was 3 0. Even i_dont_talk missed 2 0; luckily that one was in the sample tests.
I submitted a minute before the end, so I didn't have time to look for special cases. My code is practically the same as i_don't_talk's:

```cpp
#include <bits/stdc++.h>
#define MAXN 1010
using namespace std;

bool mat[MAXN][MAXN];
int pos[MAXN];
int n;

void dfs(int i, int comp) {
    pos[i] = comp;
    for (int j = 1; j <= n; j++) {
        if (mat[i][j] && pos[j] == 0) dfs(j, comp);
    }
}

int main() {
    int comp = 0, m;
    scanf("%d%d", &n, &m);
    for (int i = 0; i < m; i++) {
        int x, y;
        scanf("%d%d", &x, &y);
        mat[x][y] = 1;
        mat[y][x] = 1;
    }
    for (int i = 1; i <= n; i++) {
        if (pos[i] == 0) {
            comp++;
            dfs(i, comp);
        }
    }
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j < i; j++) {
            if (mat[i][j] == 0 && pos[i] == pos[j]) return printf("NE"), 0;
        }
    }
    printf("DA");
    return 0;
}
```

Yes! Thank you so much!

Yes, I just realized what I described were cliques haha.

Yes, I see now why it's correct. Thanks again :)

Sorry, I should've explained better. I marked connected components based on the edges that existed in the beginning, but the edges I checked for endpoints in the same connected component were the non-existing ones, the ones you were supposed to create. So for N = 3, M = 1, e = {1, 2}, the edges I looked at were {1, 3} and {2, 3}.

I noticed that if you flipped the node x, you'd also have to flip every node it's currently connected to, so you could get that edge back after destroying it; so basically, you'd have to flip the entire connected component. The problem arises when two nodes that aren't directly connected are in the same connected component, let's say nodes w and y. After flipping w, you'd create the edge {w, y}, but you'd have to flip y too, therefore destroying the edge. The only thing I'm not sure about, and only based on my gut feeling, is that each node will be flipped at most once.

Flipping a node twice is the same as not flipping it, so yes, you were correct. It's a 2-SAT problem.

Well, I may be wrong, but not really, right?
Say you flipped a node, then flipped some other nodes, then flipped this very node again; you will obtain a different graph.

This would result in the same graph as if you had never flipped the first node. No, you won't obtain a different graph. It's the same as flipping the same other nodes and not touching this one.

You are right. I'm sorry, I'm stupid.

In a frenzy, I accidentally submitted my STRELICE solution for POKLON, and got 0 points... after the contest, I submitted the old POKLON solution and it got full points :(

For problem D: the only case in which this can happen is either there exists only one complete graph (the trivial case), or two complete graphs. Code

P6 is clearly undefined if Hansel can't paint exactly K colors (e.g. S = 1). The game can't proceed, and there is no winner and loser. I made a clarification request about this after 1h of contest, but couldn't hear any response. I understand everyone can make a mistake, but please don't dismiss the clarification next time!

Just want to point out that p5 is almost exactly the same as 375D - Tree and Queries. So if anyone wants practice for something similar, this problem is perfect.

Did anyone else solve Problem E (Poklon) with a segment tree? I would like to compare my segment tree approach!

Could you share your approach for a start? xD I solved it using Mo's algorithm, but I am interested in how you did it with a segment tree.

My approach. The problem: how many numbers in a segment appear exactly 2 times? PREREQUISITE: loj - 1188; I wrote an editorial there. Solution: offline solve, segment tree.

Okay, let's jump into it with an example (given array of size 8):

1 1 1 2 3 5 1 2

First let's solve the problem if the query always starts from the 0th index and ends at any index. How to approach that? Using this array (and making a cumulative sum array of it, of course):

0 1 -1 0 0 0 0 1

Query [0-7]: ans = 1. Correct. Query [0-1]: ans = 1. Correct.

Okay, let me explain how I made this array here.
We marked the index where x appeared the 2nd time as 1, meaning: up to this index, there is an answer. Then, as we need numbers that are present EXACTLY 2 times, we marked the index where x appeared the 3rd time as -1, which crosses out the index where x appeared the 2nd time, meaning: up to this point there is no answer! I hope the making of the array is clear now.

Now the next part: what happens when the starting index of the query is not the 0th index? This is where offline solving comes into play! We will still use this array, but how do we get the answer when the query is, let's say, [1,2]? Using the current array, if we try to find the answer, then the answer will be 0, which is wrong! So how do we find it? Think about it! If we set the 1st index to 0, the 2nd index to 1 and the 6th index to -1, then we can answer any query starting from the 1st index! :D And the array is:

0 0 1 0 0 0 -1 1

Can you say why we specifically updated the 1st, 2nd and 6th indices? Because those are the places where 1 is present, and we changed the positions where 1 is present. Why? Because 1 is at the 0th index, which is the previous starting index. So, for example, if the query is [2-7], then we would have to do the same for the 0th index value and the 1st index value, and the array will look like:

0 0 0 0 0 0 1 1

and the answer is 2! We simply used a segment tree to update and answer the queries. CODE

PS: sorry for my bad English and the way I wrote; I usually do not post on Codeforces :S

Tnx. I learned a lot from you! Have a nice day. ;)

I used a segment tree :-)

But it clashes with Global Game Jam :(

How do I solve p3? A trivial solution gains 50 points.

You only have to find the area of the first quadrant and then multiply it by 4.
- For each rectangle do x = x/2 and y = y/2.
- For each X store the max Y coordinate for it.
- Start from the largest X and visit down to x = 1. If for the current X the Y coordinate (if it exists) is greater than max_y_coordinate (so far), update max_y_coordinate. Answer = Answer + max_y_coordinate.

Thanks!! I love you. I'll try it.
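The marker idea explained above can be sketched with a Fenwick tree instead of a segment tree (they are interchangeable here, since only point updates and prefix sums are needed). This is my own illustrative reconstruction of the approach, not the commenter's code; queries are 0-based and inclusive, sorted by left endpoint, and the per-value occurrence lists drive the marker shifts as the left boundary advances:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

struct Fenwick {
    std::vector<int> t;
    explicit Fenwick(int n) : t(n + 1, 0) {}
    void update(int i, int d) {           // point update at 0-based index i
        for (++i; i < (int)t.size(); i += i & -i) t[i] += d;
    }
    int query(int i) {                    // prefix sum over [0, i]
        int s = 0;
        for (++i; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};

std::vector<int> exactlyTwiceOffline(std::vector<int> a,
                                     std::vector<std::pair<int, int>> qs) {
    int n = a.size(), q = qs.size();

    // Compress values so occurrence lists can be indexed by value.
    std::vector<int> vals(a);
    std::sort(vals.begin(), vals.end());
    vals.erase(std::unique(vals.begin(), vals.end()), vals.end());
    for (int &x : a)
        x = std::lower_bound(vals.begin(), vals.end(), x) - vals.begin();

    std::vector<std::vector<int>> occ(vals.size());
    for (int i = 0; i < n; i++) occ[a[i]].push_back(i);
    std::vector<int> head(vals.size(), 0);  // first occurrence not yet dropped

    // Markers for left boundary l = 0: +1 at each value's 2nd occurrence,
    // -1 at its 3rd (exactly the array built by hand above).
    Fenwick bit(n);
    for (auto &o : occ) {
        if (o.size() >= 2) bit.update(o[1], +1);
        if (o.size() >= 3) bit.update(o[2], -1);
    }

    std::vector<int> order(q), ans(q);
    for (int i = 0; i < q; i++) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](int i, int j) { return qs[i].first < qs[j].first; });

    int l = 0;
    for (int idx : order) {
        while (l < qs[idx].first) {   // drop position l from consideration
            int v = a[l], h = head[v];
            const std::vector<int> &o = occ[v];
            // The old 2nd occurrence loses its +1, the old 3rd turns from
            // -1 into the new +1 (a net +2), and the old 4th becomes -1.
            if (h + 1 < (int)o.size()) bit.update(o[h + 1], -1);
            if (h + 2 < (int)o.size()) bit.update(o[h + 2], +2);
            if (h + 3 < (int)o.size()) bit.update(o[h + 3], -1);
            head[v] = h + 1;
            l++;
        }
        ans[idx] = bit.query(qs[idx].second);
    }
    return ans;
}
```

On the example array 1 1 1 2 3 5 1 2 this reproduces the marker arrays shown above after each shift of the left boundary.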
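For comparison, here is a sketch of the Mo's-algorithm approach discussed earlier in the thread, with coordinate compression so the count array is a plain array with O(1) access, which is the fix that avoided the map/unordered_map TLEs. Again an illustrative reconstruction, not anyone's submitted code; queries are 0-based and inclusive:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

std::vector<int> solveExactlyTwice(std::vector<int> a,
                                   std::vector<std::pair<int, int>> qs) {
    int n = a.size();
    int q = qs.size();

    // Coordinate compression: cnt[] can then be a plain array, avoiding
    // map/unordered_map lookups inside Mo's inner loop.
    std::vector<int> vals(a);
    std::sort(vals.begin(), vals.end());
    vals.erase(std::unique(vals.begin(), vals.end()), vals.end());
    for (int &x : a)
        x = std::lower_bound(vals.begin(), vals.end(), x) - vals.begin();

    // Sort queries in Mo's order: by block of the left endpoint, then right.
    int block = std::max(1, (int)std::sqrt((double)std::max(n, 1)));
    std::vector<int> order(q), ans(q);
    for (int i = 0; i < q; i++) order[i] = i;
    std::sort(order.begin(), order.end(), [&](int i, int j) {
        int bi = qs[i].first / block, bj = qs[j].first / block;
        return bi != bj ? bi < bj : qs[i].second < qs[j].second;
    });

    std::vector<int> cnt(vals.size(), 0);
    int cur = 0;  // number of values occurring exactly twice in the window
    auto add = [&](int v) { ++cnt[v]; cur += (cnt[v] == 2) - (cnt[v] == 3); };
    auto rem = [&](int v) { --cnt[v]; cur += (cnt[v] == 2) - (cnt[v] == 1); };

    int L = 0, R = -1;  // current window [L, R], inclusive
    for (int idx : order) {
        int l = qs[idx].first, r = qs[idx].second;
        while (R < r) add(a[++R]);
        while (L > l) add(a[--L]);
        while (R > r) rem(a[R--]);
        while (L < l) rem(a[L++]);
        ans[idx] = cur;
    }
    return ans;
}
```

On the same example array as above, the four queries [0,7], [0,1], [1,2], [2,7] give 1, 1, 1, 2, matching the segment-tree walkthrough.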
Hi, I tried doing Project Euler #27 today, which asks the following:

Euler published the remarkable quadratic formula: n² + n + 41. It turns out that the formula will produce 40 primes for the consecutive values n = 0 to 39. However, when n = 40, 40² + 40 + 41 = 40(40 + 1) + 41 is divisible by 41, and certainly when n = 41, 41² + 41 + 41 is clearly divisible by 41. Using computers, [...].

So, I did my thinking and came up with the following code in a few minutes:

```cpp
#include <iostream>
using namespace std;

int* sieve(int i) {
    bool* primes = new bool[i];
    int* primelist = new int[i];
    primes[0] = false;
    primes[1] = false;
    for (int j = 2; j < i; j++)
        primes[j] = true;
    for (int j = 2; j * j < i; j++) {
        if (primes[j]) {
            for (int k = j; k * j < i; k++)
                primes[k * j] = false;
        }
    }
    for (int k = 2; k < i; k++) {
        if (primes[k])
            primelist[k] = k;
    }
    return primelist;
}

bool isPrime(int check) {
    int* primes = sieve(1000);
    for (int i = 0; i < 1000; i++) {
        if (check == primes[i])
            return true;
    }
    return false;
}

int main() {
    int* primes = sieve(1000);
    int max_co = 0;
    int max_co_sec = 0;
    int index = 2;
    int max = 0;
    int x;
    for (int a = -999; a < 1000; a++) {
        for (int b = -999; b < 1000; b = primes[index++]) {
            x = 0;
            while (isPrime(x * x + a * x + b))
                x++;
            if (x > max) {
                max = x;
                max_co = a;
                max_co_sec = b;
            }
        }
    }
    cout << max_co * max_co_sec << endl;
    return 0;
}
```

What I did: Function sieve() returns an array of prime numbers using the sieve of Eratosthenes. This part is OK. I validated the function by generating random ranges of prime numbers and using brute force to check them. It works. Next, in main(), there are two loops for checking combinations of n*n + a*n + b. isPrime() goes over the generated primes array to see if the number exists anywhere in it. If it does, then it is a prime. This is basically what my program does.

The problem is that I'm getting the wrong output. What I'm getting is something like -75913. The correct answer is something like -51318. What am I doing wrong here?
And, yes, I know there are many design flaws. Please point out as many as you can, as I want to improve my habits. Thanks for reading :D
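Without spoiling the answer, two likely culprits stand out in the code above: isPrime() only recognises primes below 1000, while n² + an + b easily grows far beyond that, and sieve() returns a primelist with uninitialized gaps (it only assigns the prime indices) that the equality loop then reads. A minimal sketch of safer helpers, with names of my own choosing rather than from the post:

```cpp
#include <cassert>

// Trial-division primality test; unlike the sieve-backed isPrime() above,
// this handles values of n*n + a*n + b of any size, and returns false for
// negatives and for 0 and 1.
bool isPrimeValue(int x) {
    if (x < 2) return false;
    for (int d = 2; d * d <= x; d++)
        if (x % d == 0) return false;
    return true;
}

// Length of the run n = 0, 1, 2, ... for which n*n + a*n + b is prime.
// Since n = 0 must already give a prime, b itself has to be a positive
// prime, so iterating b only over primes below 1000 (as the inner loop
// in the post attempts) is the right idea.
int primeRun(int a, int b) {
    int n = 0;
    while (isPrimeValue(n * n + a * n + b)) n++;
    return n;
}
```

As a sanity check, primeRun(1, 41) should be 40, matching the problem statement.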
[Visual Studio] Creating Extensions for Multiple Visual Studio Versions
By Carlos Quintero | August 2017

The release of a new version of Visual Studio is always a challenge for developers of extensions (packages, add-ins, templates and so forth). For example, Visual Studio 2010 introduced the new Visual Studio Installer for eXtensions (VSIX files); Visual Studio 2012 introduced the light/dark themes; and Visual Studio 2015 removed add-ins (with the Add-In Manager); not to mention that each Visual Studio version provides a new SDK, new extensibility assemblies and new APIs. With Visual Studio 2017, this challenge is even bigger, due to its new modular setup based on workloads and individual components, and to a new version of the manifest for the VSIX deployment mechanism.

While some developers (most notably from Microsoft) release a different new extension for each Visual Studio version, most would prefer to release a single updated extension that can target the widest range of Visual Studio versions. In this article, I'll show you how to accomplish this. For this purpose, I'll focus on the most common scenario: a package with a command, created in a managed language (C#, in this case) and deployed as a VSIX file.

The goals to be accomplished are the following:
- To use a single Visual Studio project to create the package.
- To use Visual Studio 2017 for development and debugging.
- To generate a single package DLL as the result of the build.
- To put that single DLL inside a single VSIX file.
- To be able to install that VSIX file on Visual Studio 2017 and on many past versions (2015, 2013 and so on).

Because two artifacts are needed, a DLL file (which is the package) and a VSIX file (which is the deployment vehicle for the package), I'll explain each of these separately: first, how they work at installation or run time; second, how to develop them.
The VSIX File

As mentioned earlier, Visual Studio 2010 introduced the VSIX deployment mechanism to install Visual Studio extensions, and it's been the preferred way ever since. A VSIX file has the extension .vsix and can be installed in different ways.

If the VSIX file is published on the Visual Studio Marketplace (formerly Visual Studio Gallery) and it's compatible with the Visual Studio version and edition you're using, you can install it using the Extensions and Updates dialog. Under the Tools menu, click on Extensions and Updates and then go to Online | Visual Studio Marketplace (see Figure 1).

Figure 1 The Extensions and Updates Dialog Window

You can also double-click a VSIX file. When this happens, a Visual Studio Launcher (C:\Program Files (x86)\Common Files\Microsoft Shared\MSEnv\VSLauncher.exe) associated to the .vsix file extension is executed; this locates the VSIXInstaller.exe utility of the highest installed Visual Studio version (the highest version is required to be able to install to all lower versions). Then, the VSIX Installer shows the dialog in Figure 2 so you can select the compatible Visual Studio versions and editions in which to install the extension.

Figure 2 The VSIX Installer

VSIX files can be installed programmatically, too, using the VSIXInstaller.exe utility with its command-line options, such as the target Visual Studio version (2017, 2015 and so on) and edition (Community, Professional and the like). You can find that utility in the Common7\IDE subfolder of your Visual Studio installation.

In any case, either Visual Studio or the VSIXInstaller.exe utility needs to know which Visual Studio versions and editions the VSIX file supports. That information can be discovered via a manifest file inside the file. The VSIX file is actually a .zip file, so you can rename its .vsix file extension to .zip and then open it to examine its contents (see Figure 3).
Figure 3 Contents of a VSIX File

As you can see, there are several files inside: The .dll file is the package DLL. The .pkgdef file is used at installation time to add some keys to the Windows Registry that allow Visual Studio to recognize the DLL as a package. The [Content_Types].xml file describes the content type for each file extension (.dll, .json and so forth). The catalog.json and manifest.json files are required by Visual Studio 2017. And the extension.vsixmanifest file describes the name of the extension, version, and more, and which Visual Studio versions and editions it supports.

You can extract the extension.vsixmanifest file and open it with a text editor to examine its contents, which will look similar to what's shown in Figure 4.

Figure 4 The Contents of a Manifest File

As you can see, the manifest states in the InstallationTarget XML element the supported Visual Studio editions. Here, the Microsoft.VisualStudio.Pro value targets the Professional edition and higher, such as the Premium, Ultimate, Enterprise and any other such editions. Note that it also targets the Community edition, which is basically a Professional edition with some licensing restrictions and without some features. It also states the range of supported Visual Studio versions: 10.0 (2010), 11.0 (2012), 12.0 (2013), 14.0 (2015), 15.0 (2017).

When the VSIX file of a per-user extension is installed (either by Visual Studio or by the VSIX Installer), the files inside are unzipped and copied to a random folder at this location: C:\Users\<user>\AppData\Local\Microsoft\VisualStudio\<version number>\Extensions\<random folder>. The <version number> can have an "Exp" suffix appended for the "Experimental Instance" (explained later), and for Visual Studio 2017 it will also include the "instance id" of the installed Visual Studio.
This instance id is randomly generated at Visual Studio install time; it was added to support side-by-side installations of different editions of the same version (2017) of Visual Studio, something that wasn't possible before. For machine-wide extensions, the subfolder Common7\IDE\Extensions is used. Notice that in any case each Visual Studio version uses its own folder for its extensions.

While it would be nice if all Visual Studio versions supported the same manifest format, unfortunately that's not the case. Visual Studio 2010 introduced VSIX and the first version of the manifest. Visual Studio 2012 introduced version 2, which is completely different and incompatible with version 1. However, Visual Studio 2012, 2013 and 2015, all of which support version 2, can still accept a version 1 manifest, so you can build a VSIX file with a version 1 manifest and target from Visual Studio 2010 to Visual Studio 2015. But Visual Studio 2017 supports neither version 1 nor version 2. Instead, it requires a third version of the manifest. Fortunately, version 3 keeps using the value "2.0.0.0" in the Version attribute of the PackageManifest XML element and it adds only an XML element named <Prerequisites> (and the two new files, catalog.json and manifest.json, into the VSIX file). So, it's completely backward-compatible with the second version, supported by Visual Studio 2012, 2013 and 2015 (but not by Visual Studio 2010, which only supports version 1). This means that you can't target Visual Studio 2010-2017 with a single VSIX file. From this point, I'll give up on Visual Studio 2010 and will continue with a VSIX file that supports Visual Studio 2012, 2013, 2015 and 2017.

The Package DLL

A managed Visual Studio package is a DLL that contains a class that inherits from Microsoft.VisualStudio.Shell.Package.
It's decorated with certain attributes that help at build time to generate a .pkgdef file (which, as mentioned earlier, you can find inside the VSIX file and in the installation folder of the extension). The .pkgdef file is used at startup (older versions of Visual Studio) or at installation time (version 15.3 of Visual Studio 2017) to register the DLL as a package for Visual Studio. Once it's registered, Visual Studio will try to load the package at some point, either on startup or when one of its commands is executed if the package uses delay loading (which is the best practice).

During the attempt to load the managed DLL and initialize the package, three things happen: the DLL will be loaded by the Common Language Runtime (CLR) of a Microsoft .NET Framework version; it will use some DLLs provided by a .NET Framework; and it will use some DLLs provided by Visual Studio. I will examine each of these in turn.

A .NET Framework is the sum of two things: the CLR plus libraries (both base class and additional libraries). The CLR is the runtime (the JIT compiler, garbage collector and so forth) and it loads managed DLLs. In the distant past, each .NET Framework version 1.0, 1.1 and 2.0 (used by Visual Studio .NET 2002, Visual Studio .NET 2003 and Visual Studio 2005) provided its own CLR version (1.0, 1.1 and 2.0). However, the .NET Frameworks 3.0 and 3.5, used by Visual Studio 2008, continued to use the exact same CLR 2.0 of .NET Framework 2.0, instead of introducing a new one. Visual Studio 2010 introduced .NET Framework 4 and CLR 4.0, but since then all new .NET Frameworks 4.x have used CLR 4.0 (although swapping it "in-place" with a backward-compatible version rather than reusing the exact CLR 4.0 of .NET Framework 4). Since Visual Studio 2012 and higher all use CLR 4.0, the CLR version is not a problem when the DLL of an extension targets Visual Studio 2012, 2013, 2015 and 2017.
Libraries constitute the second part of a .NET Framework; these are DLLs referenced by a Visual Studio project and used at run time. To develop a single extension that targets multiple versions of Visual Studio, you must use the highest .NET Framework installed by default by the lowest Visual Studio version that you want to target. This means that if you want to target Visual Studio 2012 and higher, you need to use .NET Framework 4.5. You can't use, say, .NET Framework 4.5.1, introduced by Visual Studio 2013, because any DLL introduced in that version would not be present on a computer with only Visual Studio 2012 installed. And unless you really need that DLL, you won't want to force such users to install .NET Framework 4.5.1 to use your extension (it could hurt sales or downloads and support).

The extension also needs DLLs that are provided by Visual Studio (typically named Microsoft.VisualStudio.*). At run time, Visual Studio finds its DLLs at some well-known locations, such as the folder Common7\IDE with its subfolders Common7\IDE\PublicAssemblies and Common7\IDE\PrivateAssemblies, and from the Global Assembly Cache (GAC). The GAC for .NET Framework 4.x is located at C:\Windows\Microsoft.NET\assembly (there's another GAC at C:\Windows\assembly, but that one is for older .NET Frameworks). Visual Studio 2017 uses a more isolated installation that avoids the GAC, relying instead on the folders described previously.

There are a couple of key principles to follow when developing and generating a VSIX file. The first is that you must use the versions provided by the lowest Visual Studio version your extension targets. That means that if you want to target Visual Studio 2012 and higher, you must use only assemblies and extensibility APIs provided by that version (or lower). If your extension uses a DLL introduced by Visual Studio 2013 or higher, the extension won't work on a machine with only Visual Studio 2012.
The second principle is that the extension must never deploy Visual Studio DLLs, neither to the locations I mentioned (folders of Visual Studio or GAC), nor to the installation folder of the extension. These DLLs are provided by the target Visual Studio, which means that the VSIX file shouldn't include them.

Many Visual Studio DLLs have a version number (8.0 … 15.0) in the name, such as Microsoft.VisualStudio.Shell.11.0.dll or Microsoft.VisualStudio.Shell.Immutable.10.0.dll. These help to identify the Visual Studio version that introduced them, but don't get fooled: it's a name, not a version. For example, there are four versions (11.0.0.0, 12.0.0.0, 14.0.0.0 and 15.0.0.0) of Microsoft.VisualStudio.Shell.11.0.dll, each one provided, respectively, by a Visual Studio version (2012, 2013, 2015 and 2017). The first three, 11.0.0.0 to 14.0.0.0, are installed by the respective Visual Studio version in the GAC, and the fourth version, 15.0.0.0, used by Visual Studio 2017, is installed in the Common7\IDE\PrivateAssemblies folder.

Because an extension that targets Visual Studio 2012 and higher must use Visual Studio assemblies with version 11.0.0.0 (the first principle mentioned earlier), this means that the referenced Microsoft.VisualStudio.Shell.11.0.dll must be version 11.0.0.0. But because that version isn't installed by Visual Studio 2013 and higher (they start at version 12.0.0.0), and the extension shouldn't deploy Visual Studio DLLs (the second principle), wouldn't the extension fail when trying to use that Visual Studio DLL?

The answer is no, and it's thanks to an assembly-binding redirection mechanism provided by the .NET Framework, which allows you to specify rules like "when something requests this version of an assembly, use this newer version of it." Of course, the new version must be fully backward-compatible with the old version. There are several ways to redirect assemblies from one version to another.
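In an application configuration file, such a binding redirection takes roughly the following shape. This is an illustrative, hand-written fragment rather than a copy from an actual devenv.exe.config; the public key token shown is the one used by the Visual Studio shell assemblies:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.VisualStudio.Shell.11.0"
                          publicKeyToken="b03f5f7f11d50a3a"
                          culture="neutral" />
        <bindingRedirect oldVersion="2.0.0.0-14.0.0.0"
                         newVersion="15.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```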
One way is this: An executable (.exe file extension) can provide an accompanying configuration file (.exe.config file extension) that specifies the redirections. So, if you go to the Common7\IDE folder of your Visual Studio installation, you'll find the devenv.exe executable of Visual Studio, and a devenv.exe.config file. If you open the .config file with a text editor, you'll see that it contains lots of assembly redirections. In particular, Visual Studio 2017 (15.0) has an assembly version redirection for Microsoft.VisualStudio.Shell.11.0 that states that whenever something requests old versions 2.0.0.0 to 14.0.0.0, the new version 15.0.0.0 should be used instead. That's how Visual Studio 2013 or later can use an extension referencing Microsoft.VisualStudio.Shell.11.0 version 11.0.0.0, even if they don't provide that exact version.

Developing the Extension

Now that you know how things work at run time, you can develop the package. To recap, you'll create a VSIX project using Visual Studio 2017 with a manifest that targets Visual Studio versions from 11.0 to 15.0; it will contain a package and a command; and it will use only references with version 11.0.0.0 (or lower) installed by Visual Studio 2012.

You might wonder at this moment which Visual Studio versions should be installed on your development machine. The best practice is to have two development machines, as follows: On the first, if you have enough space on your disk, install all the Visual Studio versions: 2012, 2013, 2015 and 2017. They can all coexist side by side and you'll be able to test them during development. For Visual Studio 2017, even different editions such as Community, Professional and Enterprise can coexist at the same time, something that wasn't possible with older versions of Visual Studio. If available space is a concern, install the minimal components for the old versions, or skip some version in the middle of the range (2013 or 2015).
On your second development machine, install only Visual Studio 2017 or, even better, a build server with no Visual Studio version installed (just the Build Tools 2017), to build your extension for release. This approach will help ensure that you're not inadvertently using DLLs or other dependencies from folders installed by older Visual Studio versions.

You might also wonder if it wouldn't be safer to develop or build on a machine with only Visual Studio 2012 installed, and the answer is that it's not possible: To generate a VSIX file for Visual Studio 2017 (which requires a version 3 manifest and adds the catalog.json and manifest.json files), you need the Visual Studio SDK 15.0 of Visual Studio 2017 or, with some work, the Visual Studio SDK 14.0 of Visual Studio 2015. Neither the Visual Studio SDK 12.0 of Visual Studio 2013 nor the Visual Studio SDK 11.0 of Visual Studio 2012 can generate VSIX files for Visual Studio 2017.

And the best practice for (serious) testing is: Use a separate machine (virtual or cloud-based) for each Visual Studio version (so you'll need four machines to test your extension on Visual Studio 2012 to Visual Studio 2017 in isolation). This best practice helped me to find some errors in the code sample for this article!

To get the Visual Studio 2017 project templates to create a package (or any other kind of extension), you need the "Visual Studio extension development" workload. If you didn't install it when you first installed Visual Studio 2017, go to the folder C:\Program Files (x86)\Microsoft Visual Studio\Installer, launch vs_Installer.exe, click the Modify button and select that workload at the bottom of the list.

Create a new VSIX project using the File | New | Project menu; go to the Visual C# | Extensibility templates; ensure you've selected .NET Framework 4.5 on the dropdown list at the top; and select the VSIX Project template. Name the project VSIXProjectVS2012_2017.
Double-click the source.extension.vsixmanifest file to open its custom editor. In the Metadata tab, set the product name, author, version and so on. In the Install Targets tab, click the Edit button, select the Microsoft.VisualStudio.Pro identifier (that value also targets the Community edition, which is basically a Professional edition) and set the target installation range, [11.0,15.0], as shown in Figure 5. A square bracket means the value is included. A parenthesis would mean that the value is excluded, so you can also set [11.0,16.0). You can also target a minor version (like 15.3) using the build number (such as 15.0.26208.1).

Figure 5 Installation Targets

In the Dependencies tab, delete all items. In the Prerequisites tab, click the Edit button and set the minimal Visual Studio 2017 component your extension requires. In this example, only the Visual Studio core editor is required. This section is new for Visual Studio 2017 and the version 3 manifest, so it only applies to version 15.0 (see Figure 6).

Figure 6 Prerequisites

Add a package to the VSIX project by right-clicking the VSIX project node in Solution Explorer, then select the Add | New Item menu to bring up the Add New Item dialog. Now, go to the Visual C# Items | Extensibility | VSPackage node, select the Visual Studio Package template and name it MyPackage.cs. Add a command to the package repeating the actions of the previous step, but selecting this time the Custom Command template. Name this MyCommand1.cs.

To follow the principle of using the fewest dependencies required, in the source code of MyPackage.cs and MyCommand1.cs, remove the unused (grayed) namespaces. Then right-click the VSIX project node in Solution Explorer and click the Manage NuGet Packages for Solution entry.
In the Installed section, uninstall all the packages in the order shown here:

Microsoft.VisualStudio.Shell.15.0
Microsoft.VisualStudio.Shell.Framework
Microsoft.VisualStudio.CoreUtility
Microsoft.VisualStudio.Imaging
Microsoft.VisualStudio.Shell.Interop.12.0
Microsoft.VisualStudio.Shell.Interop.11.0
Microsoft.VisualStudio.Shell.Interop.10.0
Microsoft.VisualStudio.Threading
Microsoft.VisualStudio.Shell.Interop.9.0
Microsoft.VisualStudio.Shell.Interop.8.0
Microsoft.VisualStudio.TextManager.Interop.8.0
Microsoft.VisualStudio.Shell.Interop
Microsoft.VisualStudio.TextManager.Interop
Microsoft.VisualStudio.Validation
Microsoft.VisualStudio.Utilities
Microsoft.VisualStudio.OLE.Interop

(Don't uninstall the Microsoft.VSSDK.BuildTools package, which is the Visual Studio SDK.)

In the project's References node in Solution Explorer, remove all the remaining references (that weren't acquired as NuGet packages) except System and System.Design. Now you can rebuild the solution. You'll get compilation errors that will be solved by adding just the references shown in Figure 7.

Figure 7 Visual Studio 2012 References

Unfortunately, Microsoft doesn't provide an official NuGet package for Microsoft.VisualStudio.Shell.11.0 (you can find an unofficial NuGet VSSDK.Shell.11 package, though). If you have Visual Studio 2012 installed (you should, if that's the minimal supported version for your extension), you can get it from the GAC as explained earlier. Alternatively, you can get all the required assemblies by installing the Visual Studio 2012 SDK (bit.ly/2rnGsfq), which provides them in the subfolders v2.0 and v4.0 of the folder C:\Program Files (x86)\Microsoft Visual Studio 11.0\VSSDK\VisualStudioIntegration\Common\Assemblies. The last column of the table shows the subfolder of the Visual Studio 2012 SDK where you can find each assembly.
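For reference, once such an assembly is referenced from a local folder with Copy Local set to False, the resulting .csproj entry looks roughly like this. This is an illustrative sketch (the hint path and folder name are hypothetical); <Private>False</Private> is the on-disk form of the Copy Local = False setting:

```xml
<Reference Include="Microsoft.VisualStudio.Shell.11.0, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
  <HintPath>..\VS2012Assemblies\Microsoft.VisualStudio.Shell.11.0.dll</HintPath>
  <Private>False</Private>
</Reference>
```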
To avoid dependencies on unofficial NuGet packages or on specific local folders (either from a Visual Studio SDK or from a Visual Studio installation), the best approach is to get the assemblies from one of those sources and create a folder called VS2012Assemblies under the root folder of the project. Then copy the DLLs to that folder, reference them from there (using the Browse button of the project's Reference Manager dialog) and add the VS2012Assemblies folder to source code control, ensuring that the DLLs are added to it (source code control tools normally don't add DLLs by default). From this point on, the required Visual Studio assemblies are part of the source code. To follow the principle of not including assembly references in the VSIX file, or even in the output folder, select each reference and, in the Properties window, ensure that the Copy Local property is set to False. At this point the solution can be rebuilt without errors. Using Windows Explorer, go to the output folder. Only these files should be generated: extension.vsixmanifest, VSIXProjectVS2012_2017.dll, VSIXProjectVS2012_2017.pkgdef and VSIXProjectVS2012_2017.vsix. When you build the project, one of the MSBuild targets deploys the extension to the Experimental instance of Visual Studio. This is an instance of Visual Studio that uses different folders and Registry entries than the normal instance, so you don't make the normal instance unusable if something goes wrong with your extension during development. (You can always reset the Experimental instance by clicking the Windows Start button, typing "Reset the" and executing the "Reset the Visual Studio 2017 Experimental Instance" command.) If you go to the Debug tab on the Properties page of the project, you can set the Start external program field to the Visual Studio 2017 devenv.exe file. (It's important to change this when upgrading, because it will still point to an old version of Visual Studio.)
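Each checked-in reference from the VS2012Assemblies folder ends up in the .csproj roughly like this (the relative path is illustrative; Private=False is the project-file form of Copy Local = False):

```xml
<!-- Reference a checked-in VS 2012 assembly; Private=False keeps it out of the output folder -->
<Reference Include="Microsoft.VisualStudio.Shell.11.0">
  <HintPath>..\VS2012Assemblies\Microsoft.VisualStudio.Shell.11.0.dll</HintPath>
  <Private>False</Private>
</Reference>
```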
You can also see that the Command line arguments specify "Exp" as the root suffix (see Figure 8), so that the Experimental instance is also used for debugging. Figure 8 Debug Experimental Instance. Click the Debug | Start Debugging menu entry and a new Visual Studio instance will be launched (notice that its caption indicates "Experimental Instance"). If you click the Tools | Invoke MyCommand1 menu entry, the package will be loaded, the command will be executed and a message box will be shown. If you want to use Visual Studio 2017 to debug the extension on a previous Visual Studio version, you need to make two changes. First, because the built extension is deployed to the Experimental instance of the Visual Studio version whose SDK was used to build the project, you need to remove version 15.0 of the Microsoft.VSSDK.BuildTools NuGet package and use version 14.0 for Visual Studio 2015 or version 12.0 for Visual Studio 2013. For Visual Studio 2012 there's no NuGet package for the VSSDK, so you need to edit the .csproj file and point the VSToolsPath variable to the location of the VSSDK 11.0 (C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0), which you must install separately. Second, you need to go to the Debug tab on the Properties page of the project and set the Start external program field to the matching Common7\IDE\devenv.exe executable. As you probably know, many Visual Studio projects support round-tripping; that is, they can be opened and debugged by several Visual Studio versions without modification. This isn't the case with extensibility projects out of the box. With some mastery of MSBuild and the Visual Studio SDKs you may achieve it, but it's always a tricky approach. Once you're done with development and debugging, you can build your extension in the Release configuration and test it on Visual Studio versions installed in isolated instances on test machines.
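For the Visual Studio 2012 case, the .csproj edit amounts to overriding the VSToolsPath property; a sketch (the exact placement within an existing PropertyGroup is an assumption):

```xml
<!-- Point MSBuild at the separately installed VS 2012 SDK build targets -->
<PropertyGroup>
  <VSToolsPath>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0</VSToolsPath>
</PropertyGroup>
```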
If everything goes well, you can then publish your extension on the Visual Studio Marketplace!

Carlos Quintero has received the Microsoft Most Valuable Professional award 14 times, currently in the category of Visual Studio and Development Technologies. He has been helping other developers to create extensions for Visual Studio since 2002, blogging about it since 2006 at visualstudioextensibility.com and more recently tweeting about it: @VSExtensibility.

Thanks to the following technical experts for reviewing this article: Justin Clareburt, Alex Eyler and Mads Kristensen

Reader comments on "Visual Studio - Creating Extensions for Multiple Visual Studio Versions":

Hi, The article was written and published before Microsoft made AsyncPackage mandatory de facto for VS 2017 (... Nov 26, 2018

The article told me exactly what I needed to understand ... but not exactly how to do what I need to do. I need to create a VSIX package that supports VS2015, 2017 and 2019. Based on the article [and other articles] that means I will need to base my ... Nov 26, 2018

Thank you for this detailed article. Unfortunately, I am stuck because the C# template "Visual Studio Package" is missing in the current SDK of VS 2017. Apparently, it has been replaced by "Visual Studio AsyncPackage", which I suppose will be incompa... Jul 14, 2018

Hi. Nice article. But it's not quite complete, as it didn't discuss the problem of transforming T4 templates. If we need not just a simple VSIX but a DSL VSIX then things get much harder. As the DSL SDK targets import t4-includes basing on its install...
Oct 20, 2017

This was an excellent article. It provided the background behind everything, provided an easy-to-follow set of steps, and allowed me to set up my 2017 extension to target 2015 flawlessly, first try. Thanks a ton for making it! Sep 27, 2017

good notice Aug 7, 2017

I have reported it to the MSDN Magazine editors. The last draft was OK; not sure why figures were swapped afterwards. Thanks for reporting it. My portal and blog about VSX:; Twitter:... Aug 2, 2017

Figures 7 and 8 are swapped. The assembly references listed in Figure 8 belong in Figure 7, and the screenshot of the Debugger property page in Figure 7 belongs in Figure 8. Aug 2, 2017

Each new version of Visual Studio produces challenges for developers creating extensions for the IDE. Learn how to create a single Visual Studio package that can work on every version of Visual Studio from 2012 to 2017, by following best practices. R... Aug 1, 2017
https://msdn.microsoft.com/en-in/magazine/mt493251
Woohoo (Score:2, Interesting) Go C!

Re:Woohoo (Score:5, Funny) Go C! Actually, Go has some catching up to do on C.

Re:Woohoo (Score:5, Funny) oh no you don't my friend told me to visit go c before i'm not falling for that one again

Re:Woohoo (Score:4, Interesting) Not going to complain about that. It's depressing that Objective-C, Ruby and VB.Net have gone up, and to see C# go down... But nice to see C and Bash go up, as well as Java go down. Then again, Java goes down on everything, that's how much it blows. I need a job that doesn't require me to program so much of that. Oh well, occasionally I can get to use C, Bash or Python...

Re: (Score:3) It's hard to take a vendor- and OS-locked language seriously. When so much is going cloud and so much of cloud is on Linux... Microsoft-only languages just can't compete. There doesn't seem to be much hope for them in the mobile world either because WP isn't doing much but being laughed out of the market. (Is there even a VB for WP? Bah; it doesn't even matter.) While there may be some killer features to VB, there's also some killer drawbacks. There's no such thing as portable VB. If it's not a d

Re: (Score:3)

Re: (Score:3) No, the mono libraries are NOT incompatible. Most of the mono libraries are a compatible subset of the Microsoft libs (with what is/isn't compatible clearly defined). There is the "incompatible" GTK# library, but you can download GTK for Windows and fix that problem quickly enough. Yes, they won't implement some libraries (WPF... which is garbage anyway, though enough people have drunk that kool-aid, and parts of the overly-convoluted Entity Framework), but those tend to be somewhat niche parts. Java tends to use

Re: (Score:3) Without the middle-man: [tiobe.com]

Dying gasps (Score:3, Funny) Doesn't a dying star expand into a giant before it dies?
Re:Dying gasps (Score:5, Informative) You would be surprised how many mission-critical embedded systems are still being written in C

Re:Dying gasps (Score:5, Funny)

Re:Dying gasps (Score:4, Funny) I kid.

Re:Dying gasps (Score:4, Interesting) Oh snap. See, displaying "Yay! it fit in the memory they gave me!" can't be 7 megs. C was only ever a shorthand for PDP-11 machine language (back when C was young we'd routinely look at the compiler output; at that point it was passing arguments in registers, and Dave Conroy sat in the next cube over working on what has today morphed into* gcc. That's one long-lived piece of code.) and in tight spaces and critical loops you want machine language. Romable node.js would be the only thing I'd consider other than C for embedded code. I don't mind paying that overhead for the inherent asynchronous I/O advantages; you have to muck around in C a lot to do that, so it's worth the trade-off. Anything else just didn't bring enough to the table to warrant the overhead IMO. Contemporary support for C outside of Bell Labs was because of embedded code (a camera gantry project, and later the Halifax postal processing plant) - the RSX11M C compiler written for that became DECUS C, which went public and then went everywhere, including replacing the Bell compiler. * Yeah, I know the claim is gcc is a clean rewrite, but logging into toad.com in the early 90s I found DGC-commented source in the mix. Jon wasn't aware of it apparently...

Re: (Score:2) Critical systems written by vendors other than Microsoft?

Re: (Score:2, Insightful) If you think the failures are due to the language being selected, you're the problem, not C#.

Re: (Score:3)

Re: (Score:3)

Re: (Score:3) a less well documented language like Java Are you serious? WTF are you smoking? Java is very well documented.

Re: (Score:3) provided you follow what's listed on the Mono project as compatible, you get better cross platform compatibility as well.
That proviso you listed is a *major flaw* in Mono. Cross-platform always was important but is becoming more important as time goes on - the computing environment is becoming more heterogeneous after a decade of decreasing diversity. Java has its full set of libraries that work on *every* platform, which means it is a better strategic (long-term) choice than C#/Mono. Like Java, C# will be attractive to nubblecake coders due to the ease of use, Ah! hubris and ego. You do know that with even a little critical analysis you'd come to the conclusion that you ought to choose as simple langua

Re:Dying gasps (Score:5, Funny)

Re: (Score:3)

Re: (Score:3)

Re: (Score:3) As for multiple threads,

Re: (Score:3) Embedded systems would explain the growth, Java though... makes me doubt the validity of TIOBE heavily, object-C doesn't help either, I get that there's a lot of android/iOS programming going on (I believe t

Re: (Score:3) Java though... makes me doubt the validity of TIOBE heavily, object-C doesn't help either, I get that there's a lot of android/iOS programming going on (I believe this is what object-C is used for mostly nowadays, but... more than 90% of businesses combined using .NET... doubtful). Maybe if TIOBE was based on +/- % changes I'd understand it, but as an overall popularity index, businesses have the $, and businesses use .NET unless they're web based... And therein lies the rub in your argument. Many companies actually are web based. Many others are into mobile. And a shit ton of stuff is embedded or low-level enough that .NET isn't even an option. In my own industry (telco/finance), hardly anyone I'm aware of uses C# or VB.Net. It's almost all C for the low-level stuff, Java for the enterprisy stuff, and Java/Obj-C for mobile stuff. Oh, and there's some COBOL for legacy stuff, too. In my brother's (automation/machine tools), it's mostly C for low level stuf

What would you use? (Score:2) What would people switch to? Forth, Pascal?
About 25 years ago, working in an embedded product company, I had a friendly little argument with my software colleagues (me design hardware, UGH!) They insisted that there was nothing around that could compete with the C-compiler-that-later-became-Microsoft's for tight compiled code. So we had a little contest: they wrote a chunk of our kind of code in C, and I did it in Modula-2 (Logitech's compiler.) In both cases we were building reusable code with object methods. Quite enlightening. How the comparison

Re:Dying gasps (Score:4, Interesting) > While I agree that C is a bad language, it has no competition in low-level coding. Mostly agree. Although I prefer turning all the crap in C++ off to get better compiler support. > Although C++ could take its role and it even fixes many of its shortcomings (e.g. namespaces) Uh, you don't remember "Embedded C++" back in the late 90's / early 00's? If you think namespaces are part of the problems, you really don't understand the complexity of C++ at _run_time_... Namely:
* Exception handling
* RTTI
* dynamic memory allocation and the crappy way new/delete handle out of memory
* dynamic casts
* no _standard_ way to specify order of global constructors/destructors
Embedded systems NEED to be deterministic, otherwise you are just gambling with a time-bomb. [wikipedia.org] -- There are 2 problems with C++. Its design and implementation.

Re:Dying gasps (Score:4, Interesting) After you accept the constraints of an embedded environment and low-level access, C is not a bad language anymore. Any language usable in that kind of environment is at least as bad as C.

Re:Dying gasps (Score:5, Interesting) I've been using C for so long that I think I've lost objectivity. C is the first language I learned (other than line-numbered BASIC.) In my mind, C is the language all other languages are judged against. But if there's any truth to this (when did the TIOBE index become the official word?)
it makes me wonder if it's not C itself that is making a comeback, but good old-fashioned procedural-style programming. All these fancy new languages with their polymorphism, encapsulation, templates and functional features have lost their sparkle. Programmers are rediscovering that there isn't anything you can't do (even runtime polymorphism) with just functions, structs, arrays and pointers. It can be easier to understand, and although it may be more typing, it has the virtue that you know exactly what the compiler is going to do with it.

Re:Dying gasps (Score:4, Insightful) Let me shout it loud: Premature classification is the root of all evil!

Re: (Score:3) E.g. you don't really want a single class of Image as a container of Pixel classes for each bit depth, that's rather wrong classification. It may make more sense to stop subdivision at image level and have a number of classes that represent images of different bit depth / number of channels. Or, it may be even better to take

Re: (Score:3) .NET Micro Framework? for systems with 64K of RAM and 256K of storage. That gives you C# and VB. As a bonus you get emulators and debugging built in and it's open source under an Apache license.

Re: (Score:3) Yeah, I'm of the opinion that a person who cannot properly use C (and understand how memory management works) has no business writing mission-critical software in any language. JVM's garbage collector is for sissies. =P

Re: (Score:3) The enduring popularity and success of C is a strong argument that there aren't any really bad languages - that language design just doesn't matter very much. That designing new languages is largely a waste of time. The only real requirement for a language is that it gives you enough control to do whatever you want; from there you can get anywhere, and you can also write progressively higher levels of APIs t

Re:Dying gasps (Score:5, Interesting) Not really.
"While I agree that C is a bad language, it has no competition in low-level coding." Oh, there's competition, just not much because it's not wanted or needed. As a hobby I started building an x86 OS from scratch with only a hex editor. From there I created an assembler in machine code, then used it to create a small text editor and then assemble a disassembler, then disassembled the assembler (to save me from re-entering ASM), then I began work on creating my own system-level language from scratch to build the OS with. The thing is, if you want to make the leanest language just barely above the metal, but still be cross-platform, guess what you get? You get C. Seriously. My syntax was different, but because of the op-codes (e.g. jmp, the movs, push, pop, call, ret, enter, leave) and architecture features (protected mode, the MMU, restricted use of code & stack registers, etc.), when you add any features (like functions) you end up creating something almost exactly like C in all but name. The architecture is responsible for C; it's a product of its environment. For instance, I wanted to use multiple stacks: a separate stack for the call stack and another one for parameters / local vars / etc. -- In fact I wanted to extend that to support co-routines at the lowest levels possible, all while eliminating stack smashing as a direct exploit vector -- Ah, but because of the way Virtual Memory Addressing works, and because there are dedicated operations for doing single-stack-per-thread function calling, there's a huge performance hit to doing things in other ways down at the low level (I figured out a few tricks, but it's still slower than C functions). Now, most folks wouldn't tolerate a system-level language that was any more inefficient than it had to be, and slightly contrary to the way things want to work at the hardware level, just to add features globally that many programs don't need (e.g.
namespaces, call-stack security, co-routines, etc), so they'll follow the restrictions of the system and the language produced will come out to be just like C, maybe with slightly different syntax, but all the same idioms. Maybe function calling would be something other than CDECL (instead, for variadics, I pass the number of parameters in a register, and have the callee clean up the stack, which reduces code size a bit -- and I have other reasons), but even this is possible to do now in C too at the compiler level. Even when you get to adding OOP to the language, you run into C++ and its problem with diamond inheritance and dynamic casting (if you do things the most efficient way possible) -- I allow virtual variables as well as functions to eliminate the diamond inheritance issue with shared bases having variables -- just make them "virtual", like you would a function; it's slower, another layer of indirection, but if I did it the fast way I'd just be re-implementing C++! There's a fine line I'm walking: a little too far from the architecture / ASM and my language might as well run in a VM, a little less and I might as well just use C/C++. So, the space we have to innovate in to squeeze more worth out of a compilable language isn't really that big. Indeed, when I take a look at GoLang disassembled I see all the same familiar C idioms -- they just give you a nicer API for some things like concurrency, and add a few (inefficient) layers of indirection to do things like duck typing. Great for application-level logic, but I'd still write an OS and its drivers in a C-like language instead. There's a reason why C "has no competition in low-level coding": we don't need a different syntax for low-level coding; it's done. As a language designer / implementer, when I hear folks say "C is a bad language" I chuckle under my breath and think they might be noobs. Maybe you meant the design-by-committee approach sucks, but probably not.
The features C has, it needs to have; the syntax it has, it needs to have, e.g., pointer to a pointer to a

Re: (Score:3) Not really. They're different languages that happen to run on the JVM. You should then include JRuby and Jython under Java too. C# and VB.NET both run on the CLR. Should they be combined? Perhaps you should include everything under "x86" or "ARM".

Yes, unfortunately TIOBE is bollocks. (Score:5, Informative) Seriously, for the last fucking time, can we stop posting on Slashdot random shit picked up from TIOBE? The TIOBE index is so completely and utterly full of fail that I can't believe people are STILL clinging onto it as evidence of anything whatsoever. It shouldn't be traditional to do anything with TIOBE, except perhaps laugh at it or set it on fire. So one last time, one final fucking time, I'll try and explain to the 'tards who think it has any merit whatsoever why it absolutely does not. We start here, with the TIOBE index definition, the horse's mouth explanation of how they kludge together this table of bollocks they call an "index": [tiobe.com] First, there is their definition of a programming language. They require two criteria, these are: 1) That the language have an entry on Wikipedia 2) That the language be Turing complete This means that if I go and delete the Wikipedia entry on C, right this moment, it is no longer a programming language, and hence no longer beating anything. Apparently.
The next step is to scroll past the big list of languages to the ratings section, where we see that they state they take the top 9 sites on Alexa that have a search option, and they execute the search: +"<language> programming" Then weight the results as follows: Google: 30% Blogger: 30% Wikipedia: 15% YouTube: 9% Baidu: 6% Yahoo!: 3% Bing: 3% Amazon: 3% The first problem here is with search engines like Google. I run this query against C++ and note the following: "About 21,500,000 results" In other words, Google's figure is hardly anything like a reasonable estimate because a) most of these results are fucking bollocks, and b) the number is at best a ballpark - and this accounts for 30% of the weighting. The next problem is that Blogger, Wikipedia, and YouTube account for 54% of the weighting. These are all sites that have user-generated content, so you could literally, right now, pick one of the lowest languages on the list, go create a bunch of fake accounts talking about it, and turn it into the fastest-growing language of the moment quite trivially. To cite an example, I just ran their query on English Wikipedia for the PILOT programming language and got one result. A few fake or modified Wikipedia entries later and tada, suddenly PILOT has grown massively in popularity. The next point is the following: "Possible false positives for a query are already filtered out in the definition of "hits(PL,SE)". This is done by using a manually determined confidence factor per query." In other words, yes, they apply an utterly arbitrary decision to each language about what does and doesn't count. Or to put it simply, they apply a completely arbitrary factor in which you can have no confidence of any actual worth. I say this because further down they have a list of terms they filter out manually, they have a list of the confidence factors they use, and it takes little more than a second to realise massive gaps and failings in these confidence factors.
For example, they have 100% confidence in the language "Scheme" with the exceptions "tv" and "channel" - I mean, really? The word Scheme wouldn't possibly be used for anything else? Seriously? So can we finally put to bed the idea that TIOBE tells us anything of any value whatsoever? As I've pointed out before, a far better methodology would at least take into account important programming sites like Stack Overflow, but ideally you'd simply refer to job advert listings on job sites across the globe - these will tell you far more about what languages are sought after, what languages are being used, and what languages are growing in popularity than any of this shit. Finally, I do recall last year stumbling across a competitor to TIOBE that was at least slightly better but still not ap

Re: (Score:2) I think someone should mod AC informative. This does sound like a worthless statistic.

Re: (Score:3) Thanks a lot for making this write-up. Now I can just post a link to it every time someone submits yet another TIOBE story.

Re: (Score:3) But they don't seem to follow their own rules. PowerShell [wikipedia.org] has a Wikipedia page, why isn't it listed? More confusingly, if you code in VBScript are you included in the classic VB bucket? What about JScript and JavaScript? If so, fine. But if not, then there are two other languages they're excluding despite their own rules. Since VBScript and JScript aren't listed individually, I assume that JScript queries are all counted as JavaScript. Ok, fine. But wait... ActionScript does not! What's the difference between JS

...Bash? (Score:5, Interesting)

Re:...Bash? (Score:5, Insightful) More like something is wrong with the measuring system being used.

Re:...Bash? (Score:5, Informative) Yep, they use frequency of search on the internet for the language to estimate. Which means confusing and easily broken languages like C, and infrequently used (and thus easily forgotten) languages like bash, get a huge leg-up.
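The rating scheme being criticized here boils down to a weighted sum of per-engine hit counts; a hypothetical sketch with invented hit counts (only the weights come from the quoted description, and note that as quoted they sum to 99%, not 100%):

```python
# TIOBE-style rating: weighted sum of per-engine search-hit shares.
# The weights are the ones quoted above; all hit counts are made up.
WEIGHTS = {"Google": 0.30, "Blogger": 0.30, "Wikipedia": 0.15,
           "YouTube": 0.09, "Baidu": 0.06, "Yahoo!": 0.03,
           "Bing": 0.03, "Amazon": 0.03}

def rating(hits_per_engine, total_hits_per_engine):
    """Share of '<language> programming' hits per engine, weighted and summed."""
    return sum(WEIGHTS[e] * hits_per_engine[e] / total_hits_per_engine[e]
               for e in WEIGHTS)

hits = {e: 50 for e in WEIGHTS}       # our language gets 50 hits on every engine
totals = {e: 1000 for e in WEIGHTS}   # 1000 hits across all languages per engine
print(rating(hits, totals))
```

This also makes the inflation attack described above concrete: bumping only the Blogger, Wikipedia and YouTube counts moves 54% of the weight.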
Re: (Score:2) "Yep, they use frequency of search on the internet for the language to estimate. Which means confusing and easily broken languages like C, and infrequently used (and thus easily forgotten) languages like bash, get a huge leg-up." How come BATCH (.BAT) isn't on there, then?

Re: (Score:2) Yeah, I think it also has a lot to do with how much reference is needed for a particular language. Bash, perl and PHP are all odd ducks. Searching for things about them is more indicative of how messed up their syntax is, rather than a measure of their usage.

a bit of latency (Score:5, Interesting) Java will come back to number 1 in a few years thanks to Android...

Re:a bit of latency (Score:5, Informative) Here they have the most popular iOS game development library ported for programming on Android in C++.

Re: (Score:2)

Re: (Score:2)

Re: (Score:2) And if it doesn't, can we finally put the notion of the inevitable ascendancy of Android to rest?

Re: (Score:3)

Re: (Score:2) Don't feed trolls!

definition (Score:5, Informative) thx, bye.

Re: (Score:3)

Re: (Score:2) So it isn't really about usage then. Bash gets a lot of hits because it is a popular shell, not because so many people want to program in it. C gets some lift because of so many C-like languages and C bindings used by people who are not necessarily programming in C.

Re: (Score:2) So its definition of "popularity" is: "I'm trying to use this language, but I don't know how." This may say more about the number of C programs whose original authors have left the field than the number of new C programs being written.

The other one (Score:5, Informative) It's called PYPL (PopularitY of Programming Languages), and it ranked C# as #1 and C down at #5 based on a different methodology. Honestly, they both sound pretty silly to me.
[google.com]

Using the TIOBE methodology (Score:5, Funny) - Abduction by alien - Going to prison - Dying

Re: (Score:2) "Using the TIOBE methodology, I deduce that the following activities are more popular than C Programming: - Abduction by alien - Going to prison - Dying" Yeah, I program in C a lot, and that sounds about right.

Re:Using the TIOBE methodology (Score:4, Funny) Well hey, dying is an activity practised at least once by the entire population of this planet.

Not surprising (Score:4, Informative)

Xbox Live Indie Games and several others (Score:2) "C is the best overall language, it spans every platform I can think of" I can think of several platforms that C doesn't easily span. Xbox Live Indie Games and Windows Phone 7 only support C#, the Web only supports JavaScript, Flash Player only supports ActionScript and other languages that compile to ActionScript bytecode, and the Java applet environment and MIDP phones only support Java and other languages that compile to JVM bytecode. Or are you counting Emscripten as "C support"?

XBL Arcade vs. Indie Games (Score:3) Ok, think about the statement. Then tell me how many Xbox GAMES are written in C#... Xbox Live Marketplace has two separate environments. The Arcade environment allows C but is open only to established studios. The XNA environment, called "Indie Games" in the menu, is open to any startup with $99 per year but requires that all applications be compiled to verifiably type-safe, Emit-free, P/Invoke-free CIL, which in effect requires

Re: (Score:2, Informative) Jets suck -- About 4,770,000 results Yankees suck -- About 1,430,000 results Knicks suck -- About 1,370,000 results Krypton sucks -- About 166,000 results

Re:TIOBE algorithms (Score:5, Funny) I think you mean: - php sucks: 500 Internal server error?

Parallelism (Score:2) C and C++ are still the best languages for parallelism, in particular vectorization and shared-memory systems.
C used in your favorite programming language (Score:2)

Re:C used in your favorite programming language (Score:4, Informative) It should be noted that for most programming languages, it is highly likely that the compiler and other supporting code are written in C. And the C compilers are written in C++. LLVM has been C++ from the beginning, GCC is transitioning to C++ and is now being built with G++, not GCC, and allows C++ constructs in the code. I think many JVMs are written in C++, too.

As predicted (Score:3)

Re: (Score:3) C could be a better language if it got rid of some idiosyncrasies, like its weird declarator syntax (it's the only language I know of that had tools like cdecl [die.net] written for it), or stillborn features like separate namespaces for structs/enums/unions (that everyone works around by using typedefs), or certain unsafeties in the language itself - e.g. implicit downcast from void* to any pointer type and from double to int, or mixed signed/unsigned arithmetic being perfectly a-ok but not doing what you expect hal

Number One? (Score:4, Funny) Yes, but which one is number zero?

Not a surprise (Score:3) C programmers have an understanding of the machine they use that Java people will never be able to reach. The only advantage Java programmers have is that they are cheap. Or better, they look cheap to management. In fact they are hugely expensive because most will write code that sucks badly.

Re:C? (Score:5, Insightful)

Re:C? (Score:5, Insightful)

Re:C? (Score:5, Interesting) And why not C++? It has a number of advantages over C...

Re: (Score:2, Interesting) Name three

Re: (Score:2, Insightful) Those aren't clear advantages. Rather, they're a shortcut toward writing shitty code.

Re:C? (Score:4, Insightful) The name of the fallacy you just demonstrated is "No True Scotsman".

Re: (Score:3) Name three true Scotsmen!

Re:C? (Score:5, Funny) Boo-ya.
Re: (Score:3)

Re: (Score:2) Encapsulation - the ability to hide functions inside classes - is a far bigger feature of C++ than any of the above. Additionally, C++ has the advantage that you can write one piece of code that can compile either in C++ or in C, which can be a huge advantage when doing communications code with embedded systems.

Re:C? (Score:5, Informative) "Encapsulation - the ability to hide functions inside classes is a far bigger feature of C++ than any of the above." And how do you hide functions? You put them behind a "private" or "protected" access specifier, but you still have to show them in the class definition in the header file. That's not hiding. That's saying "look at all my nifty functions, none of which you can use, neener neener neener". In C you prefix those functions with a "static" keyword, and they aren't visible anywhere outside the original source file. Once you compile them into a .o file it's as if they never existed. That is hiding.

Re: (Score:3) Labeling a C function "static" makes it invisible outside of the source file, and uncallable from anywhere else in the program. That's how we C programmers practiced encapsulation way back in the '70s.

Re: (Score:2) C++ causes more problems than it solves. If you're using C++, you should probably be using C. If C isn't a good fit for your project, C++ isn't likely to be the answer.

Re: (Score:3) "C++ causes more problems than it solves." Only if you are a poor programmer. C++ gives you much more power than C. That power gives much more opportunity for use or abuse. If you abuse it, that's your fault for being a bad programmer. And now I'm going to go back to writing very efficient programs using elegant, high-level and efficient libraries in C++ of the sort that simply do not exist in C.

Re:C? (Score:5, Insightful)

Re: (Score:2) Uh, pretty much everything?

Re:C?
(Score:5, Interesting) So most of the public will go through their day using C- or C++-based code probably 99% of the time. A small part will be, say, timesheet software running Java, which they access through their C-based browser, using C-based network drivers, viewed through a video card with C-based drivers, on a C-based OS, with their packets going through C-based routers and switches, after using a C-based security system to get into the building, where they used a C-based elevator system to get up to work. Of course many of the above systems use a smattering of other bits such as scripting libraries, but those are being run by a C library. The only other language the average person might encounter would be some Objective-C on their iPhone or some Java on their Android; but again, those OSes are basically C. When they get home and browse the web they then get the full onslaught of servers running a dog's breakfast of PHP, Java, RoR, etc. But those servers are all programmed in... you guessed it, C.
Re: (Score:3) the virtual machine that runs your Java code?
Re: (Score:2) Javascript is a special case because it is used almost exclusively for writing front-end code for websites. Not because front-end web developers love Javascript, but because browsers do not support any other language. So in one specific field (front-end web applications), Javascript is king because developers have no other choice. But next to nobody uses the language for anything other than that. It could be used for other stuff (node.js etc.), but that's not significant by a long shot.
https://developers.slashdot.org/story/13/01/07/181219/c-beats-java-as-number-one-language-according-to-tiobe-index?sbsrc=developers
Fair Model Checking with Process Counter Abstraction
Jun Sun, Yang Liu, Abhik Roychoudhury, Shanshan Liu and Jin Song Dong
School of Computing, National University of Singapore
Abstract. Parameterized systems are characterized by the presence of a large (or even unbounded) number of behaviorally similar processes, and they often appear in distributed/concurrent systems. A common state space abstraction for checking parameterized systems involves not keeping track of process identifiers by grouping behaviorally similar processes. Such an abstraction, while useful, conflicts with the notion of fairness. Because process identifiers are lost in the abstraction, it is difficult to ensure fairness (in terms of progress in executions) among the processes. In this work, we study the problem of fair model checking with process counter abstraction. Even without maintaining the process identifiers, our on-the-fly checking algorithm enforces fairness by keeping track of the local states from which actions are enabled / executed within an execution trace. We enhance our home-grown PAT model checker with the technique and show its usability via the automated verification of several real-life protocols.
1 Introduction
Parameterized concurrent systems consist of a large (or even unbounded) number of behaviorally similar processes of the same type. Such systems frequently arise in distributed algorithms and protocols (e.g., cache coherence protocols, control software in automotive / avionics) where the number of behaviorally similar processes is unbounded during system design, but is fixed later during system deployment. Thus, the deployed system contains a fixed, finite number of behaviorally similar processes.
However, during system modeling/verification it is convenient not to fix the number of processes in the system, for the sake of achieving more general verification results. A parameterized system represents an infinite family of instances, each instance being finite-state. Property verification of a parameterized system involves verifying that every finite-state instance of the system satisfies the property in question. In general, verification of parameterized systems is undecidable [2]. A common practice for analyzing parameterized systems is to fix the number of processes to a constant. To avoid state space explosion, the constant is often small compared to the size of the real applications. Model checking is then performed in the hope of finding a bug which is exhibited by a fixed (and small) number of processes. This practice can be incorrect because the real size of the system is often unknown during system design (but fixed later during system deployment). It is also difficult to fix the number of processes to a large enough constant such that the restricted system with a fixed number of processes is observationally equivalent to the parameterized system with unboundedly many processes. Computing such a large enough constant is undecidable after all, since the parameterized verification problem is undecidable. (This research has been supported in part by National University of Singapore FRC grants R and R.) Since parameterized systems contain process types with a large number of behaviorally similar processes (whose behavior follows a local finite state machine, or FSM), a natural state space abstraction is to group the processes based on which state of the local FSM they reside in [23, 7, 24]. Thus, instead of saying "process 1 is in state s, process 2 is in state t and process 3 is in state s", we simply say "2 processes are in state s and 1 is in state t".
Such an abstraction reduces the state space by exploiting a powerful state space symmetry (concrete global states with different process identifiers but the same count of processes in the individual local states get grouped into the same abstract global state), as often evidenced in real-life concurrent systems such as caches, memories, mutual exclusion protocols and network protocols. Verification by traversing the abstract state space here produces a sound and complete verification procedure. However, if the total number of processes is unbounded, the aforementioned counter abstraction still does not produce a finite-state abstract system. The count of processes in a local state can still be ω (an unbounded number), if the total number of processes is ω. To achieve a finite-state abstract system, we can adopt a cutoff number, so that any count greater than the cutoff number is abstracted to ω. This yields a finite-state abstract system, model checking of which gives a sound but incomplete verification procedure: any linear-time temporal logic (LTL) property verified in the abstract system holds for all concrete finite-state instances of the system, but not vice versa. Contributions. In this paper, we study the problem of fair model checking with process counter abstraction. Imagine a bus protocol where a large / unbounded number of processors are contending for bus access. If we do not assume any fairness in the bus arbitration policy, we cannot prove the non-starvation property, that is, that bus accesses by processors are eventually granted. In general, fairness constraints are often needed for the verification of such liveness properties; ignoring fairness constraints results in unrealistic counterexamples being reported (e.g., where a processor requesting bus access is persistently ignored by the bus arbiter). These counterexamples are of no interest to the protocol designer.
To systematically rule out such unrealistic counterexamples (which never happen in a real implementation), it is important to verify the abstract system produced by our process counter abstraction under fairness. We do so in this paper. However, this constitutes a significant technical challenge: since we do not even keep track of the process identifiers, how can we ensure fair scheduling among the individual processes? In this work, we develop a novel technique for model checking parameterized systems under (weak or strong) fairness, against linear temporal logic (LTL) formulae. We show that model checking under fairness is feasible, even without the knowledge of process identifiers. This is done by systematically keeping track of the local states from which actions are enabled / executed within any infinite loop of the abstract state space. We develop the necessary theorems to prove the soundness of our technique, and also present efficient on-the-fly model checking algorithms. Our method is realized within our home-grown PAT model checker [26]. The usability / scalability of PAT is demonstrated via (i) automated verification of several real-life parameterized systems and (ii) a quantitative comparison with the SPIN model checker [17].
2 Preliminaries
We begin by formally defining our system model. Definition 1 (System Model). A system model is a structure S = (Var_G, init_G, Proc) where Var_G is a finite set of global variables, init_G is their initial valuation and Proc is a parallel composition of multiple processes Proc = P_1 ∥ P_2 ∥ ⋯ such that each process P_i = (S_i, init_i, →_i) is a transition system. We assume that all global variables have finite domains and each P_i has finitely many local states. A local state represents a program text together with its local context (e.g. valuation of the local variables). Two local states are equivalent if and only if they represent the same program text and the same local context.
Let State be the set of all local states. We assume that State has finitely many elements. This disallows unbounded non-tail recursion, which would result in infinitely many different local states. Proc may be composed of infinitely many processes. Each process has a unique identifier. In an abuse of notation, we use P_i to represent the identifier of process P_i when the context is clear. Notice that two local states from different processes are equivalent only if the process identifiers are irrelevant to the program texts they represent. Processes may communicate through global variables, (multi-party) barrier synchronization or synchronous/asynchronous message passing. It can be shown that parallel composition is symmetric and associative. Example 1. Fig. 1 shows a model of the readers/writers problem, which is a simple protocol for the coordination of readers and writers accessing a shared resource. The protocol, which we refer to as RW, is designed for an arbitrary number of readers and writers. Several readers can read concurrently, whereas writers require exclusive access. Global variable counter records the number of readers which are currently accessing the resource; writing is true if and only if a writer is updating the resource. A transition is of the form [guard]name{assignments}, where guard is a guard condition which must be true for the transition to be taken and assignments is a simple sequential program which updates global variables. The following are the properties to be verified.
□¬(counter > 0 ∧ writing) (Prop 1)
□◇(counter > 0) (Prop 2)
Property Prop 1 is a safety property which states that writing and reading cannot occur simultaneously. Property Prop 2 is a liveness property which states that always eventually the resource can be accessed by some reader. In order to define the operational semantics of a system model, we define the notion of a configuration to capture the global system state during the execution, referred to as a concrete configuration.
This terminology distinguishes the notion from the state space abstraction and the abstract configurations which will be introduced later.
Fig. 1. Readers/writers model. Global variables: int counter = 0; bool writing = false. Reader process: R0 --[!writing] startread{counter++}--> R1; R1 --stopread{counter--}--> R0. Writer process: W0 --[counter==0 && !writing] startwrite{writing:=true}--> W1; W1 --stopwrite{writing:=false}--> W0.
Definition 2 (Concrete Configuration). Let S be a system model. A concrete configuration of S is a pair (v, ⟨s_1, s_2, …⟩) where v is the valuation of the global variables (channel buffers may be viewed as global variables), and s_i ∈ S_i is the local state in which process P_i is residing. A system transition is of the form (v, ⟨s_1, s_2, …⟩) →_Ag (v′, ⟨s′_1, s′_2, …⟩) where the system configuration after the transition is (v′, ⟨s′_1, s′_2, …⟩) and Ag is the set of participating processes. For simplicity, the set Ag (short for agent) is often omitted if irrelevant. A system transition could be one of the following forms: (i) a local transition of P_i which updates its local state (from s_i to s′_i) and possibly updates global variables (from v to v′). An example is the transition from R0 to R1 of a reader. In such a case, P_i is the participating process, i.e., Ag = {P_i}. (ii) a multi-party synchronous transition among processes P_i, …, P_j. Examples are message sending/receiving through channels with buffer size 0 (e.g., as in Promela [17]) and alphabetized barrier synchronization in the classic CSP. In such a case, the local states of the participating processes are updated simultaneously. The participating processes are P_i, …, P_j. (iii) process creation of P_m by P_i. In such a case, an additional local state is appended to the sequence ⟨s_1, s_2, …⟩, and the state of P_i is changed at the same time. Assume for now that the sequence ⟨s_1, s_2, …⟩ is always finite before process creation. It becomes clear in Section 5 that this assumption is not necessary.
In such a case, the participating processes are P_i and P_m. (iv) process deletion of P_i. In such a case, the local state of P_i is removed from the sequence ⟨s_1, s_2, …⟩. The participating process is P_i. Definition 3 (Concrete Transition System). Let S = (Var_G, init_G, Proc) be a system model, where Proc = P_1 ∥ P_2 ∥ ⋯ such that each process P_i = (S_i, init_i, →_i) is a local transition system. The concrete transition system corresponding to S is a 3-tuple T_S = (C, init, →) where C is the set of all reachable system configurations, init is the initial concrete configuration (init_G, ⟨init_1, init_2, …⟩) and → is the global transition relation obtained by composing the local transition relations →_i in parallel. An execution of S is an infinite sequence of configurations E = ⟨c_0, c_1, …, c_i, …⟩ where c_0 = init and c_i → c_{i+1} for all i ≥ 0. Given a model S and a system configuration c, let enabled_S(c) (or enabled(c) when the context is clear) be the set of processes which are ready to make some progress, i.e., enabled(c) = {P_i | ∃c′. c →_Ag c′ ∧ P_i ∈ Ag}. The following defines two common notions of fairness in system executions, i.e., weak fairness and strong fairness. Definition 4 (Weak Fairness). Let S be a system model. An execution ⟨c_1, c_2, …⟩ of T_S is weakly fair, if and only if, for every P_i: if there exists n such that P_i ∈ enabled(c_m) for all m > n, then there are infinitely many k such that c_k →_Ag c_{k+1} and P_i ∈ Ag. Weak fairness states that if a process becomes enabled forever after some steps, then it must be engaged infinitely often. From another point of view, weak fairness guarantees that each process is only finitely faster than the others. Definition 5 (Strong Fairness). Let S be a system model. An execution ⟨c_1, c_2, …⟩ of T_S is strongly fair, if and only if, for every P_i: if there are infinitely many n such that P_i ∈ enabled(c_n), then there are infinitely many k such that c_k →_Ag c_{k+1} and P_i ∈ Ag.
Strong fairness states that if a process is infinitely often enabled, it must be infinitely often engaged. This type of fairness is particularly useful in the analysis of systems that use semaphores, synchronous communication, and other special coordination primitives. Clearly, strong fairness guarantees weak fairness. In this work, we assume that system properties are expressed as LTL formulae constituted by propositions on global variables. One way to state a property of a single process is to migrate part of its local context to global variables. Let φ be a property. S satisfies φ, written as S ⊨ φ, if and only if every execution of T_S satisfies φ. S satisfies φ under weak fairness, written as S ⊨_wf φ, if and only if every weakly fair execution of T_S satisfies φ. S satisfies φ under strong fairness, written as S ⊨_sf φ, if and only if every strongly fair execution of T_S satisfies φ. Given the RW model presented in Fig. 1, it can be shown that RW ⊨ Prop 1. It is, however, not easy to prove it using standard model checking techniques. The challenge is that many, or unboundedly many, readers and writers cause state space explosion. Also, RW fails Prop 2 without a fairness constraint. For instance, a counterexample is the execution repeating ⟨startwrite, stopwrite⟩ forever, i.e., a writer keeps updating the resource without any reader ever accessing it. This is unreasonable if the system scheduler is well-designed or the processors that the readers/writers execute on have comparable speed. To avoid such counterexamples, we need to perform model checking under fairness.
3 Process Counter Representation
Parameterized systems contain behaviorally similar or even identical processes. Given a configuration (v, ⟨…, s_i, …, s_j, …⟩), multiple local states may be equivalent. A natural abstraction is to record only how many copies of a local state there are. Let S be a system model.
An alternative representation of a concrete configuration is a pair (v, f) where v is the valuation of the global variables and f is a total function from a local state to the set of processes residing at that state. For instance, given that R0 is a local state in Fig. 1, f(R0) = {P_i, P_j, P_k} if and only if reader processes P_i, P_j and P_k are residing at state R0. This representation is sound and complete because processes at equivalent local states are behaviorally equivalent and composition is symmetric and associative (so that process ordering is irrelevant). (The processes residing at the local states may or may not have the same process type.) Furthermore, given a local state s and the processes residing at s, we may consider the processes indistinguishable (as the process identifiers must be irrelevant given that the local states are equivalent) and abstract away the process identifiers. That is, instead of associating a set of process identifiers with a local state, we only keep track of the number of processes. Instead of setting f(R0) = {P_i, P_j, P_k}, we now set f(R0) = 3. In this and the next section, we assume that the total number of processes is bounded. Definition 6 (Abstract Configuration). Let S be a system model. An abstract configuration of S is a pair (v, f) where v is a valuation of the global variables and f : State → N is a total function such that f(s) = n if and only if n processes are residing at s. Given a concrete configuration cc = (v, ⟨s_0, s_1, …⟩), let F(⟨s_0, s_1, …⟩) return the function f (refer to Definition 6), that is, f(s) = n if and only if there are n states in ⟨s_0, s_1, …⟩ which are equivalent to s. Further, we write F(cc) to denote (v, F(⟨s_0, s_1, …⟩)). Given a concrete transition c →_Ag c′, the corresponding abstract transition is written as a →_Ls a′ where a = F(c), a′ = F(c′) and Ls (short for local-states) is the set of local states at which the processes in Ag reside. That is, Ls is the set of local states from which there is a process leaving during the transition. We remark that Ls is obtained similarly as Ag is. Given a local state s and an abstract configuration a, we define enabled(s, a) to be true if and only if ∃a′. a →_Ls a′ ∧ s ∈ Ls, i.e., a process is enabled to leave s in a. For instance, given the transition system in Fig. 2, Ls = {R0} for the transition from A0 to A1, and enabled(R0, A1) is true.
That is, Ls is the set of local states from which there is a process leaving during the transition. We remark that Ls is obtained similarly as Ag is. Given a local state s and an abstract configuration a, we define enabled(s, a) to be true if and only if a, a Ls a s Ls, i.e., a process is enabled to leave s in a. For instance, given the transition system in Fig. 2, Ls = {R0} for the transition from A0 to A1 and enabled(r0, A1) is true. Definition 7 (Abstract Transition System). Let S = (Var G, init G, Proc) be a system model, where Proc = P 1 P 2 such that each process P i = (S i, init i, i ) is a local transition system. An abstract transition system of S is a 3-tuple A S = (C, init, ) where C is the set of all reachable abstract system configurations, init C is (init G, F(init G, init 1, init 2, )) and is the abstract global transition relation. We remark that the abstract transition relation can be constructed without constructing the concrete transition relation, which is essential to avoid state space explosion. Given the model presented in Fig. 1, if there are 2 readers and 2 writers, then the abstract transition system is shown in Fig. 2. A concrete execution of T S can be uniquely mapped to an execution of A S by applying F to every configuration in the sequence. For instance, let X = c 0, c 1,, c i, be an execution of T S (i.e., a concrete execution), the corresponding execution of A S is L = F(c 0 ), F(c 1 ),, F(c i ), (i.e., the abstract execution). In an abuse of notation, we write F(X ) to denote L. Notice that the valuation of the global variables are preserved. Essentially, no information is lost during the abstraction. It can be shown that A S φ if and only if T S φ. 2 In PAT, the mapping from a local state to 0 is always omitted for memory saving. 
Fig. 2. Abstract transition system of the readers/writers model (2 readers, 2 writers). States: A0: ((writing,false),(counter,0),(R0,2),(R1,0),(W0,2),(W1,0)); A1: ((writing,false),(counter,1),(R0,1),(R1,1),(W0,2),(W1,0)); A2: ((writing,false),(counter,2),(R0,0),(R1,2),(W0,2),(W1,0)); A3: ((writing,true),(counter,0),(R0,2),(R1,0),(W0,1),(W1,1)). Transitions: A0 --startread--> A1 --startread--> A2; A2 --stopread--> A1 --stopread--> A0; A0 --startwrite--> A3 --stopwrite--> A0.
4 Fair Model Checking Method
Process counter abstraction may significantly reduce the number of states. It is useful for the verification of safety properties. However, it conflicts with the notion of fairness. A counterexample to a liveness property under fairness must be a fair execution of the system. By Definitions 4 and 5, the knowledge of which processes are enabled or engaged is necessary in order to check whether an execution is fair or not. In this section, we develop the necessary theorems and algorithms to show that model checking under fairness constraints is feasible even without the knowledge of process identifiers. Because by assumption the total number of processes is finite, the abstract transition system A_S has finitely many states. An infinite execution of A_S must form a loop (with a finite prefix to the loop). Assume that the loop starts at index i and ends at index i + k, written as L_i^k = ⟨c_0, …, c_i, c_{i+1}, …, c_{i+k}, c_{i+k+1}⟩ where c_{i+k+1} = c_i. We define the following functions to collect loop properties and use them to define fairness later.
always(L_i^k) = {s ∈ State | ∀j ∈ {i, …, i+k}. enabled(s, c_j)}
once(L_i^k) = {s ∈ State | ∃j ∈ {i, …, i+k}. enabled(s, c_j)}
leave(L_i^k) = {s ∈ State | ∃j ∈ {i, …, i+k}. c_j →_Ls c_{j+1} ∧ s ∈ Ls}
Intuitively, always(L_i^k) is the set of local states from which there are processes ready to make some progress throughout the execution of the loop; once(L_i^k) is the set of local states at which there is a process ready to make some progress at least once during the execution of the loop; leave(L_i^k) is the set of local states from which processes leave during the loop. For instance, given the abstract transition system in Fig. 2, X = ⟨A0, A1, A2⟩ is a loop starting at index 0 and ending at index 2. always(X) = ∅; once(X) = {R0, R1, W0}; leave(X) = {R0, R1}. The following lemma allows us to check whether an execution is fair by only looking at the abstract execution. Lemma 1. Let S be a system model; X be an execution of T_S; L_i^k = F(X) be the respective abstract execution of A_S. (1) always(L_i^k) ⊆ leave(L_i^k) if X is weakly fair; (2) once(L_i^k) ⊆ leave(L_i^k) if X is strongly fair. Proof: (1) Assume X is weakly fair. By definition, if a state s is in always(L_i^k), there must be a process residing at s which is enabled to leave during every step of the loop. If it is the same process P, then P is always enabled during the loop and therefore, by Definition 4, P must participate in a transition infinitely often because X is weakly fair. Therefore, P must leave s during the loop. By definition, s must be in leave(L_i^k). If there are different processes enabled at s during the loop, there must be a process leaving s, so that s ∈ leave(L_i^k). Thus, always(L_i^k) ⊆ leave(L_i^k). (2) Assume X is strongly fair. By definition, if a state s is in once(L_i^k), there must be a process residing at s which is enabled to leave during one step of the loop. Let P be the process.
Because P is infinitely often enabled, by Definition 5, P must participate in a transition infinitely often because X is strongly fair. Therefore, P must leave s during the loop. By definition, s must be in leave(L_i^k). The following lemma allows us to generate a concrete fair execution if an abstract fair execution is identified. Lemma 2. Let S be a model; L_i^k be an execution of A_S. (1) If always(L_i^k) ⊆ leave(L_i^k), there exists a weakly fair execution X of T_S such that F(X) = L_i^k; (2) if once(L_i^k) ⊆ leave(L_i^k), there exists a strongly fair execution X of T_S such that F(X) = L_i^k. Proof: (1) By a simple argument, there must exist an execution X of T_S such that F(X) = L_i^k. Next, we show that we can unfold the loop (of the abstract fair execution) as many times as necessary to let all processes make some progress, so as to generate a weakly fair concrete execution. Assume P is the set of processes residing at a state s during the loop. Because always(L_i^k) ⊆ leave(L_i^k), if s ∈ always(L_i^k), there must be a transition during which a process leaves s. We repeat the loop multiple times and choose a different process from P to leave each time. The generated execution must be weakly fair. (2) Similarly as above. The following theorem shows that we can perform model checking under fairness by examining the abstract transition system only. Theorem 1. Let S be a system model. Let φ be an LTL property. (1) S ⊨_wf φ if and only if for all executions L_i^k of A_S we have always(L_i^k) ⊆ leave(L_i^k) ⟹ L_i^k ⊨ φ; (2) S ⊨_sf φ if and only if for all executions L_i^k of A_S we have once(L_i^k) ⊆ leave(L_i^k) ⟹ L_i^k ⊨ φ. Proof: (1) if part: Assume that for all L_i^k of A_S we have L_i^k ⊨ φ if always(L_i^k) ⊆ leave(L_i^k), and S ⊭_wf φ. By definition, there exists a weakly fair execution X of T_S such that X ⊭ φ. Let L_i^k be F(X). By Lemma 1, always(L_i^k) ⊆ leave(L_i^k) and hence L_i^k ⊨ φ. Because our abstraction preserves the valuation of global variables, L_i^k ⊭ φ as X ⊭ φ.
We reach a contradiction. only-if part: Assume that S ⊨_wf φ and there exists L_i^k of A_S such that always(L_i^k) ⊆ leave(L_i^k) and L_i^k ⊭ φ. By Lemma 2, there must exist X of T_S such that X is weakly fair. Because process counter abstraction preserves valuations of global variables, X ⊭ φ. Hence, we reach a contradiction. (2) Similarly as above. Thus, in order to prove that S satisfies φ under fairness, we need to show that there is no execution L_i^k of A_S such that L_i^k ⊭ φ and the execution satisfies an additional constraint for fairness, i.e., always(L_i^k) ⊆ leave(L_i^k) for weak fairness or once(L_i^k) ⊆ leave(L_i^k) for strong fairness. Or, if S ⊭_wf φ, then there must be an execution L_i^k of A_S such that L_i^k satisfies the fairness condition and L_i^k ⊭ φ. In such a case, we can generate a concrete execution. Following the above discussion, fair model checking of parameterized systems is reduced to searching for particular loops in A_S. There are two groups of methods for loop searching. One is based on nested depth-first search (DFS) [17] and the other is based
Given a transition system, a strongly connected subgraph is a subgraph such that there is a path connecting any two states in the subgraph. An MSCC is a maximal strongly connected subgraph. Given the product of A S and B φ, let scg be a set of states which, together with the transitions among them, forms a strongly connected subgraph. We say scg is accepting if and only if there exists one state (a, b) in scg such that b is an accepting state of B φ. In an abuse of notation, we refer to scg as the strongly connected subgraph in the following. The following lifts the previously defined functions on loops to strongly connected subgraphs. always(scg) = {y : State x : scg, enabled(y, x)} once(scg) = {y : State x : scg, enabled(y, x)} leave(scg) = {z : State x, y : scg, z leave(x, y)} always(scg) is the set of local states such that for any local state in always(scg), there is a process ready to leave the local state for every state in scg; once(scg) is the set of local states such that for some local state in once(scg), there is a process ready to leave the local state for some state in scg; and leave(scg) is the set of local states such that there is a transition in scg during which there is a process leaving the local state. Given the abstract transition system in Fig. 2, scg = {A0, A1, A2, A3} constitutes a strongly connected subgraph. always(scg) = nil; once(scg) = {R0, R1, W 0, W 1}; leave(scg) = {R0, R1, W 0, W 1}. Lemma 3. Let S be a system model. There exists an execution L k i of A S such that always(l k i ) leave(lk i ) if and only if there exists an MSCC scc of A S such that always(scc) leave(scc). Proof: The if part is trivially true. The only if part is proved as follows. Assume there exists execution L k i of A S such that always(l k i ) leave(lk i ), there must exist a strongly connected subgraph scg which satisfies always(scg) leave(scg). Let scc be the MSCC which contains scg. 
We have always(scc) ⊆ always(scg); therefore the MSCC scc satisfies always(scc) ⊆ always(scg) ⊆ leave(scg) ⊆ leave(scc). The above lemma allows us to use MSCC detection algorithms for model checking under weak fairness. Fig. 3 presents an on-the-fly model checking algorithm based on Tarjan's algorithm for identifying MSCCs. The idea is to search for an MSCC scg such that always(scg) ⊆ leave(scg) and scg is accepting. The algorithm terminates in two ways: either one such MSCC is found, or all MSCCs have been examined (and it returns true). In the former case, an abstract counterexample is generated. In the latter case, we successfully prove the property.
procedure checkingunderweakfairness(A_S, B_φ)
1. while there are un-visited states in A_S × B_φ
2.   use the improved Tarjan's algorithm to identify one SCC, say scg;
3.   if scg is accepting to B_φ and always(scg) ⊆ leave(scg)
4.     generate a counterexample and return false;
5.   endif
6. endwhile
7. return true;
Fig. 3. Model checking algorithm under weak fairness
Given the system presented in Fig. 2, {A0, A1, A2, A3} constitutes the only MSCC, which satisfies always(scg) ⊆ leave(scg). The complexity of the algorithm is linear in the number of transitions of A_S. Lemma 4. Let S be a system model. There exists an execution L_i^k of A_S such that once(L_i^k) ⊆ leave(L_i^k) if and only if there exists a strongly connected subgraph scg of A_S such that once(scg) ⊆ leave(scg). We skip the proof of the lemma as it is straightforward. The lemma allows us to extend the algorithm proposed in [27] for model checking under strong fairness. Fig. 4 presents the modified algorithm. The idea is to search for a strongly connected subgraph scg such that once(scg) ⊆ leave(scg) and scg is accepting. Notice that a strongly connected subgraph must be contained in one and only one MSCC. The algorithm searches for MSCCs using Tarjan's algorithm.
Once an MSCC scg is found (at line 2), if scg is accepting and satisfies once(scg) ⊆ leave(scg), then we generate an abstract counterexample. If scg is accepting but fails once(scg) ⊆ leave(scg), instead of throwing away the MSCC, we prune a set of bad states from the SCC and then examine the remaining states (at line 6) for strongly connected subgraphs. Intuitively, bad states are the reasons why the SCC fails the condition once(scg) ⊆ leave(scg). Formally,
bad(scg) = {x ∈ scg | ∃y. y ∉ leave(scg) ∧ enabled(y, x)}
That is, a state x is bad if and only if there exists a local state y such that a process may leave y at state x and yet there is no process leaving y in any transition of scg. By pruning all bad states, there might be a strongly connected subgraph in the remaining states which satisfies the fairness constraint. The algorithm is partly inspired by the one presented in [16] for checking emptiness of Streett automata. Soundness of the algorithm follows the discussion in [27, 16]. It can be shown that any state of a strongly connected subgraph which satisfies the constraints is never pruned. As a result, if there exists such a strongly connected subgraph scg, a strongly connected subgraph which contains scg, or scg itself, must be found eventually. Termination of the algorithm is guaranteed because the numbers of visited states and pruned states are monotonically increasing. The complexity of the algorithm is linear in #states × #trans, where #states and #trans are the number of states and transitions of A_S respectively. A tighter bound on the complexity can be found in [16].
procedure checkingunderstrongfairness(A_S, B_φ, states)
1. while there are un-visited states in states
2.   use Tarjan's algorithm to identify a subset of states which forms an SCC, say scg;
3.   if scg is accepting to B_φ
4.     if once(scg) ⊆ leave(scg)
5.       generate a counterexample and return false;
6.     else if checkingunderstrongfairness(A_S, B_φ, scg \ bad(scg)) is false
7.       return false;
8.
endif
9.     endif
10. endwhile
11. return true;

Fig. 4. Model checking algorithm under strong fairness

5 Counter Abstraction for Infinitely Many Processes

In the previous sections, we assumed that the number of processes (and hence the size of the abstract transition system) is finite and bounded. If the number of processes is unbounded, there might be an unbounded number of processes residing at a local state; e.g., the number of reader processes residing at R0 in Fig. 1 might be infinite. In such a case, we choose a cutoff number and then apply further abstraction. In the following, we modify the definition of abstract configurations and abstract transition systems to handle an unbounded number of processes.

Definition 8. Let S be a system model with unboundedly many processes. Let K be a positive natural number (i.e., the cutoff number). An abstract configuration of S is a pair (v, g) where v is the valuation of the global variables and g : State → N ∪ {ω} is a total function such that g(s) = n if and only if n (≤ K) processes are residing at s, and g(s) = ω if and only if more than K processes are at s.

Given a configuration (v, ⟨s_0, s_1, …⟩), we define a function G similar to function F, i.e., G(⟨s_0, s_1, …⟩) returns the function g (refer to Definition 8) such that given any state s, g(s) = n if and only if there are n states in ⟨s_0, s_1, …⟩ which are equivalent to s, and g(s) = ω if and only if there are more than K states in ⟨s_0, s_1, …⟩ which are equivalent to s. Furthermore, G(c) = (v, G(⟨s_0, s_1, …⟩)). The abstract transition relation of S (as per the above abstraction) can be constructed without constructing the concrete transition relation. We illustrate how to generate an abstract transition in the following. Given an abstract configuration (v, g) with g(s) > 0, a local transition from state s to state s′ which creates a process with initial state init may result in different abstract configurations (v, g′) depending on g.
In particular, g′ equals g except that g′(s) = g(s) − 1, g′(s′) = g(s′) + 1 and g′(init) = g(init) + 1, assuming ω + 1 = ω, K + 1 = ω, and ω − 1 is either ω or K. We remark that by assumption State is a finite set and therefore the domain of g is always finite. This allows us to drop the assumption that the number of processes must be finite before process creation. Similarly, we abstract synchronous transitions and process termination.

A0: ((writing,false),(counter,0),(r0,ω),(r1,0),(w0,ω),(w1,0))
A1: ((writing,false),(counter,1),(r0,ω),(r1,1),(w0,ω),(w1,0))
A2: ((writing,false),(counter,ω),(r0,ω),(r1,ω),(w0,ω),(w1,0))
A3: ((writing,true),(counter,0),(r0,ω),(r1,0),(w0,ω),(w1,1))

Fig. 5. Abstract readers/writers model (states A0–A3 connected by startread, stopread, startwrite and stopwrite transitions)

The abstract transition system for a system model S with unboundedly many processes, written as R_S (to distinguish it from A_S), is now obtained by applying the aforementioned abstract transition relation from the initial abstract configuration.

Example 2. Assume that the cutoff number is 1 and there are infinitely many readers and writers in the readers/writers model. Because counter is potentially unbounded, we mark counter as a special process counter variable which dynamically counts the number of processes which are reading (at state R1). If the number of reading processes is larger than the cutoff number, counter is set to ω too. The abstract transition system A_RW is shown in Fig. 5. The abstract transition system may contain spurious traces. For instance, the trace ⟨startread, (stopread)^ω⟩ is spurious. It is straightforward to prove that A_RW ⊨ Prop 1 based on the abstract transition system. The abstract transition system now has only finitely many states even if there is an unbounded number of processes and, therefore, is subject to model checking.
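Under stated assumptions about the encoding (counters stored in a dict from local state to count, and ω represented by a sentinel value), the cutoff arithmetic for a single local move can be sketched as follows. Note that since ω − 1 is either ω or K, a single abstract step may yield several successor configurations.

```python
# Illustrative sketch (assumed representation, not PAT's internals):
# abstract counters map each local state to a count in {0, ..., K} or OMEGA,
# with OMEGA + 1 = OMEGA, K + 1 = OMEGA, and OMEGA - 1 in {OMEGA, K}.

OMEGA = "w"   # sentinel meaning "more than K processes"

def inc(n, K):
    """Abstract increment: K + 1 and OMEGA + 1 both collapse to OMEGA."""
    if n == OMEGA or n == K:
        return OMEGA
    return n + 1

def dec(n, K):
    """Abstract decrement: OMEGA - 1 is either OMEGA or K, hence a set."""
    if n == OMEGA:
        return {OMEGA, K}
    return {n - 1}

def local_move(g, src, dst, K):
    """All abstract successors of counter map `g` when one process moves
    from local state `src` to `dst` (possible only if g[src] > 0)."""
    if g[src] == 0:
        return []
    succs = []
    for n in dec(g[src], K):
        h = dict(g)
        h[src] = n
        h[dst] = inc(h[dst], K)
        succs.append(h)
    return succs
```

For example, with cutoff K = 1, moving one reader out of a state holding ω processes yields two abstract successors, one where ω processes remain and one where exactly K remain.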
As illustrated in the preceding example, the abstraction is sound but incomplete in the presence of unboundedly many processes. Given an execution X of T_S, let G(X) be the corresponding execution of the abstract transition system. An execution L of R_S is spurious if and only if there does not exist an execution X of T_S such that G(X) = L. Because the abstraction only introduces execution traces (but does not remove any), we can formally establish a simulation relation (but not a bisimulation) between the abstract and concrete transition systems; that is, R_S simulates T_S. Thus, when verifying an LTL property φ we can conclude T_S ⊨ φ if we can show that R_S ⊨ φ. Of course, establishing R_S ⊨ φ will be accomplished by model checking under fairness. The following re-establishes Lemma 1 and (part of) Theorem 1 in the setting of R_S. We skip the proofs as they are similar to those of Lemma 1 and Theorem 1 respectively.

Lemma 5. Let S be a system model, X be an execution of T_S and L = G(X) be the corresponding execution of R_S. We have (1) always(L) ⊆ leave(L) if X is weakly fair; (2) once(L) ⊆ leave(L) if X is strongly fair.

Theorem 2. Let S be a system model and φ be an LTL property. (1) S ⊨wf φ if for all execution traces L of R_S, always(L) ⊆ leave(L) implies L ⊨ φ; (2) S ⊨sf φ if for all execution traces L of R_S, once(L) ⊆ leave(L) implies L ⊨ φ.

The reverse of Theorem 2 is not true because of spurious traces. We remark that the model checking algorithms presented in Section 4 are applicable to R_S (as the abstraction function is irrelevant to the algorithm). By Theorem 2, if model checking of R_S (using the algorithms presented in Section 4 under a weak/strong fairness constraint) returns true, we conclude that the system satisfies the property (under the respective fairness).

6 Case Studies

Our method has been realized in the Process Analysis Toolkit (PAT) [26].
PAT is designed for systematic validation of distributed/concurrent systems using state-of-the-art model checking techniques. In the following, we show the usability/scalability of our method via the automated verification of several real-life parameterized systems. All the models are embedded in the PAT package and available online. The experimental results are summarized in the following table, where NA means not applicable (hence not tried, due to limits of the tool) and NF means not feasible (out of 2GB memory or running for more than 4 hours). The data was obtained with an Intel Core 2 Quad 9550 CPU at 2.83GHz and 2GB RAM. We compared PAT with SPIN [17] on model checking under no fairness or weak fairness. Notice that SPIN does not support strong fairness and is limited to 255 processes.

Model #Proc Property / No Fairness (Result PAT SPIN) / Weak Fairness (Result PAT SPIN) / Strong Fairness (Result PAT SPIN)
LE 10 one leader false true true 0.06 NA
LE 100 one leader false true 0.27 NF true 0.28 NA
LE 1000 one leader false 0.04 NA true 2.26 NA true 2.75 NA
LE one leader false 0.04 NA true NA true NA
LE one leader false 0.06 NA true NA true NA
KV 2 Prop Kvalue false true true 0.6 NA
KV 3 Prop Kvalue false true true 4.59 NA
KV 4 Prop Kvalue false true 29.2 NF true NA
KV 5 Prop Kvalue false true NF true NA
KV Prop Kvalue false 0.12 NA? NF NA? NF NA
Stack 5 Prop stack false false 0.78 NF false 0.74 NA
Stack 7 Prop stack false false 11.3 NF false 12.1 NA
Stack 9 Prop stack false false NF false NA
Stack 10 Prop stack false false NF false NA
ML 10 access true true true 0.11 NA
ML 100 access true 1.04 NF true 1.04 NF true 1.04 NA
ML 1000 access true NA true NA true NA
ML access true 13.8 NA true 13.8 NA true 13.8 NA

The first model (LE) is a self-stabilizing leader election protocol for complete networks [11]. Mobile ad hoc networks consist of multiple mobile nodes which interact with each other. The interactions among the nodes are subject to fairness constraints.
One essential property of self-stabilizing population protocols is that all nodes must eventually converge to the correct configurations. We verify the self-stabilizing leader election algorithm for complete network graphs (i.e., any pair of nodes is connected). The property is that eventually there is always one and only one leader in the network, i.e., ◇□ one_leader. PAT successfully proved the property under weak or strong fairness for many or an unbounded number of network nodes (with cutoff number 2). SPIN took much more time to prove the property under weak fairness. The reason is that the fair model checking algorithm in SPIN copies the global state machine n + 2 times (for n processes) so as to give each process a fair chance to progress, which increases the verification time by a factor that is linear in the number of network nodes.

The second model (KV) is a K-valued register [3]. A shared K-valued multi-reader single-writer register R can be simulated by an array B of K binary registers. When the single writer process wants to write v into R, it sets the v-th element of B to 1 and then sets all the elements before the v-th element to 0. When a reader wants to read the value, it first does an upward scan from 0 to the first element u whose value is 1, then does a downward scan from u to 0 and remembers the index of the last element with value 1, which is the return value of the read operation. A progress property is Prop_Kvalue = □(read_inv ⇒ ◇ read_res), i.e., a reading operation (read_inv) eventually returns some valid value (read_res). With no fairness, both PAT and SPIN identified a counterexample quickly. Because the model contains many local states, the size of A_S increases rapidly. PAT proved the property under weak/strong fairness for 5 processes, whereas SPIN was limited to 3 processes with weak fairness.

The third model (Stack) is a lock-free stack [28].
In concurrent systems, in order to improve performance, the stack can be implemented as a linked list which is shared by an arbitrary number of processes. Each push or pop operation keeps trying to update the stack until no other process interrupts. The property of interest is that a process must eventually be able to update the stack, which can be expressed as the LTL formula Prop_stack = □(push_inv ⇒ ◇ push_res), where the event push_inv (push_res) marks the start (end) of a push operation. The property is false even under strong fairness.

The fourth model (ML) is the Java meta-lock algorithm [1]. In the Java language, any object can be synchronized by different threads via synchronized methods or statements. The Java meta-locking algorithm is designed to ensure mutually exclusive access to an object. A synchronized method first acquires a lock on the object, executes the method and then releases the lock. The property is that always eventually some thread is accessing the object, i.e., □◇ access, which is true without fairness. This example shows that the computational overhead due to fairness is negligible in PAT.

In another experiment, we used a model in which the processes all behave differently (so that counter abstraction results in no reduction) and each process has many local states. We then compared the verification results with and without process counter abstraction. The result shows that the computational and memory overhead of applying the abstraction is negligible. In summary, the enhanced PAT model checker complements existing model checkers in terms of not only performance but also the ability to perform model checking under weak or strong fairness with process counter abstraction.

7 Discussion and Related Work

We studied model checking under fairness with process counter abstraction. The contribution of our work is twofold. First, we presented a fully automatic method for property checking of parameterized systems under fairness with process counter abstraction.
We showed that fairness can be achieved without knowledge of process identifiers. Secondly, we enhanced our home-grown PAT model checker to support our method and applied it to large-scale parameterized systems to demonstrate its scalability. As for future work, we plan to investigate methods to combine well-known state space reduction techniques (such as partial order reduction and data abstraction for infinite-domain data variables) with the process counter abstraction so as to extend the applicability of our model checker.

Verification of parameterized systems is undecidable [2]. There are two possible remedies to this problem: either we look for restricted subsets of parameterized systems for which the verification problem becomes decidable, or we look for sound but not necessarily complete methods. The first approach tries to identify a restricted subset of parameterized systems and temporal properties such that, if a property holds for a system with up to a certain number of processes, then it holds for any number of processes in the system. Moreover, the verification for the reduced system can be accomplished by model checking. This approach can be used to verify a number of systems [13, 18, 8]. The sound but incomplete approaches include methods based on the synthesis of invisible invariants (e.g., [10]); methods based on network invariants (e.g., [21]), which rely on the effectiveness of a generated invariant and on invariant refinement techniques; and regular model checking [19], which requires acceleration techniques. Verification of liveness properties under fairness constraints has been studied in [15, 17, 20]. These works are based on SCC-related algorithms and decide the existence of an accepting run of the product of the transition system and a Büchi automaton, Streett automaton or linear weak alternating automaton. The works closest to ours are the methods based on counter abstraction (e.g., [7, 24, 23]).
In particular, verification of liveness properties under fairness is addressed in [23]. In [23], the fairness constraints for the abstract system are generated manually (or via heuristics) from the fairness constraints for the concrete system. Different from the above work, our method handles one (possibly large) instance of a parameterized system at a time and uses counter abstraction to improve verification effectiveness. In addition, fairness conditions are integrated into the on-the-fly model checking algorithm, which proceeds on the abstract state representation, making our method fully automated. Our method is related to work on symmetry reduction [9, 5]. A solution for applying symmetry reduction under fairness is discussed in [9]. Their method works by finding a candidate fair path in the abstract transition system and then using special annotations to resolve the abstract path to a threaded structure, which then determines whether there is a corresponding fair path in the concrete transition system. A similar approach was presented in [14]. Different from the above, our method employs a specialized form of symmetry reduction, deals with the abstract transition system only, and requires no annotations. Additionally, a number of works on combining abstraction and fairness were presented in [6, 22, 29, 4, 25]. Our work explores one particular kind of abstraction and shows that it works with fairness with a simple twist.

References

1. O. Agesen, D. Detlefs, A. Garthwaite, R. Knippel, Y.S. Ramakrishna, and D. White. An Efficient Meta-Lock for Implementing Ubiquitous Synchronization. In OOPSLA.
2. K.R. Apt and D. Kozen. Limits for automatic verification of finite-state concurrent systems. Inf. Process. Lett., 22(6).
3. H. Attiya and J. Welch. Distributed Computing: Fundamentals, Simulations, and Advanced Topics. John Wiley & Sons, Inc., 2nd edition.
4. D. Bosnacki, N. Ioustinova, and N. Sidorova. Using Fairness to Make Abstractions Work.
In SPIN 04, volume 2989 of LNCS. Springer.
5. E. M. Clarke, T. Filkorn, and S. Jha. Exploiting Symmetry In Temporal Logic Model Checking. In CAV 93, volume 697 of LNCS. Springer.
6. D. Dams, R. Gerth, and O. Grumberg. Fair Model Checking of Abstractions. In VCL.
7. G. Delzanno. Automatic Verification of Parameterized Cache Coherence Protocols. In CAV, pages 53–68.
8. E. A. Emerson and K. S. Namjoshi. On Reasoning About Rings. Int. J. Found. Comput. Sci., 14(4).
9. E. A. Emerson and A. P. Sistla. Utilizing Symmetry when Model-Checking under Fairness Assumptions: An Automata-Theoretic Approach. ACM Trans. Program. Lang. Syst., 19(4).
10. Y. Fang, K. L. McMillan, A. Pnueli, and L. D. Zuck. Liveness by Invisible Invariants. In FORTE, volume 4229 of LNCS.
11. M.J. Fischer and H. Jiang. Self-stabilizing Leader Election in Networks of Finite-state Anonymous Agents. In OPODIS, volume 4305 of LNCS.
12. J. Geldenhuys and A. Valmari. More efficient on-the-fly LTL verification with Tarjan's algorithm. Theoretical Computer Science, 345(1):60–82.
13. S.M. German and A.P. Sistla. Reasoning about Systems with Many Processes. J. ACM, 39(3).
14. V. Gyuris and A. P. Sistla. On-the-Fly Model Checking Under Fairness That Exploits Symmetry. In CAV, volume 1254 of LNCS. Springer.
15. M. Hammer, A. Knapp, and S. Merz. Truly On-the-Fly LTL Model Checking. In TACAS, volume 3440 of LNCS.
16. M.R. Henzinger and J.A. Telle. Faster Algorithms for the Nonemptiness of Streett Automata and for Communication Protocol Pruning. In SWAT, pages 16–27.
17. G.J. Holzmann. The SPIN Model Checker: Primer and Reference Manual. Addison Wesley.
18. C. N. Ip and D. L. Dill. Verifying Systems with Replicated Components in Murφ. Formal Methods in System Design, 14(3).
19. B. Jonsson and M. Saksena. Systematic Acceleration in Regular Model Checking. In CAV, volume 4590 of LNCS.
20. Y. Kesten, A. Pnueli, L. Raviv, and E. Shahar. Model Checking with Strong Fairness.
Formal Methods and System Design, 28(1):57–84.
21. D. Lesens, N. Halbwachs, and P. Raymond. Automatic Verification of Parameterized Linear Networks of Processes. In POPL.
22. U. Nitsche and P. Wolper. Relative Liveness and Behavior Abstraction (Extended Abstract). In PODC 97, pages 45–52.
23. A. Pnueli, J. Xu, and L.D. Zuck. Liveness with (0, 1, infty)-counter Abstraction. In CAV.
24. F. Pong and M. Dubois. A New Approach for the Verification of Cache Coherence Protocols. IEEE Trans. Parallel Distrib. Syst., 6(8).
25. F. Pong and M. Dubois. Verification Techniques for Cache Coherence Protocols. ACM Comput. Surv., 29(1):82–126.
26. J. Sun, Y. Liu, J. S. Dong, and J. Pang. PAT: Towards Flexible Verification under Fairness. In CAV, volume 5643 of LNCS. Springer.
27. J. Sun, Y. Liu, J.S. Dong, and H.H. Wang. Specifying and verifying event-based fairness enhanced systems. In ICFEM, volume 5256 of LNCS. Springer.
28. R.K. Treiber. Systems programming: Coping with parallelism. Technical report.
29. U. Ultes-Nitsche and S. St. James. Improved Verification of Linear-time Properties within Fairness: Weakly Continuation-closed Behaviour Abstractions Computed from Trace Reductions. Softw. Test., Verif. Reliab., 13(4).
http://docplayer.net/275650-Fair-model-checking-with-process-counter-abstraction.html
CC-MAIN-2018-34
refinedweb
10,859
53.31
Hey Everyone: I am working on a site that is NOT a Rails app, but needs to access data from one that is. I would like to do some AJAXy stuff and have the data show up in a div based on a user request without redirecting to the Rails app site or refreshing the current page. I believe retrieving the data from the Rails app in XML format is my best bet. My controller action looks like this:

def index
  @foo = Bar.find(:all)
  respond_to do |format|
    format.html
    format.xml { render :xml => @foo.to_xml }
  end
end

So my question is - how can I parse this data using javascript within the non-Rails site? I have been using some examples from. I know this is possible, but when I try it nothing happens - the XML never seems to load (it stalls at xmlhttprequest.open("GET",url,true) - where url is app.com/foo.xml). If I type this url into my browser the result is an XML document tree as expected. When I try it via the AJAX call I get nothing. I assume this would be something like how you would handle data received via an RSS feed from Rails or a data-grid type thing (neither of which I've done). This is my first attempt at creating a web-service feature in Rails, so please be kind.

Thanks, divotdave
https://www.ruby-forum.com/t/parsing-to-xml-respond-to-with-javascript/107437
The video 'Dealing with the explosion of complexity in web test automation' gives you a good idea of how QF-Test handles a deeply nested DOM structure.

Though they often go unnoticed, at least until the first ComponentNotFoundException occurs, the 'Component' nodes are the heart of a test-suite. Everything else revolves around them. Explaining this requires a little side-tracking:

Live recording of the special webinar 'Component recognition'.

The GUI of an application consists of one or more windows which hold a number of components. The components are nested in a hierarchical structure. Components that hold other components are called containers. As QF-Test itself is a complex application, its main window should serve well as an example: The window contains a menu bar which holds the menus for QF-Test. Below that is the toolbar with its toolbar buttons. The main area employs a split pane to separate the tree view from the details. The tree view consists of a label ("Test-suite") and the tree itself. The detail view contains a complex hierarchy of various components like text fields, buttons, a table, etc.

Actually there are many more components that are not obvious. The tree, for example, is nested in a scroll pane which will show scroll bars if the tree grows beyond the visible area. Also, various kinds of panes mainly serve as containers and background for other components, like the region that contains the "OK" and "Cancel" buttons in the detail view.

SWT: In SWT the main GUI components are called Control, Widget or Item. Unless explicitly stated otherwise, the term "component", as used in this manual, also applies to these and not only to AWT/Swing/JavaFX Components.

JavaFX: The same is valid for JavaFX components, called Nodes, which build up the component hierarchy denominated as scene graph.

Windows: In the manual we will use the term "component" for the GUI elements of native Windows applications, called Controls, as well.
Web: The internal representation of an HTML page is based on the Document Object Model (DOM) as defined by the W3C, a tree structure consisting of nodes. The root node, a Document, can contain Frame nodes with further Document nodes and/or a root Element with a tree structure of further Element nodes. Though an HTML page with its DOM is quite different from a Swing, JavaFX or SWT interface, the abstractions QF-Test uses work just as well and the general term "component" also applies to DOM nodes.

Actions by the end-user of an application are transformed into events by the Java VM. Every event has a target component. For a mouse click this is the component under the mouse cursor, for a key press it is the component that has the keyboard focus. When an event is recorded by QF-Test, the component information is recorded as well, so that the event can later be replayed for the same component.

This may sound trivial and obvious, but component recognition is actually the most complex part of QF-Test. The reason for this is the necessity to allow for change. QF-Test is a tool designed for regression testing, so when a new version of the SUT is released, tests should continue to run, ideally unchanged. So when the GUI of the SUT changes, QF-Test needs to adapt. If, for example, the "OK" and "Cancel" buttons were moved from the bottom of the detail view to its top, QF-Test would still be able to replay events for these buttons correctly. The extent to which QF-Test is able to adapt varies and depends on the willingness of developers to plan ahead and assist a little bit in making the SUT well-suited to automated testing. But more on that later (section 5.6 and section 5.7).

The recorded components are transformed into 'Window' and 'Component' nodes which form a hierarchy that represents the actual structure of the GUI. These nodes are located under the 'Windows and components' node. The following image shows part of the 'Components' representing QF-Test's main window.
Web: Instead of a 'Window', the root Document of a web page is represented as a 'Web page' node. Nested Documents inside Frames are represented as 'Component' nodes.

Every time a sequence is recorded, nodes are generated for components that are not yet represented. When the sequence is discarded later on, the 'Components' remain, hence 'Component' nodes have a tendency to proliferate. The popup menu (right button click) for 'Window' and 'Component' nodes has two items, »Mark unused components...« and »Remove unused components«, which will mark or remove those 'Component' nodes that are no longer being referred to. Be careful though if you are referencing 'Components' across test-suite boundaries or use variable values in 'QF-Test component ID' attributes, as these are not taken into account unless the test-suites belong to the same project or the 'Dependencies (reverse includes)' attribute of the 'Test-suite' root node is set correctly.

Note (4.0+): Besides this way of representing components as nodes it is also possible to address components as multi-level sub-items with an XPath-like syntax called QPath, as explained in subsection 6.3.2.

The attributes of 'Components' and the algorithm for component recognition are explained in detail in section 44.2. Here we will concentrate on the association between 'Component' nodes and the rest of the test-suite.

Windows: In order to control native Windows applications QF-Test provides some procedures in the package qfs.autowin of the standard library. You need to determine the criteria for identifying the Windows components manually, using the procedures provided in the package qfs.autowin.helpers, and then pass them as parameters to the procedures performing the action on the components. For details please refer to chapter 47.

Every node of the test-suite has a 'QF-Test ID' attribute which is secondary for most kinds of nodes. For 'Component' nodes, however, the 'QF-Test ID' has an important function.
It is the unique identifier for the 'Component' node by which events, checks and other nodes that have a target component refer to it. Such nodes have a 'QF-Test component ID' attribute which is set to the 'Component's' 'QF-Test ID'. This level of indirection is important. If the GUI of the SUT changes in a way that QF-Test cannot adapt to automatically, only the 'Component' nodes for the unrecognized components need to be updated to reflect the change and the test will run again.

It is essential to understand that the 'Component's' 'QF-Test ID' is an artificial concept for QF-Test's internal use and should not be confused with the 'Name' attribute, which serves for identifying components in the SUT and is explained in detail in the following section. The actual value of the 'QF-Test ID' is completely irrelevant, except for the requirement to be unique, and it bears no relation whatever to the actual component in the GUI of the SUT. However, the 'QF-Test ID' of the 'Component' is shown in the tree of the test-suite, for 'Component' nodes as well as for events and other nodes that refer to a 'Component'. For this reason, 'Components' should have expressive 'QF-Test IDs' that allude to the actual GUI component.

When creating a 'Component' node, QF-Test has to assign a 'QF-Test ID' automatically. It does its best to create an expressive value from the information available. The option Prepend parent QF-Test ID to component QF-Test ID controls part of this process. If the generated 'QF-Test ID' doesn't suit you, you can change it. QF-Test will warn you if you try to assign a 'QF-Test ID' that is not unique, and if you have already recorded events that refer to the 'Component', it will change their 'QF-Test component ID' attribute to reflect the change. Note that this will not cover references with a variable 'QF-Test component ID' attribute.

Note: A common mistake is changing the 'QF-Test component ID' attribute of an event instead of the 'QF-Test ID' itself.
This will break the association between the event and the 'Component', leading to an UnresolvedComponentIdException. Therefore you should not do this unless you want to change the actual target component of the event.

Experienced testers with a well-structured concept for automated testing will find the component recording feature described in section 4.4 useful. It can be used to record the component hierarchy first in order to get an overview of the structure of the GUI and to assign 'QF-Test IDs' that suit you. Then you can continue to record the sequences and build the test-suite around the components.

The class of a component is a very important attribute as it describes the type of the recorded component. Once QF-Test records a button, it will only look for a button on replay, not for a table or a tree. Thus the component class conveniently serves to partition the components of a GUI. This improves performance and reliability of component recognition, but also helps you associate the component information recorded by QF-Test with the actual component in the GUI. Besides its role in component identification, the class of a component is also important for registering various kinds of resolvers that can have great influence on the way QF-Test handles components. Resolvers are explained in detail in subsection 49.1.6.

Each toolkit defines its own system-specific classes for components like buttons or tables. In the case of buttons, that definition could be javax.swing.JButton for Java Swing, org.eclipse.swt.widgets.Button for Java SWT, javafx.scene.control.ButtonBase for JavaFX or INPUT:SUBMIT for web applications. In order to allow your tests to run independently of the actually utilized technology, QF-Test unifies those classes via so-called generic classes, e.g. all buttons are simply called Button.
This approach provides a certain degree of independence from the dedicated technical classes and allows you to create tests without having to take care of the specific technology. You can find a detailed description of generic classes in chapter 56.

In addition to generic classes QF-Test records system-specific classes as 'Extra features' with the state "Ignore". In case of component recognition problems due to too many similar components, these can be activated for stricter component recognition at the expense of flexibility.

Another reason for generic classes is that dedicated technical classes can change during development, e.g. due to the introduction of a new base framework or even another technology. In such cases QF-Test needs to be quite flexible in order to recognize the proper class. Here the concept of generic classes allows you to cope with those changes and for the most part to re-use existing tests. You can find more details in subsection 5.4.3.

For Swing, FX and SWT QF-Test works with the actual Java GUI classes, whereas a pseudo class hierarchy is used for web applications: "NODE" is at the root of the pseudo class hierarchy and matches any kind of element in the DOM. Derived from "NODE" are "DOCUMENT", "FRAME", "DOM_NODE" and "DIALOG", the types of nodes implementing the pseudo DOM API explained in section 49.11. "DOM_NODE" is further sub-classed according to the tag name of the node, e.g. "H1", "A" or "INPUT", where some tags have an additional subclass like "INPUT:TEXT".

QF-Test can record the class of a component in various ways and therefore organizes component classes into several categories: the specific class, the technology-specific system class, the generic class and the dedicated type of the generic class. Each category is recorded in 'Extra features'. The option Record generic class names for components is checked by default.
Using this option allows you to record generic classes in order to share and re-use your tests when testing a different technology, with just minor changes to the existing tests. In case you work with one Java engine only and prefer to work with the "real" Java classes, you can also work without generic class recording. But in this case you should consider checking the option Record system class only. This option makes QF-Test record the technology-specific system class instead of the derived class. If you switch off this option you will get the derived class, which enables very precisely targeted recognition but can cause maintenance effort in case of changes coming from refactoring.

Web: In web applications QF-Test records classes as described in subsection 5.4.2 above. In case you have to work with a supported AJAX toolkit (see section 46.2), QF-Test records generic classes as well. You shouldn't modify the default options for this technology.

Depending on its class, a component has a set of (public) methods and fields which can be used in an 'SUT script' once you have a reference to the object (see subsection 12.2.4). Select the entry »Show the component's methods...« from the context menu of a node under the 'Windows and components' branch to display the methods and fields of the corresponding class, or right-click a component in the SUT while you are in component recording mode (see section 4.4).

Web: The methods and fields displayed for (HTML) elements in a browser cannot be used directly with an object returned by rc.getComponent(). These are at JavaScript level and require wrapping the method calls in evalJS (cf. section 49.11).

Test automation can be improved tremendously if the developers of the SUT have either planned ahead or are willing to help by defining names for at least some of the components of the SUT.
Such names have two effects: They make it easier for QF-Test to locate components even after significant changes were made to the SUT, and they are highly visible in the test-suite because they serve as the basis for the 'QF-Test IDs' QF-Test assigns to components. The latter should not be underestimated, especially for components without inherent features like text fields. Nodes that insert text into components called "textName", "textAddress" or "textAccount" are far more readable and maintainable than similar nodes for "text", "text2" or "text3". Indeed, coordinated naming of components is one of the most decisive factors for the efficiency of test automation and the return on investment of QF-Test. If development or management is reluctant to spend the little effort required to set names, please try to have them read this chapter of the manual.

Note: Please note that recorded names are stored in the 'Name' attribute of 'Component' nodes. Because they also serve as the basis for the 'QF-Test ID' of the same node, 'Name' and 'QF-Test ID' are often identical. But always keep in mind that the 'QF-Test ID' is used solely within QF-Test and that the 'Name' plays the critical part in identifying the component in the SUT. If the name of a component changes, it is the 'Name' attribute that must be updated; there is no need to touch the 'QF-Test ID'.

The technique to use for setting names during development depends on the kind of SUT:

Swing: All AWT and Swing components are derived from the AWT class Component, so its method setName is the natural standard for Swing SUTs, and some developers make good use of it even without test automation in mind, which is a great help.

JavaFX: For JavaFX, setId is the counterpart of Swing's setName method for setting identifiers for components (called 'Nodes'). Alternatively IDs can be set via the FXML attribute fx:id. While the ID of a 'Node' should be unique within the scene graph, this uniqueness is not enforced.
This is analogous to the 'ID' attribute on an HTML element.

SWT: Unfortunately SWT has no inherent concept for naming components. An accepted standard convention is to use the method setData(String key, Object value) with the String "name" as the key and the designated name as the value. If present, QF-Test will retrieve that data and use it as the name for the component. Obviously, with no default naming standard, very few SWT applications today have names in place, including Eclipse itself. Fortunately QF-Test can derive names for the major components of Eclipse/RCP based applications from the underlying models with good results - provided that IDs were specified for those models. See the Automatic component names for Eclipse/RCP applications option for more details.

Web: The natural candidate for naming the DOM nodes of a web application is the 'ID' attribute of a DOM node - not to be confused with the 'QF-Test ID' attribute of QF-Test's 'Component' nodes. Unfortunately the HTML standard does not enforce IDs to be unique. Besides, 'ID' attributes are a double-edged sword because they can play a major role in the internal JavaScript operations of a web application. Thus there is a good chance that 'ID' attributes are defined, but they cannot be defined as freely as the names in a Swing, JavaFX or SWT application. Worse, many DHTML and Ajax frameworks need to generate 'ID' attributes automatically, which can make them unsuited for naming. The option Turn 'ID' attribute into name where "unique enough" determines whether QF-Test uses 'ID' attributes as names.

Web: In case you want to test a web application using a supported AJAX toolkit, please take a look at subsection 46.2.2 for details about assigning IDs.

If developers have implemented some other consistent naming scheme not based on the above methods, those names can still be made accessible to QF-Test by implementing a NameResolver as described in subsection 49.1.6.
The reason for the tremendous impact of names is the fact that they make component recognition reliable over time. Obviously, locating a component that has a unique name assigned is trivial. Without the help of a name, QF-Test uses lots of different kinds of information to locate a component. The algorithm is fault-tolerant and configurable and has been fine-tuned with excellent results. However, every other kind of information besides the name is subject to change as the SUT evolves. At some time, when the changes are significant or small changes have accumulated, component recognition will fail and manual intervention will be required to update the test-suite.

Another aspect of names is that they make testing of multi-lingual applications independent of the current language, because the name is internal to the application and does not need to be translated.

There is one critical requirement for names: They must not change over time, not from one version of the SUT to another, not from one invocation of the SUT to the next and not while the SUT executes, for example when a component is destroyed and later created anew. Once a name is set it must be persistent. Unfortunately there is no scheme for setting names automatically that fulfills this requirement. Such schemes typically create names based on the class of a component and an incrementing counter and invariably fail because the result depends on the order of creation of the components.

Because names play such a central role in component identification, non-persistent names, specifically automatically generated ones, can cause a lot of trouble. If development cannot be convinced to replace them with a consistent scheme or at least drop them, such names can be suppressed with the help of a NameResolver as described in subsection 49.1.6.

QF-Test does not require ubiquitous use of names.
In fact, over-generous use can even be counter-productive because QF-Test also has a concept of components being "interesting" or not. Components that are not considered interesting are abstracted away so they can cause no problem if they change. Typical examples of such components are panels used solely for layout. If a component has a non-trivial name QF-Test will always consider it interesting, so naming trivial components can cause failures if they are removed from the component hierarchy in a later version.

Global uniqueness of names is also not required. Each class of components has its own namespace, so there is no conflict if a button and a text field have the same name. Besides, names only need to be unique among the components within the same window, because this gives the highest tolerance to change. If your component names are unique on a per-window basis, set the options Name override mode (replay) and Name override mode (record) to "Override everything". If names are not unique per window but identically named components are at least located inside differently named ancestors, "Hierarchical resolution" is the next best choice for those options.

Two questions remain: Which components should have names assigned, and which names to use? As a rule of thumb, all components that a user directly interacts with should have a name, for example buttons, menus, text fields, etc. Components that are not created directly, but are automatically generated as children of complex components, don't need a name, for example the scroll bars of a JScrollPane or the list of a JComboBox. The complex component itself should have a name, however.

If components were not named in the first place and development is only willing to spend as little effort as possible to assign names to help with test automation, a good strategy is to assign names to windows, complex components like trees and tables, and to panels that comprise a number of components representing a kind of form.
As long as the structure and geometry of the components within such forms are relatively consistent, this will result in a good compromise for component recognition and useful 'QF-Test ID' attributes. Individual components causing trouble due to changing attributes can either be named by development when identified or taken care of with a NameResolver.

Since QF-Test "knows" the components for which setName is most useful, it comes with a feature to locate and report these components. QF-Test even suggests names to assign, though these aren't necessarily useful. This feature is similar to component recording and is explained in the documentation for the option Hotkey for components.

Web: The suggested names for DOM nodes are currently not very useful.

Unavoidably the components of the SUT are going to change over time. If names are used consistently this is not really a problem, since in that case QF-Test can cope with just about any kind of change. Without names, however, changes tend to accumulate and may reach a point where component recognition fails. To avoid that kind of problem, QF-Test's representation of the SUT's components should be updated every now and then to reflect the current state of affairs. This can be done with the help of the »Update component(s)« menu item in the context menu that you get by right-clicking any node under the 'Windows and components' node.

Note: This function can change a lot of information in your test-suite at once and it may be difficult to tell whether everything went fine or whether some components have been misidentified. To avoid problems, always create a backup file before updating multiple components. Don't update too many components at once; take things 'Window' by 'Window'. Make sure that the components you are trying to update are visible, except for the menu items. After each step, make sure that your tests still run fine.
Provided that you are connected to the SUT, this function will bring up a dialog with the following choices:

- If you are connected to multiple SUT clients, you must choose one to update the components for.
- Select whether you only want to update the selected 'Component' node or all its child nodes as well.
- You can choose to include components that are not currently visible in the SUT. This is mostly useful for menu items.
- The 'QF-Test ID' for an updated node is left unchanged if "Use QF-Test component ID of original node" is selected. Otherwise, updated nodes will receive a 'QF-Test ID' generated by QF-Test.

If the 'QF-Test ID' of a node is changed, all nodes referring to that node via their 'QF-Test component ID' attribute will be updated accordingly. QF-Test also checks for references to the component in all suites of the same project and in those suites that are listed in the 'Dependencies (reverse includes)' attribute of the 'Test-suite' node. Those suites are loaded automatically and indirect dependencies are resolved as well.

Note: In this case, QF-Test will open modified test-suites automatically, so you can save the changes or undo them.

After pressing "OK", QF-Test will try to locate the selected components in the SUT and fetch current information for them. Components that are not found are skipped. The 'Component' nodes are then updated according to the current structure of the SUT's GUI, which may include moving nodes to different parents.

Note: For large component hierarchies this very complex operation can take a while, in extreme cases even a few minutes.

This function is especially useful when names have been set for the first time in the SUT. If you have already generated substantial test-suites before convincing the developers to add names, you can use this function to update your 'Components' to include the new names and update their 'QF-Test IDs' accordingly.
This will work best if you can get hold of an SUT version that is identical to the previous one except for the added names.

Very important note: When updating whole windows or component hierarchies of significant size you may try to update components that are not currently visible or available. In that case it is very important to avoid false-positive matches for those components. You may want to temporarily adjust the bonus and penalty options for component recognition described in subsection 37.3.4 to prevent this. Specifically, set the 'Feature penalty' to a value below the 'Minimum probability', i.e. to 49 if you have not changed the default settings. Don't forget to restore the original value afterwards.

If you need to change the setting of the options Name override mode (replay) and Name override mode (record) because, for example, component names turned out not to be unique after all, change only the setting for the recording option before updating the components. When finished, change the replay option accordingly.

If your SUT has changed in a way that makes it impossible for QF-Test to locate a component, your test will fail with a ComponentNotFoundException. This should not be confused with an UnresolvedComponentIdException, which is caused by removing a 'Component' node from the test-suite or changing the 'QF-Test component ID' attribute of an 'Event' node to a non-existing 'QF-Test ID'.

There are two videos available explaining in detail how to deal with a ComponentNotFoundException: a simple case is covered in the video 'ComponentNotFoundException - simple case', a more complex one in the video 'ComponentNotFoundException - complex case'.

Windows: In case you have problems with component recognition in native Windows applications in procedures of the package qfs.autowin of the standard library, please continue in chapter 47.
When you get a ComponentNotFoundException, rerun the test with QF-Test's debugger activated so that the test gets suspended and you can look at the node that caused the problem. Here it pays if your 'QF-Test ID' attributes are expressive, because you need to understand which component the test tried to access. If you cannot figure out what this node is supposed to do, try to deactivate it and rerun the test to see if it runs through now. It could be a stray event that was not filtered during recording. In general your tests should only contain the minimum of nodes required to achieve the desired effect.

If the node needs to be retained, take a look at the SUT to see if the target component is currently visible. If not, you need to modify your test to take that situation into account. If the component is visible, ensure that it was already showing at the time of replay by checking the screenshot in the run-log, and try to re-execute the failed node by single-stepping. If execution now works you have a timing problem that you can handle by modifying the options for default delays (subsection 37.3.5) or with the help of a 'Wait for component to appear' node or a 'Check' node with a 'Timeout'. As a last resort you can work with a fixed delay.

If the component is visible and replay fails consistently, the cause is indeed a change in the component or one of its parent components. The next step is identifying what changed and where. To do so, re-record a click on the component, then look at the old and new 'Component' nodes in the hierarchy under 'Windows and components'.

Note: You can jump directly from the 'Event' node to the corresponding 'Component' node by pressing [Ctrl-W] or right-clicking and selecting »Locate component«. You can jump back via [Ctrl-Backspace] or »Edit«-»Select previous node«. A clever trick is to mark the 'Component' nodes to compare by setting breakpoints on them to make them easier to spot.
The crucial point is where the hierarchy for those two components branches. If they are located in different 'Window' nodes, the difference is in the 'Window' itself. Otherwise the old and new 'Component' have a common ancestor just above the branching point and the crucial difference is in the respective nodes directly below that branch. When you have located those nodes, examine their attributes top-to-bottom and look for differences.

Note: You can open a second QF-Test window via »View«-»New window...« so as to place the detail views of the nodes to compare side by side.

The only differences that will always cause recognition failures are 'Class name' and 'Name'. Differences in 'Feature', structure or geometry attributes can usually be compensated unless they accumulate.

A change in the 'Class name' attribute can be caused by refactoring done by development, in which case you need to update your 'Class name' attribute(s) to reflect the change(s). Another possible cause is obfuscation, a technique for making the names of the application classes illegible for protection against prying eyes. This poses a problem because the class names can then change with each version. You can prevent both refactoring and obfuscation problems by activating the option Record system class only.

If the 'Name' has changed, things get more difficult. If the change is apparently intentional, e.g. a typo was fixed, you can update the 'Name' attribute accordingly. More likely the cause is some automatically generated name that may change again anytime. As explained in the previous section, your options in this case are discussing things with development or suppressing such names with the help of a NameResolver as described in subsection 49.1.6.

Changes to the 'Feature' attribute are common for 'Window' nodes, where the 'Feature' represents the window title. When combined with a significant change in geometry such a change can cause recognition to break.
This can be fixed by updating the 'Feature' to match the new title or, preferably, by turning it into a regular expression that matches all variants. Depending on the kind and amount of changes to accommodate, there are two ways to deal with the situation:

Note: Automatic updates for references from other test-suites require that the suites belong to the same project or that the 'Dependencies (reverse includes)' attribute of the 'Test-suite' root node is set correctly.

Hidden fields are not captured by default and therefore not stored under the 'Windows and components' node. In case you frequently need to access hidden fields you can deactivate the Take visibility of DOM nodes into account option. Another way to get hidden fields recorded is the following:

To access a hidden field's attributes (e.g. the 'value' attribute) you can create a simple 'SUT script' as shown below. Details on scripting in general, the used methods and parameters can be found in Scripting, Run-context API and Pseudo DOM API respectively.
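A minimal sketch of such an 'SUT script' (Jython), using the rc.getComponent() run-context call and the pseudo DOM API's getAttribute() mentioned above. The QF-Test ID "hiddenToken" is a hypothetical example; replace it with the ID of your own recorded hidden field.

```python
# Sketch of a QF-Test 'SUT script' (Jython).  The QF-Test ID
# "hiddenToken" is hypothetical - use the ID of your recorded field.
def read_hidden_value(rc):
    node = rc.getComponent("hiddenToken")  # pseudo DOM node of the <input>
    return node.getAttribute("value")      # the hidden field's current value
```

In an actual 'SUT script' the run-context rc is predefined, so the body of the function could be used directly, for example to log the value or store it in a variable for a later check.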
https://www.qfs.de/en/qf-test-manual/lc/manual-en-user_components.html
CC-MAIN-2021-31
refinedweb
5,453
60.04
Michael McGrady wrote: > No problem with them being bad. I agree. huh? Michael, no offense but you keep contradicting yourself. :( >. So a simple decoupling into -api jars is all that's needed, and from what I could tell that is exactly what Niclas has been doing in his branch. > How about: > > apache.river.Lease > apache.river.Transaction > apache.river.Entry > apache.river.jini (service platform) > apache.river.javaspace.JavaSpace And then you realize that in order to obtain a Transaction you need to lookup the Transaction service, and in order to renew or cancel a Lease you need to access the LeaseManager, so all you have accomplished is moving classes around from one package namespace (net.jini.core) to another, with no actual benefit. Which is exactly Niclas' point. -h
http://mail-archives.apache.org/mod_mbox/river-dev/200812.mbox/%3C494FC5FA.8070306@wizards.de%3E
Hey guys, I have code that uses a button and when pressed will switch on an LED (which will stay on), and then when the button is pressed again, the LED is turned off. I have connected the Arduino and Processing using Standard Firmata successfully and the LED works. What I want to be able to do is that when the button is pressed, an image will show up in Processing but also stay on the screen, just like the LED. I'm just testing it with shapes for now. This is what I have so far:

import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
int buttonPin = 7;
int buttonState = 0;
int buttonPressed = 0;

void setup() {
  size(600, 600);
  println(Arduino.list());
  arduino = new Arduino(this, Arduino.list()[6], 57600);
  arduino.pinMode(buttonPin, arduino.INPUT);
}

void draw() {
  buttonState = arduino.digitalRead(buttonPin);
  if (buttonState == arduino.HIGH && buttonPressed == 0) {
    buttonPressed = 1;
    rect(10, 10, 10, 10);
    text("hello", 10, 10);
  }
  if (buttonState == arduino.LOW && buttonPressed == 1) {
    buttonPressed = 0;
    rect(50, 50, 10, 10);
  }
}

Any help would be appreciated!
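Not an answer from the thread itself, just a sketch of the usual fix for this kind of toggle: flip the state only on the LOW-to-HIGH transition (a new press), rather than acting on HIGH and LOW separately, so holding or releasing the button doesn't flip the state again. The logic is shown in Python for brevity; it maps one-to-one onto the draw() loop above (and since Processing never clears the canvas unless background() is called, whatever is drawn stays on screen, just like the LED).

```python
# Rising-edge toggle: remember the previous reading, and toggle only when
# the reading goes 0 -> 1 (a fresh press).
def make_toggle():
    state = {"prev": 0, "on": False}

    def update(reading):
        if reading == 1 and state["prev"] == 0:
            state["on"] = not state["on"]
        state["prev"] = reading
        return state["on"]

    return update

toggle = make_toggle()
# press-hold-hold-release-release, then press-release
readings = [0, 1, 1, 1, 0, 0, 1, 0]
states = [toggle(r) for r in readings]
assert states == [False, True, True, True, True, True, False, False]
```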
https://forum.arduino.cc/t/arduino-switch-with-processing/228736
These are chat archives for django/django

class MyFilter(django_filters.FilterSet):
    def __init__(self, data=None, queryset=None, prefix=None, strict=None):
        # based on the data from request.query_params construct other filters and
        # add to base_filters as well.
        filter = df.CharFilter(name="extra__{}".format(each), lookup_expr='exact')
        self.base_filters[each] = filter
        ...

Now when I call filter_class from the API view, the base_filters are preserved. After every request, the extra filters I add are preserved. I expected that at every request a new instance would be created with new extra filters. But extra filters keep on getting appended on every new request, despite the fact that I am calling it as MyFilter(data=self.request.query_params). And furthermore, on every unique request I check the id of the initialized filter_class, e.g. id(my_filter_instance), and find that they are the same. What am I doing wrong?

hello i need some help. I have a website where the user can enter three numbers into my html template and get some results from my personal mathematical algorithm. The result is saved to the user's personal table in my database and can be seen in a specific tab on my website. My problem is that the algorithm may take between 5 and 10 minutes to calculate the result, and during that time the browser stays loading. If the user changes tab, closes the browser or has a problem with their internet connection, the request is lost and they need to enter the numbers and wait for the results again. What I want is: after the user submits the form, keep the request and run my algorithm without reloading the page, and when the algorithm finishes, send the user an email or a message, or just have the user visit the results tab to see the new results. That way I avoid the problems where the algorithm is running and the user loses the data, the request or the time. Is it easy to do that using subprocess, celery or RabbitMQ? any idea?
here is the code, views.py:

def math_alg(request):
    if request.method == "POST":
        no1 = request.POST.get('no1')
        no2 = request.POST.get('no2')
        no3 = request.POST.get('no3')
        # start algorithm
        result = calc_math(no1, no2, no3)
        instance = rmodel.objects.create(user=request.user, rfield=result)
        instance.save()
    return render(request, 'page.html', {'result': result})

html:

<form action="" method="POST">{% csrf_token %}
    op math calculate:<br>
    <input type="number" name="no1" step="any" min="0" max="1" value="0.5">
    <input type="number" name="no2" step="any" min="0" max="1" value="9999">
    <input type="number" name="no3" step="any" min="0" max="1" value="1000000000000000">
    <br>
    <input class="btn btn-primary" type="submit">
    {{ result }}
</form>

hello guys! evening. I have a question: if I have a model like

{
    user1:
    user2:
}

and I want to get all rows with distinct on user1 & user2, e.g.

row1: user1 = 1, user2 = 2
row2: user1 = 3, user2 = 1

this shall return only 2 rows, the first and the second. What shall my query look like? thx
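Regarding the base_filters question above: in django-filter, base_filters is a class attribute, so mutating it inside __init__ modifies the one dict shared by every request; the identical id() values are a red herring (new instances are created each time, but they all touch the same class-level dict). A minimal plain-Python sketch of the bug and the usual fix (copy into a per-instance dict), with made-up names and no Django required:

```python
class FilterSet:
    base_filters = {}          # class-level, shared by ALL instances

    def __init__(self, data=None):
        for key in (data or {}):
            # BUG: mutates the shared class dict, so additions persist
            self.base_filters[key] = "CharFilter"

f1 = FilterSet({"extra__a": "1"})
f2 = FilterSet({"extra__b": "2"})
# f2 sees the filter added by f1 -- exactly the leak described above:
assert f2.base_filters == {"extra__a": "CharFilter", "extra__b": "CharFilter"}

class FixedFilterSet:
    base_filters = {}

    def __init__(self, data=None):
        # copy into a per-instance dict instead of mutating the class attribute
        self.filters = dict(self.base_filters)
        for key in (data or {}):
            self.filters[key] = "CharFilter"

g1 = FixedFilterSet({"extra__a": "1"})
g2 = FixedFilterSet({"extra__b": "2"})
assert g2.filters == {"extra__b": "CharFilter"}   # no leak between requests
```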
https://gitter.im/django/django/archives/2017/10/08
Hi, I am trying to set a manual tab stop for each paragraph in a Document. How do I need to do this? Can you provide me some example to set a tab stop in a Document? I am using Aspose.Words for Java, latest version. Thanks

Hi,

Thanks for your request. Please try using the following code:

// Create TabStop
TabStop tab = new TabStop(100);
// Set TabStop
paragraph.getParagraphFormat().getTabStops().add(tab);

Also please follow the link to learn more.

Best regards,

Hi, thanks for your reply. From the above example, in the converted document I can see the tab stop in the MS Word ruler at 1.5 inch. I just need to produce a format like the one below in the document. Is it possible? I just went through the manual but it couldn't help me.

For example:
Name : Aspose
Age : 10

Please do help me. Thanks

Hi Anbu,

Thanks for your inquiry. I believe Andrey's code above does provide the basis for doing this. Remember you need the tab character (ControlChar.Tab) for the text to be lined up with the next tab stop. Could you please provide us with some more information, perhaps a document which samples what you would like to achieve?

Thanks,
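The alignment the poster wants hinges on writing an actual tab character before the value, as Aspose support notes. A quick language-neutral illustration (plain Python, no Aspose involved) of why a tab character plus a tab stop lines values up, whereas spaces typed in the source do not move with the stop:

```python
# "\t" plays the role of the control character Aspose calls ControlChar.Tab.
rows = [("Name", "Aspose"), ("Age", "10")]

# expandtabs(8) simulates a tab stop every 8 columns: each label is padded
# out to the next stop, so the colons align regardless of label length.
lines = ["{}\t: {}".format(label, value).expandtabs(8) for label, value in rows]
assert lines == ["Name    : Aspose", "Age     : 10"]
```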
https://forum.aspose.com/t/need-help-to-set-tab-stop/62037
PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
chinumca Aug 3, 2016 3:02 AM

I am building the app using PhoneGap Build. Using cli-6.3.0 to build the app results in a failure to override the default icon in the iOS version. The icon is specified in the config.xml file. A couple of days ago, I used cli-5.4.1 to build the app and the icon showed properly on iOS devices. Is there a solution for cli-6.3.0 to fix this, or can I change the version to 5.4.1 while building?

1. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
VectorP Aug 3, 2016 3:07 AM (in response to chinumca)

Yes, you can use a lower version, using the 'phonegap-version' preference. However, the correct icons should show (except 29x29@3 and 167x167, for which a bug report is still open). If they don't, please
- post your config.xml here, so forum participants can find errors
- confirm that you have both index.html and config.xml in the root directory of your zip file.

2. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
chinumca Aug 3, 2016 4:03 AM (in response to chinumca)

Add the below line to config.xml:

<preference name="phonegap-version" value="cli-5.4.1" />

3. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
martinb77004568 Aug 5, 2016 8:06 AM (in response to VectorP)
-

5. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
gsdrackspace Aug 5, 2016 9:53 AM (in response to chinumca)

I've got the same problem as this. It had the correct push icon with 5.4.1 but 6.x does not. I can't just revert and build with 5.4.1 because another plugin we are implementing requires 6.x.

6. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
Gary__F Aug 20, 2016 6:43 PM (in response to chinumca)

Same problem here. Push icon showing as the default PGB icon when cli-6.3.0 is used, but my custom icon is used with 5.4.1 or lower. Adobe said a fix was made in prd 4 days ago but I've rebuilt just now and the bug is still there.

7.
Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
oscara51041274 Aug 30, 2016 11:45 AM (in response to chinumca)

Same problem here too... Can anyone confirm if this is fixed in PGB using cli-6.3.0?

8. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
Gary__F Sep 7, 2016 4:44 AM (in response to chinumca)

Adobe - please can we have an update on this bug, which still persists in 6.3.0? I was told to open a new thread about it, which I did, but no one has responded to that either. :-( Push icon not working for iOS in 6.3.0. Thank you.

9. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
vangroover Sep 7, 2016 1:23 PM (in response to chinumca) 1 person found this helpful

App Icon - Graphics - iOS Human Interface Guidelines

The icons are here:
120px by 120px
87px by 87px
80px by 80px
58px by 58px
Please ensure these icons are in the cli-6.3.0 build.

10. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
stephena27889904 Sep 28, 2016 3:57 AM (in response to vangroover)

guys i had the same problem and fixed this by adding the new icon sizes to my 6.3 config.xml *all iphone icon sizes for 6.3 are below" "/>

11. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
itcxp29572386 Dec 23, 2016 7:04 AM (in response to chinumca)

Hi, I have the same problem. After updating phonegap 5.x to 6.3, the icons are not working in iOS.
I copy the code config.xml: <!-- GENERAL preferences --> <!-- <preference name="phonegap-version" value="cli-5.2.0" /> --> <preference name="phonegap-version" value="cli-6.3.0" /> <preference name="orientation" value="default" /> <preference name="fullscreen" value="false" /> <preference name="permissions" value="none" /> <preference name="webviewbounce" value="true" /> <preference name="stay-in-webview" value="false" /> <preference name="show-splash-screen-spinner" value="true" /> <preference name="auto-hide-splash-screen" value="true" /> <preference name="disable-cursor" value="false" /> <!-- IOS preferences only --> <preference name="target-device" value="universal" /> <preference name="prerendered-icon" value="false" /> <preference name="detect-data-types" value="true" /> <preference name="exit-on-suspend" value="false" /> <preference name="ios-statusbarstyle" value="black-opaque" /> <!-- ANDROID preferences only --> <preference name="android-minSdkVersion" value="7" /> <preference name="android-installLocation" value="auto" /> <!-- PLUGINS --> <gap:plugin <gap:plugin <gap:plugin <gap:plugin <gap:plugin <icon src="icon.png" /> <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <gap:splash gap: <!-- This plugin implements a whitelist policy for navigating the application webview on Cordova --> <gap:plugin <!-- Network Request Whitelist --> <access origin="*" /> <!-- Navigation Whitelist --> <allow-navigation <!-- Intent Whitelist --> > <platform name="winphone"/> <engine name="android" spec="^4.0.0" /> <engine name="ios" spec="^3.8.0" /> 12. 
Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOSGary__F Dec 23, 2016 7:27 AM (in response to itcxp29572386) itcxp29572386, you are missing out loads of sizes that iOS devices require. See the post just before yours from Stephen. It's just bad planning from Apple that with each new device they require new icons and splash screens. If you miss one out you will get the default PGB icon in its place. I have no idea why Apple can't use the highest resolution graphic and resize it on the fly. Apps are about 1MB larger than they need to be because of all the unused images for any one phone model. Failing any thoughtfulness from Apple, it would be cool if PGB only required 1 high res icon and splash screen and it automatically generated all the icons and splash screens required and added the lines to the config.xml too. 13. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOSitcxp29572386 Dec 23, 2016 10:23 AM (in response to Gary__F) Excelent support. Yes, I dont understand why Apple does not automate the service icons. For now you have to customize each icon. I copy the link of the sizes that nowadays request and that work in phonegap version 6.3.0. Launch Screen - Graphics - iOS Human Interface Guidelines Thank you very much 14. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOSkerrishotts Dec 23, 2016 11:11 AM (in response to itcxp29572386) Apple is progressing in that direction with two different fronts: - launch storyboards: for native apps, this replicates your initial view, and scales using auto layout. For hybrid apps, support is in cordova-ios@4.3.1 (but not available in PGB yet, 4.3.0 is the latest IIRC). Hybrid apps, however, will only use images (no third-party storyboard designer), but if you're careful with your design, you'll be able to use a single image for all supported devices. 
See the docs for the splashscreen plugin: Splashscreen - Apache Cordova - app assets can be generated from PDFs when using Xcode, but this isn't supported for app icons yet. I don't know if Apple will ever support that or not. And if they did, I don't know if PGB would or not. As to why Apple doesn't necessarily automate this, there's a good reason, especially for app icons. Different sizes are used in different contexts, which may need additional customization in order to ensure legibility. Furthermore, by requiring you to specify each icon size, it (ideally) forces you to think about whether or not scaling is appropriate or not for that particular use case or if you need to make additional tweaks. 15. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOScanados Jun 14, 2017 9:06 AM (in response to kerrishotts) Hello everyone, I think the problem is back, at least for me. The cli I am using is cli-6.5.0 (4.3.1 / 6.1.2 / 4.4.3) My app keeps being rejected by apple app store team. After some investigation I discovered that no matter how much icon I add in the config.xml, the IPA file contains only the default cordova ones. I did a test with a previous IPA which I compiled last october 2016, and with a compile I did today Here is the result after extracting all assets from both IPAs As you can see my previous build contains my icons, ( and other default cordova as well ) but the new build, contains only the default ones. My questions is, how can I fix this ? Is the problem from my side, or is the problem from the PGB compiling process ? thanks Here is my icon list ( I also tried without gap: ) <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: <icon gap: 16. 
Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOSkerrishotts Jun 14, 2017 12:32 PM (in response to canados) I wouldn't use the gap: namespace -- instead put those icons under a <platform name="ios"> tag and see if that improves. A few other quick thoughts: - Make sure the icons themselves match the specified width and height exactly (if they don't, problems can occur) - Make sure the src path is reachable from config.xml (paths are relative to config.xml's location) - Make sure there are no case differences in your path and filenames 17. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOScanados Jun 14, 2017 12:41 PM (in response to kerrishotts) Hello kerrishotts Thank you for your reply, As I mentioned in my previous message I also tested without the gap: namespace. I already checked the names the sizes and the cases As I also mentioned in my previous message, the same code and references were working in October 2016, I did not changed anything on my icons, icons names, icons path etc ... There is something to investigate with the phonegap build. ( since it was working fine in previous compilation ) Thanks 18. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOScanados Jun 14, 2017 1:12 PM (in response to canados) before I used to upload the file. And today I also tried the repo method, but I have the same issue. do you need my application # ? Thanks 19. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOSvangroover Jun 14, 2017 5:35 PM (in response to canados) hmmm ... the config.xml says res/icons/ios but the real path is res/icon/ios in the app. 20. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOScanados Jun 14, 2017 7:58 PM (in response to vangroover) Ahhhh Thank you vangroover you spotted my mistake. 
I recompiled and opened the ipa and now all the icons are the correct ones. The only thing remaining is the config.xml present at the root of Payload; it has the default hello cordova options:

<widget id='io.cordova.helloCordova' version='2.0.0' xmlns=''>
<name> HelloCordova </name>
<description> A sample Apache Cordova application that responds to the deviceready event. </description>
<author email='dev@cordova.apache.org' href=''> Apache Cordova Team </author>

Is that normal? Thanks a lot for your correction

21. Re: PhoneGap Build - cli-6.3.0 gives wrong icon for IOS
RG2 Aug 16, 2017 8:21 AM (in response to vangroover)

I am having a similar problem but it seems to be random. My last build a few weeks ago worked fine, but without making any changes to my app config except for the version number, the icons are replaced with generic ones on the next build. This has happened several times in the last few months. Using version cli-6.3.0. Is there a fix?
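Summarizing the fixes that worked in this thread: cli-6.3.0 needs every iOS icon size declared explicitly (vangroover's list: 120, 87, 80 and 58 px), preferably inside a <platform name="ios"> block as kerrishotts suggests rather than with the gap: namespace. A sketch of what the entries look like — the file names and paths are illustrative, not copied from the thread:

```
<platform name="ios">
    <icon src="res/icon/ios/icon-60@2x.png" width="120" height="120" />
    <icon src="res/icon/ios/icon-29@3x.png" width="87" height="87" />
    <icon src="res/icon/ios/icon-40@2x.png" width="80" height="80" />
    <icon src="res/icon/ios/icon-29@2x.png" width="58" height="58" />
</platform>
```

Note the directory in src: as vangroover spotted for canados, a path that says res/icons/ios while the files live in res/icon/ios silently falls back to the default Cordova icons.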
https://forums.adobe.com/thread/2190444
Opened 8 years ago
Closed 8 years ago

#11344 closed (fixed)

Update Documentation About Mod-wsgi/django installation

Description

In this page of the docs, when installing mod_wsgi, it says:

import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

If your project is not on your PYTHONPATH by default you can add:

sys.path.append('/usr/local/django')

just above the import line to place your project on the path. Remember to replace 'mysite.settings' with your correct settings file.

- It says that the code sys.path.append('/usr/local/django') should be added above the import statement; this is ambiguous, because there are in fact 3 import statements. It should specify above the 'import django.core.handlers.wsgi' statement but below the other two import statements.
- It says to put sys.path.append('/usr/local/django'); however, instead, it should say to put a link to where your python path is stored, i.e. sys.path.append('/path/to/django'), because not everyone symlinks their stuff into /usr/local/django; instead, it should suggest finding the python path and putting it in here; to find the python path, type python, then import django, then type django, and use that path.

Thanks!

Change History (2)

comment:1 Changed 8 years ago by

comment:2 Changed 8 years ago by

Note: See TracTickets for help on using tickets.

Note the sys.append is for placing your project's path into the python path, not the path to django itself. If someone has figured out how to get as far as trying to deploy with mod_wsgi without having django in the default python path, they likely aren't going to need detailed instructions on finding it, so I think the 2nd point can be effectively dealt with by adding a note similar to the one for mysite.settings, which should also serve to reinforce that the path being talked about here is the project path.
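The ordering the ticket asks the docs to spell out can be sketched as below. This is a mod_wsgi deployment file, shown as a fragment rather than a runnable script; '/path/to/your/project' is a deliberate placeholder for the directory containing your project package, not Django's own install path.

```
import os
import sys

# 1. Put the *project* directory on the path first...
sys.path.append('/path/to/your/project')

# 2. ...then point Django at the settings module...
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

# 3. ...and only then perform the import that needs both of the above.
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
```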
https://code.djangoproject.com/ticket/11344
3.1 Write a Real User Provider

In this lesson, we'll build on the basic authentication we've already looked at and start implementing useful authentication components. We'll update the database and our User class, as well as write a real user provider.

In the past three lessons, we have customized our application with some pretty meaningless modifications. So in this lesson, we're going to do something useful. We're going to add a new column to our users table that determines whether or not a user is an admin. And then we are going to change our application, so that we will take that new column into account. And we will protect pieces of our application, based upon whether or not a user is an admin.

So the first thing we need to do is modify our users table, and we're going to do that with a new migration. So let's do php artisan make:migration. Let's call this add_is_admin_to_users and then we want to specify the users table. So that's going to create that migration that we can then go in and modify. So we just want to add that new column.

So let's open up that migration, and we are going to say table and we're going to call the tinyInteger method. Now, we might could call the Boolean method, but I have to say I've never done that. I assume that behind the scenes, that would create a tiny int for MySQL because MySQL does not have a Boolean field. So I'm going to play it safe and I'm going to call tinyInteger. Let's call this is_admin, and let's also set a default value. We want users to default to not to be an admin. That's something that we want to give only certain users. So that's our only thing that we need to add to our table.

We want to go ahead and modify our user class, because inside of our code we want that is_admin property to be a Boolean. So we are going to say casts, and inside of this array, we will specify is_admin needs to be cast as a Boolean value.
And so with that done, we can execute our migration. So let's go back, let's do php artisan and migrate, and that is going to modify our table. Just to make sure, let's go and look at it. So there is the users table. Let's go into there. The data should still be there, which it is. But if we scroll on over, we should have our new is_admin, and we do. The next thing we're going to do is write a new user provider. One that is going to retrieve a user by the given credentials, but it's only going to return that user if it is an admin. So let's go to app and then Extensions, and we are going to add a new class, and let's call this AdminUserProvider. And the great thing about this class is we don't have to write a lot of code, because really, we are just piggybacking off of the built-in EloquentUserProvider. So let's first of all put this in our namespace, that is going to be App\Extensions, and we want to use that EloquentUserProvider. So we'll add a use statement Illuminate\Auth\EloquentUserProvider. And then we will define our class, class AdminUserProvider. And that is going to extend the EloquentUserProvider. And really the only thing that we want to do is implement the retrieveByCredentials method, because everything else is going to be the same. So we will go ahead and write that. We want to retrieveByCredentials. Hopefully I typed that correctly, and it's going to accept an array that contains the credentials. And here we can go ahead and retrieve the user with our parent class' method of the same name. So let's say $user = parent::retrieveByCredentials() and we will pass in those credentials. Now it's entirely possible that user is null because of the credentials that were passed don't give us a user, then we need to return null here. But we also want to return null if the user is not an admin. Because the whole purpose of this AdminUserProvider is to return a user that is an admin. 
So let's first check to see if user is null or if user is_admin, or rather is not is_admin. If that is the case, then we will return null, otherwise they will return our user. So now that we have this AdminUserProvider, let's go ahead and register it. So we would need to go to our Providers folder, open up the AuthServiceProvider, and we're going to add another provider here. We'll go ahead and keep the two useless things that we have. By the time we're done in this lesson, those things aren't going to be used by the application anyway. So we want to say Auth::provider and we're going to call this admin user provider. And then we need our closure. It's going to accept the app and then the config. Now in this case, we need to use the app and the config in order to initialize our AdminUserProvider, because if you remember, the base class is the EloquentUserProvider and there are things that it needs in order to perform its work. So, we will new up AdminUserProvider. The first thing we need to pass in is the application hash. So that's the first thing that's passed to the constructor. The second thing is the model, because the EloquentUserProvider needs to know the model class for our user, and we saw that inside of the config. If we scroll on down to our providers, where the user's provider is defined the driver's eloquent, the model is the user class. So we need to pass this to the constructor, but we do so using our config. So we say config, and then we say model, and that gives us the value that is specified here, the user class. So we now have that registered. Let's go back to our auth.php file and we are going to add another entry here. We're going to call this admin-users and our driver is going to be what we just registered, and I already forgot what that was, so let's go back. Let's copy that and paste it in. That is the value. 
So driver is the key, and then the model is going to essentially be what we have for the already defined user's provider, because it's going to use the same class. Now let's scroll on up to where the guards defined. We are going to fix the web guard. So we're going to reset the driver to session. Let's get rid of the driver for the do not use, and the provider is going to be set back to users. Now we're going to create another guard here and we're going to call it admin. So that is going to be the key, the value is going to be an array that is going to resemble what we have for the web guard, but we are going to change the provider to the provider that we just created. So let's get that name, it is admin-users. That is going to be the name of our provider here. We will still use the session driver, but that is really what we want to do, the session driver is a very good driver. So let's go to our AuthServiceProvider. We do need to add a use statement for our new provider. So let's add that in, AdminUserProvider, and we should be okay to at least go to the browser, refresh the page, and see what happens. If we get any errors, then we know that something needs to be fixed, but hopefully we won't get anything. So let's go to the home page, that looks fine. Let's go to the login page. And there we go, let's also log in with the user that we have created already, so we should be able to log in. Everything there is working okay. And so now all we need to do in order to add this admin authentication to our application is write a little bit of middleware and we will do that in the next lesson.
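For reference, the AdminUserProvider assembled piece by piece in this lesson comes out looking roughly like the sketch below — a paraphrase of what is dictated in the audio, not a verified copy of the course's source files:

```php
<?php

namespace App\Extensions;

use Illuminate\Auth\EloquentUserProvider;

class AdminUserProvider extends EloquentUserProvider
{
    public function retrieveByCredentials(array $credentials)
    {
        // Piggyback on the Eloquent provider's lookup...
        $user = parent::retrieveByCredentials($credentials);

        // ...but only hand back users flagged as admins.
        if ($user === null || !$user->is_admin) {
            return null;
        }

        return $user;
    }
}
```

It is then registered in AuthServiceProvider via Auth::provider() and wired to the admin guard through the admin-users provider entry in config/auth.php, as described above.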
https://code.tutsplus.com/courses/authentication-with-laravel/lessons/write-a-real-user-provider
[]( Hettiarachchi/LIHArcView) Usage To run the example project, clone the repo, and run pod install from the Example directory first. Using Storyboard Drag UIView to your view controller and set "LIHArcView" as the class of the UIVIew. Using Code Import the module using import LIHAlert let arcView = LIHArcView() arcView.width = 10 arcView.arcColor = UIColor.blueColor See the example project for more customizations. You can change following properties public var startAngle:CGFloat //Default is 0 public var endAngle:CGFloat //Default is 2*π public var isAnimatable: Bool //Default is false public var arcColor: UIColor //Default is UIColor.blueColor public var width: CGFloat //Default is 20.0 public var isDashedLine: Bool //Default is false public var dashedLinePattern: [CGFloat] //Default is [width,width] public var isDottedLine: Bool //Default is false public var dottedSpacing: CGFloat = 40 //Default is width*2 public var roundCorners: Bool //Default is false public var topPadding: CGFloat //Default is 0.0 public var bottomPadding: CGFloat //Default is 0.0 public var leftPadding: CGFloat //Default is 0.0 public var rightPadding: CGFloat //Default is 0.0 Requirements iOS 7+ Installation LIHArcView is available through CocoaPods. To install it, simply add the following line to your Podfile: pod "LIHArcView" Author Lasith Hettiarachchi, [email protected] License LIHArcView is available under the MIT license. See the LICENSE file for more info. Latest podspec { "name": "LIHArcView", "version": "0.1.3", "summary": "Highly customizable arc view", "description": "LIHArcView is can be used to draw and animate arcs.", "homepage": "", "license": "MIT", "authors": { "Lasith Hettiarachchi": "[email protected]" }, "source": { "git": "", "tag": "0.1.3" }, "platforms": { "ios": "8.0" }, "requires_arc": true, "source_files": "Pod/Classes/**/*", "resource_bundles": { "LIHArcView": [ "Pod/Assets/*.png" ] } } Wed, 23 Mar 2016 00:20:06 +0000
https://tryexcept.com/articles/cocoapod/liharcview
I have a class that will download a file from an https server. When I run it, it returns a lot of errors. It seems that I have a problem with my certificate. Is it possible to ignore the client-server authentication? If so, how?

The problem appears when your server has a self-signed certificate. To work around it you can add this certificate to the list of trusted certificates of your JVM. You can either edit the JAVA_HOME/jre/lib/security/cacerts file or run your application with the -Djavax.net.ssl.trustStore parameter. Also verify which JDK/JRE you are using, as this is often a source of confusion.
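A sketch of both approaches on the command line — the alias, certificate file name and trust-store path are placeholders; changeit is the default cacerts password on stock JVMs:

```
# Import the server's self-signed certificate into the JVM's default trust store
keytool -importcert -alias myserver -file server.crt \
        -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit

# Or keep a separate trust store and point the JVM at it per run
java -Djavax.net.ssl.trustStore=/path/to/truststore.jks MyDownloader
```

(On Java 9 and later the cacerts file lives under $JAVA_HOME/lib/security instead of $JAVA_HOME/jre/lib/security.)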
https://www.edureka.co/community/32998/unable-to-find-valid-certification-path-to-requested-target?show=98970
If you'll follow the steps you'll build what you saw in the video. It's a good tutorial for learning about the gyroscope and the NeoPixel ring. I'm building this tutorial because of the interest I saw on another tutorial of mine. In this tutorial I've replaced simple LEDs with a NeoPixel ring. The ring is simpler to use through an Adafruit library and it's definitely more spectacular. So if you have these components lying around, this is a great way to make use of them. I'll try to take you step by step through building the device and also explain how it works in the last step.

Step 1: Things Required

Parts:
1. Arduino Pro Mini 328P
2. Breadboard
3. MPU6050 gyroscope
4. 24-LED NeoPixel ring
5. 4 x AA battery pack with 4 batteries
6. U-shape jumper cables (optional). I've used these jumper cables because they look better on the breadboard, and the LEDs are more visible this way. You can find a box of 140 on eBay for about $4. If you don't have these cables you can replace them with Dupont wires.

Tools:
1. USB-to-serial FTDI adapter FT232RL to program the Arduino Pro Mini
2. Arduino IDE

Skills:
1. Soldering
2. Basic Arduino programming

Step 2: Assembly

I've attached the Fritzing schematic in fzz format and a picture of it for easy visualization of the connections.

1. You need to solder 3 male pins on the back of the NeoPixel ring like shown in the picture:
- solder the positive pin
- solder the ground
- solder the data input pin
2. Then the 4x battery holder should have a way of connecting to the breadboard; an easy solution is to solder two male Dupont wires to its terminals.
3. Prepare the breadboard.
- place the NeoPixel ring, microcontroller and gyroscope on the breadboard as in the image
- place all the negative wires: to the microcontroller, NeoPixel ring, gyro
- place all the positive wires: to the microcontroller, NeoPixel ring, gyro
- place all the data wires:
* SDA and SCL from the microcontroller to the gyro
* pin D6 from the microcontroller to the NeoPixel ring
- double-check all connections before powering
- optionally, using duct tape, tape the battery pack onto the back of the breadboard to hold it in place and make the device more portable

Step 3: The Code and Calibration

First you need to download and install two libraries:
1. the Adafruit NeoPixel library for controlling the NeoPixels
2. the MPU6050 library for the gyroscope
They are two great libraries that will do the heavy lifting! More details on the NeoPixels here.

Then download and install my sketch from here or copy it from below:

#include "I2Cdev.h"
#include <Adafruit_NeoPixel.h>
#include "MPU6050_6Axis_MotionApps20.h"
#include "Wire.h"

#define NEOPIXED_CONTROL_PIN 6
#define NUM_LEDS 24

const int MAX_ANGLE = 45;
const int LED_OFFSET = 12;

MPU6050 mpu;
Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_LEDS, NEOPIXED_CONTROL_PIN, NEO_RBG + NEO_KHZ800);

unsigned long lastPrintTime = 0;
bool initialization = false;

uint8_t mpuIntStatus;   // interrupt status byte from the MPU
uint8_t devStatus;      // return status of the DMP initialization
uint16_t packetSize;    // expected DMP packet size
uint16_t fifoCount;     // number of bytes currently in the FIFO
uint8_t fifoBuffer[64]; // FIFO storage buffer

Quaternion q;           // [w, x, y, z] quaternion container
VectorFloat gravity;    // [x, y, z] gravity vector
float ypr[3];           // [yaw, pitch, roll] yaw/pitch/roll container
volatile bool mpuInterrupt = false; // indicates whether MPU interrupt pin has gone high

void setup() {
    Serial.begin(9600);
    Serial.println("Program started");
    initialization = initializeGyroscope();
    strip.begin();
}

void loop() {
    if (!initialization) {
        return;
    }
    mpuInterrupt = false;
    mpuIntStatus = mpu.getIntStatus();
    fifoCount = mpu.getFIFOCount();
    if (hasFifoOverflown(mpuIntStatus, fifoCount)) {
        mpu.resetFIFO();
        return;
    }
    if (mpuIntStatus & 0x02) {
        while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
        mpu.getFIFOBytes(fifoBuffer, packetSize);
        fifoCount -= packetSize;
        mpu.dmpGetQuaternion(&q, fifoBuffer);
        mpu.dmpGetGravity(&gravity, &q);
        mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);
        redrawLeds(ypr[0] * 180/M_PI, ypr[1] * 180/M_PI, ypr[2] * 180/M_PI);
    }
}

boolean
hasFifoOverflown(int mpuIntStatus, int fifoCount) {
    return mpuIntStatus & 0x10 || fifoCount == 1024;
}

void redrawLeds(int x, int y, int z) {
    x = constrain(x, -1 * MAX_ANGLE, MAX_ANGLE);
    y = constrain(y, -1 * MAX_ANGLE, MAX_ANGLE);
    if (y < 0 and z > 0) {
        lightLeds(y, z, 0, 5, 0, 89);
    } else if (y < 0 and z < 0) {
        lightLeds(y, z, 6, 12, 89, 0);
    } else if (y > 0 and z < 0) {
        lightLeds(y, z, 13, 19, 0, 89);
    } else if (y > 0 and z > 0) {
        lightLeds(y, z, 20, 24, 89, 0);
    }
}

void lightLeds(int x, int y, int fromLedPosition, int toLedPosition, int fromAngle, int toAngle) {
    double angle = (atan((double) abs(x) / (double) abs(y)) * 4068) / 71;
    int ledNr = map(angle, fromAngle, toAngle, fromLedPosition, toLedPosition);
    printDebug(x, y, ledNr, angle);
    uint32_t color;
    for (int i = 0; i < NUM_LEDS; i++) {
        color = strip.Color(0, 0, 0);
        if (i == ledNr) {
            color = strip.Color(0, 180, 0);
        } else if (i == ledNr - 1) {
            color = strip.Color(0, 5, 0);
        }
        strip.setPixelColor(normalizeLedPosition(i), color);
        strip.show();
    }
}

int normalizeLedPosition(int position) {
    if (NUM_LEDS > position + LED_OFFSET) {
        return position + LED_OFFSET;
    }
    return position + LED_OFFSET - NUM_LEDS;
}

void printDebug(int y, int z, int lightLed, int angle) {
    if (millis() - lastPrintTime < 500) {
        return;
    }
    Serial.print("a=");Serial.print(angle);Serial.print("; ");
    Serial.print("ll=");Serial.print(lightLed);Serial.print("; ");
    Serial.print("y=");Serial.print(y);Serial.print("; ");
    Serial.print("z=");Serial.print(z);Serial.println("; ");
    lastPrintTime = millis();
}

bool initializeGyroscope() {
    Wire.begin();
    TWBR = 24;
    mpu.initialize();
    devStatus = mpu.dmpInitialize();
    if (devStatus != 0) {
        Serial.print(F("DMP Initialization failed (code "));Serial.println(devStatus);
        return false;
    }
    mpu.setDMPEnabled(true);
    Serial.println(F("Enabling interrupt detection (Arduino external interrupt 0)..."));
    attachInterrupt(0, dmpDataReady, RISING);
    mpuIntStatus = mpu.getIntStatus();
    Serial.println(F("DMP ready!
Waiting for first interrupt..."));
    packetSize = mpu.dmpGetFIFOPacketSize();
    return true;
}

void dmpDataReady() {
    mpuInterrupt = true;
}

Upload the code: using the FTDI adapter, upload the code to the Arduino, then connect the power supply (batteries).

Calibration: the most important thing to calibrate here is the "LED_OFFSET" constant. In my example it is 12. You need to adjust this from 0 to 23 so that after powering the board, the LED lights up in the direction you tilt the board. If you want to find out more details about how it works, check out the next step.

Step 4: How It Works (optional)

First, a little information about the MPU6050 gyroscope. This is a MEMS gyroscope (MEMS stands for microelectromechanical systems). Each type of MEMS gyroscope has some form of oscillating component from which the acceleration, and hence direction change, can be detected. This works because a vibrating object tends to continue vibrating in the same plane, and any vibrational deviation can be used to derive a change in direction. The gyro also contains a microcontroller of its own to compute the roll, pitch and yaw through some fancy maths. But the gyro's raw data suffers from noise and drift, so we use an external library to smooth things out and give us clean, usable data.

The NeoPixels are individually addressable RGB LEDs, chained into strips and rings. They work on 5V and contain their own circuitry, so you only need to power the NeoPixels and communicate with them using the data line. The communication is done with a single data line containing clock and data (more details here). Adafruit provides a clean library for interacting with the NeoPixel rings.

The code: inside the loop() function the MPU6050_6Axis_MotionApps20 library is called.
When the library has new data from the gyroscope, it calls redrawLeds(x, y, z) with 3 arguments representing yaw, pitch and roll.

Inside redrawLeds():
- we focus on two axes: y and z
- we constrain both axes from -MAX_ANGLE to +MAX_ANGLE; we defined the max angle as 45 and it's changeable
- we split the 360 degrees into 4 quadrants and call the lightLeds() function for each, as follows:
* y negative, z positive: the first quadrant will control LEDs 0 to 5, with the angle going from 0 to 89
* y negative, z negative: the second quadrant controls LEDs 6 to 12, with the angle going from 89 to 0
* ...etc
- inside the lightLeds() function:
* I calculate an angle based on the two axes using the arctangent (check the attached picture)
* I calculate which LED to light using the Arduino map() function
* I reset the whole LED strip except two LEDs: the one corresponding to the LED position calculated before, and the position before it (to show a fade effect)
* I use a function called normalizeLedPosition() to take the NeoPixel calibration into account. The calibration is useful because the NeoPixel ring can be rotated as pleased, and should be aligned with the gyroscope
* I also print the two axes, which LED is lit and the angle

The math: I've attached a picture with the LED ring and the trigonometric function used to determine the angle.
https://www.hackster.io/danionescu/gyroscope-fun-with-neopixel-ring-3a0b84
This is another built-in React hook that helps with state management in React, but it has more capabilities and is used to manage complex state. The reason why this is preferred is that useReducer can be used to manage pieces of state that are closely related and share the same values. For example, let's say we want to manage a form that has an email field and a password field, and you also want to check the validity of the email input and the password input. Imagine you had wanted to use the useState hook for this: the code would have been bloated, with so many helper functions, but we'll have cleaner code with useReducer. Before we dive into the code, let's understand useReducer and how it works.

useReducer is a React hook that returns 2 values that can be destructured: the current state and a dispatch function. useReducer also takes in 3 arguments: the reducer function, the initial state and an initializer function. The current state will always be the state after the latest change, just like you have in useState. The dispatch function is the state-updating function, almost like in useState, but here the dispatch function sends an action, which is an object with a type and a payload. The action type helps the reducer know which operation is updating the state, and the payload is the value that needs to be updated. Another analogy: the dispatch function acts like the delivery man; the delivery man holds the pizza name or type, which is the action type, while the action payload is the pizza itself, the content you want to update your stomach with 😂😂😂😂😂

The reducer function receives the latest state and the action that the dispatch function sent, and then returns a new updated state.

The initial state is the very first state you seed your useReducer hook with.

The initializer function is rarely used, but it's a function you can use to compute your initial state lazily.
Okay then, let's dive in and work on the code with what we've understood so far. If you've noticed, I created our state object and seeded it into useReducer. I have also created my reducer function, and removed the initializer function from the useReducer call, since we won't be using it.

import React, { useReducer } from "react";

const reducerFxn = (state, action) => {
  return state;
};

const Login = () => {
  const [state, dispatchFxn] = useReducer(reducerFxn, {
    enteredEmail: "",
    emailIsValid: false,
    enteredPassword: "",
    passwordIsValid: false,
  });

  const emailChangeHandler = (event) => {
    dispatchFxn({ type: "ADD_EMAIL", payload: event.target.value });
  };

  const passwordChangeHandler = (event) => {
    dispatchFxn({ type: "ADD_PASS", payload: event.target.value });
  };

  return <form>
    <div>
      <label htmlFor="email">E-Mail</label>
      <input type="email" id="email" value={state.enteredEmail} onChange={emailChangeHandler} />
    </div>
    <div>
      <label htmlFor="password">Password</label>
      <input type="password" id="password" value={state.enteredPassword} onChange={passwordChangeHandler} />
    </div>
  </form>
}

export default Login

We have updated our JSX with a form. Our code now has the emailChangeHandler and passwordChangeHandler; inside these handlers, you'll see our dispatch function doing what we said earlier: it dispatches an action object with a type and a payload. The types and payloads are different for each input handler, as you know.
The magic happens in the reducerFxn, which you'll see below:

import React, { useReducer } from "react";

const reducerFxn = (state, action) => {
  if (action.type === "ADD_EMAIL") {
    return {
      enteredEmail: action.payload,
      emailIsValid: action.payload.includes("@"),
      enteredPassword: state.enteredPassword,
      passwordIsValid: state.passwordIsValid,
    };
  }
  if (action.type === "ADD_PASS") {
    return {
      enteredEmail: state.enteredEmail,
      emailIsValid: state.emailIsValid,
      enteredPassword: action.payload,
      passwordIsValid: action.payload.trim().length >= 6,
    };
  }
  return state;
};

const Login = () => {
  const [currentState, dispatchFxn] = useReducer(reducerFxn, {
    enteredEmail: "",
    emailIsValid: false,
    enteredPassword: "",
    passwordIsValid: false,
  });

  const emailChangeHandler = (event) => {
    dispatchFxn({ type: "ADD_EMAIL", payload: event.target.value });
  };

  const passwordChangeHandler = (event) => {
    dispatchFxn({ type: "ADD_PASS", payload: event.target.value });
  };

  const submitHandler = (e) => {
    e.preventDefault();
    console.log(currentState);
  };

  return (
    <form onSubmit={submitHandler}>
      <div>
        <label htmlFor="email">E-Mail</label>
        <input type="email" id="email" value={currentState.enteredEmail} onChange={emailChangeHandler} />
      </div>
      <div>
        <label htmlFor="password">Password</label>
        <input type="password" id="password" value={currentState.enteredPassword} onChange={passwordChangeHandler} />
      </div>
      <button>Submit</button>
    </form>
  );
};

export default Login;

We've been able to update our state using our reducerFxn. Let's walk through what I did there. Remember I told you that reducerFxn takes in 2 values: the current state and the action (which contains what the dispatch function dispatched). It checks the type of the dispatched action and changes the state according to who sent it. In the case of the email, it checked it with the if (action.type === 'ADD_EMAIL') block, which returns true because it corresponds with what we dispatched, and it changes the state with the payload as you have seen. The enteredEmail field is updated with the action.payload, which is equal to the event.target.value that we dispatched. Now this is where useReducer is powerful: we also update the emailIsValid field instantly by checking if the payload contains '@', which returns true or false. This saves us the extra stress of creating another useState hook if we wanted to update the state with useState.
To access the current state and maybe display it in your list items, you use the currentState value that we destructured from useReducer. To get the entered email, it will be currentState.enteredEmail, and the same with the others. So basically, useState is great for independent pieces of data, but useReducer is used when pieces of state depend on each other, as in the case of enteredEmail and emailIsValid. Often you'll know when to use it; meanwhile, you might not really need useReducer when all you have to do is change a single value of a particular state, because most of the time you will be fine with useState, and using useReducer might just be overkill.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/emmalegend/managing-state-with-usereducer-hook-1gpj
I'm just learning Python and confused about when a "def" of a function ends. I see code samples like:

def myfunc(a=4, b=6):
    sum = a + b
    return sum

myfunc()

I know it doesn't end because of the return (because I've seen if statements... if FOO then return BAR, else return FOOBAR). How does Python know this isn't a recursive function that calls itself? When the function runs, does it just keep going through the program until it finds a return? That'd lead to some interesting errors. Thanks

In Python whitespace is significant. The function ends when the indentation becomes smaller (less).

def f():
    pass # first line
    pass # second line

pass # <-- less indentation, not part of function f

Note that one-line functions can be written without indentation, on one line:

def f(): pass

And then there is the use of semicolons, but this is not recommended:

def f(): pass; pass

The three forms above show how the end of a function is defined syntactically. As for the semantics, in Python there are three ways to exit a function:

Using the return statement. This works the same as in any other imperative programming language you may know.

Using the yield statement. This means that the function is a generator. Explaining its semantics is beyond the scope of this answer. Have a look at Can somebody explain me the python yield statement?

By simply executing the last statement. If there are no more statements and the last statement is not a return statement, then the function exits as if the last statement were return None. That is to say, without an explicit return statement a function returns None. This function returns None:

def f():
    pass

And so does this one:

def f():
    42

Python is whitespace-sensitive in regard to the indentation. Once the indentation level falls back to the level at which the function is defined, the function has ended.
To be precise, a block ends when it encounters a non-empty line indented at most at the same level as the start. This non-empty line is not part of that block. For example, the following print ends two blocks at the same time:

def foo():
    if bar:
        print "bar"
print "baz" # ends the if and foo at the same time

The indentation level is less-than-or-equal to both the def and the if, hence it ends them both.

Lines with no statement, no matter the indentation, do not matter:

def foo():
    print "The line below has no indentation"

    print "Still part of foo"

But the statement that marks the end of the block must be indented at the same level as any existing indentation. The following, then, is an error:

def foo():
    print "Still correct"
      print "Error because there is no block at this indentation"

Generally, if you're used to curly-brace languages, just indent the code like them and you'll be fine. BTW, the "standard" way of indenting is with spaces only, but tab only is of course possible; just please don't mix them both.

Interestingly, if you're just typing at the Python interactive interpreter, you have to follow a function with a blank line. This does not work:

def foo(x):
    return x+1
print "last"

although it is perfectly legal Python syntax in a file. There are other syntactic differences when typing to the interpreter too, so beware.

Whitespace matters. When the block is finished, that's when the function definition is finished. When a function runs, it keeps going until it finishes, or until a return or yield statement is encountered. If the function finishes without encountering a return or yield statement, None is returned implicitly. There is plenty more information in the tutorial.

So it's the indentation that matters. As other users here have pointed out to you, when the indentation level is at the same point as the def function declaration, your function has ended. Keep in mind that you cannot mix tabs and spaces in Python. Most editors provide support for this.
It uses indentation:

def func():
    funcbody
    if cond:
        ifbody
    outofif

outof_func

In my opinion, it's better to explicitly mark the end of the function with a comment:

def func():
    # funcbody
## end of subroutine func ##

The point is that some subroutines are very long and it is not convenient to scroll up in the editor to check which function has ended. In addition, if you use Sublime, you can right click -> Goto Definition and it will automatically jump to the subroutine declaration.
https://techstalking.com/programming/python/python-def-function-how-do-you-specify-the-end-of-the-function/
Hey, I'm trying to use polymorphism as a type of generic/template, and it's exploding in my face bigtime.

I have this class, classA. It's an abstract class. It has a function in it called:

Code:
virtual classA GetClass() = 0;

classB inherits from classA with the following structure:

Code:
class classB : public classA { };

In it I'd like to have a function that returns an instance of classB, e.g.:

Code:
classB GetClass();

The caller will be thinking of it as an instance of classA, with all of the pure virtual functions implemented in classB.

There's a problem: if I change the return type to the child class (like above), it thinks I'm trying to override it, yells, and fails to compile. If I change the return type to classA (the parent), it realizes that this will instantiate a pure virtual / abstract class and pukes again.

I know that you can use polymorphism this way; I've used it before to call "down" into a child class (basically one object is going to be using classB objects as if they were classA objects, as there's a bunch of pure virtual functions in classA). Though I've never had to RETURN a child class as if it was a parent class. As for the object using these classes, its prototype specifies classA, but it is passed classB, so it's not like I'm ever going to be instantiating pure virtuals.

Anybody know how I can get this done? I know it's do-able; I've just never had to return it, though, and my design was kind of dependent on it. Any help is greatly appreciated.

EDIT: And obviously it goes without saying I can't change the prototype in classA to reflect the child class...
https://cboard.cprogramming.com/cplusplus-programming/133636-pure-virtual-polymorphism-help.html
The QDict class is a template class that provides a dictionary based on QString keys. More...

#include <qdict.h>

Inherits QPtrCollection.

List of all member functions.

QMap is an STL-compatible alternative to this class. QDict is implemented as a template class. Define a template instance QDict<X> to create a dictionary that operates on pointers to X (X*). A QDict has the same performance as a QAsciiDict.

In this example we use a dictionary to keep track of the line edits we're using. We insert each line edit into the dictionary with a unique name and then access the line edits via the dictionary.

In the example we are using the dictionary to provide fast random access to the keys, and we don't care what the values are. The example is used to generate a menu of QStyles, each with a unique accelerator key (or no accelerator if there are no unused letters left).

Returns the setting of the auto-delete option. The default is FALSE. See also setAutoDelete().

Removes all items from the dictionary. The removed items are deleted if auto-deletion is enabled. All dictionary iterators that operate on the dictionary are reset. See also remove(), take() and setAutoDelete(). Reimplemented from QPtrCollection.

Returns the number of items in the dictionary. See also isEmpty(). Reimplemented from QPtrCollection.

Inserts the key with the item into the dictionary. The key does not have to be unique. If multiple items are inserted with the same key, only the last item will be accessible. item may not be 0. See also replace(). Example: themes/themes.cpp.

Returns TRUE if the dictionary is empty, i.e. count() == 0; otherwise returns FALSE. See also count().

This file is part of the Qt toolkit. Copyright © 1995-2002 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.0/qdict.html
NSI Wants .banc and .shop

dakfu writes: "NSI is suggesting two new TLDs, .banc and .shop." I want .rob and .dot please. Is that too much to ask for? I think .god would be fun too, but I think there really ought to be a .sex just to help me (ummm) avoid it. Yeah. Avoid it.

oh please (Score:3)
The current situation is just fine. NSI blew it with

why do we even need TLDs anymore? (Score:3)
in fact, now that I think of it, let's just let registrars register their own new TLDs from NSI, first-come, first-served... Eric

Cross-registering $$ (Score:4)
The problem is bad enough as it is, with companies registering a Does this make any sense whatsoever? Doesn't the

This is NOT going to do anything beneficial. (Score:5)
2) to infer this scheme will somehow lessen the stress on the supply of domain names now out there is absurd. NOONE is going to give up any of the existing registered names because a
Change it so that any TLD is possible. Immediately, we have N-squared namespace. That's N-squared more money! Still not enough! Enforce any two words for a TLD. foobar.dope.name. This is N cubed! But why stop there? foobar.dopey.sounding.name. N to the fourth! foobar.very.very.long.name. N to the fifth! In fact, don't have any restrictions at all. Potentially N to aleph-nought! What are you waiting for, NSI! Make money now!

Not enough (Score:3)
If we had 100,000 TLDs, and each cost $50, then only a huge company like McDonald's or Coke (who have a good case for exclusive Trademark protection across all industries) would even consider buying them all. But even they wouldn't need to, because the obvious one for McDonalds (.com, The only way to stop abuse and squatting is to dilute the value of any single TLD so that it's up to the company to make their domain stand out, rather than counting on (or worrying about) people guessing or stumbling across a domain.

The *only* solution. Allow all possible TLDs! (Score:3)
However, it should still require registrations to be of the form DOMAIN.TLD, i.e., the domain and TLD extension parts are both needed to constitute a single registration application. The TLDs themselves can be registered to no one, just like no one "owns" org or com or uk. Of course the root servers will need some custom software to deal with this. I say, use the 1st letter of the TLD to decide which nameserver ([A-Z0-9].ROOT-SERVERS.NET) gets the request. This will accomplish load balancing and should be straightforward to implement. The benefits of the system I described here include: (1) An end to squatting by CorpInc on corpInc.{com|net|org|cc|...} because there would now be (for all practical purposes) an infinite number of possible combinations of CorpInc.* and *.CorpInc. Even microsoft can't afford to buy up microsoft.* and *.microsoft. (2) An end to domain hoarders in general. With unlimited variations, no one domain name is all that important.
Thus they lose their resaleable value. (3) Space for similarly named companies to all happily coexist. apple.computers, apple.records, apple.farms, apple.employment, john.apple, the-big.apple, etc. No need to sue for a limited domain name since they're no longer a limited resource. Another possibility is to allow the full Unicode character set in domain names. Thoughts?

Re:Why not add TLDs that people really want? (Score:3)
In 1996, Name.Space [xs2.net] began accepting suggestions for new gTLDs from public input, and has moderated the list to the present number of 549, from thousands of requests. These gTLDs came into operation between the autumn of 1996 and the present and are currently available for registration. Register here! [name-space.com] Here are the top 20 new gTLDs suggested by the public and presently in operation by Name.Space [xs2.net]: web space. art. sex. info. zone. music. firm. design. media. travel. online. arts. inc. x. mail. family. 2000. bank. usa. news. ltd. world. fuck. mag. corp. direct. law. free. love. auction. sale. casino. service. games. fun. mall. studios. cam. market. asia. sports. cafe. mad. internet. hacker. city. network. see Vote for new gTLDs [global-namespace.net] and Name.Space active gTLDs [xs2.net].

In an early effort to gain global recognition of the new gTLDs serviced by Name.Space [xs2.net], a letter was sent to Network Solutions on March 11, 1997 requesting the addition of the gTLDs serviced by Name.Space [xs2.net] and their associated nameservers into the ROOT.ZONE file (the recognized master list of globally-routed TLDs, controlled by NSI). NSI refused the request to amend the ROOT.ZONE file and Name.Space [xs2.net] subsequently filed an ANTITRUST [xs2.net] action against NSI on March 20, 1997. After more than three years of litigation, the Court of Appeals ruled against Name.Space [xs2.net] and in favor of NSI, granting NSI IMMUNITY from antitrust prosecution, for their "conduct in this case".
The court's decision was an obvious POLITICAL decision, not a legal one. (see [namespace.org])

In the original complaint, Name.Space [xs2.net] also listed a group of "non-party co-conspirators", many of whom, or their associates, now make up ICANN and the key influential persons surrounding the ICANN process, formerly known as the IAHC (International Ad Hoc Committee) at the time the initial lawsuit was filed in March, 1997. Now that NSI has been declared IMMUNE from antitrust prosecution for refusing to allow competitors, including Name.Space [xs2.net], to add new TLDs to the root, NSI presents the addition of new TLDs as if it was their idea in the first place, in light of the fact that Name.Space [xs2.net] and others were denied precisely what NSI is carving out for themselves.

Why did James Tierney [mailto] close down the DoJ's antitrust investigation into NSI and their parent company SAIC without finding any wrongdoing? Perhaps you should all write to Mr. Tierney at the DoJ and ask why the US Government is protecting NSI, while crusading against Microsoft? Is this another case of "selective enforcement"? Who is benefiting financially from all of this? Why is there no oversight into conflicts of interest within ICANN? How did NSI get away with paying public relations "flacks" and other "shills" to disrupt, discredit, and coerce their competitors such as Name.Space, with such impunity?

The addition of new gTLDs to the root is a matter of a simple TEXT EDIT of the ROOT.ZONE [xs2.net] file. Isn't it about time that this be done without further delay? Get a head start: if you are an ISP you can run the expanded ROOT.ZONE [xs2.net] file today by downloading it and installing it on your DNS servers. For more info, go to Switch to Name.Space [xs2.net]

Re:Actually, there's a reason for banc ... (Score:3)
http://slashdot.org/articles/00/04/21/1716244.shtml
This is the 3rd of 8 workshops and is a new one for Kent. He kicked off and was aware the timings might be a bit off because this is the first time through. This is going to be a big one and there is probably enough in the exercises that he didn't think we'd get to the extra credit. Timing was an issue and unfortunately the service worker he had developed for the course was playing up in his local environment. That didn't get in the way of some excellent instruction and helpful learnings until right near the end.

We are diving right in from a bare CRA. The task was simple enough - render a heading and some buttons to the screen. We added a Dialog, using @reach/dialog which was small and fun. Kent reckons that code base is a good one to learn from. It might be interesting to have a look at the code base and think about why it is effective and what patterns I could use. One of the upcoming workshops is on React patterns, so that will probably be useful in this context.

Kent presents an opinionated approach to React. He is sharing what he thinks is the best approach based on his experience in production and teaching. He suggests using emotion and CSS-in-JS. He likes Tailwind and almost taught with that but decided at the last moment to stick with this as it was potentially too much to learn. I'm a Tailwind fan too but this was my first exposure to emotion.

There are two ways to use emotion. You can create a styled component:

import styled from '@emotion/styled'

const Button = styled.button`
  color: turquoise;
`

You can use object notation:

const Button = styled.button({
  color: 'turquoise',
})

Or you could even pass a function that will return the styles:

const Button = styled.button(props => {
  return {
    color: props.primary ? 'hotpink' : 'turquoise',
  }
})

// or with the string form - as long as what you return is valid css
const Button = styled.button`
  color: ${props => (props.primary ?
'hotpink' : 'turquoise')};
`

This library allows us to use hover states and pseudo-selectors, the lack of which is one drawback of Tailwind.

The second way to use it is with the CSS prop, which means you can avoid single-use components (Wrapper, Container, etc). This is similar to the inline style prop, but you can use pseudo-selectors. You have to override the JSX parser and then use the css prop with either the object or the string syntax. Put this at the top of every relevant file:

/** @jsx jsx */
/** @jsxFrag React.Fragment */
import { jsx } from '@emotion/core'
import React from 'react'

and then you can do this:

function SomeComponent() {
  return (
    <div
      css={{
        backgroundColor: 'hotpink',
        '&:hover': {
          color: 'lightgreen',
        },
      }}
    >
      This has a hotpink background.
    </div>
  )
}

// or with string syntax:
function SomeOtherComponent() {
  const color = 'darkgreen'
  return (
    <div
      css={css`
        background-color: hotpink;
        &:hover {
          color: ${color};
        }
      `}
    >
      This has a hotpink background.
    </div>
  )
}

This was a longer exercise and I didn't get a chance to finish it up before Kent called us back from our breakout rooms.

In a styled component, you can pass in as many CSS objects as you like. It will use something like Object.assign, so the later the object, the higher its priority. With emotion, we can opt back into the CSS cascade and pass some styles on to children of a similar type:

<form
  css={{
    display: 'flex',
    flexDirection: 'column',
    alignItems: 'stretch',
    '> div': {
      margin: '10px auto',
      width: '100%',
      maxWidth: '300px',
    },
  }}
  onSubmit={handleSubmit}
>
Media queries and variables are all available. So stop hankering for SASS! :)

We are behind time and we're going super fast now. Kent introduces the concept of having an API client handler. This means that you don't have to deserialize the data every time and can abstract some of those details away. Having a hook that encapsulates some of the logic makes things more manageable and useful. What was helpful for me here was thinking about tidying up my components by extracting this data logic to a single point. No longer changing `res => res.json()` on every call. Now it's been pointed out it feels really obvious, but that's true with most insightful insights :)

There are lots of different ways to authenticate - basically you need to be able to deal with your backend. There was a lot in this section - I need to review this again. I'm thinking a lot about applications I'm writing at the moment and there are lots of things here that would be useful to implement immediately.

Full page refresh:

```js
window.location.assign(window.location)
```

This is using React Router 6, which is in beta at the moment but is likely to be released very soon. The router doesn't have to be top level - we can keep it at the least common parent and, in this case, just wrap the AuthenticatedApp. The BrowserRouter provides the Context.Provider that is needed to pass the props between the various components of react-router. `href` becomes `to`, and the Redirect component needs a `from` prop.

Kent points out that cache management can be grouped into two buckets. When these are conflated, we can introduce unnecessary complexity. Given that caching is one of the hardest problems in software development, that's not a good thing :)

Kent suggests react-query as a good solution for managing the server cache. It provides hooks to query, cache and mutate data in a way that is flexible enough for most use cases. Time was running out for this exercise and Kent ran a poll - most wanted Kent's explanations and talk-through.
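To make the "single point for data logic" idea concrete, here is a minimal sketch of an API client handler. This is my own illustration, not Kent's actual code: the base URL, the `client` name and the injectable `fetchImpl` parameter (handy for testing without a network) are all assumptions.

```javascript
// Hypothetical sketch of an API client handler, so components stop repeating
// `res => res.json()` on every call. The base URL and the injectable
// `fetchImpl` are my own assumptions for illustration.
function client(endpoint, {data, token, fetchImpl = fetch} = {}) {
  const config = {
    method: data ? 'POST' : 'GET',
    body: data ? JSON.stringify(data) : undefined,
    headers: {
      ...(token ? {Authorization: `Bearer ${token}`} : {}),
      ...(data ? {'Content-Type': 'application/json'} : {}),
    },
  }
  // Deserialization happens in exactly one place.
  return fetchImpl(`https://api.example.com/${endpoint}`, config).then(res =>
    res.json(),
  )
}
```

Components would then call `client('list-items')` or `client('list-items', {data, token})` and never touch response parsing directly.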
This is where the service worker was really playing up, and so it was more challenging to follow. I took an hour to do this exercise myself after the workshop. The components had to be swapped out in a lot of places, so this exercise hit a lot of files. But the main crux of the change was in creating a listItems client. I really liked how this was set up. Firstly, there was a listItemsClient - this had the specific CRUD operations for this data type. Then, a list-items file was used to gather the hooks that made use of those client operations. They aren't handled explicitly but instead are carried out by the useMutation hook from react-query. These hooks are then brought into the components, and the update/create/delete are destructured from the array returned from useMutation. So, in the component we see the things we want (update, create, delete), in the client we see the explicit CRUD operations, and the hooks file holds it all together. This was an awesome section but probably could have been a workshop on its own! :)
https://www.kevincunningham.co.uk/posts/building-react-apps/
import "github.com/lxc/lxd/lxd/db/query"

Package query implements helpers around database/sql to execute various kinds of very common SQL queries.

Files: config.go, count.go, doc.go, dump.go, expr.go, objects.go, retry.go, slices.go, transaction.go

- Count returns the number of rows in the given table.
- CountAll returns a map associating each table name in the database with the total count of its rows.
- DeleteObject removes the row identified by the given ID. The given table must have a primary key column called 'id'. It returns a flag indicating if a matching row was actually found and deleted or not.
- Dump returns a SQL text dump of all rows across all tables, similar to sqlite3's dump feature.
- InsertStrings inserts a new row for each of the given strings, using the given insert statement template, which must define exactly one insertion column and one substitution placeholder for the values. For example: InsertStrings(tx, "INSERT INTO foo(name) VALUES %s", []string{"bar"}).
- IsRetriableError returns true if the given error might be transient and the interaction can be safely retried.
- Params returns a parameters expression with the given number of '?' placeholders. E.g. Params(2) -> "(?, ?)". Useful for IN and VALUES expressions.
- Retry wraps a function that interacts with the database, and retries it in case a transient error is hit. This should typically be used to wrap transactions.
- func SelectConfig(tx *sql.Tx, table string, where string, args ...interface{}) (map[string]string, error) — SelectConfig executes a query statement against a "config" table, which must have 'key' and 'value' columns. By default this query returns all keys, but additional WHERE filters can be specified. Returns a map of key names to their associated values.
- SelectIntegers executes a statement which must yield rows with a single integer column. It returns the list of column values.
- SelectObjects executes a statement which must yield rows with a specific columns schema. It invokes the given Dest hook for each yielded row.
- SelectStrings executes a statement which must yield rows with a single string column. It returns the list of column values.
- func SelectURIs(stmt *sql.Stmt, f func(a ...interface{}) string, args ...interface{}) ([]string, error) — SelectURIs returns a list of LXD API URI strings for the resources yielded by the given query. The f argument must be a function that formats the entity URI using the columns yielded by the query.
- Transaction executes the given function within a database transaction.
- UpdateConfig updates the given keys in the given table. Config keys set to empty values will be deleted.
- UpsertObject inserts or replaces a new row with the given column values, to the given table using the columns order. For example: UpsertObject(tx, "cars", []string{"id", "brand"}, []interface{}{1, "ferrari"}). The number of elements in 'columns' must match the one in 'values'.
- Dest is a function that is expected to return the objects to pass to the 'dest' argument of sql.Rows.Scan(). It is invoked by SelectObjects once per yielded row, and it will be passed the index of the row being scanned.

Package query imports 11 packages and is imported by 32 packages. Updated 2020-10-22.
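To illustrate what the Params helper produces, here is a self-contained sketch that matches the documented behaviour (Params(2) -> "(?, ?)"). Note this is my own illustrative reimplementation, not the actual lxd/db/query source.

```go
package main

import (
	"fmt"
	"strings"
)

// params mimics the documented behaviour of query.Params: build a
// parameters expression with n '?' placeholders, useful for IN and
// VALUES expressions. Illustrative sketch, not the package's source.
func params(n int) string {
	placeholders := make([]string, n)
	for i := range placeholders {
		placeholders[i] = "?"
	}
	return "(" + strings.Join(placeholders, ", ") + ")"
}

func main() {
	// E.g. "SELECT * FROM foo WHERE name IN (?, ?)"
	fmt.Println("SELECT * FROM foo WHERE name IN " + params(2))
}
```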
https://godoc.org/github.com/lxc/lxd/lxd/db/query
Getting the current date in a strategy's next() method

- Curtis Miller

I have a moving average crossover strategy that uses different sets of moving averages in different periods. Over one period of time one set of moving averages is used, then in another a different set of moving averages is used, and so on. The time frames are passed via parameters as a list; the same is true with the moving average windows. This strategy has to handle multiple stock symbols. (This is not a strategy for trading; this is intended to simulate the effect of optimization in an account, where each set of moving averages is the product of optimization over different periods.) Here is the code (truncated as posted):

```python
def __init__(self):
    """Initialize the strategy"""
    self.fastma = dict()
    self.slowma = dict()
    self.regime = dict()
    self.date_combos = )]

def next(self):
    """Define what will be done in a single step, including creating and closing trades"""
    # Determine which set of moving averages to use
    curdate = self.datas[0].datetime.date(0)
    dtidx = None  # Will be index
    for sd, ed in self.date_combos:
        if sd < curdate and curdate <= ed:
            dtidx = (sd, ed)
    for d in self.getdatanames():  # Looping through all symbols
        pos = self.getpositionbyname(d).size or 0
        if dtidx is None:  # Not in any window
            break  # Don't engage in trades
        if pos == 0:  # Are we out of the market?
            # Consider the possibility of entrance
            # Notice the indexing; [0] always means the present bar, and [-1] the bar immediately preceding
            # Thus, the condition below translates to: "If today the regime is bullish (greater than
            # 0) and yesterday the regime was not bullish"
```

Unfortunately, this strategy does not work because it never engages in a trade, and I don't know why. I passed the strategy pandas DataFrames. So while the strategy does run, it does not engage in any trades. This suggests the problem is in next(), likely around where I look up the current date of the bar being considered, then determine which period the strategy is currently in.

So I believe that these lines are where the problem is:

```python
# Determine which set of moving averages to use
curdate = self.datas[0].datetime.date(0)
dtidx = None  # Will be index
for sd, ed in self.date_combos:
    if sd < curdate and curdate <= ed:
        dtidx = (sd, ed)
```

Somehow I think this condition is not being triggered. With this in mind, I'd like to know if I am getting the time stamp for the current step in the backtest correctly. Is the line `curdate = self.datas[0].datetime.date(0)` how I am supposed to get the current date?

- backtrader administrators

Accounting of the current datetime is done by the only master object in the equation: the strategy itself.

```python
def next(self):
    curdt = self.datetime[0]  # float
    curdtime = self.datetime.datetime(ago=0)  # 0 is the default ago
    curdate = self.datetime.date(ago=0)  # 0 is the default ago
    curtime = self.datetime.time(ago=0)  # 0 is the default ago
```

@Curtis-Miller said in Getting the current date in a strategy's next() method:

Isn't that simple crossover? This could be done like this in `__init__`:

```python
bt.ind.CrossOver(d, fast, slow)
```

And later checked like this:

```python
if thecrossover > 0: buy
if thecrossover < 0: sell
```

@Curtis-Miller said in Getting the current date in a strategy's next() method:

for d in self.getdatanames(): # Looping through all symbols

It may be you use a debugger and this suggestion is superfluous and seems primitive, but something like

```python
print('{}: the dtidx is {}'.format(len(self), dtidx))
```

before that line could shed some light (the opinion here is that the more the printing, the easier is to debug).

- Curtis Miller

@backtrader Thanks for the help! I was accessing dates wrong; that was one problem, so thanks for telling me the right way.
Using your suggestion of printouts (I tried logging but it didn't seem to work in the Jupyter notebook, but I guess something else was weird; the printouts worked fine the last time), I also discovered that I was creating a generator with zip(), not a list, so the generator would run once and then the loop was never entered again. The class now works, and the strategy does what it's expected to do. This was the last job in the blog post, so I will write it up and share when it's published Monday. I'll look into using CrossOver next time (the current code works, so I won't fix what isn't broken for now, lest it actually does break). Does that indicator plot both moving averages in the final visualization?

- backtrader administrators

There is a light Python 2/3 adaptation layer inside backtrader in backtrader.utils.py3, mostly to avoid importing six or similar packages. zip is included as a generator for Python 2 (itertools.izip) to match the Python 3 style. Inside the core, the adaptation layer is used, and if a generator is not the needed result it will simply be wrapped in a list(generator). This makes things consistent and allows the package to work with Python 2/3.
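The zip() pitfall Curtis hit is easy to reproduce: in Python 3, zip() returns a one-shot iterator, so looping over it a second time (as next() does on every bar) yields nothing. A minimal sketch with made-up dates:

```python
from datetime import date

# In Python 3, zip() returns a one-shot iterator: after the first full
# pass it is exhausted, so any later loop over it silently does nothing.
starts = [date(2016, 1, 1), date(2016, 7, 1)]
ends = [date(2016, 6, 30), date(2016, 12, 31)]

combos_gen = zip(starts, ends)               # one-shot iterator
first_pass = [pair for pair in combos_gen]   # consumes it
second_pass = [pair for pair in combos_gen]  # already exhausted -> empty

combos_list = list(zip(starts, ends))        # materialize once, iterate forever
print(len(first_pass), len(second_pass), len(combos_list))  # 2 0 2
```

Wrapping the zip() in list() once, as backtrader itself does internally where needed, is the fix.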
https://community.backtrader.com/topic/489/getting-the-current-date-in-a-strategy-s-next-method/1
htons Subroutine

Purpose

Converts an unsigned short integer from host byte order to Internet network byte order.

Library

ISODE Library (libisode.a)

Syntax

```c
#include <sys/types.h>
#include <netinet/in.h>

unsigned short htons (HostShort)
unsigned short HostShort;
```

Description

The htons subroutine converts an unsigned short (16-bit) integer from host byte order to Internet network byte order. The Internet network requires ports and addresses in network standard byte order. Use the htons subroutine to convert addresses and ports from their host integer representation to network standard byte order. The htons subroutine is defined in the net/nh.h file as a macro.

Return Values

The htons subroutine returns a 16-bit integer in Internet network byte order (most significant byte first).

Implementation Specifics

The htons subroutine is part of Base Operating System (BOS) Runtime. All applications containing the htons subroutine must be compiled with _BSD set to a specific value. Acceptable values are 43 and 44. In addition, all socket applications must include the BSD libbsd.a library.

Related Information

The htonl subroutine, ntohl subroutine, ntohs subroutine. Sockets Overview in AIX Version 4.3 Communications Programming Concepts.
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/commtrf2/htons.htm
$ cnpm install @rocketsofawesome/mirage

Live Demo of the Pattern Library

First run the command:

```
npm install @rocketsofawesome/mirage
```

Then inside of your react project (preferably towards the top of the application) wrap the application in the `<ROATheme></ROATheme>` theme wrapper. This will set a bunch of props on children components such as colors, fonts, etc. To import other components into your react app, simply do the following:

```
import { Logo } from '@rocketsofawesome/mirage'
```

You should then have access to the component and can use it like a normal react component:

```
<Logo />
```

It's that easy! Additional props are documented in

To run the pattern library locally, run the following:

```
npm start
```

This will start the styleguidist server, which parses through the directories in the src directory and outputs those components to the pattern library.

If you would like to publish your new component(s) to the styleguide demo, commit your changes like you would normally. First, be sure to do your work on a feature branch; we have scripts that are run specifically on the master branch. If you are happy with the changes that you have made to your component and you wish to publish your changes to npm, do the following. First, be sure to add your component to src/index.js; if you test your components by pulling them in from the src directory, you should be able to tell if they are being exported properly for consumption. This is the file that gets distributed in the dist directory. Then run the following commands:

```
npm version patch
```

This command will bump the version of the pattern library by one patch number (Note: publishing to npm requires that a new version number be supplied). When you merge your feature branch onto master, the npm publish script will be run and your changes will then be available when you do `npm install --save @rocketsofawesome/mirage` in the codebase that is using our styled components.
There is also another command that gets run that builds the styleguide code and then publishes that code to the gh-pages branch. This serves the static version of the component library.

The Rockets of Awesome Pattern Library (RoA PL) is an organized set of interactive, reusable components that can be used to build out more complicated modules, pages, and templates. Pattern libraries often include example elements, sample code, variations, use cases, and considerations.
https://npm.taobao.org/package/@rocketsofawesome/mirage
Next Generation Architecture

Discussion for design and implementation of a next generation overhaul to MoinMoin's architecture. This includes elements such as the Summer of Code 2006 storage refactoring project.

Attachments:

- MoinArchDiagram.png (2006-09-03, 100.2 KB)
- Phase1Arch.png (2006-08-30, 52.7 KB)
- Phase2Arch.png (2006-08-30, 73.7 KB)

- Engines shared by multiple namespaces have potential problems when a namespace adds a page. A namespace must only add the page to this engine if the item does not exist in the entire namespace of all namespaces that use this particular engine. Or another approach is to allow an item to be a member of multiple namespaces?
- Namespaces can be built with specific rules over which engine's data is saved upon, at the implementation's discretion. A namespace parameter to item creation could dictate which namespace is to be used. The namespace must verify by its rules whether a certain ItemRevisionType is allowed within it.
- Potentially all transactions are processed with a session object rather than passed every time to the SI by value. This will reduce memory use and improve speed, as it will temporarily cache generated paths related to the engine.
http://www.moinmo.in/NextGenerationArchitecture
On Tue, 2008-10-21 at 22:55 -0400, Daniel Jacobowitz wrote:

> I haven't been following - but why this whole container restriction?
> Checkpoint/restart of individual processes is very useful too.
> There are issues with e.g. IPC, but I'm not convinced they're
> substantially different than the issues already present for a
> container.

Containers provide isolation. Once you have isolation, you have a discrete set of resources which you can checkpoint/restart.

Let's say you have a process you want to checkpoint. If it uses a completely discrete IPC namespace, you *know* that nothing else depends on those IPC ids. We don't even have to worry about who might have been using them and when.

Also think about pids. Without containers, how can you guarantee a restarted process that it can regain the same pid?

-- Dave
http://lkml.org/lkml/2008/10/21/431
A Haskell kernel for IPython.

cd ~/.local/bin; ls yields:

```
compile  disintegrate  ghc-modi  hkc    maple-update  momiji     pretty    stylish-haskell
density  ghc-mod       hakaru    hlint  mh            normalize  simplify
```

ok. I burned everything to the ground, cloned the latest commit and followed the mac installation instructions. I can open an iHaskell notebook. \o/ Two problems:

- :cd doesn't seem to support directory names with spaces. (If it's relevant, I'm on macOS.)
- import IHaskell.Display results in the error:

```
<interactive>:1:1: error:
    Failed to load interface for 'IHaskell.Display'
```

I'm having to supply the --extra-include-dir=/usr/local/include and --extra-lib-dir=/usr/local/lib options to my stack build --fast command, in order to get around the libmagic issue under macOS. Aren't those two locations fairly standard search targets for most build processes? Why am I having to provide them explicitly?

Hello, I'm trying to get through this tutorial for Haskell and Data Analysis, but the author is using Jupyter, which I never used before, so I had to install that plus the IHaskell thing so I have the option in Jupyter to do a Haskell notebook, but it's not working for me. I don't get any options of a Haskell notebook, and am having a LOT of difficulty with getting my dev environment set up to get this working. I am unable to get it to work right on either my Linux computer or my MBP and am having a lot of difficulty (was stuck for three days unable to get anywhere with it). Can somebody help me please? I followed these instructions for installing Jupyter here (and that seemed to work): Then I tried doing this part to get IHaskell installed: but that did not work for me. It failed at this step where you install this:

```
pip3 install -r requirements.txt
```

and here's the results of that: I can now run jupyter notebook (or ipython notebook) but I get no options for creating a Haskell notebook, only Python. Help me PLEASE!

...the requirements.txt part, and then went on to install it using Haskell Stack (which I also have). So then the final problem was that with Stack, it needs the environment variable path thing sorted. Which again, finally got that sorted.

import Text.CSV, it did not work for me.

%load_ext latex doesn't work either

ihaskell or something and also a mail address that comes with it ...
https://gitter.im/gibiansky/IHaskell?source=explore
Bioinformatics is an interdisciplinary field that develops methods and software tools for analyzing and understanding biological data. More simply stated, you can think of it as data science for biology. Among the many types of biological data, genomics data is one of the most widely analyzed. Especially with the rapid advancement of next-generation DNA sequencing (NGS) technologies, the volume of genomics data has been growing exponentially. According to Stephens, Zachary D et al., genomics data acquisition is on the exabyte-per-year scale.

In this post, I demo an example of analyzing a GFF3 file for the human genome with the SciPy Stack. Generic Feature Format Version 3 (GFF3) is the current standard text file format for storing genomic features. In particular, in this post you will learn how to use the SciPy stack to answer the following questions about the human genome:

- How much of the genome is incomplete?
- How many genes are there in the genome?
- How long is a typical gene?
- What does gene distribution among chromosomes look like?

The latest GFF3 file for the human genome can be downloaded from here. The README file that comes in this directory provides a brief description of this data format, and a more thorough specification is found here. We will use Pandas, a major component of the SciPy stack providing fast, flexible, and expressive data structures, to manipulate and understand the GFF3 file.

Setup

First things first, let's set up a virtual environment with the SciPy stack installed. This process can be time-consuming if built from source manually, as the stack involves many packages - some of which depend on external FORTRAN or C code. Here, I recommend using Miniconda, which makes the setup very easy.

```
wget
bash Miniconda3-latest-Linux-x86_64.sh -b
```

The -b flag on the bash line tells it to execute in batch mode.
After the commands above are used to successfully install Miniconda, start a new virtual environment for genomics, and then install the SciPy stack.

```
mkdir -p genomics
cd genomics
conda create -p venv ipython matplotlib pandas
```

Note that we have only specified the 3 packages we are going to use in this post. If you want all the packages listed in the SciPy stack, simply append them to the end of the conda create command. If you are unsure of the exact name of a package, try conda search.

Let's activate the virtual environment and start IPython.

```
source activate venv/
ipython
```

IPython is a significantly more powerful replacement for the default Python interpreter interface, so whatever you used to do in the default Python interpreter can also be done in IPython. I highly recommend every Python programmer who hasn't been using IPython yet to give it a try.

Download the Annotation File

With our setup now completed, let's download the human genome annotation file in GFF3 format. It is about 37 MB, a very small file compared to the information content of a human genome, which is about 3 GB in plain text. That's because the GFF3 file only contains the annotation of the sequences, while the sequence data is usually stored in another file format called FASTA. If you are interested, you can download FASTA here, but we won't use the sequence data in this tutorial.

```
!wget
```

The prefixed ! tells IPython that this is a shell command instead of a Python command. However, IPython can also process some frequently used shell commands like ls, pwd, rm, mkdir, rmdir even without a prefixed !.

Taking a look at the head of the GFF3 file, you will see many metadata/pragmas/directives lines starting with ## or #!. According to the README, ## means the metadata is stable, while #! means it's experimental. Later on you will also see ###, which is another directive with yet more subtle meaning based on the specification. Human-readable comments are supposed to come after a single #.
For simplicity, we will treat all lines starting with # as comments, and simply ignore them during our analysis.

```
##gff-version 3
##sequence-region 1 1 248956422
##sequence-region 10 1 133797422
##sequence-region 11 1 135086622
##sequence-region 12 1 133275309
...
##sequence-region MT 1 16569
##sequence-region X 1 156040895
##sequence-region Y 2781480 56887902
#!genome-build GRCh38.p7
#!genome-version GRCh38
#!genome-date 2013-12
#!genome-build-accession NCBI:GCA_000001405.22
#!genebuild-last-updated 2016-06
```

The first line indicates that the version of the GFF format used in this file is 3. Following that are summaries of all sequence regions. As we will see later, such information can also be found in the body part of the file. The lines starting with #! show information about the particular build of the genome, GRCh38.p7, that this annotation file applies to. The Genome Reference Consortium (GRC) is an international consortium which oversees updates and improvements to several reference genome assemblies, including those for human, mouse, zebrafish, and chicken.

Scanning through this file, here are the first few annotation lines.

```
1  GRCh38  chromosome         1      248956422  .        .  .  ID=chromosome:1;Alias=CM000663.2,chr1,NC_000001.11
###
1  .       biological_region  10469  11240      1.3e+03  .  .  external_name=oe %3D 0.79;logic_name=cpg
1  .       biological_region  10650  10657      0.999    +  .  logic_name=eponine
1  .       biological_region  10655  10657      0.999    -  .  logic_name=eponine
1  .       biological_region  10678  10687      0.999    +  .  logic_name=eponine
1  .       biological_region  10681  10688      0.999    -  .  logic_name=eponine
...
```

The columns are seqid, source, type, start, end, score, strand, phase, attributes. Some of them are very easy to understand. Take the first line as an example:

```
1  GRCh38  chromosome  1  248956422  .  .  .  ID=chromosome:1;Alias=CM000663.2,chr1,NC_000001.11
```

This is the annotation of the first chromosome, with a seqid of 1, which runs from the first base to the 248,956,422nd base.
In other words, the first chromosome is about 249 million bases long. Our analysis won't need information from the three columns with a value of . (i.e. score, strand, and phase), so we can simply ignore them for now. The last attributes column says Chromosome 1 also has three alias names, namely CM000663.2, chr1, and NC_000001.11.

That's basically what a GFF3 file looks like, but we won't inspect it line by line, so it's time to load the whole file into Pandas. Pandas is a good fit for dealing with the GFF3 format because it is a tab-delimited file, and Pandas has very good support for reading CSV-like files. Note one exception to the tab-delimited format is when the GFF3 contains ##FASTA. According to the specification, ##FASTA indicates the end of an annotation portion, which will be followed by one or more sequences in FASTA (a non-tab-delimited) format. But this is not the case for the GFF3 file we're going to analyze.

```
In [1]: import pandas as pd

In [2]: pd.__version__
Out[2]: '0.18.1'

In [3]: col_names = ['seqid', 'source', 'type', 'start', 'end', 'score', 'strand', 'phase', 'attributes']

In [4]: df = pd.read_csv('Homo_sapiens.GRCh38.85.gff3.gz', compression='gzip',
   ...:                  sep='\t', comment='#', low_memory=False,
   ...:                  header=None, names=col_names)
```
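As a side note, the attributes column packs several key=value pairs into one semicolon-separated string, with %XX percent-encoding for reserved characters (like the %3D seen in the sample lines above). Here is a small stdlib-only sketch (my own helper, not from the post) of how such a field can be unpacked:

```python
from urllib.parse import unquote

def parse_gff3_attributes(field):
    """Unpack a GFF3 attributes field like
    'ID=chromosome:1;Alias=CM000663.2,chr1,NC_000001.11'
    into a dict, percent-decoding the values per the GFF3 spec."""
    attrs = {}
    for pair in field.split(';'):
        if not pair:
            continue
        key, _, value = pair.partition('=')
        attrs[key] = unquote(value)
    return attrs

line_attrs = parse_gff3_attributes(
    'ID=chromosome:1;Alias=CM000663.2,chr1,NC_000001.11')
print(line_attrs['ID'])                # chromosome:1
print(line_attrs['Alias'].split(','))  # ['CM000663.2', 'chr1', 'NC_000001.11']
```

Note that multi-valued attributes such as Alias are themselves comma-separated, so a second split recovers the individual aliases.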
comment='#' means lines starting with # are considered comments and will be ignored. compression='gzip' tells Pandas that the input file is a gzip-compressed. In addition, pandas.read_csv has a rich set of parameters that allows different kinds of CSV-like file formats to be read. The type of the returned value is a DataFrame, which is the most important data structure in Pandas, used for representing 2D data. Pandas also has a Series and Panel data structure for 1D and 3D data, respectively. Please refer to the documentation for an introduction to Pandas’ data structures. Let’s take a look at the first few entries with .head method. In [18]: df.head() Out[18]: seqid source type start end score strand phase attributes 0 1 GRCh38 chromosome 1 248956422 . . . ID=chromosome:1;Alias=CM000663.2,chr1,NC_00000... 1 1 . biological_region 10469 11240 1.3e+03 . . external_name=oe %3D 0.79;logic_name=cpg 2 1 . biological_region 10650 10657 0.999 + . logic_name=eponine 3 1 . biological_region 10655 10657 0.999 - . logic_name=eponine 4 1 . biological_region 10678 10687 0.999 + . logic_name=eponine The output is nicely formatted in a tabular format with longer strings in the attributes column partially replaced with .... You can set Pandas to not omit long strings with pd.set_option('display.max_colwidth', -1). In addition, Pandas has many options that can be customized. Next, let’s get some basic information about this dataframe with the .info method. In [20]: df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 2601849 entries, 0 to 2601848 Data columns (total 9 columns): seqid object source object type object start int64 end int64 score object strand object phase object attributes object dtypes: int64(2), object(7) memory usage: 178.7+ MB This shows that the GFF3 has 2,601,848 annotated lines, and each line has nine columns. For each column, it also shows their data types. That start and end are of int64 type, integers representing positions in the genome. 
The other columns are all of type object, which probably means their values consist of a mixture of integers, floats, and strings.

The size of all the information is about 178.7+ MB stored in memory. This turns out to be more compact than the uncompressed file, which is about 402 MB. A quick verification is shown below.

```
gunzip -c Homo_sapiens.GRCh38.85.gff3.gz > /tmp/tmp.gff3 && du -s /tmp/tmp.gff3
402M    /tmp/tmp.gff3
```

From a high-level view, we have loaded the entire GFF3 file into a DataFrame object in Python, and all of our following analysis will be based on this single object.

Now, let's see what the first column seqid is all about.

```
In [29]: df.seqid.unique()
Out[29]:
array(['1', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19',
       '2', '20', '21', '22', '3', '4', '5', '6', '7', '8', '9',
       'GL000008.2', 'GL000009.2', 'GL000194.1', 'GL000195.1',
       ...
       'KI270757.1', 'MT', 'X', 'Y'], dtype=object)

In [30]: df.seqid.unique().shape
Out[30]: (194,)
```

df.seqid is one way to access column data from a dataframe. Another way is df['seqid'], which is more general syntax, because if the column name is a Python reserved keyword (e.g. class) or contains a . or space character, the first way (df.seqid) won't work.

The output shows that there are 194 unique seqids, which include Chromosomes 1 to 22, X, Y, and mitochondrial (MT) DNA, as well as 169 other seqids. The seqids starting with KI and GL are DNA sequences - or scaffolds - in the genome that have not been successfully assembled into the chromosomes. For those who are unfamiliar with genomics, this is important: although the first human genome draft came out more than 15 years ago, the current human genome is still incomplete. The difficulty in assembling these sequences is largely due to complex repetitive regions in the genome.

Next, let's take a look at the source column.
The README says that the source is a free text qualifier intended to describe the algorithm or operating procedure that generated this feature. In [66]: df.source.value_counts() Out[66]: havana 1441093 ensembl_havana 745065 ensembl 228212 . 182510 mirbase 4701 GRCh38 194 insdc 74 This is an example of the use of the value_counts method, which is extremely useful for a quick count of categorical variables. From the result, we can see that there are seven possible values for this column, and the majority of entries in the GFF3 file come from havana, ensembl and ensembl_havana. You can learn more about what these sources mean and the relationships between them in this post. To keep things simple, we will focus on entries from sources GRCh38, havana, ensembl, and ensembl_havan.a. How Much of the Genome Is Incomplete? The information about each entire chromosome is in the entries from source GRCh38, so let’s first filter out the rest, and assign the filtered result to a new variable gdf. In [70]: gdf = df[df.source == 'GRCh38'] In [87]: gdf.shape Out[87]: (194, 9) In [84]: gdf.sample(10) Out[84]: seqid source type start end score strand phase attributes 2511585 KI270708.1 GRCh38 supercontig 1 127682 . . . ID=supercontig:KI270708.1;Alias=chr1_KI270708v... 2510840 GL000208.1 GRCh38 supercontig 1 92689 . . . ID=supercontig:GL000208.1;Alias=chr5_GL000208v... 990810 17 GRCh38 chromosome 1 83257441 . . . ID=chromosome:17;Alias=CM000679.2,chr17,NC_000... 2511481 KI270373.1 GRCh38 supercontig 1 1451 . . . ID=supercontig:KI270373.1;Alias=chrUn_KI270373... 2511490 KI270384.1 GRCh38 supercontig 1 1658 . . . ID=supercontig:KI270384.1;Alias=chrUn_KI270384... 2080148 6 GRCh38 chromosome 1 170805979 . . . ID=chromosome:6;Alias=CM000668.2,chr6,NC_00000... 2511504 KI270412.1 GRCh38 supercontig 1 1179 . . . ID=supercontig:KI270412.1;Alias=chrUn_KI270412... 1201561 19 GRCh38 chromosome 1 58617616 . . . ID=chromosome:19;Alias=CM000681.2,chr19,NC_000... 
2511474 KI270340.1 GRCh38 supercontig 1 1428 . . . ID=supercontig:KI270340.1;Alias=chrUn_KI270340...
2594560 Y GRCh38 chromosome 2781480 56887902 . . . ID=chromosome:Y;Alias=CM000686.2,chrY,NC_00002...

Filtering is easy in Pandas. If you inspect the value evaluated from the expression df.source == 'GRCh38', it's a series of True and False values with the same index as df. Passing it to df[] returns only the entries whose corresponding values are True. There are 194 such entries, and, as we've seen previously, there are also 194 unique values in the seqid column, meaning each entry in gdf corresponds to a particular seqid. Then we randomly select 10 entries with the sample method to take a closer look. You can see that the unassembled sequences are of type supercontig while the others are of type chromosome. To compute the fraction of the genome that's incomplete, we first need to know the length of the entire genome, which is the sum of the lengths of all seqids.

In [90]: gdf = gdf.copy()
In [91]: gdf['length'] = gdf.end - gdf.start + 1
In [93]: gdf.head()
Out[93]:
seqid source type start end score strand phase attributes length
0 1 GRCh38 chromosome 1 248956422 . . . ID=chromosome:1;Alias=CM000663.2,chr1,NC_00000... 248956421
235068 10 GRCh38 chromosome 1 133797422 . . . ID=chromosome:10;Alias=CM000672.2,chr10,NC_000... 133797421
328938 11 GRCh38 chromosome 1 135086622 . . . ID=chromosome:11;Alias=CM000673.2,chr11,NC_000... 135086621
483370 12 GRCh38 chromosome 1 133275309 . . . ID=chromosome:12;Alias=CM000674.2,chr12,NC_000... 133275308
634486 13 GRCh38 chromosome 1 114364328 . . . ID=chromosome:13;Alias=CM000675.2,chr13,NC_000... 114364327

In [97]: gdf.length.sum()
Out[97]: 3096629532
In [99]: chrs = [str(_) for _ in range(1, 23)] + ['X', 'Y', 'MT']
In [101]: gdf[-gdf.seqid.isin(chrs)].length.sum() / gdf.length.sum()
Out[101]: 0.0037021917421198327

In the snippet above, we first made a copy of gdf with .copy().
Otherwise, the original gdf is just a slice of df, and modifying it directly would result in a SettingWithCopyWarning (see here for more details). We then calculate the length of each entry and add it back to gdf as a new column named "length". The total length turns out to be about 3.1 billion bases, and the fraction of unassembled sequences is about 0.37%. Here is how the slicing works in the last two commands. First, we create a list of strings that covers all seqids of well-assembled sequences, which are the chromosomes and the mitochondrion. We then use the isin method to select all entries whose seqid is in the chrs list. A minus sign (-) is added in front of the boolean index to reverse the selection, because we actually want everything that is not in the list (i.e. the unassembled ones starting with KI and GL).

Note: Since the assembled and unassembled sequences are distinguished by the type column, the last line can alternatively be rewritten as follows to obtain the same result.

gdf[(gdf['type'] == 'supercontig')].length.sum() / gdf.length.sum()

How Many Genes Are There?

Here we focus on the entries from the sources ensembl, havana, and ensembl_havana, since that is where the majority of the annotation entries belong.

In [109]: edf = df[df.source.isin(['ensembl', 'havana', 'ensembl_havana'])]
In [111]: edf.sample(10)
Out[111]:
seqid source type start end score strand phase attributes
915996 16 havana CDS 27463541 27463592 . - 2 ID=CDS:ENSP00000457449;Parent=transcript:ENST0...
2531429 X havana exon 41196251 41196359 . + . Parent=transcript:ENST00000462850;Name=ENSE000...
1221944 19 ensembl_havana CDS 5641740 5641946 . + 0 ID=CDS:ENSP00000467423;Parent=transcript:ENST0...
243070 10 havana exon 13116267 13116340 . + . Parent=transcript:ENST00000378764;Name=ENSE000...
2413583 8 ensembl_havana exon 144359184 144359423 . + . Parent=transcript:ENST00000530047;Name=ENSE000...
2160496 6 havana exon 111322569 111322678 . - .
Parent=transcript:ENST00000434009;Name=ENSE000...
839952 15 havana exon 76227713 76227897 . - . Parent=transcript:ENST00000565910;Name=ENSE000...
957782 16 ensembl_havana exon 67541653 67541782 . + . Parent=transcript:ENST00000379312;Name=ENSE000...
1632979 21 ensembl_havana exon 37840658 37840709 . - . Parent=transcript:ENST00000609713;Name=ENSE000...
1953399 4 havana exon 165464390 165464586 . + . Parent=transcript:ENST00000511992;Name=ENSE000...

In [123]: edf.type.value_counts()
Out[123]:
exon                    1180596
CDS                      704604
five_prime_UTR           142387
three_prime_UTR          133938
transcript                96375
gene                      42470
processed_transcript      28228
...
Name: type, dtype: int64

The isin method is used again for filtering. Then a quick value count shows that the majority of the entries are exons, coding sequences (CDS), and untranslated regions (UTR). These are sub-gene elements, but we are mainly looking for the gene count. As shown, there are 42,470 genes, but we want to know more. Specifically, what are their names, and what do they do? To answer these questions, we need to look closely at the information in the attributes column.
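Before reaching for regular expressions, it helps to see the structure of a single attributes string. Here is a minimal sketch, using a shortened, hypothetical attributes value, that unpacks one string into a dict:

```python
# Shortened, hypothetical attributes string in the GFF3 tag=value style.
attr = ("ID=gene:ENSG00000223972;Name=DDX11L1;"
        "biotype=transcribed_unprocessed_pseudogene")

# Split on ';' to get tag=value pairs, then split each pair on its first '='.
fields = dict(pair.split("=", 1) for pair in attr.split(";"))
print(fields["Name"])  # DDX11L1
```

This naive split would break if a value itself contained a semicolon, which is one reason the extraction below uses regexes on the real file.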
In [127]: ndf = edf[edf.type == 'gene']
In [173]: ndf = ndf.copy()
In [133]: ndf.sample(10).attributes.values
Out[133]:
array(['ID=gene:ENSG00000228611;Name=HNF4GP1;biotype=processed_pseudogene;description=hepatocyte nuclear factor 4 gamma pseudogene 1 [Source:HGNC Symbol%3BAcc:HGNC:35417];gene_id=ENSG00000228611;havana_gene=OTTHUMG00000016986;havana_version=2;logic_name=havana;version=2',
'ID=gene:ENSG00000177189;Name=RPS6KA3;biotype=protein_coding;description=ribosomal protein S6 kinase A3 [Source:HGNC Symbol%3BAcc:HGNC:10432];gene_id=ENSG00000177189;havana_gene=OTTHUMG00000021231;havana_version=5;logic_name=ensembl_havana_gene;version=12',
'ID=gene:ENSG00000231748;Name=RP11-227H15.5;biotype=antisense;gene_id=ENSG00000231748;havana_gene=OTTHUMG00000018373;havana_version=1;logic_name=havana;version=1',
'ID=gene:ENSG00000227426;Name=VN1R33P;biotype=unitary_pseudogene;description=vomeronasal 1 receptor 33 pseudogene [Source:HGNC Symbol%3BAcc:HGNC:37353];gene_id=ENSG00000227426;havana_gene=OTTHUMG00000154474;havana_version=1;logic_name=havana;version=1',
'ID=gene:ENSG00000087250;Name=MT3;biotype=protein_coding;description=metallothionein 3 [Source:HGNC Symbol%3BAcc:HGNC:7408];gene_id=ENSG00000087250;havana_gene=OTTHUMG00000133282;havana_version=3;logic_name=ensembl_havana_gene;version=8',
'ID=gene:ENSG00000177108;Name=ZDHHC22;biotype=protein_coding;description=zinc finger DHHC-type containing 22 [Source:HGNC Symbol%3BAcc:HGNC:20106];gene_id=ENSG00000177108;havana_gene=OTTHUMG00000171575;havana_version=3;logic_name=ensembl_havana_gene;version=5',
'ID=gene:ENSG00000249784;Name=SCARNA22;biotype=scaRNA;description=small Cajal body-specific RNA 22 [Source:HGNC Symbol%3BAcc:HGNC:32580];gene_id=ENSG00000249784;logic_name=ncrna;version=1',
'ID=gene:ENSG00000079101;Name=CLUL1;biotype=protein_coding;description=clusterin like 1 [Source:HGNC
Symbol%3BAcc:HGNC:2096];gene_id=ENSG00000079101;havana_gene=OTTHUMG00000178252;havana_version=7;logic_name=ensembl_havana_gene;version=16',
'ID=gene:ENSG00000229224;Name=AC105398.3;biotype=antisense;gene_id=ENSG00000229224;havana_gene=OTTHUMG00000152025;havana_version=1;logic_name=havana;version=1',
'ID=gene:ENSG00000255552;Name=LY6G6E;biotype=protein_coding;description=lymphocyte antigen 6 complex%2C locus G6E (pseudogene) [Source:HGNC Symbol%3BAcc:HGNC:13934];gene_id=ENSG00000255552;havana_gene=OTTHUMG00000166419;havana_version=1;logic_name=ensembl_havana_gene;version=7'], dtype=object)

They are formatted as a semicolon-separated list of tag=value pairs. The information we are most interested in is the gene name, gene ID, and description, and we will extract them with regular expressions (regex).

import re

RE_GENE_NAME = re.compile(r'Name=(?P<gene_name>.+?);')

def extract_gene_name(attributes_str):
    res = RE_GENE_NAME.search(attributes_str)
    return res.group('gene_name')

ndf['gene_name'] = ndf.attributes.apply(extract_gene_name)

First, we extract the gene names. In the regex Name=(?P<gene_name>.+?);, +? is used instead of + because we want it to be non-greedy and let the search stop at the first semicolon; otherwise, the result would match up to the last semicolon. Also, the regex is compiled once with re.compile, instead of being used directly as in re.search, for better performance, because we will apply it to thousands of attribute strings. extract_gene_name serves as a helper function to be used with .apply, which is the method to use when a function needs to be applied to every entry of a DataFrame or Series. In this particular case, we want to extract the gene name for every entry in ndf.attributes and add the names back to ndf in a new column called gene_name. Gene IDs and descriptions are extracted in a similar way.
RE_GENE_ID = re.compile(r'gene_id=(?P<gene_id>ENSG.+?);')

def extract_gene_id(attributes_str):
    res = RE_GENE_ID.search(attributes_str)
    return res.group('gene_id')

ndf['gene_id'] = ndf.attributes.apply(extract_gene_id)

RE_DESC = re.compile('description=(?P<desc>.+?);')

def extract_description(attributes_str):
    res = RE_DESC.search(attributes_str)
    if res is None:
        return ''
    else:
        return res.group('desc')

ndf['desc'] = ndf.attributes.apply(extract_description)

The regex for RE_GENE_ID is a bit more specific, since we know that every gene_id must start with ENSG, where ENS means Ensembl and G means gene. For entries that don't have any description, we return an empty string. After everything is extracted, we won't use the attributes column anymore, so let's drop it to keep things nice and clean with the .drop method:

In [224]: ndf.drop('attributes', axis=1, inplace=True)
In [225]: ndf.head()
Out[225]:
seqid source type start end score strand phase gene_id gene_name desc
16 1 havana gene 11869 14409 . + . ENSG00000223972 DDX11L1 DEAD/H-box helicase 11 like 1 [Source:HGNC Sym...
28 1 havana gene 14404 29570 . - . ENSG00000227232 WASH7P WAS protein family homolog 7 pseudogene [Sourc...
71 1 havana gene 52473 53312 . + . ENSG00000268020 OR4G4P olfactory receptor family 4 subfamily G member...
74 1 havana gene 62948 63887 . + . ENSG00000240361 OR4G11P olfactory receptor family 4 subfamily G member...
77 1 ensembl_havana gene 69091 70008 . + . ENSG00000186092 OR4F5 olfactory receptor family 4 subfamily F member...

In the above call, 'attributes' indicates the specific column we want to drop. axis=1 means we are dropping a column instead of a row (axis=0 is the default). inplace=True means that the drop operates on the DataFrame itself instead of returning a new copy with the specified column dropped. A quick look with .head shows that the attributes column is indeed gone, and that three new columns (gene_name, gene_id, and desc) have been added.
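As an aside, the same extraction can be done without a helper function via pandas' vectorized Series.str.extract method, which accepts the same named-group regex syntax. A sketch on two hypothetical attribute strings:

```python
import pandas as pd

# Two hypothetical attribute strings standing in for ndf.attributes.
attrs = pd.Series([
    "ID=gene:ENSG00000223972;Name=DDX11L1;biotype=pseudogene;",
    "ID=gene:ENSG00000227232;Name=WASH7P;biotype=pseudogene;",
])

# Named groups become column names in the resulting DataFrame.
# [^;]+ stops at the first semicolon, like the non-greedy .+?; above.
out = attrs.str.extract(r"Name=(?P<gene_name>[^;]+)")
```

The result is a DataFrame with one column, gene_name, holding the extracted names in order.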
Out of curiosity, let's see if all gene IDs and gene names are unique:

In [232]: ndf.shape
Out[232]: (42470, 11)
In [233]: ndf.gene_id.unique().shape
Out[233]: (42470,)
In [234]: ndf.gene_name.unique().shape
Out[234]: (42387,)

Surprisingly, the number of gene names is smaller than the number of gene IDs, indicating that some gene names must correspond to multiple gene IDs. Let's find out what they are.

In [243]: count_df = ndf.groupby('gene_name').count().ix[:, 0].sort_values().ix[::-1]
In [244]: count_df.head(10)
Out[244]:
gene_name
SCARNA20           7
SCARNA16           6
SCARNA17           5
SCARNA15           4
SCARNA21           4
SCARNA11           4
Clostridiales-1    3
SCARNA4            3
C1QTNF9B-AS1       2
C11orf71           2
Name: seqid, dtype: int64

In [262]: count_df[count_df > 1].shape
Out[262]: (63,)
In [263]: count_df.shape
Out[263]: (42387,)
In [264]: count_df[count_df > 1].shape[0] / count_df.shape[0]
Out[264]: 0.0014863047632528842

We group all entries by the value of gene_name, then count the number of items in each group with .count(). If you inspect the output from ndf.groupby('gene_name').count(), all columns are counted for each group, but most of them hold the same values. Since NA values are not considered when counting, we take the count of just the first column, seqid (selected with .ix[:, 0]), which contains no NA values. Then we sort the counts with .sort_values and reverse the order with .ix[::-1]. The result shows that a gene name can be shared by up to seven gene IDs.

In [255]: ndf[ndf.gene_name == 'SCARNA20']
Out[255]:
seqid source type start end score strand phase gene_id gene_name desc
179399 1 ensembl gene 171768070 171768175 . + . ENSG00000253060 SCARNA20 Small Cajal body specific RNA 20 [Source:RFAM%3BAcc:RF00601]
201037 1 ensembl gene 204727991 204728106 . + . ENSG00000251861 SCARNA20 Small Cajal body specific RNA 20 [Source:RFAM%3BAcc:RF00601]
349203 11 ensembl gene 8555016 8555146 . + .
ENSG00000252778 SCARNA20 Small Cajal body specific RNA 20 [Source:RFAM%3BAcc:RF00601]
718520 14 ensembl gene 63479272 63479413 . + . ENSG00000252800 SCARNA20 Small Cajal body specific RNA 20 [Source:RFAM%3BAcc:RF00601]
837233 15 ensembl gene 75121536 75121666 . - . ENSG00000252722 SCARNA20 Small Cajal body specific RNA 20 [Source:RFAM%3BAcc:RF00601]
1039874 17 ensembl gene 28018770 28018907 . + . ENSG00000251818 SCARNA20 Small Cajal body specific RNA 20 [Source:RFAM%3BAcc:RF00601]
1108215 17 ensembl gene 60231516 60231646 . - . ENSG00000252577 SCARNA20 small Cajal body-specific RNA 20 [Source:HGNC Symbol%3BAcc:HGNC:32578]

A closer look at all the SCARNA20 genes shows that they're indeed all different. While they share the same name, they are located at different positions in the genome. Their descriptions, however, don't seem very helpful in distinguishing them. The point here is to know that gene names are not unique across all gene IDs, and that about 0.15% of names are shared by multiple genes.

How Long Is a Typical Gene?

Similar to what we did when investigating the incompleteness of the genome, we can easily add a length column to ndf:

In [277]: ndf['length'] = ndf.end - ndf.start + 1
In [278]: ndf.length.describe()
Out[278]:
count    4.247000e+04
mean     3.583348e+04
std      9.683485e+04
min      8.000000e+00
25%      8.840000e+02
50%      5.170500e+03
75%      3.055200e+04
max      2.304997e+06
Name: length, dtype: float64

.describe() calculates some simple statistics based on the length values:

- The mean length of a gene is about 36,000 bases.
- The median length of a gene is about 5,200 bases.
- The minimum and maximum gene lengths are about eight bases and 2.3 million bases, respectively.

Because the mean is much larger than the median, the length distribution is skewed to the right. To get a more concrete look, let's plot the distribution.
import matplotlib.pyplot as plt

ndf.length.plot(kind='hist', bins=50, logy=True)
plt.show()

Pandas provides a simple interface to matplotlib that makes plotting very handy with DataFrames or Series. In this case, it says that we want a histogram plot (kind='hist') with 50 bins, with the y axis on a log scale (logy=True). From the histogram, we can see that the majority of genes fall within the first bin. However, some genes are more than two million bases long. Let's find out what they are:

In [39]: ndf[ndf.length > 2e6].sort_values('length').ix[::-1]
Out[39]:
seqid source type start end score strand phase gene_name gene_id desc length
2309345 7 ensembl_havana gene 146116002 148420998 . + . CNTNAP2 ENSG00000174469 contactin associated protein-like 2 [Source:HG... 2304997
2422510 9 ensembl_havana gene 8314246 10612723 . - . PTPRD ENSG00000153707 protein tyrosine phosphatase%2C receptor type ... 2298478
2527169 X ensembl_havana gene 31097677 33339441 . - . DMD ENSG00000198947 dystrophin [Source:HGNC Symbol%3BAcc:HGNC:2928] 2241765
440886 11 ensembl_havana gene 83455012 85627922 . - . DLG2 ENSG00000150672 discs large MAGUK scaffold protein 2 [Source:H... 2172911
2323457 8 ensembl_havana gene 2935353 4994972 . - . CSMD1 ENSG00000183117 CUB and Sushi multiple domains 1 [Source:HGNC ... 2059620
1569914 20 ensembl_havana gene 13995369 16053197 . + . MACROD2 ENSG00000172264 MACRO domain containing 2 [Source:HGNC Symbol%... 2057829

As you can see, the longest gene is named CNTNAP2, which is short for contactin associated protein-like 2. According to its Wikipedia page, "This gene encompasses almost 1.6% of chromosome 7 and is one of the largest genes in the human genome." Indeed! We just verified that ourselves. In contrast, what about the smallest genes? It turns out that they can be as short as eight bases.
In [40]: ndf.sort_values('length').head()
Out[40]:
seqid source type start end score strand phase gene_name gene_id desc length
682278 14 havana gene 22438547 22438554 . + . TRDD1 ENSG00000223997 T cell receptor delta diversity 1 [Source:HGNC... 8
682282 14 havana gene 22439007 22439015 . + . TRDD2 ENSG00000237235 T cell receptor delta diversity 2 [Source:HGNC... 9
2306836 7 havana gene 142786213 142786224 . + . TRBD1 ENSG00000282431 T cell receptor beta diversity 1 [Source:HGNC ... 12
682286 14 havana gene 22449113 22449125 . + . TRDD3 ENSG00000228985 T cell receptor delta diversity 3 [Source:HGNC... 13
1879625 4 havana gene 10238213 10238235 . - . AC006499.9 ENSG00000271544 23

The lengths of the two extreme cases are five orders of magnitude apart (2.3 million bases vs. eight), which is enormous and hints at the diversity of life. A single gene can be translated into many different proteins via a process called alternative splicing, something we haven't explored here. Such information is also inside the GFF3 file, but outside the scope of this post.

Gene Distribution Among Chromosomes

The last thing I'd like to discuss is gene distribution among the chromosomes, which also serves as an example for introducing the .merge method for combining two DataFrames. Intuitively, longer chromosomes likely host more genes. Let's see if that is true.

In [53]: ndf = ndf[ndf.seqid.isin(chrs)]
In [54]: chr_gene_counts = ndf.groupby('seqid').count().ix[:, 0].sort_values().ix[::-1]
In [54]: chr_gene_counts
Out[54]:
seqid
1     3902
2     2806
11    2561
19    2412
17    2280
3     2204
6     2154
12    2140
7     2106
5     2002
16    1881
X     1852
4     1751
9     1659
8     1628
10    1600
15    1476
14    1449
22     996
20     965
13     872
18     766
21     541
Y      436
Name: source, dtype: int64

We borrowed the chrs variable from the previous section and used it to filter out the unassembled sequences. Based on the output, the largest chromosome, Chromosome 1, indeed has the most genes.
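As an aside, the groupby-then-count idiom used above can also be written with value_counts directly. A toy sketch (hypothetical rows) showing that the two produce identical per-seqid counts:

```python
import pandas as pd

# Toy stand-in for ndf with hypothetical rows.
ndf_toy = pd.DataFrame({
    "seqid": ["1", "1", "2", "X", "1", "2"],
    "type":  ["gene"] * 6,
})

via_groupby = ndf_toy.groupby("seqid")["type"].count()
via_value_counts = ndf_toy.seqid.value_counts()

# Same counts either way (only the ordering may differ).
assert via_groupby.to_dict() == via_value_counts.to_dict()
```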
While Chromosome Y has the smallest number of genes, it is not the smallest chromosome. Note that there seem to be no genes in the mitochondrion (MT), which is not true. A bit more filtering on the first DataFrame df returned by pd.read_csv shows that all MT genes are from the source insdc (these were filtered out earlier when generating edf, where we only considered the sources havana, ensembl, and ensembl_havana).

In [134]: df[(df.type == 'gene') & (df.seqid == 'MT')]
Out[134]:
seqid source type start end score strand phase attributes
2514003 MT insdc gene 648 1601 . + . ID=gene:ENSG00000211459;Name=MT-RNR1;biotype=M...
2514009 MT insdc gene 1671 3229 . + . ID=gene:ENSG00000210082;Name=MT-RNR2;biotype=M...
2514016 MT insdc gene 3307 4262 . + . ID=gene:ENSG00000198888;Name=MT-ND1;biotype=pr...
2514029 MT insdc gene 4470 5511 . + . ID=gene:ENSG00000198763;Name=MT-ND2;biotype=pr...
2514048 MT insdc gene 5904 7445 . + . ID=gene:ENSG00000198804;Name=MT-CO1;biotype=pr...
2514058 MT insdc gene 7586 8269 . + . ID=gene:ENSG00000198712;Name=MT-CO2;biotype=pr...
2514065 MT insdc gene 8366 8572 . + . ID=gene:ENSG00000228253;Name=MT-ATP8;biotype=p...
2514069 MT insdc gene 8527 9207 . + . ID=gene:ENSG00000198899;Name=MT-ATP6;biotype=p...
2514073 MT insdc gene 9207 9990 . + . ID=gene:ENSG00000198938;Name=MT-CO3;biotype=pr...
2514080 MT insdc gene 10059 10404 . + . ID=gene:ENSG00000198840;Name=MT-ND3;biotype=pr...
2514087 MT insdc gene 10470 10766 . + . ID=gene:ENSG00000212907;Name=MT-ND4L;biotype=p...
2514091 MT insdc gene 10760 12137 . + . ID=gene:ENSG00000198886;Name=MT-ND4;biotype=pr...
2514104 MT insdc gene 12337 14148 . + . ID=gene:ENSG00000198786;Name=MT-ND5;biotype=pr...
2514108 MT insdc gene 14149 14673 . - . ID=gene:ENSG00000198695;Name=MT-ND6;biotype=pr...
2514115 MT insdc gene 14747 15887 . + . ID=gene:ENSG00000198727;Name=MT-CYB;biotype=pr...

This example also shows how to combine two conditions during filtering with &; the logical operator for "or" would be |.
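The element-wise behavior of & and | can be seen on a toy frame (hypothetical rows):

```python
import pandas as pd

df_toy = pd.DataFrame({
    "type":  ["gene", "exon", "gene", "CDS"],
    "seqid": ["MT",   "MT",   "1",    "MT"],
})

# "and": rows that are genes AND on MT (only the first row qualifies).
both = df_toy[(df_toy.type == "gene") & (df_toy.seqid == "MT")]

# "or": rows that are genes OR on MT (every row here qualifies).
either = df_toy[(df_toy.type == "gene") | (df_toy.seqid == "MT")]
```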
Note that the parentheses around each condition are required; this part of the Pandas syntax is different from plain Python, which would use the literal and and or. Next, let's borrow the gdf DataFrame from the previous section as a source for the length of each chromosome:

In [61]: gdf = gdf[gdf.seqid.isin(chrs)]
In [62]: gdf.drop(['start', 'end', 'score', 'strand', 'phase', 'attributes'], axis=1, inplace=True)
In [63]: gdf.sort_values('length').ix[::-1]
Out[63]:
seqid source type length
0 1 GRCh38 chromosome 248956422
1364641 2 GRCh38 chromosome 242193529
1705855 3 GRCh38 chromosome 198295559
1864567 4 GRCh38 chromosome 190214555
1964921 5 GRCh38 chromosome 181538259
2080148 6 GRCh38 chromosome 170805979
2196981 7 GRCh38 chromosome 159345973
2514125 X GRCh38 chromosome 156040895
2321361 8 GRCh38 chromosome 145138636
2416560 9 GRCh38 chromosome 138394717
328938 11 GRCh38 chromosome 135086622
235068 10 GRCh38 chromosome 133797422
483370 12 GRCh38 chromosome 133275309
634486 13 GRCh38 chromosome 114364328
674767 14 GRCh38 chromosome 107043718
767312 15 GRCh38 chromosome 101991189
865053 16 GRCh38 chromosome 90338345
990810 17 GRCh38 chromosome 83257441
1155977 18 GRCh38 chromosome 80373285
1559144 20 GRCh38 chromosome 64444167
1201561 19 GRCh38 chromosome 58617616
2594560 Y GRCh38 chromosome 54106423
1647482 22 GRCh38 chromosome 50818468
1616710 21 GRCh38 chromosome 46709983
2513999 MT GRCh38 chromosome 16569

The columns that are not relevant to the analysis are dropped for clarity. Yes, .drop can also take a list of columns and drop them all in one operation. Note that the row with a seqid of MT is still there; we will get back to it later. The next operation we will perform is merging the two datasets on the values of seqid.
In [73]: cdf = chr_gene_counts.to_frame(name='gene_count').reset_index()
In [75]: cdf.head(2)
Out[75]:
seqid gene_count
0 1 3902
1 2 2806
In [78]: merged = gdf.merge(cdf, on='seqid')
In [79]: merged
Out[79]:
seqid source type length gene_count
0 1 GRCh38 chromosome 248956422 3902
1 10 GRCh38 chromosome 133797422 1600
2 11 GRCh38 chromosome 135086622 2561
3 12 GRCh38 chromosome 133275309 2140
4 13 GRCh38 chromosome 114364328 872
5 14 GRCh38 chromosome 107043718 1449
6 15 GRCh38 chromosome 101991189 1476
7 16 GRCh38 chromosome 90338345 1881
8 17 GRCh38 chromosome 83257441 2280
9 18 GRCh38 chromosome 80373285 766
10 19 GRCh38 chromosome 58617616 2412
11 2 GRCh38 chromosome 242193529 2806
12 20 GRCh38 chromosome 64444167 965
13 21 GRCh38 chromosome 46709983 541
14 22 GRCh38 chromosome 50818468 996
15 3 GRCh38 chromosome 198295559 2204
16 4 GRCh38 chromosome 190214555 1751
17 5 GRCh38 chromosome 181538259 2002
18 6 GRCh38 chromosome 170805979 2154
19 7 GRCh38 chromosome 159345973 2106
20 8 GRCh38 chromosome 145138636 1628
21 9 GRCh38 chromosome 138394717 1659
22 X GRCh38 chromosome 156040895 1852
23 Y GRCh38 chromosome 54106423 436

Since chr_gene_counts is still a Series object, which doesn't support a merge operation, we first convert it to a DataFrame with .to_frame. .reset_index() converts the original index (i.e. seqid) into a new column and resets the current index to 0-based incremental numbers. The output of cdf.head(2) shows what it looks like. Next, we use the .merge method to combine the two DataFrames on the seqid column (on='seqid'). After merging gdf and cdf, the MT entry is missing. This is because, by default, .merge performs an inner join; left join, right join, and outer join are available by tuning the how parameter. Please refer to the documentation for more details. Later, you may find that there is also a related .join method. .merge and .join are similar yet have different APIs. As the official documentation explains:
Basically, .merge is more general-purpose and is used by .join. Finally, we are ready to calculate the correlation between chromosome length and gene count.

In [81]: merged[['length', 'gene_count']].corr()
Out[81]:
            length  gene_count
length    1.000000    0.728221
gene_count 0.728221   1.000000

By default, .corr calculates the Pearson correlation between all pairs of columns in a DataFrame. We have only a single pair of columns in this case, and the correlation turns out to be positive: 0.73. In other words, the larger the chromosome, the more genes it is likely to have. Let's also plot the two columns after sorting the value pairs by length:

ax = merged[['length', 'gene_count']].sort_values('length').plot(x='length', y='gene_count', style='o-')

# add some margin to both ends of the x axis
xlim = ax.get_xlim()
margin = xlim[0] * 0.1
ax.set_xlim([xlim[0] - margin, xlim[1] + margin])

# label each point on the graph
for (s, x, y) in merged[['seqid', 'length', 'gene_count']].sort_values('length').values:
    ax.text(x, y - 100, str(s))

As seen in the image above, even though the correlation is positive overall, it does not hold for all chromosomes. In particular, for Chromosomes 17, 16, 15, 14, and 13, the trend is reversed: the number of genes decreases as the chromosome size increases.

Findings and Future Research

That ends our tutorial on manipulating a human genome annotation file in GFF3 format with the SciPy stack. The tools we mainly used are IPython, Pandas, and matplotlib. During the tutorial, not only have we learned some of the most common and useful operations in Pandas, we also answered some very interesting questions about our genome. In summary:

- About 0.37% of the human genome is still incomplete, even though the first draft came out over 15 years ago.
- There are about 42,000 genes in the human genome, based on the particular GFF3 file we used.
- The length of a gene can range from as few as eight bases to over two million bases.
- Genes are not evenly distributed among the chromosomes. Overall, the larger the chromosome, the more genes it hosts, but for a subset of the chromosomes the correlation can be negative.

The GFF3 file is very rich in annotation information, and we have just scratched the surface. If you are interested in further exploration, here are a few questions you can play with:

- How many transcripts does a gene typically have? What percentage of genes have more than one transcript?
- How many isoforms does a transcript typically have?
- How many exons, CDSs, and UTRs does a transcript typically have? What sizes are they?
- Is it possible to categorize the genes based on their function, as described in the description column?
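As a closing aside, the merge-then-correlate pattern from the last section can be sketched end to end on a toy example (the numbers below are hypothetical):

```python
import pandas as pd

# Hypothetical chromosome lengths and gene counts.
gdf_toy = pd.DataFrame({"seqid": ["1", "2", "3"],
                        "length": [100, 80, 60]})
cdf_toy = pd.DataFrame({"seqid": ["1", "2", "3"],
                        "gene_count": [10, 9, 5]})

# Inner join on the shared key, as in the article.
merged_toy = gdf_toy.merge(cdf_toy, on="seqid")

# Pearson correlation between the two numeric columns.
r = merged_toy[["length", "gene_count"]].corr().loc["length", "gene_count"]
```

Rows whose seqid appears in only one frame would be dropped by the default inner join; passing how='outer' to .merge would keep them instead.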
https://www.toptal.com/python/comprehensive-introduction-your-genome-scipy
In this tutorial, we’ll learn how to call an Azure Function from an ASP.NET Core MVC application. We will get started with creating an ASP.NET Core MVC application that will call our Azure Function to validate an email address entered into a login screen of the application: - In Visual Studio 2017, create a new project and select ASP.NET Core Web Application from the project templates. Click on the OK button to create the project. This is shown in the following screenshot: - On the next screen, ensure that .NET Core and ASP.NET Core 2.0 is selected from the drop-down options on the form. Select Web Application (Model-View-Controller) as the type of application to create. Don’t bother with any kind of authentication or enabling Docker support. Just click on the OK button to create your project: - After your project is created, you will see the familiar project structure in the Solution Explorer of Visual Studio: Creating the login form For this next part, we can create a plain and simple vanilla login form. For a little bit of fun, let’s spice things up a bit. Have a look on the internet for some free login form templates: - I decided to use a site called colorlib that provided 50 free HTML5 and CSS3 login forms in one of their recent blog posts. The URL to the article is:. - I decided to use Login Form 1 by Colorlib from their site. Download the template to your computer and extract the ZIP file. Inside the extracted ZIP file, you will see that we have several folders. Copy all the folders in this extracted ZIP file (leave the index.html file as we will use this in a minute): - Next, go to the solution for your Visual Studio application. In the wwwroot folder, move or delete the contents and paste the folders from the extracted ZIP file into the wwwroot folder of your ASP.NET Core MVC application. Your wwwroot folder should now look as follows: - Open the Index.cshtml file and remove all the markup (except the section in the curly brackets) from this file. 
Paste the HTML markup from the index.html file from the ZIP file we extracted earlier. - Your Index.cshtml file should now look as follows: @{ ViewData["Title"] = "Login Page"; } - Next, open the _Layout.cshtml file and add all the links to the folders and files we copied into the wwwroot folder earlier. Use the index.html file for reference. You will notice that the _Layout.cshtml file contains the following piece of code: @RenderBody(). This is a placeholder that specifies where the Index.cshtml file content should be injected. If you are coming from ASP.NET Web Forms, think of the _Layout.cshtml page as a master page. Your _Layout.cshtml markup should look as follows: @ViewData["Title"] - CoreMailValidation @RenderBody()@RenderSection("Scripts", required: false) - If everything worked out right, you will see the following page when you run your ASP.NET Core MVC application. The login form is obviously totally non-functional: However, the login form is totally responsive. If you reduce the size of your browser window, you will see the form scale as your browser size reduces. This is what you want. If you want to explore the responsive design offered by Bootstrap, head on over to the Bootstrap site and go through the examples in the documentation: The next thing we want to do is hook this login form up to our controller and call the Azure Function we created to validate the email address we entered. Let's look at doing that next.

Hooking it all up

To simplify things, we will be creating a model to pass to our controller: - Create a new class in the Models folder of your application called LoginModel and click on the Add button: - The next thing we want to do is add some code to our model to represent the fields on our login form.
Add two properties called Email and Password:

namespace CoreMailValidation.Models
{
    public class LoginModel
    {
        public string Email { get; set; }
        public string Password { get; set; }
    }
}

- Back in the Index.cshtml view, add the model declaration to the top of the page. This makes the model available for use in our view. Take care to specify the correct namespace where the model exists:

@model CoreMailValidation.Models.LoginModel
@{
    ViewData["Title"] = "Login Page";
}

- The next portion of code needs to be written in the HomeController.cs file. Currently, it should only have an action called Index():

public IActionResult Index()
{
    return View();
}

- Add a new async function called ValidateEmail that will use the base URL and parameter string of the Azure Function URL we copied earlier and call it using an HTTP request. I will not go into much detail here, as I believe the code to be pretty straightforward. All we are doing is calling the Azure Function using the URL we copied earlier and reading the return data:

private async Task<string> ValidateEmail(string emailToValidate)
{
    string azureBaseUrl = "- validation.azurewebsites.net/api/HttpTriggerCSharp1";
    string urlQueryStringParams = $"?code=/IS4OJ3T46quiRzUJTxaGFenTeIVXyyOdtBFGasW9dUZ0snmoQfWoQ==&email={emailToValidate}";
    using (HttpClient client = new HttpClient())
    {
        using (HttpResponseMessage res = await client.GetAsync($"{azureBaseUrl}{urlQueryStringParams}"))
        {
            using (HttpContent content = res.Content)
            {
                string data = await content.ReadAsStringAsync();
                if (data != null)
                {
                    return data;
                }
                else
                    return "";
            }
        }
    }
}

- Create another public async action called ValidateLogin. Inside the action, check to see if the ModelState is valid before continuing. - We then do an await on the ValidateEmail function, and if the return data contains the word false, we know that the email validation failed. A failure message is then passed to the TempData property on the controller.
If the email validation passed, then we know that the email address is valid and we can do something else. Here, we are simply saying that the user is logged in. In reality, we would perform some sort of authentication here and then route to the correct controller. So now you know how to call an Azure Function from an ASP.NET Core application. If you found this tutorial helpful and you'd like to learn more, go ahead and pick up the book C# 7 and .NET Core Blueprints.
Eli Collins created HDFS-3937:
---------------------------------

Summary: The BlockManager should not use the FSN lock
Key: HDFS-3937
URL:
Project: Hadoop HDFS
Issue Type: Sub-task
Components: name-node
Reporter: Eli Collins

Like HDFS-2206 but for the BM. The BM currently calls into the FSN to acquire its write lock; it looks like this is only needed for synchronization of its local structures, in which case it can have its own RW lock. This helps us further decouple block and namespace management, and also avoids scenarios like HDFS-3936 where the FSN holds the lock and calls into the BM, which may be blocked trying to acquire the FSN lock. Per Todd's comment in HDFS-2206, we should do this as part of revisiting NN locking (HDFS-2184) vs just a local change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:
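The change the issue proposes — giving the BlockManager a private reader-writer lock for its own structures instead of acquiring the FSNamesystem lock — can be sketched with a plain java.util.concurrent lock. The class and method names below (BlockManagerSketch, addBlock, getBlockCount) are hypothetical illustrations, not Hadoop's actual API:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch, not Hadoop code: the manager guards only its own
// state with a private RW lock, so it never needs to call back into the
// namesystem (and cannot deadlock against a caller holding that lock).
class BlockManagerSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int blockCount = 0;

    void addBlock() {
        lock.writeLock().lock();   // exclusive: mutating local structures
        try {
            blockCount++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    int getBlockCount() {
        lock.readLock().lock();    // shared: many readers may proceed at once
        try {
            return blockCount;
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        BlockManagerSketch bm = new BlockManagerSketch();
        bm.addBlock();
        bm.addBlock();
        System.out.println(bm.getBlockCount()); // prints 2
    }
}
```

Because readers take only the shared lock, lookups scale independently of namespace operations, which is the decoupling the issue is after.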
Dear Sir! I tried to customize a field value in a polygon feature class. For that I wrote a Python function. Then, using the Field Calculator, I copied this function into the Pre-Logic Script Code box and called the function. It gives me an error. Then I tried the same using the Calculate Field tool in ArcToolbox. It works correctly. Please explain why I cannot run the above function using the Field Calculator. My function is given below:

def lblcal(a):
    Dic1 = {'11': 'PP', '13': 'TOPOPP', '14': 'FTP', '15': 'VP', '16': 'FVP',
            '17': 'CP', '18': 'FCP', '19': 'FSP', '20': 'FSPP', '21': 'ISPP', '22': 'FUP'}
    if a[2:4] in Dic1.keys():
        m = Dic1.get(a[2:4])
    if (a[9] == 's') is True:
        str1 = 'SUP' + str(int(a[10:15]))
    else:
        str1 = 'Inset' + str(int(a[10:15]))
    if (a[15] == 's') is True:
        str2 = 'Sheet' + str(int(a[16:20]))
    str3 = m + ' ' + str(int(a[4:9])) + str1 + str2
    return (str3)

I am working in ArcGIS 9.3. Thanks, Padmasiri

You have to supply it with a field name as a substitute for 'a'. The function should go in the code block section; then the call to the function is made in the expression box, like lblcal([yourfield]) if using VB or lblcal(!yourfield!) if using the Python parser.
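To see what the Field Calculator effectively does with that setup, here is the poster's function with the expression-box call simulated in plain Python. The sample attribute value is hypothetical, chosen only so every slice the function reads is populated:

```python
# The poster's function, as it would sit in the Pre-Logic Script Code block.
def lblcal(a):
    Dic1 = {'11': 'PP', '13': 'TOPOPP', '14': 'FTP', '15': 'VP', '16': 'FVP',
            '17': 'CP', '18': 'FCP', '19': 'FSP', '20': 'FSPP',
            '21': 'ISPP', '22': 'FUP'}
    if a[2:4] in Dic1:
        m = Dic1[a[2:4]]
    if a[9] == 's':
        str1 = 'SUP' + str(int(a[10:15]))
    else:
        str1 = 'Inset' + str(int(a[10:15]))
    if a[15] == 's':
        str2 = 'Sheet' + str(int(a[16:20]))
    return m + ' ' + str(int(a[4:9])) + str1 + str2

# The expression lblcal(!yourfield!) substitutes each row's field value
# for `a`. A made-up 20-character value demonstrates the result:
print(lblcal('xx1100042s00007s0003'))  # PP 42SUP7Sheet3
```

Note the function still raises NameError for rows where a[15] is not 's' (str2 is never assigned), so every field value must follow the expected pattern.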
Conversations
Conversations 21 - 40 of 337
Do you think you could find out if there is a way to alert people when a change has been made on a MJ map?
enhanced Filter "Would it be possible to have the same functionality as the thick client has ?!? e.g. - filter for resources - save custom filter - ...
import excel "Have the user follow a simple protocol for import directly from Excel into Maps! In many cases detailed analysis is done in Excel, but ...
Displaying topic notes in a right or left hand column In the desktop product topic notes can be displayed in a column on the left or right hand side of the main window. In Connect they are o...
Flash Player!!!!!!!!!!!!!! What value is iPad version when Flash is not supported? (from Ralph Summerford)
Allow print pagination Allow a topic to be tagged (perhaps with an icon) such that when the map is printed each tagged topic prints as a separate page. Put a se...
Making the Mindmanager as Outlook CRM "Ability to search a contact and display all the outlook calendar items, emails and tasks for that contact" (from Anonymous)
Multiple Hyperlinks - Please can I switch them off "I would like to be able to switch of the topic multiple hyperlink function. I just want a single hyperlink which is overwritten not adde...
Finish development of Gantt with ALL functionality of JCVGantt. "For some reason the Mindjet developers have the idea that they are -done- with the implementation of the JCVGantt add-in. They better...
LEARNING - a special interface dedicated to learn activities "The most of us know that learning is a kind of activity we all can permanently need, exactly as we also often need a brainstorming envir... ...
Add picture support "Mindjet for Android should support Pictures as Mindmanager for Windows does. If you click on picture in the current version you only g...
Being able to give a link a custom made name "With multiple links to one subject, I would like to have the ability to (re)name the links."
(from Anonymous)
iCloud integration iCloud integration will be awesome (from Martin Druchlik)
"""Send To Linked Map""" "This is my most favorite and powerful function in Mindmanager for Windows. I've been a devote and daily user of MindManger and I was de...
mac app store "please add your program to the mac app store - all is better (update, licence handling etc)" (from Martin Druchlik)
Let users specify session timeout interval I am seemingly constantly getting booted out of my session. I can only suppose that letting my session sit for what only seems like a few...
Ribbon modifications "It should be easy to modify the ribbon, i.e. add a tab with buttons (to run macro's etc)" (from Anonymous)
Danger Detector
Introduction: Danger Detector
Danger Detector is a gas and smoke detector built utilising an Intel Edison and Grove sensors.
Step 1: What It Is and Where to Place the Device
The Danger Detector is a personal device that alerts a person to FIRE, CO2 and NATURAL DISASTERS. The FIRE and CO2 alerts are NOT dependent on the availability of the internet. Unlike a detector used in the household, this detector can be placed safely in sleeping areas while traveling. With its magnetized backing it can be placed easily on most doors of the kind usually found in hotels. We recommend placing the device just above eye level near the opening for maximum exposure; you will also be reminded to take it with you when leaving. Push notifications to your smartphone are now available.
Step 2: Features
This device will also alert a wristband that vibrates, for deaf individuals. Natural disasters such as tornadoes, severe storms, and flash floods will be included as part of the warning system where the internet is available.
Step 3: Getting Started - What You Will Need
Our team constructed this project at the Intel® IoT Roadshow in Boston, Massachusetts held in March 2015. Intel graciously provided us with both an Edison development board and a Grove Starter Kit Plus, Intel IoT Edition. The only other items used were:
- plastic Sterilite 5.3 cup box from Wal-Mart
- magnetic tape
- Grove Gas Sensor (MQ2)
Intel also provided a set of Grove Home Automation sensors for our team.
Step 4: Connecting the Hardware
If you attend an Intel hackathon they will walk you through setting up and testing your Intel Edison. If you are doing this on your own you will find instructions on their IoT website. Once you have your Edison up and running you should remove the Grove base shield from the Starter Kit and place it on your Edison board.
Next you should attach the Grove boards as follows:
- Grove LED to socket D2
- Grove Buzzer to socket D5
- Grove Gas Sensor to socket A0
If you refer to the photo at the left you will see these connections.
Step 5: The Software
We used Intel's free C/C++ Application Development Toolkit (ADT, based on Eclipse) to develop our software in C++. Since the Edison runs Linux, one has the choice of many languages.
Step 6: Testing the Project
If the sensor detects either gas or smoke, the buzzer will sound and the LED will flash. We used a BIC lighter and a paper towel in a tin can to test our project. First we released butane gas from the BIC lighter without lighting the flame. This should set off the alarm. Next we used the lighter to set a paper towel on fire in a tin can. We then blew out the flame so that the paper smoldered and produced smoke. This also set off the alarm. Once you are satisfied everything is working you can place everything in a plastic box. By fixing magnetic tape to the box, you can temporarily attach the box to any steel door.
Here is the program:

#include <unistd.h>
#include <iostream>
#include "mq3.h"
#include "grove.h"
#include "buzzer.h"
#include <signal.h>
#include <stdlib.h>
#include <sys/time.h>
#include "mraa/gpio.h"

#define TRIGGER 200

int is_running = 0;
uint16_t buffer[128];
upm::MQ3 *sensor = NULL;

void sig_handler(int signo)
{
    printf("got signal\n");
    if (signo == SIGINT) {
        is_running = 1;
    }
}

//! [Interesting]
int main(int argc, char **argv)
{
    // Create the Grove LED object using GPIO pin 2
    upm::GroveLed* led = new upm::GroveLed(2);
    mraa_gpio_context gpio = NULL;
    gpio = mraa_gpio_init(5);
    mraa_gpio_dir(gpio, MRAA_GPIO_OUT);

    // Print the name
    std::cout << led->name() << std::endl;

    // Attach gas sensor to A0
    sensor = new upm::MQ3(0);
    signal(SIGINT, sig_handler);

    thresholdContext ctx;
    ctx.averageReading = 0;
    ctx.runningAverage = 0;
    ctx.averagedOver = 2;

    // Infinite loop, ends when script is cancelled.
    // Repeatedly, take a sample every 2 milliseconds;
    // find the average of 128 samples; and
    // print a running graph of the averages using a resolution of 5.
    while (!is_running) {
        int len = sensor->getSampledWindow(2, 128, buffer);
        if (len) {
            int thresh = sensor->findThreshold(&ctx, 30, buffer, len);
            sensor->printGraph(&ctx, 5);
            if (thresh > TRIGGER) {
                led->on();
                mraa_gpio_write(gpio, 1);
                sleep(1);
            } else {
                led->off();
                mraa_gpio_write(gpio, 0);
                sleep(1);
            }
        }
        mraa_gpio_write(gpio, 0);
        led->off();
    }

    // std::cout << "exiting application" << std::endl;

    // Delete the Grove LED object
    delete led;
    return 0;
}
//! [Interesting]

Step 7: Still to Come
Because of time constraints we were not able to implement all the features we wanted during the hackathon. Still left undone are:
- Selecting a vibrating bracelet with Bluetooth connectivity to serve as an additional alarm.
- Connecting the Edison to the Internet, both to receive alarms such as weather alerts and to provide notifications of alarm conditions via SMS etc.
- Adding additional sensors to detect other hazards and someone opening the door unexpectedly (a break-in).
- Better packaging - we put our project in a rather large plastic container and powered it with the wall-wart provided with the Edison. Clearly a much smaller container with a battery would be more suitable for traveling.
Step 8: Final Thoughts
While we had a lot of fun with this project, it is clearly not ready for prime time.
In particular, "real" fire alarms undergo extensive testing and are generally approved by a testing agency such as Underwriters Laboratories. For the next version of this project, perhaps an improvement would be to utilise a commercially available detector that has been tested and approved and which provides an alarm output connection. We could then connect the output of the alarm to an input pin of the Edison and have the best of both worlds.
Using index.js for Fun and Public Interfaces

By using an index.js file in the root of each folder to re-export a subset of files, you can effectively create explicit public interfaces for your React modules.

React Modules

Organizing React projects into modules has been widely adopted by the React community. The core idea is that instead of organizing your project's files by type (function first, feature second):

src/
  components/
    app/
    todos/
    users/
      user-details.js
      user-details.css
      users.js
      users.css
      user-list.js
      user-list.css
  actions/
    app-actions.js
    todo-actions.js
    user-actions.js
  reducers/
    app-reducer.js
    todo-reducer.js
    user-reducer.js
  constants/
  utils/

you flip that around and organize by modules (feature first, function second):

src/
  base/
  modules/
    app/
    todos/
    users/
      __tests__/
      components/
        UserDetails/
        Users/
          __tests__/
          index.js
          styles.css
        UsersList/
          index.js
      actions.js
      actionTypes.js
      constants.js
      index.js
      reducer.js
      selectors.js
  shared/
    utils/
  index.css
  index.js

This method of organizing has a number of benefits:

1. It decreases coupling and increases cohesion between the different sections of your apps, effectively reducing cognitive load while developing.
2. Since related files are co-located, there is less jumping around while developing (i.e., when adding a feature which requires changing actions, reducers, and components, they are all right there together).
3. You can use index.js to create a public interface for your different modules.

Using index.js

This last point is, in my opinion, one of the most subtle but useful benefits of this organization structure. In ES6, having an index.js file in a folder lets you perform an import from the folder implicitly without specifying the index.js in the import statement – just like how web servers will serve up the index.html in a folder without you needing to explicitly put the index.html in the URL.
This gives you some semblance of control over what you export from the module and therefore what you can import and use from another section of your app. In other words, this allows you to create a public interface to expose certain files, while keeping others "private".

export { default as App } from "./App";
export { default as Home } from "./Home";
export { default as Login } from "./Login";
export { default as Navigation } from "./Navigation";
export { default as NotFound } from "./NotFound";
export { default as Signup } from "./Signup";

In this example, since we are talking about src/modules/app/index.js, it is describing which files are publicly exported from the app module.

1. It gives a clear picture of which components are used throughout the rest of the application.
2. It communicates that the rest of the files inside the app folder are only ever used inside the app module.

When you want to import something from a module (e.g., for a <Route />), you can do so quite cleanly in a single import statement:

import { Home, Login, NotFound, Signup } from "../modules/app";
import { Todos } from "../modules/todos";
import { Users } from "../modules/users";

Conclusion

At the end of the day, this is all just syntactic sugar. It is still possible to disregard the guidelines and create files anywhere, or import files by deep linking into the folder structure. However, by organizing your application into modules, and then using index.js to re-export files as a public interface, you can attempt to communicate about intended usage to both other developers and future-you.
👉 For more information about this topic, I highly recommend reading Three Rules For Structuring (Redux) Applications by Jack Hsu as well as How to better Organize Your React Applications by Alexis Mangin.
tests/pexpect.py

"""There are two main interfaces to Pexpect -- the function, run() and the
class, spawn. You can call the run() function to execute a command and return
the output. This is a handy replacement for os.system(). For example::

    pexpect.run('ls -la')

The more powerful interface is the spawn class. You can use this to spawn an
external child command and then interact with the child by sending lines and
expecting responses. For example::

    child = pexpect.spawn('scp foo myname@host.example.com:.')
    child.expect ('Password:')
    child.sendline (mypassword)

This works even for commands that ask for passwords or other input outside of
the normal stdio streams. (Let me know if I forgot anyone.)

Pexpect Copyright (c) 2008 Noah Spurrier

$Id: pexpect.py 507 2007-12-27 02:40:52Z noah $
"""

try:
    import os, sys, time
    import select
    import string
    import re
    import struct
    import resource
    import types
    import pty
    import tty
    import termios
    import fcntl
    import errno
    import traceback
    import signal
except ImportError, e:
    raise ImportError (str(e) + """
A critical module was not found. Probably this operating system does not
support it. Pexpect is intended for UNIX-like operating systems.""")

__version__ = '2.3'
__revision__ = '$Revision: 399 $'
__all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'run', 'which',
    'split_command_line', '__version__', '__revision__']

# Exception classes used by this module.
class ExceptionPexpect(Exception):

    """Base class for all exceptions raised by this module.
    """

    def __init__(self, value):

        self.value = value

    def __str__(self):

        return str(self.value)

    def get_trace(self):

        """This returns an abbreviated stack trace with lines that only
        concern the caller. In other words, the stack trace inside the
        Pexpect module is not included. """

        tblist = traceback.extract_tb(sys.exc_info()[2])
        #tblist = filter(self.__filter_not_pexpect, tblist)
        tblist = [item for item in tblist if self.__filter_not_pexpect(item)]
        tblist = traceback.format_list(tblist)
        return ''.join(tblist)

    def __filter_not_pexpect(self, trace_list_item):

        """This returns True if list item 0 does not contain the string
        'pexpect.py' in it.
""" if trace_list_item[0].find('pexpect.py') == -1: return True else: return False class EOF(ExceptionPexpect): """Raised when EOF is read from a child. This usually means the child has exited.""" class TIMEOUT(ExceptionPexpect): """Raised when a read time exceeds the timeout. """ ##class TIMEOUT_PATTERN(TIMEOUT): ## """Raised when the pattern match time exceeds the timeout. ## This is different than a read TIMEOUT because the child process may ## give output, thus never give a TIMEOUT, but the output ## may never match a pattern. ## """ ##class MAXBUFFER(ExceptionPexpect): ## """Raised when a scan buffer fills before matching an expected pattern.""" def run (command, timeout=-1, withexitstatus=False, events=None, extra_args=None, logfile=None, cwd=None, env=None): """ pseudo myname@host.example.com:.') child.expect ('(?i)password') child.sendline (mypassword) The previous code can be replace with the following:: from pexpect import * run ('scp foo myname@host) Tricky Examples =============== a dictionary of patterns and responses. Whenever one of the patterns is seen in the command out run() will send the associated response string. Note that you should put newlines in your string if Enter is necessary. The responses may also contain callback functions. Any callback is function that takes a dictionary. """ if timeout == -1: child = spawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env) else: child = spawn(command, timeout=timeout, maxread=2000, logfile=logfile, cwd=cwd, env=env) if events is not None: patterns = events.keys() responses = events.values() else: patterns=None # We assume that EOF or TIMEOUT will save us. responses=None child_result_list = [] event_count = 0 while 1: try: index = child.expect (patterns) if type(child.after) in types.StringTypes: child_result_list.append(child.before + child.after) else: # child.after may have been a TIMEOUT or EOF, so don't cat those. 
child_result_list.append(child.before) if type(responses[index]) in types.StringTypes: child.send(responses[index]) elif type(responses[index]) is types.FunctionType: callback_result = responses[index](locals()) sys.stdout.flush() if type(callback_result) in types.StringTypes: child.send(callback_result) elif callback_result: break else: raise TypeError ('The callback must be a string or function type.') event_count = event_count + 1 except TIMEOUT, e: child_result_list.append(child.before) break except EOF, e: child_result_list.append(child.before) break child_result = ''.join(child_result_list) if withexitstatus: child.close() return (child_result, child.exitstatus) else: return child_result class spawn (object): """This is the main class interface for Pexpect. Use this class to start and control child applications. """ def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None, logfile=None, cwd=None, env=None): "". The searchwindowsize attribute sets the how far back in the incomming seach buffer Pexpect will search for pattern matches. Every time Pexpect reads some data from the child it will append the data to the incomming buffer. The default is to search from the beginning of the imcomming buffer each time new data is read from the child. But this is very inefficient if you are running a command that generates a large amount of data where you want to match The searchwindowsize does not effect the size of the incomming data buffer. You will still have access to the full buffer after expect() returns. = file('mylog.txt','w') child.logfile = fout Example log to stdout:: child = pexpect.spawn('some_command') To separately log output sent to the child use logfile_send:: self.logfile_send = fout 0 to return to the old behavior. Most Linux machines don't like this to be below 0.03. I don't know why.. If you need more detail you can also read the self.status member which stores the status returned by os.waitpid. 
You can interpret this using os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.TERMSIG. """ self.STDIN_FILENO = pty.STDIN_FILENO self.STDOUT_FILENO = pty.STDOUT_FILENO self.STDERR_FILENO = pty.STDERR_FILENO self.stdin = sys.stdin self.stdout = sys.stdout self.stderr = sys.stderr self.searcher = None self.ignorecase = False self.before = None self.after = None self.match = None self.match_index = None self.terminated = True self.exitstatus = None self.signalstatus = None self.status = None # status returned by os.waitpid self.flag_eof = False self.pid = None self.child_fd = -1 # initially closed self.timeout = timeout self.delimiter = EOF self.logfile = logfile self.logfile_read = None # input from child (read_nonblocking) self.logfile_send = None # output to send (send, sendline) self.maxread = maxread # max bytes to read at one time into buffer self.buffer = '' # This is the read buffer. See maxread. self.searchwindowsize = searchwindowsize # Anything before searchwindowsize point is preserved, but not searched. # Most Linux machines don't like delaybeforesend to be below 0.03 (30 ms).. self.softspace = False # File-like object. self.' # File-like object. self.encoding = None # File-like object. self.closed = True # File-like object. self.cwd = cwd self.env = env self.__irix_hack = (sys.platform.lower().find('irix')>=0) # This flags if we are running on irix # Solaris uses internal __fork_pty(). All others use pty.fork(). if (sys.platform.lower().find('solaris')>=0) or (sys.platform.lower().find('sunos5')>=0): self.use_native_pty_fork = False else: self.use_native_pty_fork = True # allow dummy instances for subclasses that may not use command or args. if command is None: self.command = None self.args = None self.name = '<pexpect factory incomplete>' else: self._spawn (command, args) def __del__(self): """This makes sure that no system resources are left open. Python only garbage collects Python objects. 
OS file descriptors are not Python objects, so they must be handled explicitly. If the child file descriptor was opened outside of this class (passed to the constructor) then this does not close it. """ if not self.closed: # It is possible for __del__ methods to execute during the # teardown of the Python VM itself. Thus self.close() may # trigger an exception because os.close may be None. # -- Fernando Perez try: self.close() except AttributeError: pass def __str__(self): """This returns a human-readable string that represents the state of the object. """ s = [] s.append(repr(self)) s.append('version: ' + __version__ + ' (' + __revision__ + ')') s.append('command: ' + str(self.command)) s.append('args: ' + str(self.args)) s.append('searcher: ' + str(self.searcher)) s.append('buffer (last 100 chars): ' + str(self.buffer)[-100:]) s.append('before (last 100 chars): ' + str(self.before)[-100:]) s.append('after: ' + str(self.after)) s.append('match: ' + str(self.match)) s.append('match_index: ' + str(self.match_index)) s.append('exitstatus: ' + str(self.exitstatus)) haved spawned a child # that performs some task; creates no stdout output; and then dies. # If command is an int type then it may represent a file descriptor. if type(command) == type(0): raise ExceptionPexpect ('Command is an int type. If this is a file descriptor then maybe you want to use fdpexpect.fdspawn which takes an existing file descriptor instead of a command string.') if type (args) != type([]): raise TypeError ('The argument, args, must be a list.') if args == []: self.args = split_command_line(command) self.command = self.args[0] else: self.args = args[:] # work with a copy self.args.insert (0, command) self.command = command command_with_path = which(self.command) if command_with_path is None: raise ExceptionPexpect ('The command was not found or was not executable: %s.' % self.command) self.command = command_with_path self.args[0] = self.command self.' 
assert self.pid is None, 'The pid member should be None.' assert self.command is not None, 'The command member should not be None.' if self.use_native_pty_fork: try: self.pid, self.child_fd = pty.fork() except OSError, e: raise ExceptionPexpect('Error! pty.fork() failed: ' + str(e)) else: # Use internal __fork_pty self.pid, self.child_fd = self.__fork_pty() if self.pid == 0: # Child try: self.child_fd = sys.stdout.fileno() # used by setwinsize() self.setwinsize(24, 80) except: # Some platforms do not like setwinsize (Cygwin). # This will cause problem when running applications that # are very picky about window size. # This is a serious limitation, but not a show stopper. pass # Do not allow child to inherit open file descriptors from parent. max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0] for i in range (3, max_fd): try: os.close (i) except OSError: pass # I don't know why this works, but ignoring SIGHUP fixes a # problem when trying to start a Java daemon with sudo # (specifically, Tomcat). signal.signal(signal.SIGHUP, signal.SIG_IGN) if self.cwd is not None: os.chdir(self.cwd) if self.env is None: os.execv(self.command, self.args) else: os.execvpe(self.command, self.args, self.env) # Parent self.terminated = False self.closed = False def __fork_pty(self): """This implements a substitute for the forkpty system call. This should be more portable than the pty.fork() function. Specifically, this should work on Solaris. Modified 10.06.05 by Geoff Marshall: Implemented __fork_pty() method to resolve the issue with Python's pty.fork() not supporting Solaris, particularly ssh. Based on patch to posixmodule.c authored by Noah Spurrier:: """ parent_fd, child_fd = os.openpty() if parent_fd < 0 or child_fd < 0: raise ExceptionPexpect, "Error! Could not open pty with os.openpty()." pid = os.fork() if pid < 0: raise ExceptionPexpect, "Error! Failed os.fork()." elif pid == 0: # Child. 
os.close(parent_fd) self.__pty_make_controlling_tty(child_fd) os.dup2(child_fd, 0) os.dup2(child_fd, 1) os.dup2(child_fd, 2) if child_fd > 2: os.close(child_fd) else: # Parent. os.close(child_fd) return pid, parent_fd def __pty_make_controlling_tty(self, tty_fd): """This makes the pseudo-terminal the controlling tty. This should be more portable than the pty.fork() function. Specifically, this should work on Solaris. """ child_name = os.ttyname(tty_fd) # Disconnect from controlling tty if still connected. fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY); if fd >= 0: os.close(fd) os.setsid() # Verify we are disconnected from controlling tty try: fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY); if fd >= 0: os.close(fd) raise ExceptionPexpect, "Error! We are not disconnected from a controlling tty." except: # Good! We are disconnected from a controlling tty. pass # Verify we can open child pty. fd = os.open(child_name, os.O_RDWR); if fd < 0: raise ExceptionPexpect, "Error! Could not open child pty, " + child_name else: os.close(fd) # Verify we now have a controlling tty. fd = os.open("/dev/tty", os.O_WRONLY) if fd < 0: raise ExceptionPexpect, "Error! Could not open controlling tty, /dev/tty" else: os.close(fd) def fileno (self): # File-like object. """This returns the file descriptor of the pty for the child. """ return self.child_fd def close (self, force=True): # File-like). """ if not self.closed: self.flush() os.close (self.child_fd) time.sleep(self.delayafterclose) # Give kernel time to update process status. if self.isalive(): if not self.terminate(force): raise ExceptionPexpect ('close() could not terminate the child using terminate()') self.child_fd = -1 self.closed = True #self.pid = None def flush (self): # File-like object. """This does nothing. It is here to support the interface for a File-like object. """ pass def isatty (self): # File-like object. """This returns True if the file descriptor is open and connected to a tty(-like) device, else False. 
""" return os.isatty(self.child_fd) is None then this method to block forever) def getecho (self): """This returns the terminal echo mode. This returns True if echo is on or False if echo is off. Child applications that are expecting you to enter a password often set ECHO False. See waitnoecho(). """ attr = termios.tcgetattr(self.child_fd) if attr[3] & termios.ECHO: return True return False') p.sendline ('1234') # We will see this twice (once from tty echo and again from cat). p.expect (['1234']) p.expect (['1234'])') # We will see this twice (once from tty echo and again from cat).']) """ self.child_fd attr = termios.tcgetattr(self.child_fd) if state: attr[3] = attr[3] | termios.ECHO else: attr[3] = attr[3] & ~termios.ECHO # I tried TCSADRAIN and TCSAFLUSH, but these were inconsistent # and blocked on some platforms. TCSADRAIN is probably ideal if it worked. termios.tcsetattr(self.child_fd, termios.TCSANOW, attr) file was set using setlog() then all data will also be written to the log file. If timeout is None then the read may block indefinitely. If timeout is -1 then the self.timeout value is used. If timeout is 0 then the child is polled and if there was no data immediately ready then this will raise a TIMEOUT exception. The timeout refers only to the amount of time to read at least one character. This is not effected by the 'size' parameter, so if you call read_nonblocking(size=100, timeout=30) and only one character is available right away then one character will be returned immediately. It will not wait for 30 seconds for another 99 characters to come in. This is a wrapper around os.read(). It uses select.select() to implement the timeout. """ if self.closed: raise ValueError ('I/O operation on closed file in read_nonblocking().') if timeout == -1: timeout = self.timeout # Note that some systems such as Solaris do not give an EOF when # the child dies. In fact, you can still try to read # from the child_fd -- it will block forever or until TIMEOUT. 
# For this case, I test isalive() before doing any reading. # If isalive() is false, then I pretend that this is the same as EOF. if not self.isalive(): r,w,e = self.__select([self.child_fd], [], [], 0) # timeout of 0 means "poll" if not r: self.flag_eof = True raise EOF ('End Of File (EOF) in read_nonblocking(). Braindead platform.') elif self.__irix_hack: # This is a hack for Irix. It seems that Irix requires a long delay before checking isalive. # This adds a 2 second delay, but only when the child is terminated. r, w, e = self.__select([self.child_fd], [], [], 2) if not r and not self.isalive(): self.flag_eof = True raise EOF ('End Of File (EOF) in read_nonblocking(). Pokey platform.') r,w,e = self.__select([self.child_fd], [], [], timeout) if not r: if not self.isalive(): # Some platforms, such as Irix, will claim that their processes are alive; # then timeout on the select; and then finally admit that they are not alive. self.flag_eof = True raise EOF ('End of File (EOF) in read_nonblocking(). Very pokey platform.') else: raise TIMEOUT ('Timeout exceeded in read_nonblocking().') if self.child_fd in r: try: s = os.read(self.child_fd, size) except OSError, e: # Linux does this self.flag_eof = True raise EOF ('End Of File (EOF) in read_nonblocking(). Exception style platform.') if s == '': # BSD style self.flag_eof = True raise EOF ('End Of File (EOF) in read_nonblocking(). Empty string style platform.') if self.logfile is not None: self.logfile.write (s) self.logfile.flush() if self.logfile_read is not None: self.logfile_read.write (s) self.logfile_read.flush() return s raise ExceptionPexpect ('Reached an unexpected state in read_nonblocking().') def read (self, size = -1): # File-like object. "". 
""" if size == 0: return '' if size < 0: self.expect (self.delimiter) # delimiter default is EOF return self.before # I could have done this more directly by not using expect(), but # I deliberately decided to couple read() to expect() so that # I would catch any bugs early and ensure consistant behavior. # It's a little less efficient, but there is less for me to # worry about if I have to later modify read() or expect(). # Note, it's OK if size==-1 in the regex. That just means it # will never match anything in which case we stop only on EOF. cre = re.compile('.{%d}' % size, re.DOTALL) index = self.expect ([cre, self.delimiter]) # delimiter default is EOF if index == 0: return self.after ### self.before should be ''. Should I assert this? return self.before def readline (self, size = -1): # File-like object. """This reads and returns one entire line. A trailing newline is kept in the string, but may be absent when a file ends with an incomplete line. Note: This readline() looks for a \\r\\n pair even on UNIX because this is what the pseudo tty device returns. So contrary to what you may expect you will receive the newline as \\r\\n. An empty string is returned when EOF is hit immediately. Currently, the size argument is mostly ignored, so this behavior is not standard for a file-like object. If size is 0 then an empty string is returned. """ if size == 0: return '' index = self.expect (['\r\n', self.delimiter]) # delimiter default is EOF if index == 0: return self.before + '\r\n' else: return self.before def __iter__ (self): # File-like object. """This is to support iterators over a file-like object. """ return self def next (self): # File-like object. """This is to support iterators over a file-like object. """ result = self.readline() if result == "": raise StopIteration return result def readlines (self, sizehint = -1): # File-like object. """This reads until EOF using readline() and returns a list containing the lines thus read. 
The optional "sizehint" argument is ignored. """ lines = [] while True: line = self.readline() if not line: break lines.append(line) return lines def write(self, s): # File-like object. """This is similar to send() except that there is no return value. """ self.send (s) def writelines (self, sequence): # File-like object. """This calls write() for each element in the sequence. The sequence can be any iterable object producing strings, typically a list of strings. This does not add line separators There is no return value. """ for s in sequence: self.write (s) def send(self, s): """This sends a string to the child process. This returns the number of bytes written. If a log file was set then the data is also written to the log. """ time.sleep(self.delaybeforesend) if self.logfile is not None: self.logfile.write (s) self.logfile.flush() if self.logfile_send is not None: self.logfile_send.write (s) self.logfile_send.flush() c = os.write(self.child_fd, s) return c def sendline(self, s=''): """This is like send(), but it adds a line feed (os.linesep). This returns the number of bytes written. """ n = self.send(s) n = n + self.send (os.linesep) return n def sendcontrol(self, char): """This sends a control character to the child such as Ctrl-C or Ctrl-D. For example, to send a Ctrl-G (ASCII 7):: child.sendcontrol('g') See also, sendintr() and sendeof(). """ char = char.lower() a = ord(char) if a>=97 and a<=122: a = a - ord('a') + 1 return self.send (chr(a)) d = {'@':0, '`':0, '[':27, '{':27, '\\':28, '|':28, ']':29, '}': 29, '^':30, '~':30, '_':31, '?':127} if char not in d: return 0 return self.send (chr(d[char])). """ ### Hmmm... how do I send an EOF? ###C if ((m = write(pty, *buf, p - *buf)) < 0) ###C return (errno == EWOULDBLOCK) ? 
n : -1; #fd = sys.stdin.fileno() #old = termios.tcgetattr(fd) # remember current state #attr = termios.tcgetattr(fd) #attr[3] = attr[3] | termios.ICANON # ICANON must be set to recognize EOF #try: # use try/finally to ensure state gets restored # termios.tcsetattr(fd, termios.TCSADRAIN, attr) # if hasattr(termios, 'CEOF'): # os.write (self.child_fd, '%c' % termios.CEOF) # else: # # Silly platform does not define CEOF so assume CTRL-D # os.write (self.child_fd, '%c' % 4) #finally: # restore state # termios.tcsetattr(fd, termios.TCSADRAIN, old) if hasattr(termios, 'VEOF'): char = termios.tcgetattr(self.child_fd)[6][termios.VEOF] else: # platform does not define VEOF so assume CTRL-D char = chr(4) self.send(char) def sendintr(self): """This sends a SIGINT to the child. It does not require the SIGINT to be the first character on a line. """ if hasattr(termios, 'VINTR'): char = termios.tcgetattr(self.child_fd)[6][termios.VINTR] else: # platform does not define VINTR so assume CTRL-C char = chr(3) self.send (char) def eof (self): """This returns True if the EOF exception was ever raised. """ return self.flag_eof, e: #, technically, the child is still alive until its output is read. """ if self.isalive(): pid, status = os.waitpid(self.pid, 0) else: raise ExceptionPexpect ('Cannot wait for dead child process.') self.exitstatus = os.WEXITSTATUS(status) ('Wait was called for a child process that is stopped. This is not supported. Is some other process attempting job control with our child pid?') return self.exitstatus. """ if self.terminated: return False if self.flag_eof: # This is for Linux, which requires the blocking form of waitpid to get # status of a defunct process. This is super-lame. The flag_eof would have # been set in read_nonblocking(), so this should be safe. 
            waitpid_options = 0
        else:
            waitpid_options = os.WNOHANG

        try:
            pid, status = os.waitpid(self.pid, waitpid_options)
        except OSError, e: # No child processes
            if e[0] == errno.ECHILD:
                raise ExceptionPexpect ('isalive() encountered condition where "terminated" is 0, but there was no child process. Did someone else call waitpid() on our process?')
            else:
                raise e

        # I have to do this twice for Solaris. I can't even believe that I figured this out...
        # If waitpid() returns 0 it means that no child process wishes to
        # report, and the value of status is undefined.
        if pid == 0:
            try:
                pid, status = os.waitpid(self.pid, waitpid_options) ### os.WNOHANG) # Solaris!
            except OSError, e: # This should never happen...
                if e[0] == errno.ECHILD:
                    raise ExceptionPexpect ('isalive() encountered condition that should never happen. There was no child process. Did someone else call waitpid() on our process?')
                else:
                    raise e

            # If pid is still 0 after two calls to waitpid() then
            # the process really is alive. This seems to work on all platforms, except
            # for Irix which seems to require a blocking call on waitpid or select, so I let read_nonblocking
            # take care of this situation (unfortunately, this requires waiting through the timeout).
            if pid == 0:
                return True

        if pid == 0:
            return True

        if os.WIFEXITED (status):
            self.status = status
            self.exitstatus = os.WEXITSTATUS(status)
            self.signalstatus = None
            self.terminated = True
        elif os.WIFSIGNALED (status):
            self.status = status
            self.exitstatus = None
            self.signalstatus = os.WTERMSIG(status)
            self.terminated = True
        elif os.WIFSTOPPED (status):
            raise ExceptionPexpect ('isalive() encountered condition where child process is stopped. This is not supported. Is some other process attempting job control with our child pid?')
        return False

    def compile_pattern_list(self, patterns):
""" if patterns is None: return [] if type(patterns) is not types.ListType: patterns = [patterns] compile_flags = re.DOTALL # Allow dot to match \n if self.ignorecase: compile_flags = compile_flags | re.IGNORECASE compiled_pattern_list = [] for p in patterns: if type(p) in types.StringTypes: compiled_pattern_list.append(re.compile(p, compile_flags)) elif p is EOF: compiled_pattern_list.append(EOF) elif p is TIMEOUT: compiled_pattern_list.append(TIMEOUT) elif type(p) is type(re.compile('')): compiled_pattern_list.append(p) else: raise TypeError ('Argument must be one of StringTypes, EOF, TIMEOUT, SRE_Pattern, or a list of those type. %s' % str(type(p))) return compiled_pattern_list def expect(self, pattern, timeout = -1, searchwindowsize=None): "" returs 1 ('foo') if parts of the final 'bar' arrive late After a match is found the instance attributes 'before', 'after' and 'match' will be set. You can see all the data read before the match in 'before'. You can see the data that was matched in 'after'. The re.MatchObject used in the re match will be in 'match'. If an error occurred then 'before' will be set to all the data read so far and 'after' and 'match' will be None. If timeout is -1 then timeout will be set to the self.timeout value. A list entry may be EOF or TIMEOUT instead of a string. This will catch these exceptions and return the index of the list entry instead of raising the exception. The attribute 'after' will be set to the exception type. The attribute 'match'(). """ compiled_pattern_list = self.compile_pattern_list(pattern) return self.expect_list(compiled_pattern_list, timeout, searchwindowsize) def expect_list(self, pattern_list, timeout = -1, searchwindowsize = -1): ""(). If timeout==-1 then the self.timeout value is used. If searchwindowsize==-1 then the self.searchwindowsize value is used. 
""" return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize) def expect_exact(self, pattern_list, timeout = -1, searchwindowsize = -1): """This is similar to expect(), but uses plain string matching instead of compiled regular expressions in 'pattern_list'. The 'pattern.""" if type(pattern_list) in types.StringTypes or pattern_list in (TIMEOUT, EOF): pattern_list = [pattern_list] return self.expect_loop(searcher_string(pattern_list), timeout, searchwindowsize) def expect_loop(self, searcher, timeout = -1, searchwindowsize = -1): """This is the common loop used inside expect. The 'searcher' should be an instance of searcher_re or searcher_string, which describes how and what to search for in the input. See expect() for other arguments, return value and exceptions. """ self.searcher = searcher if timeout == -1: timeout = self.timeout if timeout is not None: end_time = time.time() + timeout if searchwindowsize == -1: searchwindowsize = self.searchwindowsize try: incoming = self.buffer freshlen = len(incoming) while True: # Keep reading until exception or return. 
index = searcher.search(incoming, freshlen, searchwindowsize) if index >= 0: self.buffer = incoming[searcher.end : ] self.before = incoming[ : searcher.start] self.after = incoming[searcher.start : searcher.end] self.match = searcher.match self.match_index = index return self.match_index # No match at this point if timeout < 0 and timeout is not None: raise TIMEOUT ('Timeout exceeded in expect_any().') # Still have time left, so read more data c = self.read_nonblocking (self.maxread, timeout) freshlen = len(c) time.sleep (0.0001) incoming = incoming + c if timeout is not None: timeout = end_time - time.time() except EOF, e: self.buffer = '' self.before = incoming self.after = EOF index = searcher.eof_index if index >= 0: self.match = EOF self.match_index = index return self.match_index else: self.match = None self.match_index = None raise EOF (str(e) + '\n' + str(self)) except TIMEOUT, e: self.buffer = incoming self.before = incoming self.after = TIMEOUT index = searcher.timeout_index if index >= 0: self.match = TIMEOUT self.match_index = index return self.match_index else: self.match = None self.match_index = None raise TIMEOUT (str(e) + '\n' + str(self)) except: self.before = incoming self.after = None self.match = None self.match_index = None raise def getwinsize(self): """This returns the terminal window size of the child tty. The return value is a tuple of (rows, cols). """ TIOCGWINSZ = getattr(termios, 'TIOCGWINSZ', 1074295912L) s = struct.pack('HHHH', 0, 0, 0, 0) x = fcntl.ioctl(self.fileno(), TIOCGWINSZ, s) return struct.unpack('HHHH', x)[0:2] def setwinsize(self, r, c): "". """ # Check for buggy platforms. Some Python versions on some platforms # (notably OSF1 Alpha and RedHat 7.1) truncate the value for # termios.TIOCSWINSZ. It is not clear why this happens. # These platforms don't seem to handle the signed int very well; # yet other platforms like OpenBSD have a large negative value for # TIOCSWINSZ and they don't have a truncate problem. 
# Newer versions of Linux have totally different values for TIOCSWINSZ. # Note that this fix is a hack. TIOCSWINSZ = getattr(termios, 'TIOCSWINSZ', -2146929561) if TIOCSWINSZ == 2148037735L: # L is not required in Python >= 2.2. TIOCSWINSZ = -2146929561 # Same bits, but with sign. # Note, assume ws_xpixel and ws_ypixel are zero. s = struct.pack('HHHH', r, c, 0, 0) fcntl.ioctl(self.fileno(), TIOCSWINSZ, s) stop. The default for escape_character is ^]. This should not be confused with ASCII 27 -- the ESC character. ASCII 29 was chosen for historical merit because this is the character used by 'telnet' as the escape character. The escape_character will not be sent to the child process. You may pass in optional input and output filter functions. These functions should take a string and return a string.)) global p p.setwinsize(a[0],a[1]) p = pexpect.spawn('/bin/bash') # Note this is global and used in sigwinch_passthrough. signal.signal(signal.SIGWINCH, sigwinch_passthrough) p.interact() """ # Flush the buffer. self.stdout.write (self.buffer) self.stdout.flush() self.buffer = '' mode = tty.tcgetattr(self.STDIN_FILENO) tty.setraw(self.STDIN_FILENO) try: self.__interact_copy(escape_character, input_filter, output_filter) finally: tty.tcsetattr(self.STDIN_FILENO, tty.TCSAFLUSH, mode) def __interact_writen(self, fd, data): """This is used by the interact() method. 
""" while data != ''(): r,w,e = self.__select([self.child_fd, self.STDIN_FILENO], [], []) if self.child_fd in r: data = self.__interact_read(self.child_fd) if output_filter: data = output_filter(data) if self.logfile is not None: self.logfile.write (data) self.logfile.flush() os.write(self.STDOUT_FILENO, data) if self.STDIN_FILENO in r: data = self.__interact_read(self.STDIN_FILENO) if input_filter: data = input_filter(data) i = data.rfind(escape_character) if i != -1: data = data[:i] self.__interact_writen(self.child_fd, data) break self.__interact_writen(self.child_fd, data) def __select (self, iwtd, owtd, ewtd, timeout=None): """This is a wrapper around select.select() that ignores signals. If select.select raises a select.error exception and errno is an EINTR error then it is ignored. Mainly this is used to ignore sigwinch (terminal resize). """ # if select() is interrupted by a signal (errno==EINTR) then # we loop back and enter the select() again. if timeout is not None: end_time = time.time() + timeout while True: try: return select.select (iwtd, owtd, ewtd, timeout) except select.error, e: if e[0] == errno.EINTR: # if we loop back we have to subtract the amount of time we already waited. if timeout is not None: timeout = end_time - time.time() if timeout < 0: return ([],[],[]) else: # something else caused the select.error, so this really is an exception raise ############################################################################## # The following methods are no longer supported or allowed. def setmaxread (self, maxread): """This method is no longer supported or allowed. I don't like getters and setters without a good reason. """ raise ExceptionPexpect ('This method is no longer supported or allowed. Just assign a value to the maxread member variable.') def setlog (self, fileobject): """This method is no longer supported or allowed. """ raise ExceptionPexpect ('This method is no longer supported or allowed. 
Just assign a value to the logfile member variable.') ############################################################################## # End of spawn class ############################################################################## class searcher_string (object): """This is a plain matching string itself """ def __init__(self, strings): """This creates an instance of searcher_string. This argument 'strings' may be a list; a sequence of strings; or the EOF or TIMEOUT types. """ self.eof_index = -1 self.timeout_index = -1 self._strings = [] for n, s in zip(range(len(strings)), strings): if s is EOF: self.eof_index = n continue if s is TIMEOUT: self.timeout_index = n continue self._strings.append((n, s)) def __str__(self): """This returns a human-readable string that represents the state of the object.""" ss = [ (ns[0],' %d: "%s"' % ns) for ns in self._strings ] ss.append((-1,'searcher_string:')) search strings. 'freshlen' must indicate the number of bytes at the end of 'buffer' which have not been searched before. It helps to avoid searching the same, possibly big, buffer over and over again. See class spawn for the 'searchwindowsize' argument. If there is a match this returns the index of that string, and sets 'start', 'end' and 'match'. Otherwise, this returns -1. """ absurd_match = len(buffer) first_match = absurd_match # 'freshlen' helps a lot here. Further optimizations could # possibly include: # # using something like the Boyer-Moore Fast String Searching # Algorithm; pre-compiling the search through a list of # strings into something that can scan the input once to # search for all N strings; realize that if we search for # ['bar', 'baz'] and the input is '...foo' we need not bother # rescanning until we've read three more bytes. # # Sadly, I don't know enough about this interesting topic. 
/grahn for index, s in self._strings: if searchwindowsize is None: # the match, if any, can only be in the fresh data, # or at the very end of the old data offset = -(freshlen+len(s)) else: # better obey searchwindowsize offset = -searchwindowsize n = buffer.find(s, offset) if n >= 0 and n < first_match: first_match = n best_index, best_match = index, s if first_match == absurd_match: return -1 self.match = best_match self.start = first_match self.end = self.start + len(self.match) return best_index class searcher_re (object): """This is regular expression re.match object returned by a succesful re.search """ def __init__(self, patterns): """This creates an instance that searches for 'patterns' Where 'patterns' may be a list or other sequence of compiled regular expressions, or the EOF or TIMEOUT types.""" self.eof_index = -1 self.timeout_index = -1 self._searches = [] for n, s in zip(range(len(patterns)), patterns): if s is EOF: self.eof_index = n continue if s is TIMEOUT: self.timeout_index = n continue self._searches.append((n, s)) def __str__(self): """This returns a human-readable string that represents the state of the object.""" ss = [ (n,' %d: re.compile("%s")' % (n,str(s.pattern))) for n,s in self._searches] ss.append((-1,'searcher_re:')) regular expressions. 'freshlen' must indicate the number of bytes at the end of 'buffer' which have not been searched before. See class spawn for the 'searchwindowsize' argument. If there is a match this returns the index of that string, and sets 'start', 'end' and 'match'. Otherwise, returns -1.""" absurd_match = len(buffer) first_match = absurd_match # 'freshlen' doesn't help here -- we cannot predict the # length of a match, and the re module provides no help. 
if searchwindowsize is None: searchstart = 0 else: searchstart = max(0, len(buffer)-searchwindowsize) for index, s in self._searches: match = s.search(buffer, searchstart) if match is None: continue n = match.start() if n < first_match: first_match = n the_match = match best_index = index if first_match == absurd_match: return -1 self.start = first_match self.match = the_match self.end = self.match.end() return best_index def which (filename): """This takes a given filename; tries to find it in the environment path; then checks if it is executable. This returns the full path to the filename if found and executable. Otherwise this returns None.""" # Special case where filename already contains a path. if os.path.dirname(filename) != '': if os.access (filename, os.X_OK): return filename if not os.environ.has_key('PATH') or os.environ['PATH'] == '': p = os.defpath else: p = os.environ['PATH'] # Oddly enough this was the one line that made Pexpect # incompatible with Python 1.5.2. #pathlist = p.split (os.pathsep) pathlist = string.split (p, os.pathsep) for path in pathlist: f = os.path.join(path, filename) if os.access(f, os.X_OK): return f return None def split_command_line(command_line): """This splits a command line into a list of arguments. It splits arguments on spaces, but handles embedded quotes, doublequotes, and escaped characters. It's impossible to do this with a regular expression, so I wrote a little state machine to parse the command line. """ arg_list = [] arg = '' # Constants to name the states we can be in. state_basic = 0 state_esc = 1 state_singlequote = 2 state_doublequote = 3 state_whitespace = 4 # The state of consuming whitespace between commands. 
state = state_basic for c in command_line: if state == state_basic or state == state_whitespace: if c == '\\': # Escape the next character state = state_esc elif c == r"'": # Handle single quote state = state_singlequote elif c == r'"': # Handle double quote state = state_doublequote elif c.isspace(): # Add arg to arg_list if we aren't in the middle of whitespace. if state == state_whitespace: None # Do nothing. else: arg_list.append(arg) arg = '' state = state_whitespace else: arg = arg + c state = state_basic elif state == state_esc: arg = arg + c state = state_basic elif state == state_singlequote: if c == r"'": state = state_basic else: arg = arg + c elif state == state_doublequote: if c == r'"': state = state_basic else: arg = arg + c if arg != '': arg_list.append(arg) return arg_list # vi:ts=4:sw=4:expandtab:ft=python:
https://apache.googlesource.com/duo_unix/+/ed38a958bb39f42b77fb794ca7fb9a66420e46c0/tests/pexpect.py
Exploring Elm - Part 1

The reason I want to explore Elm is that FRP concepts are very interesting, though they can be hard to understand in the context of creating an actual application. Examples with events, mapping over data etc. do not really explain how you would use this approach to create an application. But Elm is all about creating applications, and FRP is an important part of it. Since Elm is its own language you express yourself differently in terms of syntax, but it also enforces how you define state, change that state and define your UI. So in this first article, which I hope will become more articles, we are going to take a small step into Elm.

I am no Elm expert at all, but I have built lots of JavaScript applications and tools to help me solve the problems we face when building large applications. I will compare Elm to JavaScript, and specifically React and Redux, as they have many similarities.

Are you a designer or a programmer?

Our approaches to building applications have matured a lot in the last year, much because of React and Flux. And there is a big shift happening. Frameworks using templates have made sure that HTML/CSS developers (designers) have an approachable tool to do their job, while still introducing some new syntax that makes it easier to express business logic inside the templates. But with tools like React we are moving towards a world where you have to be a programmer to express the UI.

I actually think this is a good thing. I think HTML/CSS is this thing between design and programming which nobody really likes to do. A designer wants to do design and a programmer wants to do programming. Okay, getting a bit philosophical here, but my point is to place Elm in this world. Elm takes a step even deeper into the programming world when it comes to expressing the UI. Actually pretty much like hyperscript.

Get going with Elm

Though Elm is really easy to get going with, you need more.
Just like you can easily load up a script tag, you need something around it when you want to create applications. The first challenge is CSS. You need CSS in your application, but Elm cannot express that currently. So we need a workflow to handle this. We also need a workflow that builds our application and brings the compiled Elm application into an HTML file along with the CSS.

I was a bit surprised that there really is not much information about this. But after some digging I built my own workflow using Gulp. If you want to play with Elm I suggest you use that to get going. The workflow lets you use LESS for CSS and it will automatically compile on file changes. It will also live reload the page on any changes. I am a big fan of webpack, but it does not make any sense to use it with Elm. The reason is that Elm code cannot require any assets other than Elm files.

Taming CSS with Elm

One of the latest and greatest ideas is CSS Modules. It lets you scope your CSS to your components/views. This is not possible with Elm, as Elm cannot import CSS files. So we need a different strategy. The strategy I suggest is splitting your Elm application up into different views, where the top node of each view has the view's name as its class name. An example:

Title.elm

view =
    div [ class "Title" ] [ ]

This way you can separate styling and still scope it to specific parts of your application. That said, any subviews are at risk of being affected by parent view CSS.

The challenges Elm needs to cover

So I am no Elm missionary. I am just a programmer who is always looking for tools to solve my problems, and I think Elm has some very good ideas. That said, I am very unsure how Elm works at a larger scale. Counters and a TodoMVC where all the code is in the same file do not really help me. So let's create a list of things I am specifically going to explore in this article:

- All examples of Elm applications I have seen are expressed as one single file.
That does not work in bigger applications. How does splitting my app into different files affect how Elm works?
- All examples of Elm applications I have seen do not have much state, and the state is defined in the same file as the view. That does not scale. You need a global state store to be able to share state across views. So how does Elm handle this?
- All examples of Elm applications I have seen define their state changing logic inside the same file as the view. This does not scale. You need a global state changing layer to allow multiple views to trigger the same state changes. Is that possible with Elm?

There are many other things to explore, but these are the basics of building a scalable application. Let's dive in!

Creating an application

Given that you have used this boilerplate you can just fire up the server and the workflow. I will go step by step through how you build the app and compare it to normal JavaScript. What is really great about Elm is that you get all the tools you need right out of the box.

module App where

import Html exposing (..)

main =
    text "Hello world!"

This piece of code creates a module. The module exposes all the methods of the Html module on the scope and defines the special main function. That function is just like the render function of React. So let's see this in JavaScript:

import {Text, Div, Span...} from 'Html';

render(<Text>Hello world</Text>, document.body);

The difference here is that we automatically expose everything from the Html module.

Defining state

I come from the world of single state trees and Flux. I do not want state in my components/views, because it too often gets me into problems. The reason is that some other component/view needs access to the same state. The great thing about Elm is that everything is immutable, and the syntax makes it easy to handle this, unlike JavaScript where it can look quite ugly.
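To make "quite ugly" concrete, here is a sketch of what an immutable update of a single nested field tends to look like in plain JavaScript. The state shape below is just an example I made up for illustration:

```javascript
// Example state with one nested record.
const state = {
  items: { isLoading: false, hasError: false, list: ['foo'] }
};

// To change items.list without mutating anything, every level
// above it has to be copied by hand.
const newState = Object.assign({}, state, {
  items: Object.assign({}, state.items, {
    list: state.items.list.concat('bar')
  })
});

console.log(newState.items.list); // ['foo', 'bar']
console.log(state.items.list);    // ['foo'] -- the original is untouched
```

Elm's record update syntax expresses the same intent in a single expression, which is part of what makes the Elm versions shorter.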
Model.elm

module Model where

items : List String
items = [ "foo", "bar" ]

In JavaScript this would look like:

export const items = ["foo", "bar"];

So basically everything you define on a module is exposed by default, though you can control that. In the Elm code we define that our items is a list of strings, which is great. That helps Elm understand when we might do something wrong with the list somewhere else in the code. This looks great when exposing a list, but what about a record/object?

module Model where

type alias Items =
    { isLoading : Bool
    , list : List String
    , hasError : Bool
    }

items : Items
items =
    { isLoading = False
    , list = [ ]
    , hasError = False
    }

Okay, so this is possible. But what if a piece of my state is a List of records?

module Model where

type alias Item =
    { title : String
    , solved : Bool
    }

type alias Items =
    { isLoading : Bool
    , list : List Item
    , hasError : Bool
    }

items : Items
items =
    { isLoading = False
    , list = [ ]
    , hasError = False
    }

Okay, so this certainly scales, though this file will get huge as I add lots of state. Maybe it is a better idea to split it up into multiple files. That would allow me to do:

Model/Items.elm

module Model.Items where

isLoading : Bool
isLoading = False

hasError : Bool
hasError = False

type alias Item =
    { title : String
    , solved : Bool
    }

list : List Item
list = [ ]

This certainly looks more scalable, though it will require a lot of work to wire it into our model. The reason is that each of these exposed values has to be accessed individually. We cannot use the module like a normal object, as we would in JavaScript. That means we have to change it a bit:

Model/Items.elm

module Model.Items where

type alias Item =
    { title : String
    , solved : Bool
    }

model : { isLoading : Bool, hasError : Bool, list : List Item }
model =
    { isLoading = False
    , list = [ ]
    , hasError = False
    }

Okay, so now we have a single exposed value, which is some state, and we have types on it. Nice!
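For comparison, a JavaScript module playing the same role as that last Model/Items.elm would simply export one initial-state object. This is a rough sketch; the Item shape only lives in a comment, since plain JavaScript has no type aliases:

```javascript
// Hypothetical JS counterpart of Model/Items.elm: a module exposing a
// single initial-state value. Each entry in `list` is shaped like
// { title: string, solved: boolean }.
const model = {
  isLoading: false,
  hasError: false,
  list: []
};

module.exports = { model };
```

The difference is that nothing here is checked: misspelling isLoading somewhere else in the app only fails at runtime, whereas the Elm version fails at compile time.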
Let's bring it into our app:

module App where

import Html exposing (..)
import Model.Items

main =
    text "Hello world!"

As you can imagine, you will have many model files, much like reducers in Redux, though they just describe initial state, not how that state is changed. We will get to that.

Exposing state to the application

Now we need a way to expose our model to the app. Elm comes with a small package, called StartApp, that lets you expose state and a messaging concept for doing state changes. This is part of the boilerplate and you just:

module App where

import Html exposing (..)
import Model.Items
import StartApp.Simple as StartApp

-- Model
initialState =
    { items = Model.Items.model }

-- View
view address model =
    text "Hello world!"

main =
    StartApp.start { model = initialState, view = view }

Okay, when we start our application we pass in some initial state and the top level view. As you can see we have access to the model in our view, but also something called address. This has to do with changing the state of your application. Let's dive into that.

Changing the state of your application

As stated in the introduction, we need a way to let any view change any part of our state. In most examples this is defined within the view, but that coupling is something that gets you into problems in larger applications. So I have been researching a bit. But first, let's talk about how we actually express a state change in Elm.
Actions/Items.elm

module Actions.Items where

type Action
    = NoOp
    | Add String

update action model =
    case action of
        Add text ->
            let
                items = model.items
                changedItems = { items | list = List.append items.list [ text ] }
            in
                { model | items = changedItems }
        NoOp ->
            model

If you have used Redux this will look familiar to you:

import { NO_OP, ADD } from './actions';

export default function (state, action) {
    switch (action.type) {
        case ADD:
            const items = state.items;
            const changedItems = {...items, list: items.list.concat(action.text)};
            return {...state, items: changedItems};
        case NO_OP:
            return state;
    }
}

// Or maybe you use Object.assign
export default function (state, action) {
    switch (action.type) {
        case ADD:
            const items = state.items;
            const changedItems = Object.assign({}, items, {list: items.list.concat(action.text)});
            return Object.assign({}, state, {items: changedItems});
        case NO_OP:
            return state;
    }
}

As you can see, the difference from Redux is that you have access to all your state, and you have different syntax for changing the state. It certainly is shorter and sweeter. But as I said, Elm examples define their actions within the views. What I want is to define them globally so that any view can use them. Let's get back to our application and see what we can do to fix that:
When I add new actions I just add them to the list passed to updates, and any view is able to trigger a change in any part of my application. In my experience that will never get you into any scaling problems.

Trigger a state change

So now we just have to solve the last part: nesting views and triggering state changes. Let us create a new file that will list our items and also add new items to the list.

View/Items.elm

module View.Items where

import Html exposing (..)
import Html.Attributes exposing (..)
import Html.Events exposing (..)
import Actions.Items as Items

item item =
  li [ ] [ text item ]

view address model =
  div [ class "Items" ] [
    button [ onClick address (Items.Add "Foo") ] [ text "Add" ],
    ul [ ] (List.map item model.items.list)
  ]

As we can see, our nested view will receive address and all the state of our app through model. In our div we have a button that will use address to trigger an Add action and pass the text "Foo" when it is clicked. Then our ul element's children will be built by mapping over the items list and producing li elements. Much like in JavaScript, using the React and Redux approach:

import actions from './actions';

export default function (props) {
  function renderItem(item) {
    return <li>{item}</li>;
  }
  return (
    <div className="Items">
      <button onClick={() => actions.addItem("foo")}>Add</button>
      <ul>
        {props.items.list.map(renderItem)}
      </ul>
    </div>
  );
}

So now let us move over to our main application file and use our view.

module App where

import StartApp.Simple as StartApp
import Html exposing (..)
import Model.Items
import Actions.Items
import View.Items as Items

-- Model
initialState =
  { items = Model.Items.model }

-- View
view address model =
  div [ class "App" ] [
    h1 [ ] [
      text "Hello world!"
    ],
    Items.view address model
  ]

-- Update
update update data =
  { data | model = update data.action data.model }

updates updaters =
  \action model ->
    .model (List.foldr (update) { action = action, model = model } updaters)

main =
  StartApp.start {
    model = initialState,
    view = view,
    update = updates [Actions.Items.update]
  }

Summary

As stated, I am no expert in Elm. I am a JavaScript developer who has spent a lot of time with different Flux implementations and tried to solve some of these challenges myself, for example in the cerebral project. Diving into Elm I was surprised how quickly I got used to the new syntax and how familiar it really is to our world. That said, Elm is its own language and has lots of great built-in features, like rarely having runtime errors, because Elm understands how everything is connected in your app at compile time. It also has immutability built in, which is really great. Last but not least, it allows you to express state and state changes with really nice syntax and less verbosity than in JavaScript.

My initial concerns with Elm I believe to be solvable. This was just my initial approach though, and maybe there are far better ways of doing it. That said, I still have more concerns, like expressing complex state changes, getting into side effects like ajax requests, and building bigger nested structures of views. Passing address and model everywhere feels a bit off. Performance is also a concern, as I do not quite understand how Elm decides which views need to be rendered. Hopefully this gave you some insight into Elm, and please do try it out using this boilerplate. It is a fantastic piece of work and there is so much more to Elm than what I have gone through here.

Thanks for reading!
https://christianalfoni.herokuapp.com/articles/2015_11_30_Exploring-Elm-part1
Many programming languages, C# included, treat certain sequences of letters as “special”. Some sequences are so special that they cannot be used as identifiers. Let’s call those the “reserved keywords” and the remaining special sequences we’ll call the “contextual keywords”. They are “contextual” because the character sequence might have one meaning in a context where the keyword is expected and another in a context where an identifier is expected.[1. An unfortunate consequence of this definition is that using is said to be a reserved keyword even though its meaning depends on its context; whether the using begins a directive or a statement determines its meaning. And similarly for other keywords that have multiple usages, like fixed.] The C# specification defines the following reserved keywords: void volatile while The implementation also reserves the magic keywords __arglist __makeref __reftype __refvalue which are for obscure scenarios that I might blog about in the future. Those are the keywords that we reserved in C# 1.0; no new reserved keywords have been added since. It is tempting to do so, but we always resist. Were we to add a new reserved keyword then any program that used that keyword as an identifier would break upon recompilation. Yes, you can always use a keyword as an identifier if you really want: @typeof @goto = @for.@switch(@throw); is perfectly legal, though more than a little weird. But we prefer to avoid as many breaking changes as possible. We also have a whole bunch of contextual keywords. The “preprocessor” [1. An unfortunate name, since “preprocessing” is not done before regular language processing. In C#, the so-called “preprocessing” happens during lexical analysis.] uses all the directives (#define, and so on), which of course were never valid identifiers in the first place. But it also uses the contextual keywords hidden default disable restore checksum.
C# 1.0 had contextual keywords get set value add remove for properties, indexers and events. The attribute locations event and return are already reserved keywords; assembly module type method field property param typevar are contextual keywords in the context of an attribute. C# 2.0 added where partial global yield alias. C# 3.0 added from join on equals into orderby ascending descending group by select let var. C# 4.0 added dynamic. C# 5.0 added async await. Every time we add one of these we need to carefully design the grammar so that if possible, the use of the new contextual keyword does not possibly change the meaning of an existing program which used it. For example, when defining a partial class, the partial must go immediately before the class. Since there was never a legal C# 1.0 program where partial appeared immediately before class, we knew that adding this new feature to the grammar would not possibly break any existing programs. Or, another example. Consider var x = 1; – that could have been a legal C# 2.0 program if there was a type called var with a user-defined implicit conversion from int. The semantic analyzer for declaration statements checks to see whether there is a type called var that is accessible at the declaration; if there is then the normal declaration rules are used. Only if there is not such a type can we do the analysis as an implicitly typed local declaration. One might wonder why on earth we added five contextual keywords to C# 1.0, when there was no chance of breaking backwards compatibility. Why not just make get set value add remove into “real” keywords? Because we could easily get away with making them contextual keywords, and it seemed likely that real people would want to name variables or methods things like get, set, value, add or remove. So we left them unreserved as a courtesy. Those were easy to make contextual, unlike, say, return. 
That’s a lot harder to make a contextual keyword because then return (10); would be ambiguous; is that calling the method named return or returning ten? So we didn’t make any of the other reserved keywords into contextual keywords.
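As an aside for readers coming from other languages, the reserved-vs-contextual split is not unique to C#. Python, for instance, exposes exactly this distinction in its standard library `keyword` module (the soft-keyword API requires Python 3.9+), which makes for a quick concrete example of each category:

```python
import keyword

# Hard (reserved) keywords can never be used as identifiers...
print(keyword.iskeyword("return"))       # True

# ...while soft keywords like 'match' are contextual: Python parses them
# as keywords only in certain positions, so this is a legal identifier.
match = 42
print(keyword.issoftkeyword("match"))    # True
print(keyword.iskeyword("match"))        # False
```

The same backward-compatibility reasoning applies there: `match` was added for pattern matching as a soft keyword precisely so that existing programs using it as a variable name would not break.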
https://ericlippert.com/2009/05/11/reserved-and-contextual-keywords/
Our programming tools should be more democratic, egalitarian, said (Dr.? Mr.? I was assuming Dr.) Edwards a while back. (I, a peon, have great respect for and mostly agree with him, of course, utterly seriously.) What I see happen in the real world is something like: (1) hack something up to get it working asap; (2) oh no we got actual customers [remember the famous quote about not being able to change Makefile syntax?] we'd better keep on truckin' with this since they want new features and are already bought in to how it works today; (3) who the hell wrote this crap, I can't possibly maintain it! [remember the famous death of Netscape?] So if we hand people Visual Basic, they might make something great that would never have existed before. BUT what kind of rat's nest of hell code will they produce, and what zillions of crazy bugs will they have, and and and. If that is a real problem somewhere in the world sometimes, how best to address it? Rewrite it? Outsource/offshore rewriting it? Sell it to CA, Inc.? Or something else? What could we have in our programming languages and ecosystems that would let the masses have their cake and eat it, too?

Even as a few people still work on improving visual gui editors, I've noticed that the entire software development community seems to have chosen markup languages over concrete tools. We've chosen metaprogramming over tools. We've chosen text over visual design. So for user interfaces, it's as if textual programming languages won out over visual programming languages all over again, just in the realm of visual DSLs instead of in the same language the rest of the code was in.
Perhaps if programmers used more powerful languages better suited to run-time generation, something with the power of Scheme, for instance, then the two languages would merge and user interfaces would be generated code in the base language.

What could we have in our programming languages and ecosystems that would let the masses have their cake and eat it, too? Easy extraction, refactoring, and sharing of subprograms. Type-safe structured editing. Good support for visual DSLs. We could do a lot to address the entanglement problem, where code easily becomes entangled with context through state, namespace, etc. If we make it easier to extract, share, and safely reuse pieces of applications, we could have something much closer to a component market (rather than an 'app' market). Shifting a lot of type support into the editor would make it a lot easier to snap elements together safely, and to search a market of available components for relevant content. Visualization can help users grok code they haven't seen before, and provide considerable feedback during development so they begin to absorb information without consciously thinking about it. We really need many ways to visualize code and behavior (e.g. views of code, views of types, views of statically computable values, animation during compilation or of partial evaluations during composition, animation of execution). Automatic fuzz testing could provide a nice basis for animations during edits.

People used Visual Basic because they didn't know how to program or get started, and having a visual editor for their user interface made it possible to get simple programs up in a gui environment without learning much. People like that don't need solutions for problems with modules and sharing and safety and programming in the large; they need their hand held to do the first steps of small programs. And I'm not sure that type-safe anything is useful for small-scale programmers, either neophyte or professional.
It's a lot of mental overhead that isn't worth it, no matter how much research is obsessed with it.

Note that "morphic" is a newer design for a Smalltalk gui and visual editor; perhaps it started in Self rather than Smalltalk. Smalltalk's gui was copied by everyone. It has some really bad elements: using a model/view/controller separation when 99.99% of programs don't need it was a horrible choice. But the fact of having a simple (not complicated by types) message-passing model for the gui, and the openness and ease of exploring, changing and tracing it (at least in the original version), made it the easiest to program against. But it wasn't any good for absolute neophytes without a visual editor. The lack of a meta-language for the user interface makes it out of style, rigid and less useful to professional programmers (unless they get used to generating Smalltalk), but it's still the best known approach for "Visual Basic"-like use. The problem with Smalltalk has been the way it's licensed, not the rest of it.

But I believe that Alan Kay and some others have a current project trying to demonstrate that the basics of an OS and tools (such as editors) can be written in much simpler code, maybe 1/1000th the size that people are used to. The system is based on a dynamically typed language (Smalltalk) with some sort of constraint programming helping with gui layout - people may be using the constraint library for Mac programs these days too, I'm not sure. While Kay's systems never seem to get finished or polished these days, the idea that the underlying system could be much simpler might make programming much easier. Perhaps that would be useful in general for the sort of unsophisticated programmers who loved Visual Basic, and for the rest of us. The proliferation of unnecessary complexity makes programming a lot harder than it needs to be. And the idea that even full-featured programs could be much, much smaller has been proved.
The editor in the Xerox star was fairly full featured (supporting even Asian texts layout and pictures..), and imagine how much smaller any program written in the 70s had to have been. Steps Toward The Reinvention of Programming We may have had some follow up posts about it, too. People like that don't need solutions for problems with modules and sharing and safety and programming in the large, they need their hand held to do the first steps of small programs. The former supports the latter. Having easy component sharing and integration helps people find and recognize what they need, e.g. in component libraries, especially when taken together with good visualization, without grokking the inner workings. The ability to easily disentangle components, in turn, helps keep systems more maintainable and the work more reusable, including from one prototype to another. Type-safe composition and editing can be very useful for small-scale projects and neophyte developers. It's no different than how having differently shaped adapters at the ends of your wires (e.g. for Ethernet vs. USB vs. PS/2 vs. power) helps people compose things correctly without really thinking about it. Types can also aide search for relevant components. Types can be used in many different ways, some productive and some much less so. Every little element of design contributes to everything else. Or at least it should. If we don't then we're just going the Monty Python swamp castle skit all over again like we did with previous "peoples' languages" like VB. Those who inherit the code of such things are generally as a rule very sad as a result. Of course, the fact that *any* code one gets to inherit and maintain makes one really sad means our *entire* approach to coding things up frankly sucks, even if we're using "professionals' languages". The problem with visual basic is a lack of power, lack of abstraction not a lack of typed variables or modules. 
I would much rather inherit a program written in R5RS Scheme than one written in visual basic, even though R5RS has less support for typed variables and no more support for modules, because it has enough power to represent abstractions. There's a practical difference between modularity with careful discipline vs. limiting entanglement so first-order expressions are easily refactored into modules. There are also UX differences between using types to say "no! you're wrong!" after having written something and type safe structured editing, which uses types to ensure you get it right at edit time. Dismissing my description as 'typed variables and modules' does disservice to both my ideas and your comprehension of them. As far as lacking power and abstraction, I think that's the easier problem to address. A difficulty is providing abstraction to "the masses" which suggests support for thinking in low-order, concrete ways. DSLs are probably suitable for this role, since a good DSL can abstract over the computational noise while remaining concrete within one domain. IIRC, VB never really experimented with visual DSLs. PureData has experimented with them a little, but I believe it could be done better. The problem with {visual basic, java, go, c, and any number of other languages that people use on a regular basis to ship code} is a lack of power, lack of abstraction. :-) :-( I wanted to play some DSL building package based on the idea that you have an editor that doesn't even allow syntax or semantic errors. Of course my computer is too slow and with 2 gig of ram too small to even run the editor. My feeling, though, is that an editor that doesn't allow errors is an editor that doesn't allow writing and modifying code. It would be one of the deepest rings of hell. There has never been a popular product that attempted anything like that, so saying that it's "the future" is selling snake oil that almost sounds disproven. 
Isn't that just a visual language without the pointless pictures? Also proven to be a pain in the butt? I guess that's too harsh. But I'm not convinced. I suppose editors will get better, computers will get faster etc. Forcing structure on people kind of makes sense as training wheels or as error reporting after you're done editing, but if editing requires more complex asking permission than just typing, then it will slow people to a crawl. Structured editing with a keyboard is possible, e.g. similar to using intellisense to build and search for expressions. Paul Chiusano's recent work on Unison is an example aiming for typesafe editor (especially related: [1][2]). More generally, we might also leverage a concept of gestures, for which the keyboard might be one tool for inputting them. I'm not sure what you're imagining. I'm certain you can imagine plenty of obviously bad ways to do it. We wouldn't be favoring those. gestures that should be supported.
http://lambda-the-ultimate.org/node/5135
# Lingtrain Aligner. How to make parallel books for language learning. Part 1. Python and Colab version

![title](https://habrastorage.org/r/w780q1/webt/c4/sh/ft/c4shftv4oh58zvszgfnpvusie3k.jpeg)

If you're interested in learning new languages or teaching them, then you probably know a technique called parallel reading. It helps you immerse yourself in the context, increases your vocabulary, and lets you enjoy the learning process. When it comes to reading, you most likely want to choose your favorite author or theme, and that is often impossible if no one has published such a parallel edition. It gets even worse when you're learning a cool language like Hungarian or Japanese.

Today we are taking a big step toward changing this situation. We will use the **lingtrain\_aligner** tool. It's an open-source Python project which aims to help everyone eager to learn foreign languages. It's a part of the Lingtrain project; you can follow us on [Telegram](https://t.me/lingtrain_books), [Facebook](https://www.facebook.com/lingtra.in/) and [Instagram](https://www.instagram.com/lingtra.in/). Let's start!

Find the texts
--------------

First, we should find the two texts we want to align. Let's take two editions of "To Kill a Mockingbird" by Harper Lee: a Russian translation and the English original. The first lines of the found texts look like this:

```
TO KILL A MOCKINGBIRD
by Harper Lee

DEDICATION
for Mr. Lee and Alice
in consideration of Love & Affection

Lawyers, I suppose, were children once.
Charles Lamb

PART ONE

1

When he was nearly thirteen, my brother Jem got his arm badly broken at the elbow. When it healed, and Jem’s fears of never being able to play football were assuaged, he was seldom self-conscious about his injury. His left arm was somewhat shorter than his right; when he stood or walked, the back of his hand was at right angles to his body, his thumb parallel to his thigh.
He couldn’t have cared less, so long as he could pass and punt.
...
```

---

```
Харпер Ли
Убить пересмешника

Юристы, наверно, тоже когда-то были детьми.
Чарлз Лэм

ЧАСТЬ ПЕРВАЯ

1

Незадолго до того, как моему брату Джиму исполнилось тринадцать, у него была сломана рука. Когда рука зажила и Джим перестал бояться, что не сможет играть в футбол, он ее почти не стеснялся. Левая рука стала немного короче правой; когда Джим стоял или ходил, ладонь была повернута к боку ребром. Но ему это было все равно — лишь бы не мешало бегать и гонять мяч.
...
```

Extract parallel corpora
------------------------

The first step is to make a parallel corpus from our texts. It is a serious task, mainly for the following reasons:

* Professional translators are artists of a kind. They can translate one sentence as several and vice versa. They feel the language and can be very creative in their desire to convey the *meaning*.
* Some parts of the translated text can be missing.
* During extraction we need to preserve the paragraph structure somehow. Without it, we will not be able to create a solid, well-formatted book.

We will use the Python **lingtrain\_aligner** library, which I'm developing. Under the hood, it uses machine learning models (sentence-transformers, LaBSE, and others). Such models transform sentences into dense vectors, or **embeddings**. Embeddings are a very interesting way to capture the meaning contained in a sentence. We can calculate the cosine distance between the vectors and interpret it as semantic similarity. The most powerful (and huge) model is LaBSE by Google. It supports over 100 languages.

Before we feed our texts into the tool we need to prepare them.

### Prepare the texts

#### Add the markup

I've made a simple markup language to extract the book structure right before the alignment. It's just a special kind of token that you need to add to the end of the sentence.

| Token | Purpose | Mode |
| --- | --- | --- |
| %%%%%title. | Title | Manual |
| %%%%%author.
| Author | Manual |
| %%%%%h1. %%%%%h2. %%%%%h3. %%%%%h4. %%%%%h5. | Headings | Manual |
| %%%%%qtext. | Quote | Manual |
| %%%%%qname. | Text under the quote | Manual |
| %%%%%image. | Image | Manual |
| %%%%%translator. | Translator | Manual |
| %%%%%divider. | Divider | Manual |
| %%%%%. | New paragraph | Auto |

#### New paragraph token

This kind of token is placed automatically following the rule:

* if the line ends with a [.,:,!?] character followed by the end of the line (the '\n' char), we treat it as the end of a paragraph.

#### Text preprocessing

1. Delete unnecessary lines (publisher information, page numbers, notes, etc.).
2. Put labels for the author and the title.
3. Label the headings (H1 is the largest, H5 is the smallest). If the headers aren't needed, then just delete them.
4. Make sure that there are no lines in the text that end with a [.,:,!?] character but aren't the end of a paragraph (otherwise the whole paragraph will be split into two parts).

Place the labels in accordance with the aforementioned rules. Empty lines do not play any role. You should get documents similar to these:

```
TO KILL A MOCKINGBIRD%%%%%title.
by Harper Lee%%%%%author.

Lawyers, I suppose, were children once.%%%%%qtext.
Charles Lamb%%%%%qname.

PART ONE%%%%%h1.

1%%%%%h2.

When he was nearly thirteen, my brother Jem got his arm badly broken at the elbow. When it healed, and Jem’s fears of never being able to play football were assuaged, he was seldom self-conscious about his injury. His left arm was somewhat shorter than his right; when he stood or walked, the back of his hand was at right angles to his body, his thumb parallel to his thigh. He couldn’t have cared less, so long as he could pass and punt.
...
```

---

```
Харпер Ли%%%%%author.
Убить пересмешника%%%%%title.

Юристы, наверно, тоже когда-то были детьми.%%%%%qtext.
Чарлз Лэм%%%%%qname.

ЧАСТЬ ПЕРВАЯ%%%%%h1.

1%%%%%h2.

Незадолго до того, как моему брату Джиму исполнилось тринадцать, у него была сломана рука.
Когда рука зажила и Джим перестал бояться, что не сможет играть в футбол, он ее почти не стеснялся. Левая рука стала немного короче правой; когда Джим стоял или ходил, ладонь была повернута к боку ребром. Но ему это было все равно - лишь бы не мешало бегать и гонять мяч.
...
```

Marked lines will be automatically extracted from the texts before alignment. They will be used when we make the book.

### Align texts

#### Colab

We will do the whole process in a Google Colab notebook, so everyone can do the same for free without installing anything on their machine. I've prepared the Colab; it contains all the instructions.

[Colab version](https://colab.research.google.com/drive/1_ics0YzWg5qIZIPhA1X_Wbfg0XZzRO-p?usp=sharing)

Meanwhile, we will look at the alignment process in more detail.

#### Details

After installing the tool with this command:

```
pip install lingtrain-aligner
```

we load our texts, add the paragraph tokens and split the texts into sentences:

```
from lingtrain_aligner import preprocessor, splitter, aligner, resolver, reader, vis_helper

text1_input = "harper_lee_ru.txt"
text2_input = "harper_lee_en.txt"

with open(text1_input, "r", encoding="utf8") as input1:
    text1 = input1.readlines()
with open(text2_input, "r", encoding="utf8") as input2:
    text2 = input2.readlines()

db_path = "book.db"

lang_from = "ru"
lang_to = "en"

models = ["sentence_transformer_multilingual", "sentence_transformer_multilingual_labse"]
model_name = models[0]

text1_prepared = preprocessor.mark_paragraphs(text1)
text2_prepared = preprocessor.mark_paragraphs(text2)

splitted_from = splitter.split_by_sentences_wrapper(text1_prepared, lang_from, leave_marks=True)
splitted_to = splitter.split_by_sentences_wrapper(text2_prepared, lang_to, leave_marks=True)
```

`db_path` is the heart of the alignment. It's an SQLite database that will hold information about the alignment and the document structure. Also note that we provided the language codes ("en" and "ru").
This means that some language-specific rules will be applied during the splitting. You can find all supported languages with this command:

```
splitter.get_supported_languages()
```

If your language is not listed, open an issue on GitHub or write to our Telegram group. You can also use the "**xx**" code to apply some base rules to your text.

Now that the texts are split, let's load them into the database:

```
aligner.fill_db(db_path, splitted_from, splitted_to)
```

#### Primary alignment

Now we will align the texts. It's a batched process: parts of the texts of size *batch\_size*, plus some extra lines of size *window*, are aligned together by the primary alignment algorithm.

```
batch_ids = [0,1,2,3]

aligner.align_db(db_path,
                 model_name,
                 batch_size=100,
                 window=30,
                 batch_ids=batch_ids,
                 save_pic=False,
                 embed_batch_size=50,
                 normalize_embeddings=True,
                 show_progress_bar=True
                 )
```

Let's see the result of the primary alignment. *vis\_helper* helps us plot the alignment structure:

```
vis_helper.visualize_alignment_by_db(db_path,
                output_path="alignment_vis.png",
                lang_name_from=lang_from,
                lang_name_to=lang_to,
                batch_size=400,
                size=(800,800),
                plt_show=True
                )
```

![](https://habrastorage.org/r/w1560/webt/s7/c1/jl/s7c1jlvghsr--ho0iddvfow-pu8.png)

Not bad, but there are a lot of conflicts. Why? Consider the following reasons:

* The model has too many good candidates. If a line is short (a bit of dialogue or just a name), the model can find another similar line within the window and take it as a suitable choice.
* The right line is not in the search interval. The texts have different sentence counts, and the "*alignment axis*" can go beyond the window.

To handle the second problem you can use the *shift* parameter. And to handle conflicts there is a special module called **resolver**.
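The score driving this primary alignment is, at its core, cosine similarity between sentence embeddings. Here is a minimal sketch in plain Python using toy 3-dimensional vectors instead of real LaBSE embeddings (the vectors are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = (a . b) / (|a| * |b|); 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: a Russian sentence, its English translation, and an unrelated sentence.
ru = [0.9, 0.1, 0.3]
en = [0.8, 0.2, 0.35]
other = [-0.2, 0.9, 0.1]

print(cosine_similarity(ru, en) > cosine_similarity(ru, other))  # True
```

A multilingual model like LaBSE maps a sentence and its translation to nearby vectors, so the translation pair scores higher than an unrelated pair, and the aligner picks the best-scoring candidate within the window.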
### Resolve the conflicts

We can list all the found conflicts using the following command:

```
conflicts_to_solve, rest = resolver.get_all_conflicts(db_path, min_chain_length=2, max_conflicts_len=6)
```

---

```
conflicts to solve: 46
total conflicts: 47
```

And some statistics:

```
resolver.get_statistics(conflicts_to_solve)
resolver.get_statistics(rest)
```

The most frequent conflicts are of size '2:3' and '3:2'. This means that one of the sentences was translated as two, or vice versa.

```
resolver.show_conflict(db_path, conflicts_to_solve[10])
```

---

```
124 Дом Рэдли стоял в том месте, где улица к югу от нас описывает крутую дугу.
125 Если идти в ту сторону, кажется, вот—вот упрешься в их крыльцо.
126 Но тут тротуар поворачивает и огибает их участок.

122 The Radley Place jutted into a sharp curve beyond our house.
123 Walking south, one faced its porch; the sidewalk turned and ran beside the lot.
```

The most successful strategy I have found is to resolve the conflicts iteratively, from smaller to bigger:

```
steps = 3
batch_id = -1  # all batches

for i in range(steps):
    conflicts, rest = resolver.get_all_conflicts(db_path, min_chain_length=2+i, max_conflicts_len=6*(i+1), batch_id=batch_id)
    resolver.resolve_all_conflicts(db_path, conflicts, model_name, show_logs=False)

    vis_helper.visualize_alignment_by_db(db_path, output_path="img_test1.png", batch_size=400, size=(800,800), plt_show=True)

    if len(rest) == 0:
        break
```

Visualization after the first step:

![](https://habrastorage.org/r/w1560/webt/-n/4w/ok/-n4wokvntudgbmuyqxitnrvspsw.png)

And after the second:

![](https://habrastorage.org/r/w1560/webt/hx/bn/jd/hxbnjd6v41ky8ul_nhc4p8w3egu.png)

Great! Now our *book.db* file holds the aligned texts along with the structure of the book (thanks to the markup).

Create a book
-------------

The module called **reader** will help us create a book.
```
from lingtrain_aligner import reader

paragraphs_from, paragraphs_to, meta = reader.get_paragraphs(db_path, direction="from")
```

With the *direction* parameter ["from", "to"] you can choose which paragraph structure is needed (that of the first text or the second). Let's create the book:

```
reader.create_book(paragraphs_from,
                   paragraphs_to,
                   meta,
                   output_path = "lingtrain.html")
```

And we will see this as output:

![](https://habrastorage.org/r/w1560/webt/cc/rx/cw/ccrxcwml7netwzracgnzctb4ucy.png)

It's a simple styled HTML page. I've added some styles to make it even more useful for language learners! That's the *template* parameter:

```
reader.create_book(paragraphs_from,
                   paragraphs_to,
                   meta,
                   output_path = "lingtrain.html",
                   template="pastel_fill")
```

![](https://habrastorage.org/r/w1560/webt/sq/an/wy/sqanwy7qjdizhidxno3_wvz4z_s.png)

```
reader.create_book(paragraphs_from,
                   paragraphs_to,
                   meta,
                   output_path = f"lingtrain.html",
                   template="pastel_start")
```

![](https://habrastorage.org/r/w1560/webt/ro/kb/3y/rokb3yh4udvdrva1wx09j06r9yq.png)

### Custom styles

You can even use your own style.
For example, let's highlight all even sentences in the book:

```
my_style = [
    '{}',
    '{"background": "#fafad2"}',
]

reader.create_book(paragraphs_from,
                   paragraphs_to,
                   meta,
                   output_path = f"lingtrain.html",
                   template="custom",
                   styles=my_style)
```

![](https://habrastorage.org/r/w1560/webt/cq/xu/av/cqxuav94vmkcy9awowzxgpddp7m.png)

You can use any CSS styles applicable to a span:

```
my_style = [
    '{"background": "linear-gradient(90deg, #FDEB71 0px, #fff 150px)", "border-radius": "15px"}',
    '{"background": "linear-gradient(90deg, #ABDCFF 0px, #fff 150px)", "border-radius": "15px"}',
    '{"background": "linear-gradient(90deg, #FEB692 0px, #fff 150px)", "border-radius": "15px"}',
    '{"background": "linear-gradient(90deg, #CE9FFC 0px, #fff 150px)", "border-radius": "15px"}',
    '{"background": "linear-gradient(90deg, #81FBB8 0px, #fff 150px)", "border-radius": "15px"}'
]

reader.create_book(paragraphs_from,
                   paragraphs_to,
                   meta,
                   output_path = f"lingtrain.html",
                   template="custom",
                   styles=my_style)
```

![](https://habrastorage.org/r/w1560/webt/ve/us/nj/veusnjchmh-r181yrieqlufywem.png)

I hope this will be helpful for everyone who loves languages. Have fun! Next time we will discuss creating multilingual books and use the UI tool I'm working on. Stay tuned.

To be continued
---------------

It is an open-source project. You can take part in it and find the code on our [github page](https://github.com/averkij/lingtrain-aligner). Today's Colab is [here](https://colab.research.google.com/drive/1_ics0YzWg5qIZIPhA1X_Wbfg0XZzRO-p). You can also [support the project](https://www.tinkoff.ru/collectmoney/crowd/averkiev.sergey7/wYn8f32996/) by making a donation.

### May the Language Force be with you.
https://habr.com/ru/post/586574/
On an average sized project, this upgrade should take around 30 minutes. We'll walk you through the changes you have to make to your current project and explain the reasoning behind them.

Controller constructors are now resolved by the container. This removes some redundancy from your code, and any duplicated auto-resolving can now be done directly in your constructor:

from masonite.request import Request

class YourController:
    def __init__(self, request: Request):
        self.request = request

    def show(self):
        print(self.request) # <class masonite.request.Request>

Read more in the Controllers documentation.

There is a new command that starts a Python shell and imports the container for you. Test it out to verify that objects are loaded into your container correctly. It's a great debugging tool.

$ craft tinker

Masonite 2 ships with an awesome little helper command that allows you to see all the routes in your application:

$ craft show:routes

A huge update to Masonite is the new --reload flag on the serve command. Now the server will automatically restart when it detects a file change. You can use the -r flag as a shorthand:

$ craft serve -r

An incredible new feature is autoloading support. You can now list directories in the new AUTOLOAD constant in your config/application.py file and Masonite will automatically load all classes in them into the container. This is great for loading commands and models into the container when the server starts up. You can also use the autoloader as a standalone class in your own service providers. Read more in the Autoloading documentation.

Updated all libraries to the latest version, with the exception of the Pendulum library, whose latest version contains a breaking change and was therefore left out. The breaking change would not be worth the added complexity of upgrading, so you may upgrade on a per-project basis.
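The autoloading idea described above can be approximated in plain Python. This is not Masonite's actual implementation — just a minimal sketch of the mechanism: import every module in a package and collect the classes it defines, so a container could bind them by name at startup.

```python
# Minimal sketch of directory-based autoloading (not Masonite's real code):
# import every module in a package and collect its classes by name.
import importlib
import inspect
import pkgutil

def autoload(package_name):
    """Return a {class_name: class} dict for every module in a package."""
    found = {}
    pkg = importlib.import_module(package_name)
    for info in pkgutil.iter_modules(pkg.__path__):
        module = importlib.import_module(package_name + "." + info.name)
        for name, obj in inspect.getmembers(module, inspect.isclass):
            found[name] = obj
    return found

# Demonstrate on a stdlib package instead of an app directory:
classes = autoload("json")
print("JSONDecoder" in classes)  # True
```

In an application, each discovered class would then be registered in the container, which is roughly what loading commands and models at server startup amounts to.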
Previously you had to import classes like:

from masonite.drivers.UploadDriver import UploadDriver

Now you can simply specify:

from masonite.drivers import UploadDriver

Because of this change we no longer need the same duplicated class names in the PROVIDERS list either. Read more about changing duplicated class names under the Duplicate Class Names documentation.

Removed the need for the redirection provider completely. You need to remove this from your PROVIDERS list.

Renamed Request.redirectTo to Request.redirect_to. Also removed the .send() method and moved the dictionary into a parameter:

def show(self):
    return request().redirect('/dashboard/@id', {'id': '5'})

Added a new Request.only method to fetch only the specific inputs needed.

Added a new Request.get_request_method() method to the Request class.

You can now completely remove the fetching of any inputs that Masonite handles internally, such as __token and __method, when fetching any inputs. This is also great for building third party libraries:

Request.all(internal_variables=False)

Because of the changes to internal framework variables, there are several changes to the CSRF middleware that comes in every application of Masonite. Be sure to read the changes in the Upgrade Guide 1.6 to 2.0.

Added a new default package to Masonite that allows scheduling recurring tasks. Read about Masonite Scheduler under the Task Scheduling documentation.

It's important during development that you have the ability to seed your database with dummy data. This will improve team development with Masonite by getting everyone's database set up accordingly. Read more in the Database Seeding documentation.

All templates now have a new static function in them to improve the rendering of static assets. Read more in the Static Files documentation.

You can use the password helper to hash passwords more simply than using straight bcrypt:

from masonite.helpers import password

password('secret') # returns bcrypt password

Read more in the Encryption documentation.
You can now specify which location in your drivers you want to upload to using a new dot notation:

Upload.store(request().input('file'), 'disk.uploads')

This will use the directory stored in:

DRIVERS = {
    'disk': {
        'uploads': 'storage/uploads',
        'profiles': 'storage/static/users/profiles/images'
    },
    ...
}

Masonite 2 removes the bland error codes such as 404 and 500 errors and replaces them with a cleaner view. This also allows you to add custom error pages. Read more in the Status Codes documentation.

Providers are now explicitly imported at the top of the file and added to your PROVIDERS list, which is now located in config/providers.py. This completely removes the need for string providers and boosts the performance of the application substantially.
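The dot notation for upload locations is essentially a two-level lookup into the DRIVERS dictionary. A hypothetical resolver (not Masonite's API — the function name and shape are assumptions for illustration) makes the mechanics explicit:

```python
# Hypothetical resolver showing how 'disk.uploads' maps onto a DRIVERS
# config dict (trimmed to the two locations shown in the docs above).
DRIVERS = {
    'disk': {
        'uploads': 'storage/uploads',
        'profiles': 'storage/static/users/profiles/images',
    },
}

def resolve_location(drivers, dotted):
    """Split 'driver.location' on the first dot and look it up."""
    driver, _, location = dotted.partition('.')
    return drivers[driver][location]

print(resolve_location(DRIVERS, 'disk.uploads'))  # storage/uploads
```

Keeping the lookup this simple is what lets the same config dict serve several named locations per driver.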
https://docs.masoniteproject.com/v/v2.2/whats-new/masonite-2.0
CPSC 124, Winter 1998
Sample Answers to Lab 3

This page contains sample answers to some of the exercises from Lab #3 in CPSC 124: Introductory Programming, Winter 1998. See the information page for that course for more information.

Exercise 1: The problem was to modify an existing random walk program. It was only necessary to change a few lines of the program. However, the exercise also asked you to modify the comments so that they would be appropriate for the new program. (This is not just make-work. The idea is to encourage you to read the program carefully and understand how it works, why each variable was declared, etc.) In my solution, I have removed two variables, newRow and newCol, since they are not needed in the modified program. I didn't expect you to do this, but it does make the program cleaner.

    /* In this program, a "disturbance" wanders randomly in a window.
       The window is actually made up of a lot of little squares.
       Initially, all the squares are black. But each time the disturbance
       visits a square, it becomes a slightly brighter shade of green
       (up to the maximum brightness). The action continues as long as
       the window remains open. Note that if the disturbance wanders off
       one edge of the window, it appears at the opposite edge.

       by David Eck, February 2, 1998
    */

    public class MosaicApplication {

       public static void main(String[] args) {

          MosaicFrame mosaic = new MosaicFrame(20,30,20,10);
                  // Open a window with 30 rows and 30 columns of rectangles,
                  // all initially black.

          int row = 15;  // The row in which the disturbance is located.
          int col = 15;  // The column in which the disturbance is located.
                         // (Initial values of 15 put the disturbance in the
                         // middle of the window.)

          while (mosaic.stillOpen()) {  // repeat as long as the window stays open

             // Executing this while loop will move the disturbance
             // one space either left, right, up, or down, and increase
             // the level of green displayed in the square it visits.

             int rand = (int)(4*Math.random());  // a random number from 0 to 3,
                                                 // used to decide which direction to move

             if (rand == 0) {  // move left
                if (row > 0)
                   row = row - 1;
                else  // disturbance was already at left edge; move to right edge
                   row = 29;
             }
             else if (rand == 1) {  // move right
                if (row < 29)
                   row = row + 1;
                else  // disturbance was already at right edge; move to left edge
                   row = 0;
             }
             else if (rand == 2) {  // move up
                if (col > 0)
                   col = col - 1;
                else  // disturbance was already at top edge; move to bottom
                   col = 29;
             }
             else {  // move down
                if (col < 29)
                   col = col + 1;
                else  // disturbance was already at bottom edge; move to top
                   col = 0;
             }

             int g = mosaic.getGreen(row,col);   // Get current level of green in this square.
             mosaic.setColor(row,col,0,g+10,0);  // Reset color with more green
                                                 // (but still no red or blue).

             mosaic.delay(5);  // insert a short delay between steps

          }  // end of while loop

       }  // end of main()

    }  // end of class MosaicApplication

Exercise 2: The exercise was to write the program described in the comment:

    /* This program displays a window containing a grid of little colored
       squares. Initially, the color of each square is set randomly. The
       program then selects one of the squares at random, selects one of
       its neighbors at random, and colors the selected square to match
       the color of its selected neighbor. This is repeated as long as the
       window is open. As the program runs, some colors disappear while
       others take over large patches of the window. (Note: For the purpose
       of determining the neighbor of a square, the bottom edge of the
       window is considered to be connected to the top edge, and the left
       edge is considered to be connected to the right edge.)

       David Eck, February 2, 1998
    */

    public class ConversionExperience {

       public static void main(String[] args) {

          MosaicFrame mosaic = new MosaicFrame(30,30);
                  // Open a window with 30 rows and 30 columns of rectangles,
                  // all initially black

          mosaic.fillRandomly();  // start by filling the mosaic with random colors

          while (mosaic.stillOpen()) {  // repeat as long as the window stays open

             // Executing this while loop will select a random square (by selecting
             // a random row and a random column). It will then randomly select
             // a neighbor of that square (by randomly selecting one of the directions
             // up, down, left, or right). The color of the selected square is
             // changed to match the color of its selected neighbor.

             int row = (int)(30 * Math.random());  // randomly selected row
             int col = (int)(30 * Math.random());  // randomly selected column

             int neighborRow = row;  // These will be the row and column of the
             int neighborCol = col;  // randomly selected neighbor. They are
                                     // initialized to be the same as row and col,
                                     // but one of them will be changed by
                                     // the following if statement.

             int rand = (int)(4*Math.random());  // a random number from 0 to 3,
                                                 // used to choose the direction in
                                                 // which the neighbor lies

             if (rand == 0) {  // choose neighbor to the left
                if (row > 0)
                   neighborRow = row - 1;
                else  // square is on the left edge; choose neighbor on right edge
                   neighborRow = 29;
             }
             else if (rand == 1) {  // choose neighbor to the right
                if (row < 29)
                   neighborRow = row + 1;
                else  // square is on the right edge; choose neighbor on left edge
                   neighborRow = 0;
             }
             else if (rand == 2) {  // choose neighbor above
                if (col > 0)
                   neighborCol = col - 1;
                else  // square is on the top edge; choose neighbor on bottom edge
                   neighborCol = 29;
             }
             else {  // choose neighbor below
                if (col < 29)
                   neighborCol = col + 1;
                else  // square is on the bottom edge; choose neighbor on top edge
                   neighborCol = 0;
             }

             int r = mosaic.getRed(neighborRow,neighborCol);  // Get color of neighbor
             int g = mosaic.getGreen(neighborRow,neighborCol);
             int b = mosaic.getBlue(neighborRow,neighborCol);

             mosaic.setColor(row,col,r,g,b);  // set color of square to match neighbor

             mosaic.delay(5);  // insert a short delay between steps

          }  // end of while loop

       }  // end of main()

    }  // end of class ConversionExperience

Exercise 3: This exercise was postponed to Lab 5.

David Eck, 2 February 1998
http://math.hws.edu/eck/cs124/labs98/lab3/answers.html
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)

TxF is another great Vista technology that allows you to manipulate files and the registry in a transacted way, in concert with other transactional operations on the system or the network. Imagine a business process that has to update a database, create a file, call a web service, etc., all in a transacted manner. With TxF this kind of thing becomes possible. To my own surprise, I hadn't blogged about TxF yet from a more technical point of view. In this post, I will :-). We'll start rather simply with a transacted file delete, and in future posts we'll dive deeper and deeper, each time making things easier for managed code developers too (although the real summum of TxF in managed code would be a core change to the BCL). The goal of this first post is to show some key players and basic approaches in the TxF world.

Since the early Vista days, things have changed quite a bit. The first approach the dev teams took was to keep the Win32 API functions for file I/O intact, e.g. CreateFile or DeleteFile. Only the internals changed, so that if a Kernel Transaction Manager (KTM) transaction is in flight, that transaction would be used (to put it simply for the purposes of this post). However, another approach was taken to bring transactions to file I/O, namely the use of the Transacted suffix, as in CreateFileTransacted and DeleteFileTransacted, amongst others (see later posts). Since the latter function has the simplest signature, it seemed an attractive candidate for a TxF introduction to me. Here it is:

BOOL WINAPI DeleteFileTransacted(
  LPCTSTR lpFileName,
  HANDLE hTransaction
);

Where the non-transacted function only had one parameter, this function also carries a handle to the KTM transaction to enroll in. Transactions are about ACID: atomicity, consistency, isolation and durability.
In this first demo, you'll see the atomic character of deleting two files in a transaction, the isolation while the transaction is running, and the durability once the transaction is committed. Here's the code:

 1 using System;
 2 using System.Runtime.InteropServices;
 3 using System.IO;
 4
 5 namespace TxF
 6 {
 7     class Program
 8     {
 9         [DllImport("Kernel32.dll")]
10         static extern bool DeleteFileTransactedW([MarshalAs(UnmanagedType.LPWStr)]string file, IntPtr transaction);
11
12         [DllImport("Kernel32.dll")]
13         static extern bool CloseHandle(IntPtr handle);
14
15         [DllImport("Ktmw32.dll")]
16         static extern bool CommitTransaction(IntPtr transaction);
17
18         [DllImport("Ktmw32.dll")]
19         static extern bool RollbackTransaction(IntPtr transaction);
20
21         [DllImport("Ktmw32.dll")]
22         static extern IntPtr CreateTransaction(IntPtr securityAttributes, IntPtr guid, int options, int isolationLevel, int isolationFlags, int milliSeconds, string description);
23
24         static void Main(string[] args)
25         {
26             //
27             // Demo setup.
28             //
29             string file1 = "c:\\temp\\txf1.txt";
30             string file2 = "c:\\temp\\txf2.txt";
31             using (StreamWriter sw = File.CreateText(file1))
32                 sw.WriteLine("Hello World");
33             using (StreamWriter sw = File.CreateText(file2))
34                 sw.WriteLine("Hello World");
35
36             //
37             // Start the demo.
38             //
39             Console.WriteLine("Press <ENTER> to start the transaction.");
40             Console.ReadLine();
41
42             //
43             // Create a kernel transaction.
44             //
45             IntPtr tx = CreateTransaction(IntPtr.Zero, IntPtr.Zero, 0, 0, 0, 0, null);
46
47             //
48             // Delete the files (transacted).
49             //
50             bool rollback = false;
51             if (!DeleteFileTransactedW(file1, tx))
52                 rollback = true;
53             if (!DeleteFileTransactedW(file2, tx))
54                 rollback = true;
55
56             //
57             // Commit or rollback?
58             //
59             if (!rollback)
60             {
61                 char c;
62                 do
63                 {
64                     Console.WriteLine("{0} {1}.", file1, File.Exists(file1) ? "still exists" : "has vanished");
65                     Console.WriteLine("{0} {1}.", file2, File.Exists(file2) ? "still exists" : "has vanished");
66                     Console.Write("Commit transaction (Y/N)? ");
67                     c = (char)Console.Read();
68                 }
69                 while (c != 'Y' && c != 'y' && c != 'N' && c != 'n');
70
71                 if (c == 'Y' || c == 'y')
72                     CommitTransaction(tx);
73                 else
74                     RollbackTransaction(tx);
75             }
76             else
77             {
78                 Console.WriteLine("Forced rollback!");
79                 RollbackTransaction(tx);
80             }
81
82             //
83             // Close kernel mode transaction handle.
84             //
85             CloseHandle(tx);
86         }
87     }
88 }

It assumes you have a c:\temp folder of course. Two files are created (26-34), called txf1.txt and txf2.txt. Next, the app stops (36-40) to give you a chance to obtain locks on the files from the outside to see what happens in that case. Finally, the real work starts by creating a transaction (45) and performing transacted file deletes (51, 53). Notice we track the success or failure of the transacted operations. In case a file is locked, the transacted delete won't succeed. Next, if no rollback is required, we demand user interaction to decide whether the transaction has to be completed or not (61-69), after which the commit or rollback is done (72, 74). Finally (85) the handle to the KTM transaction is closed.

A simple helper app might be useful to obtain locks from the outside:

using System;
using System.IO;

class Locker
{
    public static void Main(string[] args)
    {
        string f = args[0];
        using (File.OpenRead(f))
        {
            Console.WriteLine("{0} locked. Press <ENTER> to unlock.", f);
            Console.ReadLine();
        }
    }
}

Demo 1

1. Start TxF.exe.
2. Dir the c:\temp folder; it should contain txf1.txt and txf2.txt.
3. Press <ENTER> in TxF.exe to start the transaction and the transacted file deletes.
4. Dir the c:\temp folder; the txf[12].txt files should still be present.
5. In TxF.exe press Y to commit.
6. In c:\temp the txf[12].txt files will be gone.

Demo 2

5. In TxF.exe press N to rollback.
6. In c:\temp the txf[12].txt files will still be there. Remove them via erase txf?.txt.

Demo 3

3. Execute locker.exe txf1.txt.
4. Press <ENTER> in TxF.exe to start the transaction and the transacted file deletes.
5. A forced rollback will be executed.
6. Unlock the file by exiting locker.exe.
7. In c:\temp the txf[12].txt files will both be present. Remove them via erase txf?.txt.

Demo 4

5. Try to run locker.exe txf1.txt and notice the file is locked by the transaction in flight.
6. In TxF.exe press Y to commit.
7. In c:\temp the txf[12].txt files will be gone.

In this introduction post you learned the basic concepts of the KTM and TxF. Notice that as a managed developer, this should be the only time you interact directly with the KTM, and then only for demonstration purposes. In a next post, you'll learn how to leverage the power of the KTM, TxF (and TxR) through System.Transactions with little additional plumbing.

Commit!
http://blogs.bartdesmet.net/blogs/bart/archive/2006/11/05/Windows-Vista-_2D00_-Introducing-TxF-in-C_2300_-_2800_part-1_2900_-_2D00_-Transacted-file-delete.aspx
Rename-Item

Updated: April 21, 2010
Applies To: Windows PowerShell 2.0

Renames an item in a Windows PowerShell provider namespace.

Syntax

Description

-NewName <string>

Specifies the new name of the item. Enter only a name, not a path and name. If you enter a path that is different from the path that is specified in the Path parameter, Rename-Item generates an error. To rename and move an item, use the Move-Item cmdlet.

You cannot use wildcard characters in the value of NewName. To specify a name for multiple files, use the Replace operator in a regular expression. For more information about the Replace operator, type "Get-Help about_Comparison_Operators". For a demonstration, see the examples.

-PassThru

Passes an object representing the item to the pipeline. By default, this cmdlet does not generate any output.

-Path <string>

Specifies the path to the item to rename.

The Rename-Item cmdlet is designed to work with the data exposed by any provider. To list the providers available in your session, type "Get-PsProvider". For more information, see about_Providers.

Example

Description
-----------
This command uses the Rename-Item cmdlet to rename a registry key from Advertising to Marketing. When the command is complete, the key is renamed, but the registry entries in the key are unchanged.

Example 4

The dot (.) preceding "txt" is interpreted to match any character. To ensure that it matches only a dot (.), it is escaped with a backslash character (\). The backslash character is not required in ".log" because it is a string, not a regular expression.
http://technet.microsoft.com/en-us/library/dd315353.aspx
Here we will understand the reparameterization trick used by Kingma and Welling (2014) to train their variational autoencoder.

Assume we have a normal distribution $q$ that is parameterized by $\theta$, specifically $q_{\theta}(x) = N(\theta,1)$. We want to solve the following problem:

$$ \text{min}_{\theta} \quad E_q[x^2] $$

This is of course a rather silly problem and the optimal $\theta$ is obvious. We want to understand how the reparameterization trick helps in calculating the gradient of the objective $E_q[x^2]$.

One way to calculate $\nabla_{\theta} E_q[x^2]$ is as follows:

$$ \nabla_{\theta} E_q[x^2] = \nabla_{\theta} \int q_{\theta}(x) x^2 dx = \int x^2 \nabla_{\theta} q_{\theta}(x) \frac{q_{\theta}(x)}{q_{\theta}(x)} dx = \int q_{\theta}(x) \nabla_{\theta} \log q_{\theta}(x) x^2 dx = E_q[x^2 \nabla_{\theta} \log q_{\theta}(x)] $$

For our example where $q_{\theta}(x) = N(\theta,1)$, this method gives

$$ \nabla_{\theta} E_q[x^2] = E_q[x^2 (x-\theta)] $$

The reparameterization trick is a way to rewrite the expectation so that the distribution with respect to which we take the expectation is independent of the parameter $\theta$. To achieve this, we need to make the stochastic element in $q$ independent of $\theta$. Hence, we write $x$ as

$$ x = \theta + \epsilon, \quad \epsilon \sim N(0,1) $$

Then, we can write

$$ E_q[x^2] = E_p[(\theta+\epsilon)^2] $$

where $p$ is the distribution of $\epsilon$, i.e., $N(0,1)$. Now we can write the derivative of $E_q[x^2]$ as follows:

$$ \nabla_{\theta} E_q[x^2] = \nabla_{\theta} E_p[(\theta+\epsilon)^2] = E_p[2(\theta+\epsilon)] $$

Now let us compare the variances of the two methods; we are hoping to see that the first method has high variance while the reparameterization trick decreases the variance substantially.
import numpy as np N = 1000 theta = 2.0 eps = np.random.randn(N) x = theta + eps grad1 = lambda x: np.sum(np.square(x)*(x-theta)) / x.size grad2 = lambda eps: np.sum(2*(theta + eps)) / x.size print grad1(x) print grad2(eps) 3.86872102149 4.03506045463 Let us plot the variance for different sample sizes. Ns = [10, 100, 1000, 10000, 100000] reps = 100 means1 = np.zeros(len(Ns)) vars1 = np.zeros(len(Ns)) means2 = np.zeros(len(Ns)) vars2 = np.zeros(len(Ns)) est1 = np.zeros(reps) est2 = np.zeros(reps) for i, N in enumerate(Ns): for r in range(reps): x = np.random.randn(N) + theta est1[r] = grad1(x) eps = np.random.randn(N) est2[r] = grad2(eps) means1[i] = np.mean(est1) means2[i] = np.mean(est2) vars1[i] = np.var(est1) vars2[i] = np.var(est2) print means1 print means2 print print vars1 print vars2 [ 4.10377908 4.07894165 3.97133622 4.00847457 3.99620013] [ 3.95374031 4.0025519 3.99285189 4.00065614 4.00154934] [ 8.63411090e+00 8.90650401e-01 8.94014392e-02 8.95798809e-03 1.09726802e-03] [ 3.70336929e-01 4.60841910e-02 3.59508788e-03 3.94404543e-04 3.97245142e-05] %matplotlib inline import matplotlib.pyplot as plt plt.plot(vars1) plt.plot(vars2) plt.legend(['no rt', 'rt']) /usr/local/lib/python2.7/dist-packages/matplotlib/__init__.py:872: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter. warnings.warn(self.msg_depr % (key, alt_key)) <matplotlib.legend.Legend at 0x7facb844ae50> Variance of the estimates using reparameterization trick is one order of magnitude smaller than the estimates from the first method!
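The empirical variances above also agree with a closed-form calculation. Writing each single-sample gradient estimate in terms of $\epsilon \sim N(0,1)$ and using the moments $E[\epsilon^2]=1$, $E[\epsilon^4]=3$, $E[\epsilon^6]=15$ (odd moments vanish) gives:

```latex
g_1 = x^2(x-\theta) = \theta^2\epsilon + 2\theta\epsilon^2 + \epsilon^3,
\qquad \operatorname{Var}(g_1) = E[g_1^2] - (2\theta)^2 = \theta^4 + 14\theta^2 + 15

g_2 = 2(\theta + \epsilon),
\qquad \operatorname{Var}(g_2) = 4
```

At $\theta = 2$ these per-sample variances are $87$ and $4$; dividing by $N$ matches the arrays printed above (e.g. $87/10 = 8.7$ and $4/10 = 0.4$ against the observed $8.63$ and $0.37$ for $N = 10$).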
https://nbviewer.jupyter.org/github/gokererdogan/Notebooks/blob/master/Reparameterization%20Trick.ipynb
The other day one of the guys I work with was trying to work out the best way to generate an Excel document from .NET, as the client had some weird requirements around how the numerical data needed to be formatted (4 decimal places, but Excel treats a CSV to only show 2). The next day my boss came across a link to a demo of how to use LINQ to XML to generate an XML file using the Excel schema sets, which allows for direct opening in Excel. One problem with the demo: it was using VB 9, and anyone who's seen VB 9 will know it has a really awesome way of handling XML literals in the IDE. This isn't a problem if you're coding in VB 9, but if you're in C# it can be.

The VB 9 video can be found here: I recommend it be watched before progressing as it'll make a lot more sense against the following post. It'll also cover how to create the XML file, which I'm going to presume is already done.

Because C# doesn't have a nice way to handle XML literals like VB 9 does, we're going to have to do a lot of manual coding of XML. Additionally, we need to ensure that the appropriate namespaces are used on the appropriate nodes. The Excel XML uses 4 distinct namespaces, in 5 declarations (yes, I'll get to that shortly), so we'll start off by defining them like so:

XNamespace mainNamespace = XNamespace.Get("urn:schemas-microsoft-com:office:spreadsheet");
XNamespace o = XNamespace.Get("urn:schemas-microsoft-com:office:office");
XNamespace x = XNamespace.Get("urn:schemas-microsoft-com:office:excel");
XNamespace ss = XNamespace.Get("urn:schemas-microsoft-com:office:spreadsheet");
XNamespace html = XNamespace.Get("");

Notice how the "main namespace" and "ss" are exactly the same; this is how they are handled within the XML document.
The primary namespace for the file is urn:schemas-microsoft-com:office:spreadsheet, but in some locations it's also used as a prefix.

For this demo I'm going to be using the obligatory Northwind database and I'm going to just have a simple query against the customers table, like so:

var dataToShow = from c in ctx.Customers
                 select new
                 {
                     CustomerName = c.ContactName,
                     OrderCount = c.Orders.Count(),
                     Address = c.Address
                 };

Now we have to start building our XML. The root element is named Workbook, and then we have the following child groups, each with varying child properties.

First thing we need to do is set up our XElement and apply the namespaces, like so:

XElement workbook = new XElement(mainNamespace + "Workbook",
    new XAttribute(XNamespace.Xmlns + "html", html),
    CreateNamespaceAtt(XName.Get("ss", ""), ss),
    CreateNamespaceAtt(XName.Get("o", ""), o),
    CreateNamespaceAtt(XName.Get("x", ""), x),
    CreateNamespaceAtt(mainNamespace),

I'm using a helper method to create the namespace attribute (which you'll be able to find in the attached source), but notice how the "main" namespace is the last one we attach. If we don't do it this way, we'll end up with the XElement detecting the same namespace and only adding it once. Also, you need to ensure that you're prefixing the right namespace to the XElement tag!

These two node groups are not overly complex; they hold the various meta-data about the Excel document we are creating. I'll skip them as they aren't really interesting and can easily be found in the source.

This section is really important and handy for configuring custom looks within the document. There are way too many options to configure here to cover in the demo; it's easiest to generate the styles in Excel and save the file as an XML document (or read the XSD if you really want!). If you're doing custom styles, make sure you note the ID you give the style so you can use it later in your document.
Also, these styles are workbook-wide, not worksheet, so you can reuse them on each worksheet you create. I have a very simple bold header.

Here is where the fun starts: we need to generate our worksheet. There are 4 bits of data we need to output here. To illustrate the power of LINQ I've actually dynamically generated the header row:

Update: You should use dataToShow.First() not dataToShow.ToList() so you can get the properties for the header.

var headerRow = from p in dataToShow.First().GetType().GetProperties()
                select new XElement(mainNamespace + "Cell",
                    new XElement(mainNamespace + "Data",
                        new XAttribute(ss + "Type", "String"),
                        p.Name
                    )
                );

This is just a little bit of fun using LINQ and Reflection to dynamically generate the column headers ;)

Next we need to output the number of columns and number of rows (keep in mind the row count is the data count + header row count):

new XAttribute(ss + "ExpandedColumnCount", headerRow.Count()),
new XAttribute(ss + "ExpandedRowCount", dataToShow.Count() + 1),

Now we put out the header cells:

new XElement(mainNamespace + "Row",
    new XAttribute(ss + "StyleID", "Header"),
    headerRow
),

Then lastly we generate the data cells (note: this can be done like the header, I just chose to do it differently to illustrate that it can be done several ways):

(yes, I used an image this time, the formatting is a real bitch in the Umbraco WYSIWYG editor!)

Lastly there needs to be a WorksheetOptions node, and then you can combine all the XElements together, add it to an XDocument object and save! There you have it, how to create an Excel document using LINQ to XML and C#.

Download the source here.
http://www.aaron-powell.com/posts/2010-04-08-linq-to-xml-to-excel.html
Announcing Harmonium: Our React UI Component Kit

At Revelry, we've been building with React for a long time – since early developer betas. Over the last few years, and over dozens of projects, we've built and refined a UI component kit for React that we've code-named Harmonium.

Harmonium is available on GitHub, and there's also a React UI Component gallery at harmonium.revelry.co.

In fact, we quietly released Harmonium about 4 months ago. We had a lot of confidence in it, because we've used some version of this toolkit to build dozens of projects for clients. That said, we wanted to give it a soft launch as a public project and get some community feedback before we started banging the drum. To our delight, we've gotten a lot of great community contributions. So, it feels like the time to start talking is now.

Have a look at Harmonium Version 4:

Why another UI framework?

We found that there weren't that many comprehensive React component libraries. The JavaScript way is to bring dependencies into your project one at a time. You get form elements from here, and an image gallery from there. In theory, this is great: you can pick the perfect thing for each project. In reality, these disparate components clash: they don't look the same, they don't work together, and they may even fight for control over the same namespaces. And they often come each with their own infrastructure and utilities, bloating the end product. We never found one library that covered all the things we needed for one of our typical projects. And that's why we built Harmonium.

A React UI Component kit of parts built to work together

Our kit required parts that were built to work together with a consistent look. One of our design goals is that you never have to research and handpick component packages. Whatever you need is already here.

And, we optimized for our projects. We wanted a kit that's optimized for the kind of projects we do – as good at slick home pages as it is at crunchy admin interfaces.
Harmonium ships with a great set of SCSS styles. You can customize these styles to your heart's content, with variable overrides. Or, utilize your own completely custom style sheet.

What kind of projects are you building? Give it a try and give us feedback. We encourage pull requests and suggestions.

What's on the roadmap?

Harmonium isn't done. There are whole areas of our roadmap that we have not even begun to explore. Here are some:

- React Native support: support for using the same components (or direct native counterparts) in React Native.
- Sketch integration: design in Sketch, export to React or React Native.
- Other styling options: SCSS has competition these days. We're looking for ways to build styles into css-in-js, postcss, etc.
- Data visualization: better charts and graphs.
- Higher level templates: common full page layouts pre-built and tested for you.
https://revelry.co/resources/revelry/react-ui-component-harmonium/
Sharing Types

Scott Seely
Microsoft Corporation

July 19, 2002

Introduction

How? Two Web services use the exact same type. When developing the client, the developer dutifully uses "Add Web Reference" to create the proxies. Sprinkle in a little code that calls a Web method and returns a custom type. Later, the same type is sent to another Web service. If the defaults are used, the data type, known to be the same for both Web services, cannot be used with both Web services. The code won't even compile to allow you to send the data. One thing causes this problem: the code is mapping from a CLR type to an XSD type, then back to a CLR type. A bunch of assumptions wind up hurting you, the developer. This column will take a look at what those assumptions are and show how to work around them.

Setting the Stage

We need some simple example that demonstrates the problem. To that end, we will have two Web services that expose a Name structure. Name will contain the following information:

- Unique identifier
- First name
- Middle name
- Last name

All items are strings. This is a simple representation of the data in Microsoft® Visual Basic® .NET:

Two Web methods that are part of two larger portTypes are GetName and AddNameToList. GetName returns a Name based on some ID. AddNameToList will add the name to some list of names—no need to get into why, let's just say that it does. To finish setting the stage, let's show what the Web service code looks like for both services:

Service1:

<WebService(Namespace:= _
    "" & _
    "AtYourService/2002/07/Service1.asmx")> _
Public Class Service1
    Inherits System.Web.Services.WebService

    <WebMethod()> _
    Public Function GetName(ByVal id As String) As Name
        Dim retval As New Name()
        retval.ID = id
        retval.First = "Scott"
        retval.Middle = "Christopher"
        retval.Last = "Seely"
        Return retval
    End Function
End Class

Service2:

As good Web service developers do, we declared a special namespace for the Web services. The URI is unique and does not reference tempuri.org.
(Whenever a Web service does reference tempuri.org, the default page for the .ASMX will warn all who view the page that the developer is using the default namespace for Microsoft® ASP.NET Web services.)

The stage is set, and the curtains are ready to go up. Have we gone far enough to be able to share the Name type between Service1 and Service2? Sadly, no. The reason why: XML serialization issues.

Serialization of Name

How will the Name type be serialized when it is translated from a CLR type to XML? To answer that question, let's take a look at the XSD representation of Name within the Service1 and Service2 WSDL files. According to Service1.asmx?WSDL, the schema for Name looks like this:

    <s:complexType name="Name">
      <s:sequence>
        <s:element minOccurs="0" maxOccurs="1" name="ID" type="s:string" />
        <s:element minOccurs="0" maxOccurs="1" name="First" type="s:string" />
        <s:element minOccurs="0" maxOccurs="1" name="Middle" type="s:string" />
        <s:element minOccurs="0" maxOccurs="1" name="Last" type="s:string" />
      </s:sequence>
    </s:complexType>

Not too surprisingly, Service2.asmx?WSDL concurs with this representation. And yet, the two services disagree on something fundamental about the definition of the Name type: they are defined in different XML Schema (XSD) targetNamespaces. Each data type defined using XSD exists in a particular namespace, as defined by the schema's targetNamespace attribute. When instances of the type are represented as XML, they are qualified by the targetNamespace name. If a CLR data type like Name does not explicitly declare which XML namespace it belongs to, ASP.NET will define the data type in the same XML namespace as the Web service. This is the default behavior. For Service1, that namespace is the one declared in the WebService attribute, ending in Service1.asmx; for Service2, it is the one ending in Service2.asmx. As a result, the underlying Web service uses the one Name class but generates two XSD types, each in a different targetNamespace. Service1 and Service2 therefore do not share the same Name data type; despite the fact that they use the same CLR type, it's the XSD type that really matters.

Have no fear; this can be fixed. "How?" you ask. Tell Microsoft® .NET what namespace to use when mapping the Name type to XSD.
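The effect of the targetNamespace on the serialized form is easy to demonstrate outside of .NET as well. Here is a minimal Python sketch (using the standard library's xml.etree.ElementTree; the urn:example:* URIs are made up for illustration) that serializes the same record under two different namespaces and then under one shared namespace:

```python
import xml.etree.ElementTree as ET

def serialize_name(ns, first, last):
    """Serialize a Name record, qualifying every element by the given namespace."""
    root = ET.Element("{%s}Name" % ns)
    ET.SubElement(root, "{%s}First" % ns).text = first
    ET.SubElement(root, "{%s}Last" % ns).text = last
    return ET.tostring(root, encoding="unicode")

# Hypothetical namespaces standing in for the two services' defaults.
NS1 = "urn:example:Service1"
NS2 = "urn:example:Service2"
SHARED = "urn:example:Name"

xml1 = serialize_name(NS1, "Scott", "Seely")
xml2 = serialize_name(NS2, "Scott", "Seely")

# Same data, different qualified names: the documents do not match.
print(xml1 == xml2)  # False

# With one shared namespace, the serialized forms agree.
print(serialize_name(SHARED, "Scott", "Seely") ==
      serialize_name(SHARED, "Scott", "Seely"))  # True
```

The two documents carry identical data but different qualified names, which is exactly why a schema validator, or a proxy generator, treats them as two different types.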
To tell .NET what XML namespace to use, you need to give the environment a small hint. System.Xml.Serialization contains a number of attribute classes that let you change the way a given data type is serialized to an XML document. One of the most basic things you can do is to state the XML namespace to use when the data type is serialized. System.Xml.Serialization.XmlType does just that. To set the XML namespace for Name to a single, shared URI, make Name look like this:

    <XmlType(Namespace:= _
        "" & _
        "AtYourService/2002/07/Name")> _
    Public Class Name
        Public ID As String
        Public First As String
        Public Middle As String
        Public Last As String
    End Class

Now, when ASP.NET generates the WSDL for Service1.asmx and Service2.asmx, it will show the exact same XSD for Name in both instances. Since both Web services place the data type in the same namespace and have a shared representation, it only stands to reason that this type should be shareable between proxies, right? Wrong!

To try sharing these types, I created a small console application. All that this application will do is attempt to invoke Service1 and Service2 using the same version of Name. The first thing I did was use Add Web Reference from within Visual Studio .NET to create the proxies. The Solution Explorer reflects this addition as shown in Figure 1.

Figure 1. Web references to the two Web services: Service1.asmx and Service2.asmx

When Add Web Reference adds a new Web service, it places the proxy in a separate CLR namespace. The namespace is typically named [project namespace].[machine name]. For the local machine, this is [project namespace].localhost. When extra Web references are added for other Web services located on the same machine, the environment will automatically append a numeral onto the end of the initial namespace. So, the second Web reference will be in the [project namespace].localhost1 namespace, the third will be in [project namespace].localhost2, and so on. This keeps all the data types and other information segregated. It also means that the Name data type is forcibly different on the client side because of a CLR namespace issue.
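The resulting compile error is really a namespace collision that can occur in any language. As a rough analogy, this Python sketch (the make_proxy_namespace helper and the localhost/localhost1 names are hypothetical stand-ins for the generated proxy namespaces) shows that two structurally identical Name classes are still distinct types:

```python
from dataclasses import dataclass
from types import SimpleNamespace

def make_proxy_namespace():
    """Build a container declaring its own Name class, as Add Web Reference does."""
    @dataclass
    class Name:
        ID: str = ""
        First: str = ""
        Middle: str = ""
        Last: str = ""
    return SimpleNamespace(Name=Name)

# Mirrors the two generated proxy namespaces on the client.
localhost = make_proxy_namespace()
localhost1 = make_proxy_namespace()

the_name = localhost.Name(ID="1", First="Scott", Last="Seely")

# Identical structure, but the classes themselves are different objects...
print(localhost.Name is localhost1.Name)      # False
# ...so a type check against the other proxy's Name fails, just as the
# Visual Basic compiler rejects passing localhost.Name to AddNameToList.
print(isinstance(the_name, localhost1.Name))  # False
```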
As a result, the following code will not even compile:

    Dim svc1 As New localhost.Service1()
    Dim svc2 As New localhost1.Service2()
    Dim theName As localhost.Name

    theName = svc1.GetName("1")
    svc2.AddNameToList(theName)

Specifically, I get the following error for the line svc2.AddNameToList(theName):

    Value of type 'AYS07162002_console.localhost.Name' cannot be converted to 'AYS07162002_console.localhost1.Name'.

All is not lost. We just need a way to fix things.

The Fix is In

As with most problems, there are a number of ways to fix this and wind up with the desired results. It all depends on what you want to do. Let's take a look at two simple fixes. One can be done from within Visual Studio .NET, the other from the command line. Neither solution is very hard to use. The end result of both methods will be that the Web service proxies live in the same namespace.

Within Visual Studio .NET

When expanding the nodes under the two Web References, we have the layout shown in Figure 2.

Figure 2. The Web References in the console application

The really easy thing to do in this instance is to edit the proxy for localhost1. To see it in the Solution Explorer, make sure that the Show All Files button is selected, as shown in Figure 3.

Figure 3. The Show All Files button

Now, you should be able to see a file located at localhost1/Reference.map/Reference.vb. Open it up; we are about to make some minor changes. First, change the namespace declaration to read Namespace localhost. Second, delete the declaration of the Name class. That class is already declared in the proxy for Service1.asmx. Finally, edit the Main function to read as follows:

    Sub Main()
        Dim svc1 As New localhost.Service1()
        Dim svc2 As New localhost.Service2()
        Dim theName As localhost.Name

        theName = svc1.GetName("1")
        svc2.AddNameToList(theName)
    End Sub

That's it. The bad news is that if you ever update the Web references, you will need to implement these changes again.

Using WSDL.EXE

I only presented the previous solution because a lot of people like to use the development environment. I have found that relying on Add Web Reference will bite back more often than not. If you want to reduce the chances of getting scarred, use WSDL.EXE directly. It only gets called when you ask it to and never does anything behind your back.
First, delete the two Web references from the project. You've just decided to abandon that approach. Next, open up the Visual Studio .NET command prompt. (This is available from the Windows Start menu: Start→All Programs→Microsoft Visual Studio .NET→Visual Studio .NET Tools→Visual Studio .NET Command Prompt.) Navigate to a directory that you can find easily, like c:\temp. Once there, invoke WSDL.EXE to generate the proxies for you. On my machine, I ran the following two commands (each command appears as one command line):

Service1 proxy:

    wsdl.exe /language:VB /namespace:proxy /out:Service1.vb http://localhost/.../Service1.asmx?WSDL

Service2 proxy:

    wsdl.exe /language:VB /namespace:proxy /out:Service2.vb http://localhost/.../Service2.asmx?WSDL

What these lines do is invoke WSDL.EXE, tell it to generate a proxy in Visual Basic .NET, and place that proxy in the CLR namespace called proxy. The file is stored as either Service1.vb or Service2.vb. With the proxies generated, you can add them to the consuming application. To do this, go to the Solution Explorer and select the console application node. Right-click and select Add, Add Existing Item. Navigate to c:\temp, select both Service1.vb and Service2.vb, and click Open. To get rid of the duplicate instances of the Name class, open either Service1.vb or Service2.vb and delete the declaration of the class. Finally, edit Main so that it calls the appropriate classes (now in the proxy namespace, just to mix things up). Once again, the one Name data type can be used across the two Web services. Cool!

Summary

In order to share a data type across two or more Web services, the Web services themselves must agree on two items:

- The structure of the data type, when represented as XML.
- The fully qualified name of any XSD data types in use.

This is done to make sure that the XML representation of the data type is the same for any SOAP endpoint. Once this agreement has been made, a .NET client needs to make sure that the generated proxies that share the data type also share the same CLR namespace.
A declaration for the shared data type can only appear once, so the developer will need to delete all but one declaration of that type. Once you know what is going on, it is fairly trivial to use a common data type across Web services.

At Your Service
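As a closing illustration of the two requirements above, here is a minimal Python sketch of the fixed end state (Service1Proxy, Service2Proxy, and their methods are hypothetical stand-ins; no SOAP is involved). Both clients reference a single Name declaration, so a value returned by one can be passed to the other:

```python
from dataclasses import dataclass

# The single shared declaration, analogous to keeping exactly one Name
# class in the common proxy namespace.
@dataclass
class Name:
    ID: str = ""
    First: str = ""
    Middle: str = ""
    Last: str = ""

class Service1Proxy:
    def get_name(self, id):
        # Stand-in for the GetName Web method.
        return Name(ID=id, First="Scott", Middle="Christopher", Last="Seely")

class Service2Proxy:
    def __init__(self):
        self.names = []

    def add_name_to_list(self, name):
        # Stand-in for the AddNameToList Web method.
        self.names.append(name)

svc1, svc2 = Service1Proxy(), Service2Proxy()
the_name = svc1.get_name("1")
svc2.add_name_to_list(the_name)  # accepted: both proxies share one Name type
print(svc2.names[0].Last)        # Seely
```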
11 September 2013 22:43 [Source: ICIS news] HOUSTON (ICIS)--Initial US August styrene contracts have moved up by 1-2 cents/lb.

The increase is in line with some initial price increases last week at 1-2 cents/gal as well, following the 23 cent/gal jump in the September feedstock benzene contract. August styrene contract prices are expected to be finalised by Friday. Current July styrene contracts are at 77.75-81.75 cents/lb FOB (free on board). Although August US styrene contracts were still being settled, producers have already issued price increases for September at 3-4 cents/lb on the back of stronger benzene prices and tight supply.