vubon + 31 comments // For loop public static void main(String[] args) { Scanner scan = new Scanner(System.in); for(int i = 1; scan.hasNext() == true; i++){ System.out.println(i + " " + scan.nextLine()); } } // While loop public static void main(String[] args) { Scanner scan = new Scanner(System.in); int i = 0; while(scan.hasNext()){ i++; System.out.println(i + " " + scan.nextLine()); } } Both loops work for me

thaophuong10498 + 2 comments can you explain why?

VIPERCODER1 + 2 comments because in this case the time complexity is more for the for loop than the while loop

sorathia_urvik91 + 0 comments Can you explain how complexity differs between those loops?? please

gustavocpassos + 2 comments 1 - It's a bad pattern to compare a boolean with true/false like that: "scan.hasNext() == true" Explanation: if the value is a boolean, it doesn't need "== true" 2 - If you have to do a loop that uses the return of a boolean method (like "scan.hasNext()"), prefer to use while. Explanation: Your code is more beautiful and readable. Clean code is a good practice. Hope this helps you

amar301189 + 3 comments hi, how do you terminate taking the input from the console.. It kept asking, I printed several runs... how do you exactly make the scan.hasNext() condition false through the console? for eg: hello 1 hello hi 2 hi how 3 how how are you It is still taking input and I want to terminate here.

Ignis_Qui_Vir + 0 comments Ctrl-Z works for me. But there is a bug in Eclipse: you have to switch the view, then go back to the console, and then you can enter Ctrl-Z.
saurabhkumar8112 + 1 comment read my answer at the top

berkemuftuoglu1 + 0 comments you can use a statement like if(name.equals(" ")) then break, which gets you out of the loop when you input a space

gaurav21pendhari + 4 comments will you please explain System.out.println(i + " " + scan.nextLine()); I am new to java

polmki99 + 1 comment It's equivalent to this code: String line = scan.nextLine(); System.out.println(i + " " + line);

vardhangaharwar + 0 comments replace System.out.println(i + " " + line); with System.out.println(i + " \t" + line);

vaibhavpkc + 0 comments it prints the current i value. And the double quotes give a space between the value of i and the value that the scan object read from the user's input.

eshandangwal9991 + 0 comments In stdout, first the current value of i will be printed, then " " will generate a space, and scan.nextLine() will print the value that was read. For example, if i=1 and the input read is "Hello Everyone !" it will be printed like this: 1 Hello Everyone !

neelay619 + 2 comments how can that condition be met in the given problem set?? please, if anyone can explain.

tenzin392 + 1 comment I'm 20 days late, but the reason is: hasNext() checks if there is a string remaining in the Scanner, and returns false only when nothing is left, which is equivalent to EOF

ace_dexter + 1 comment But the input is being given line by line. How can you say it will search for EOF? It keeps on asking for input!!!

Freank_224 + 0 comments You can use a while loop to look for it, e.g. while (scanner.hasNextLine()) If there is no next line it will terminate the loop

leospostbox + 0 comments I've done some research on this. Basically, hasNext() will block until standard input (stdin) is closed. So when you run in an IDE, this loop never ends if you never kill stdin. It looks like on HackerRank, at the end of the file, stdin is closed, and thus it ends.
You can also have some kind of 'END_OF_FILE' token to indicate end of input.

jamaa_cool_afg + 0 comments I had the very same problem. I didn't have the idea to put scanner.hasNext()==true in the termination condition.

SwordFish91 + 6 comments that's wrong; this is right: Scanner sc = new Scanner(System.in); while (sc.hasNextLine()) { int x=1; System.out.println(x++ + " " +sc.nextLine()); }

claw10 + 0 comments @ohdoghdffdogdf, you need to place int x=1 outside the while loop. Otherwise, it prints 1 every time for each subsequent line.

Coder_Zuki + 0 comments initialize x outside the while loop, man. int x=1; while (sc.hasNextLine()) { System.out.println(x++ + " " +sc.nextLine()); }

rcgonzalezf + 1 comment Your for loop can be simplified: // For loop public static void main(String[] args) { Scanner scan = new Scanner(System.in); for(int i = 1; scan.hasNext(); i++){ System.out.println(i + " " + scan.nextLine()); } } You don't have to use the == operator to compare with true since the hasNext method returns a boolean.

synhack + 4 comments In fact, you can shorten it even further (at a potential loss of readability) by including the print statement within the for loop: for(int i = 1; scan.hasNext(); System.out.println(i + " " + scan.nextLine()), i++);

JChiquin + 7 comments Even shorter: for(int i=1; scan.hasNext() ; System.out.println(i++ +" "+scan.nextLine()));

pandeyvirat12345 + 1 comment what does scan.hasNext() do?

Surya07081995 + 0 comments //shortest for(int i=1;in.hasNext() ;out.println(i++ +" "+in.nextLine()) );

nimish_gupta + 2 comments what does Scan.hasNext() exactly mean?? And in the for loop why are we not comparing Scan.hasNext()??

JeanDuarte + 0 comments You should read Scan.hasNext() like this... does "Scan" have a next element? If it does, it returns TRUE, so while something is TRUE, keep it going. That's why you don't need any comparison.
Surya07081995 + 0 comments sorry for being late; scan.hasNext() means that after getting one input the scanner will wait for the next input.

plodder1317 + 1 comment vubon, your loop is an infinite loop. I don't know how it works on HackerRank, but it's not a correct way.

abir4u2011 + 0 comments It's not, actually. When there are no more lines of input the hasNext method returns false and the loop terminates.

ReusMandal + 1 comment what is scan.hasNext()? scan is your scanner name? but what is the meaning of this?

JeanDuarte + 0 comments That's it, scan is his scanner's name, the name he gave to the object created. Scanner x = new Scanner(System.in); ::: x is the scanner's name

vikaskumar_vsk + 1 comment Even if I don't put scan.hasNext() == true; it works fine. Can you explain why? Scanner scan = new Scanner(System.in); //for loop for(int i =1; scan.hasNext(); i++){ System.out.println(i + " " + scan.nextLine()); }

JChiquin + 1 comment Because the for loop checks if the condition is true, but scan.hasNext() is already boolean. - When scan.hasNext() returns TRUE: - TRUE == TRUE -> TRUE - When scan.hasNext() returns FALSE: - FALSE == TRUE -> FALSE So you see, it's not necessary to compare with another boolean. Another example: the method isEmpty() (returns true if this list contains no elements). We will mostly need to know when the collection contains elements, so we use it this way: if(!myCollection.isEmpty()){ //do something with the elements... } //Or we can compare with false if(myCollection.isEmpty()==false){ //do something with the elements... } We simply negate it. (Sorry my english)

johnmlhll + 2 comments Worked for me too.. Scanner input = new Scanner(System.in); int counter = 1; String line = ""; while(input.hasNext()) { line = String.format(counter+" "+input.nextLine()); counter++; System.out.println(line); }

esma3eel97 + 1 comment which part of this code reads from user input? is it the "scan.hasNext()" or the "scan.nextLine()"???
choudharymanisha + 4 comments Hey, could you please help me with this problem... I am using a while loop just like you, but in the output the Hello part is not being printed.. this is my code--> Scanner sc=new Scanner(System.in); String s=sc.next(); int i=1; while(sc.hasNextLine()) { String s1=sc.nextLine(); System.out.println(i+" "+s1); i++; } Expected Output--> 1 Hello world 2 I am a file 3 Read me until end-of-file. My output--> 1 world 2 I am a file 3 Read me until end-of-file.

navneetlahiri + 1 comment line 2: String s=sc.next(); //this scans for "Hello"; that's why you are not getting it in the output. Try removing it.

richardm96123 + 0 comments while(scan.hasNext()) should be the line you use. Also, don't use the strings. The only time String should be in the code is the very beginning. Scanner scan = new Scanner(System.in); int lineNum = 1; while(scan.hasNext()) { System.out.println(lineNum + " " + scan.nextLine()); lineNum++; } This was my code that I used, and it works flawlessly.

meethparikh01 + 0 comments how does it scan the input here? It first checks whether it has input or not, and then it prints the output. Can you please explain how it takes input?

lambert_jeanphi1 + 0 comments in the for loop, there is no need for scan.hasNext() == true, just scan.hasNext(), like in your while loop

dimio + 0 comments Why "hasNext()", not "hasNextLine()"? Is my solution using "hasNextLine()" working correctly or not? public static void main(String[] args) { Scanner in = new Scanner(System.in); long i = 0; while ( in.hasNextLine() ){ System.out.println(++i + " " + in.nextLine()); } in.close(); }

alishaksp33 + 0 comments why do we learn this? can you give me an example of where this will be useful?
ratan_singh98 + 1 comment here is my solution import java.io.*; import java.util.*; public class Solution { public static void main(String[] args) { Scanner in=new Scanner(System.in); String a; int i=1; while(in.hasNext()) { a=in.nextLine(); System.out.println(i+" "+a); i++; } } }

nooooooobie + 1 comment Initially 'in' will be empty, right? Then, how will 'in.hasNext()' return true?

ratan_singh98 + 0 comments exactly, but it checks whether the next line is empty or not, not the current line.

shravankumar_ta1 + 3 comments import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; public class Solution { public static void main(String[] args) { int i=1; Scanner sc = new Scanner(System.in); while(sc.hasNext()) { System.out.println(i++ +" "+sc.nextLine()); } } }

aaryanmukherjee1 + 1 comment Thanks bro

saranyaalluri03 + 0 comments import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; public class Solution { public static void main(String[] args) { Scanner sc=new Scanner( System.in); String a=sc.nextLine(); String b=sc.nextLine(); String c=sc.nextLine(); for(int i=1; sc.hasNext(); i++) { System.out.println(i+" " +sc.nextLine()); } } } Sir, they said we have to take input. But you have written the code only for checking and printing. But where is the input that we should enter?

nooooooobie + 0 comments Initially 'sc' will be empty, right? Then, how will 'sc.hasNext()' return true?

jvinamra776 + 0 comments More efficient than Scanner and a bit faster, although this is small code so there won't be much of a difference: public static void main(String[] args)throws IOException { int i=1; String str = ""; BufferedReader in = new BufferedReader(new InputStreamReader(System.in)); while((str=in.readLine())!=null){ System.out.println(i+" "+str); i++; } }

sohal_sheetal + 2 comments My test case is failing since the input section is showing the test with "{-truncated-}".
On downloading, {truncated} does not display. Code: public static void main(String[] args) { Scanner in = new Scanner(System.in); for(int i=0; in.hasNext(); i++){ System.out.println(i+ " " +in.nextLine()); } } Input (stdin): Hello world I am a file Read me until end-of-file. {-truncated-} Your Output (stdout): 0 Hello world 1 I am a file 2 Read me until end-of-file.

1ojusticeo1 + 0 comments Scanner Scan = new Scanner(System.in); int Count = 0; while(Scan.hasNext()) System.out.println(++Count + " " + Scan.nextLine() );

vivek5287445 + 1 comment Here is my solution: import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; public class Solution { public static void main(String[] args) { /* Enter your code here. Read input from STDIN. Print output to STDOUT. Your class should be named Solution. */ Scanner sc = new Scanner(System.in); for (int i=1; sc.hasNext(); i++){ System.out.println(i+" "+sc.nextLine()); } } }

mishra_sarthak71 + 0 comments for(int i=1; scan.hasNext()==true; i++){ System.out.println(i + " " + scan.nextLine()); if(scan.hasNext()==false) System.exit(0); } // Generally System.exit(0) is used to terminate the Java program

codebloode_123 + 1 comment public class Solution { public static void main(String[] args) { Scanner sc=new Scanner(System.in); int i=1; while(sc.hasNextLine()){ String s=sc.nextLine(); System.out.print(i+" "); System.out.print(s); System.out.println(); i++; } } }
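Summarizing the thread, here is a minimal sketch of the canonical EOF-terminated read loop. The Scanner wraps a String so the behaviour can be checked without piping stdin; the class and method names are illustrative, not part of the original posts:

```java
import java.util.Scanner;

public class EofDemo {
    // Number each line of the input, stopping when the Scanner is exhausted.
    // On HackerRank, stdin is closed at end of file, which makes
    // hasNextLine() return false and terminates the loop.
    static String numberLines(String input) {
        Scanner scan = new Scanner(input);
        StringBuilder out = new StringBuilder();
        int i = 1;
        while (scan.hasNextLine()) {
            out.append(i++).append(" ").append(scan.nextLine()).append("\n");
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(numberLines("Hello world\nI am a file\nRead me until end-of-file."));
    }
}
```

When run in an IDE against live console input, the same loop blocks forever unless you close stdin (Ctrl-Z on Windows, Ctrl-D on Unix), which is exactly the behaviour several posters observed.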
https://www.hackerrank.com/challenges/java-end-of-file/forum
Cx51 User's Guide #include <intrins.h> bit _testbit_ ( bit b); /* bit to test and clear */ The _testbit_ routine produces a JBC instruction in the generated program code to simultaneously test the bit b and clear it to 0. This routine may be used only on directly-addressable bit variables and is invalid on any type of expression. This routine is implemented as an intrinsic function. The _testbit_ routine returns the value of b. #include <intrins.h> #include <stdio.h> /* for printf */ void tst_testbit (void){ bit test_flag; if (_testbit_ (test_flag)) printf ("Bit was set\n"); else printf ("Bit was clear.\n"); }
http://www.keil.com/support/man/docs/c51/c51__testbit_.htm
onchange output show previous value

def onchange_resvalue(self, cr, uid, ids, iscopy, description, context=None):
    context = context or {}
    v = {}
    if iscopy:
        v['res_val'] = description
    return {'value': v}

<field name="iscopy" on_change="onchange_restime(iscopy, description)"/> <field name="description" on_change="onchange_restime(iscopy, description)"/> <field name="res_val"/>

In a new record (not yet saved in the db), in the above onchange method, how do I get the previous res_val value entered by the user? For example, the above code works this way: 1) The user enters a res_val output value. 2) Then the user checks the iscopy boolean field, which copies res_val into description. 3) When it is unchecked again, how do I show the previous value entered by the user?

I don't know who put a -1 on the comment of @voathnak lim below, but I think that this question about the old value in "onchange" procedures is not obvious and is useful; I didn't find any answer on the web, so it would have been more useful to explain why this question is a bad question, and more useful to give a link to the possibly obvious answer... (a militant for a peaceful and useful web...)
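Independent of Odoo's API, the question boils down to the classic "remember the value before it was overwritten" pattern: stash the old value somewhere before the copy, and restore it on uncheck. A minimal plain-Python sketch (the helper class and attribute names are hypothetical, not Odoo code):

```python
class ResValForm:
    """Toy form that copies res_val into description when iscopy is
    checked, and restores the user's previous description when it is
    unchecked -- the pattern the question is asking about."""

    def __init__(self):
        self.res_val = ""
        self.description = ""
        self._prev_description = ""  # stash for the pre-copy value

    def on_change_iscopy(self, iscopy):
        if iscopy:
            # remember what the user had typed before overwriting it
            self._prev_description = self.description
            self.description = self.res_val
        else:
            # unchecked: restore the previously entered value
            self.description = self._prev_description


form = ResValForm()
form.res_val = "output value"
form.description = "typed by user"
form.on_change_iscopy(True)   # description becomes "output value"
form.on_change_iscopy(False)  # description restored to "typed by user"
print(form.description)       # -> typed by user
```

In Odoo the stash would have to live somewhere that survives between onchange calls, for example an extra (possibly invisible) field or the context, since the server-side onchange method itself is stateless.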
https://www.odoo.com/forum/help-1/question/onchange-output-show-previous-value-34010
09-29-2012 12:51 PM I keep receiving this message as I submit my app for App World. No matter what I do, I can't even get the icon and app name correct in the simulator, but the app works fine. Any help from anyone? I'm getting extremely frustrated - I am testing your app on the BlackBerry PlayBook. Please fix the icon name so that it matches the app name. - There is an icon issue after installing your application on the BlackBerry PlayBook; the icon appears as a question mark. Please make sure that the icon size is 86x86 in the blackberry-tablet.xml file and included in your bar file. I'm using Flash Builder on a PC so I can use the simulator. I'd be extremely grateful for any help anyone could offer Solved! Go to Solution. 09-29-2012 12:56 PM - edited 09-29-2012 12:57 PM Just to verify, do you have an app icon in PNG format at 86x86 pixels in dimension named "blackberry-tablet-icon.png", and in your XML configuration file (was: blackberry-tablet.xml, now called bar-descriptor.xml) a node like this: <icon> <image>blackberry-tablet-icon.png</image> </icon> I'm afraid I'm not exactly sure how Flash Builder lets you specify this icon, but I'm guessing there is a configuration section for your application where this info would need to be. 09-29-2012 01:11 PM yes, I currently have the icon sized at 86x86 and under where it says "The icon for the image which should be 86x86" I have <icon><image>src\blackberry-tablet-icon.PNG</image I have the icon under src in the file directory, so I specified that in the config file. Is it possible I haven't set up the configuration file properly? 09-29-2012 01:13 PM 09-29-2012 02:11 PM So if you install the app on your simulator do you get the correct icon? I have never seen a case where an app install on the simulator shows the correct icon but the app install on the device does not.
However, to rule out the possibility that the system may be caching an old, correct icon while your more recent .bar builds have somehow picked up a bad icon path, you could reboot your simulator image. Your icon path has a backslash in it - shouldn't this be a forward slash? You can unzip your .bar file and verify whether your icon .png file does indeed exist in the directory it is supposed to be in. 09-29-2012 04:39 PM 09-29-2012 05:22 PM Hey, I'm still working on this and seem to have made no progress; now I'm getting an error on my app descriptor file saying "namespace 0.0 in the application descriptor file should be equal or higher than the minimum version 3.1 required by Flex SDK" 09-29-2012 05:47 PM 09-29-2012 05:47 PM 09-29-2012 05:58 PM <?xml version="3.1" encoding="UTF-8" standalone="no"?> <qnx> <initialWindow> <systemChrome>none</systemChrome> <transparent>false</transparent> </initialWindow> <!-- Name of author which is used for signing. Must match the developer name of your development certificate --> <!-- <author>Sample Inc.</author> --> <icon> <image>blackberry-tablet-icon.PNG</image> </icon> <!-- The splashscreen that will appear when your application is launching. Should be 1024x600. --> <!-- <splashscreen>Clock Icon</splashscreen> --> <!-- Fourth digit segment of the package version. First three segments are taken from app description versionNumber tag. Must be an integer from 0 to 2^16-1 --> <!-- <buildId>1</buildId> --> </qnx> the file is named "countdown-app.xml" currently
http://supportforums.blackberry.com/t5/Adobe-AIR-Development/icon-issue-continually-leading-to-rejection/td-p/1925889
Opened 11 years ago Closed 11 years ago #2060 closed Bug (Fixed) UDFs Documentation issues

Description

While translating the AutoIt help file for the Russian community, we found a few issues:

1. In _GUICtrlListView_SetItemSelected: "-1 to set selected state of all items" - there should be a remark that this will not work with the $LVS_SINGLESEL style.

2. In _GUICtrlTreeView_SetText and _GUICtrlTreeView_SetState: "item ID/handle to set the icon" - how is that related to an icon?

3. In _GUICtrlListView_GetOriginY: "Retrieves the current horizontal view origin for the control" - should be vertical instead of horizontal.

4. In _GUICtrlTreeView_SetHeight: "New height of every item in pixels. Heights less than 1 will be set to 1. If not even and the control does not have the DllStructGetData($TVS_NONEVENHEIGHT style this value will be rounded down to the nearest even value, "") If -1, the control will revert to using its default item height." Something is wrong here (look at the bold part).

5. In _INetGetSource: "#include <INet.au3> _INetGetSource ( $s_URL )" - there is no second optional parameter in the syntax.

6. In _GUICtrlEdit_InsertText: "#Include <GuiEdit.au3> _GUICtrlEdit_InsertText($hWnd, $sText, $iIndex = -1)" - the third parameter is not marked as optional (without the [, ]).

7. In _GUICtrlComboBox_Destroy: "Remarks: Restricted to only be used on Listbox created with _GUICtrlComboBox_Create" - should be Combobox, not Listbox. This remark is also present in a few other _GUICtrlComboBox_* functions.

8. In _GUICtrlRichEdit_SetSel: "Remarks: The first character of the text in a control is at character position 1" - why 1? AFAIK, it's 0.

9. In _GUICtrlRichEdit_GetPasswordChar: "Special case: 0 - there is no password character, so the control displays the characters typed by the user" - it returns an empty string, not 0.

Attachments (0) Change History (2)

comment:1 Changed 11 years ago by MrCreatoR <mscreator@…>

comment:2 Changed 11 years ago by guinness - Milestone set to 3.3.7.
Oops, here is a correction for the first issue:
https://www.autoitscript.com/trac/autoit/ticket/2060
Hi Chris,

Thanks a bunch for the new angle. Questions & comments:

* I like the simplicity of using a single TVar whose state reflects the not-computed/computed state of the IVal.

* I also like the public interface of taking an STM argument (newTIVal(IO)) over returning a sink (newEmptyTIVal(IO)), which came from some non-STM thinking. In fact, maybe 'cached' is a better public interface yet. I'm going to try it out, renaming "cached" to "ival". (Oh yeah, I'm shortening "TIVal" to "IVal".)

* Why tryPutTMVar in place of putTMVar? Perhaps to encourage checking that the var hasn't been written?

* A perhaps prettier version of force:

    force (TIVal tv) = readTVar tv >>= either compute return
      where compute wait = do a <- wait
                              writeTVar tv (Right a)
                              return a

* The Applicative STM instance can be simplified:

    instance Applicative STM where { pure = return; (<*>) = ap }

Cheers, - Conal

On Mon, Apr 28, 2008 at 7:40 AM, ChrisK <haskell at list.mightyreason.com> wrote:

> The garbage collector never gets to collect either the action used to populate the cached value, or the private TMVar used to hold the cached value.
>
> A better type for TIVal is given below. It is a newtype of a TVar. The contents are either a delayed computation or the previously forced value.
>
> The newTIVal(IO) functions immediately specify the delayed action.
>
> The newEmptyTIVal(IO) functions create a private TMVar that allows the delayed action to be specified once later. Note the use of tryPutTMVar to return a Bool instead of failing, in the event that the user tries to store more than one action.
>
> When force is called, the previous action (and any private TMVar) are forgotten. The garbage collector might then be free to collect them.
> --
> Chris
>
> -- By Chris Kuklewicz (April 2008), public domain
> module TIVal(TIVal,newTIVal,newTIValIO,force,cached) where
>
> import Control.Applicative(Applicative(..))
> import Control.Concurrent.STM(STM,TVar,newTVar,newTVarIO,readTVar,writeTVar
>                              ,TMVar,newEmptyTMVar,newEmptyTMVarIO,tryPutTMVar,readTMVar)
> import Control.Monad(Monad(..),join,liftM2)
> import System.IO.Unsafe(unsafePerformIO)
>
> newtype TIVal a = TIVal (TVar (Either (STM a) a))
>
> -- the non-empty versions take a computation to delay
>
> newTIVal :: STM a -> STM (TIVal a)
> newTIVal = fmap TIVal . newTVar . Left
>
> newTIValIO :: STM a -> IO (TIVal a)
> newTIValIO = fmap TIVal . newTVarIO . Left
>
> -- The empty versions stage things with a TMVar, note the use of join
> -- Plain values 'a' can be stored with (return a)
>
> newEmptyTIVal :: STM ( TIVal a, STM a -> STM Bool)
> newEmptyTIVal = do
>   private <- newEmptyTMVar
>   tv <- newTVar (Left (join $ readTMVar private))
>   return (TIVal tv, tryPutTMVar private)
>
> newEmptyTIValIO :: IO ( TIVal a, STM a -> STM Bool )
> newEmptyTIValIO = do
>   private <- newEmptyTMVarIO
>   tv <- newTVarIO (Left (join $ readTMVar private))
>   return (TIVal tv, tryPutTMVar private)
>
> -- force will clearly let go of the computation (and any private TMVar)
>
> force :: TIVal a -> STM a
> force (TIVal tv) = do
>   v <- readTVar tv
>   case v of
>     Right a -> return a
>     Left wait -> do a <- wait
>                     writeTVar tv (Right a)
>                     return a
>
> -- Conal's "cached" function. This is actually safe.
>
> cached :: STM a -> TIVal a
> cached = unsafePerformIO . newTIValIO
>
> -- The instances
>
> instance Applicative STM where
>   pure x = return x
>   ivf <*> ivx = liftM2 ($) ivf ivx
http://www.haskell.org/pipermail/haskell-cafe/2008-April/042258.html
Code Java: can you tell me what the output is and which interface will be called when you call display()? No. Can you tell us? What happened when you wrote a test program to test this?: Code java: public class DiamondProblemTest { public interface Cowboy{ public void draw(); } public interface Artist{ public void draw(); } public static class Person implements Cowboy, Artist{ public void draw(){ //should I pull out a gun or a paintbrush? } } public static void main(String... args){ new Person().draw(); } } The answer's obviously that your person needs to draw pictures with his bullets :P
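The thread's point can be checked directly: when two interfaces declare the same abstract method, one implementation satisfies both, so there is no real ambiguity here. A minimal sketch following the post's class names (the return value is added so the behaviour is observable):

```java
public class DiamondDemo {
    interface Cowboy { String draw(); }
    interface Artist { String draw(); }

    // One draw() implementation satisfies both interfaces --
    // the signatures are identical, so Java accepts this.
    static class Person implements Cowboy, Artist {
        public String draw() { return "drawing"; }
    }

    public static void main(String[] args) {
        Person p = new Person();
        Cowboy c = p;   // the same object viewed through either interface
        Artist a = p;   // dispatches to the single implementation
        System.out.println(c.draw() + " " + a.draw());
    }
}
```

A genuine conflict only arises in Java 8+ when both interfaces provide default method bodies; then the class must override the method and disambiguate (e.g. via Cowboy.super.draw()).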
http://www.javaprogrammingforums.com/%20member-introductions/8625-interface-printingthethread.html
STL (Standard Template Library) is a good skill for anyone programming C++ in the modern day. I must say that it takes some getting used to, i.e. there is a fairly steep learning curve, and some of the names that are used are not very intuitive (perhaps because all of the good names had been used up). The upside is that once learned they will save you headaches down the road. Compared to MFC containers, they are more flexible and powerful. Listed advantages: This guide has been written so that the reader can get a running start at this challenging part of computer science, without having to wade through the endless mass of jargon and stifling exactitude that STL'ers have created for their own amusement. The code here is mainly instructive on the practical ways to use STL. I always wanted to write one, and here is my golden 24 karat opportunity: a hello world program. This transfers a char string into a vector of characters and then displays the string one character at a time. A vector is your basic garden variety array-like template. Probably about half of all STL containers are vectors, so if you grasp this program, you are half way to a complete understanding of STL. // Program: Vector Demo 1 // Purpose: To demonstrate STL vectors // #include "stdafx.h" // include if you use precompiled headers #include <vector> // STL vector header.
There is no ".h" #include <iostream> // for cout using namespace std; // Ensure that the namespace is set to std char* szHW = "Hello World"; // This is - as we all know - a character array, // terminated with a null character int main(int argc, char* argv[]) { vector <char> vec; // A vector (STL array) of characters // Define an iterator for a vector of characters - this is always // scoped to within the template vector <char>::iterator vi; // Initialize the character vector, loop through the string, // placing the data in the vector character by character until // the terminating NULL is reached char* cptr = szHW; // Start a pointer on the Hello World string while (*cptr != '\0') { vec.push_back(*cptr); cptr++; } // push_back places the data on the back of the vector // (otherwise known as the end of the vector) // Print each character now stored in the STL array to the console for (vi=vec.begin(); vi!=vec.end(); vi++) // This is your standard STL for loop - usually "!=" // is used instead of "<" // because "<" is not defined for some containers. begin() // and end() retrieve iterators // (pointers) to the beginning of the vector and to the end of vector { cout << *vi; } // Use the indirection operator // (*) to extract the data from the iterator cout << endl; // No more "\n" return 0; } push_back is the standard function for putting data into a vector or deque. insert is a similar function and does much the same, and works with all containers, but is more complicated. The end() is actually the end plus one, to allow the loop to operate properly - it actually points to the point just beyond the limits of the data. Just like in a regular loop in which you say for (i=0; i<6; i++) {ar[i] = i;} - ar[6] does not exist, but it is never reached in the loop so it does not matter. push_back insert end() for (i=0; i<6; i++) {ar[i] = i;} One of the first annoyances of STL makes itself known. To initialize it with data is more difficult than is the case in C/C++ arrays.
You basically must do it element by element, or first initialize an array and then transfer it. People are working on this, I understand. // Program: Initialization Demo // Purpose: To demonstrate initialization of STL vectors #include <cstring> // same as <string.h> #include <vector> using namespace std; int ar[10] = { 12, 45, 234, 64, 12, 35, 63, 23, 12, 55 }; char* str = "Hello World"; int main(int argc, char* argv[]) { vector <int> vec1(ar, ar+10); vector <char> vec2(str, str+strlen(str)); return 0; } In programming, there are many ways of doing the same task. Another way to fill a vector is to use the more familiar square brackets, like so: // Program: Vector Demo 2 // Purpose: To demonstrate STL vectors with // counters and square brackets #include <cstring> #include <vector> #include <iostream> using namespace std; char* szHW = "Hello World"; int main(int argc, char* argv[]) { vector <char> vec(strlen(szHW)); // The argument initializes the memory footprint int i, k = 0; char* cptr = szHW; while (*cptr != '\0') { vec[k] = *cptr; cptr++; k++; } for (i=0; i<vec.size(); i++) { cout << vec[i]; } cout << endl; return 0; } This example is cleaner, but allows you less control of the iterator, has an extra integer counter, and you must explicitly set the memory footprint. Hand in hand with STL is the concept of Namespaces. STL is defined within the std namespace. There are 3 ways to specify it: std using namespace std; This is the simplest and best for simple projects, but limits you to the std namespace, and anything you add is improperly put in the std namespace (I think you go to heck for doing this). using std::cout; using std::endl; using std::flush; using std::set; using std::inserter; This is slightly more tedious, although a good mnemonic for the functions that will be used, and you can interlace other namespaces easily. typedef std::vector<std::string> VEC_STR; This is tedious but the best way if you are mixing and matching lots of namespaces.
Some STL zealots will always use this and call anyone evil who does not. Some people will create macros to simplify matters. In addition, you can put using namespace std within any scope, for example, at the top of a function or within a control loop. using namespace std To avoid an annoying error code in debug mode, use the following compiler pragma: #pragma warning(disable: 4786) Another gotcha is: you must make sure that spaces are placed between your angle brackets and the name. This is because >> is the bit shift operator, so: vector <list<int>> veclis; will give an error. Instead, write it: vector <list <int> > veclis; to avoid compilation errors. This is the explanation lifted from the MS help file of the set: "The template class describes an object that controls a varying-length sequence of elements of type const Key. Each element serves as both a sort key and a value. The sequence is represented in a way that permits lookup, insertion, and removal of an arbitrary element with a number of operations proportional to the logarithm of the number of elements in the sequence (logarithmic time). Moreover, inserting an element invalidates no iterators, and removing an element invalidates only those iterators that point at the removed element." An alternate, more practical, definition is: A set is a container that contains all unique values. This is useful for cases in which you are required to collect the occurrence of a value. It is sorted in an order that is specified at the instantiation of the set. If you need to store data with a key/value pair, then a map is a better choice. A set is organized as a balanced binary tree, is faster than a vector on insertion and removal, but slightly slower on search and addition to the end.
An example program would be:

// Program: Set Demo
// Purpose:  To demonstrate STL sets

#include <string>
#include <set>
#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
    set <string> strset;
    set <string>::iterator si;

    strset.insert("cantaloupes");
    strset.insert("apple");
    strset.insert("orange");
    strset.insert("banana");
    strset.insert("grapes");
    strset.insert("grapes"); // This duplicate is ignored - a set keeps only unique values

    for (si=strset.begin(); si!=strset.end(); si++)
    {
        cout << *si << " ";
    }

    cout << endl;
    return 0;
}

// Output: apple banana cantaloupes grapes orange

If you want to become an STL fanatic, you can also replace the output loop in the program with the following line:

copy(strset.begin(), strset.end(), ostream_iterator<string>(cout, " "));

While instructive, I find this personally less clear and prone to error. If you see it, now you know what it does.

Containers pre-date templates and are computer science concepts that have been incorporated into STL. The following are the seven containers implemented in STL: vector, deque, list, set, multiset, map, and multimap. Note: if you are reading the MFC help, you will also come across the efficiency statement for each container, e.g. (n log n) insertion time. Unless you are dealing with a very large number of values, you can ignore this. If you start to get a noticeable lag or are dealing with time-critical work, then you should learn more about the efficiency of the various containers.

The map is a template that uses a key to obtain a value. Another issue is that you will want to use your own classes instead of plain data types, like the int used up to now. To create a class that is "template-ready", you must ensure that the class contains certain member functions and operators. The basics are: a default constructor, a copy constructor, and an overloaded = operator. You would overload more operators as required by a specific template; for example, if you plan to have a class serve as a key in a map, you would have to overload the relational operators. But that is another story.
// Program: Map Own Class
// Purpose:  To demonstrate a map of classes

#include <string>
#include <iostream>
#include <vector>
#include <map>
using namespace std;

class CStudent
{
public :
    int nStudentID;
    int nAge;

public :
    // Default Constructor - Empty
    CStudent() { }

    // Full constructor
    CStudent(int nSID, int nA)
    { nStudentID=nSID; nAge=nA; }

    // Copy constructor
    CStudent(const CStudent& ob)
    { nStudentID=ob.nStudentID; nAge=ob.nAge; }

    // Overload =
    void operator = (const CStudent& ob)
    { nStudentID=ob.nStudentID; nAge=ob.nAge; }
};

int main(int argc, char* argv[])
{
    map <string, CStudent> mapStudent;

    mapStudent["Joe Lennon"] = CStudent(103547, 22);
    mapStudent["Phil McCartney"] = CStudent(100723, 22);
    mapStudent["Raoul Starr"] = CStudent(107350, 24);
    mapStudent["Gordon Hamilton"] = CStudent(102330, 22);

    // Access via the name
    cout << "The Student number for Joe Lennon is "
         << (mapStudent["Joe Lennon"].nStudentID) << endl;

    return 0;
}

If you like to use typedef, this is an example:

typedef set <int> SET_INT;
typedef SET_INT::iterator SET_INT_ITER;

One convention is to make them upper case with underscores.

ANSI/ISO strings are commonly used within STL containers. string is your standard string class, widely praised except for its deficiency of having no format statement. You must instead use << and the iostream manipulators (dec, width, etc.) to piece together your string. Use c_str() to retrieve a character pointer when necessary.

I said that iterators are pointers, but there is more. They look like pointers and act like pointers, but they are actually objects in which the indirection operator (unary *) and -> have been overloaded to return a value from the container. It is a bad idea to store them for any length of time, as they usually become invalid after a value has been added to or removed from a container. They are something like handles in this regard.
The plain iterator can be altered so that the container is traversed in different ways. Iterators support the operators you would expect of a pointer: *, ->, ++, --, +=, -=, <, <=, >, >=, ==, and !=. There is also a reverse_iterator, which traverses the container backwards: instead of begin() and end(), you use rbegin() and rend(). There are const versions of both for read-only traversal.

Templates have other parameters besides the type of value. You can also pass callback functions (known as predicates - a predicate is a function of one argument that returns a bool value). For example, say you want a set of ints that is automatically sorted in descending order. You would simply create the set in this way:

set <int, greater<int> > set1;

greater <int> is another template for a function (a generic function) which is used to order values as they are placed into the container. If you wanted the set sorted in ascending order (the default), you would write:

set <int, less<int> > set1;

There are many other cases where you must pass a predicate as a parameter to an STL class, notably in the algorithms described below.

The template names get expanded for the compiler, so when the compiler chokes on something, it spits out extremely long error messages that are difficult to read. I have found no good way around this. The best is to develop the ability to find and focus on the end of the error message, where the explanation is located. Another related annoyance: if you double-click on the template error, it takes you to a point within the template code, which is also difficult to read. Sometimes, it is best just to carefully re-examine your code and ignore the error messages completely.

Algorithms are functions that apply to templates. This is where the real power of STL starts to show up. You can learn a few function names that usually apply to most of the template containers. You can sort, search, manipulate, and swap with the greatest of ease. They always take a range within which the algorithm performs. E.g.:

sort(vec.begin()+1, vec.end()-1);

sorts everything but the first and last values.
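The two claims above - that the range version of sort leaves the bookend elements alone, and that greater<int> reverses a set's ordering - can be checked with a short sketch. The function names here are illustrative, not from the article:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <set>
#include <vector>

// Sort everything but the first and last elements, as in the
// sort(vec.begin()+1, vec.end()-1) example above.
std::vector<int> sort_interior(std::vector<int> vec)
{
    if (vec.size() > 2)
        std::sort(vec.begin() + 1, vec.end() - 1);
    return vec;
}

// Collect a descending set's contents in iteration order.
std::vector<int> descending_contents(const std::vector<int>& values)
{
    std::set<int, std::greater<int> > s(values.begin(), values.end());
    return std::vector<int>(s.begin(), s.end());
}
```

Running sort_interior on {9, 5, 3, 1, 0} yields {9, 1, 3, 5, 0}: only the interior was sorted, and the 9 and 0 stayed put.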
The container itself is not passed to the algorithm, just two iterators from the container that bookend a range. In this way, algorithms are not restricted to particular containers, but only by the iterators a given algorithm supports. In addition, many times you will also pass the name of a specially prepared function (those aforementioned predicates) as an argument. You can even pass plain old values.

An example of algorithms in play:

// Program: Test Score
// Purpose:  To demonstrate the use of algorithms
//           with respect to a vector of test scores

#include <algorithm> // If you want to use an
                     // algorithm, this is the header to use.
#include <numeric>   // (For accumulate)
#include <vector>
#include <iostream>
using namespace std;

int testscore[] = {67, 56, 24, 78, 99, 87, 56};

// predicate that evaluates a passed test
bool passed_test(int n)
{
    return (n >= 60);
}

// predicate that evaluates a failed test
bool failed_test(int n)
{
    return (n < 60);
}

int main(int argc, char* argv[])
{
    int total;

    // Initialize a vector with the data in the testscore array
    vector <int> vecTestScore(testscore,
        testscore + sizeof(testscore) / sizeof(int));
    vector <int>::iterator vi;

    // Sort and display the vector
    sort(vecTestScore.begin(), vecTestScore.end());

    cout << "Sorted Test Scores:" << endl;

    for (vi=vecTestScore.begin(); vi != vecTestScore.end(); vi++)
    {
        cout << *vi << ", ";
    }

    cout << endl;

    // Display statistics
    // min_element returns an _iterator_ to the
    // element that is the minimum value in the range.
    // Therefore the * operator must be used to extract the value.
    vi = min_element(vecTestScore.begin(), vecTestScore.end());
    cout << "The lowest score was " << *vi << "." << endl;

    // Same with max_element
    vi = max_element(vecTestScore.begin(), vecTestScore.end());
    cout << "The highest score was " << *vi << "." << endl;

    // Use a predicate function to determine the number who passed
    cout << count_if(vecTestScore.begin(), vecTestScore.end(), passed_test)
         << " out of " << vecTestScore.size()
         << " students passed the test" << endl;

    // and who failed
    cout << count_if(vecTestScore.begin(), vecTestScore.end(), failed_test)
         << " out of " << vecTestScore.size()
         << " students failed the test" << endl;

    // Sum the scores
    total = accumulate(vecTestScore.begin(), vecTestScore.end(), 0);

    // Then display the average
    cout << "Average score was "
         << (total / (int)(vecTestScore.size())) << endl;

    return 0;
}

Allocators are used in the initialization stages of a template. They are mysterious, behind-the-scenes creatures, really only of concern if you are doing high-level memory optimization, and are best considered to be black boxes. Usually, you never even specify them, as they are default parameters that are generally not tinkered with. It is best to know what they are, though, in case they show up on one of those employment tests.

Any way that you use a regular class, you can use an STL class. It can be embedded:

class CParam
{
    string name;
    string unit;
    vector <double> vecData;
};

or used as a base class:

class CParam : public vector <double>
{
    string name;
    string unit;
};

Derivation should be used with some caution. It is up to you as to the form that fits your programming style.

To create a more complex data structure, you can nest a template within a template. It is best to typedef the internal template beforehand, as you will certainly need to use the inner template again.
// Program: Vector of Vectors Demo
// Purpose:  To demonstrate nested STL containers

#include <iostream>
#include <vector>
using namespace std;

typedef vector <int> VEC_INT;

int inp[2][2] = {{1, 1}, {2, 0}}; // Regular 2x2 array to place into the template

int main(int argc, char* argv[])
{
    int i, j;

    vector <VEC_INT> vecvec;
    // if you want to do this all in one step it looks like this
    // vector <vector <int> > vecvec;

    // Fill it in with the array
    VEC_INT v0(inp[0], inp[0]+2); // passing two pointers for the
                                  // range of values to be copied to the vector
    VEC_INT v1(inp[1], inp[1]+2);

    vecvec.push_back(v0);
    vecvec.push_back(v1);

    for (i=0; i<2; i++)
    {
        for (j=0; j<2; j++)
        {
            cout << vecvec[i][j] << " ";
        }
        cout << endl;
    }

    return 0;
}

// Output:
// 1 1
// 2 0

Although cumbersome to initialize, once completed and filled in, you have a two-dimensional array that is indefinitely expandable (until memory space runs out). The same can be done for any combination of containers, as the situation requires.

STL is useful, but not without its annoyances. As the Chinese say: if you learn it, you will be like a tiger with the claws of a lion.

From the article's discussion board:

One reader pointed out that inserting a duplicate into a set does not overwrite the previous occurrence - the insert is simply ignored. To actually replace an element, test the bool returned by insert():

set <string> strset;
...
strset.insert("grapes");
strset.insert("grapes"); // This one is ignored, not overwritten

pair<set <string>::iterator, bool> result(strset.insert("grapes"));
if ( result.second == false )
{
    strset.erase("grapes");
    strset.insert("grapes");
}

Other snippets from the discussion:

#include <iostream>
using namespace std;

void main() // Note: the standard requires int main
{
    cout << "Hello, World!"
         << endl;
}

typedef std::vector<long> LongVector;

Another thread debated prefix versus postfix increment on iterators, comparing loops such as:

for (j=0; j<32000; j++)
    for (vi=vec.begin(); vi!=vec.end(); vi++)
        ;

for (j=0; j<32000; j++)
    for (vi=vec.begin(); vi!=vec.end(); ++vi)
        ;

The difference comes from how the two operators are defined - postfix must make a copy of the old value:

int A::operator++()    {return ++a;}                  // prefix
int A::operator++(int) {int b = a; ++a; return b;}    // postfix

for (int i = 0; i < SOME_BIG_NUMBER; i++)
for (int i = 0; i < SOME_BIG_NUMBER; ++i)

JWood wrote: Have you ever considered that the prefix / postfix issue may be a complete urban myth and it is just an excuse to try to get everyone to change an arbitrary and mostly esthetic programming decision to their way?

JWood wrote: You are wrong as well and I think you missed an important point in Summand's post.

JWood wrote: If iterator is defined in the template library as a pointer - and it is in VC 6.0 - then it is a simple data type, and so postfix and prefix are simply order-of-operation decisions which take the same amount of time - the instructions are just switched around.

// Prefix
// Given int i
increment i
push i to stack

// Postfix
// Given int i
copy i to r
increment i
push r to stack

JWood wrote: What you call "optimization" I would call competence. I don't work for my compiler, my compiler works for me.
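Whatever the performance verdict, the semantic difference the thread is arguing about - postfix returns the old value, prefix the new - can be checked directly. This toy counter mirrors the quoted A::operator++ definitions and is illustrative only:

```cpp
#include <cassert>

// Prefix increments and returns the new value; postfix saves a copy
// of the old value, increments, and returns the copy.
struct Counter
{
    int a;
    int operator++()    { return ++a; }               // prefix
    int operator++(int) { int b = a; ++a; return b; } // postfix
};

int prefix_result(int start)  { Counter c{start}; return ++c; }
int postfix_result(int start) { Counter c{start}; return c++; }
```

Starting from 5, the prefix form yields 6 while the postfix form yields 5 - the extra copy in the postfix version is exactly what the optimization argument is about.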
https://www.codeproject.com/articles/6513/practical-guide-to-stl?msg=801053
I installed the ncsdk 1.12 version on a fresh RPi model 3 with Raspbian Stretch. I could run /ncappzoo/apps/hello_ncs_py with no issue. When I am running the following script:

from mvnc import mvncapi as mvnc

# grab a list of all NCS devices plugged in to USB
print("[INFO] finding NCS devices...")
devices = mvnc.EnumerateDevices()

# if no devices found, exit the script
if len(devices) == 0:
    print("[INFO] No devices found. Please plug in a NCS")
    quit()

# use the first device since this is a simple test script
print("[INFO] found {} devices. device0 will be used. "
      "opening device0...".format(len(devices)))
device = mvnc.Device(devices[0])
device.OpenDevice()

# open the CNN graph file
print("[INFO] loading the graph file into RPi memory...")
path = '/home/pi/ssd_on_pi/graph'
with open(path, mode="rb") as f:
    graph_in_memory = f.read()

# load the graph into the NCS
print("[INFO] allocating the graph on the NCS...")
graph = device.AllocateGraph(graph_in_memory)

I get this error:

[INFO] finding NCS devices...
[INFO] found 1 devices. device0 will be used. opening device0...
[INFO] loading the graph file into RPi memory...
[INFO] allocating the graph on the NCS...
Traceback (most recent call last):
  File "allocate_graph.py", line 25, in <module>
    graph = device.AllocateGraph(graph_in_memory)
  File "/usr/local/lib/python3.5/dist-packages/mvnc/mvncapi.py", line 203, in AllocateGraph
    raise Exception(Status(status))
Exception: mvncStatus.ERROR

Initially I installed the API-only mode and received this error when I ran the above file. Then I uninstalled API-only and tried the full SDK - still the same error :( I think the problem is that somehow I am not able to allocate the graph onto the device.
I generated the graph using a separate Ubuntu VM, and I was able to successfully run the above code in the VM. Any clue what could be wrong here?

I just tried generating the graph from ncsdk/examples/caffe/GoogLeNet in the VM and copied it to the Pi. When running run.py on the Pi, the same error shows up :(

pi@raspberrypi:~/workspace/ncsdk-1.12.00.01/examples/caffe/GoogLeNet $ python3 run.py
Device 0 Address: 1.4.1 - VID/PID 03e7:2150
Starting wait for connect with 2000ms timeout
Found Address: 1.4.1 - VID/PID 03e7:2150
Found EP 0x81 : max packet size is 64 bytes
Found EP 0x01 : max packet size is 64 bytes
Found and opened device
Performing bulk write of 865724 bytes...
Successfully sent 865724 bytes of data in 2788.732941 ms (0.296055 MB/s)
Boot successful, device address 1.4.1
Found Address: 1.4.1 - VID/PID 03e7:f63b
done
Booted 1.4.1 -> VSC
Traceback (most recent call last):
  File "run.py", line 66, in <module>
    graph = device.AllocateGraph(blob)
  File "/usr/local/lib/python3.5/dist-packages/mvnc/mvncapi.py", line 203, in AllocateGraph
    raise Exception(Status(status))
Exception: mvncStatus.ERROR

HALP… @ama @PINTO

@ama Does "~/workspace/ncsdk-1.12.00.01/examples/caffe/GoogLeNet/cpp/run_cpp" work?

@ama Have you tried using a powered USB hub?

@Tome_at_Intel Yes, I am indeed using a USB hub to connect the Movidius stick to the Pi. The only thing connected to the hub is the Pi. It is mainly because if I plug the stick directly into the Pi, I will not have access to the other 3 USB ports :(, so essentially I am using the hub as a USB extension cable. @Tome_at_Intel @PINTO

Issue resolved. It was because I was using a USB hub as an extension cable to connect the stick to the Pi. Everything works as expected when I use an actual USB extension cable. I strongly suggest adding this as a reminder in the RPi installation guide.

@ama Glad you figured it out and narrowed it down to your USB hub.
The brand of USB hub I've been using is a 7-port powered Anker USB hub, and they seem to work well with the NCS so far. We do recommend using a powered USB hub whenever you can, although it's not required. I have this listed in my Troubleshooting guide at.
https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Having-issues-with-loading-a-graph-onto-the-NCS/td-p/659534
Total newbie - is QML compiled

I'm completely a newbie with Qt (I have programmed in Python in the past). I know Qt is normally C++ and compiled. I watched a video on the new Qt 5.1 that showed Qt Quick 2.0. The interesting thing is no C++ code was written - only Qt Quick code. The question - if I create a Qt Quick only app, is the app compiled? Does it run in some sort of JIT (like JavaScript)? I'd like to deploy something like an '.exe' - can that be done just using Qt Quick?

- skycrestway

There is a detailed blog post answering this. It is the first of a 4-part series.

The .qml files are not compiled to machine code, but the Qt library comes with a JIT-based JavaScript engine included. To make a stand-alone executable from QML, you can create a basic C++ application and simply point it at your QML files. The following snippet shows the minimum amount of C++ code needed to set up and run a QML application:

@
#include <QApplication>
#include <QQmlApplicationEngine>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QQmlApplicationEngine engine("main.qml");
    return app.exec();
}
@

The QML files themselves can also be embedded into the executable. When deploying, you would also usually ship the Qt .dlls together with your application.

Thanks folks - the link looks like a very interesting (and educational) read.

Johnf
https://forum.qt.io/topic/29313/total-newbie-is-qml-compiled
Details

- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.0.0 Release
- Fix Version/s: 2.0.1 Release
- Component/s: None
- Labels: None
- Environment: Ubuntu Linux, Java 6, Eclipse 3.5, XML-RPC module 0.7 from mvn repo
- Number of attachments :

Description

The XMLRPCServerProxy seems not to be working. Whenever one tries to call any remote method, it fails with a MissingPropertyException. However, if I try to call serverProxy.invokeMethod directly, it works. So, the following fails:

def server = new XMLRPCServerProxy('')
def uid = server.login('terp', 'admin', 'admin')

while the following works:

def server = new XMLRPCServerProxy('')
def uid = server.invokeMethod('login', ['terp', 'admin', 'admin'])

Activity

Hum... Found out that it only happens inside Eclipse, with the Eclipse Groovy Plugin libs as GROOVY_HOME (v2.0.0 here). If I use the downloaded libs, it works as expected. Pretty weird, but not critical. Sorry for the false alarm.

Since it is happening only within Eclipse and not otherwise, how about moving the issue to the greclipse project and having someone look into it?

Cloves, if "foo instanceof GroovyObject" fails even though foo is such an instance, then this is clearly a case of class duplication. A class is defined by name and class loader, so it is perfectly possible to have two different GroovyObject classes. The GroovyObject this test was made against is from the runtime, since the class the test is in is part of the runtime. But I suspect the foo is coming from outside and being loaded by a different Groovy runtime, which of course then has a different GroovyObject. Most probably one is included in Eclipse and the other one is provided by yourself.

I've suspected this was the case. It's a conflict between the Groovy Plugin libs and the Groovy libs provided by the Maven Plugin. Removing the Groovy Eclipse provided libs from the classpath (they also need to be removed from the "Run as Script" configuration) solves the issue.
The "Run as Groovy Script" action, by default, sets up a groovy.home system property that probably bootstraps Groovy from a different classloader than the one used to run the script. Maybe a warning about this should be added to the Groovy Eclipse Plugin documentation to help others troubleshoot this hard-to-debug behavior.

Classloading issues are always a PITA. In the next week or so, I will be releasing m2eclipse integration with Groovy-Eclipse that will solve the problem you are describing. Also, an entry in the FAQ would be useful. I'll keep this bug open until I get a chance to update the FAQ.

Done. FAQ is updated: Also, changed title to reflect the real problem.

I've found that in org.codehaus.groovy.runtime.callsite.CallSiteArray:145, even though XMLRPCServerProxy implements the GroovyObject interface, the "receiver instanceof GroovyObject" evaluation returns false! That's beyond my understanding!
http://jira.codehaus.org/browse/GRECLIPSE-664?focusedCommentId=213110&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
See the current Perl documentation for Benchmark. Here is our local, out-dated (pre-5.6) version:

Benchmark - benchmark running times of code

timethis - run a chunk of code several times
timethese - run several chunks of code several times
timeit - run a chunk of code and see how long it goes

timethis ($count, "code");

# Use Perl code in strings...
timethese($count, {
    'Name1' => '...code1...',
    'Name2' => '...code2...',
});
# ... or use subroutine references.

The Benchmark module encapsulates a number of routines to help you figure out how long it takes to execute some code.

Methods

Results can be reported in runs/second, which should be a more interesting number than the seconds actually spent. timethis(COUNT, CODE, TITLE, STYLE) returns a Benchmark object. For timethese(), the routines are called in string comparison order of KEY. The COUNT can be zero or negative; see timethis(). Results are formatted with timestr().

The following routines will be exported into your namespace if you specifically ask that they be imported.

The data is stored as a list of values from the time and times functions:

($real, $user, $system, $children_user, $children_system)

in seconds for the whole loop (not divided by the number of rounds). The timing is done using time(3) and times(3).

Benchmark inherits from no other class, except of course for Exporter.

Caveats: comparing eval'd strings with code references will give you inaccurate results - a code reference will show a slower execution time than the equivalent eval'd string. The real-time timing is done using time(2), and the granularity is therefore only one second.
http://www.perlmonks.org/index.pl?node=perlman%3Alib%3ABenchmark
Hi, I've got a problem: I need to read from a text file that I got from an Excel file. I want each block from the Excel file to end up as one entry in the array I save it to, so the file is separated by ';'. Example:

4 ; 0,0
42 ; 0,0
15 ; 3,2
73 ; 5,4
61 ; 5,6
7 ; 4,6

So I've been trying to get some information from these threads and learned this:

#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main ()
{
    string line;
    string array[50];
    int temp=0;
    ifstream myfile ("example.txt");
    if (myfile.is_open())
    {
        while (! myfile.eof())
        {
            getline (myfile,line, ';');
            array[temp]=line;
            temp++;
        }
        myfile.close();
    }
    else cout << "Unable to open file";

    for (int a=0;a<=temp;a++)
    {
        cout<<array[a]<<endl;
    }
    return 0;
}

This one works fine because it separates away the ';' and writes the tokens to my array, but since they are strings I can't work with them... I need them in int form so I can do my other calculations later on.

#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    int i = 0;
    int array[20];
    int max_read = 20;
    int amountRead = 0;

    std::ifstream in("example.txt",std::ios::in |std::ios::binary);
    if(!in)
    {
        std::cout<<"Could not open file"<<std::endl;
        return 1;
    }

    //this is where we are reading the information into our array
    while(in>>array[amountRead] && amountRead < max_read)
    {
        amountRead++;
    }

    for(i = 0; i < 20; i++)
    {
        cout<<array[i]<<endl;
    }
    return 0;
}

And this program works with ints, but I can't get it to separate on the ';', and with this program I have to know how big the array is supposed to be. That won't work so well for me, because this program is going to be used later with some calculations and the input can be a different size every time.

Thanks for the help :)
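One way to combine the two approaches described above - splitting on the separators while still getting ints, without knowing the count up front - is to read each line into a string stream and extract the numbers piecewise, growing a std::vector as you go. This is a sketch; the Record struct and parse function are illustrative names, not from the thread:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// One record per input line of the form "4 ; 0,0" -> {4, 0, 0}.
struct Record { int value; int x; int y; };

// Parse a stream of "v ; x,y" lines. A std::vector grows as needed,
// so the total number of records need not be known in advance.
std::vector<Record> parse(std::istream& in)
{
    std::vector<Record> out;
    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream ls(line);
        Record r;
        char sep;
        // operator>> skips whitespace, so "4 ; 0,0" reads cleanly;
        // checking the separator characters rejects malformed lines.
        if (ls >> r.value >> sep && sep == ';' &&
            ls >> r.x >> sep && sep == ',' && ls >> r.y)
        {
            out.push_back(r);
        }
    }
    return out;
}
```

Reading from a file is then just `std::ifstream in("example.txt"); auto records = parse(in);`, and each record's fields are already ints, ready for later calculations.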
https://www.daniweb.com/programming/software-development/threads/170038/how-to-read-text-file-into-array-and-skip-the-delimeters
#include <windows_message_handler.hpp>

List of all members.

Class that manages a message handling function attached to a win32 HWND.

Definition at line 30 of file windows_message_handler.hpp.

callback_t - function prototype for the message handling function. Definition at line 43 of file windows_message_handler.hpp.

Attach the message handler to the window with a specific string handle. A given string handle can be attached to a window at most once; attempting to re-install a handler with the same string handle on a given window is an error. Attempting to install a message handler to two windows at the same time is an error.

Note that the ProcName template argument is a const char*, which requires a very specific coding style to be used by clients. The string being passed must be statically defined, but cannot be a string literal, as the following example demonstrates:

extern const char my_proc_name_k[] = "my-unique-proc-name";

message_handler_t handler(&my_handler_func);
handler.install<my_proc_name_k>(my_hwnd);

Definition at line 66 of file windows_message_handler.hpp.
Definition at line 71 of file windows_message_handler.hpp.
Definition at line 74 of file windows_message_handler.hpp.
http://stlab.adobe.com/classadobe_1_1message__handler__t.html
Hi, I am using a grouped ListView and want to show information from the group header in the item. Here is my XAML code:

<ListView ItemsSource="{Binding ParentList}" IsGroupingEnabled="true">
    <ListView.GroupHeaderTemplate>
        <DataTemplate>
            <ViewCell>
                <StackLayout>
                    <Label Text="{Binding ParentTitle}"/>
                </StackLayout>
            </ViewCell>
        </DataTemplate>
    </ListView.GroupHeaderTemplate>
    <ListView.ItemTemplate>
        <DataTemplate>
            <ViewCell>
                <StackLayout>
                    <Label Text="{Binding ParentTitle}"/> <!-- Here I want to get the GroupHeader context -->
                    <Label Text="{Binding ChildTitle}"/>
                </StackLayout>
            </ViewCell>
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

Here is my Parent class:

public class Parent : ObservableCollection<Child>
{
    public ObservableCollection<Child> ChildList => this;
    public String ParentTitle { get; set; }
}

Here is my Child class:

public class Child
{
    public String ChildTitle { get; set; }
}

I already tried setting an x:Name on the GroupHeader, but it is not unique, so the item cannot find it. Can anyone please help?

Answers

@DaWa Can you check this link?

@AnubhavRanjan Thanks, I checked your link, but I cannot see how it will help with my problem.

Assuming you have multiple parent objects with multiple children for each parent: instead of trying to do it in your XAML, how about adding the ParentTitle as a property to your Child class, or a reference to the Parent object in the Child? The former approach will then allow you to just bind to the ParentTitle property in the ItemTemplate. I anticipate that the latter can/will introduce a circular reference, which can impact garbage collection if you don't clear down your parent/child collections.

@JamesLavery Thanks for your help, but I wanted to prevent a circular reference. I simplified my real problem to show what I need. In my project I want to disable my child when my parent is disabled, and I also need the Id of my parent in the ItemTemplate.

So you've added the Parent Id to the child item?

No, not yet.
I hope that I do not need to do that. Since I did not get another solution, I was forced to add the necessary fields to the Child object.
https://forums.xamarin.com/discussion/comment/355654
28 August 2009 17:05 [Source: ICIS news]

TORONTO (ICIS news)--German biofuels firm Verbio plans to soon start producing biomethane at two of its bioethanol plants to help reduce environmental impacts, it said on Friday.

The company would convert a by-product - called slop (Schlempe), also known as distillers dried grain with solubles (DDGS) - from plants in Saxony-Anhalt and

The project, believed to be the first of its kind in

Some competitors were converting the by-product into animal feed, but that was a very energy-intensive process that Verbio chose not to adopt, she added.

The initial biomethane production from the two plants, set to begin in late 2009/early 2010, would be equivalent to about 30 megawatts (MW), she said. Production would later be expanded to 125 MW.

Verbio plans to market the biomethane as biogas, Haaker said, adding that negotiations with partners were ongoing.

Verbio has production capacities for 300,000 tonnes/year of bioethanol and 450,000 tonnes/year of biodiesel.

Check out Doris de Guzman's Green Chemicals Blog for views on sustainability issues
http://www.icis.com/Articles/2009/08/28/9243769/germanys-biofuels-firm-verbio-to-start-producing-biomethane.html
Dynamic Types in C#. All of these basic concepts come together to form the concept of a C# dynamic type. The dynamic type in C# is a strange, new concept that has some interesting use cases. Those use cases usually apply to interacting with other languages or documents. Let’s say I build a simple class that describes a person object. I will also go ahead and create a main class, build a person object with the dynamic type, and print one line to screen. using System; namespace IntroDynamicTypes { class Person { public Person(string n) { this.Name = n; } public string Name { get; set; } } class DynamicTypesProgram { static void Main(string[] args) { dynamic DynamicPerson = new Person("Urda"); Console.WriteLine("Person Created, Name: " + DynamicPerson.Name); // Prints "Person Created, Name: Urda" } } } Now you may notice as you key this into Visual Studio 2010 you will not have your normal IntelliSense to guide you. You will be prompted with this notice: Since we have defined this person object as dynamic, we can use any method we want with it! The compiler will not check for anything or stop you from building an application with objects using undefined methods. This is because a dynamic class can call these methods at run time, with the expectation that the method definitions and code will exist when the program is ran. In fact we can even add some more code into our main like so… static void Main(String[] args) { dynamic DynamicPerson = new Person("Urda"); Console.WriteLine("Person Created, Name: " + DynamicPerson.Name); // Prints "Person Created, Name: Urda" // This will throw an error only at runtime, // *not* at compile time! DynamicPerson.SomeMagicFunction(); } At this point you’ll notice we have added a method called SomeMagicFunction that does not exist in the class, but Visual Studio 2010 still lets us compile the application. It is only at run time that this application will throw an error when it attempts to make a call to SomeMagicFunction. 
But if the function were made available through some form of interop, you would be able to execute it against the object. So dynamic types allow C# to play nicely with other languages and object models such as IronPython, HTML DOMs, or COM APIs. Think of the dynamic type as a way to bridge the gap between strongly typed components such as C# and weakly typed components such as IronPython, COM, or DOM objects.
https://urda.com/blog/2010/09/23/dynamic-types-in-csharp
An easy, no-dependencies package for writing IPE files from Python.

Project description

Mini-Ipe

This is a source-only, no-dependencies Python package to write Ipe files. The proper way to work with Ipe files from Python would probably be to use ipepython from ipe-tools, but this requires building a number of things from source, which may be difficult (or just time-consuming) depending on your computing environment. Mini-Ipe is a quick way to easily write Ipe files with minimum effort, from a plain Python environment.

Mini-Ipe is now on PyPI. Get it anywhere using (python3 -m) pip install miniipe.

What are Ipe files

Ipe is an "extensible drawing editor" that is excellent for making diagrams for scientific papers and presentations. This makes its file format ideal as output from computational experiments.

Important! An Ipe file needs a valid style file. See the remarks section below.

Getting started

First, get miniipe visible to your interpreter, for example using pip. (No need to clone the github repository.)

pip install miniipe

Then try the following small program and go from there.

from miniipe import Document, polygon

doc = Document()
ps = [(100,100), (200,200), (300,100)]
doc.path( polygon(ps) )
doc.write('simple.ipe')

Remarks

This is not a complete documentation, but looking at the examples will get you a long way. The best way to find out about all the methods and arguments is probably to import miniipe and use an IDE to look around (or skim the source); here are some general remarks.

Points

Mini-Ipe accesses the X and Y coordinates of the 2D points you give it using index [0] and [1]. This means a 2-tuple of numbers is probably the easiest way to go in many cases (see the example above). We do not provide a class for working with 2D points/vectors: if you need to do nontrivial geometric computations, you probably already have some way to do that and we do not want to create additional API boundaries.

Note.
We do provide a Matrix class for affine transformations, since this is an important concept in Ipe.

Document.path(...)

In Ipe, what is drawn is mostly orthogonal to how it is drawn. Polygons, circles, splines: almost everything is a path. Even filled shapes are paths with a fill property. This is reflected in the Mini-Ipe API: it is document.path( polygon(...), ...) rather than document.polygon( ..., ...).

A path is described by a series of "path construction operators" (Ipe documentation). The Document.path method takes a string of such drawing instructions. You can write these on your own, but should probably use the convenience functions like rectangle, circle, polygon, and so forth. Under most circumstances, you can have multiple shapes be part of one "path" by concatenating these strings.

Layers

All objects (path, text, use) belong to a layer. As a consequence of the Ipe file format, if you don't specify the layer argument, the object goes in the same layer as the previous object.

Matrix

Matrices occur in multiple places in Ipe, most prominently as a property of objects: when drawing something that has a matrix property, Ipe transforms it using the given matrix. Use the Matrix class for this: it supports matrix multiplication using the @ operator, and helper functions for common transformations such as Translate, Scale and Rotate are provided. See the matrix fun example.

Note. Transformation by the matrix property is not done by miniipe: it merely writes the matrix property in the Ipe file. To actually transform a single point in a way that is consistent with Ipe, use the Matrix.transform(p) method. See the transform example to confirm that the results match.

Ellipse

The ellipse function takes a Matrix argument: it draws the ellipse resulting from transforming the unit circle by this matrix.

The 'parent' argument

The methods path, text and use take an optional argument called parent.
If omitted, the object is added to the default page that a miniipe.Document starts with. If you make more pages, pass the page you want to add the object to instead. To put the object in a group (miniipe.Document.group(...)), pass the group instead.

Style files

It is not clear to me that I have the rights to distribute the standard Ipe style file, so you will have to provide your own. There are several ways to go about this.

- Call import_stylefile() without arguments. This tries to import ~/.ipe/styles/basic.isy, which may or may not exist on your system. You get an error if this file does not exist.
- Call import_stylefile(filename) with the filename of a valid style file. You can get one from Ipe as follows: make a new document, select Edit > Style sheets, select basic and click Save.
- Do not import a style file when you make the document with miniipe and save it anyway. Ipe may complain when you open the file - colours, symbols et cetera will be missing. You can then add the basic style file after the fact. (See option 2 for how to get the basic style file.)

You can also make styles using Mini-Ipe. See the style example code.

Bitmap images

You can include bitmap images in Ipe files using add_bitmap and image. Mini-Ipe does not do any bitmap processing: the image payload is entirely your own responsibility. See the bitmap example.
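The Matrix behaviour described above is ordinary affine arithmetic. The following plain-Python sketch shows only the underlying math, written from scratch; it is not miniipe's actual code, and it assumes the PostScript/PDF-style (a, b, c, d, e, f) layout for the six matrix entries:

```python
def transform(m, p):
    """Apply an affine map m = (a, b, c, d, e, f) to point p = (x, y).

    PostScript/PDF convention: x' = a*x + c*y + e,  y' = b*x + d*y + f.
    """
    a, b, c, d, e, f = m
    x, y = p
    return (a * x + c * y + e, b * x + d * y + f)

def translate(dx, dy):
    return (1, 0, 0, 1, dx, dy)

def scale(s):
    return (s, 0, 0, s, 0, 0)

def compose(m2, m1):
    """Composition m2 @ m1: apply m1 first, then m2."""
    a2, b2, c2, d2, e2, f2 = m2
    a1, b1, c1, d1, e1, f1 = m1
    return (a2 * a1 + c2 * b1, b2 * a1 + d2 * b1,
            a2 * c1 + c2 * d1, b2 * c1 + d2 * d1,
            a2 * e1 + c2 * f1 + e2, b2 * e1 + d2 * f1 + f2)

# Scale by 2, then translate by (100, 0):
m = compose(translate(100, 0), scale(2))
print(transform(m, (10, 20)))  # (120, 40)
```

This mirrors what Matrix.transform(p) is documented to do: the file only stores the six numbers, and the arithmetic above is how a point moves under them.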
https://pypi.org/project/miniipe/
Identifying Tweets on Twitter in Python Using Machine Learning

We deploy a model that identifies whether a tweet is positive or negative. This is a generalized model and thus can be used for any similar purpose in natural language processing. Predictions based on the nature of texts come under 'Natural Language Processing'. There are certain specific libraries used to classify lengthy text files and sort them accordingly. This is a bit different than simple classification and prediction algorithms.

Prerequisites:

- You need to have a dataset file with a .tsv extension.
- Set the folder in which your dataset is stored as the working directory.
- Install Spyder or any similar working environment. (Python 3.7 or any later version)
- You need to know the Python programming language and Natural Language Processing.

Step by step implementation:

Let us look at the steps to identify the nature of the tweets. Make sure that you have checked the prerequisites to this implementation.

1. Importing the libraries

First of all, import the libraries that we are going to use:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

2. Importing the dataset

The dataset consists of two columns: one for the tweets, and a second one holding a '0' or a '1', specifying whether the tweet is positive or negative. The dataset here is going to be a '.tsv' (Tab Separated Values) file. The reason behind not using a '.csv' (Comma Separated Values) file here is that tweets usually contain a lot of commas. In a '.csv' file, every value separated by a comma is taken as a separate column.

dataset = pd.read_csv('Tweeter_tweets.tsv', delimiter = '\t', quoting = 3)

'quoting = 3' specifies that we ignore the double quotes (punctuation) in the tweet.

3. Filtering the text

a) Removing non-significant characters

- We need to import a library, 're'. This library has some great tools to clean texts efficiently. We will keep only the letters from A to Z.
- The tool that will help us do this is the 'sub' tool. The trick is, we input what we do not want removed. Following the hat (^) is what we don't want to remove from the tweet. We also need to add a space, because each removed character will be replaced by a space.
- The second step is to put all the letters of the tweet in lowercase. We use the 'lower' function for this.

import re

tweet = re.sub('[^a-zA-Z]', ' ', dataset['Tweet'][0])
tweet = tweet.lower()
tweet = tweet.split()

For example, 'I loved the Corpus Vila.....nice location!!!'

output: i loved the corpus vila nice location

b) Removing the non-significant words

- We need to import the 'nltk' library, which contains a lot of classes, functions, data sets, and texts to perform natural language processing.
- We also need to import a stopwords package, which we will be using in the later sections. And now we need to import the tools in the 'nltk' library. The tool is going to be a list of words that are irrelevant to predicting the nature of the tweet.
- We will now use the 'split' function. Simply put, it splits each tweet into separate words. Therefore, the tweet (a string) splits into elements of a list, where one word is one element.

import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords

tweet = re.sub('[^a-zA-Z]', ' ', dataset['Tweet'][0])
tweet = tweet.lower()
tweet = tweet.split()
tweet = [word for word in tweet if not word in set(stopwords.words('english'))]

c) Stemming

- We will also do what's called stemming, which consists of taking the root of different versions of the same word.
- Let's start by importing the class 'PorterStemmer'. We need to create an object of this class, as we are going to use it in the 'for' loop. So let's call this object 'psw'.
- The first thing we'll do is go through all the different words of the tweet.
- All right, now that we have our object created, we will use this object and its stem method.
We need to apply this stem method from our 'psw' object to all the words of our tweets.

import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

tweet = re.sub('[^a-zA-Z]', ' ', dataset['Tweet'][0])
tweet = tweet.lower()
tweet = tweet.split()
psw = PorterStemmer()
tweet = [psw.stem(word) for word in tweet if not word in set(stopwords.words('english'))]

- Finally, we need to join back the different words of this tweet list.
- We use a special function for this, which is the 'join' function.

d) Applying a for loop

- We are going to take values from 0 to 4999, and for each value of 'i' we deal with a specific tweet of our dataset: the tweet indexed by 'i'.
- At the end, we append our cleaned tweet to raw_model.

import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

raw_model = []
for i in range(0, 5000):
    tweet = re.sub('[^a-zA-Z]', ' ', dataset['Tweet'][i])
    tweet = tweet.lower()
    tweet = tweet.split()
    psw = PorterStemmer()
    tweet = [psw.stem(word) for word in tweet if not word in set(stopwords.words('english'))]
    tweet = ' '.join(tweet)
    raw_model.append(tweet)

Output: love corpu vila nice locat

4. Creating a sparse matrix

- We now create a sparse matrix by taking all the different words of the tweets and creating one column for each of these words. For this, we import the class CountVectorizer from 'sklearn'.
- Here, we take all the words of the different tweets and attribute one column to each word. We will have a lot of columns, and then for each tweet, each column will contain the number of times the associated word appears in the tweet.
- Then we put all these columns in a table where the rows are nothing else than the 5000 tweets. So each cell of this table will correspond to one specific tweet and one specific word of this raw_model.
In the cell, we’re going to have a number and this number is going to be the number of times the word corresponding to the column appears in the tweet. - And actually, this table is a matrix, containing a lot of zeroes called a sparse matrix. from sklearn.feature_extraction.text import CountVectorizer cvw = CountVectorizer(max_features = 9500) X = cvw.fit_transform(raw_model).toarray() y = dataset.iloc[:, 1].values 5. Training the model and analyzing the results - For our machine learning model to be able to predict the nature of tweets, it needs to be trained on all these tweets. - Well, as usual, it needs to have some independent variables and one dependent variable because simply what we are doing here is classification. So, we have some independent variables, on which we will train our model to predict a dependent variable, which is a categorical variable. We train our model based on the ‘naive Bayes’ algorithm. - We can analyze the results looking at the confusion matrix from the variable explorer. from sklearn.model_selection import train_test_split X_training_set, X_test_set, y_training_set, y_test_set = train_test_split(X, y, test_size = 0.25, random_state = 0) from sklearn.naive_bayes import GaussianNB classifier = GaussianNB() classifier.fit(X_training_set, y_training_set) from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test_set, y_result) Results: The confusion matrix helps us to predict the number of reviews correctly classified. We can experiment with the results by increasing or decreasing the values in the training and test sets. Also read: Naive Bayes Classification Algorithm in Machine Learning
https://www.codespeedy.com/identifying-tweets-on-twitter-in-python-using-machine-learning/
Place-Plus

Please see attached file for full problem description.

13. The various development projects and their five-year financial return ($ millions), given that interest rates will decline, remain stable, or increase, are shown in the following payoff table.

                          Interest Rates
Project           Decline   Stable   Increase
Office park         0.5       1.7      4.5
Office building     1.5       1.9      2.5
Warehouse           1.7       1.4      1.0
Mall                0.7       2.4      3.6
Condominiums        3.2       1.5      0.6

Determine the best investment using the following decision criteria.
a. Maximax
b. Maximin
c. Equal likelihood
d. Hurwicz (α = .3)

14. The Oakland Bombers professional basketball team just missed making the playoffs last season and believes it needs to sign only one very good free agent to make the playoffs next season. The team is considering four players: Barry Byrd, Rayneal O'Neil, Marvin Johnson, and Michael Gordan. Each player differs according to position, ability, and attractiveness to fans. The payoffs (in $ millions) to the team for each player, based on the contract, profits from attendance, and team product sales for several different seasonal outcomes, are provided in the following table.

                       Season Outcome
Player       Loser   Competitive   Make Playoffs
Byrd         -3.2        1.3            4.4
O'Neil       -5.1        1.8            6.3
Johnson      -2.7        0.7            5.8
Gordan       -6.3       -1.6            9.6

Determine the best decision using the following decision criteria.
a. Maximax
b. Maximin
c. Hurwicz (α = .60)
d. Equal likelihood

32. Construct a decision tree for the decision situation shown in the following payoff table. Determine which type of dealership the couple should purchase.

                       Gasoline Availability
Dealership         Shortage (.6)   Surplus (.4)
Compact cars         $300,000        $150,000
Full-sized cars      -100,000         600,000
Trucks                120,000         170,000

34. The management of First American Bank was concerned about the potential loss that might occur in the event of a physical catastrophe such as a power failure or a fire.
The bank estimated that the loss from one of these incidents could be as much as $100 million, including losses due to interrupted service and customer relations. One project the bank is considering is the installation of an emergency power generator at its operations headquarters. The cost of the emergency generator is $800,000, and if it is installed, no losses from this type of incident will be incurred. However, if the generator is not installed, there is a 10% chance that a power outage will occur during the next year. If there is an outage, there is a .05 probability that the resulting losses will be very large, or approximately $80 million in lost earnings. Alternatively, it is estimated that there is a .95 probability of only slight losses of around $1 million. Using decision tree analysis, determine whether the bank should install the new power generator.
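The criteria in problems 13 and 14 are mechanical enough to script. Here is a small plain-Python sketch using the problem 13 payoffs; Hurwicz is computed as α times the row maximum plus (1 - α) times the row minimum, which is one common textbook convention (check your own text's before relying on it):

```python
payoffs = {
    "Office park":     [0.5, 1.7, 4.5],   # decline, stable, increase
    "Office building": [1.5, 1.9, 2.5],
    "Warehouse":       [1.7, 1.4, 1.0],
    "Mall":            [0.7, 2.4, 3.6],
    "Condominiums":    [3.2, 1.5, 0.6],
}

def maximax(table):
    # Best best-case payoff
    return max(table, key=lambda k: max(table[k]))

def maximin(table):
    # Best worst-case payoff
    return max(table, key=lambda k: min(table[k]))

def hurwicz(table, alpha):
    # alpha weights the best case, (1 - alpha) the worst case
    return max(table, key=lambda k: alpha * max(table[k]) + (1 - alpha) * min(table[k]))

def equal_likelihood(table):
    # Best average payoff
    return max(table, key=lambda k: sum(table[k]) / len(table[k]))

print(maximax(payoffs))        # Office park
print(maximin(payoffs))        # Office building
print(hurwicz(payoffs, 0.3))   # Office building
# Equal likelihood: Office park and Mall both average 6.7/3, an exact tie;
# floating-point rounding decides which key max() returns here.
print(equal_likelihood(payoffs))
```

Swapping in the problem 14 table (with α = .60) works the same way.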
https://brainmass.com/statistics/probability/place-plus-149478
HTML::Mason::Utils - Publicly available functions useful outside of Mason

version 1.52

The functions in this module are useful when you need to interface code you have written with Mason.

Given a component id, this function returns its default Cache::Cache namespace. This can be useful if you want to access the cached data outside of Mason. With a single component root, the component id is just the component path. With multiple component roots, the component id is key/path, where key is the key corresponding to the root that the component falls under.

This function expects to receive a CGI.pm object and the request method (GET, etc). Given these two things, it will return a hash in list context or a hashref in scalar context. The hash(ref) will contain all the arguments passed via the CGI request. The keys will be argument names and the values will be either scalars or array references.

This software is copyright (c) 2012 by Jonathan Swartz.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/~jswartz/HTML-Mason-1.52/lib/HTML/Mason/Utils.pm
Official retail release of Vista was January 30, 2007

18 hours ago, PopeDai wrote
*snip* 2006.

18 hours ago, PopeDai wrote
No, because 16-bit programming hurts my brain. I still don't understand how memory addressing worked with near and far pointers.

What's the problem? Converting between a (real-mode) far pointer and a linear pointer is easy.

DWORD FpToLinear(FAR void* fp)
{
    // segment in the high word, offset in the low word:
    // linear = segment * 16 + offset
    return (((DWORD)fp >> 16) << 4) + ((DWORD)fp & 0xffff);
}

What could be more straightforward than that? =D

21 hours ago, yuhong wrote
@figuerres: It is to *target* XP, not to host on XP, as with VS2012.

15 hours ago, yuhong wrote
@evildictaitor: The problem is that they don't work in 16-bit protected mode, for one thing.

20 hours ago, evildictaitor wrote
... between a (real-mode) far pointer ...
http://channel9.msdn.com/Forums/Coffeehouse/VC11-Firefox-Metro-Win8-SDK-and-XP?page=2
Odoo Help

Unable to install module

I am trying to install OCA/Camptocamp's Analytic Timesheet In Task (timesheet_task) module, but I keep getting this error:

File "/opt/odoo/odoo-server/addons/timesheet_task/__init__.py", line 21, in <module>
    from . import project_task
ImportError: cannot import name project_task

The file project_task.py is present and the file permissions seem to be OK. I've also tried to replace '.' with 'openerp' in __init__.py, but it has no effect. I have no problems installing other modules. Any ideas?
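The question went unanswered here, so for the record: in Python, "ImportError: cannot import name X" from a package __init__.py means the import system could not bind the submodule name, which happens when the submodule is missing, fails to import, or is caught in a circular import (on the Python 2 that Odoo used at the time, a submodule that raises during import can surface exactly this message even though the file exists). A minimal standalone reproduction of the message, with no Odoo involved:

```python
import os
import sys
import tempfile

# Build a throwaway package whose __init__.py mirrors the failing Odoo
# module, but deliberately leave project_task.py out.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "timesheet_demo")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from . import project_task\n")

sys.path.insert(0, root)
try:
    import timesheet_demo  # noqa: F401
    failed = False
except ImportError as exc:
    failed = True
    print(exc)  # cannot import name 'project_task' ...
```

Since project_task.py does exist in the reporter's case, the next debugging step would be importing it directly from the addon folder to see the real underlying error rather than the masked one.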
https://www.odoo.com/forum/help-1/question/unable-to-install-module-84180
Simple shooting game in AS3

In this tutorial you will learn how to create a simple shooting game in ActionScript 3. The objective of the game is to shoot the moving targets on the screen. You have three tries to shoot the targets. If you manage to shoot three targets without missing, you win the game. Otherwise you lose the game. I have used a free stock image of a bottle for the moving targets, but any image will work.

Step 1

Open a new AS3 file with stage dimensions 500 x 120 and import an image onto the stage by pressing File > Import > Import to stage. I have used an image of a beer bottle, but you can use any image. Convert the image into a movie clip (F8), check the 'Export for Actionscript' button and give it the identifier: Bottle. Once you have created the movie clip, delete it from the stage.

Step 2

On the menu bar select Insert > New symbol, select the type movie clip, give the name: shots, check the Export for Actionscript button and give the identifier: Shots, then click OK. In this movie clip you will create the bullets for the game, so in the first frame create three bullets. You can either create the bullets using the shape tools or import an image of a bullet. Then in the second frame remove one of the bullets. Again in the third frame remove another bullet. And finally in the fourth frame remove the last bullet. Insert a new layer on the timeline called Actions, then open the Actions panel and add a stop() method. Now return to the main timeline.

Step 3

Again on the menu bar select Insert > New symbol, select the type movie clip, give the name: endBox, check the Export for Actionscript button and give the identifier: EndBox. This movie clip will display a message indicating whether you have won or lost the game. In the first frame create a rectangle shape with the text 'You win'. And in the second frame create a rectangle shape with the text 'You lose'. Insert a new layer called Actions, then open the Actions panel and add a stop() method.
Now return to the main timeline.

Step 4

Open a new ActionScript 3 class file and save it with the name: Shoot. Then add the following code. Make sure you set 'Shoot' as the document class.

package {
    import flash.display.MovieClip;
    import flash.display.Sprite;
    import flash.events.MouseEvent;
    import flash.events.Event;

    public class Shoot extends MovieClip {

        private var bottleArray:Array = new Array();
        private var numOfBottle:uint = 9;
        private var bottleHit:uint = 0;
        private var hits:uint = 6;
        private var bottle:Bottle;
        private var shots:Shots;
        private var endBox:EndBox;
        private var bg:Sprite;

        public function Shoot() {
            stage.addEventListener(Event.ENTER_FRAME, enterHandler);

            // Adds the bottles onto the stage.
            for (var i:int = 0; i < numOfBottle; i++) {
                bottle = new Bottle();
                bottle.x = 0 + (i * 60);
                bottle.y = 14.35;
                bottle.addEventListener(MouseEvent.CLICK, clickHandler);
                addChild(bottle);
                bottleArray.push(bottle);
            }

            // Adds an invisible background to detect when the targets have been missed.
            bg = new Sprite();
            bg.graphics.beginFill(0x000000, 0);
            bg.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
            bg.graphics.endFill();
            addChildAt(bg, 0);
            bg.addEventListener(MouseEvent.CLICK, checkShots);

            // Adds the number of shots left to the stage.
            shots = new Shots();
            shots.x = 434.85;
            shots.y = 95.75;
            addChild(shots);

            endBox = new EndBox();
            addChild(endBox);
            endBox.x = 148;
            endBox.y = 32;
            endBox.visible = false;
        }

        // This function moves all the bottles in the array one pixel to the
        // right at the current frame rate. If a bottle goes past the stage
        // width, it gets re-added a bottle width before the start of the stage.
        private function enterHandler(e:Event):void {
            for (var j:uint = 0; j < bottleArray.length; j++) {
                bottleArray[j].x += 1;
                if (bottleArray[j].x > stage.stageWidth + bottleArray[j].width) {
                    bottleArray[j].x = 0 - bottleArray[j].width;
                }
            }
        }

        // This removes the bottle from the stage, removes its event listener and
        // removes it from the array if it has been clicked.
        private function clickHandler(e:MouseEvent):void {
            var index:uint = bottleArray.indexOf(e.currentTarget);
            var target:MovieClip = e.currentTarget as MovieClip;
            target.removeEventListener(MouseEvent.CLICK, clickHandler);
            removeChild(target);
            bottleArray.splice(index, 1);
            checkShots();
        }

        private function checkShots(e:MouseEvent = null):void {
            // This increments the bottleHit counter by one and goes to
            // the next frame in the shots movie clip.
            bottleHit++;
            shots.gotoAndStop(shots.currentFrame + 1);

            // If there are no more shots available and six bottles remain
            // (meaning three were hit), then you win.
            if (shots.currentFrame == shots.totalFrames && bottleArray.length == hits) {
                removeObject();
                endBox.visible = true;
                endBox.gotoAndStop(1);
            }
            // If there are no more shots available, you lose.
            else if (shots.currentFrame == shots.totalFrames) {
                removeObject();
                endBox.visible = true;
                endBox.gotoAndStop(2);
            }
        }

        // This removes the event listeners.
        private function removeObject():void {
            stage.removeEventListener(Event.ENTER_FRAME, enterHandler);
            bg.removeEventListener(MouseEvent.CLICK, checkShots);
            for (var i:int = 0; i < bottleArray.length; i++) {
                bottleArray[i].removeEventListener(MouseEvent.CLICK, clickHandler);
            }
        }
    }
}

Step 5

Test your movie with Ctrl + Enter. You can download the source files here.

6 comments:

i tried using some of this code to allow for multiple enemy spawns in a side scrolling shooting game im making, but im getting an error. any tips?
code:

for(var i:int=0; i < numofbad;){
    Badguy = new Enemy()
    badguyarray[i].addChild(Badguy)
    badguyarray[i].Badguy.y = int(Math.random()*500)
    badguyarray[i].Badguy.x = 900
    i=i+1
}

error:

TypeError: Error #1010: A term is undefined and has no properties.
at shootergame_fla::MainTimeline/frame1()

@Jake A few things to check with your code so far:
*Is there a Badguy variable?
*Is there a movie clip with the Class Enemy?
*Is there an instance of the array - badguyarray?

@iliketo yeah, they were all properly declared, i reopened the file in the morning made some minor edits that i dont remember and now it works... still no idea what was wrong though. but i was wondering would i be able to make a colision detection script for two objects generated this way, i cant quite figure out how to find a path for the both of them...

@Jake, Can you please explain? I'm not sure what you mean.

@iliketo basically i have two movie clips, one called aseed and one called badguy. aseed is moving right and badguy is moving left. both are generated through code into an array so i can have multiple on screen. i am trying to detect colisions between the two but your code uses parents instead of a path like stage.aseed so i dont know how to check for collisions as i cant find a path for the movie clips. do you have any idea how i could detect coliosions between the two? thanks in advance.

@Jake, Take a look at this post
http://www.ilike2flash.com/2011/03/simple-shooting-game-in-as3.html
Figure 1 - The property grid in action

Are you familiar with that window in Visual Studio .NET that lets you edit all your forms? You know, the one that you wish you could add to all your programs because it has such a cool interface? You know, the Property Window! Well guess what? You can add it to your Windows Form as part of your design and use it to allow users to edit class properties directly, because Microsoft has provided it; they just don't display it initially in the toolbox. The control is called the Property Grid, and you have full access to it. Simply right click on the toolbox and choose Add/Remove Items... as shown in figure 2 below:

Figure 2 - Adding the Property Grid to the ToolBox

This will bring up the Customize Toolbox dialog, which will give you a choice of either .NET Components or COM Components. Scroll down to PropertyGrid, check this item and click OK. Voila! You now have access to a very powerful editing control in which you can allow users to edit properties in your classes through the slick interface of the property grid.

Figure 3 - Customize ToolBox Dialog for Adding your PropertyGrid Control to the Toolbox

Readying your Class for the Property Grid

The property grid is fairly easy to use. The hard part is making the class that you want to display in the grid "Property Grid Friendly". The first step is to create public properties for the fields you want to expose. All properties should have get and set methods. (If you don't have a get method, the property won't show up in the PropertyGrid.) If you just do the bare minimum of making public properties with get and set methods, your class instance can be displayed in the property grid. However, your property grid will be void of descriptions, categories and other niceties that property grids such as Visual Studio .NET's contain. In order to really make your property grid soar, you need to add special attributes to each of your properties.
These attributes are contained in the System.ComponentModel namespace and are shown in the table below:

CategoryAttribute - groups the property under a category header in the grid
DescriptionAttribute - the text shown at the bottom of the grid when the property is selected
DefaultValueAttribute - the default value displayed for the property
DefaultPropertyAttribute - a class-level attribute naming the property selected by default

Table 1 - Attributes to ready your class's properties for the property grid

The Customer Class

Now we are ready to create our customer class and make it displayable in the property grid. In order to utilize the property grid attributes in Table 1, we need to import the namespace for these attributes, so we add the using clause for System.ComponentModel:

using System.ComponentModel;

Then we simply create the customer class with private fields and public properties that access these fields. Attributes are placed above the properties to ready each property for the property grid.

Listing 1 - PropertyGrid-Ready Customer Class

All that is left to do to get the grid running is assign an instance of our customer class to the property grid. The property grid will automatically figure out all the fields of the customer through reflection and display the property name along with the property value on each line of the grid. Another nice feature of the property grid is that it will create special editing controls on each line that correspond to the value type on that line. For example, a Date of Birth property (of type DateTime) of the customer will allow you to edit the value of the date with the calendar control. Booleans can be edited with a combo box showing True or False (saves you from excess typing).

In Listing 2 we created a new customer and populated it with values through its properties. We then use the SelectedObject property of the PropertyGrid and assign our customer object to this property. Upon assigning the customer to the grid, the grid will display all of the public properties we defined in our Customer class.

Listing 2 - Assigning the customer object to the property grid.
private void Form1_Load(object sender, System.EventArgs e)
{
    // Create the customer object we want to display
    Customer bill = new Customer();

    // Assign values to the properties
    bill.Age = 50;
    bill.Address = "114 Maple Drive";
    bill.DateOfBirth = Convert.ToDateTime("9/14/78");
    bill.SSN = "123-345-3566";
    bill.Email = "bill@aol.com";
    bill.Name = "Bill Smith";

    // Sets the grid with the customer instance to be browsed
    propertyGrid1.SelectedObject = bill;
}

Examining the Results

Figure 1 shows the results of our efforts of creating a property grid in our form. If we examine figure 1 again, we see that the two categories we defined in our CategoryAttribute, ID Settings and Market Settings, are shown as headers in the two tree nodes. The description of the FrequentBuyer property is shown at the bottom of the PropertyGrid, retrieved from our DescriptionAttribute for that property. Upon selecting the boolean FrequentBuyer property value, we get a drop-down for the possible boolean values, True or False.

Conclusion

The property grid is a powerful control for allowing users to edit the internals of your published classes. Because of the ability of the property grid to reflect and bind to your class, there is not much work involved in getting the property grid up and working. You can add categories and descriptions to your property grid by using the special grid attributes in the System.ComponentModel namespace. Anyway, here is yet another powerful control available to you and the latest property of your .NET mindshare.
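The Customer class of Listing 1 did not survive in this copy of the article. A reconstruction consistent with Listing 2 and with the categories and description visible in Figure 1 might look like the following; the exact attribute strings and field layout are guesses, only the property names and types are taken from Listing 2:

```csharp
using System;
using System.ComponentModel;

public class Customer
{
    private string name;
    private int age;
    private DateTime dateOfBirth;
    private string ssn;
    private string address;
    private string email;
    private bool frequentBuyer;

    [Category("ID Settings"), Description("Name of the customer")]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    [Category("ID Settings"), Description("Age of the customer")]
    public int Age
    {
        get { return age; }
        set { age = value; }
    }

    [Category("ID Settings"), Description("Date of birth of the customer")]
    public DateTime DateOfBirth
    {
        get { return dateOfBirth; }
        set { dateOfBirth = value; }
    }

    [Category("ID Settings"), Description("Social security number of the customer")]
    public string SSN
    {
        get { return ssn; }
        set { ssn = value; }
    }

    [Category("ID Settings"), Description("Address of the customer")]
    public string Address
    {
        get { return address; }
        set { address = value; }
    }

    [Category("Market Settings"), Description("Email address of the customer")]
    public string Email
    {
        get { return email; }
        set { email = value; }
    }

    [Category("Market Settings"),
     Description("Does the customer buy frequently?"),
     DefaultValue(false)]
    public bool FrequentBuyer
    {
        get { return frequentBuyer; }
        set { frequentBuyer = value; }
    }
}
```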
http://www.c-sharpcorner.com/article/using-property-grid-in-C-Sharp/
23 September 2011 07:46 [Source: ICIS news]

Correction: In the ICIS story headlined "Dushanzi Petchem to restart crackers, polyolefins in late Sep, Oct" dated 23 September 2011, please read in the first paragraph ... "plants in Dushanzi" ... instead of ... "plants at Urumqi" ... and in the fourth paragraph ... "100,000 tonne/year HDPE unit" ... instead of ... "1m tonne/year HDPE unit" ... A corrected story follows.

SINGAPORE (ICIS)--China's Dushanzi Petrochemical will restart its 1m tonne/year ethylene cracker and derivative polyolefin plants in Dushanzi in late September and October following shutdowns at the plants on 8 August, a company source said on Friday.

In addition, the company will restart its 300,000 tonne/year high density polyethylene (HDPE) unit, 600,000 tonne/year linear low density PE (LLDPE)/HDPE swing plant, 300,000 tonne/year polypropylene (PP) plant and 250,000 tonne/year PP unit at the same site during this period, the source said.

The producer shut its 220,000 tonne/year ethylene cracker for maintenance on 12 August and will restart it during the holiday period.

Dushanzi Petrochemical will restart its 100,000 tonne/year HDPE unit, 120,000 tonne/year LLDPE and 140,000 tonne/year PP plant at the same site during the holiday period. The company shut the units on 12 August.
http://www.icis.com/Articles/2011/09/23/9494566/corrected-dushanzi-petchem-to-restart-crackers-polyolefin-units.html
Does HCP automatically purge/remove documents based on some retention policy?

I was going through the REST Developer Guide and saw that HCP has a piece of system metadata called 'retention' which can be set. However, it looks like this is different from what I need, since what this does (as per the doc) is that 'HCP will reject attempts to delete an object during its retention period'. This works perfectly, in that any attempt to remove the resource fails, and the delete is allowed only after the retention period. Is there any trigger in HCP which can be set that automatically removes the resource after the retention period? Thanks.

It is described in the "Managing A Tenant And Its Namespaces" book. Yes, it is enabled on a namespace level.
https://community.hitachivantara.com/thread/9788
SSL_CLEAR(3) OpenSSL SSL_CLEAR(3)

SSL_clear - reset SSL object to allow another connection

#include <openssl/ssl.h>

int SSL_clear(SSL *ssl);

Reset ssl to allow another connection. All settings (method, ciphers, BIOs) are kept.

SSL_clear is used to prepare an SSL object for a new connection. While all settings are kept, a side effect is the handling of the current SSL session. If a session is still open, it is considered bad and will be removed from the session cache.

SSL_clear() resets the SSL object to allow for another connection. The reset operation however keeps several settings of the last session (some of these settings were made automatically during the last handshake). It only makes sense when opening a new session (or reusing an old one) with the same peer that shares these settings.

SSL_clear() is not a short form for the sequence SSL_free(3); SSL_new(3);.

See also: SSL_CTX_set_client_cert_cb(3).

MirOS BSD #10-current 2005-02-05
http://mirbsd.mirsolutions.de/htman/sparc/man3/SSL_clear.htm
The following form allows you to view linux man pages.

#include <sys/statvfs.h>

int statvfs(const char *path, struct statvfs *buf);
int fstatvfs(int fd, struct statvfs *buf);

statvfs() returns information about a mounted filesystem.

ST_NOSUID
       Set-user-ID/set-group-ID bits are ignored by exec(3).

It is unspecified whether all members of the returned struct have meaningful values on all filesystems.

fstatvfs() returns the same information about an open file referenced by descriptor fd.

On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

EACCES (statvfs()) Search permission is denied for a component of the path prefix of path. (See also path_resolution(7).)

EBADF  (fstatvfs()) fd is not a valid open file descriptor.

Multithreading (see pthreads(7))
       The statvfs() and fstatvfs() functions are thread-safe.
http://www.linuxguruz.com/man-pages/fstatvfs/
I have put some xml data in a file with the python code below, and have been trying to figure out how to get the data from the file, add another "person", and then save to the same file again.

from lxml import etree
from lxml.builder import ElementMaker

E = ElementMaker()
DOC = E.doc
PERSON = E.person
TITLE = E.title
DESC = E.desc
IDNO = E.idno

def NAME(*args):
    return {"name": ' '.join(args)}

thing = DOC(
    PERSON(
        NAME("John"),
        DESC("germedabob"),
        IDNO("2")
    )
)

filename = "xmltestthing.xml"
FILE = open(filename, "w")
FILE.writelines(etree.tostring(thing, pretty_print=True))
FILE.close()

This is what I have got so far; I can't work out how to add anything to it and save it again.

from lxml import etree

file = "xmltestthing.xml"
thing1 = etree.parse(file)
print(etree.tostring(thing1, pretty_print=True))

That outputs:

<doc>
  <person name="John">
    <desc>germedabob</desc>
    <idno>2</idno>
  </person>
</doc>

Thanks
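One way to do what the poster asks (a sketch, not from the original thread): parse the file, append a new person element, and write it back. This version uses the standard library's xml.etree.ElementTree so it runs anywhere; lxml's etree offers a near-identical parse/SubElement/write API. The seed-file step just recreates the poster's document so the example is self-contained.

```python
import xml.etree.ElementTree as ET

# Seed file matching the poster's output (so the example is self-contained)
with open("xmltestthing.xml", "w") as f:
    f.write('<doc><person name="John"><desc>germedabob</desc>'
            '<idno>2</idno></person></doc>')

# Parse, append another person, and save back to the same file
tree = ET.parse("xmltestthing.xml")
doc = tree.getroot()

person = ET.SubElement(doc, "person", name="Jane")
ET.SubElement(person, "desc").text = "another entry"
ET.SubElement(person, "idno").text = "3"

tree.write("xmltestthing.xml")
```

After running, xmltestthing.xml holds both person elements; with lxml you would add pretty_print=True to the write call to keep the indented layout shown above.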
https://www.daniweb.com/programming/software-development/threads/290953/lxml-file-i-o
Introduction: Arduino Audio Output

(I've written about sending audio into an Arduino, find that here.)

Additional Materials:
(1x) usb cable
(1x) breadboard (this one comes with jumper wires)
(1x) jumper wires

The purpose of a low pass filter is to smooth out the output of the DAC in order to reduce noise. By using a low pass filter on the signal, you can smooth out the "steps" in your waveform while keeping the overall shape of the waveform intact (see fig 4). I used a simple RC low pass filter to achieve this: a resistor and a capacitor in series to ground. Connect the resistor to the incoming signal and the capacitor to ground; the signal coming from the junction between these two components will be low pass filtered. I sent this filtered signal into another buffer circuit (I wired an op amp in a voltage follower configuration) to protect the filtered signal from any loads further down in the circuit. See the schematic in fig 5 for more info.

Next I added a potentiometer to control the amplitude of my signal. To do this I wired the output from the 2nd voltage follower to one side of a 10k potentiometer. Then I wired the other side of the pot to ground. The signal coming out from the middle of the pot has an adjustable amplitude (between 0 and 2.5V) depending on where the pot is turned. See the schematic (fig 7) for more info. You can see the output of the signal before the pot and after the pot (when turned to the halfway point) in fig 6.

Step 6: Amplifier

Many times when we talk about amplifiers we think about circuits which increase the amplitude of a signal. In this case I'm talking about increasing the current of the signal so that it can drive a load (like a speaker). In this stage of the circuit I set up both op amps on one TS922 package as parallel voltage followers. What this means is I sent the output from the amplitude pot to the non-inverting input of both op amps.
Then I wired both op amps as voltage followers and connected their outputs to each other. Since each op amp can source 80mA of current, combined they can source 160mA of current.

Step 7: DC Offset

Before sending a signal to speakers, you want to make sure it is oscillating around 0V (typical of audio signals). So far, the Arduino DAC output we've been dealing with is oscillating around 2.5V. To fix this we can use a big capacitor. As indicated in the schematic, I used a 220uF capacitor to DC offset my signal so that it oscillates around 0V. The output of the DC offset signal (blue) and un-offset signal (yellow) for two different amplitudes can be found in figs 2 and 3.

Step 8: Output

Finally, I wired up a 1/4" mono jack with two wires. I connected the ground lead to the Arduino's ground and the signal lead to the negative lead of the 220uF capacitor. The ground pin is usually the larger pin on the jack; test for continuity with the threaded portion of the jack to make sure that you have located the ground pin correctly (see fig 5). The signal pin will be continuous with the clip that extends out from the jack (fig 5). See the schematic for more info.

113 Comments

2 years ago
What do you mean the Arduino does not have analog out capabilities? There are 6 analog i/o pins on the Arduino...

Reply 2 years ago
The analog pins on the Arduino can only read analog input. Analog output requires a DAC of some kind.

Question 3 years ago
May sound like a weird question, but can you use the TL082 that you used in your arduino audio input project?

4 years ago
Hi, nice circuit. I just don't understand how you can have a voltage below 0V at the output of the DC offset filter, as the Arduino has no negative rail supply.

Reply 3 years ago
You're right, the output of the arduino is between 0 and 5 V, but the requirement for the speaker is that the input is a wave centred around 0 volts.

So in this circuit the input is from the arduino digital pins, converted to a voltage between 0 and 5 V in the resistor ladder described in step 1 (and step 2 shows how to write to this from the arduino). Then the next few steps do a bunch of stuff retaining this voltage range to prepare it for output. Then in step 7 it gets shifted from that range to being centred around 0 V, ready for output to the speaker. Hope this helps - I haven't built it yet due to the amplifier chip being obsolete and not being entirely sure what to replace it with, but I've been reading and rereading to try and understand as best I can.

Question 3 years ago
Hi, great tutorial! Is it possible to test the DAC unit without an expensive oscilloscope? And do the components have to match the ones you listed, or will for example any 20kOhm resistor work? What are the important specifications of the amplifier? I can't seem to find that specific one. Thanks

Question 4 years ago on Step 7
What is the function of the 0.1uF + 10Ohm in parallel with the output cap + speaker? Is this simply a filter for noise?

4 years ago
hi there, great project! i take it if i wanted to combine the input and output projects, the code would be able to combine easily. i am working on an audio effects box and am interested in using ideas from these projects to help with mine. thanks

5 years ago
Would you please hint at how this project or your waveform generator project (that I got from Jameco) can output a suitable DC/AC voltage (considering the pot max amplitude) to be hooked up to a smartphone microphone jack, without damaging the internal circuit of the smartphone. Do I need to make a change to the software, hardware or the power source? Thanks.

5 years ago
I am doing a blind stick with ultrasonic sensor and voice guide where it tells if you should go left or right. Is it possible if i use this audio output?

Reply 5 years ago
Use a text to speech chip, it will be much easier.

5 years ago
Hello! Nice tutorial.
I want to make a low-pass filter like you for Arduino toneAC(). With toneAC, we're sending out-of-phase signals on two pins. How will I connect the resistor and capacitor? Do I need two resistors and two capacitors, connected between every pin and ground? Or is one resistor and one capacitor between one pin and ground sufficient?

5 years ago
I did a test on the timings of direct port write. I used the following pins: PB1, PB0, PD7, PD6, PD5, PD4, PD3, PD2 (leaving PD1 and PD0 for rx/tx).

PORTD = (PORTD & B00000011)|((input<<2)&B11111100);
PORTB = (PORTB & B11111100)|((input>>6)&B00000011);

These two lines set the input on the aforementioned pins with the direct bit-banging write method described. The result is astonishing - it only takes ~1.6 usec to execute these two lines. So for an interrupt service routine you get ample time to do other processing. Here is the code:

#include "Arduino.h"

uint8_t input = 100;
String inputString = "";

// The setup function is called once at startup of the sketch
void setup() {
  // Add your initialization code here
  Serial.begin(115200);
}

// The loop function is called in an endless loop
void loop() {
  // Add your repeated code here
  testSerialEvent();
  long sTime = millis();
  for (long i = 0; i < 100000; i++) {
    PORTD = (PORTD & B00000011) | ((input << 2) & B11111100);
    PORTB = (PORTB & B11111100) | ((input >> 6) & B00000011);
  }
  long eTime = millis() - sTime;
  Serial.println(eTime);
  Serial.print("[");
  for (int i = 1; i >= 0; i--) {
    Serial.print(((PORTB & (1 << i)) >> i)); Serial.print(" ");
  }
  for (int i = 7; i >= 2; i--) {
    Serial.print(((PORTD & (1 << i)) >> i)); Serial.print(" ");
  }
  Serial.println("]");
  delay(1000);
}
// output: every loop takes 170 msec, which means one iteration takes ~1.7 usec. Its too good.

void testSerialEvent() {
  while (Serial.available()) {
    char c = (char)Serial.read();
    inputString += c;
    if (c == '\n') {
      input = inputString.toInt();
      inputString = "";
      Serial.print("Input->"); Serial.println(input);
      break;
    }
  }
}

5 years ago
Hello, I think this tutorial is wonderful, thank you so much for posting it, however I've run into a problem now. When I hook the Arduino up to an oscilloscope, it shows a perfect sine wave. However, when I plug my Arduino into my audio interface, so I can record the output on my computer, the signal becomes truncated and only has a peak to peak voltage of 80 mV. Could you please help me understand why this is happening, and what I can do to fix it? Thank you.

6 years ago
Hi Amanda, thank you very much, all this is great. I would use this project to generate a wave and split the signal from one output to multiple guitar amps through something like a plugboard (I believe it will be connected in parallel). Do I need to change something in the scheme to send a good signal to all the speakers? Thank you :)

6 years ago
thank you for this excellent project

6 years ago
Hi amandaghassaei! Nice work! I'm trying to do this project but i have one big problem. I can't find TS922IN/24 in my country, and i don't have enough time to buy from another one. Isn't there an alternative for TS922IN/24?

6 years ago on Introduction
Hello Sir, I want to output an analog signal using this tutorial, but my problem is I want my input to be digital. What I mean is, someone is going to send me a bunch of bits and then I want to output them as analog. My problem is I don't know how to read those incoming bits in my arduino. Do you have any code for reading bits on the pin of an arduino?

Reply 6 years ago
did u hve get the ans, if yes...can u share it =)

6 years ago on Introduction
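A quick aside (not from the original post or comments): the RC low pass filter used above to smooth the DAC output has cutoff frequency f_c = 1/(2*pi*R*C). A small Python check with illustrative part values - the article's exact R and C are not given here, so these numbers are assumptions:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """Cutoff (-3 dB) frequency of a simple RC low pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Assumed example: 1 kOhm with 10 nF sits just above the audio band.
print(round(rc_cutoff_hz(1000, 10e-9)))  # ~15915 Hz
```

Picking R and C so the cutoff lands just above the highest audio frequency you care about removes the DAC "steps" without dulling the waveform itself.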
https://www.instructables.com/Arduino-Audio-Output/
[:call, [:lit, 1], :+, [:array, [:lit, 1]]]

A literal looks like this:

[:lit, 42]

To traverse the AST, ParseTree comes with the SexpProcessor library, which facilitates the creation of visitors. To analyze all node types of a Ruby AST, a subclass of SexpProcessor with process_XXX methods is created, where XXX is the name of the node. For instance, this handles the :alias node:

def process_alias(node)
  cur = node.shift
  nw = node.shift
  # ...
end

The Ruby to Rubinius bytecode compiler is built in this way. For instance, a Ruby alias call is parsed into [:alias, :old_name, :new_name], which the compiler handles as such:

def process_alias(x)
  cur = x.shift
  nw = x.shift
  add "push :#{cur}"
  add "push :#{nw}"
  add "push self"
  add "send alias_method 2"
end

The compiler takes the old name (in cur) and the new name (in nw), and creates the bytecode instructions (as strings) necessary to implement the functionality, which are then turned into the binary bytecodes executed by the Rubinius interpreter. Having the compiler in Ruby makes it easy to get insight into the inner workings and modify it for experiments. Useful scenarios could include instrumentation of the generated code or a low overhead way of collecting statistics about the compiled code.

To look at the Rubinius source code, either refer to InfoQ's article about getting started with Rubinius development or just take a peek at the Rubinius source code online, for instance the current version of the Rubinius bytecode compiler. The compiler is not the only aspect necessary for Rubinius. A complete standard library is necessary too. Marcus Crafter, of Red Artisan, provides a tutorial on how to add library functionality to Rubinius. The tutorial shows how to use the Rubinius foreign function interface (ffi) to access native library calls. This is used to implement some missing library functionality; in this tutorial, the POSIX call link.
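The process_XXX dispatch idea is easy to sketch in plain Ruby over the nested-array s-expressions shown above. The class below is a simplified stand-in for SexpProcessor, not its real API - it just dispatches on the node's first symbol and recurses into unhandled nodes:

```ruby
# Minimal dispatcher over [:type, ...] nested arrays, echoing the
# process_XXX convention described above.
class MiniSexpProcessor
  def process(node)
    return node unless node.is_a?(Array)
    method_name = "process_#{node.first}"
    if respond_to?(method_name)
      send(method_name, node)       # a handler exists for this node type
    else
      node.each { |child| process(child) }  # otherwise recurse into children
    end
  end
end

# Example visitor: collect every literal value in the tree.
class LiteralCollector < MiniSexpProcessor
  attr_reader :literals

  def initialize
    @literals = []
  end

  def process_lit(node)
    @literals << node.last
  end
end

collector = LiteralCollector.new
collector.process([:call, [:lit, 1], :+, [:array, [:lit, 1]]])
# collector.literals now holds the literal values found in the AST
```

The real SexpProcessor adds machinery this sketch omits (Sexp objects, automatic shifting, strictness checks), but the dispatch-by-node-name pattern is the same one the Rubinius compiler uses.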
https://www.infoq.com/news/2007/10/rubinius-compiler-ffi/
- Versioning in SOAP

One interesting note about SOAP is that the Envelope element doesn't expose any explicit protocol version in the style of other protocols such as HTTP (HTTP/1.0 versus HTTP/1.1) or even XML (<?xml version="1.0"?>). The designers of SOAP explicitly made this choice because experience had shown simple number-based versioning to be fragile. Further, across protocols, there were no consistent rules for determining what changes in major versus minor version numbers mean. Instead of going this way, SOAP leverages the capabilities of XML namespaces and defines the protocol version to be the URI of the SOAP envelope namespace. As a result, the only meaningful statement you can make about SOAP versions is that they are the same or different. It's no longer possible to talk about compatible versus incompatible changes to the protocol.

This approach gives Web service engines a choice of how to treat SOAP messages that have a version other than the one the engine is best suited for processing. Because an engine supporting a later version of SOAP will know all previous versions of the specification, it has options based on the namespace of the incoming SOAP message:

- If the message version is the same as any version the engine knows how to process, it can process the message.
- If the message version is recognized as older than any version the engine knows how to process, or older than the preferred version, it should generate a VersionMismatch fault and attempt to negotiate the protocol version with the client by sending information regarding the versions it can accept. SOAP 1.1 didn't specify how such information might be encoded, but SOAP 1.2 introduces the soapenv:Upgrade header for this purpose. (We'll describe it in detail when we cover faults.)
- If the message version is newer than any version the engine knows how to process (in other words, completely unrecognized), it must generate a VersionMismatch fault.
The simple versioning based on the namespace URI results in fairly flexible and accommodating behavior of Web service engines.
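In code, the version test reduces to a string comparison on the Envelope's namespace URI. A sketch (an addition to the text, using Python's standard-library XML parser; the two URIs are the actual SOAP 1.1 and SOAP 1.2 envelope namespaces):

```python
import xml.etree.ElementTree as ET

SOAP11 = "http://schemas.xmlsoap.org/soap/envelope/"
SOAP12 = "http://www.w3.org/2003/05/soap-envelope"

def soap_version(message):
    """Classify a message by its Envelope namespace URI."""
    root = ET.fromstring(message)
    # ElementTree reports qualified names as "{namespace-uri}localname"
    ns = root.tag.split("}")[0].lstrip("{")
    if ns == SOAP12:
        return "SOAP 1.2"
    if ns == SOAP11:
        return "SOAP 1.1"
    return "VersionMismatch"  # unrecognized: the engine must fault

msg = ('<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">'
       '<env:Body/></env:Envelope>')
print(soap_version(msg))  # SOAP 1.2
```

Note there is no "greater than" or "less than" here - exactly the point the article makes: namespace URIs are only ever equal or different.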
http://www.informit.com/articles/article.aspx?p=328640&seqNum=9
Asynchronous Programming

What happens if you have a lot of sockets that are waiting to read or write data? Asynchronous programming lets you write code that basically says, "Call my callback when you actually have something for me." Although this approach is used all the time in C, it's even nicer in Python because Python has first-class functions.

These days, there are many servers written asynchronously. nginx is a "simplified version" of Apache that is both very fast and highly concurrent. Squid, the popular open source Web proxy, is also written asynchronously. This makes a lot of sense if you think about what a Web proxy does. It spends all of its time managing a ton of sockets, funneling data between clients and servers.

Asynchronous programming starts with operating system APIs such as select, poll, kqueue, aio, and epoll. These APIs let you write code that basically says, "These are the file descriptors I'm working with. Which of them is ready for me to do some reading or writing?" In Python, libraries like the built-in asyncore module and the popular Twisted framework take these low-level APIs and orchestrate callback systems on top of them.

Let's look at an example of asynchronous code. First, the linear (non-asynchronous) code in Example 4.

def handle_request(request):
    data = talk_to_database()
    print "Processing request with data from database."

Re-written asynchronously, you end up with something like Example 5. (You can move use_data into a new top-level function after handle_request, but it's convenient to do it this way to maintain access to request via a closure.)

def handle_request(request):
    def use_data(data):
        print "Processing request with data from database."
    deferred = talk_to_database()
    deferred.addCallback(use_data)

Notice that the talk_to_database function no longer returns a value directly. Rather, it returns a deferred object to which you can attach callbacks. This is called "continuation passing style". Rather than waiting for a function to simply return, you must pass a callback detailing how to continue once the data is obtained. Because you must use continuation passing style anytime you call a function that might block, it soon permeates your codebase. This can be painful and prevents you from using any library that does blocking I/O unless it's written using continuation passing style. On the other hand, living in the asynchronous ghetto has its benefits. Aside from the clear concurrency benefits, the Twisted codebase is widely regarded as well-written code, and it provides implementations for most popular protocols.

Subroutines Versus Coroutines

In the beginning, there was the GOTO. It didn't take any parameters, and it was a one-way trip. A coroutine is like a subroutine, except it doesn't necessarily return. With subroutines, you can do things like:

f -> g -> h (return to g, return to f)

With coroutines, you can do things like:

f -> g -> h -> f

Coroutines can be used for simple cooperative multitasking. The Python Cookbook has a great recipe for coroutines based on generators. Example 6 is a simple version of it.

import itertools

def my_coro(name):
    count = 0
    while True:
        count += 1
        print "%s %s" % (name, count)
        yield

coros = [my_coro('coro1'), my_coro('coro2')]
for coro in itertools.cycle(coros):  # A round-robin scheduler :)
    coro.next()

# Produces:
#
# coro1 1
# coro2 1
# coro1 2
# coro2 2
# ...

Using generators to implement coroutines is definitely a cute hack. By the way, this same trick can be used in Twisted to alleviate some of the need to use callbacks everywhere. On the other hand, there are some limitations to this technique. Specifically, you can only call yield in the generator. What happens if my_coro calls some function f and f wants to yield? There are some workarounds, but the limitation is actually pretty core to Python. (Because Python isn't stackless, it can't support true continuations in the same way that Scheme can.)
I've written about this topic in detail on my blog.
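(A note beyond the original article: later Python versions addressed exactly this "only the generator itself can yield" limitation with PEP 380's yield from, which lets a generator delegate its yields to a sub-generator. In Python 3 syntax:)

```python
import itertools

def inner(name):
    # The sub-generator can now yield on behalf of its caller.
    for count in itertools.count(1):
        print("%s %s" % (name, count))
        yield count

def my_coro(name):
    # Delegation: every yield inside inner() surfaces through my_coro().
    yield from inner(name)

coros = [my_coro('coro1'), my_coro('coro2')]
for _, coro in zip(range(4), itertools.cycle(coros)):
    next(coro)
# Produces:
# coro1 1
# coro2 1
# coro1 2
# coro2 2
```

Each next() resumes the delegated inner() right where it left off, so the round-robin scheduling from Example 6 still works even though the yields happen two call levels down.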
http://www.drdobbs.com/tools/concurrency-and-python/206103078?pgno=3
Game Window Icon

Posted Wednesday, 18 December, 2013 - 12:11 by jagarciauceda

Hello everyone!! I have a problem. I want to change the icon that appears in my game window and put my own icon, but I don't know how to do it. I have searched for many hours in google, but I haven't found anything. Does anybody know how to do this? Thanks and Happy Christmas!!

Re: Game Window Icon

There are two ways to do this: I would strongly recommend the latter, as that will also change the icon of the executable file on the disk (on Windows). The Icon property is only necessary if you wish to change the icon at runtime.

Re: Game Window Icon

Hi!!! Thanks!!! you are very quick!! I have tried to do this but it only changes the "executable icon". The window icon doesn't disappear, it is like a virus!! ha ha. I don't have the Manifest like you (app.manifest), maybe that is it? ...and when I put "Console application" in the "Output type", it changes the console window's icon (great) but not my game window... Sad, very sad

Re: Game Window Icon

I was pretty sure that this would change all icons. My bad. The solution is still pretty simple: go to your Project Properties -> Resources -> Add Resource -> Add New Icon. Navigate to your icon, select it and click Add. Now all you need to do is set the Icon property of your GameWindow, like this:

this.Icon = MyNamespace.Properties.Resources.MyIcon;

Replace "MyNamespace" and "MyIcon" with the real namespace and icon name for your application and you are done!

Re: Game Window Icon

Works!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! XD I didn't have this line:

this.Icon = MyNamespace.Properties.Resources.MyIcon;

so, now I am super happy and my application looks better :D Many Thanks the Fiddler !!!!!!!!!!!!! Tonight I will sleep good!!!
http://www.opentk.com/node/3483
GatewayESP8266MQTTClient in Development Branch Hi, I am starting to do some testing with the GatewayESP8266MQTTClient in the Dev branch. How do we define our own nodeid's. In the past I would do it via begin() in void setup{}. Do we now put begin() in the presentation function? Mike I'm trying the ENC28J60 branch and am also wondering the same thing. The setup and main functions are empty? #define MY_NODE_ID xxx But for the gateway this is not needed as they always get id=0 @gmccarthy Unless you want to attach sensors to the gateway itself, you can leave presentation() and setup() empty. @hek Thank you. I should have mentioned I have the gateway working but I was wondering how to define the node id in the clients. In any case, I was coming back here to say I had figured it out. I have a temp sensor communicating with the gateway. Mike @hek Gateway and sensor node never get communicated even after configuring manual ID. Here is arduino debug Starting... find parent send: 1-1-255-255 s=255,c=3,t=7,pt=0,l=0,sg=0,st=bc: find parent send: 1-1-255-255 s=255,c=3,t=7,pt=0,l=0,sg=0,st=bc: find parent send: 1-1-255-255 s=255,c=3,t=7,pt=0,l=0,sg=0,st=bc: Here is the ESP8266 12E gateway debug 0;0;3;0;9;Starting... scandone f 0, ....scandone state: 0 -> 2 (b0) .state: 2 -> 3 (0) state: 3 -> 5 (10) add 0 aid 1 cnt connected with Ahmed, channel 1 ip:192.168.0.31,mask:255.255.255.0,gw:192.168.0.1 .IP: 192.168.0.31 0;0;3;0;9;gateway started, id=0, parent=0, distance=0 0;0;3;0;9;Attempting MQTT connection... 0;0;3;0;9;MQTT connected There seems to be a bug in incomingMQTT: replace for (str = strtok_r(topic, "/", &p); str && i < 5; with for (str = strtok_r(topic, "/", &p); str && i <= 5; -> Ok, created a PR on this. Will just let it run through Jenkins before merging. There may be other problems in the code. I switched to ethernet gateway and wrote my own controller in node-red (wip). 
@FotoFieber I switched to ethernet gateway and wrote my own controller in node-red I am attempting the same . For some reason though I cannot get the mysensors ESP8266Gateway to receive data from node-red. I configured a TCP node in node-red with the gateways IP on port 5003. When I connect the serial monitor I can see the client connection established, but when I send data from node-red I get nothing. I am able to telenet to the gateway and in the serial monitor I see the messages I type so I know the gateway can receive, but it does not receive from node-red. How did you get node-red and the gateway to exchange messages? Thank you, Mike @Mike-Cayouette What kind of tcp node do you use? I use a tcp request node and inject a blank on startup to get the connection opened. - ahmedadelhosni last edited by ahmedadelhosni Hello, This reply is related to another topic where I wasn't able to send my buffer from my mqtt gateway to another sensor node. After spending all day today debugging the issue, I was able to find two bugs and to send the buffer correctly. This is the topic which I subscribe to #define MY_MQTT_SUBSCRIBE_TOPIC_PREFIX "mygatewayin" This is how I send my mqtt pattern using MQTTlens or Mosquitto to be published to the gateway. My message contains 1 or 0 to turn ON or OFF a relay in my sensor node with id 4, child id 19, set, no ack, V_LIGHT mygatewayin/4/19/1/0/2 The first problem which I faced is that my mqtt buffer wasn't evaluated in the callback function incomingMQTT() found in MyGatewayTransportMQTTClient.cpp After checking the case switch I found out that the rest of the case switches doesn't continue because the code returns due to the following line if (strcmp_P(str, MY_MQTT_SUBSCRIBE_TOPIC_PREFIX) != 0) { return; } According to The strcmp_PF() function returns an integer less than, equal to, or greater than zero if s1 is found, respectively, to be less than, to match, or be greater than s2. 
The contents of RAMPZ SFR are undefined when the function returns.

Actually I haven't used this function before, but according to the description it will return 0 if they are identical, so using != should be right; but in fact it was solved when I replaced it with == instead of !=. Finally I was able to evaluate the buffer and the case switches continued.

But this still didn't solve the problem, and the buffer wasn't sent to the sensor node id 4. After lots of printing functions for debugging, I found that the following is printed in serial monitoring:

send: 0-0-0-4 s=19,c=1,t=2,pt=0,l=1,sg=0,st=fail:1

While when I use the Serial gateway, which has been working well till now, the following is printed instead:

send: 0-0-4-4 s=19,c=1,t=2,pt=0,l=1,sg=0,st=ok:1

Thus I found that the difference is in 0-0-X-4. Searching further in the code, I found that the debug statement for this line is in MyTransport.cpp, in the function transportSendWrite(uint8_t to, MyMessage &message):

debug(PSTR("send: %d-%d-%d-%d s=%d,c=%d,t=%d,pt=%d,l=%d,sg=%d,st=%s:%s\n"), message.sender, message.last, to, message.destination, message.sensor, mGetCommand(message), message.type, mGetPayloadType(message), mGetLength(message), mGetSigned(message), to==BROADCAST_ADDRESS ? "bc" : (ok ? "ok":"fail"), message.getString(_convBuf));

So the difference here is that the variable to is set as 0 in the case of the MQTT gateway, while it is correctly set as 4 using the Serial gateway. So finally this is what I have reached. In MyTransport.cpp, in the function boolean transportSendRoute(MyMessage &message):

#if !defined(MY_REPEATER_FEATURE)
// None repeating node... We can only send to our parent
ok = transportSendWrite(_nc.parentNodeId, message);
#endif

The above is called because MY_REPEATER_FEATURE is not defined, thus the MQTT gateway always sends to node 0 instead of the required node id. I tried to define MY_REPEATER_FEATURE, but I found big memory usage and the software went crazy. Also in MySensor.h it is not mentioned to be defined:

#if defined(MY_GATEWAY_MQTT_CLIENT)
... some code ...
... some code ...
#elif defined(MY_GATEWAY_FEATURE)
... some code ...
#if defined(MY_RADIO_FEATURE)
// We assume that a gateway having a radio also should act as repeater
#define MY_REPEATER_FEATURE
#endif

What I did now as a workaround, because it is 2:40 AM here and I need to sleep, is the following. In MyTransport.cpp:

#if !defined(MY_REPEATER_FEATURE)
#if defined(MY_GATEWAY_MQTT_CLIENT)
ok = transportSendWrite(message.destination, message);
#else
// None repeating node... We can only send to our parent
ok = transportSendWrite(_nc.parentNodeId, message);
#endif
#else

Please check whether this will solve the problem or not. Please correct me if I was wrong; it's my first time looking around in the library today. Thanks.

@ahmedadelhosni said:

MY_REPEATER_FEATURE

Good catch, I'll do some tests and check in something.

- ahmedadelhosni last edited by
https://forum.mysensors.org/topic/2193/gatewayesp8266mqttclient-in-development-branch/11
-- | Perform an IO action, and place its result in a 'TMVar'. See -- also "Control.Concurrent.MVarIO" for an 'MVar' version. module Control.Concurrent.STM.TMVarIO (run) where import Control.Concurrent import Control.Concurrent.STM -- | @'run' action@ returns a 'TMVar' immediately. The result of -- @action@ will be placed in said 'TMVar'. If the 'TMVar' is full -- when @action@ completes, the return value is lost (i.e. we do not -- wait for an empty 'TMVar'). run :: IO a -> IO (TMVar a) run action = newEmptyTMVarIO >>= \tm -> forkIO (run' action tm) >> return tm run' :: IO a -> TMVar a -> IO () run' action tm = action >>= \ret -> atomically (tryPutTMVar tm ret) >> return ()
http://hackage.haskell.org/package/stm-orelse-io-0.1/docs/src/Control-Concurrent-STM-TMVarIO.html
CC-MAIN-2014-41
refinedweb
105
61.73
Supreme Court Judgments

12/02/1965

SIKRI, S.M.

SIKRI, S.M., GAJENDRAGADKAR, P.B. (CJ), HIDAYATULLAH, M., SHAH, J.C.

CITATION: 1966 AIR 142; 1965 SCR (3) 71

CITATOR INFO: RF 1975 SC1652 (12)

ACT: Constitution of India, Arts. 226 and 286(1)(b)--Questions of fact to determine whether sale in the course of import and therefore whether State sales tax leviable--Whether should be decided in writ proceedings.

HEADNOTE: The Sales Tax Officer rejected the assessee's claim that he was not liable to be assessed to sales tax in respect of certain sales of cement imported from Pakistan because (i) he was not a dealer within the meaning of s. 2(f) of the Rajasthan Act 29 of 1954, and (ii) the sales in question were in the course of import within the meaning of Art. 286(1)(b) of the Constitution. In the order of assessment, there was no discussion of the question of applicability of Art. 286(1)(b). The assessee therefore filed a petition under Art. 226 challenging the assessment order on the grounds taken before the Sales Tax Officer and also claiming that the latter had failed to consider the impact and effect of Art. 286(1)(b) on the facts of the case. The State objected to the maintainability of the petition on the ground that the petitioner should have availed of the alternative remedy of appeal provided under the Rajasthan Sales Tax Act, but the High Court overruled this objection for the reason, inter alia, that the petitioner had challenged the appellant's jurisdiction to assess him to sales tax in view of the provisions of Art. 286(1)(b). Upon dealing with the merits of the case, the High Court held that on the facts of the case it was clear that the sales in question took place when the goods were in the course of import and, therefore, by virtue of Art. 286(1)(b), were not liable to sales tax. The court therefore quashed the order of assessment.
On appeal to this Court, it was contended on behalf of the State that the High Court should have refused to entertain the petition as many of the crucial facts had not been brought on the record by the respondent, and furthermore, it was not established that the cement was sold in the course of import into India.

HELD: The High Court should not have decided the disputed questions of fact, but should merely have quashed the assessment order on the ground that the Sales Tax Officer had not dealt with the question raised before him, and remanded the case. [77 B]

OBITER: The High Court should have declined to entertain the petition, as in this case there were no exceptional circumstances to warrant the exercise of the extraordinary jurisdiction under Art. 226. It was not the object of Art. 226 to convert High Courts into original or appellate assessing authorities whenever the assessee chose to attack an assessment order on the ground that a sale was made in the course of import and was therefore exempt from tax. The fact that an assessee might have to deposit sales tax when filing an appeal could not in every case justify his bypassing the remedies provided by the Sales Tax Act. There must be something more in a case to warrant the entertainment of a petition under Art. 226, something going to the root of the jurisdiction of the Sales Tax Officer, something to show that it would be a case of palpable injustice to the assessee to force him to adopt the remedies provided by the Act. [75 G, H]

A.V. Venkateswaran v. Ramchand Sobhraj Wadhwani, A.I.R. 1961 S.C. 1506, referred to.

CIVIL APPELLATE JURISDICTION: Civil Appeal No. 652 of 1964. Appeal from the judgment and order dated May 7, 1963 of the Rajasthan High Court in D.B. Civil Misc. Writ Petition No. 157 of 1962.

G.C. Kasliwal, Advocate-General for Rajasthan. K.K. Jain, for the appellants. M.D. Bhargava and B.D. Sharma, for the respondent.

The Judgment of the Court was delivered by Sikri, J.
This appeal by certificate of fitness granted by the Rajasthan High Court is directed against its judgment dated May 7, 1963, quashing the order of assessment dated March 5, 1962, made by the Sales Tax Officer, Jodhpur City, in so far as it levied sales tax on the turnover of Rs. 23,92,252.75 np. The respondent, M/s Shiv Ratan G. Mohatta, which is a partnership firm having its head office at Jodhpur, hereinafter referred to as the assessee, claimed before the Sales Tax Officer that they were not liable to be assessed to sales tax in respect of the above turnover because, firstly, the assessee was not a dealer within s. 2(f) of the Rajasthan Sales Tax Act (Rajasthan Act XXIX of 1954) with respect to this turnover, and secondly, because the sales were in the course of import within Art. 286(1)(b) of the Constitution. Although the Sales Tax Officer set out the facts of the case relating to the second ground, he deemed it sufficient to assess this turnover on the ground that the assessee was a dealer within s. 2(f) of the Rajasthan Sales Tax Act, without adverting to the second ground. The facts on which the assessee relied upon to substantiate his second ground were these. The Zeal-Pak Cement Factory, Hyderabad (Pakistan), hereinafter called the Pakistan Factory, manufactured cement in Pakistan. The Pakistan Industrial Development Corporation, hereinafter called the Pakistan Corporation, entered into an agreement with M/s Milkhiram and Sons (P) Ltd., Bombay, for the export of cement manufactured in Pakistan to India. The State Trading Corporation of India entered into an agreement with the said M/s Milkhiram & Sons for the purchase of, inter alia, 35,000 long tons of cement to be delivered to it F.O.R. Khokhropar in Pakistan, on the border of Rajasthan. The State Trading Corporation appointed the assessee as its agent, broadly speaking, to look after the import and the sale of the imported cement.
The modus operandi adopted by the assessee for the sale of the cement was as follows. It would obtain from a buyer in Rajasthan an order under an agreement, a sample of which is on the record. The agreement fixed the price and the terms of supply. By one clause the assessee disclaimed any responsibility regarding delay in dispatch and non-receipt of consignment or any loss, damage or shortage in transit due to any reason whatsoever. The agreement further provided that "all claims for loss, damage or shortage, etc., during transit will lie with the carriers and our payments are not to be delayed on any such account whatsoever." It was further provided in the agreement that the dues were payable in advance in full, or 90% in advance and the balance within 15 days of billing plus sales tax and other local taxes. Clause 6 of the agreement is in the following terms: "A Post Card Loading Advice will be sent to you by the Factory as soon as the wagons are loaded in respect of your orders, and it will be your responsibility to arrange for unloading the consignment timely according to Railway Rules. Ourselves and the suppliers will not be responsible for demurrage etc. on any account whatsoever. If the consignment reaches earlier than the Railway Receipt, it is the responsibility of buyer to arrange for and get the delivery timely against indemnity bond etc. All the Railway Receipts etc. will be sent by registered post by the Suppliers in Pakistan." After this agreement had been entered into, the assessee would send despatch instructions to the Pakistan Corporation. These instructions indicated the name of the buyer-consignee and the destination, and provided that the railway receipt and D/A should be sent by registered post to the consignee. These instructions were sent with a covering letter to the Pakistan Corporation requesting that these instructions be passed on to the Pakistan Factory for necessary action.
The Pakistan Corporation would then forward these despatch instructions to the Zeal-Pak Cement Factory. Later, the Pakistan Factory would advise the consignee that they had "consigned to the State Bank of India, Karachi, the particular quantity as per enclosed railway receipt and invoice." The State Bank of India, Karachi, would endorse the railway receipt in favour of the consignee and send it to him by post. The consignee would take delivery either by presentation of the railway receipt or by giving indemnity bond to the Station Master undertaking to deliver the railway receipt on its receipt. The Sales Tax Officer did set out most of these facts and the contentions of the assessee in the assessment order but disposed of the case with the following observations: "All the above went to prove that the assessee was an Agent of the non-resident dealer for the supplies in the State. The Assessee was an importer and hence submitted an application to the Custom Authority for the same. It booked orders and issued sale bills. Under the terms of an agreement of appointment of Agent, sale was to be effected by the Agent. Again while obtaining orders from the buyers under condition 5 Sales Tax was to be paid by the buyers to the assessee. Thus to all intents and purposes the assessee is a dealer who is liable for payment of Sales Tax to the State. They have rightly collected this amount from the buying dealers and retained with them. This should come to the Government." We can find no discussion in the order on the question raised by the assessee that the sales were made in the course of import within Art. 286(1)(b) of the Constitution. The assessee then filed a petition under Art. 226 of the Constitution and raised two contentions before the High Court, namely, (1) that the Sales Tax Officer failed to consider the impact and the effect of Art.
286(1)(b) on the facts of the case, and (2) that the Sales Tax Officer illegally held that the petitioner for all intents and purposes was a dealer liable to pay sales tax. The State raised an objection to the maintainability of the petition on the ground that the petitioner should have availed of the alternative remedy of appeal provided under the Rajasthan Sales Tax Act, but the High Court overruled this objection on the ground that "the contention of the petitioner is that in view of Art. 286(1)(b) of the Constitution, the respondent had no jurisdiction to assess the petitioner to pay the sales tax on the sale of goods in the course of the import into the territory of India", and that even if there was no total lack of jurisdiction in assessing the petitioner to pay sales tax, the principle enunciated in A.V. Venkateswaran v. Ramchand Sobhraj Wadhwani (1) applied, and it was a case which should not be dismissed in limine. Then the High Court proceeded to deal with the merits of the case. It first dealt with the question whether the petitioner was a dealer within the meaning of s. 2(f) of the Rajasthan Sales Tax Act, and came to the conclusion that the petitioner must be deemed to be a dealer within the said s. 2(f). Then it proceeded to deal with the question whether the sales had taken place in the course of import. The High Court held that in the circumstances of the case these sales had not occasioned the movement of goods but it was the first sale made by M/s Milkhiram and Sons to the State Trading Corporation which had occasioned the movement of goods. Secondly, it held that in the circumstances of the case "the property in goods after the delivery had been taken by the petitioner on behalf of the State Trading Corporation passed to the State Trading Corporation and simultaneously to the ultimate buyers. Thus the property in the goods passed to the ultimate buyers in Rajasthan when the goods had not reached the territory of India and were in course of import. In view of the authority of their Lordships of the Supreme Court in J. V. Gokal and Co. (Private) Ltd. v. The Assistant Collector of Sales Tax (Inspection) & Others (1), it must be taken that the sale took place when the goods were in the course of the import and they should not be liable to the payment of the Sales Tax by virtue of Art. 286(1)(b)."

(1) [1962] 1 S.C.R. 753.

In the result, the High Court quashed the order of assessment in so far as it sought to levy tax on the turnover in dispute. The Sales Tax Officer, Jodhpur, and the State of Rajasthan having obtained certificate of fitness from the High Court filed this appeal. The learned Advocate-General has raised two points before us: First, on the facts of this case the High Court should have refused to entertain the petition, and secondly, that it has not been established that the cement was sold in the course of import within Art. 286(1)(b). Regarding the first point, he urges that an appeal lay against the order of the Sales Tax Officer; no question of the validity of the Sales Tax Act was involved and the taxability of the turnover depended on where the property passed in each consignment. This involved consideration of various facts and, according to him, the crucial facts had not been brought on the record by the assessee, on whom lay the onus to establish that the sales were in the course of import. He says that the assessee should have proved that each railway receipt was endorsed by the State Bank of India, Karachi, to the buyer before each consignment crossed the frontier. We are of the opinion that the High Court should have declined to entertain the petition. No exceptional circumstances exist in this case to warrant the exercise of the extraordinary jurisdiction under Art. 226. It was not the object of Art. 226 to convert High Courts into original or appellate assessing authorities.

(1) [1960] 2 S.C.R. 852.
Regarding the second point, the learned Advocate-General argues that the onus was on the assessee to bring his case within Art. 286(1)(b) of the Constitution in respect of the sales to the various consignees. He says that there is no evidence on record as to when the State Bank of India endorsed the railway receipt in favour of the ultimate buyer in respect of each consignment, and without this evidence it cannot be said that the title to the goods passed to the ultimate buyer at Khokhropar or in the course of import. He further urges that it would have to be investigated in each case as to when the State Bank endorsed the railway receipt and when the goods crossed the customs barrier. He says that it is not contested that the ultimate buyer took delivery of goods without producing the railway receipt by virtue of special arrangements entered into with the railway, and according to him, it is only when the delivery was taken by the buyer in Rajasthan that the title passed. By that time, according to him, the course of import had ceased. We do not think it necessary to consider the various arguments addressed by the learned Advocate-General or the soundness of the view of the High Court on this point, because we are of the opinion that the High Court should not have gone into this question on the facts of this case. The Sales Tax Officer had not dealt with the question at all, and it is not the function of the High Court under Art. 226, in taxing matters, to constitute itself into an original authority or an appellate authority to determine questions relating to the taxability of a particular turnover. The proper order in the circumstances of this case would have been to quash the order of assessment and send the case back to the Sales Tax Officer to dispose of it according to law. Under the Rajasthan Sales Tax Act, and other Sales Tax Acts, the facts have to be found by the assessing authorities.
If any facts are not found by the Sales Tax Officer, they would be found by the appellate authority, and it is not the function of a High Court to find facts. The High Court should not encourage the tendency on the part of assessees to rush to the High Court after an assessment order is made. It is only in very exceptional circumstances that the High Court should entertain petitions under Art. 226 of the Constitution in respect of taxing matters after an assessment order has been made. It is true, as said by this Court in A.V. Venkateswaran v. Ramchand Sobhraj Wadhwani (1), that it would not be desirable to lay down inflexible rules which should be applied with rigidity in every case, but even so, when the question of taxability depends upon a precise determination of facts and some of the facts are in dispute or missing, the High Court should decline to decide such questions. It is true that at times the assessee alleges some additional facts not found in the assessment order and the State, after a fresh investigation, admits these facts, but in a petition under Art. 226 where the prayer is for quashing an assessment order, the High Court is necessarily confined to the facts as stated in the order or appearing on the record of the case. In this case, as already indicated, we have come to the conclusion that the High Court should not have decided disputed questions of fact, but should merely have quashed the assessment order on the ground that the Sales Tax Officer had not dealt with the question raised before him and remanded the case. Accordingly, we allow the appeal, set aside the order of the High Court, quash the assessment order in so far as it relates to the turnover of Rs. 23,92,252.75 np, and remit the case to the Sales Tax Officer to decide the case in accordance with law.

(1) [1962] 1 S.C.R. 753.
He will find all the facts necessary for the determination of the question and come to an independent conclusion untrammeled by the views expressed by the High Court. We may make it clear that we are not expressing any view whether the finding of the High Court that the property in the goods passed simultaneously at Khokhropar to the State Trading Corporation and the ultimate buyer is correct or not. There would be no order as to costs in this appeal.

Appeal allowed.
http://www.advocatekhoj.com/library/judgments/index.php?go=1965/february/17.php
CC-MAIN-2018-17
refinedweb
3,161
66.57
I have tried that with no joy. My current code is:

[EPiServer.PlugIn.ScheduledPlugIn(DisplayName = "Reel Content Service")]
public class ReelContentTask
{
    static ReelContentTask()
    {
        if (PrincipalInfo.CurrentPrincipal.Identity.Name == string.Empty)
        {
            PrincipalInfo.CurrentPrincipal = PrincipalInfo.CreatePrincipal("Admin");
        }
    }

    public static string Execute()
    {
        return "test";
    }
}

That's it.

Man, I've been pulling my hair out over this. We have a shared dev database, and one of the developers had his clock set faster than mine, so his machine was trying to execute the task. The annoying thing is, I'd been to that page and followed the steps for the "Object reference not set to an instance of an object" issue. Moral of the story: read the whole article. Thanks again.

Hi, I have created a scheduled task to parse an XML file and set up some pages from it. This all worked fine when run manually; however, when I automate it, it fails with "Object reference not set to an instance of an object". I'm not sure how to debug this, but as a test I removed all functionality from the Execute method and just returned the string "test", and this fails in the same way when triggered automatically. Does anybody have any idea what the issue might be, and how I could debug it? Many thanks, Matt
https://world.optimizely.com/forum/legacy-forums/Episerver-CMS-5-R2/Thread-Container/2009/10/Scheduled-task-errors-when-run-automaticaly/
CC-MAIN-2022-21
refinedweb
214
65.32
All opinions expressed here constitute my (Jeremy D. Miller's) personal opinion, and do not necessarily represent the opinion of any other organization or person, including (but not limited to) my fellow employees, my employer, its clients or their agents.

A couple of weeks ago I wrote about using the Inversion of Control (IoC) principle to create classes that are easier to unit test. One major thing I left out of that post was using the Dependency Injection pattern to loosely couple classes from their dependencies.

My preference is to use the "Constructor Injection" flavor of DI. The mechanism here is pretty simple; just push the dependencies in through the constructor function:

private IView _view;
private IService _service;

public Presenter(IView view, IService service)
{
    _view = view;
    _service = service;
}

public object CreateView(Model model) {...}
public void Close() {...}
public void Save() {...}

[TestFixture]
public class PresenterTestFixture
{
    private IMock _serviceMock;
    private IMock _viewMock;
    private Presenter _presenter;

    [SetUp]
    public void SetUp()
    {
        // Create the dynamic mock classes for IService and IView
        _serviceMock = new DynamicMock(typeof(IService));
        _viewMock = new DynamicMock(typeof(IView));
        // Create an instance of the Presenter class
        ...
    }
}

The "Setter Injection" flavor exposes the dependencies as writable properties instead:

public Presenter()
{
    _service = new Service();
}

public IView View
{
    get { return _view; }
    set { _view = value; }
}

public IService Service
{
    get { return _service; }
    set { _service = value; }
}

A dependency can also be created lazily inside the getter:

public IView View
{
    get
    {
        if (_view == null) { _view = new View(); }
        return _view;
    }
}

or fetched from a container:

// Call to StructureMap to fetch the default configurations of IView and IService
_view = (IView) StructureMap.ObjectFactory.GetInstance(typeof(IView));
_service = (IService) StructureMap.ObjectFactory.GetInstance(typeof(IService));

IV. The next (and last) post in the whole Inversion of Control chain will look at using a Dependency Injection tool to "wire up" an application.
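The constructor-injection mechanism described above is not C#-specific. As a language-neutral aside (all class names here are illustrative, not from the post), the same shape in C++: an interface, a hand-rolled test double, and a presenter that receives its dependency instead of constructing it.

```cpp
#include <memory>
#include <string>

// The abstraction the presenter depends on.
struct IService {
    virtual ~IService() = default;
    virtual std::string fetch() = 0;
};

// A hand-rolled test double, playing the role NMock's DynamicMock plays above.
struct StubService : IService {
    std::string fetch() override { return "stubbed"; }
};

// Constructor Injection: the dependency is pushed in from outside,
// so a test can substitute StubService without touching Presenter.
class Presenter {
    std::shared_ptr<IService> service_;
public:
    explicit Presenter(std::shared_ptr<IService> service)
        : service_(std::move(service)) {}

    std::string render() const { return "view: " + service_->fetch(); }
};
```

A test then reads Presenter p(std::make_shared<StubService>()); with no mocking framework required.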
I haven't yet finished reading your post, but I wanted to let you know that a) I like it so far, and b) you have provided me with my new motto: "Don't blow it off just because it seems trivial". That seems to apply to most of the most effective concepts in software development and with how they are received in general.

Holy crap. Can we please not go shit-crazy with jargon? I come here because I've never heard of "Dependency Injection"; I thought it was some wicked cool new pattern. Come to find out that this "Dependency Injection" isn't a fucken pattern at all--it's standard best practices when it comes to method signatures. Your methods should take base objects/interfaces as input and return high-level objects. If your method takes a List<int> as a parameter, your callers are restricted to that specific collection. However, if you take an IEnumerable<int>, your callers can choose to use one of the many implementations of the interface (I count 15 in mscorlib alone). If you return an IEnumerable, your callers can only enumerate through the result set. But if you return a List<int>, your callers can treat it as a List<int>, an ICollection<int>, an IEnumerable<int> or just as an IEnumerable. It's amazing how brilliant I am; I've been doing this ever since I read CLR via C#. I had no idea I was implementing the Dependency Injection pattern. Lurl. I wonder what other patterns I'm implementing...

Mcgurk, a pattern is just that, a solution that reoccurs. It's just something you do.
Jeremy, I noticed StructureMap requires metadata attributes to identify classes and methods as pluggable / interfaces / constructors / factory... Then you have to write the config files to link everything together, but you can only inject classes that you have written using StructureMap attributes... Spring is much better: all configuration is external in the config and will work with any class / library written by anyone. StructureMap probably is faster, but not as flexible as Spring. How far do you agree?

Afif, I don't agree with you at all ;-) You've misstated some little things. You do not have to use the attributes at all if you don't want to. In some very common cases the attributes can take the xml configuration down to near zero, and that's cool in my book. If you don't want to use attributes at all, everything can be done via Xml, assuming you have a high tolerance for coding in Xml. 2.0 did make the Xml configuration quite a bit tighter with less duplication. As of 2.0, you can wire up dependencies in code with a Fluent interface as well. Can Spring.Net do that? Didn't think so. Honestly, I don't like Spring.Net's approach to IoC at all, but I'm too biased to make a judgement. StructureMap (and Castle Windsor) does auto wiring vastly better than Spring.Net because it's type safe. No wiring stuff together by string values. Use Types first.

Thanks for sharing this information. I can still remember my favorite professor way back in college, whose favorite design technique was Constructor Injection. May I know what sets the difference between Setter Injection and Constructor Injection?
http://codebetter.com/blogs/jeremy.miller/archive/2005/10/06/132825.aspx
crawl-002
refinedweb
1,015
53.81
In-Depth

String functions, integer functions ... booorrring! Tuples in C# 7.0 -- let's explore what makes them infinitely more exciting.

String functions return a string. Integer functions return an integer. Ugh, so boring. Fortunately, Microsoft knows all about boring, and it decided to do something about it: tuples. With the new tuple technology included in C# 7.0 and Visual Studio 2017, you can craft functions that return more than one value.

Of course, C# has always allowed you to return more than one value from a single function. The most natural way to do this was to build a structure containing the distinct elements, and return an instance of that structure:

public struct MixOfTens
{
    public string TextPart;
    public int NumPart;
}

public MixOfTens GiveMeTen()
{
    MixOfTens result;
    result.TextPart = "Ten";
    result.NumPart = 10;
    return result;
}

This structure-based approach is just that: structured. If you wanted to return those values in a way that was a little more on-the-fly, you could employ out-parameters, one for each value coming back from the function:

public void GiveMeTen(out string textPart, out int numPart)
{
    textPart = "Ten";
    numPart = 10;
}

Tuples provide a third way of returning multiple function values, one that combines aspects of the structure-based and on-the-fly options already mentioned. With tuples, you define the values to be returned as part of the function declaration, a bit like the out-parameter variation. But to the caller, the data coming back appears as an instance with distinct value members, just like the structure-based approach. Beyond being a new way of doing something you could already do before, tuples offer a few other advantages over the traditional multi-value methods. Behind the scenes, tuples are implemented using standard generic objects.
For example, the string-int tuple mentioned earlier could be declared using the generic System.Tuple class:

public Tuple<string, int> GiveMeTen()
{
    return new Tuple<string, int>("Ten", 10);
}

While this code works just fine, the new tuple syntax included in the latest release of C# provides a more straightforward style that's easier on the keyboard and easier to manage in code.

A word of warning about the examples in this article: All code was developed using Visual Studio 2017 Release Candidate. There is a minuscule-yet-still-possible chance that changes might occur before the final release of the product.

Although Visual Studio now includes support for tuples, you still need to download and install the tuple library as part of your project or solution. If you just want to try out tuples, start by creating a C# Console App. Save your project to disk, and then access the Visual Studio Tools | NuGet Package Manager | Manage NuGet Packages for Solution menu command. When the NuGet Package Manager window appears (see Figure 1), select the Browse tab, enter Tuple in the search field and select the System.ValueTuple item from the results. Select your project from the checklist of projects that magically appears and, finally, click the Install button. Once you assent to the various terms and conditions, Visual Studio adds the System.ValueTuple assembly to your project's references (see Figure 2).

Now you're ready to build a tuple-returning function. The syntax is quite similar to a standard function that returns a single value, but the multiple return types appear in a set of parentheses, both in the function declaration, and as part of the return statement:

public (string, int) GiveMeTen()
{
    return ("Ten", 10);
}

The easiest way to intercept this result is through an implicit variable, one declared using the C# var keyword:

var result = GiveMeTen();

The returned instance includes a member field for each strongly typed element of the tuple.
In the GiveMeTen example, the first member is a string containing "Ten," and the second is an integer containing 10. By default, these members have positional names: Item1, Item2 and so on:

// ----- Writes out the string "Ten":
Console.WriteLine(result.Item1);
// ----- Writes out the integer 10:
Console.WriteLine(result.Item2);

Of course, default names are boring, and as mentioned earlier, we're done with boring return values. You can include new default member names as part of the return type declaration for your tuple function:

public (string TextPart, int NumPart) GiveMeTen()
{
    return ("Ten", 10);
}

The data returned from this updated code still includes Item1 and Item2 members, but you can also use the custom TextPart and NumPart field names:

// ----- Writes out the string "Ten":
Console.WriteLine(result.TextPart);
// ----- Writes out the integer 10:
Console.WriteLine(result.NumPart);

Did you notice how I called the TextPart and NumPart monikers "default member names"? That's because the names are suggestions only. When you capture the return data from the function, you can craft your own compatible tuple variable, complete with locally relevant member names:

(string Literature, int Mathematics) result = GiveMeTen();
Console.WriteLine(result.Literature);
Console.WriteLine(result.Mathematics);

This is great and all, but part of the promise of tuples is the ability to return multiple values from a function, not just a last-minute object with multiple members. Fortunately, C# lets you capture the tuple members as independent variables. The syntax is similar to local tuple variable declaration, but it leaves out the name of the target merged object:

(string simpleString, int simpleNum) = GiveMeTen();
Console.WriteLine(simpleString);
Console.WriteLine(simpleNum);

Returning function values isn't the only reason tuples exist. As mentioned earlier, tuples are standard instances of generic types, a technology that's been part of the C# language since 2005.
This means that you can create instances of tuple variables alongside your regular boring variables, modify them, compare them and pass them to methods as needed:

(string Name, int Age) person1 = ("Alice", 32);
(string Name, int Age) person2 = ("Bob", 36);
IntroducePeople(person1, person2);

When crafting methods that accept tuples as arguments, replace the standard data type with a tuple-type:

public static void IntroducePeople((string, int) friend1, (string, int) friend2)
{
    // ----- Because the parameters did not specify custom
    //       member names, use the Item1 and Item2 defaults.
    if (friend1.Item1 == friend2.Item1) { }
}

While you can compare tuple members to like-typed variables and expressions using standard C# comparison operators, comparing entire tuples with each other requires use of the Equals method, included in every tuple instance:

if (friend1.Equals(friend2) == true)...

Another fun use of tuples, if you consider data manipulation fun, is to return them from a LINQ query. In C#, when you want to return objects with a subset of members from the original objects, you must create an anonymous or named instance using the new keyword:

List<Employee> allEmployees = new List<Employee>()
{
    new Employee { ID = 1L, Name = "Fred", Salary = 50000M },
    new Employee { ID = 2L, Name = "Sally", Salary = 60000M },
    new Employee { ID = 3L, Name = "George", Salary = 70000M }
};

var wellPaid = from oneEmployee in allEmployees
               where oneEmployee.Salary > 50000M
               select new { EmpName = oneEmployee.Name, Income = oneEmployee.Salary };

Because tuples are object instances with named members, you can replace the class-returning LINQ query with one that returns tuples:

var wellPaid = from oneEmployee in allEmployees
               where oneEmployee.Salary > 50000M
               orderby oneEmployee.Salary descending
               select (EmpName: oneEmployee.Name, Income: oneEmployee.Salary);

var highestPaid = wellPaid.First().EmpName;

The examples presented so far have included two members per tuple, but you can add more
comma-delimited items as needed. But be warned: If you create a tuple with 27 members, you might be trying to compensate for a fear of well-crafted data structures.

At this point, you're thinking, "Tuples are amazing. Down with classes! Up with tuples!" Hold on there, Mr. Reverse-Luddite. Tuples are extremely useful for lightly coded, data-only objects that don't require much in the way of processing logic. But they lack the technological breadth that true classes and structures offer. Tuples have no methods, no protected or internal members, and no means of implementing interfaces.

And yet, tuples are useful, just as variables and class instances are useful. In fact, the C-language class and structure definitions from which C# gets its core types are really just tuples with permanent names applied to each member. If you think about it that way, then tuples have always been a part of your C# experience. They've simply become a bit more extroverted. If the logic of your code requires access to multi-value constructs without the overhead of class and structure declarations, tuples might be just the thing.

About the Author
Tim Patrick has spent more than thirty years as a software architect and developer. His two most recent books on .NET development -- Start-to-Finish Visual C# 2015, and Start-to-Finish Visual Basic 2015 -- are available from. He blogs regularly
https://visualstudiomagazine.com/articles/2017/01/01/tuples-csharp-7.aspx?admgarea=features
Agenda
See also: IRC log

<Gregory> GJR: notes that PF has invited simon pieters to join to expedite the process

"Discuss with PFWG role attribute vs aria attribute", on Michael Cooper

<DanC> I updated actions/8 keeping Action 8 open pending more talk with Michael Cooper

"Talk to WebAPI and WAF WGs about their role in offline API stuff and how they work with and contribute to the discussion", on chaals

<Gregory> last PF WG meeting (MC's action discussed) - member confidential archive:

ChrisW will bring up Action 13 with HCG

<DanC> updated actions/13 reassigned to ChrisW, due 13 Dec

ChrisWilson: what prompted this action?

DanC: yeah, Saturday f2f discussion about offline Web apps

"coordinate comparative tests using competing ARIA proposals"

Gregory - ran into problem with chair of PF group
... scribe: they think it's an "undue burden"
... there's a push to get it resolved ...
... tomorrow morning there is a meeting with zcorpan (Simon Pieters) to discuss adoption of his ARIA proposal ...

<DanC> (meeting tomorrow? a pointer to mail from whoever is running that meeting would be handy)
<DanC> (er... are we talking about aria-role in substance here or just updating the action status?)

Gregory: OK to [declare] a role without declaring a namespace (they agreed to this compromise)
... have been working with XHTML2 people ...

<zcorpan> DanC, Gregory: now need to broker with developers ...
... I can report back about this [after the meeting tomorrow]

<DanC> (which we agreed?)

ChrisWilson: DanC you noted that you wanted examples

DanC: yep

<DanC> (I got the pointers I needed.)
<DanC> (3 meetings GR just mentioned... pointers please)
<Gregory> friday 30 november 2007 - meeting with simon pieters
<DanC> I marked ACTION-23 withdrawn

<scribe> ACTION: Gregory to report back after 11-30 meeting on ARIA syntax [recorded in]
<trackbot-ng> Created ACTION-30 - Report back after 11-30 meeting on ARIA syntax [on Gregory Rosmaita - due 2007-12-06].
DanC: W3C process requires 7-day notice for meetings

Gregory: this is an attempt to work with the vendors who are supportive of ARIA

[discussion of getting "PF ducks in a row" and "mutual reality check"]

<Zakim> DanC, you wanted to note regrets for next week 6 Dec

DanC notes he won't be here next week; ChrisWilson will chair again

<DanC> next meeting: 6 Dec, Chris W to chair

[moving on to discussion of Pending Review AIs]

<DanC> [homework] summary of the video (and audio) codec discussion

ChrisWilson: this seems complete[d]

DanC will be at the Video Workshop

above is posting from Dave Singer

<DanC> ACTION: Dan see that Singer's summary makes it to the SJC/Dec W3C video workshop, possibly by confirming Singer's attendance [recorded in]
<trackbot-ng> Created ACTION-31 - See that Singer's summary makes it to the SJC/Dec W3C video workshop, possibly by confirming Singer's attendance [on Dan Connolly - due 2007-12-06].

<Shawn> fyi: Dave Singer's email was tacked on to the issue for video-codecs:

<Zakim> MikeSmith, you wanted to comment on Karl's proposal

<DanC> ok by me, action done... now what next... a note and a wiki topic look OK to me

Lachy - I'm trying to incorporate Karl's proposal into my draft
... scribe: as well as stuff from Roger

<DanC> ok: Product HTML 5 authoring guidelines
<DanC> yeah, not a good name. Mike to fix
<DanC> (did lachy take an action)

<scribe> ACTION: MikeSmith to change the product name of "HTML 5 authoring guidelines" in the tracker to something else, eventually [recorded in]
<trackbot-ng> Sorry, couldn't find user - MikeSmith

<scribe> ACTION: Michael(tm) to change the product name of "HTML 5 authoring guidelines" in the tracker to something else, eventually [recorded in]
<trackbot-ng> Created ACTION-32 - Change the product name of "HTML 5 authoring guidelines" in the tracker to something else, eventually [on Michael(tm) Smith - due 2007-12-06].

<Lachy> DanC, what action would you like me to take?

<DanC> good question.
maybe none, for now

<Lachy> ok

<DanC> . ACTION: Lachy prepare web developer guide for publication as a Note
<DanC> yup, regular web pages or blogs are fine by me

Justin: [suggestion about considering blog items]

DanC - I consider the series-of-blog items to be a fairly comfortable way of publishing this kind of information.

<DanC> ACTION: Lachy prepare web developer guide for publication as a Note [recorded in]
<trackbot-ng> Sorry, couldn't find user - Lachy

<DanC> ACTION: Lachy prepare web developer guide, maybe as a Note, maybe other [recorded in]
<trackbot-ng> Sorry, couldn't find user - Lachy

<DanC> (
<DanC> (Lachy, can I add you to the issue tracking task force? i.e. will you be in touch with the chairs regularly?)

Lachy: we want to be able to update the info after we publish it

<anne> That's possible with a Note
<ChrisWilson> (i.e. the content will change as the HTML5 spec changes)
<DanC> (I presume so...)
<anne> You just publish another Note
<ChrisWilson> sure

Lachy: blogs are good for describing current state of things but not for things that need to be updated

<DanC> ACTION: ChrisWilson to investigate an HTML WG blog, a la the way the I18N WG does it [recorded in]
<trackbot-ng> Created ACTION-33 - Investigate an HTML WG blog, a la the way the I18N WG does it [on Chris Wilson - due 2007-12-06].

<DanC> due jan
<ChrisWilson> (or haven't seen recent discussion?)

DanC - you can assign that issue to Lachlan now

<Gregory> GJR: would like a continuation on - I've noted in the tracker the steps taken so far, and am in the process of finalizing a tweaked stylesheet for review

[discussion of nonconformance of the style attribute in HTML5]

ChrisWilson: how are we tracking follow-up and resolution on these issues?

DanC: there is a new testing task force?

<Lachy>
<Lachy>

<Zakim> DanC, you wanted to note another idea: an edited series of blog articles

above is about testsuite stuff

<Zakim> MikeSmith, you wanted to talk about testsuite stuff

<DanC> ah...
test suite product is already there...

<ChrisWilson> Tracker watching public-html; the public-issue-tracking is for discussing how we do issue tracking.
<ChrisWilson> Above was DanC

<Shawn> Lachy: it was primarily for discussion of issues with the Tracker software... and yes... what ChrisWilson said.

<Lachy> ok, so it's not something I need to subscribe to (I'm on too many lists already :-))

<ChrisWilson> I believe that is true, yes.
<ChrisWilson> I don't think I'm subscribed.

<Shawn> We just didn't want to clutter public-html with noise on backoffice issues

<Julian> No, I didn't.

[discussion about mailing lists and interaction with tracker]

<ChrisWilson> Any other issues?
<ChrisWilson> Motion to adjourn?

[no objections to adjourning heard]

<ChrisWilson> ADJOURN
http://www.w3.org/2007/11/29-html-wg-minutes
In this section, you will learn how to read a file in memory.

Description of code:
The FileInputStream class provides a channel for the file; calling map() on that channel produces a MappedByteBuffer, which is a particular kind of direct buffer. You specify the starting point and the length of the region of the file that you want to map. The MappedByteBuffer class inherits all the methods of the ByteBuffer class. The method get() reads the byte at the given index.

Here is the code:

import java.io.*;
import java.nio.*;
import java.nio.channels.*;

public class ReadFileInMemory {
    public static void main(String[] args) throws Exception {
        File f = new File("C:/hello.txt");
        long length = f.length();
        // Open the stream in try-with-resources so the underlying
        // channel is closed when we are done with it.
        try (FileInputStream in = new FileInputStream(f)) {
            MappedByteBuffer buffer = in.getChannel().map(
                FileChannel.MapMode.READ_ONLY, 0, length);
            for (int i = 0; i < length; i++) {
                System.out.print((char) buffer.get(i));
            }
            System.out.println();
        }
    }
}

The above code maps the file into memory and then reads it.

Posted on: March
http://www.roseindia.net/tutorial/java/io/readfileinmemory.html
This is part 20 of Categories for Programmers. Previously: Free/Forgetful Adjunctions. See the Table of Contents.

Programmers have developed a whole mythology around monads. It's supposed to be one of the most abstract and difficult concepts in programming. There are people who "get it" and those who don't. For many, the moment when they understand the concept of the monad is like a mystical experience. The monad abstracts the essence of so many diverse constructions that we simply don't have a good analogy for it in everyday life. We are reduced to groping in the dark, like those blind men touching different parts of the elephant and exclaiming triumphantly: "It's a rope," "It's a tree trunk," or "It's a burrito!"

Let me set the record straight: The whole mysticism around the monad is the result of a misunderstanding. The monad is a very simple concept. It's the diversity of applications of the monad that causes the confusion.

As part of research for this post I looked up duct tape (a.k.a., duck tape) and its applications. Here's a little sample of things that you can do with it:
- sealing ducts
- fixing CO2 scrubbers on board Apollo 13
- wart treatment
- fixing Apple's iPhone 4 dropped call issue
- making a prom dress
- building a suspension bridge

Now imagine that you didn't know what duct tape was and you were trying to figure it out based on this list. Good luck!

So I'd like to add one more item to the collection of "the monad is like…" clichés: The monad is like duct tape. Its applications are widely diverse, but its principle is very simple: it glues things together. More precisely, it composes things.

This partially explains the difficulties a lot of programmers, especially those coming from the imperative background, have with understanding the monad. The problem is that we are not used to thinking of programming in terms of function composition. This is understandable.
We often give names to intermediate values rather than pass them directly from function to function. We also inline short segments of glue code rather than abstract them into helper functions. Here's an imperative-style implementation of the vector-length function in C:

double vlen(double * v)
{
    double d = 0.0;
    int n;
    for (n = 0; n < 3; ++n)
        d += v[n] * v[n];
    return sqrt(d);
}

Compare this with the (stylized) Haskell version that makes function composition explicit:

vlen = sqrt . sum . fmap (flip (^) 2)

(Here, to make things even more cryptic, I partially applied the exponentiation operator (^) by setting its second argument to 2.)

I'm not arguing that Haskell's point-free style is always better, just that function composition is at the bottom of everything we do in programming. And even though we are effectively composing functions, Haskell does go to great lengths to provide imperative-style syntax called the do notation for monadic composition. We'll see its use later. But first, let me explain why we need monadic composition in the first place.

The Kleisli Category

We have previously arrived at the writer monad by embellishing regular functions. The particular embellishment was done by pairing their return values with strings or, more generally, with elements of a monoid. We can now recognize that such embellishment is a functor:

newtype Writer w a = Writer (a, w)

instance Functor (Writer w) where
    fmap f (Writer (a, w)) = Writer (f a, w)

We have subsequently found a way of composing embellished functions, or Kleisli arrows, which are functions of the form:

a -> Writer w b

It was inside the composition that we implemented the accumulation of the log.

We are now ready for a more general definition of the Kleisli category. We start with a category C and an endofunctor m. The corresponding Kleisli category K has the same objects as C, but its morphisms are different.
A morphism between two objects a and b in K is implemented as a morphism:

a -> m b

in the original category C. It's important to keep in mind that we treat a Kleisli arrow in K as a morphism between a and b, and not between a and m b.

In our example, m was specialized to Writer w, for some fixed monoid w.

Kleisli arrows form a category only if we can define proper composition for them. If there is a composition, which is associative and has an identity arrow for every object, then the functor m is called a monad, and the resulting category is called the Kleisli category.

In Haskell, Kleisli composition is defined using the fish operator >=>, and the identity arrow is a polymorphic function called return. Here's the definition of a monad using Kleisli composition:

class Monad m where
    (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
    return :: a -> m a

Keep in mind that there are many equivalent ways of defining a monad, and that this is not the primary one in the Haskell ecosystem. I like it for its conceptual simplicity and the intuition it provides, but there are other definitions that are more convenient when programming. We'll talk about them momentarily.

In this formulation, monad laws are very easy to express. They cannot be enforced in Haskell, but they can be used for equational reasoning. They are simply the standard composition laws for the Kleisli category:

(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f                  -- left unit
f >=> return = f                  -- right unit

This kind of a definition also expresses what a monad really is: it's a way of composing embellished functions. It's not about side effects or state. It's about composition. As we'll see later, embellished functions may be used to express a variety of effects or state, but that's not what the monad is for. The monad is the sticky duct tape that ties one end of an embellished function to the other end of an embellished function.
Going back to our Writer example: The logging functions (the Kleisli arrows for the Writer functor) form a category because Writer is a monad:

instance Monoid w => Monad (Writer w) where
    f >=> g = \a ->
        let Writer (b, s)  = f a
            Writer (c, s') = g b
        in  Writer (c, s `mappend` s')
    return a = Writer (a, mempty)

Monad laws for Writer w are satisfied as long as monoid laws for w are satisfied (they can't be enforced in Haskell either).

There's a useful Kleisli arrow defined for the Writer monad called tell. Its sole purpose is to add its argument to the log:

tell :: w -> Writer w ()
tell s = Writer ((), s)

We'll use it later as a building block for other monadic functions.

Fish Anatomy

When implementing the fish operator for different monads you quickly realize that a lot of code is repeated and can be easily factored out. To begin with, the Kleisli composition of two functions must return a function, so its implementation may as well start with a lambda taking an argument of type a:

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a -> ...

The only thing we can do with this argument is to pass it to f:

f >=> g = \a -> let mb = f a
                in ...

At this point we have to produce the result of type m c, having at our disposal an object of type m b and a function g :: b -> m c. Let's define a function that does that for us. This function is called bind and is usually written in the form of an infix operator:

(>>=) :: m a -> (a -> m b) -> m b

For every monad, instead of defining the fish operator, we may instead define bind. In fact the standard Haskell definition of a monad uses bind:

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    return :: a -> m a

Here's the definition of bind for the Writer monad:

(Writer (a, w)) >>= f = let Writer (b, w') = f a
                        in  Writer (b, w `mappend` w')

It is indeed shorter than the definition of the fish operator.

It's possible to further dissect bind, taking advantage of the fact that m is a functor.
We can use fmap to apply the function a -> m b to the contents of m a. This will turn a into m b. The result of the application is therefore of type m (m b). This is not exactly what we want — we need the result of type m b — but we're close. All we need is a function that collapses or flattens the double application of m. Such a function is called join:

join :: m (m a) -> m a

Using join, we can rewrite bind as:

ma >>= f = join (fmap f ma)

That leads us to the third option for defining a monad:

class Functor m => Monad m where
    join :: m (m a) -> m a
    return :: a -> m a

Here we have explicitly requested that m be a Functor. We didn't have to do that in the previous two definitions of the monad. That's because any type constructor m that either supports the fish or bind operator is automatically a functor. For instance, it's possible to define fmap in terms of bind and return:

fmap f ma = ma >>= \a -> return (f a)

For completeness, here's join for the Writer monad:

join :: Monoid w => Writer w (Writer w a) -> Writer w a
join (Writer ((Writer (a, w')), w)) = Writer (a, w `mappend` w')

The do Notation

One way of writing code using monads is to work with Kleisli arrows — composing them using the fish operator. This mode of programming is the generalization of the point-free style. Point-free code is compact and often quite elegant. In general, though, it can be hard to understand, bordering on cryptic. That's why most programmers prefer to give names to function arguments and intermediate values.

When dealing with monads it means favoring the bind operator over the fish operator. Bind takes a monadic value and returns a monadic value. The programmer may choose to give names to those values. But that's hardly an improvement. What we really want is to pretend that we are dealing with regular values, not the monadic containers that encapsulate them. That's how imperative code works — side effects, such as updating a global log, are mostly hidden from view.
And that’s what the do notation emulates in Haskell. You might be wondering then, why use monads at all? If we want to make side effects invisible, why not stick to an imperative language? The answer is that the monad gives us much better control over side effects. For instance, the log in the Writer monad is passed from function to function and is never exposed globally. There is no possibility of garbling the log or creating a data race. Also, monadic code is clearly demarcated and cordoned off from the rest of the program. The do notation is just syntactic sugar for monadic composition. On the surface, it looks a lot like imperative code, but it translates directly to a sequence of binds and lambda expressions. For instance, take the example we used previously to illustrate the composition of Kleisli arrows in the Writer monad. Using our current definitions, it could be rewritten as: process :: String -> Writer String [String] process = upCase >=> toWords This function turns all characters in the input string to upper case and splits it into words, all the while producing a log of its actions. In the do notation it would look like this: process s = do upStr <- upCase s toWords upStr Here, upStr is just a String, even though upCase produces a Writer: upCase :: String -> Writer String String upCase s = Writer (map toUpper s, "upCase ") This is because the do block is desugared by the compiler to: process s = upCase s >>= \ upStr -> toWords upStr The monadic result of upCase is bound to a lambda that takes a String. It’s the name of this string that shows up in the do block. When reading the line: upStr <- upCase s we say that upStr gets the result of upCase s. The pseudo-imperative style is even more pronounced when we inline toWords. We replace it with the call to tell, which logs the string "toWords ", followed by the call to return with the result of splitting the string upStr using words. Notice that words is a regular function working on strings. 
process s = do
    upStr <- upCase s
    tell "toWords "
    return (words upStr)

Here, each line in the do block introduces a new nested bind in the desugared code:

process s =
    upCase s >>= \upStr ->
        tell "toWords " >>= \() ->
            return (words upStr)

Notice that tell produces a unit value, so it doesn't have to be passed to the following lambda. Ignoring the contents of a monadic result (but not its effect — here, the contribution to the log) is quite common, so there is a special operator to replace bind in that case:

(>>) :: m a -> m b -> m b
m >> k = m >>= (\_ -> k)

The actual desugaring of our code looks like this:

process s =
    upCase s >>= \upStr ->
        tell "toWords " >>
            return (words upStr)

In general, do blocks consist of lines (or sub-blocks) that either use the left arrow to introduce new names that are then available in the rest of the code, or are executed purely for side-effects. Bind operators are implicit between the lines of code. Incidentally, it is possible, in Haskell, to replace the formatting in the do blocks with braces and semicolons. This provides the justification for describing the monad as a way of overloading the semicolon.

Notice that the nesting of lambdas and bind operators when desugaring the do notation has the effect of influencing the execution of the rest of the do block based on the result of each line. This property can be used to introduce complex control structures, for instance to simulate exceptions.

Interestingly, the equivalent of the do notation has found its application in imperative languages, C++ in particular. I'm talking about resumable functions or coroutines. It's not a secret that C++ futures form a monad. It's an example of the continuation monad, which we'll discuss shortly. The problem with continuations is that they are very hard to compose. In Haskell, we use the do notation to turn the spaghetti of "my handler will call your handler" into something that looks very much like sequential code.
Resumable functions make the same transformation possible in C++. And the same mechanism can be applied to turn the spaghetti of nested loops into list comprehensions or "generators," which are essentially the do notation for the list monad. Without the unifying abstraction of the monad, each of these problems is typically addressed by providing custom extensions to the language. In Haskell, this is all dealt with through libraries.

Next: Monads and Effects.

November 22, 2016 at 6:08 am
This is a particularly illuminating post, thanks Bartosz!

November 22, 2016 at 1:28 pm
"sugared" "do" examples with "upStr" are broken. plz fix

November 22, 2016 at 3:09 pm
Just fine right up to here, then off the cliff: "pairing their return values with strings or, more generally, with elements of a monoid. We can now recognize that such embellishment is a functor"

November 22, 2016 at 9:20 pm
@Adam I'm assuming the reader is familiar with the previous discussion of Kleisli categories.

November 22, 2016 at 9:29 pm
@lambda functions: Damn WordPress silently eating less-than signs and everything that follows. Fixed!

November 28, 2016 at 8:21 am
Love your thinking and development of this; indeed it is a motivating example to follow all the definitions that lead to it. Are you thinking of turning this series into a book: categories for the working programmer?

November 29, 2016 at 3:25 pm
"That's because any type constructor m that either supports the fish or bind operator is automatically a functor. For instance, it's possible to define fmap in terms of bind and return:" I can see the case of bind, but how can you use the fish to get the functor fmap :: (a -> b) -> m a -> m b? The fish returns a -> m c, so how to get a morphism starting at m a is not clear.

November 30, 2016 at 12:53 pm
@dmitri14: I was tempted to provide code for all possible translations between definitions, but then I would have to explain them. So here are some, without explanations.
It’s pretty much an exercise in matching types. It’s sometimes called “type tetris.” December 1, 2016 at 1:46 pm Thank you! It is remarkable that idis always the 1st argument of the fish in these relations. Does it mean, only part of the fish is used and a more general Kleisli product may not come from a monad (contrary to what is said without proof in)? January 11, 2017 at 9:26 pm Lovely, thanks! Suggestion: ‘vlen = sqrt . sum . fmap (^ 2)’ (a bit briefer by avoiding ‘flip’)
https://bartoszmilewski.com/2016/11/21/monads-programmers-definition/
When solving real-world coding problems, employers and recruiters are looking for both runtime and resource efficiency. Knowing which data structure best fits the current solution will increase program performance and reduce the time required to build it. For this reason, most top companies require a strong understanding of data structures and test it heavily in their coding interviews.

Data structures are code structures for storing and organizing data that make it easier to modify, navigate, and access information. Data structures determine how data is collected, the functionality we can implement, and the relationships between data. Data structures are used in almost all areas of computer science and programming, from operating systems, to front-end development, to machine learning.

Data structures help to:
- Manage and utilize large datasets
- Quickly search for particular data from a database
- Build clear hierarchical or relational connections between data points
- Simplify and speed up data processing

Data structures are vital building blocks for efficient, real-world problem solving. Data structures are proven and optimized tools that give you an easy frame to organize your programs. After all, there's no need for you to remake the wheel (or structure) every time you need it. Each data structure has a task or situation it is most suited to solve.

Python has 4 built-in data structures: lists, dictionaries, tuples, and sets. These built-in data structures come with default methods and behind-the-scenes optimizations that make them easy to use. Most data structures in Python are modified forms of these or use the built-in structures as their backbone. Now, let's see how we can use these structures to create all the advanced structures interviewers are looking for.
In Python, arrays are typically implemented with the built-in list type, which is dynamically sized. This makes Python arrays particularly easy to use and adaptable on the fly.

cars = ["Toyota", "Tesla", "Hyundai"]
print(len(cars))
cars.append("Honda")
cars.pop(1)
for x in cars:
    print(x)

Queues are a linear data structure that store data in a "first in, first out" (FIFO) order. Unlike arrays, you cannot access elements by index and instead can only pull the next oldest element. This makes it great for order-sensitive tasks like online order processing or voicemail storage.

You can think of a queue as a line at the grocery store; the cashier does not choose who to check out next but rather processes the person who has stood in line the longest.

We could use a Python list with append() and pop() methods to implement a queue. However, this is inefficient because lists must shift all elements by one index whenever you insert or remove an element at the front. Instead, it's best practice to use the deque class from Python's collections module. Deques are optimized for the append and pop operations. The deque implementation also allows you to create double-ended queues, which can access both sides of the queue through the popleft() and pop() methods.

from collections import deque

# Initializing a queue
q = deque()

# Adding elements to a queue
q.append('a')
q.append('b')
q.append('c')
print("Initial queue")
print(q)

# Removing elements from a queue
print("\nElements dequeued from the queue")
print(q.popleft())
print(q.popleft())
print(q.popleft())
print("\nQueue after removing elements")
print(q)

# Uncommenting q.popleft()
# will raise an IndexError
# as queue is now empty

Stacks are a sequential data structure that act as the Last-in, First-out (LIFO) version of queues. The last element inserted in a stack is considered at the top of the stack and is the only accessible element.
To access a middle element, you must first remove enough elements to make the desired element the top of the stack. Many developers imagine stacks as a stack of dinner plates; you can add or remove plates to the top of the stack but must move the whole stack to place one at the bottom.

Adding elements is known as a push, and removing elements is known as a pop. You can implement stacks in Python using the built-in list structure. With list implementation, push operations use the append() method, and pop operations use pop().

stack = []

# append() function to push
# element in the stack
stack.append('a')
stack.append('b')
stack.append('c')
print('Initial stack')
print(stack)

# pop() function to pop
# element from stack in
# LIFO order
print('\nElements popped from stack:')
print(stack.pop())
print(stack.pop())
print(stack.pop())
print('\nStack after elements are popped:')
print(stack)

# uncommenting print(stack.pop())
# will cause an IndexError
# as the stack is now empty

Applications: implementing a min() function using a stack

Linked lists are a sequential collection of data that uses relational pointers on each data node to link to the next node in the list. Unlike arrays, linked lists do not have objective positions in the list. Instead, they have relational positions based on their surrounding nodes. The first node in a linked list is called the head node, and the final is called the tail node, which has a null pointer. Linked lists can be singly or doubly linked, depending on whether each node has just a single pointer to the next node or also a second pointer to the previous node.

You can think of linked lists like a chain; individual links only have a connection to their immediate neighbors, but all the links together form a larger structure. Python does not have a built-in implementation of linked lists and therefore requires that you implement a Node class to hold a data value and one or more pointers.
class Node:
    def __init__(self, dataval=None):
        self.dataval = dataval
        self.nextval = None

class SLinkedList:
    def __init__(self):
        self.headval = None

list1 = SLinkedList()
list1.headval = Node("Mon")
e2 = Node("Tue")
e3 = Node("Wed")

# Link first Node to second node
list1.headval.nextval = e2

# Link second Node to third node
e2.nextval = e3

Linked lists are primarily used to create advanced data structures like graphs and trees, or for tasks that require frequent addition/deletion of elements across the structure.

The primary downside of the standard linked list is that you always have to start at the head node. The circular linked list fixes this problem by replacing the tail node's null pointer with a pointer back to the head node. When traversing, the program will follow pointers until it reaches the node it started on. The advantage of this setup is that you can start at any node and traverse the whole list. It also allows you to use linked lists as a loopable structure by setting a desired number of cycles through the structure. Circular linked lists are great for processes that loop for a long time, like CPU allocation in operating systems.

Trees are another relation-based data structure, which specialize in representing hierarchical structures. Like a linked list, they're populated with Node objects that contain a data value and one or more pointers to define its relation to immediate nodes.

Each tree has a root node that all other nodes branch off from. The root contains pointers to all elements directly below it, which are known as its child nodes. These child nodes can then have child nodes of their own.
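The parent-child structure just described can be sketched with a minimal node class. This is an illustrative example only; the names TreeNode, add_child, and dfs are hypothetical and not from the article:

```python
# Minimal sketch of a general tree node: a value plus a list of child pointers
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.children = []   # pointers to child nodes

    def add_child(self, node):
        self.children.append(node)

# Build a small hierarchy: a root with two children and one grandchild
root = TreeNode("root")
a = TreeNode("a")
b = TreeNode("b")
root.add_child(a)
root.add_child(b)
a.add_child(TreeNode("a1"))

# Depth-first traversal collects every value in the tree
def dfs(node):
    values = [node.value]
    for child in node.children:
        values.extend(dfs(child))
    return values

print(dfs(root))  # ['root', 'a', 'a1', 'b']
```

Because each node only knows its own children, traversals are naturally recursive: visiting the root visits the whole hierarchy.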
Binary trees cannot have nodes with more than two child nodes. Any nodes on the same level are called sibling nodes, and nodes with no connected child nodes are known as leaf nodes.

The most common application of the binary tree is a binary search tree. Binary search trees excel at searching large collections of data, as the time complexity depends on the depth of the tree rather than the number of nodes. Binary search trees have four strict rules:

- The left subtree contains only nodes with elements lesser than the root.
- The right subtree contains only nodes with elements greater than the root.
- Left and right subtrees must also be binary search trees; they must follow the above rules with the "root" of their own subtree.
- There can be no duplicate nodes, i.e. no two nodes can have the same value.

Graphs are a data structure used to visually represent relationships between data vertices (the nodes of a graph). The links that connect vertices are called edges. Edges define which vertices are connected but do not indicate a direction of flow between them. Each vertex has connections to other vertices, which are saved on the vertex as a comma-separated list. There are also special graphs called directed graphs that define a direction for each relationship, similar to a linked list. Directed graphs are helpful when modeling one-way relationships or a flowchart-like structure.

Graphs are primarily used to convey visual web-structure networks in code form. These structures can model many different types of relationships like hierarchies and branching structures, or simply be an unordered relational web. The versatility and intuitiveness of graphs makes them a favorite for data science. When written in plain text, graphs are described by a list of vertices and edges:

V = {a, b, c, d, e}
E = {ab, ac, bd, cd, de}

In Python, graphs are best implemented using a dictionary with the name of each vertex as a key and its edge list as the value.
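The four rules above translate directly into a recursive insert and search. A minimal sketch; BSTNode and the function names are illustrative, not from the article:

```python
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """Insert following the BST rules: lesser values go left,
    greater values go right, and duplicates are ignored (rule 4)."""
    if root is None:
        return BSTNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    elif value > root.value:
        root.right = insert(root.right, value)
    # value == root.value: duplicate, so do nothing
    return root

def contains(root, value):
    """Search cost is proportional to tree depth, not node count."""
    if root is None:
        return False
    if value == root.value:
        return True
    if value < root.value:
        return contains(root.left, value)
    return contains(root.right, value)

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)

print(contains(root, 6))   # True
print(contains(root, 7))   # False
```

Each comparison discards one subtree, which is why the search time depends on the depth of the tree rather than on how many nodes it holds.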
```python
# Create the dictionary with graph elements
graph = {
    "a" : ["b", "c"],
    "b" : ["a", "d"],
    "c" : ["a", "d"],
    "d" : ["e"],
    "e" : ["d"]
}

# Print the graph
print(graph)
```

Hash tables are a complex data structure capable of storing large amounts of information and retrieving specific elements efficiently. This data structure uses key/value pairs, where the key is the name of the desired element and the value is the data stored under that name.

Each input key goes through a hash function that converts it from its starting form into an integer value, called a hash. Hash functions must always produce the same hash from the same input, must compute quickly, and must produce fixed-length values. Python includes a built-in hash() function that speeds up implementation. The table then uses the hash to find the general location of the desired value, called a storage bucket. The program then only has to search this subgroup for the desired value rather than the entire data pool.

Beyond this general framework, hash tables can be very different depending on the application. Some may allow keys of different data types, while others may have different bucket setups or different hash functions.
Here is an example of a hash table in Python code:

```python
import pprint

class Hashtable:
    def __init__(self, elements):
        self.bucket_size = len(elements)
        self.buckets = [[] for i in range(self.bucket_size)]
        self._assign_buckets(elements)

    def _assign_buckets(self, elements):
        for key, value in elements:
            # calculates the hash of each key
            hashed_value = hash(key)
            # positions the element in a bucket using the hash
            index = hashed_value % self.bucket_size
            # adds a tuple in the bucket
            self.buckets[index].append((key, value))

    def get_value(self, input_key):
        hashed_value = hash(input_key)
        index = hashed_value % self.bucket_size
        bucket = self.buckets[index]
        for key, value in bucket:
            if key == input_key:
                return value
        return None

    def __str__(self):
        # pformat returns a printable representation of the object
        return pprint.pformat(self.buckets)

if __name__ == "__main__":
    capitals = [
        ('France', 'Paris'),
        ('United States', 'Washington D.C.'),
        ('Italy', 'Rome'),
        ('Canada', 'Ottawa')
    ]
    hashtable = Hashtable(capitals)
    print(hashtable)
    print(f"The capital of Italy is {hashtable.get_value('Italy')}")
```

There are dozens of interview questions and formats for each of these 8 data structures. The best way to prepare yourself for the interview process is to keep trying hands-on practice problems. To help you become a data structure expert, Educative has created the Ace the Python Coding Interview Path. This curated Learning Path includes hands-on practice material for all the most discussed concepts like data structures, recursion, and concurrency. By the end, you'll have completed over 250 practice problems and have the hands-on experience to crack any interview question. Happy learning!
https://www.educative.io/blog/8-python-data-structures
On 10/31/2012 06:29 AM, Leonardo Arena wrote:
> Hi,
>
> I was invited to show up here regarding BZ 871756. I'm the maintainer of
> libvirt on Alpine Linux [1], an uclibc-based distro.
>
> If you need more info, testing, etc. just let me know.
>
> --- a/src/util/logging.c
> +++ b/src/util/logging.c
> @@ -58,6 +58,11 @@
>
>  #define VIR_FROM_THIS VIR_FROM_NONE
>
> +#ifdef __UCLIBC__
> +/* uclibc does not implement mkostemp GNU extention */
> +#define mkostemp(x,y) mkstemp(x)
> +#endif
> +
>  VIR_ENUM_DECL(virLogSource)
>  VIR_ENUM_IMPL(virLogSource, VIR_LOG_FROM_LAST,
>                "file",

NACK. Rather, we should be using gnulib's mkostemp - I'm working on the patch, and will post it shortly. If you could test my version, that would be much appreciated.

--
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library
https://www.redhat.com/archives/libvir-list/2012-October/msg01741.html
For the past couple of years, .NET developers have been embracing various content preprocessors as they become more accessible. For the same couple of years, we've been trying to keep up. The dotLess port of the popular .less CSS extension has been getting better by leaps and bounds. It has become almost trivial to embed a JavaScript compiler in .NET these days (thanks to projects like Jurassic), enabling us to support things like CoffeeScript. So we're doing the obvious thing – stripping preprocessor support from our core library.

There are some good reasons for this. Why force people to download things like Jurassic or dotLess if they don't have the need? The flip side of this is that we'd been deliberately avoiding adding support for SASS/SCSS because of concerns about linking to IronRuby – these concerns largely disappear when preprocessing becomes an opt-in behavior. Some of these libraries don't even work on Mono (I think .less might be the only one that works currently), so I feel extra bitter downloading code that won't run on my platform of choice. Finally, the growth in adoption has been so fast that, frankly, we're unable to keep up.

So let's take a look at some of the original code (well, not entirely original, as some of our refactorings did find their way to the 0.8.x branch):

```csharp
internal override string PreprocessForDebugging(string filename)
{
    if (filename.ToLower().EndsWith(".coffee"))
    {
        string js = ProcessCoffee(filename);
        filename += debugExtension;
        using (var fileWriter = fileWriterFactory.GetFileWriter(filename))
        {
            fileWriter.Write(js);
        }
    }
    return filename;
}
```

As you can see, the trigger for preprocessing is the extension. This is the desired behavior, but the way it was coded left it very brittle and made adding new preprocessors unwieldy. So we set out to find a way to break this code out of the core library.
The approach that we used was plugin-based – we defined an interface and exposed a mechanism to register implementations of this interface with the core library. Our original interface actually checked a file name to see if it needed preprocessing, so you could define any logic you wanted to determine whether to preprocess – we ended up eschewing this to go back to extension-based decisions, for reasons that will be discussed later. The interface looks like this:

```csharp
public interface IPreprocessor
{
    bool ValidFor(string extension);
    IProcessResult Process(string filePath, string content);
    string[] Extensions { get; }
}
```

The "ValidFor" method does exactly what it says – it checks whether the preprocessor should be used with the supplied extension. "Process" is where the actual preprocessing happens. The array of extensions is exposed publicly to be used in registering the preprocessor – this is because each type of content bundle has a list of allowed extensions that is used to filter what gets included when we add a directory full of files. Finally, the ProcessResult type includes a string representing the preprocessed content and a list of any dependent files that were changed. This last part was added by Simon Stevens to enable inclusion of .less imports as dependent files.

Preprocessors can be registered two ways – both statically and with a particular bundle instance. For the instance-level configuration there is a method in the bundle's fluent API called "WithPreprocessor" that allows inclusion of a preprocessor with that bundle instance. Globally, we used the static "Bundle" class to allow preprocessor registration – methods exist there for registering script, style, and global preprocessors. If preprocessors of the same type are registered both statically and with a bundle instance, the instance-level preprocessor will be used.

Now, back to why we decided to make preprocessor selection based on extension rather than the complete file name.
To understand, I guess all you have to do is read about the Asset Pipeline in Ruby on Rails, but I will attempt to summarize here. The beautiful thing about the pipeline approach is the ability to chain preprocessing steps. This allows you to use ERB's helper methods in your file prior to other preprocessing. For example, if you wanted to use ERB helpers in a CoffeeScript file, you can name your file file.js.coffee.erb – when an asset has the .coffee and .erb extensions, both preprocessors will be applied. The order they are applied in is driven by the reverse order of extensions, so *.coffee.erb would be preprocessed first by ERB and then by the CoffeeScript compiler.

Our goal was to emulate this behavior in SquishIt, and without matching preprocessors to extensions rather than filenames we wouldn't have been able to. Enabling this behavior is mostly a matter of finding preprocessors correctly. We find them like so:

```csharp
protected IPreprocessor[] FindPreprocessors(string file)
{
    return file.Split('.')
        .Skip(1)
        .Reverse()
        .Select(FindPreprocessor)
        .Where(p => p != null)
        .ToArray();
}
```

It's important to note here that "FindPreprocessor" uses the first preprocessor it finds for a given extension – so we need to take care if implementing preprocessors for common file extensions like ".js". We can then use the preprocessors in the default order to process our content:

```csharp
protected string PreprocessFile(string file, IPreprocessor[] preprocessors)
{
    return directoryWrapper.ExecuteInDirectory(Path.GetDirectoryName(file),
        () => PreprocessContent(file, preprocessors, ReadFile(file)));
}

protected string PreprocessContent(string file, IPreprocessor[] preprocessors, string content)
{
    return preprocessors.NullSafeAny()
        ? preprocessors.Aggregate(content, (cntnt, pp) =>
            {
                var result = pp.Process(file, cntnt);
                bundleState.DependentFiles.AddRange(result.Dependencies);
                return result.Result;
            })
        : content;
}
```

Despite the fact that we have totally broken everything users have come to depend on, we really do want to make the transition easier for people who were using .less or CoffeeScript with SquishIt. This is where the tremendous WebActivator library comes in. By including this library in our project, it allows us to define bits of code to run when the application starts up, like so:

```csharp
[assembly: WebActivator.PreApplicationStartMethod(typeof($rootnamespace$.App_Start.SquishItHogan), "Start")]

namespace $rootnamespace$.App_Start
{
    using SquishIt.Framework;
    using SquishIt.Hogan;

    public class SquishItHogan
    {
        public static void Start()
        {
            Bundle.RegisterScriptPreprocessor(new HoganPreprocessor());
        }
    }
}
```

Thanks to this snippet, you don't actually need to do anything to hook up global preprocessing – just reference the DLL containing your preprocessor and WebActivator. This example is from the Hogan preprocessor, submitted by Abdrashitov Vadim. This pull request made me smile more than any I've seen in recent memory – a big part of the reason we moved to this model was to make it easier for people to define their own preprocessors and share them with the community. To have one submitted by a user before we even had a production-ready release was just so cool.

I think this covers most of the changes, at least at a cursory level. I hope to find the time to put together a bit of proper documentation in the next few months, but hopefully this will help in the meantime. I'd like to extend a huge thanks to everyone who reported bugs in our pre-release versions, and to Rodrigo Dumont who provided the spark to get started on this stuff late last year.
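The reversed-extension lookup described in this post is easy to model outside C# as well; here is a rough Python sketch of the same idea. The registry contents and function names below are made up for illustration and are not SquishIt's API:

```python
# Hypothetical registry mapping an extension to a preprocessing
# callable; the first registration for an extension wins, mirroring
# the "first preprocessor found" behavior described above.
REGISTRY = {
    "erb": lambda content: content.replace("<%= name %>", "world"),
    "coffee": lambda content: "// compiled\n" + content,
}

def find_preprocessors(filename):
    """file.js.coffee.erb -> [erb, coffee]: extensions are taken
    after the first dot and applied in reverse order."""
    extensions = filename.split(".")[1:]
    found = []
    for ext in reversed(extensions):
        pp = REGISTRY.get(ext)
        if pp is not None:
            found.append(pp)
    return found

def preprocess(filename, content):
    # Fold the content through each preprocessor, like Aggregate()
    for pp in find_preprocessors(filename):
        content = pp(content)
    return content

result = preprocess("file.js.coffee.erb", "alert '<%= name %>'")
print(result)
```

For "file.js.coffee.erb" the fake ERB step runs first and the fake CoffeeScript step second, exactly the ordering the Rails asset pipeline uses.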
http://blogs.lessthandot.com/index.php/webdev/serverprogramming/preprocessor-extensibility-in-squishit-0-9/
Problem building OCI plugin for Qt 5.9.1

Hello! I am trying to connect to an Oracle database using OCI, but when I try to build the OCI plugin it is giving me some problems...

Windows 64 bits
Qt version installed is 5.9.1
MinGW 5.3.0 32 bits that I installed along with Qt
Instant Client SDK 12.2 for an Oracle Database 12c

I modified the oci.pro to include the library oci.lib, so I put the following lines:

```
INCLUDEPATH += C:\Oracle\instantclient_12_2\sdk\lib\msvc C:\Oracle\instantclient_12_2\sdk\include \
#QMAKE_USE += oci
QMAKE_LFLAGS += -L"C:\Oracle\instantclient_12_2\sdk\lib\msvc" -loci \
```

The commands I am using are:

```
set INCLUDE = %INCLUDE%;C:\Oracle\instantclient_12_2\sdk\include
set PATH = %PATH%;C:\Oracle\instantclient_12_2\sdk\lib\msvc
cd %QTDIR%
C:\Qt\5.9.1\Src\qtbase\src\plugins\sqldrivers\oci
qmake oci.pro
nmake
```

(but I have also tried using mingw32-make)

Everything is good until the nmake, where I get many errors like the following:

```
.obj/release/qsql_oci.o:qsql_oci.cpp:(.text+0xa5d0): undefined reference to `_OCINumberToInt'
.obj/release/qsql_oci.o:qsql_oci.cpp:(.text$_ZN17QSqlDriverPrivateD0Ev[__ZN17QSqlDriverPrivateD0Ev]+0x2a): undefined reference to `__ZdlPvj'
[...]
.obj/release/moc_qsql_oci_p.o:moc_qsql_oci_p.cpp:(.rdata$_ZTI10QOCIDriver[__ZTI10QOCIDriver]+0x0): undefined reference to `__ZTVN10__cxxabiv120__si_class_type_infoE'
collect2: error: ld returned 1 exit status
NMAKE : fatal error U1077: 'C:\cygwin64\bin\g++.EXE' : return code '0x1'
Stop.
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\nmake.exe"' : return code '0x2'
Stop.
```

Whereas with the mingw32-make I get this:
```
collect2: error: ld returned 1 exit status
Makefile.Release:67: recipe for target '..\plugins\sqldrivers\qsqloci.dll' failed
mingw32-make[1]: *** [..\plugins\sqldrivers\qsqloci.dll] Error 1
mingw32-make[1]: Leaving directory 'C:/Qt/5.9.1/Src/qtbase/src/plugins/sqldrivers/oci'
Makefile:40: recipe for target 'release-all' failed
```

Any idea of what I am doing wrong? Thank you!

- SGaist (Lifetime Qt Champion): Hi and welcome to devnet. Can you share your .pro file?

Hello! Thank you! My .pro file is:

Where qsqldriverbase.pri was giving me problems in the line:

```
QT = core core-private sql-private # For QMAKE_USE in the parent projects.
#include($$shadowed($$PWD)/qtsqldrivers-config.pri)
PLUGIN_TYPE = sqldrivers
load(qt_plugin)
DEFINES += QT_NO_CAST_TO_ASCII QT_NO_CAST_FROM_ASCII
```

I commented out the include of that file because it was not finding it. Waiting for your reply. Thank you in advance!

- SGaist: You should use forward slashes in your paths. Qt handles the conversion for you. Also, you have backslashes scattered over that file without anything that follows, which might break the parsing.

I modified that, now my .pro looks like:

But it still gives me the same errors. Any idea of another thing I might be doing wrong?

- SGaist: Your INCLUDEPATH statement won't completely work since you have one path on a new line. Line-ending backslashes have their use; you have to use them properly.

Sorry, I am not very experienced in Qt as you can see... So, now I put the INCLUDEPATH like this:

```
INCLUDEPATH += C:/Oracle/instantclient_12_2/sdk/lib/msvc C:/Oracle/instantclient_12_2/sdk/include
```

I suppose the backslashes are for this? Or is it better to put:

```
INCLUDEPATH += C:/Oracle/instantclient_12_2/sdk/lib/msvc
INCLUDEPATH += C:/Oracle/instantclient_12_2/sdk/include
```

Do you think I am missing something linking, and that's why it gives undefined references?

Hello again, I have been modifying the environment quite a lot.
I first used another version of Qt, then I used different versions of the Instant Client SDK, and I used MSVC first but then swapped to MinGW... None of these actions has changed the output of the nmake command. Nevertheless, I found that at some point some directories in C:/ were created:

```
C:\lib\cmake\Qt5Sql\Qt5Sql_QOCIDriverPlugin.cmake
C:\mkspecs\modules-inst
C:\mkspecs\modules
C:\plugins\sqldrivers
```

containing qsqloci.lib, qsqloci.pdb and some others... But there is no .dll created. I have also been following some other threads in this forum, such as one where it seems to work fine. But for me it is not. If you have any other question that might help me solve this, I would appreciate it. Thank you.

- SGaist: How did you manage to put the build result in your hard drive root folder? Once you have your stuff built, you need to call nmake install.

Hello. Thank you for replying. I didn't do anything to put the results there. Actually, I don't think they are correct. I don't have any install target, so it gives me errors when I write nmake install. Any other idea why I am getting this? I do not understand why, if I am following the right steps, I keep on having undefined references.

Hello, let me help you a bit.
Install the Oracle database client using: Oracle Database 11g Release 2 Client (11.2.0.1.0) for Microsoft Windows (32-bit), win32_11gR2_client.zip. I installed it as Runtime (tools for developing applications), and my Oracle base is C:\app\nehain.

In oci.pro, only comment out the line QMAKE_USE += oci. My oci.pro:

```
TARGET = qsqloci

HEADERS += $$PWD/qsql_oci_p.h
SOURCES += $$PWD/qsql_oci.cpp $$PWD/main.cpp

#QMAKE_USE += oci

darwin:QMAKE_LFLAGS += -Wl,-flat_namespace,-U,_environ

OTHER_FILES += oci.json

PLUGIN_CLASS_NAME = QOCIDriverPlugin
include(../qsqldriverbase.pri)
```

- Open Qt 5.9 for Desktop (MinGW 5.3.0 32 bit)
- cd to C:\QtE\Qt5.9.0\5.9\Src\qtbase\src\plugins\sqldrivers\oci (maybe your Qt dir is different)
- qmake "INCLUDEPATH+=C:\app\nehain\product\11.2.0\client_1\oci\include" "LIBS+=-LC:\app\nehain\product\11.2.0\client_1\oci\lib\msvc -loci" oci.pro
- mingw32-make

That's all. Maybe you will need to run mingw32-make clean before qmake. Good luck.

Angel H.

Hello Angel! Thank you so much for your help!! I just solved it :) I did exactly what you specified, then I ran mingw32-make install, and I got the .dll in the C:\Qt\5.9.1\mingw53_32\plugins\sqldrivers folder. Thank you again! :)

Rocio
https://forum.qt.io/topic/80928/problem-building-oci-plugin-for-qt-5-9-1/1
So you have a console program. This console program could be run by typing its name at the command prompt, or it could be run by the user double-clicking it from Explorer. And you want to know which case you're in.

This is another case of digging into the question to find the problem. In this case, the problem is "Well, if I'm run directly from Explorer, then when my program exits, the console is destroyed with it, and the user can't see the output. In that case, I want to prompt the user to hit Enter before the program exits."

Okay, so what you really want to know is not whether you were run from Explorer. (After all, you would have this problem if the program were run from Task Manager or some other program launcher.) What you really want to know is whether the console will continue to exist after your program exits. For that, you can use GetConsoleProcessList. If your process is the only one attached to the console, then the console will be destroyed when your process exits. If there are other processes attached to the console, then the console will continue to exist (because your program won't be the last one).

```cpp
#include <windows.h>
#include <stdio.h>

int __cdecl wmain()
{
    printf("this process = %d\n", GetCurrentProcessId());
    DWORD count = GetConsoleProcessList(nullptr, 0);
    if (count == 1) {
        printf("I'm the last one!\n");
        Sleep(2000);
    } else {
        printf("I'm not the last one!\n");
    }
    return 0;
}
```

We care only how many processes are in the console process list; we don't care what they are. After getting the count, we either declare that we're the last one (and pause so you can see the message), or we say that we aren't (and exit immediately).

This is accurate as far as it goes: It tells you whether the console will be destroyed when the process exits. What it doesn't tell you is whether the other processes in the process list will also exit when you exit.
For example, if somebody does

    start cmd /c scratch.exe

then the program will correctly report that it's not the last one, but what it doesn't know is that the cmd.exe is going to exit as soon as the scratch.exe program exits. There's not much you can do to detect this, because that information is internal to whatever other process launched your program.

I suppose "real" solutions might include creating a double-clickable batch file that included "pause", and/or a command-line-typable /nopause switch. Or a context menu command that reads something like this?

    %SystemRoot%\System32\cmd.exe /k "%1"&pause

If you're going to use /k then you hardly need the pause, as CMD.EXE will prompt once the command finishes.

At some point, you have to abdicate responsibility. If the end user wants to shoot themself in the foot by pulling cmd.exe in where it doesn't belong, let them.

And then there's redirection. And a whole bunch more combinations which I worked out, leading to this question:

While the intention here is good, most console apps don't bother with it on any system (not just Windows), because there are so many scenarios where you don't want to pause (redirection, pipes, etc.). That said, I wish Explorer were a bit smarter and did this for you, like Visual Studio does. Then again, the same issues come into play for Explorer too.

@BZ: Visual Studio only does this when you launch without debugging (Ctrl+F5). And contrary to Explorer, Visual Studio knows the type of application (GUI or console) it's launching.

Yes, the same issues come into play for Explorer also. If Explorer behaved like VS does, then someone would complain that when they paste a line into Start->Run expecting the output to be a file containing the redirected results, they get a zombie console window. Or network admins: "My login scripts litter the screen with a bunch of zombie console windows; how do I get rid of them?"
I believe the correct answer is the current behavior: the last program on the console decides whether to turn out the lights or not.

Your program may spawn another copy of itself, specifying some information about itself on the command line, and then exit. That second copy should wait for its parent to exit, then check whether it is the last one and pause as needed. Sure, this doesn't cover some edge cases, for example if the parent tracks the process tree using jobs or stupid process-list polling, but I guess this should fix cmd /c.

At my job, we give candidates pre-interview homework that involves writing a console program. I deduct points if it includes any kind of attempt to pause before exiting.

Why? I think it shouldn't matter unless the project involves writing console applications.

Code that pauses before exiting (a) is non-portable, (b) breaks non-interactive use, (c) is unnecessarily complicated, or (d) suffers from any combination of the above. Programmers who write code that pauses before exiting a console program will probably struggle in an environment which involves the use of console programs, including but not limited to compilers, linkers, build systems, version control systems and in-house data manipulation tools.

And do you specify any of those as requirements when you set the task? Why would you need to? It's just a random way of eradicating people from the process; anyone who happens to make it through will have to decide whether they like having a passive-aggressive boss.

The purpose of the test task is not to see how a given candidate solves a well-defined, ideal, spherical exercise in a vacuum. It is to see how they react to the vague, underspecified requirements that are prevalent in the real world. {{Citation needed}}

I recall that there used to be a way to tell Windows to keep console windows open after the app completed... I'd used this on many occasions, back before things like PowerShell... now I can't find the option (I was pretty sure it used to be in the properties of the exe or shortcut or something), maybe it's been removed.

You can't find the option because it only exists for MS-DOS programs, not Win32 console programs.

I remember it well. It was a setting available for PIF-based shortcuts. It wasn't carried over into the newer LNK format.

It is still possible, but in a more convoluted way now. C:\Users\xxx So in order to do it these days, after you create the link to the console program, change the target so that it becomes cmd /k "" Meh, the comments on this blog seem like they just remove stuff surrounded by less-than and greater-than symbols, assuming that they are HTML. The problem is, I normally use those as indicators of user input or a variable depending on the input. Well, let's try that again with square brackets:

    cmd /k "[old contents of the target text box]"

Possible, sure... but to me, the bigger question is how the PIF implementation works... and how the approaches discussed in the article would work (or not).
https://blogs.msdn.microsoft.com/oldnewthing/20160125-00/?p=92922
The Samba-Bugzilla – Bug 13008
smbd does not use the Intel AES instruction set for signing and encryption.
Last modified: 2017-09-21 16:06:46 UTC

This means we are much slower than we should be on traffic using signing or sealing.

Created attachment 13524 [details] Original patch from Justin. I've modified this, but for the record.

Created attachment 13525 [details] git-am fix for master. Should back-port cleanly to 4.7.0.

Created attachment 13526 [details] git-am fix for master. Fix typo in non-AES-ni code path (struct struct doesn't work :-). This wasn't in my original patch, but I think we'll need this for non-x86 platforms:

diff --git a/third_party/aesni-intel/wscript b/third_party/aesni-intel/wscript
index 151892f6889..ee7be031fd0 100644
--- a/third_party/aesni-intel/wscript
+++ b/third_party/aesni-intel/wscript
@@ -5,6 +5,9 @@ def configure(conf):
     conf.DEFINE('HAVE_AESNI_INTEL', 1)

 def build(bld):
+    if not bld.CONFIG_SET('HAVE_AESNI_INTEL'):
+        return
+
     bld.SAMBA_LIBRARY('aesni-intel',
         source='aesni-intel_asm.c',
         cflags='-Wp,-E,-lang-asm',

Created attachment 13527 [details] git-am fix for master. Updated patch containing Justin's fix for non-x86 systems.

Created attachment 13528 [details] git-am fix for master. So this is the version requested by Andreas, based on Metze's patch here:;a=commitdiff;h=3759eb23b38c that calls into libnettle. Can you check it provides the same effects as directly calling the Intel AESNI instructions? Cheers, Jeremy.

Created attachment 13529 [details] Slightly improved git-am fix for master. Updated lib/crypto/wscript_configure to check for nettle/memxor.h as well as nettle/aes.h.

Created attachment 13549 [details] git-am fix allowing configure-time selectable AES crypto.

Comment on attachment 13549 [details] git-am fix allowing configure-time selectable AES crypto: Incorrect patch uploaded - sorry.

Created attachment 13550 [details] git-am fix allowing configure-time switch between AES implementations.
Created attachment 13552 [details] Latest version submitted to master.

Comment on attachment 13552 [details] Latest version submitted to master: I ran a couple of quick tests and it's working well. I don't see a place to add a + to the review, but LGTM.

Created attachment 13557 [details] git-am cherry-pick from master. I think we should only back-port this to 4.7.

(In reply to Jeremy Allison from comment #13) Pushed to autobuild-v4-7-test.

Pushed to v4-7-test. Closing out bug report. Thanks!

*** Bug 11286 has been marked as a duplicate of this bug. ***

AFAIK the AES-NI instructions are also supported in the 32-bit mode of these CPUs (I know 32-bit is quite old-fashioned these days :). Is the aes-ni code 64-bit specific, or could we maybe allow the same for i?86 architecture systems as well?

(In reply to Björn Jacke from comment #17) I don't have an x86 32-bit test environment, so someone with access to that will need to test if it can work.
https://bugzilla.samba.org/show_bug.cgi?id=13008
right now things can include conf.h or httpd.h, but things break if they do both. I was going to make a patch to add:

#ifdef HAVE_BSTRING_H
#include <bstring.h>    /* for IRIX, FD_SET calls bzero() */
#endif

to proxy_connect.c, but can't include conf.h or httpd.h in proxy_connect.c because they are included by mod_proxy.h, but I think that since HAVE_BSTRING_H has nothing to do with mod_proxy.h it is lame to assume it will be included correctly. Perhaps conf.h should be wrapped with a big ifdef?

(Oh, and Chuck, you can fix the above somehow... <g>)
http://mail-archives.apache.org/mod_mbox/httpd-dev/199704.mbox/%3CPine.BSF.3.95q.970407223431.19780C-100000@valis.worldgate.com%3E
I want to create a button and, upon clicking, it should open the desired webpage. Not sure how I can go about it.

Maybe use the button to show a hyperlink using markdown?

Hey, would the following do the trick?

```python
import streamlit as st
import webbrowser

url = ''

if st.button('Open browser'):
    webbrowser.open_new_tab(url)
```

This only works locally, but not when the Streamlit app is accessed over the network (e.g. deployed to a server). Here is a hack that works in all cases (but is a hack):

```python
from bokeh.models.widgets import Div
import streamlit as st

if st.button('Go to Streamlit'):
    js = "window.location.href = 'https://www.streamlit.io/'"
    html = '<img src onerror="{}">'.format(js)
    div = Div(text=html)
    st.bokeh_chart(div)
```

There needs to be a better solution than this.

This worked, but it was slow and not user friendly.
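For the deployed-server case, the markdown suggestion from earlier in the thread is the least hacky route: render a hyperlink instead of a button. A small sketch; the link_markdown helper is a made-up name, and the actual st.markdown call is shown commented out so the snippet runs without Streamlit installed:

```python
def link_markdown(label, url):
    """Build a markdown hyperlink that Streamlit can render."""
    return "[{}]({})".format(label, url)

print(link_markdown("Open Streamlit", "https://www.streamlit.io"))
# In a Streamlit app you would render it with:
#   import streamlit as st
#   st.markdown(link_markdown("Open Streamlit", "https://www.streamlit.io"))
```

A link opens in the user's own browser, so it behaves the same whether the app runs locally or on a server.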
https://discuss.streamlit.io/t/how-to-link-a-button-to-a-webpage/1661
I have to set up a small Windows network inside my bigger Linux/Mac infrastructure. In order to get the Windows clients logging onto the domain, I have had to make the DC their primary DNS server, which seems to have worked. I would much prefer to have one DNS server running on my network, or at least one authoritative server running on the network. I have a USG 200 router/firewall and I can configure some static records for DNS, but I am not sure what I need to put in order to get DNS and AD working together. Any hints and tips appreciated.

The first thing you should know is that Active Directory and DNS are so intertwined that they're almost one. For all intents and purposes, you should forget the idea of having an Active Directory domain which doesn't have a primary DNS server for Windows clients. I won't say it's "impossible", but I will strongly advise you that it's a path with only pain. As an alternative, why not let AD and DNS do their thing together and then add forwarders to your normal DNS servers? It's the same end result: you can basically forget about your Microsoft DNS server, as it will just plod along doing its own thing while you actively maintain and update your other name servers.

Just deploy AD on a subdomain like windowsdomain.example.com instead of on example.com, and then delegate this subdomain to your domain controllers. This way, you will get two domains, which you could potentially split up for greater security.

You do not need to run Windows DNS on a domain controller for proper functionality of AD. DNS is the backbone of AD, so you want to have a very resilient, very reliable DNS infrastructure prior to adding Active Directory. I would strongly recommend using either Windows DNS or your existing DNS infrastructure, but I would not use both. BIND 9 will work fine. You should verify that the namespace you are using is valid for Active Directory.
http://serverfault.com/questions/401966/windows-server-2008-active-directory-dns-setup
to random functions in-memory! This is normally a good way to crash the program, but who knows? You might find a gem! This technique is useful for finding hidden functionality, but it's somewhat limited: it'll only work for applications that you're capable of debugging. With few exceptions (I've used a technique like this to break out of an improperly implemented sandbox before), this technique is primarily for analysis, not for exploitation or privilege escalation.

Creating a Test Binary

Let's start by writing a simple toy program. You can download this program as a 32- and 64-bit Linux binary, as well as the source code and Makefile, here. Here's the full code:

#include <stdio.h>

void random_function()
{
  printf("You called me!\n");
}

int main(int argc, char *argv[])
{
  printf("Can you call random_function()?\n");
  printf("Press <enter> to continue\n");
  getchar();

  printf("Good bye!\n");
}

Put that in a file called jumpdemo.c and compile with the following command:

gcc -g -O0 -o jumpdemo jumpdemo.c

We add -O0 to the command line to prevent the compiler from performing optimizations such as deleting unused functions under the guise of "helping". If you grabbed our source code, you can simply run make after extracting it.

Finding Interesting Functions

Let's assume for the purposes of this post that the binaries are compiled with symbols. That means that you can see the function names!
My favorite tool for analyzing binaries is IDA, but for our purposes, the nm command is more than sufficient:

$ nm ./jumpdemo
0000000000601040 B __bss_start
0000000000601030 D __data_start
0000000000601030 W data_start
0000000000601038 D __dso_handle
0000000000601040 D _edata
0000000000601048 B _end
0000000000400624 T _fini
                 U getchar@@GLIBC_2.2.5
                 w __gmon_start__
0000000000400400 T _init
0000000000400630 R _IO_stdin_used
                 w _ITM_deregisterTMCloneTable
                 w _ITM_registerTMCloneTable
                 w _Jv_RegisterClasses
0000000000400620 T __libc_csu_fini
00000000004005b0 T __libc_csu_init
                 U __libc_start_main@@GLIBC_2.2.5
0000000000400577 T main
                 U puts@@GLIBC_2.2.5
0000000000400566 T random_function
0000000000400470 T _start
0000000000601040 D __TMC_END__

Everything you see here is a symbol, and the ones with T in front are ones that we can actually call, but the ones that start with an underscore ('_') are built-in stuff that we can just ignore (in a "real" situation, you shouldn't discount something simply because the name starts with an underscore, of course). The two functions that might be interesting are "main" and "random_function", so that's what we're going to target!

Before we can call one of these functions, we need to run the program in gdb, the GNU Project Debugger. On the command line (from the directory containing the compiled jumpdemo binary), run ./jumpdemo in gdb:

$ gdb -q ./jumpdemo
Reading symbols from ./jumpdemo...(no debugging symbols found)...done.
(gdb)

The -q flag is simply to disable unnecessary output. After you get to the (gdb) prompt, the jumpdemo application is loaded and ready to run, but it hasn't actually been started yet. You can verify that by trying to run a command such as continue:

(gdb) continue
The program is not being run.

gdb is an extremely powerful tool, with a ton of different commands. You can enter help into the prompt to learn more, and you can also use help <command> on any of the commands we use (such as help break) to get more details.
Give it a try!

Simple Case: Just Call It!

Now that the program is ready to go in gdb, we can run it with the run command (don't forget to try help run!). You'll see the same output as you would if you'd run it directly until it ends, at which point we're back in gdb. You can run it over and over if you desire, but that's not really going to get you anywhere. In order to modify the application at runtime, it is necessary to run the program and then stop it again before it finishes cleanly. The most common way is to use a breakpoint (help break) on main:

$ gdb -q ./jumpdemo
Reading symbols from ./jumpdemo...(no debugging symbols found)...done.
(gdb) break main
Breakpoint 1 at 0x40057b
(gdb)

Then run the binary and watch what happens:

(gdb) run
Starting program: /home/ron/blogs/jumpdemo

Breakpoint 1, 0x000000000040057b in main ()
(gdb)

Now we have control of the application in the running (but paused) state! We can view/edit memory, modify registers, continue execution, jump to another part of the code, and much much more! In our case, as I'm sure you've guessed by now, we're going to move the program's execution to another part of the program. Specifically, we're just going to use gdb's jump command (help jump!) to resume execution at the start of random_function():

$ gdb -q ./jumpdemo
Reading symbols from ./jumpdemo...(no debugging symbols found)...done.
(gdb) break main
Breakpoint 1 at 0x40057b
(gdb) run
Starting program: /home/ron/blogs/jumpdemo

Breakpoint 1, 0x000000000040057b in main ()
(gdb) help jump
Continue program being debugged at specified line or address.
Usage: jump <location>
Give as argument either LINENUM or *ADDR, where ADDR is an expression for an address to start at.
(gdb) jump random_function
Continuing at 0x40056a.
You called me!
[Inferior 1 (process 11391) exited with code 017]

We did it! The program printed, "You called me!", which means we successfully ran random_function()!
Exit code 017 means the process didn't exit cleanly, but that is to be expected. We just ran an unexpected function with absolutely no context! If for some reason you can't get breakpoints to work (maybe the program was compiled without symbols and you don't actually know where main is), you can do the same thing without breakpoints by pressing ctrl-c while the binary is running:

(gdb) jump random_function
Continuing at 0x40056a.
You called me!

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000001 in ?? ()

Don't worry about the segmentation fault; like the 017 exit code above, it's happening because the program has no idea what to do after it's finished running random_function. Another mildly interesting thing that you can do is jump back to main() to make the program think it's starting over:

(gdb) jump main
Continuing at 0x40057b.
Can you call random_function()?
Press <enter> to continue

I use jump-back-to-main a lot while doing actual exploit development to check whether or not I actually have code execution without trying to develop working shellcode. If the program appears to start over, you've changed the current instruction!

Real-world Example

For a quick example of this technique doing something visible against a "real" application, I took a look through my tools/ folder for a simple command line Linux application, and found THC-Hydra. I compiled it the standard way (./configure && make) and ran nm against it:

$ nm ./hydra
0000000000657abc b A
0000000000657cb4 B accntFlag
                 U alarm@@GLIBC_2.2.5
000000000043baf0 T alarming
0000000000657ae4 B alarm_went_off
00000000004266f0 T analyze_server_response
0000000000654200 B apop_challenge
                 U ASN1_OBJECT_free@@OPENSSL_1.0.0
                 U __assert_fail@@GLIBC_2.2.5
0000000000655c00 B auth_flag
0000000000657ab8 b B
0000000000408b60 T bail
000000000044b760 r base64digits
000000000044b6e0 r base64val
0000000000433450 T bf_get_pcount
...

Turns out, there are over 700 exported symbols. Wow!
We can narrow it down by grepping for T, which refers to a symbol in the code section, but there are still a lot of those. I manually looked over the list to find ones that might work (i.e., functions that might print output without having any valid context / parameters). I started by running the application with a "normal" set of arguments to make note of the "normal" output:

$ gdb -q --args ./hydra -l john -p doe localhost http-head
Reading symbols from ./hydra...(no debugging symbols found)...done.
(gdb) run
Starting program: /home/ron/tools/hydra-7.1-src/hydra -l john -p doe localhost http-head
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Hydra v7.1 (c)2011 by van Hauser/THC & David Maciejak - for legal purposes only

Hydra () starting at 2018-10-15 14:47:12
Error: You must supply the web page as an additional option or via -m
[Inferior 1 (process 3619) exited with code 0377]
(gdb)

The only new thing here is --args, which is simply a signal to gdb that there are going to be command line arguments to the binary. After I determined what the output is supposed to look like, I set a breakpoint on main, like before, and jumped to help() after breaking:

$ gdb -q --args ./hydra -l john -p doe localhost http-head
Reading symbols from ./hydra...(no debugging symbols found)...done.
(gdb) break main
Breakpoint 1 at 0x403bf0
(gdb) run
Starting program: /home/ron/tools/hydra-7.1-src/hydra -l john -p doe localhost http-head
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Breakpoint 1, 0x0000000000403bf0 in main ()
(gdb) jump help
Continuing at 0x408420.
Syntax: (null) [[[-l LOGIN|-L FILE] [-p PASS|-P FILE]] | [-C FILE]] [-e nsr] [-o FILE] [-t TASKS] [-M FILE [-T TASKS]] [-w TIME] [-W TIME] [-f] [-s PORT] [-x MIN:MAX:CHARSET] [-SuvV46] [server service [OPT]]|[service://server[:PORT][/OPT]]

Options:
  -R  restore a previous aborted/crashed session
  -S  perform an SSL connect
..

I tried help_bfg as well:

(gdb) jump help_bfg
Continuing at 0x408580.
/%,.-
The bruteforce mode was made by Jan Dlabal,
[Inferior 1 (process 9980) exited with code 0377]

Neat! Another help output! Although these are easy to get to legitimately, it's really neat to see them called when they aren't expected to be called! Stuff "just works", kinda. Speaking of kinda working, I also jumped to hydra_debug(), which had slightly more interesting results:

(gdb) jump hydra_debug
Continuing at 0x408b40.
[DEBUG] Code: ???? Time: 1539640228
[DEBUG] Options: mode 0 ssl 0 restore 0 showAttempt 0 tasks 0 max_use 0 tnp 0 tpsal 0 tprl 0 exit_found 0 miscptr (null) service (null)
[DEBUG] Brains: active 0 targets 0 finished 0 todo_all 0 todo 0 sent 0 found 0 countlogin 0 sizelogin 0 countpass 0 sizepass 0
[Inferior 1 (process 7761) exited normally]

It does its best to print out the statistics, but because it didn't expect that function to be called, it just prints nonsense. Every other function I tried either does nothing or simply crashes the application. When you call a function that expects certain parameters without any such parameters, it normally tries to access invalid memory, so it's pretty unsurprising. It's actually surprising that it works at all! It's pure luck that the string it tried to print is an invalid string rather than invalid memory (which would crash).

Conclusion

This may not seem like much, but it's actually a very simple and straightforward reverse engineering technique that sometimes works shockingly well. Grab some open source applications, run nm on them, and try calling some functions. You never know what you'll figure out!
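The symbol-triage step used throughout the post (run nm, keep T symbols, skip underscore-prefixed names) is easy to script. Here is a minimal sketch in Python, my addition rather than part of the original article; it parses text in the format nm prints, and the sample input is taken from the jumpdemo listing above:

```python
# Filter an `nm` listing down to callable text-section symbols,
# skipping underscore-prefixed (runtime/internal) names.
def interesting_symbols(nm_output):
    results = []
    for line in nm_output.splitlines():
        parts = line.split()
        if len(parts) != 3:          # undefined/weak symbols lack an address
            continue
        address, sym_type, name = parts
        if sym_type == "T" and not name.startswith("_"):
            results.append((address, name))
    return results

sample = """0000000000400624 T _fini
                 U getchar@@GLIBC_2.2.5
0000000000400577 T main
0000000000400566 T random_function
0000000000400470 T _start"""

for address, name in interesting_symbols(sample):
    print(address, name)
```

Run on the sample listing, this keeps exactly the two candidates identified by hand in the post: main and random_function.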
-Ron Bowes
@iagox86

Al posted December 28, 2018 at 8:26 PM:
Wow; what a well written document. Really helped with kringlecon's reverse engineering challenge. Thanks a lot.
https://pen-testing.sans.org/blog/2018/12/11/using-gdb-to-call-random-functions/?reply-to-comment=91042
Earlier I mentioned how my computer science and programming class using Python had a problem set asking me to develop a recursive function to detect palindromes. Here is the function I mentioned in the previous post.

def is_palindrome(phrase):
    # base case
    if len(phrase) < 2:
        return True

    # divide into easier calculation
    # and recursive call
    return phrase[0] == phrase[-1] and \
        is_palindrome(phrase[1:len(phrase) - 1])

Since the assignment required me to create a recursive function I didn't think about other solutions at the time. I do remember feeling really good about this solution, however, and how much I liked recursive functions. Everything looked like a potential recursive function :)

Interestingly enough, however, my algorithms course started a week ago, and the first week demonstrated the naive recursive function used to generate the Fibonacci sequence. This was rather eye-opening, and the new knowledge helped me get a more balanced perspective on recursive functions. It also has me looking at the recursive functions I have written in the past to see if there is a better algorithm.

Revisiting Palindromes

As I thought about the definition of a palindrome, a phrase that is the same forward and backward, I realized that an easy test for a palindrome is to reverse the letters in the phrase and compare it to the original phrase. If the two strings are the same, it is a palindrome. If the two strings are different, it isn't a palindrome. Therefore, the recursive function can be replaced with another algorithm.

def is_palindrome(phrase):
    phrase_reversed = phrase[::-1]
    return phrase == phrase_reversed

The expression, phrase[::-1], is slicing notation in Python and returns phrase with its letters reversed. I then compare the original phrase with its reversed form. If they are equal it is a palindrome and the function returns True, otherwise it is not a palindrome and the function returns False. I like this version much better!
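As a quick sanity check (my addition, not part of the original post), the two versions can be run side by side to confirm they agree on a handful of inputs:

```python
# Recursive version from the problem set.
def is_palindrome_recursive(phrase):
    if len(phrase) < 2:        # base case: empty or one-character strings
        return True
    return phrase[0] == phrase[-1] and \
        is_palindrome_recursive(phrase[1:len(phrase) - 1])

# Slicing version: compare the phrase with its reverse.
def is_palindrome_slice(phrase):
    return phrase == phrase[::-1]

for phrase in ["", "a", "racecar", "abba", "python", "ab"]:
    # Both implementations should classify every input identically.
    assert is_palindrome_recursive(phrase) == is_palindrome_slice(phrase)
    print(phrase, is_palindrome_slice(phrase))
```

Note that both versions compare raw characters, so a phrase with spaces or mixed case ("A man a plan...") would need normalizing before either check.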
Seems more intuitive and less complicated than its recursive counterpart. I probably would use this version if the original problem did not require me to write a recursive function.

Conclusion

In just 1 week my algorithms course has me re-thinking algorithms. It's complementing my computer science and programming course using Python really well.

Posted by Koder Dojo
I am learning Python and attending four online courses: 1) computer science and programming using Python, 2) algorithms, 3) data structures, and 4) cryptography. This is in addition to my day job as a C# ASP.NET MVC Web Developer. Python and the online courses are challenging, eye-opening, and a lot of fun. I am writing about my experience to help me sort it all out. I hope you find it useful!
http://www.koderdojo.com/blog/python-algorithms-revisiting-recursive-functions-and-palindromes
How come the following code:

#include <iostream>
#include <cstring>  // for strncpy (missing in the original post)
using namespace std;

class A
{
    int Num;
    char Word[12];
public:
    A(int num, char* word) : Num(num)
    {
        strncpy(Word, word, 12);
        cout << "Building A!\n";
    }
    void Print()
    {
        cout << "Num = " << Num << " Word = " << Word << endl;
    }
};

class B : private A
{
    float fNum;
public:
    B(int num, char* word, float f) : A(num, word), fNum(f)
    {
        cout << "Building B! \n";
    }
    void Print()
    {
        cout << "I'm B!\n";
        cout << "fNum = " << fNum << endl;
        A::Print();
    }
};

class C : public B
{
    A myA;
public:
    C(int num, char* word) : B(num, word, 5.3), myA(num, word)
    {
        cout << "Building C!\n";
    }
    void Print()
    {
        cout << "I'm C!\n";
        myA.Print();
        B::Print();
    }
};

void main()
{
    C FunAndAmusement(3, "Seven");
    FunAndAmusement.Print();
}

gives me the following error:

Error 2 error C2247: 'A' not accessible because 'B' uses 'private' to inherit from 'A'

It is true that B uses private to inherit from A, but what does this have to do with the fact that I'm using an A data member inside C and using C's constructor init line to initialize its A data member?

EDIT: Is this because B inherits privately from A, and that includes A's constructor as well, so C cannot use A's constructor since it doesn't have access to it? If so, is there a way to override it and use A as its data member, like putting A's constructor in the "protected" or "public" section in class C? Thanks.
http://cboard.cprogramming.com/cplusplus-programming/112815-derived-classes-please-help.html
There was some talk in the mailing list a couple years ago about supporting XCB in EGL, but I'm going to guess it never amounted to anything.

#include <stdlib.h>
#include <stdio.h>
#include <xcb/xcb.h>
#include <EGL/egl.h>

void fatal(char *);

int main(void)
{
    xcb_connection_t *xdisplay;
    EGLDisplay edisplay;

    xdisplay = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(xdisplay)) {
        xcb_disconnect(xdisplay);
        fatal("Could not connect to X11 display.");
    }

    edisplay = eglGetDisplay((NativeDisplayType)xdisplay);
    if (edisplay == EGL_NO_DISPLAY) {
        eglTerminate(edisplay);
        xcb_disconnect(xdisplay);
        fatal("EGL failed to connect to display.");
    }

    if (!eglInitialize(edisplay, NULL, NULL)) {
        eglTerminate(edisplay);
        xcb_disconnect(xdisplay);
        fatal("EGL failed to Initialize");
    }

    eglTerminate(edisplay);
    xcb_disconnect(xdisplay);
    return 0;
}

void fatal(char *msg)
{
    fprintf(stderr, "%s\n", msg);
    exit(1);
}

Segfaults at eglInitialize(). It doesn't if I replace all of the XCB calls with Xlib calls. Of course there's a good chance I'm doing it wrong...

x.org suggests that we use xcb, but I notice that the documentation is all over the place and pretty poor. Also, it doesn't seem to be as widely used as they want it to be. That said, xcb does advertise some compelling features, and Xlib doesn't exactly look perfect. So which should we be using? Also, I notice that libegl depends on xcb. Can we cast an xcb connection to an EGL display?
https://bbs.archlinux.org/extern.php?action=feed&tid=153674&type=atom
Red Hat Bugzilla – Full Text Bug Listing

Created attachment 491687 [details]
Code to check for memcpy overlaps

Thanks to recent optimizations in glibc where the memcpy happens backwards, it suddenly becomes more important that memcpy is used properly (without overlaps). For reference, see bug #638477 and: So, we need to check for those system-wide, otherwise there would be silent failures here and there. Right now Flash and squashfs have been identified, but I found others, such as pulseaudio and readahead-collector. In order to check for these easily I'm attaching a memcpy check that can be ld preloaded:

% gcc -O2 -fPIC -Wall -Werror memcpy_check.c --shared -o /tmp/memcpy_check.so
% echo /tmp/memcpy_check.so > /etc/ld.so.preload

You can use

(In reply to comment #1)
> You can use
Yeah, but you have to know which process you want to check beforehand. The purpose of this is to check for the *whole* system... Sort of gprof vs perf.

checking build system type... x86_64-unknown-linux-gnu
checking host option to accept ISO C99... -std=gnu99
checking whether gcc -std=gnu99 and cc understand -c and -o together... yes
checking how to run the C preprocessor... gcc -std=gnu99 -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking whether gcc -std=gnu99 needs -traditional...
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking for a sed that does not truncate output... /bin/sed
checking for fgrep... /bin/grep -F
checking for ld used by gcc -std=gnu99...
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for ar... ar
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc -std=gnu99 object... ok
checking for dlfcn.h... yes
checking whether we are using the GNU C++ compiler... (cached) yes
checking whether g++ accepts -g... (cached) yes
checking dependency style of g++... (cached) gcc3
checking how to run the C++ preprocessor... g++ -E
checking for objdir... .libs
checking if gcc -std=gnu99 supports -fno-rtti -fno-exceptions... no
checking for gcc -std=gnu99 option to produce PIC... -fPIC -DPIC
checking if gcc -std=gnu99 PIC flag -fPIC -DPIC works... yes
checking if gcc -std=gnu99 static flag -static works... no
checking if gcc -std=gnu99 supports -c -o file.o... yes
checking if gcc -std=gnu99 supports -c -o file.o... (cached) yes
checking whether the gcc -std=gnu99 linker (/usr/bin/ld -m elf_x86_64)
checking for ld used by g++... /usr/bin/ld -m elf_x86_64
checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking for g++ option to produce PIC... -fPIC -DPIC
checking if g++ PIC flag -fPIC -DPIC works... yes
checking if g++ static flag -static works... no
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking for ANSI C header files... (cached) yes
checking sys/poll.h usability... yes
checking sys/poll.h presence... yes
checking for sys/poll.h... yes
checking sys/ioctl.h usability... yes
checking sys/ioctl.h presence... yes
checking for sys/ioctl.h... yes
checking byteswap.h usability... yes
checking byteswap.h presence... yes
checking for byteswap.h... yes
checking for library containing bfd_init... no
configure: error: *** libbfd not found

Where are the results of the "test" going to show up? I have several systems, one I KNOW has the buggy flash ...

(In reply to comment #4)
> Where are the result of the "test" going to show up?
> I have several systems, one I KNOW has the buggy flash ...
It will show as a crash, so abrt should catch them. I'm talking of course of the test I mentioned in my description of this bug.

The memcpy_check seems to be working:

ABRT won't let me file a bug report on debuginfo-install xulrunner, and xulrunner is crapping out every time it runs (this machine used to run flash without any noticed issues). Please see attached "text file" with cut/paste of abrt dump.

Created attachment 491727 [details]
debug info xulrunner

Created attachment 491837 [details]
flash failure on Acer Aspire 5734z-4836
This attachment is from a dual pentium T4500

(In reply to comment #8)
> Created attachment 491837 [details]
> flash failure on Acer Aspire 5734z-4836
> This attachment is from a dual pentium T4500
That one is not in Fedora, that's Adobe:

Unfortunately, I can not test the "sound bug" when the memcpy_check is blowing the flash player out of the water ... all SUGGESTIONS welcome. Is there a "glibc" F14 that I can install to check the fix?

(In reply to comment #10)
> Unfortunately, I can not test the "sound bug" when the memcpy_check is blowing
> the flash player out of the water ... all SUGGESTIONS welcome.
You are just introducing noise to this bug, so let me explain it _once_. Flash has a bug in the way it uses memcpy, period. All memcpy_check is doing is exposing the bug, by crashing each time memcpy is used wrongly. Blowing Flash out of the water is exactly its purpose; if that's not what you want, don't use it. If you want your Flash fixed, go to Adobe; this bug is not related to that in any way. I repeat, do _not_ post any Adobe Flash related issues here, that is off topic.

> Is there a "glibc" F14 that I can install to check the fix ?
Which fix? There is no fix from Adobe. If you want Flash to work you can modify memcpy_check to remove the check:

---
#include <string.h>

void *memcpy(void *dst, const void *src, size_t n)
{
    return memmove(dst, src, n);
}
---

But again, that has nothing to do with this bug. Please stop.
Created attachment 493497 [details]
Code to check for memcpy overlaps
Corrected code to check for overlaps. The check was failing if src == dst.

Created attachment 496332 [details]
unzip stacktrace
Created with gdb.

Comment on attachment 496332 [details]
unzip stacktrace
unzip from unzip-6.0-3.fc14.x86_64 crashes when using unzip -t somefile.zip

(In reply to comment #14)
> Comment on attachment 496332 [details]
> unzip stacktrace
> unzip from unzip-6.0-3.fc14.x86_64 crashes when using unzip -t somefile.zip
You should file a separate bug report to the 'unzip' component, and mark it as blocking this one.

Attachment #491727 [details] and attachment #491837 [details] should be marked as obsolete, probably attachment #496332 [details] too.

Nobody seems to care.
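To illustrate the failure mode this bug report is about (my sketch, not part of the report): a memcpy implementation that copies from the end of the region backwards corrupts data when the regions overlap with dst below src, even though a forward copy happens to work for that case. A small Python simulation of the two copy directions:

```python
def copy_forwards(buf, dst, src, n):
    # Simulates the traditional memcpy direction: low address to high.
    # Happens to work for overlapping shift-down copies (dst < src).
    for i in range(n):
        buf[dst + i] = buf[src + i]

def copy_backwards(buf, dst, src, n):
    # Simulates an optimized memcpy that copies high address to low.
    # For dst < src overlap, source bytes are overwritten before being read.
    for i in range(n - 1, -1, -1):
        buf[dst + i] = buf[src + i]

# Overlapping shift-down: move bytes 2..7 to positions 0..5 (dst < src).
data1 = bytearray(b"xxABCDEF")
copy_forwards(data1, 0, 2, 6)
print(data1)    # bytearray(b'ABCDEFEF') - the intended result

data2 = bytearray(b"xxABCDEF")
copy_backwards(data2, 0, 2, 6)
print(data2)    # bytearray(b'EFEFEFEF') - garbage from the overlap
```

Code that relied on the forward behavior broke silently when glibc's memcpy started copying backwards; memmove, which checks the overlap and picks a safe direction, is the correct call for overlapping regions, which is exactly why the "remove the check" snippet above forwards memcpy to memmove.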
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=696096
ASP:

if (Session["FirstName"] == null)
{
    LabelFirstName.Text = "FirstName";
}
else
{
    LabelFirstName.Text = (string)Session["FirstName"];
}
if (Session["LastName"] == null)
{
    LabelLastName.Text = "LastName";
}
else
{
    LabelLastName.Text = (string)Session["LastName"];
}

This method has several drawbacks. So let's see the wrapper object and how it solves those issues:

public static class SessionWrapper
{
    private static T GetFromSession<T>(string key)
    {
        object obj = HttpContext.Current.Session[key];
        if (obj == null)
        {
            return default(T);
        }
        return (T)obj;
    }

    private static void SetInSession<T>(string key, T value)
    {
        if (value == null)
        {
            HttpContext.Current.Session.Remove(key);
        }
        else
        {
            HttpContext.Current.Session[key] = value;
        }
    }

    private static T GetFromApplication<T>(string key)
    {
        return (T)HttpContext.Current.Application[key];
    }

    private static void SetInApplication<T>(string key, T value)
    {
        if (value == null)
        {
            HttpContext.Current.Application.Remove(key);
        }
        else
        {
            HttpContext.Current.Application[key] = value;
        }
    }

    public static string FirstName
    {
        get { return GetFromSession<string>("FirstName"); }
        set { SetInSession<string>("FirstName", value); }
    }

    public static string LastName
    {
        get { return GetFromSession<string>("LastName"); }
        set { SetInSession<string>("LastName", value); }
    }

    public static User User
    {
        get { return GetFromSession<User>("User"); }
        set { SetInSession<User>("User", value); }
    }
}

It contains a few generic private functions to read and write objects from/to the Session or the Application objects, and public properties to be used from the code to access the required values. Note that the use of HttpContext.Current requires a valid request, or Current might be null!

This wrapper provides safe typed access from all over the application, and one place to define all objects that might be stored in the session. If you want to move objects from the Session to another storage mechanism, this would be solved in the wrapper only.
The usage is also much easier:

LabelFirstName.Text = SessionWrapper.FirstName;
LabelLastName.Text = SessionWrapper.LastName;

Note that you can add any initialization code or default values in the wrapper:

public static string FirstName
{
    get
    {
        string firstName = GetFromSession<string>("FirstName");
        if (string.IsNullOrEmpty(firstName))
        {
            firstName = "FirstName";
            SetInSession<string>("FirstName", firstName);
        }
        return firstName;
    }
    set { SetInSession<string>("FirstName", value); }
}

AdamJTP Said on May 7, 2008:
We use something like this in TeamLive - it also has the advantage that you can deal with session timeouts (and authentication hasn't timed out) in one place. An alternative (but not necessarily better) is to use extension methods on HttpSessionState so that new developers (who may be used to dealing with session directly) notice the session facade. e.g. Once the developer familiar with vanilla Session has typed in "Session." - the intellisense may prompt them to wonder what the GetFirstName() method does? e.g.
public static class SessionHelper
{
    private const string _sessKeyFirstName = "FirstName";

    /// your code but with a "this" param
    public static string GetFirstName(this HttpSessionState session)
    {
        string firstName = GetFromSession<string>(_sessKeyFirstName);
        if (string.IsNullOrEmpty(firstName))
        {
            firstName = _sessKeyFirstName;
            SetInSession<string>(_sessKeyFirstName, firstName);
        }
        return firstName;
    }

    // all your stuff
    private static T GetFromSession<T>(string key)
    {
        object obj = HttpContext.Current.Session[key];
        if (obj == null)
        {
            return default(T);
        }
        return (T)obj;
    }

    private static void SetInSession<T>(string key, T value)
    {
        if (value == null)
        {
            HttpContext.Current.Session.Remove(key);
        }
        else
        {
            HttpContext.Current.Session[key] = value;
        }
    }
}

Ideally we'd be able to use extension properties (".FirstName" (as you've done) seems more natural than ".GetFirstName()") - but I suppose methods have the advantage of being able to take parameters later on - but I guess it's down to dev team coding preference. e.g.:

/// Optional update if name has been
/// updated by different user
/// or data feed?
GetFirstName(this HttpSessionState sess, bool refreshSessionFromDb)
{
}

/// default case
GetFirstName(this HttpSessionState sess)
{
    return sess.GetFirstName(false);
}

Daniel Said on May 7, 2008:
Another option would be to store a single "Plain Old Class Object" in session or application:

internal class MyAppSession
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public User User { get; set; }

    public static MyAppSession Current
    {
        get
        {
            if (HttpContext.Current.Session["MyAppSession"] == null)
                HttpContext.Current.Session["MyAppSession"] = new MyAppSession();
            return (MyAppSession)HttpContext.Current.Session["MyAppSession"];
        }
    }
}

This makes the properties easier to declare, and you can initialize their values in the constructor.

Brian Lowry Said on May 7, 2008:
While I like the idea, I don't think it's good future-proofing to make the session type-safe in this way.
It makes more sense to me to have session accessed from within a domain object. What happens if your requirements change and you can't use Session anymore? You haven't successfully abstracted away the location of the stored variable by calling static methods on a SessionWrapper class. Instead, I would prefer:

internal class Employee
{
    public string FirstName { get {…} set {…} }
    public string LastName { get {…} set {…} }
}

And use the get and set to access the session value. Additionally, there should be some form of naming convention for the session key so that there are no collisions… so add in:

public const string FirstNameKey = "Employee.FirstName";
public const string LastNameKey = "Employee.LastName";

This keeps you from having to worry about typos and such.

Brian Lowry Said on May 7, 2008:
I also wanted to add that the first part of the SessionWrapper class would be good to use when accessing them in the manner I stated above… I just wouldn't put FirstName, LastName, etc. as part of the Session class.

Will Said on May 7, 2008:
I use a slightly different pattern: I like passing in a delegate to a method that will store and return the default value. The same pattern can be used for the viewstate, the cache, or any other similar construct.

Mooglegiant Said on May 20, 2008:
This is very helpful for me. However, when I was trying it out on a sample web app, I kept getting a "Specified cast is not valid" error when trying to get an int from the session. I checked though, and the value is a "1". I'm just not sure what the problem is with the casting.

Thanks Said on Jul 5, 2008:
Thank You!!!! Helped me a lot

Pradeep Said on Jul 25, 2008:
Great post. Was able to use the code without any problem. Thanks for other comments too!

Golem Said on Sep 19, 2008:
Nice job. I've implemented a similar mechanism into a project I'm working on.
One thing I've run into is that at times, there may be a need to create an "ad hoc" session variable, where the key name is dynamically generated based on a variable, i.e.

Session["Preference_" + userID];

I have not been able to come up with a solution for this that fits into the SessionWrapper class, where the keys need to be hard-coded into the class.

Chris Marisic Said on Sep 29, 2008:
Go Go one line lazy loading get statement:

private static T GetFromSession<T>(string key)
{
    return (T)(HttpContext.Current.Session[key] ?? (HttpContext.Current.Session[key] = default(T)));
}

Chris Marisic Said on Sep 29, 2008:
Proper declaring of the Set method:

private static void SetInSession<T>(string key, T value)
{
    if (Equals(value, default(T)))
    {
        HttpContext.Current.Session.Remove(key);
    }
    else
    {
        HttpContext.Current.Session[key] = value;
    }
}

The only really weird case is if you store primitive types in the session using this wrapper, like int x: to remove x you need to pass in 0, which is default(int), but I think that's a fair trade off, since with the other declaration you would need to try to pass in SetInSession(key, null), which wouldn't work because null isn't a valid value of int.

Vijay Said on Jan 23, 2009:
Hi, I just need a clarification: when using the above wrapper method, won't your server memory get filled up as more users log in? Please let me know how to overcome that.

Yves Said on Apr 26, 2009:
Do not forget… when accessing HttpContext.Current.Application the code must be thread safe. I guess you know why, gentlemen.

Jaimir Guerrero Said on May 19, 2009:
?, thanks
Jaimir G.
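The pattern in this article (typed accessors with lazy defaults over an untyped key/value store) is not specific to ASP.NET. As an illustration only, here is the same idea sketched in Python, with a plain dict standing in for the session; all names here are hypothetical:

```python
class SessionWrapper:
    """Typed accessors with lazy defaults over an untyped store."""

    def __init__(self, store=None):
        self._store = store if store is not None else {}

    def _get(self, key, default=None):
        value = self._store.get(key)
        if value is None and default is not None:
            value = default            # lazy-initialize, like the C# getter
            self._store[key] = value
        return value

    def _set(self, key, value):
        if value is None:
            self._store.pop(key, None)  # assigning None removes the entry
        else:
            self._store[key] = value

    @property
    def first_name(self):
        return self._get("FirstName", default="FirstName")

    @first_name.setter
    def first_name(self, value):
        self._set("FirstName", value)

session = SessionWrapper()
print(session.first_name)    # default kicks in: "FirstName"
session.first_name = "John"
print(session.first_name)    # "John"
session.first_name = None    # removes the key; next read re-defaults
```

As in the C# version, all keys and defaults live in one place, so swapping the backing store (dict, cache, database) touches only the wrapper.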
Source: http://www.dev102.com/2008/05/07/why-should-you-wrap-your-aspnet-session-object/
Quickstart: Creating a connected app with mobile services

By starting with the sample that this topic describes, you can create any turn-based game that users play over a network. The TicTacToe sample is a simple Windows Store app that multiple users can play on various devices that are running Windows 8.1. The app stores game state by using mobile services and communicates updates by using push notifications.

Prerequisites

- Windows 8.1.
- Microsoft Visual Studio 2013 with NuGet installed. To determine whether NuGet is installed, open the shortcut menu for a project node in Solution Explorer. If Manage NuGet Packages appears, NuGet is installed. Otherwise, download it from NuGet Package Manager for Visual Studio 2013.
- The Azure SDK, which you can get from the Microsoft Azure Download page.
- An Azure account with a subscription. If you don't have one, sign up for a free trial.
- Windows PowerShell.
- Azure Command-Line Environment (CLI). See How to: Install and Configure the Azure Cross-Platform Command-Line Interface.
- The TicTacToe sample files, which you can get from MSDN Code Gallery.

Install the sample, and set up the mobile service

- Extract the files from TicTacToe.zip, and then open the TicTacToe solution.
- Download a subscription file for your Azure subscription. If you have the Azure tools installed, you can download this file by following these steps.
  - In Server Explorer, open the shortcut menu for the Microsoft Azure node, and then choose Manage Subscriptions.
  - On the Certificates tab, choose the Import button, and then choose the Download subscription file link.
  - If prompted, sign in with the credentials that you use to access your Azure subscription.
  - Confirm the download, and note the file location.
  - In Visual Studio, choose the Cancel button, and then choose the Close button.
- If PowerShell scripts aren't enabled, follow these steps.
  - In an Azure Command Prompt window with administrative permissions, enter powershell.
  - In the PowerShell command prompt that appears, enter Set-ExecutionPolicy unrestricted.
- Change directories to the TicTacToe\TicTacToeCSharp\Script subdirectory of the directory where you installed the sample, and then run one of the following commands. Specify a new value for the mobile service that you're creating.

  Note: You'll need a unique name for your copy of the TicTacToe mobile service, such as TicTacToeYourName. An error will appear if you choose a name that's already in use.

  Use this command line if you want to create a database and use the first subscription in the file. Specify new values for the ID and password of the administrator for the database that you're creating:

      powershell -File "tictactoesetup.ps1" --subscriptionFile "YourSubscriptionFile" --serviceName "YourMobileServiceName" --serverAdmin "DatabaseAdminUserId" --serverPassword "DatabaseAdminPassword"

  The script might take some time to run, as it creates a mobile service and all the tables and server-side scripts that the sample needs.

  Add one or both optional parameters if you want to use an existing SQL Database in Azure or if you have multiple subscriptions and you don't want to use the first one in the list. Specify the ID and password of the administrator for the existing database:

      powershell -File "tictactoesetup.ps1" --subscriptionFile "YourSubscriptionFile" --subscriptionId "YourSubscriptionId" --serviceName "YourMobileServiceName" --serverName "YourSQLServerInAzure" --databaseName "YourSQLDatabaseName" --serverAdmin "YourDatabaseAdminUserId" --serverPassword "YourDatabaseAdminPassword"

  If you've never created a mobile service with your Azure subscription before, the script may fail with an error that your subscription isn't registered to use mobile services. If this error appears, go to the Azure management portal, create a mobile service, and then re-run these steps. You can then create mobile services by using the Azure CLI.
- In Internet Explorer, open the Azure management portal, and then choose your mobile service.
- On the Data tab, verify that four tables were created: games, moves, userfriends, and users.

Set up permissions and authentication

- In the management portal, choose your mobile service, and then choose the Manage Keys button.
- Choose the Copy button to copy the application key for your mobile service to the Clipboard.
- In the TicTacToeCSharp solution, open App.xaml.cs, and locate the first line of the App class.
- In the call to the constructor for the MobileServiceClient, insert the name of your mobile service, and paste the application key where requested.

In the next step, you'll register your version of the TicTacToe app with the Windows Store.

- In Solution Explorer, open the shortcut menu for the TicTacToeCSharp project, choose Store, and then choose Associate App with the Store. Specify an app name that isn't already in use. Visual Studio updates the Package.appxmanifest file and adds a Package.StoreAssociation.xml file and a store key file with the .pfx extension to your project.

In the next step, you'll register your app with Live Connect so that you can authenticate with Microsoft accounts.

- In Internet Explorer, open the Live Connect Developer Center.
- In the list, choose the app that you just created, choose the Edit Settings link, and then choose the API Settings link. See Register your apps to use a Microsoft account login.
- In the Redirect domain text box, enter the URL of the mobile service, and then choose the Save button.
- Note the Client ID and Client Secret; you'll need them in the next step, when you register for authentication and configure the permissions.
- In the Azure management portal, choose your mobile service, and then choose the Identity tab.
- In the microsoft account settings section, enter the client ID and client secret, and then choose the Save button. See Getting started with authentication.
- Follow these steps to configure push notifications with information about your app. See How to authenticate with the Windows Push Notification Service (WNS).
  - Sign in to the Windows Store apps page of the Windows Dev Center, and then choose to edit the app that you just created.
  - Choose the Services link, and then choose the Live Services site link.
  - Choose the Authenticating your Service link, and find the Package Security Identifier (SID) and Client secret.
  - In the Azure management portal, open your mobile service.
  - On the Push tab, copy the Package SID and client secret into the appropriate places under windows application credentials, and then choose the Save button.

Run and test the sample

- In Visual Studio, build the sample (Keyboard: Ctrl+Shift+B). This step also downloads and restores the packages on which the sample depends.
- Choose the Start button (Keyboard: F5) to build the solution, and then sign in with a Microsoft account when requested. If you aren't requested to sign in, repeat items 7 through 12 in the previous procedure, Set Up Permissions and Authentication.
- If a dialog box about privacy appears, choose the Yes button. The app starts.
- In the top-right corner, enter a user name in the box to create a player, and then choose the Register button. When you sign in to TicTacToe with the same Microsoft account, you'll be signed in automatically as this player.
- Before you play a game, create another player by performing one of the following sets of steps:
  - Close the app (Keyboard: Alt+F4), restart the app, sign in using a different Microsoft account, and then repeat the previous step with a different user name.
  - Install the sample on another computer or virtual machine that's running Windows 8.1, and then sign in using a different Microsoft account. If the computer or virtual machine isn't running Visual Studio 2013, you must sideload the app. See Sideload Windows Store Apps.
You might want to start by creating both players on the same computer and, later, install the app on a different computer to test simultaneous play.

- Perform the following steps to create a list of friends and then start a game with one of them:
  - In the Search for user text box, enter a user name that you created, and then choose the Search button.
  - In the Search Results box, choose a player, and then choose the Add Friend button.
  - (Optional) Repeat the previous two steps to add more players.
  - In the Opponent text box, choose a player in the list that you just created, choose New Game, make a move, and then choose the Submit Move button.

Inside the Code: Creating the mobile service client

The code for the mobile service client is in the file TicTacToeMobileServiceClient.cs and uses the .NET client library. You can code the client in a couple of ways, but you write the cleanest code by creating classes that map directly onto the tables in the database, with public members that match the columns in the data table. In this way, you can get the output of your queries directly as objects, instead of having to deserialize the query results as you'd have to do if you used the non-generic versions of the same APIs.

Each data table has an associated C# class that contains fields that map directly onto the data table. You can use these classes with the generic versions of the client API for the mobile service, as the following code shows. It starts a game by inserting a record in the games table.
    public async Task<int> CreateGame(int userKey1, int userKey2, ITurnGame game)
    {
        Game gameObj = new Game
        {
            User1 = userKey1,
            User2 = userKey2,
            Board = game.GameBoard,
            GameResult = (int)game.Result,
            CurrentPlayerIndex = game.CurrentPlayerIndex
        };

        // Insert a new entry in the "games" table
        await client.GetTable<Game>().InsertAsync(gameObj);

        // Return the gameID
        return (int)gameObj.id;
    }

The methods that result in mobile service calls are all asynchronous, so you usually write asynchronous client methods.

Summary

Now you understand how to use a mobile service to create a simple connected app such as a turn-based game. Admittedly, tic-tac-toe might not be the most exciting challenge. However, you can design and create your own apps by using the sample code. You could create another turn-based game that you can play on a grid, like chess, concentration, go, or checkers. The TicTacToe class contains most of the programming that's specific to the tic-tac-toe game. The TicTacToe class implements ITurnGame, so you would need to provide a different implementation of ITurnGame to create the game of your choice.
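The typed-table idea described above (a class whose public members mirror the table's columns, so that query rows come back as objects rather than untyped results) can be sketched outside of C# as well. The following is a hypothetical Python illustration of the same mapping pattern, not part of the sample's actual client library; the Game field names follow the C# snippet above:

```python
from dataclasses import dataclass, fields

# Hypothetical sketch of the typed-table mapping described above: a class
# whose fields mirror the table columns, so a query row deserializes
# straight into an object instead of staying a raw dictionary.
@dataclass
class Game:
    id: int
    User1: int
    User2: int
    Board: str
    GameResult: int
    CurrentPlayerIndex: int

def row_to_object(cls, row):
    """Build an instance from a row dict, ignoring columns the class lacks."""
    names = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in row.items() if k in names})
```

The benefit is the same one the article claims for the generic .NET APIs: downstream code works with attributes (game.Board) instead of string-keyed lookups, and unknown columns are dropped at one well-defined boundary.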
https://msdn.microsoft.com/en-us/library/windows/apps/xaml/dn481211.aspx
This question is about interval comparison of nested integer intervals. Assume three ranges of integers, which I call target ranges for the sake of simplicity. These target ranges never overlap, but may be of different lengths.

> target1 = range(1,10000)
> target2 = range(10001,20000)
> target3 = range(20001,25000)
> test1 = range(900,5000)    # entirely in target1
> test2 = range(9900,10500)  # mostly in target2, but crosses into target1
> sought_function(test1, [target1, target2, target3])  # 1
> sought_function(test2, [target1, target2, target3])  # 2

def nested_in_which(test, targets):
    for n, t in enumerate(targets):
        if test[0] in t and test[-1] in t:
            return(n)
        else:
            if test[0] in t and n + 1 < len(targets) and test[-1] in targets[n+1]:
                return(n+1)
                # Overlap comparison not yet implemented

If you think of each range as a set, you want the target range that gives you the largest intersection with the test set. So if you calculate the length of the intersection between each target and the test, and return the index of the max intersection, you should have what you want. Here is some rough code that does it:

def which_range( testRange, *targetRanges ):
    testRange = set( testRange )
    tests = [ ( i, len( set( targetRange ).intersection( testRange ) ) )
              for i, targetRange in enumerate( targetRanges ) ]
    return max( tests, key=lambda x: x[1] )[0]

>>> which_range( range(9900,10500), range(1,10000), range(10001,20000), range(20001,25000) )
1  # the second target range
>>> which_range( range(900,5000), range(1,10000), range(10001,20000), range(20001,25000) )
0  # the first target range
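Materializing each range as a set costs time and memory proportional to the range length. Since both the test range and the targets are contiguous, the size of each intersection can be computed from the endpoints alone in constant time. A sketch (the name which_range_fast is mine, not from the answer above):

```python
def which_range_fast(test, targets):
    """Index of the target range sharing the most integers with `test`.

    Works on contiguous step-1 ranges only: the size of the intersection
    of two integer intervals follows from their endpoints in O(1),
    instead of building full set objects.
    """
    def overlap(a, b):
        # Number of integers contained in both a and b (0 when disjoint).
        return max(0, min(a[-1], b[-1]) - max(a[0], b[0]) + 1)

    return max(range(len(targets)), key=lambda i: overlap(test, targets[i]))
```

For the example above, which_range_fast(range(9900, 10500), [range(1, 10000), range(10001, 20000), range(20001, 25000)]) gives 1, matching the set-based version, because the overlaps are 100, 499, and 0 integers respectively.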
https://codedump.io/share/7CFI6m7bUrIl/1/evaluation-of-nested-integer-intervals-in-python
Created attachment 878140 [details]
server.log

I'm hitting an error when a JTS transaction is used in a scenario where the connection to the database is halted during the commit phase. When the connection is down, the JDBC driver throws an XAException. If it is XAException.XAER_RMERR, then HeuristicMixedException is returned to the client when the previous resource is already committed. The transaction log should then contain a record with status HEURISTIC on this resource.

Scenario: commitHaltRev

Steps:
a. enlistment of test XA resource
b. enlistment of JDBC XA resource
c. prepare test XA resource
d. prepare JDBC XA resource
e. killing connection to database
f. commit test XA resource
g. commit JDBC XA resource -> fails because the connection is down
   g-a. JDBC XA resource returns XAException.XAER_RMERR
   g-b. BasicAction::doCommit() - TwoPhaseOutcome.HEURISTIC_ROLLBACK
h. start the connection to database
i. start recovery

Behavior (JTS):
- before recovery: Arjuna tx log contains a record with status "HEURISTIC", prepared record in db log
- after: Arjuna tx log contains a record with status "PREPARED", prepared record in db log

It seems that the JTS implementation somehow does not refresh the log store. When I put a :probe() operation before the start of recovery, JTS behaves in the same way as JTA and leaves the state at HEURISTIC.

Tom's reply to my question about this issue: This looks like a bug for JTS, not likely a bad one, as I would guess the true state of the transaction is still recorded as a heuristic but not presented correctly.
Created attachment 878141 [details]
ds.properties file

This can be reproduced by a test case of the crash recovery testsuite. The test has to be run with a database other than Oracle, because Oracle returns XAER_RMFAIL and the resource is committed in the end.

    git clone -b lrco git://git.app.eng.bos.redhat.com/jbossqe/eap-tests-transactions.git
    cd eap-tests-transactions/jbossts
    mvn clean verify -Dtest=JPAProxyCrashRecoveryTestCase#commitHaltRev -Dno.cleanup.at.teardown -Djbossts.noJTA -Dds.properties=path/to/ds.properties

Created attachment 892926 [details]
server.log from new test run on EAP 6.3.0.ER3

I retested this issue on EAP 6.3.0.ER3 and found several things. It seems that probeLog is not necessary anymore: when recovery is called twice, the HEURISTIC type of exception is correctly shown in the store. But there is a somewhat more serious consequence which can cause data inconsistency. When recovery is called several times, the resource (in this case the PostgreSQL database) seems to return (at time 15:50:14,569; adding the server.log as attachment)

    XAResourceRecord.recover got status: CosTransactions::StatusRolledBack

and the data is rolled back. But the Test XA resource is already committed. This seems to be wrong. Another thing is that the transaction record is put under AssumedCompleteTransaction. The transaction log directory structure looks like:

    tx-object-store
    └── ShadowNoFileLockStore
        └── defaultStore
            ├── CosTransactions
            │   └── XAResourceRecord
            ├── Recovery
            │   └── FactoryContact
            │       └── 0_ffff7f000001_-4924aa24_5368e7f9_13
            ├── RecoveryCoordinator
            │   └── 0_ffff52e38d0c_c91_4140398c_0
            └── StateManager
                └── BasicAction
                    └── TwoPhaseCoordinator
                        └── ArjunaTransactionImple
                            └── AssumedCompleteTransaction
                                └── 0_ffff7f000001_-4924aa24_5368e7f9_3a

In fact, the original issue with the necessity of calling probeLog seems to have disappeared, but the issue happening in the same scenario seems more serious, so I'm raising the severity of the issue to high.

Created attachment 914097 [details]
PAProxyCrashRecoveryTestCase_commitHaltRev_jts_server.log.html

I did a bit more investigation on this issue and I think that it's connected with bz#1080035.
Both have the same symptoms: the TM tries to commit several times, then gives up and does a rollback. Maybe it's a duplicate of 1080035; it just happens in a different test scenario. After the restart of the server, the TM tries to commit, but despite there being three calls of phase2commit and related work, the real work is not done. Then after some time a rollback is launched by ResourceCompletor.rollback(). I'm not sure whether ResourceCompletor is triggered by a resource timeout (in this case Oracle cleaning up the XA branch after some time, with that time elapsing after the third recovery so that the fourth one cleans the log by rolling back the txn), or whether there is some issue in the TM such that it rolls the transaction back before it could be committed as expected. Nevertheless, the result is that the data is in an inconsistent state: the first resource was committed but the second one was rolled back, and moreover the user will not get any useful information that this happened. I want to point out once again that the original issue with 'log is not refreshed' disappeared. Now it's about consistency of data amongst resources belonging to the same transaction.

Based on the XA specification, if XAER_RMERR is returned during the commit, it means that an error occurred while committing the branch and its work was rolled back. The user is informed about this with HeuristicMixedException, as well as by a transaction log kept in the store. It seems that the behaviour is correct and the assertions at the end of the test should be changed.

p.s. The workflow of this test will change a little bit once the fix for 1080035 is merged. However, with it or without it, the outcome is the same. Also, it might be worth contacting Postgres about this. In a scenario as tested here, XAER_RMFAIL is a more appropriate response.

Hi Gytis, I'm sorry for being a bit late with my response. I see your point but, please, let me put here a few arguments as seen from my side.

1. The first "strange" thing for me is that everything works fine with JTA.
It's only JTS that does not work correctly when XAER_RMERR is returned.

2. The problematic XAER_RMERR exception code is returned for the databases PostgreSQL, MSSQL and Sybase (MySQL behaves in a similar way but "incorrectly" returns XAER_RMFAIL).

3. I can't find information in the spec that XAER_RMERR would force a rollback. But I may be searching wrongly. In fact, the database does not do any rollback; it's the TM that orders a rollback after some time, when everything is already up and working. I understand that the user is "warned" by HeuristicMixedException about the inconsistency. But in this case the RM does not make any heuristic decision, and no rollback was done by it. The "heuristic decision" is made by the JTS recovery in its third run. This seems to me like changing an in-doubt state into an inconsistent state.

I would like to add one more point. Changing the return code to XAER_RMFAIL is (at least as I understand the txn workflow) not the only change that would be needed. If it were, it would be just a "simple" change in the JDBC driver. But if the driver returns XAER_RMFAIL, then it's its responsibility to deliver the commit after the connection is renewed. From what I saw in our test, the TM does not care about resources that return RMFAIL; the TM expects that such resources manage the trouble on their own. If RMERR is returned, then the TM preserves the log record for its purposes, and the user can then react to the situation.

Verified on revision EAP 6.4.0.DR11. Now XAResourceRecord.recover gets status "CosTransactions.HeuristicRollback", and the transaction is moved to "TwoPhaseCoordinator/ArjunaTransactionImple/AssumedCompleteHeuristicTransaction".

For verifying the JTS transaction participant's status a new BZ was created, bz#1168973, as JTS participants are not showing up in the tooling.

Tom Jenkinson <tom.jenkinson@redhat.com> updated the status of jira JBTM-2274 to Closed.
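The RMERR-versus-RMFAIL distinction debated above can be summarized as a small decision function. The constant values below match javax.transaction.xa.XAException, but the classification itself is an illustrative Python sketch of the behaviour discussed in this thread, not JBoss TS code:

```python
# Illustrative sketch (not JBoss TS / Narayana code) of how a transaction
# manager distinguishes XA commit failures. The constant values match
# javax.transaction.xa.XAException.
XAER_RMERR = -3   # error committing the branch; branch work was rolled back
XAER_RMFAIL = -7  # resource manager unavailable; the commit may be retried

def classify_commit_failure(error_code):
    """Decide what a TM should do after commit() throws an XAException."""
    if error_code == XAER_RMFAIL:
        # Resource was merely unreachable: keep the log and retry the
        # commit from periodic recovery once the connection returns.
        return "retry-commit"
    if error_code == XAER_RMERR:
        # Branch outcome diverged from the already-committed branches:
        # keep a heuristic record and surface HeuristicMixedException.
        return "heuristic-record"
    return "unknown"
```

This is exactly why the choice of error code by the JDBC driver matters so much in the scenario above: RMFAIL keeps the transaction in-doubt and recoverable, while RMERR converts it into a heuristic outcome.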
https://bugzilla.redhat.com/show_bug.cgi?id=1080140
ABORT(3)                 Library Functions Manual                 ABORT(3)

NAME
     abort - cause abnormal program termination

SYNOPSIS
     #include <stdlib.h>

     void
     abort(void);

DESCRIPTION
     The abort() function causes abnormal program termination to occur,
     unless the signal SIGABRT is being caught and the signal handler does
     not return.

     Some implementations may flush output streams before terminating.
     This implementation does not.

RETURN VALUES
     The abort() function never returns.

SEE ALSO
     sigaction(2), exit(3)

STANDARDS
     The abort() function conforms to IEEE Std 1003.1-1990 ("POSIX.1").

HISTORY
     The abort() function first appeared in Version 5 AT&T UNIX.

CAVEATS
     Historically, previous standards required abort() to flush and close
     output streams, but this conflicted with the requirement that abort()
     be async signal safe. As a result, the flushing requirement was
     dropped.

OpenBSD 5.7                      May 14, 2014                      OpenBSD 5.7
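The behaviour described above (abnormal termination by SIGABRT, with abort() never returning) can be observed from a higher-level language. This is a hedged illustration assuming a POSIX system; Python's os.abort() is a thin wrapper over the C abort(3) function:

```python
import signal
import subprocess
import sys

# Run a child process that calls abort(). Since nothing in the child
# catches SIGABRT, the call never returns and the child terminates
# abnormally.
child = subprocess.run([sys.executable, "-c", "import os; os.abort()"])

# On POSIX, subprocess reports death-by-signal as a negative return code.
print(child.returncode == -signal.SIGABRT)
```

The negative return code (the killing signal's number, negated) is how the parent distinguishes abnormal termination from an ordinary exit status.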
http://resin.csoft.net/cgi-bin/man.cgi?section=3&topic=abort
Related Java applet questions and topics:

- Applet: what is the applet concept in HTML, and what is meant by Swing? An applet is a Java program that can be embedded into HTML pages; Java applets run in Java-enabled web browsers.
- Java applet: what tags are mandatory when creating HTML to display an applet?
- The Java Applet Viewer: how to build an applet and the HTML file for it (create a class, start the applet in the init method, use the paint method to give the dimensions of the applet, and embed it with the applet tag in the HTML body).
- Passing parameters to a Java applet: parameters go in the <param> tag of the HTML file, not in the Java code, and are read with getParameter().
- Java - Read File Applet: the codebase is the parent of the HTML file the applet is embedded in, not the URL of the applet.
- Java - Applet Hello World: to run the applet, open the HTML file in a web browser.
- Using an applet in JSP: embed the applet with the <APPLET CODE="..."> tag in the JSP page.
- Applet Write Files Example: writing a file from an applet requires adjusting the Java security policy; running the applet needs an HTML file.
- Java Applet: how to add an image in a Java applet? What is getDocumentBase()?
- Java Applet - Creating First Applet Example: browsers can cache applets, which makes them easy to load; as a disadvantage, web browsers and operating systems require a Java plug-in to run applets.
- Java applet: how do I go from my applet to another JSP or HTML page?
- Java applet: how to close an applet window opened by a button of another applet program.
- Eclipse applet won't play sound / Play Audio in Java Applet: playing an audio clip in the applet viewer or in the browser.
- Java loan calculator applet: writing a Java applet (together with its HTML test file) to calculate loan payments from a user-provided interest rate.
- core java - Applet: how to draw a single line with the mouse in an applet.
- The life cycle of an applet: the init() method retrieves parameters passed through the <param> tag of the HTML file using getParameter(); start() is called when the applet is to be started or restarted.
- Applet in Eclipse: running an applet in Eclipse 3.0; an applet is a little Java program that runs inside a web browser, loaded via the HTML document.
- Java applet runtime error (PlayerApplet, Player, MediaLocator).
- Java applet: what is the AppletStub interface? What are the types of applets?
- Applet problem: how can a signed applet create a file on the client side?
- Passing parameters (font size, font style) from HTML to an applet.
- Java applet: in what sequence does the AWT call the applet methods? What is the relationship between the Canvas class and the Graphics class?
- The APPLET tag in detail: align attribute values such as texttop, middle and baseline control how the applet aligns with the current line of the HTML file.
- Hide Close Button in Java Applet: an HTML frames page plus applet code to show and hide buttons on the applet.
- Java program: move text in an applet from right to left.
- Difference between applet and Swing: an applet needs HTML code to run, while Swing (javax.swing, part of the JFC) has its own containers; AWT stands for Abstract Window Toolkit.
- Java image browsing applet: an applet embedded in an HTML page that displays an image chosen by browsing files on the hard disk.
- Scrollbar in an applet (java.awt.Scrollbar in an Applet subclass).
- Clock Applet in Java: how to use a clock in an applet.
- Problem showing a card in an applet (card demo run as a Java applet).
- Servlet to applet communication: calling the applet with an HTML file and reading data from the servlet in the applet.
- Core Java topics: a list of core Java subjects with simple examples.
- What is XML? For example, a Java web application uses the web.xml file to declare its configuration.
- How to connect a database table with a scrollbar in a Java applet; using the archive tag to download the applet into the browser and controlling loading within the applet.
- jsp plugin implementation - Applet: "APPLET NOT INITED" error shown in the status bar when the applet classes are placed under WEB-INF/classes.
- Java: create a table in an HTML file from Java code.
- Java applet: why don't Java applet programs contain a main method?
- Java program - Applet: a simple calculator.
- HTML Document Creation: an introduction to HTML for Java programmers; the <APPLET> tag embeds a Java applet into the HTML file, and the header can also contain the <META> tag.
- Java applet: inserting into a database works from the applet but not from applet.html.
- JAVA with HTML: converting any Unicode character or string into its equivalent HTML numeric (ASCII) value.
- HTML tags: definition of the <applet> tag; it is supported in HTML 4.01 and is used for including a Java applet.
- Integration of webcam - Applet: calling the exe file of an integrated laptop webcam from an applet.
- Java compilation error - Applet: an AWT front design program error.
http://www.roseindia.net/tutorialhelp/comment/87706
CC-MAIN-2014-10
refinedweb
2,529
66.33
Python persistence management

Use serialization to store Python objects

What is persistence?

The basic idea of persistence is fairly simple. Let's say you've got a Python program, perhaps to manage your daily to-do list, and you want to save your application objects (your to-do items) between uses of the program. In other words, you want to store your objects to disk and retrieve them later. That's persistence. To accomplish that goal you've got several options, each with advantages and disadvantages. For example, you could store your object's data in some kind of formatted text file, such as a CSV file. Or you could use a relational database, such as Gadfly, MySQL, PostgreSQL, or DB2. These file formats and databases are well established, and Python has robust interfaces for all of these storage mechanisms.

One thing these storage mechanisms all have in common is that data is stored independent of the objects and programs that operate on the data. The benefit is that the data then becomes available as a shared resource for other applications. The drawback is that allowing access to an object's data in this way violates the object-oriented principle of encapsulation, in which an object's data should only be accessible through its own, public interface.

For some applications, then, the relational database approach may not be ideal, in particular because relational databases do not understand objects. Instead, relational databases impose their own type system and their own data model of relations (tables), each containing a set of tuples (rows) made up of a fixed number of statically typed fields (columns). If the object model for your application doesn't translate easily into the relational model, you'll have quite a challenge mapping your objects to tuples and back again. This challenge is often referred to as an impedance-mismatch problem.
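The mapping problem is easy to see in a short sketch. The TodoItem class and its field layout here are hypothetical, invented only for illustration; the point is that any list-valued or nested attribute forces you to invent an ad-hoc encoding before it will fit in a flat CSV row:

```python
import csv
import io

# A hypothetical to-do item with a list-valued attribute.
class TodoItem:
    def __init__(self, title, tags):
        self.title = title
        self.tags = tags  # a list has no natural home in a single CSV column

item = TodoItem("write article", ["python", "persistence"])

# Flattening the object into a CSV row forces us to invent an encoding
# for the list-valued attribute (here: joining the tags with ';').
buf = io.StringIO()
csv.writer(buf).writerow([item.title, ";".join(item.tags)])

# Reading it back requires reversing that ad-hoc encoding by hand.
row = next(csv.reader(io.StringIO(buf.getvalue())))
restored = TodoItem(row[0], row[1].split(";"))
```

Every attribute you add to the class means revisiting this hand-written mapping in both directions, which is exactly the translation overhead that object serialization avoids.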
Object persistence

If you want to transparently store Python objects without losing their identity, type, etc., then you need some form of object serialization: a process that turns arbitrarily complex objects into textual or binary representations of those objects. Likewise, you must be able to restore the serialized form of an object back into an object that is the same as the original. In Python the serialization process is called pickling, and you can pickle/unpickle your objects to/from a string, a file on disk, or any file-like object. We'll look at pickling in detail later in this article.

Let's say you like the idea of keeping everything as an object and avoiding the overhead of translating objects into some kind of non-object based storage. Pickle files provide those benefits, but sometimes you need something more robust and scalable than simple pickle files. For example, pickling alone doesn't solve the problem of naming and locating the pickle files, nor does it support concurrent access to persistent objects. For those features you need to turn to something like ZODB, the Z object database for Python. ZODB is a robust, multi-user, object-oriented database system capable of storing and managing arbitrarily complex Python objects with transaction support and concurrency control. (See Related topics to download ZODB.) Interestingly enough, even ZODB relies upon Python's native serialization capability, and to use ZODB effectively you must have a solid understanding of pickling.

Another interesting approach to the persistence problem, originally implemented in Java, is called Prevayler. (See Related topics for a developerWorks article on Prevayler.) A group of Python programmers recently ported Prevayler to Python and the result, called PyPerSyst, is hosted on SourceForge. (See Related topics for a link to the PyPerSyst project.) The Prevayler/PyPerSyst concept also builds upon the native serialization capabilities of the Java and Python languages.
PyPerSyst keeps an entire object system in memory, and provides disaster recovery by occasionally pickling a snapshot of the system to disk and by maintaining a log of commands that can be reapplied to the latest snapshot. While applications that use PyPerSyst are therefore limited by available RAM, the advantages are that a native object system completely loaded in memory is extremely fast and is much simpler to implement than one, such as ZODB, that allows for more objects than can be held in memory at once.

Now that we've briefly touched upon the various ways to store our persistent objects, it's time to examine the pickling process in detail. While our main interest is in exploring ways to persist Python objects without having to translate them into some other format, we are still left with various concerns, such as: how to effectively pickle and unpickle both simple and complex objects, including instances of custom classes; how to maintain object references, including circular and recursive references; and how to handle changes to class definitions without running into problems with previously pickled instances. We'll cover all of these issues in the following examination of Python's pickling capabilities.

A peck of pickled Python

Python pickling support comes from the pickle module, and its cousin, the cPickle module. The latter was coded in C to provide better performance and is the recommended choice for most applications. We'll continue to talk about pickle, but our examples will actually make use of cPickle. Since most of our examples will be shown from the Python shell, let's start by showing how to import cPickle while being able to refer to it as pickle:

>>> import cPickle as pickle

Now that we've imported the module, let's take a look at the pickle interface.
The pickle module provides the following function pairs:

- dumps(object) returns a string containing an object in pickle format.
- loads(string) returns the object contained in the pickle string.
- dump(object, file) writes the object to the file, which may be an actual physical file, but could also be any file-like object having a write() method that accepts a single string argument.
- load(file) returns the object contained in the pickle file.

By default, dumps() and dump() create pickles using a printable ASCII representation. Both functions have a final, optional argument that, if True, specifies that pickles will be created using a faster and smaller binary representation. The loads() and load() functions automatically detect whether a pickle is in the binary or text format.

Listing 1 shows an interactive session using the dumps() and loads() functions just described:

Listing 1. Illustration of dumps() and loads()

Welcome To PyCrust 0.7.2 - The Flakiest Python Shell
Sponsored by Orbtech - Your source for Python programming expertise.
Python 2.2.1 (#1, Aug 27 2002, 10:22:32)
[GCC 3.2 (Mandrake Linux 9.0 3.2-1mdk)] on linux-i386
Type "copyright", "credits" or "license" for more information.
>>> import cPickle as pickle
>>> t1 = ('this is a string', 42, [1, 2, 3], None)
>>> t1
('this is a string', 42, [1, 2, 3], None)
>>> p1 = pickle.dumps(t1)
>>> p1
"(S'this is a string'\nI42\n(lp1\nI1\naI2\naI3\naNtp2\n."
>>> print p1
(S'this is a string'
I42
(lp1
I1
aI2
aI3
aNtp2
.
>>> t2 = pickle.loads(p1)
>>> t2
('this is a string', 42, [1, 2, 3], None)
>>> p2 = pickle.dumps(t1, True)
>>> p2
'(U\x10this is a stringK*]q\x01(K\x01K\x02K\x03eNtq\x02.'
>>> t3 = pickle.loads(p2)
>>> t3
('this is a string', 42, [1, 2, 3], None)

Notice that the text pickle format isn't too difficult to decipher. In fact, the conventions used are all documented in the pickle module.
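Because dump() and load() only require file-like behavior, you are not limited to real disk files. Here is a minimal sketch using the same simple tuple, written for modern Python 3, where the C-accelerated pickler is used automatically and plain `pickle` replaces the `import cPickle as pickle` idiom (pickles are byte strings there, so an in-memory BytesIO buffer stands in for the file):

```python
import io
import pickle

# The same simple tuple used in Listing 1.
t1 = ('this is a string', 42, [1, 2, 3], None)

# dump() accepts any object with a write() method, not just real files.
buf = io.BytesIO()
pickle.dump(t1, buf)

# load() likewise accepts any file-like object with a read() method.
buf.seek(0)
t2 = pickle.load(buf)

# The round trip restores an equal but distinct object.
assert t2 == t1
assert t2 is not t1
```

That last pair of assertions previews a point the article returns to: unpickling always produces a copy of the original object, never the original itself.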
We should also point out that with the simple objects used in our example, there wasn't much space efficiency gained by using the binary pickle format. However, in a real system with complex objects, you will see a noticeable size and speed improvement with the binary format.

Next we'll look at some examples using dump() and load(), which work with files and file-like objects. These functions operate much like the dumps() and loads() that we just looked at, with one additional capability -- the dump() function allows you to dump several objects to the same file, one after the other. Subsequent calls to load() will retrieve the objects in the same order. Listing 2 shows this capability in action:

Listing 2. Example of dump() and load()

>>> a1 = 'apple'
>>> b1 = {1: 'One', 2: 'Two', 3: 'Three'}
>>> c1 = ['fee', 'fie', 'foe', 'fum']
>>> f1 = file('temp.pkl', 'wb')
>>> pickle.dump(a1, f1, True)
>>> pickle.dump(b1, f1, True)
>>> pickle.dump(c1, f1, True)
>>> f1.close()
>>> f2 = file('temp.pkl', 'rb')
>>> a2 = pickle.load(f2)
>>> a2
'apple'
>>> b2 = pickle.load(f2)
>>> b2
{1: 'One', 2: 'Two', 3: 'Three'}
>>> c2 = pickle.load(f2)
>>> c2
['fee', 'fie', 'foe', 'fum']
>>> f2.close()

Pickle power

So far we've covered the basics of pickling. In this section, we'll cover some advanced issues that arise when you start to pickle complex objects, including instances of custom classes. Fortunately, you'll see that Python handles these situations quite readily.

Portability

Pickles are portable over space and time. In other words, the pickle file format is independent of machine architecture, which means you can create a pickle under Linux, for example, and send it to a Python program running under Windows or the Mac OS. And when you upgrade to a newer version of Python, you don't have to worry that you might be abandoning existing pickles. The Python developers have guaranteed that the pickle format will be backwards compatible across Python versions.
In fact, details about current and supported formats are provided with the pickle module:

Listing 3. Retrieving supported formats

>>> pickle.format_version
'1.3'
>>> pickle.compatible_formats
['1.0', '1.1', '1.2']

Multiple references, same object

In Python, a variable is a reference to an object. And you can have multiple variables referencing the same object. It turns out that Python has no trouble at all maintaining this behavior with pickled objects, as Listing 4 demonstrates:

Listing 4. Maintenance of object references

>>> a = [1, 2, 3]
>>> b = a
>>> a
[1, 2, 3]
>>> b
[1, 2, 3]
>>> a.append(4)
>>> a
[1, 2, 3, 4]
>>> b
[1, 2, 3, 4]
>>> c = pickle.dumps((a, b))
>>> d, e = pickle.loads(c)
>>> d
[1, 2, 3, 4]
>>> e
[1, 2, 3, 4]
>>> d.append(5)
>>> d
[1, 2, 3, 4, 5]
>>> e
[1, 2, 3, 4, 5]

Circular and recursive references

The support for object references that we just demonstrated extends to circular references, where two objects contain references to each other, and recursive references, where an object contains a reference to itself. The following two listings highlight this capability. Let's look at a recursive reference first:

Listing 5. Recursive reference

>>> l = [1, 2, 3]
>>> l.append(l)
>>> l
[1, 2, 3, [...]]
>>> l[3]
[1, 2, 3, [...]]
>>> l[3][3]
[1, 2, 3, [...]]
>>> p = pickle.dumps(l)
>>> l2 = pickle.loads(p)
>>> l2
[1, 2, 3, [...]]
>>> l2[3]
[1, 2, 3, [...]]
>>> l2[3][3]
[1, 2, 3, [...]]

Now let's look at an example of a circular reference:

Listing 6. Circular reference

>>> a = [1, 2]
>>> b = [3, 4]
>>> a.append(b)
>>> a
[1, 2, [3, 4]]
>>> b.append(a)
>>> a
[1, 2, [3, 4, [...]]]
>>> b
[3, 4, [1, 2, [...]]]
>>> a[2]
[3, 4, [1, 2, [...]]]
>>> b[2]
[1, 2, [3, 4, [...]]]
>>> a[2] is b
1
>>> b[2] is a
1
>>> f = file('temp.pkl', 'w')
>>> pickle.dump((a, b), f)
>>> f.close()
>>> f = file('temp.pkl', 'r')
>>> c, d = pickle.load(f)
>>> f.close()
>>> c
[1, 2, [3, 4, [...]]]
>>> d
[3, 4, [1, 2, [...]]]
>>> c[2]
[3, 4, [1, 2, [...]]]
>>> d[2]
[1, 2, [3, 4, [...]]]
>>> c[2] is d
1
>>> d[2] is c
1

Notice how we get slightly, but significantly, different results when we pickle each object separately, rather than pickling them together inside a tuple, as shown in Listing 7:

Listing 7. Pickling separately versus together inside a tuple

>>> f = file('temp.pkl', 'w')
>>> pickle.dump(a, f)
>>> pickle.dump(b, f)
>>> f.close()
>>> f = file('temp.pkl', 'r')
>>> c = pickle.load(f)
>>> d = pickle.load(f)
>>> f.close()
>>> c
[1, 2, [3, 4, [...]]]
>>> d
[3, 4, [1, 2, [...]]]
>>> c[2]
[3, 4, [1, 2, [...]]]
>>> d[2]
[1, 2, [3, 4, [...]]]
>>> c[2] is d
0
>>> d[2] is c
0

Equal, but not always identical

As we hinted in our last example, objects are only identical if they refer to the same object in memory. In the case of pickles, each is restored to an object that is equal to its original, but not identical. In other words, each pickle is a copy of the original object:

Listing 8. Restored objects as copies of originals

>>> j = [1, 2, 3]
>>> k = j
>>> k is j
1
>>> x = pickle.dumps(k)
>>> y = pickle.loads(x)
>>> y
[1, 2, 3]
>>> y == k
1
>>> y is k
0
>>> y is j
0
>>> k is j
1

At the same time, we saw that Python is able to maintain references between objects that are pickled as a unit. However, we also saw that separate calls to dump() take away Python's ability to maintain references to objects outside of the unit being pickled. Instead, Python makes a copy of the referenced object and stores it with the item being pickled.
This isn't a problem for an application that pickles and restores a single object hierarchy. But it is something to be aware of for other situations.

It's also worth pointing out that there is an option that does allow separately pickled objects to maintain references to each other as long as they are all pickled to the same file. The pickle and cPickle modules provide a Pickler (and corresponding Unpickler) that is able to keep track of objects that have already been pickled. By using this Pickler, shared and circular references will be pickled by reference, rather than by value:

Listing 9. Maintenance of references among separately pickled objects

>>> f = file('temp.pkl', 'w')
>>> pickler = pickle.Pickler(f)
>>> pickler.dump(a)
<cPickle.Pickler object at 0x89b0bb8>
>>> pickler.dump(b)
<cPickle.Pickler object at 0x89b0bb8>
>>> f.close()
>>> f = file('temp.pkl', 'r')
>>> unpickler = pickle.Unpickler(f)
>>> c = unpickler.load()
>>> d = unpickler.load()
>>> c[2]
[3, 4, [1, 2, [...]]]
>>> d[2]
[1, 2, [3, 4, [...]]]
>>> c[2] is d
1
>>> d[2] is c
1

Nonpicklable objects

A few object types cannot be pickled. For example, Python cannot pickle a file object (or any object with a reference to a file object), because Python cannot guarantee that it can recreate the state of the file upon unpickling. (The other examples are so obscure that they aren't worth mentioning in an article of this nature.) Attempting to pickle a file object results in the following error:

Listing 10. Result of trying to pickle a file object

>>> f = file('temp.pkl', 'w')
>>> p = pickle.dumps(f)
Traceback (most recent call last):
  File "<input>", line 1, in ?
  File "/usr/lib/python2.2/copy_reg.py", line 57, in _reduce
    raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle file objects

Class instances

The pickling of class instances requires a bit more attention than the pickling of simple object types.
This is primarily due to the fact that Python pickles the instance data (usually the __dict__ attribute) and the name of the class, but not the code for the class. When Python unpickles a class instance, it attempts to import the module containing the class definition using the exact class name and module name (including any package path prefixes) as they were at the time the instance was pickled. Also note that class definitions must appear at the top level of a module, meaning they cannot be nested classes (classes defined inside other classes or functions).

When class instances are unpickled, their __init__() method isn't normally called again. Instead, Python creates a generic class instance, applies the instance attributes that were pickled, and sets the instance's __class__ attribute to point to the original class. New-style classes, introduced in Python 2.2, rely on a slightly different unpickling mechanism. While the result of the process is essentially the same as with old-style classes, Python uses the copy_reg module's _reconstructor() function to restore new-style class instances.

If you want to modify the default pickling behavior for either new-style or old-style class instances, you can define special class methods, named __getstate__() and __setstate__(), that will be called by Python during the saving and restoring of state information for instances of the class. We'll see some examples that make use of these special methods in the following sections.

For now, let's take a look at a simple class instance. To begin, we created a Python module named persist.py, containing the following new-style class definition:

Listing 11. New-style class definition

class Foo(object):
    def __init__(self, value):
        self.value = value

Now we can pickle a Foo instance and take a look at its representation:

Listing 12. Pickling a Foo instance

>>> import cPickle as pickle
>>> from Orbtech.examples.persist import Foo
>>> foo = Foo('What is a Foo?')
>>> p = pickle.dumps(foo)
>>> print p
ccopy_reg
_reconstructor
p1
(cOrbtech.examples.persist
Foo
p2
c__builtin__
object
p3
NtRp4
(dp5
S'value'
p6
S'What is a Foo?'
sb.
>>>

You can see that the class name, Foo, and the fully qualified module name, Orbtech.examples.persist, are both stored in the pickle. If we had pickled this instance to a file, and unpickled it later or on another machine, Python would attempt to import the Orbtech.examples.persist module and would raise an exception if it could not. Similar errors would occur if we renamed the class, renamed the module, or moved the module to another directory.

Here is the error Python gives when we rename the Foo class and then try to load a previously pickled Foo instance:

Listing 13. Trying to load a pickled instance of a renamed Foo class

>>> import cPickle as pickle
>>> f = file('temp.pkl', 'r')
>>> foo = pickle.load(f)
Traceback (most recent call last):
  File "<input>", line 1, in ?
AttributeError: 'module' object has no attribute 'Foo'

A similar error occurs when we rename the persist.py module:

Listing 14. Trying to load a pickled instance of a renamed persist.py module

>>> import cPickle as pickle
>>> f = file('temp.pkl', 'r')
>>> foo = pickle.load(f)
Traceback (most recent call last):
  File "<input>", line 1, in ?
ImportError: No module named persist

We'll provide techniques for managing these kinds of changes, without breaking existing pickles, in the Schema evolution section below.

Special state methods

Earlier we mentioned that a few object types, such as file objects, cannot be pickled. One way to handle instance attributes that are not picklable objects is to use the special methods available for modifying a class instance's state: __getstate__() and __setstate__(). Here is an example of our Foo class, which we've modified to handle a file object attribute:

Listing 15. Handling unpicklable instance attributes

class Foo(object):
    def __init__(self, value, filename):
        self.value = value
        self.logfile = file(filename, 'w')

    def __getstate__(self):
        """Return state values to be pickled."""
        f = self.logfile
        return (self.value, f.name, f.tell())

    def __setstate__(self, state):
        """Restore state from the unpickled state values."""
        self.value, name, position = state
        f = file(name, 'w')
        f.seek(position)
        self.logfile = f

When an instance of Foo is pickled, Python will pickle only the values returned to it when it calls the instance's __getstate__() method. Likewise, during unpickling, Python will supply the unpickled values as an argument to the instance's __setstate__() method. Inside the __setstate__() method we are able to recreate the file object based on the name and position information we pickled, and assign the file object to the instance's logfile attribute.

Schema evolution

Over time you'll find yourself having to make changes to your class definitions. If you've already pickled instances of a class that needs changing, you'll likely want to retrieve and update those instances so that they continue to function properly with the new class definition. We already saw some of the errors that can occur when changes are made to classes or modules. Fortunately, the pickling and unpickling processes provide hooks that we can use to support this need for schema evolution. In this section, we'll look at ways to anticipate common problems and work around them.

Because a class instance's code is not pickled, you can add, change, and remove methods without impacting existing pickled instances. For the same reason, you don't have to worry about class attributes. You do have to ensure that the code module containing the class definition is available in the unpickling environment.
And you must plan for the changes that can cause unpickling problems: changing the name of a class, adding or removing instance attributes, and changing the name or location of the class definition module.

Class name change

To change the name of a class without breaking previously pickled instances, follow these steps. First, leave the original class definition intact so that it can be found when existing instances are unpickled. Instead of changing the original name, create a copy of the class definition, in the same module as the original class definition, giving it the new class name. Then add the following method to the original class definition, using the actual new class name in place of NewClassName:

Listing 16. Changing a class name: Method to add to the original class definition

def __setstate__(self, state):
    self.__dict__.update(state)
    self.__class__ = NewClassName

When existing instances are unpickled, Python will locate the original class definition, the instance's __setstate__() method will be called, and the instance's __class__ attribute will be reassigned to the new class definition. Once you are sure that all the existing instances have been unpickled, updated, and re-pickled, you can remove the old class definition from the source code module.

Attribute addition and subtraction

Once again, the special state methods, __getstate__() and __setstate__(), give us control over each instance's state and the opportunity to handle changes in an instance's attributes. Let's take a look at a simple class definition to which we will add and remove attributes. Here is the initial definition:

Listing 17. Original class definition

class Person(object):
    def __init__(self, firstname, lastname):
        self.firstname = firstname
        self.lastname = lastname

Let's assume we've created and pickled instances of Person, and now we've decided that we really just want to store one name attribute, rather than separate first and last names.
Here is one way to change the class definition that will migrate previously pickled instances to the new definition:

Listing 18. New class definition

class Person(object):
    def __init__(self, fullname):
        self.fullname = fullname

    def __setstate__(self, state):
        if 'fullname' not in state:
            first = ''
            last = ''
            if 'firstname' in state:
                first = state['firstname']
                del state['firstname']
            if 'lastname' in state:
                last = state['lastname']
                del state['lastname']
            self.fullname = " ".join([first, last]).strip()
        self.__dict__.update(state)

In this example we added a new attribute, fullname, and removed two existing attributes, firstname and lastname. When a previously pickled instance is unpickled, its previously pickled state will be passed to __setstate__() as a dictionary, which will include values for the firstname and lastname attributes. We then combine those two values and assign them to the new fullname attribute. Along the way, we eliminate the old attributes from the state dictionary. After all the previously pickled instances have been updated and re-pickled, we can remove the __setstate__() method from the class definition.

Module modifications

Conclusion

Object persistence depends on the object serialization capabilities of the underlying programming language. For Python objects that means pickling. Python pickles provide a robust and reliable foundation for effective persistence management of Python objects. In the Related topics below, you'll find information about systems that build on Python's pickling capability.

Downloadable resources

Related topics

- The Python Web site is the starting point for all things Pythonic.
- The Prevayler Web site provides more details about the prevalence philosophy.
- PyPerSyst, the Python port of Prevayler, is available on SourceForge.
- For an overview of using Python and Perl with IBM DB2, read "The Camel and the Snake, or 'Cheat the Prophet': Open Source Development with Perl, Python, and DB2".
https://www.ibm.com/developerworks/library/l-pypers/index.html
Mike Volodarsky from the IIS team wrote a great article for the March 2007 MSDN Magazine that summarizes some of the key IIS 7.0 improvements. I highly recommend reading his excellent article here for a quick summary of some of them.

The web farm support in particular is really cool, and will allow you to deploy your web applications on a file-share that contains all of the code, configuration, content, and encryption keys needed to run a server. You can then add any number of stateless and configuration-less web-servers into a web farm and just point them at the file-server to dynamically load their configuration settings (including bindings, virtual directories, app pool settings, etc) and application content. This makes it trivial to scale out applications across machines, and avoid having to use replication schemes for configuration and application deployment (just copy over the files on the file-share and all of the machines in the web farm will immediately pick up the changes). The upcoming Beta3 release of Windows Longhorn Server will support a go-live license, so you'll be able to take advantage of this soon. We are already running on IIS 7.0 clusters (so you'll be in good company!).

In previous versions of IIS, developers had to write ISAPI extensions/filters to extend the server. In addition to being a royal pain to write, ISAPIs were also limited in how they plugged into the server and in what they allowed developers to customize. For example, you can't implement URL Rewriting code within an ISAPI Extension (note: ASP.NET is implemented as an ISAPI extension). And you end up tying up the I/O threads of the web-server if you write long-running code as an ISAPI Filter (which is why we didn't enable managed code to run in the filter execution phase of a request).

One of the major architectural changes we made to the core IIS processing engine with IIS7 was to enable much, much richer extensibility via a new modular request pipeline architecture.
You can now write code anywhere within the lifetime of any HTTP request by registering an HTTP Extensibility Module with the web-server. These extensibility modules can be written using either native C++ code or .NET managed code (you can use the existing ASP.NET System.Web.IHttpModule interface to implement this).

All "built-in" IIS7 functionality (authentication, authorization, static file serving, directory listing support, classic ASP, logging, etc) is now implemented using this public modular pipeline API. This means you can optionally remove any of these IIS7 "built-in" features and replace/extend them with your own implementation.

ASP.NET on IIS 7.0 has itself been changed from being implemented as an ISAPI to instead plug in directly as modules within the IIS7 pipeline. This brings a tremendous number of extensibility opportunities to .NET developers.

To help developers share the extensibility modules and other add-ins they write, the IIS team recently launched a "Download Center". This enables developers to browse/download as well as upload and share module extensions for IIS. You can check it out here.

Note that in addition to having a managed extensibility story for Http Modules, IIS7 also now allows you to write managed admin tool UI extensions (the admin tool itself was built using Windows Forms), as well as use the .NET System.Configuration namespace to manage the IIS7 configuration system.

In addition to the cool new extensibility options that IIS 7.0 provides, there are a ton of improvements (both big and small) that ASP.NET developers will really appreciate. I'll be blogging a series of them over the next few weeks/months and point out some really cool things that you'll be able to take advantage of. I also highly recommend subscribing to the IIS 7 team's blog feed here.
Hope this helps,

Scott

Just tested IIS7 on Vista, and I have only one issue (but it is a pain): I'm using ASP.NET forms authentication (works with IIS6). If ASP.NET forms authentication is enabled for a website inside IIS7, the app breaks because IIS7 steps in and messes up the forms authentication (why, anyway?). If I disable ASP.NET forms authentication inside IIS7, everything works... Now the problem is, whenever I redeploy the app, stupid IIS7 reconfigures the site again and again and enables the forms authentication. So I need to manually disable the forms authentication in the IIS7 configurator every time I redeploy the app. Is it possible to turn off this stupid autoconfiguration in IIS7?

This is one of my very favorite products on Windows Vista and I have been following it for a real long time. Modular architecture rocks!!! Scott, when is Windows Longhorn Server expected?? And I wanted to add a few links for your readers who are interested in trying free IIS7 beta hosting.

- Chirag

Scott, does the web farm support copy the configuration, code and data from the remote fileshare onto the local server's drive? Or does it just run everything from the remote fileshare?

Thanks for the article link, ScottGu. This article lists some awesome IIS7 features. Mayo

Any chance we'll get IIS 7.0 on Win 2003?

Will IIS 7.0 support configuring the webserver without being an administrator? Developing .NET apps in a team environment is tedious when you have to run to the network admins every time you want a virtual directory created or modified. The "Operators" group was a nice feature of IIS 5.0 that I wish IIS 6.0 had (or at least something comparable - trying to configure permissions with the IIS metabase tool is impossible). Thanks,

-Mark

I wonder if "corporate identity" is a phrase punishable by death in Redmond. Why does MS need to be sponsored by all this advertising? I don't imagine the IIS7 team is struggling for paychecks. That web site is a disgrace, just like is.
I think you have the coolest job at Microsoft. Fantastic post Scott!

>>just copy over the files on the file-share and all of the machines in the web farm will immediately pick up the changes

Will all the stateless web servers pick up the changes at the exact same time? Doesn't this mean that the application will 'hiccup' (my term) when all front-end web servers simultaneously need to load the application into memory? All of my deployment scripts inject a few seconds of delay between the rollout to each web server. More complex deployment scripts also sequentially disable the load balancer config while the application is loading for the first time. The result is that end users never see an application that is 'spooling up'. I would be curious to know how you avoid the hiccup when you roll out a new release using the file-share model. Thanks! By the way, in the words of a developer who just read this post: "What's a realistic time frame for us to use it? It sounds so bad ass!"

Hi David, The edge web-server will copy and cache the configuration data locally (so that it doesn't need to continually hit it, and so that it can survive if the network goes down temporarily). The content itself will logically stay on the file-server - although under the covers the OS will end up caching it.

Hi Alex, The reason we have ads on and is to help us fund additional articles, videos and samples on the sites. We end up being able to publish much more content that way. Thanks,

Hi cjx, Unfortunately IIS7 won't run on Windows 2003 I'm afraid. We originally wanted to support this, but there are a number of low-level OS changes we wanted to take advantage of that would have made supporting this really challenging. You'll have to use it on Windows Longhorn Server or Windows Vista.

Hi stm, IIS won't autoconfigure anything for you - so I'm not sure why you are seeing formsauth changing.
Can you send me an email (scottgu@microsoft.com) with more details about the issue and a copy of your web.config file? I can then take a look and figure out what is going wrong with it.

Great, can't wait for its release! thanks

Hi Scott, I just replied to a thread on forums.iis.net regarding web farm support and IIS7 benefits. I can reference your article. Thanks for the posting. Steve Schofield

That was a real nice intro on IIS 7

Wow, haven't seen the MSDN-magazine before, but at first I thought it was a box for Windows 3.1 :P I guess IIS 7.0 won't be available for Windows Server 2003... Keep up the great work Scott, the scene really appreciates it.

[quote]IIS 7.0 ..... is now available with the home editions of the operating system[/quote] It was great to read this. I am looking forward to IIS 7.0 more than Orcas!!!

Hi, I am having problems with IIS 7.0 on Vista Ultimate. I am testing an IIS 6.0 web application on Vista and IIS 7.0, but have run into a problem whereby the following error is shown. Unable to find script library: 'aspnet_client/systemweb/1_1_4322/WebUIValidation.js' Try placing this file manually or reinstall by running 'aspnet_regiis -c' The file is there and I run the re-install with no success. Any ideas how to fix this issue? Please email me directly at davidnowens@hotmail.com Many thanks Dave

Hi Dave, The error you are seeing above sounds like it is related to ASP.NET 1.1 not being installed on the box. This isn't included in Vista - instead you need to download the .NET Framework 1.1 redist and install it separately. Can you try this and see if it fixes the problem?

I've recently purchased a new laptop with Vista Home Premium installed. I'm doing most of my software development in C# and ASP.NET, so I was happy to hear that I can run IIS 7 on Vista Home.
Unfortunately, I've noticed some articles on the web that tell me that you can't debug ASP.NET applications properly in IIS 7 on Vista Home Premium because it doesn't support Windows Authentication. Can you confirm this for me please and tell me if there's a work around, other than spending money on an OS upgrade? See Thanks, Ian.

Hi Ian, Unfortunately VS currently requires Windows Authentication to be enabled to auto-attach the debugger, and this module isn't installed on the Home edition. We are going to be issuing a patch to VS to correct this in the future for you (so no need to upgrade). In the meantime, you can fix this within VS by manually attaching to the server process. You can do this by choosing the "Attach to Process" menu item within the Debug menu inside VS, and then select the w3wp.exe process (which is the IIS worker process name). You can then set and hit any breakpoints inside your ASP.NET app. Sorry for the inconvenience,

Hi I am currently trying to build a website using dreamweaver software, it says I need IIS - I am new to all of this. From some research I have discovered that I need IIS 7 (i'm running windows vista home premium) - I can't seem to locate it on my laptop though - how exactly can I find out whether I have it? Thanks, Nicki

Hi Nicki, You can install IIS by going to the Control Panel->Programs section, and by then selecting the "Turn Windows features on or Off" link. You'll then see the Internet Information Services link in the list to use.

Can we have a little descriptive blog on User Management/Roles with the new IIS7? I mean WAT ( Web Admin Tool ) is currently not available online. I understand we shall be getting this facility online now with IIS7 ( Longhorn ). Can we have a little more explanation on it? This feature is the most demanded one currently. Thanks

Hi Softmind, I will put it on my list to blog! Until then, I'd also recommend checking out the web-site.

hi Scott, i have the same question as Ian above.
I tried to attach the process w3wp.exe as you said above from my VS but could not find that process in my Attach to process wizard. Help me please.

Make sure that you have "show processes in all sessions" checked. This made the w3wp.exe process available.

I have a laptop with Vista Home Premium installed. But I can not install SQL Server 2005. When I try to do it, I receive a warning message on my IIS. SQL Setup warning: IIS is not installed or is disabled! But IIS works with my .NET!

Guys and Scott, I am kindly asking for an expert to post the steps of how to set up an existing webpage visible from a LAN to the outside world through Windows Vista IIS7. I am interested in all the details (i.e. giving permissions etc) because I had a hard time trying to figure things out.

hi Scott, whenever I try to run my asp.net 1.1 application with Ctrl+F5 I get a configuration error; please can anyone help me with this... for a week I'm stuck on this. Error details are:

Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.

Parser Error Message: File or assembly name System.Configuration, or one of its dependencies, was not found.

Source Error:
Line 7: <add assembly="mscorlib" />
Line 8: <add assembly="System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
Line 9: <add assembly="System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
Line 10: <add assembly="System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
Line 11: <add assembly="System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />

Source File: C:\inetpub\wwwroot\web.config Line: 9

Hi Sana, Can you send me an email (scottgu@microsoft.com) with more details about this error? I'll then loop you in with someone on my team who can help.
One of the products that my team builds that I am most proud of is IIS 7. IIS 7 is a *major* update of

In this return to life of ecuador.latindevelopers.net we take the opportunity to bring ourselves up to date on some elements

A light day out there for the Holiday weekend. LINQ Bart has another series going with The IQueryable Tales - LINQ to LDAP. Part 1 is Key Concepts and Part 2 is Getting Started with IQueryable . CLR Jason continues Disassembling .NET with Appendix B
http://weblogs.asp.net/scottgu/archive/2007/04/02/iis-7-0.aspx
Looking over recent additions to Google’s Guava Libraries Release 10, I noticed the addition of EventBus. This is a lightweight implementation of a publish-subscribe style messaging system. It is similar to the publish-subscribe model provided by JMS; however, the messages remain within the application rather than being broadcast externally. EventBus allows you to create streams within your program to which objects can subscribe; they will then receive messages published to those streams. Although this inter-object communication is not particularly difficult to recreate using patterns such as singletons, EventBus does provide a particularly simple and lightweight mechanism. Singletons also make having multiple event buses of a single type more difficult, and are hard to test. As an example I am going to create a simple multi-user chat program using sockets that several people will connect to via telnet. We will simply create an EventBus which will serve as a channel. Any messages that a user sends to the system will be published to all the other users.
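Before looking at the Java implementation, the core pattern EventBus provides is small enough to sketch in a few lines. The version below is a hypothetical Python miniature, not the Guava API (Guava dispatches by message type via the @Subscribe annotation, which this sketch approximates with a flat handler list), but the register/post/unregister shape is the same:

```python
class EventBus:
    """Minimal publish-subscribe bus: objects register handlers, post() fans out."""
    def __init__(self):
        self._subscribers = []

    def register(self, handler):
        # handler is any callable accepting one message argument
        self._subscribers.append(handler)

    def unregister(self, handler):
        self._subscribers.remove(handler)

    def post(self, message):
        # deliver the message to every current subscriber, including the poster
        for handler in list(self._subscribers):
            handler(message)

received = []
channel = EventBus()
channel.register(received.append)
channel.post("hello")
channel.post("world")
print(received)  # ['hello', 'world']
```

Note that, exactly as in the chat example below, a poster that is also subscribed gets its own message echoed back.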
So here is our UserThread object:

class UserThread extends Thread {

    private Socket connection;
    private EventBus channel;
    private BufferedReader in;
    private PrintWriter out;

    public UserThread(Socket connection, EventBus channel) {
        this.connection = connection;
        this.channel = channel;
        try {
            in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            out = new PrintWriter(connection.getOutputStream(), true);
        } catch (IOException e) {
            e.printStackTrace();
            System.exit(1);
        }
    }

    @Subscribe
    public void receiveMessage(String message) {
        if (out != null) {
            out.println(message);
        }
    }

    @Override
    public void run() {
        try {
            String input;
            while ((input = in.readLine()) != null) {
                channel.post(input);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        // reached eof
        channel.unregister(this);
        try {
            connection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        in = null;
        out = null;
    }
}

As can be seen this is just a simple threaded object that contains the EventBus that serves as a channel, and the user’s Socket. The run method then simply reads the socket and sends the message to the channel by calling the post method on the EventBus. Receiving messages is then implemented by adding a public method with the @Subscribe annotation (see above). This signals the EventBus to call this method upon receiving a message of the type given in the method argument. Here I am sending Strings; however, other objects can be used. GOTCHA: The method annotated with @Subscribe MUST be public. The receive function takes the message and writes it out to the user’s connection. This will of course also ping back the message that has been sent to the original user, as the UserThread object will itself receive the message that it published. All that is left is to create a simple server object that listens for connections and creates UserThread objects as needed.
public class EventBusChat {

    public static void main(String[] args) {
        EventBus channel = new EventBus();
        ServerSocket socket;
        try {
            socket = new ServerSocket(4444);
            while (true) {
                Socket connection = socket.accept();
                UserThread newUser = new UserThread(connection, channel);
                channel.register(newUser);
                newUser.start();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

As shown this creates the channel, accepts user connections and registers them to the EventBus. The important code to notice here is the call to the register method with the UserThread object as an argument. This call subscribes the object on the EventBus, and indicates that it can process messages. Once the server is started users can then connect to the chat server with the telnet command:

telnet 127.0.0.1 4444

And if you connect multiple instances you will see any message sent being relayed to the other instances. Having viewed this example you may be wondering what use an EventBus has. A very good example could be when maintaining a very loose coupling between a user interface and backend code. User input would generate a message such as resize, lost focus or closing down. Back end components could then simply subscribe to these events and deal with them appropriately. The official documentation lists many other uses as well. NB: EventBus isn’t meant for general purpose publisher-subscriber communication; this is just an example of how the API interacts.
http://www.javacodegeeks.com/2013/06/guavas-eventbus-simple-publishersubscriber.html
Building game show buzzers with a Raspberry Pi

The showWinner Function – Lines 12 to 18

The showWinner function is called once a winner is determined. It draws the winner on the public (graphics) screen:

def showWinner ( winner ):

def is Python's keyword to define a function. Like most other languages, the arguments comprise a comma-separated list inside parentheses. When a function is initialized in Python, it gets its own namespace. The global keyword tells Python to look outside of the function for each global variable:

global screen
global numbers

Normally, this functionality would be handled with a class, but because of the rapid deployment of this project, using globals was quicker and easier. winner is provided by the code calling this function. It tells me the player number that pressed the button first. The players are numbered 1 through 5; however, Python lists start numbering at 0, so subtracting 1 "aligns" the winner variable with the list:

winner -= 1

In the code in line 17,

screen.blit ( numbers [ winner ] [ 0 ] , ( numbers [ winner ] [ 1 ] , 0 ) )

blit is a function of a Pygame surface that copies another surface onto its parent. screen is a surface, and numbers is the list of surfaces and their offsets from the left side of the screen defined earlier. blit is provided with two arguments. The first argument is the surface to draw. The [ 0 ] in numbers [ winner ] [ 0 ] gets the first element of the tuple, where I stored the graphic of the team number. The second argument is the position to use when drawing the surface, which is passed as a tuple (see the "Python Tuples" box for an explanation of tuples). numbers [ winner ] [ 1 ] says "get the winning entry from the list," and [ 1 ] gets the second element in the tuple, which is the left offset that was calculated when the graphic loaded. All of that boils down to the x coordinate. I want to draw at the top of the window, so I pass 0 as the y coordinate.
Line 24 tells Pygame to copy the surface screen to the framebuffer, which makes the graphics appear on screen:

pygame.display.flip()

In this case, it's only a single blit; however, in a larger project, you would perform all of your blits, drawing, and other screen processing and then call pygame.display.flip() once.

Python Tuples

Python lets you define tuples by surrounding sets of data in parentheses. This approach is typically used for data that has multiple parts. Here are a few examples:

coordinate = ( x , y )
color = ( red , green , blue )
card = ( "Ace" , "Spades" )

Pygame uses tuples extensively to pass coordinates, colors, rectangles, screen regions, and user input values, just to name a few. Tuples can have as many elements as you like. Square brackets after a tuple name will retrieve the requested element of a tuple. In the above example, coordinate [ 0 ] will retrieve the x value, color [ 1 ] will retrieve the green value, and card [ 1 ] will retrieve Spades.

The reset function – Lines 20 to 24

The reset function clears the contestant's LCD when the game is done with the buzzer result. def, global, and pygame.display.flip work the same as above, so the only new command is fill:

screen.fill ( ( 0 , 0 , 0 ) )

fill is a function of a surface that makes the entire surface the provided color. The one argument is an RGB color provided as a tuple. Once the surface is filled, I flip the display and the screen is empty!
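The tuple indexing described in the box can be tried without Pygame at all. The snippet below mirrors the article's examples; the numbers list here holds placeholder filename strings where the real program holds loaded Pygame surfaces:

```python
# Tuples group related values; square brackets pull out one element.
coordinate = (3, 7)          # (x, y)
color = (255, 128, 0)        # (red, green, blue)
card = ("Ace", "Spades")

print(coordinate[0])  # 3   -> the x value
print(color[1])       # 128 -> the green value
print(card[1])        # Spades

# The article's numbers list pairs each graphic with its left offset:
# numbers[winner][0] is the surface, numbers[winner][1] is the x offset.
numbers = [("one.png", 40), ("two.png", 120), ("three.png", 200)]
winner = 2        # player 2 buzzed first
winner -= 1       # align with zero-based list indexing
surface, x_offset = numbers[winner]
print(surface, x_offset)  # two.png 120
```

In the real code, (x_offset, 0) is exactly the position tuple handed to screen.blit.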
http://www.raspberry-pi-geek.com/Archive/2013/02/Building-game-show-buzzers-with-a-Raspberry-Pi/(offset)/10
Title: Tokio, Source: own resources, Authors: Agnieszka and Michał Komorowscy

In C# it's simple: we use destructors, a.k.a. finalizers, almost never. The only case when they are inevitable is the implementation of the Disposable pattern. In C++ the situation is different because we don't have automatic garbage collection. It means that if we create a new object with the new keyword we have to destroy it later by using the delete keyword. And if the object being deleted contains pointers to other objects created dynamically, they also need to be deleted. This is where destructors come into play. Here is an example with a class Node which models a binary tree. It's simplified, and that is why all fields are public; don't do it in production! Node::_count is a static field that I'm using to count created objects.

#include <stdexcept>
#include <iostream>

class Node
{
public:
   Node(int i) : _value(i) { Node::_count++; }
   ~Node()
   {
      std::cout << " ~Node " << _value << std::endl;
      if (_left != nullptr) delete _left;
      if (_right != nullptr) delete _right;
      _count--;
   }

   static int _count;

   int _value;
   Node* _left = nullptr;
   Node* _right = nullptr;
};

int Node::_count = 0;

Here is a testing code. If you run it you should see the following result:

Existing nodes: 3
 ~Node 1
 ~Node 2
 ~Node 3
Existing nodes: 0

We can see that all nodes have been deleted and that a destructor was executed 3 times.
int main()
{
   Node* root = new Node(1);
   root->_left = new Node(2);
   root->_right = new Node(3);

   std::cout << " Existing nodes: " << Node::_count;
   delete root;
   std::cout << " Existing nodes: " << Node::_count;
}

Now let's derive a new class from Node in the following way:

class DerivedNode : public Node
{
public:
   DerivedNode(int i) : Node(i) { }
   ~DerivedNode()
   {
      std::cout << " ~DerivedNode " << _value << std::endl;
   }
};

And modify the testing code a little bit in order to use our new class:

int main()
{
   Node* root = new DerivedNode(1);
   root->_left = new DerivedNode(2);
   root->_right = new DerivedNode(3);

   std::cout << " Existing nodes: " << Node::_count;
   delete root;
   std::cout << " Existing nodes: " << Node::_count;
}

The expectation is that the ~DerivedNode destructor should be called together with the base class destructor ~Node. However, if you run the above code you'll see that it's not true, i.e. you'll see the same result as earlier. To explain what's going on, look at the C# code below and answer the following question: why do I see "I'm A" if I created an instance of class B?

public class A
{
   public void Fun() { Console.WriteLine("I'm A"); }
}

public class B : A
{
   public void Fun() { Console.WriteLine("I'm B"); }
}

A a = new B();
a.Fun();

I hope that it's not a difficult question. The answer is of course because Fun is not a virtual method. In C++ we have the same situation. Now you may say "Wait a minute, but we're talking about destructors not methods". Ya, but destructors are actually a special kind of method. The fix is simple: we just need to use a concept completely unknown in C#, i.e. a virtual destructor.

virtual ~Node() { ... }

This time the test code will give the following result:

Existing nodes: 3
 ~DerivedNode 1
 ~Node 1
 ~DerivedNode 2
 ~Node 2
 ~DerivedNode 3
 ~Node 3
Existing nodes: 0
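The C# hiding example has a useful counterpart: in a language where every method is dispatched dynamically, such as Python, the override on the runtime type is always found, which is exactly the behavior the virtual keyword buys you in C++. A quick sketch:

```python
class A:
    def fun(self):
        return "I'm A"

class B(A):
    def fun(self):
        # no 'virtual' keyword needed: all Python methods dispatch dynamically
        return "I'm B"

a = B()          # there is no separate static type; only the runtime type matters
print(a.fun())   # I'm B
```

This is why the non-virtual-destructor trap has no direct Python equivalent: the runtime always consults the object's actual class, at the cost of a dynamic lookup that C++ only pays when you ask for it with virtual.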
https://www.michalkomorowski.com/2017/02/c-for-c-developers-virtual-destructors.html
Red Hat Bugzilla – Bug 190475
error when two nodes rename two files to the same new name
Last modified: 2010-01-11 22:11:01 EST

Description of problem:
processes on two nodes loop doing this in one dir:
1. create a new file foo.pid.count
2. rename foo.pid.count bar
We'll regularly get ENOENT returned from the rename(2).

Version-Release number of selected component (if applicable):

How reproducible:
always
reported on irc, seen by users running dovecot imap server

Steps to Reproduce:
1.
2.
3.

Actual results:
rename errors

Expected results:
no rename errors

Additional info:

Created attachment 128507 [details]
test that shows the problem, run this on two machines at once in one dir

Created attachment 128513 [details]
updated version of test printing open errors

I've attached a slightly modified version of the test that will make the error cases clearer:
- if there's one instance of this test running on each node, then I just get ENOENT errors from rename(2).
- if there are two instances of this test running on a node, then I get ENOENT errors from rename(2) and ENOENT errors from the open(2) of "rename.test" (open errors only on the node running two instances).

Look like we have a couple of mail server issues that may all relate to this issue. I suspect we do not invalidate the dentry correctly when the unlink (rename) is done by another node. I'll start to work on this issue today.

There are two issues here:

1) I'm not sure this test case is valid. When two processes try to rename different files to the very same name, should they use either a posix lock or flock on the directory? The reason ENOENT doesn't show up in ext3 is because the VFS layer does an early "lock_rename" (it locks the directories) *before* it does the final lookup (the filename without pathname). By the time GFS is called, the VFS layer has obtained the file's dentry.
So there is a (big) window between the VFS layer getting the file's dentry (it has the inode pointer) and GFS being able to do anything to notify this node that the inode has been removed by another cluster node. If we want GFS to behave the same way as ext3, VFS's "lock_rename" has to call us. Even if upstream folks agree to call us in "lock_rename", there is currently no hook in VFS that can do this job.

2) The locking sequence in gfs_rename() doesn't look right either. Will update this issue soon.

It is a valid test case. The rename call should act as if it's atomic, so that given a destination that always exists, it should find it and replace it. Exactly which file it finds is of course subject to races in this particular case, but this kind of thing can be used as part of locking schemes etc. Rename used to do its locking at the VFS layer in a similar way to GFS, by using the addresses of the various mutexes (well, semaphores in those days) to avoid deadlocks. That was found to have problems, hence the top down approach (i.e. parents before children). I think we can use the same lock ordering in GFS to avoid deadlocks as we already use that ordering in file creation for example, where there can be no deadlocks. Dealing with the race, however, is a more tricky problem as you say...

hmm... how about this: instead of messing around with VFS's lock_rename() (and upstream folks), we (GFS) could simply unhash the dentry and free the inode that are passed to us by VFS. Then start over again (with a performance hit expected). This way we could get rid of this ENOENT error. In any case, we still need to examine ("fix" is a better word) gfs_rename locking issues.

If that's possible, and I believe it is, then it's the best approach open to us I think. I'd be quite happy to see it fixed that way and I think the upstream folks will too.

This issue is messier than I expected. The "open" is broken too - will give that (open) some more thought tomorrow.
However, I do have a draft patch for rename now - after some more sanity checks, will post here for review. Good news is that this may be the cause of several double-free inode panics we have been seeing under mail server environments. This applies to GFS2 as well, btw.

Can you explain a bit more about the "open" issue?

You missed Dave's comment #3 above (as I did :)). Look like open() can become the victim of this stale dentry issue too.

Ah, I see it now :-) In which case it should really be solved by locking at lookup time with the open intents. We need to check if the current implementation gives us enough information to solve that in the nameidata structure. If not we may need to patch the upstream VFS, at least for GFS2.

The patch didn't work well. Look like dropping the dentry would create orphan GFS inodes that can not be de-allocated. That will result in umount hangs (and other obvious combinations such as buffer/memory leaks, etc). Still scratching my head at this moment.

Created attachment 135183 [details]
Draft patch (with restriction)

When rename(fromA, toB) is called, if another node happens to either delete and/or rename the "toB" file at the same time, current GFS could return ENOENT and fail the rename() call. This seems to violate the rename API. This patch fixes that.

Restriction: If the "toB" file has been referenced before rename() (and has not been released) at the same node, while another node deletes/renames it, this patch follows the current GFS logic - that is, it will return ENOENT and fail the call. Using the above test program as an example, if we don't include the logic of "oldfd" as in: "oldfd = open(fname, O_RDWR);", this patch will fix the issue. However, if we let that "open" stay, I don't know how to fix it at this moment.

Sorry, the second node must do a rename (not delete) for this error to occur. It is late in the night so my head is not clear.

We need to discuss the possibility of pushing for lock_rename() upstream changes.
If lock_rename can call us, ~10 lines of code should do the trick, vs. the current ~300 lines of workaround. The difficulty (of the GFS workaround) is that by the time we find the inode is stale, it is already inside the onion layers of 5 VFS locks and 5 GFS glocks - so it is not possible to call existing routines. This implies lots of cut, paste, unlock, re-lock, etc. Very annoying and dangerous.

Created attachment 135292 [details]
Working draft patch

Test ran fine last night - look like it completely fixed the issue without restrictions. Upload the draft patch here so it won't get lost. What I'll do next week (my machine needs to get reset to work on something else) is to refine this gigantic patch into three smaller patches for easy management and review:

Patch 1 will be locking stuff.
Patch 2 will be inode changes.
Patch 3 will be the core rename changes.

Created attachment 146972 [details]
gfs_rename_core.patch

This is the 3rd patch that was scheduled to check into CVS last week. However, the final sanity check shows that it will try to read an already deleted file that causes EIO. It ends up withdrawing the filesystem. I originally tested this on two i686 SMP machines but later one of my test nodes was replaced with an UP x86_64 machine. The error always happens with UP x86_64 and it looks to me that both nodes think they have the lock. So the i686 SMP machine deletes the file while the x86_64 UP machine tries to read it. The issue seems to be somewhere around the asynchronous locking. I'll take a closer look tonight (didn't have a chance to go to it in the office today). If I can't finish this up, Ben's help will be greatly appreciated.
When other (slower) node is still updating the on-disk structures, the faster node thinks it gets the lock and does a gfs_dir_search() that reads in garbage. The new rename patch relies on the new ino from the gfs_dir_search() call to update its dentry. Since the contents of the new ino is gargabe, next disk read would sometime go to never-land. 2) In previous testing, I always had gfs global rename lock. It was the first lock taken using gfs_glock_nq_init (that uses synchronous lock by default). So I never saw this problem. 3) There are few more bugs in gfs_rename(), for example, it never checks whether odip == ndip and requests these two locks (odip and ndip) regardless. Under synchronous locking, this would immediately generate an assertion failure but asynchronous locking let it get away with this. Interestingly GFS2 doesn't have this issue (it does check odip == ndip before requesting the lock. The good news is that after changing the rename locking into synchrnous mode, gfs no longer crashes. The bad news is that gfs_rename now deadlocks "correctly" from time to time. For example, each rename is normally associated with 4 locks (two directories and two files) - it never bothered to check their sequence, comparing to other part of the GFS kernel. In one instance, it deadlocks with gfs_lookupi where it locked file first, followed by directory while rename lock directory first, followed by file lock. The issue is in the routine "gfs_glock_nq_m" where it siliently switches the locking mode into asynchronous, regardless the request. I'm still deciding whether I should chase the asynchronous locking using this bugzilla (and fix gfs_glock_nq_m() accordingly) or just fix gfs_rename now and leave the asynchronous locking issue (and gfs_glock_nq_m) for next update (RHEL 4.6). Chasing glock async locking issues could take a while ... I've isolated the changes within gfs_rename() and leave gfs_glock_nq_m() un-touched. 
Will need to look into asynchronous locking issues as soon as we get a chance. After debug statements are cleaned up, will check the changes into CVS today.

Messing with rename is like walking on a place full of land mines. Each time I try a new test case, a new deadlock explodes. The newest deadlock is very messy to get away from under the current GFS lock ordering rule. When rename finds its destination file has been deleted by another node, it already has all the locks (two directory locks plus two file locks). My original patch discards the stale dentry and creates a new dentry based on the new file (created by another node). We need to lock this new file with the new ino. If the new ino is smaller than its directory lock, we could deadlock with another node if it happens to be doing a lookup on this file (where it would get this lock first, followed by the directory lock; this process, on the other hand, has the directory lock already but will be required to get this new lock). I tried to enforce the lock order by releasing all the old locks, then re-acquiring them all again, but it makes the code very messy. Under Dave's test program, it looks very terrible since as soon as I release the directory locks, another node has picked up these locks and removes this new file *again* :(.

A compromise made by today's CVS check-in is to check the new ino number to see whether it could cause deadlock. If yes, then we give up by returning the original error return (-ENOENT). Sadly move this bugzilla into modified state. This is about all I can do in RHEL 4.5.

So in short, after such a long and exhausting effort, we only reduce the possibility of returning this -ENOENT (that violates the rename API) by roughly 25%, if GFS allocates the ino truly at random.
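For reference, the shape of the failing test (the attachments above) is roughly the loop below, shown here as a hypothetical Python translation of the original C program. On a single machine with a local POSIX filesystem, rename() atomically replaces an existing target, so this loop reports no ENOENT; the bug was precisely that GFS running on two nodes broke that guarantee:

```python
import errno
import os
import tempfile
import threading

def churn(workdir, ident, iterations, errors):
    """Repeatedly create a uniquely named file and rename it to a shared name."""
    target = os.path.join(workdir, "bar")
    for count in range(iterations):
        src = os.path.join(workdir, "foo.%s.%d" % (ident, count))
        with open(src, "w") as f:
            f.write("x")
        try:
            os.rename(src, target)    # POSIX: atomically replaces 'bar' if it exists
        except OSError as e:
            if e.errno == errno.ENOENT:
                errors.append(count)  # the failure mode reported on GFS across nodes
            else:
                raise

workdir = tempfile.mkdtemp()
errors = []
threads = [threading.Thread(target=churn, args=(workdir, i, 200, errors))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("ENOENT errors:", len(errors))  # 0 on a local filesystem
```

The original test ran one such loop per cluster node (plus an open() of the shared name in the updated attachment); the identifiers above are invented for the sketch.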
https://bugzilla.redhat.com/show_bug.cgi?id=190475
Putting an Object in a Safe State

Class Attributes

As mentioned earlier, it is possible for two or more objects to share attributes. In Java and C++ you do this by making the attribute static:

public class Count {
    static int count;
    public void method1() {
    }
}

By declaring count as static, this attribute is allocated a single piece of memory for the class. Thus, all objects of the class use the same memory location for count. Essentially, each class has a single copy, which is shared by all objects of that class (see Figure 7).

Figure 7: Class attributes.

Suppose two objects, Count1 and Count2, both use the static count attribute to count sheep. The instant that Count2 records its first sheep, the data that Count1 was saving is lost.

Operator Overloading

Some O-O languages allow you to overload an operator. C++ is an example of one such language. Operator overloading allows you to change the meaning of an operator. For example, most people, when they see a plus sign, assume that it represents addition. If you see this equation

X = 5 + 6;

you expect that X would contain the value 11. And in this case, you would be correct. However, there are times when a plus sign could represent something else. For example, in the following code:

String firstName = "Joe", lastName = "Smith";
String Name = firstName + " " + lastName;

You would expect that Name would contain Joe Smith. The plus sign here has been overloaded to perform string concatenation. In the context of strings, the plus sign does not mean addition of integers or floats, but concatenation of strings. What about matrix addition? You could have code like this:

Matrix A, B, C;
C = A + B;

Thus, the plus sign now performs matrix addition, not addition of integers or floats. Overloading is a powerful mechanism. However, it can be downright confusing for people who read and maintain code. In fact, developers can confuse themselves. Java does not allow the option of overloading operators. The language itself does overload the plus sign for string concatenation, but that is it.
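Python is another language that, like C++, allows operator overloading, and it makes the Matrix example above concrete in a few lines. The hypothetical Matrix class below gives the plus sign its matrix-addition meaning by defining the special __add__ method:

```python
class Matrix:
    def __init__(self, rows):
        self.rows = rows

    def __add__(self, other):
        # Element-wise addition: this is what '+' now means for two Matrix objects.
        return Matrix([[a + b for a, b in zip(r1, r2)]
                       for r1, r2 in zip(self.rows, other.rows)])

A = Matrix([[1, 2], [3, 4]])
B = Matrix([[10, 20], [30, 40]])
C = A + B                      # actually calls A.__add__(B)
print(C.rows)                  # [[11, 22], [33, 44]]

# The same operator still means concatenation for strings:
print("Joe" + " " + "Smith")   # Joe Smith
```

The readability cost the text warns about is visible here too: nothing at the call site C = A + B tells a maintainer whether '+' means numeric addition, concatenation, or something stranger.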
The designers of Java must have decided that operator overloading was more of a problem than it was worth. If you must use operator overloading, take care not to confuse the people who will use the class.

Multiple Inheritance

As the name implies, multiple inheritance allows a class to inherit from more than one class. On the surface this seems like a great idea. Objects are supposed to model the real world, are they not? And there are many real-world examples of multiple inheritance. Parents are a good example of multiple inheritance. Each child has two parents—that's just the way it is. So it makes sense that you can design classes by using multiple inheritance. And in some O-O languages, such as C++, you can.

However, this situation falls into a category similar to operator overloading. Multiple inheritance is a very powerful technique, and in fact, some problems are quite difficult to solve without it. Multiple inheritance can even solve some problems quite elegantly. However, multiple inheritance can significantly increase the complexity of a system. As with operator overloading, the designers of Java decided that the increased complexity of allowing multiple inheritance far outweighed its advantages, so they eliminated it from the language. In some ways, the Java language construct of interfaces compensates for this; however, the bottom line is that Java does not allow conventional multiple inheritance.
https://www.developer.com/java/ent/article.php/10933_3464311_5/Putting-an-Object-in-a-Safe-State.htm
One of the many good things in BeOS Release 3 was the addition of system-wide translation services through the Translation Kit. While translation between different data formats was previously available in the form of a third-party library named Datatypes, having a Kit in the BeOS makes it easier to use and install, because you can assume it's always there. The Translation Kit will load any old-style Datatypes add-ons, but the interface used by Datatypes is deprecated. The actual work of translating data in the Translation Kit is performed by add-ons known as translators. This article explains what a translator add-on must do to be used by the Translation Kit, and what it can do to be a well-behaved citizen in the world of data format conversions. But first, the code! The archive can be found at:

The purpose of our translator is to allow applications to read and write the PPM bitmap file format. I chose PPM because it is a format that is fairly simple to understand, while having enough variation to illustrate how to configure a translator add-on. It is also a fairly popular format for UNIX-style image processing tools.

For translation to work, there has to be some common ground between the translators and the applications using them. For bitmap graphics, this common ground is found in the B_TRANSLATOR_BITMAP format, a format designed to be easily readable and writable to BBitmap system objects. The format of a B_TRANSLATOR_BITMAP formatted file is simple: First, there is the file header, which consists of a struct:

struct TranslatorBitmap {
    uint32 magic;      /* B_TRANSLATOR_BITMAP */
    BRect bounds;
    uint32 rowBytes;
    color_space colors;
    uint32 dataSize;
};

As you can see, all elements of this struct are 4 bytes in size (the BRect is larger, but each of its elements is 4 bytes in size), so there should be no alignment problems when reading/writing this struct on different platforms.
However, the byte order needs to be well defined, and since Datatypes was around long before the x86 port of BeOS, the well-defined byte order of the TranslatorBitmap struct is big-endian. The magic field should be set to the B_TRANSLATOR_BITMAP constant, swapped to big-endian if necessary. The bounds field should be set to the Bounds() that a BBitmap system object would use to contain the image. Note that Bounds().right is ONE LESS than the width of the image in pixels, because the 0-th pixel counts as one pixel. Again, you need to swap the BRect members as necessary. For rowBytes, see below. colors is one of the values defined in GraphicsDefs.h which describe various ways you can interpret the raw pixel data. In R4, there will be more values defined for a color_space, although not all values work if you use DrawBitmap() to draw such a bitmap to the screen. In the sample source, we have cribbed the definitions for B_CMYK32 and relatives from R4 so that we can illustrate how to convert between color spaces. dataSize, lastly, should tell how much pixel data follows the header; the size of the header (32 bytes) does not count. This should always be set as follows:

header.dataSize = (header.bounds.Height() + 1) * header.rowBytes;

Again, be careful about byte-swapping. After this struct the actual data of the bitmap follows directly, from left to right, top to bottom, padded to rowBytes bytes per scanline. rowBytes is typically the smallest multiple of four bytes that will fit the width of the bitmap in whole pixels across. The general rule with regards to byte-swapping is to swap only when you need to read or write data, and keep it in the native format internally. Doing this ensures that you can easily access the values of the header. For instance, if you were to write a BBitmap out in the B_TRANSLATOR_BITMAP format, here's how you could do it:

status_t WriteBitmap(BBitmap* map, BDataIO* out)
{
    TranslatorBitmap header;
    /* prepare header */
    header.magic = B_TRANSLATOR_BITMAP;
    header.bounds = map->Bounds();
    header.rowBytes = map->BytesPerRow();
    header.colors = map->ColorSpace();
    header.dataSize = header.rowBytes * (header.bounds.Height() + 1);
    /* swap header */
    header.magic = B_HOST_TO_BENDIAN_INT32(header.magic);
    header.bounds.left = B_HOST_TO_BENDIAN_FLOAT(header.bounds.left);
    header.bounds.top = B_HOST_TO_BENDIAN_FLOAT(header.bounds.top);
    header.bounds.right = B_HOST_TO_BENDIAN_FLOAT(header.bounds.right);
    header.bounds.bottom = B_HOST_TO_BENDIAN_FLOAT(header.bounds.bottom);
    header.rowBytes = B_HOST_TO_BENDIAN_INT32(header.rowBytes);
    header.colors = (color_space) B_HOST_TO_BENDIAN_INT32(header.colors);
    header.dataSize = B_HOST_TO_BENDIAN_INT32(header.dataSize);
    /* write header */
    status_t err = out->Write(&header, sizeof(header));
    /* write data */
    if (err == sizeof(header)) {
        err = out->Write(map->Bits(), B_BENDIAN_TO_HOST_INT32(header.dataSize));
        if (err == B_BENDIAN_TO_HOST_INT32(header.dataSize)) {
            err = B_OK;
        }
    }
    return (err > B_OK) ? B_IO_ERROR : err;
}

I have sloppily been saying "file format" above; the truth is that any BPositionIO object can be used by a translator, and as long as you can Seek() and SetSize() and Read() and Write() it, it needn't be a BFile proper. It can be one of the system classes BMallocIO or BMemoryIO, or it can be your own class that knows how to read and write data to some special storage you're using. This is used by the system class BBitmapStream, which knows how to present a BBitmap as a "stream" of data.

Now, your job as a bitmap image translator is to read data in your "special" file format from the input stream, and write it to the output stream in the "standard" bitmap format as explained above. You should also be capable of doing the reverse: reading data in the "standard" bitmap format, and writing it out in your special format. This reading/writing is done in the exported Translate() function.
Translate() is passed an input and output stream, type information that a previous call to Identify() returned, possibly a BMessage containing some configuration information and out-of-band information, and a requested output format type. This type is a four-letter type code of the kind found in the system headers, and the specific value is taken from your outputFormats[] array or the return data from Identify(). If there is no type code defined for the format you're dealing with, you have to make one up. When you do, remember that Be reserves all type codes that consist solely of upper-case letters, digits, the underscore character, and space. Your best bet is to use lower-case letters in your own type codes. There are standard formats for some other kinds of data besides bitmap images. You can find them in TranslatorFormats.h, and they are also described in the Translation Kit chapter of the online Be Book.

There are some things that you need to do to get the Translation Kit to the point where it calls your Translate() function. There are many translators installed in a typical user's system, so how does it know which translator to use? Typically, a translator is selected in one of two ways:

1) An application that implements an "export" menu item, such as the Becasso paint program, or the R4 version of ShowImage, calls on the Translation Kit to list all available translators, and to select those that say that they can translate from the B_TRANSLATOR_BITMAP format to some other format. It then lets the user choose one of these translators using some UI (a dialog or menu, typically) and tell the Translation Kit to specifically use the translator selected. For this to work, your translator needs to tell the world what formats it can read and write. It does so in the inputFormats[] and outputFormats[] arrays. These are arrays of struct translation_format, terminated by a format with all 0 values.
While exporting these arrays is called "optional" in the documentation, applications that want to perform an export will not know about your translator unless it exports these arrays. Also note that there is no way to specify that only certain combinations of input and output file formats are good. Once you declare some input formats and some output formats, any combination of them may be used by the Translation Kit, including, in some degenerate cases, translating to the SAME format (i.e., B_TRANSLATOR_BITMAP to B_TRANSLATOR_BITMAP). You decide how to best deal with this situation; just copying from input to output is acceptable, although if your translator can also do some other tricks (like the color space conversion of PPMTranslator) you might want to do that even on same-format translations.

2) An application that accepts "any file" and then uses the Translation Kit to figure out what it was will cause your Identify() function to be called. The role of your Identify() function is to look at the beginning of the file and figure out whether it is in one of the formats you know how to handle. Note that Identify() is often called before Translate(), even if the application selects your translator specifically, so you have to do a good job here. Because the BPositionIO passed for input may have some special meaning, such as reading from a network socket, you should not read more data than you need to make an educated guess as to the format of the data you're passed. Similarly, calling Size() or Seek() relative to the end-of-file of the BPositionIO might be an expensive operation that causes the entire file to be downloaded to disk before it returns, so it should be avoided in your Identify() function. Otherwise, if you then fail to recognize the format, the user wasted an hour on a 28.8 kbps download just to get nothing useful out. Also, some applications use the Translation Kit only to identify what something is; they don't actually Translate() it.
Wasting time getting to the end of the file is then doubly pointless. There are some additional required data items you need to export from your translator for the Translation Kit to use it. They tell the world your translator's name, some information about it, and the version. If there are two translators with the same name but different versions installed, the Translation Kit may choose to use only the latest version, for instance. Thus, you should make sure that you always bump the version number when releasing a new version of your translator, and that you never change your translator's name (as seen in translatorName[]) once it's set. translatorInfo[] is your personal soap box, and is a great place to put shareware notices, copyright information, URLs, or secret cabal messages. Except that then they wouldn't be secret anymore. There are three more optional functions that you may choose to export, even though your translator will work and be used by the Translation Kit without them. MakeConfig() allows you to create a BView (to which you can add other BViews such as BCheckBoxes and BMenuFields) that a client application can add to a window and display on screen. The purpose of this view should be to twiddle whatever tweakable parameters your translator has, and the View should remember these changes for later uses. You can see this implemented in the PPMTranslator as the struct ppm_settings variable g_settings, and the PrefsLoader class instance g_prefs_loader. GetConfigMessage() should return a "snapshot" of the current settings in the message passed to it. An application can pass a copy of this "snapshot" message data to a later call to Translate(), and your translator should then use whatever settings are kept in that message rather than the defaults. 
Similarly, an application can pass a copy of the data in this message to MakeConfig() to have the view preconfigured to the settings stored in that message rather than the current defaults (although the translator is allowed to change the defaults to what's in the message, as done in PPMTranslator). These two functions together make it possible to create an application which can present a UI for choosing a translator, to configure that translator, and later to use that specific translator/configuration pair to actually perform a translation. Great for automated batch conversions, for instance! For more detailed information on the functions used by the Translation Kit, look at the Translation Kit chapter of the Be Book, the section on writing a Translator add-on. The last optional function is main(). On the BeOS, there really isn't any difference between shared libraries, add-ons, and applications, except in the way they're used and what you call them. You can load an application as an add-on, or launch a shared library, providing that the executable in question exports the right functions. To be an application, all you have to do is to export a symbol named main(). PPMTranslator takes advantage of this schizophrenia to do something useful when double-clicked—it runs its own control panel by calling its own MakeConfig() function and adding the resultant View to a window, and then quits when the window is closed. I recommend that all translator add-ons do the same thing; that gives a user an easy way of setting the defaults for use by applications that don't display translator user interfaces, and users also get something useful out of double-clicking what might be an unknown executable found on their disk. Once your translator is debugged and ready to ship, you only need to make sure it gets installed where the Translation Kit will find it. 
By default, the Translation Kit will look in the following three places for translator add-ons:

/boot/home/config/add-ons/Translators/
/boot/beos/system/add-ons/Translators/ —reserved for Be
/boot/home/config/add-ons/Datatypes/ —for old Datatypes

However, the user can change this behaviour by setting the environment variable TRANSLATORS. Users who do this are considered power users, so making sure your translator gets installed in ~/config/add-ons/Translators/ by default is the right thing to do.

Before I end this article, I want to explain a few things about the code included with this article. First, there is downloading and installing the code. Just get it from the URL above, put it where you usually put sample source code, and un-zip it (or let the Expand-o-matic do it for you). Then, in a Terminal window, "cd" to the newly un-zipped folder, and type make install to build and install the PPMTranslator and the translate command-line tool. Documentation for the use of translate is scarce, but you have the source, so you should be able to figure it out from there. PPMTranslator should be doing most things "right" and thus be suitable as sample source. If you find something you don't like or think might be a bug, I'd be interested in hearing about it, and fixing the archive.

The utilities in colorspace.cpp are intended as a quick way to get the job done when you need to output data in some color_space format other than what you have. They are not intended as a high-quality color convolution or separation package. Specifically, the conversion to grayscale is sub-par, and the conversion to/from CMY(K), while correct, assumes that you're using perfect inks on perfect paper. I wish! If you read through the sources and conclude that Release 4 will define new values for the color_space enum for color spaces not previously defined, you are correct.
However, there is one caveat: while using this enum to communicate color space information is convenient, not all applications or classes will support all color spaces. Drawing a BBitmap in the B_CMYK32 color space to a BView will not work; nor can you draw into a BBitmap with a color space of B_YCrCb_422. Still, having names for these spaces is better than a complete vacuum. What are you waiting for? The source is there to explore, and the world is waiting for your translators! Shoo!

Your programmers work 24-hour days to get your product ready for domestic sales. But, although 50% or more of your revenue is on the line, you don't think to ask them to test the product under the German or Japanese version of the operating system. It's only after your product starts appearing on shelves at CompUSA that the Vice President of Worldwide Sales asks you to demonstrate this hot application running under the Japanese operating system for a group of high-ranking executives from Tokyo. After the fourth crash and reboot, you realize that something isn't right with your product. Your VP apologizes, then says goodbye to the high-ranking executives and to 30% of worldwide revenue.

For most development houses with an international presence, international sales account for upwards of 40% of total revenue. These revenues, however, can be severely reduced or eliminated for any software release when products aren't engineered for sales in international markets. In effect, your engineering department determines your global sales strategy, which may well end up defeated by or suffering from long and expensive localization processes because software products have been written with only a single target language in mind. Before localizing your software, you need to internationalize it. Internationalization is the process of creating a single code tree that is easily localized to multiple languages.
Before spec, during development, and through final code freeze, your engineers should be well-versed in writing international software. Here are five of the most common pitfalls that cripple or defeat the international value of software products.

English letters are usually presented as single-byte characters. Japanese, Chinese, and Korean characters are double-byte. Some text can be represented vertically. In the Middle East, some languages flow from right to left. All platforms have API calls to display all major languages, including US English. A program can automatically display date, time, and other culture standards by querying the operating system. Make sure your programmers learn and use these APIs wherever characters are displayed.

Some components may not be localizable. A third-party vendor may not license a component for international distribution, or may demand additional licensing requirements and fees to localize it for you. Before a component is even tested for use in a product, require a sign-off for worldwide rights and compatibility with all international versions of the OS your company targets.

Your programmers always keep pieces of text, pictures, and sounds in a separate, editable resource file, right? If resources aren't kept separate from the start, your programmers will either have to re-engineer later, or comb through your source code every time you localize into another language. Separate resources are relatively easy to edit, reducing engineering costs considerably, or allowing you to outsource localization entirely.

Almost every SDK starts as English-only. Even Sun's Java Development Kit only recently added true international support. If your product is dependent on a third-party SDK, ensure that it is either compatible with targeted international operating systems, or that you are absolutely sure that international versions will be available by the time you are ready to localize.
Tech writers and help authors like to adapt as much as possible to the platform they are writing for, and add improvements with incremental releases. Usually, help systems, as well as electronic documentation systems and players, do not exist on all platforms and language versions, increasing development costs for these systems. Every new or different word in the documentation has to be translated, copy-edited, and re-published. Therefore, it is practical and cost-effective to standardize on cross-platform, open systems, and to enforce a company policy of standardized product terminology.

International success depends not only on the savvy of your business development team, but on a cooperative and proactive understanding of how to create an internationalized product. Being aware of the pitfalls and integrating strategies into your product specifications will allow you to realize your global revenue potential.

Lynn Fredricks is the President of Proactive International, an international business development company that establishes worldwide distribution networks for software developers. Proactive International specializes in high-profile distribution between North America and Japan, as well as providing product and business analysis for large international corporations. Additional information can be found at.

A while back, Stephen Beaulieu mentioned that DTS has divided support responsibilities for the various areas which make up the BeOS, based on familiarity, experience, and preference. The areas which fell towards me include OpenGL®, POSIX, Replicants, hardware, and printing. When I think about what I'd like to write for my Newsletter articles, I look into the questions which have come my way recently to see what people are asking about. This article was going to be on OpenGL®, but people asked about printing last week, so that's what we'll look at. OpenGL® will have to wait until my next article!
Printing is a great topic, because it produces a physical representation of your work. It's very validating. You can hang your hardcopy on your office wall, show it to your friends, and just generally impress people with it. Portable, high-contrast displays are wonderful, but printing will always be valuable. As I once heard it put so eloquently, the paperless office is as much a myth as the paperless bathroom. (There's a very subtle double entendre in there, "for the connoisseur" as Jean-Louis might say.) Benoît Schillings wrote an excellent article...

Be Engineering Insights: Proper Printing

...a short while back which explained how to get up and running with printing on the BeOS. I want to expand on that article just a little, showing a couple of techniques which you may want to incorporate into your own applications. The code we'll be adding printing to is none other than Dynadraw, the perfect vehicle for showing off printing. You can grab the source code from:

In printing out Dynadraw views, it would be nice to have two modes: one in which one pixel on the screen corresponds to one typographical point on the page, and another where the entire view is scaled to fit on a single page. Correspondingly, under the menu, you'll find the item which has a check next to it. The check implies that we'll scale the view to the page, and the absence of the check implies that we'll produce "life size" output. The DDWindow class manages the printing, and is the target of the two new messages, PRINT_REQ and SCALE_PAGE. SCALE_PAGE notifies the FilterView that we're toggling printing modes. PRINT_REQ calls the imaginatively named DoPrint(), which sets up the print job, requests the view to draw in the correct position, spools pages, and commits the job. The most important thing to do when creating a print job is to set the print settings. This is usually done by making a call to BPrintJob::ConfigPage().
These settings are stored in a BMessage, which you can get a pointer to by calling BPrintJob::Settings(). It is extremely instructive to view the contents of this message when investigating or debugging by calling PrintToStream() on the message—try uncommenting these lines in DoPrint() and then running the application from a Terminal window:

printf("printSettings message:\n");
printSettings->PrintToStream();

Next we calculate how wide and how long the printout will be. We do this by taking the ratio of our view width to our printable rectangle width, and rounding up with the ceiling function. The total number of sheets, therefore, is horizontal sheets multiplied by vertical sheets. We then loop through the pages, offsetting the current page rectangle to the top of the next page. With the current page rectangle lined up correctly, we call BPrintJob::DrawView(), which calls FilterView::Draw() with the rectangle described by curPageRect, and positions the output at (0,0) on the page. Having drawn, we add the page to the spool file, check that we haven't been interrupted, and continue the loop. Finally, we commit the job, and send the spool file to the printer.

The FilterView::Draw() function needs to be modified only slightly. If the view is being printed and the user has selected the scale-to-page item, then we determine a scaling factor and apply it. We determined the scaling factor here by finding the horizontal scaling factor and the vertical scaling factor, and taking the smaller of the two. We call SetScale() with this argument after reducing it by an epsilon, to make it more attractive on the page. (Note that SetScale() takes a double, with 1.0 == 100%, that is, full size.) The BView::IsPrinting() function is critical to any application which does printing. It allows the Draw() function to modify its behavior for the printed page versus the screen. Any "screen-only" code you have in your draw function should be wrapped in an "if (!IsPrinting()) ..." check.
As you can see, the BPrintJob class and the BView class work hand-in-hand to allow you to add quick and easy printing to your applications. Read Benoît's article, read about BPrintJob in the Be Book, and try it yourself! Happy printing!

This is not about making a mésalliance of respected professionals and confidence men in the same column. No, today's topic is an attempt to balance two opposing views of an issue. That is, the good and bad sides of a hypothetical venture fund whose sole purpose would be investing in Be developers. Actually, this isn't so hypothetical. This week's choice of topic was triggered by reactions to a mention on our site of Marco Bernasconi's BeFund: Marco, a long-time friend of Be, is based in Switzerland, and he *is* the fund. We're all grateful for Marco's role as an "angel" to Be developers and, as the celestial label implies, this is not a classical Silicon Valley venture fund. What many correspondents have asked, however, is whether there should be a standard venture fund for BeOS-centric companies. I'll try to reproduce their arguments for and against, taking full responsibility for whatever distortions I might introduce in the process.

On the pro side, several readers point to the keiretsu approach adopted by Kleiner Perkins in convincing other investors to join them in investments supporting the Go/Eo platform. More recently, Kleiner Perkins took a similar course in leading the creation of a Java Fund. As is now well understood, the reasoning behind this kind of venture investment is that a new platform needs a critical mass of symbionts such as software or hardware developers; the platform support fund provides capital to these helper companies. If the platform "ignites," all investors, whether in the platform company or in third-party companies, are in a good position to profit handsomely for being in before success became retroactively obvious.
If critical mass isn't achieved, well, they tried, as manly venture investors are supposed to do. Critics of the idea call it "dirigiste" (that is, interventionist), an affront to the way things are done in free-market heaven. Essentially, they say, if a start-up offers a money-making opportunity, it will get financed. There is so much capital available right now that any good team with a good business plan will attract funding. In other words, if the free market doesn't fund BeOS developers, listen to what investors are saying: the business plan won't work. Perhaps it's the team, or the product; more likely there isn't enough confidence that the platform will reach critical mass and reward investors accordingly. Furthermore, opponents of the idea point out, the Go/Eo keiretsu didn't go anywhere, and there is doubt that Java will ever ascend to the Windows-killer platform status originally ordained for it. In other words, the dirigiste idea doesn't work, critics say, looking at my passport. Others take a different perspective. They point out that most large companies now deploy a strategic investment fund of one kind or another. These days, one of the most visible examples is Intel, but from Adobe and AT&T to 3Com and Cisco, "everybody does it." That does not automatically make it a good idea, but one is tempted to assume these companies have a different conduit for their philanthropic activities. In other words, all these CEOs, and their boards of directors, believe in the divine intervention of the free market for most things, but they are collectively willing to put billions in play when it comes to creating critical mass, or a self-fulfilling prophecy, rather than letting nature take its course. If memory serves, in the early eighties, a rich Apple took a 20% equity position in the new and, at the time, unproven Adobe. That investment helped the rise of the Macintosh platform and Apple made a killing of Microsoft proportions on its Adobe stake. 
In fairness, the examples go back and forth, and for every success story one may find failures of Momenta proportions. There is no absolute truth in this matter of platform-support investments. There is only risk taking.

Is it possible to produce a Be app by cross-compiling on another platform? The libraries are, as Ernest S. Tomlinson pointed out, "Portable Executable binaries," so some clever compiler direction should do the trick. Right? Fred Fish pointed to a tract in the Microsoft Developer Network Library called "Learn System-Level Win32 Coding Techniques by Writing an API Spy Program" that shows you "how to make all client-sharedlib calls go through some private code you supply." Thomas Hudson nominated Chris Herborth's port of SWIG, which "analyzes C++ code and generates an interface to various scripting languages such as TCL/Tk, Python, Perl..." There was some shouting from the "feel the pain" crowd (REAL BeOS developers use MW/BeIDE). And some listeners took the opportunity to re-open the binary format debate (ELF vs PEF/PE).

Should a BHandler be destroyed when its BLooper is destroyed? Jesse Hall sees the looper/handler relationship as similar to that of window/view. And just as dying windows destroy their views, loopers should clean up after themselves. But maybe not: Matt Brubeck thinks unhitching the fate of a handler from its (current) looper makes message redirection easier, particularly when the looper is a window.

THE BE LINE: To paraphrase Peter Potrebic, the fact that a destroyed looper doesn't destroy its attached handlers is not a bug, and won't change. However, the looper should (but doesn't) remove its handlers while it's being destroyed. This is a bug and will be fixed.

Tinic Uro submitted a mathlib mystery spot:

printf("%f\n", (float) rint(8.5));
printf("%f\n", (float) rint(9.5));

...produces '8' and '10'. Why?
Jens Kilian offered an explanation: “This is the American way to round a number—if it's exactly halfway between integers, it is rounded to the EVEN one. The German way, which I and (presumably) you were taught, is to simply take ceil(x + 0.5), always rounding UP when halfway between numbers.” International relations at stake, a number of listeners wrote in to suggest using floor() and ceil() (or, for the latter, floor(x+.5)) instead of rint().

THE BE LINE: (From Mani Varadarajan) “[rint()'s behavior] is correct. To prevent skewing rounding errors only one way, it is a well-established rule that one should always round to the even number if a number is exactly between an even and odd value (pardon the lack of precise mathematical terminology). If the rule were to round always up, the error would be skewed in one direction. This evens out the error.”

What's Be's Java policy? It was announced that Be had been dealt into Sun's floating crap game, but have they ante'd up? Is there a JVM on the horizon?

THE BE LINE: Be is NOT an official Java licensee. We'd love to see a JVM running on the BeOS, but we have no plans to work on one ourselves.

The windows for all instances of a multi-launch app are listed together in the Deskbar's app window list. How do you tell which windows belong to which instance of the app? Suggestions:

- Map your windows to logical units. For Felix (for example), this means each window would represent a distinct server, and tab views within the window would serve the various channels on that server.
- Go "docu-centric" and teach the windows to identify themselves. In the context of the previously proposed Felix example, a window would name the server & channel to which it's connected. You don't really care which instance of the app runs a particular window; all you care about is the window's target/contents.
- Make your app single launch, multi-window. Why relaunch the app just to open another channel?
If the app is well-written, you should ALWAYS be able to get another window. Fine suggestions all, each with their own set of advantages and blemishes. As an example of the latter, comes this from Matt Brubeck: “I have NetPositive running in workspace three. I want to open a NetPositive window in workspace one. Ideally, I should be able to go to workspace one, launch NetPositive, and have it open a new window. Currently, I have to go to workspace three, tell NetPositive to open a new window, use Workspaces to drag the window to workspace one, then go to workspace one. Bleah.”
https://www.haiku-os.org/legacy-docs/benewsletter/Issue3-26.html
Bug Description

When I do a search in Totem's YouTube plugin it gets stuck at "Fetching results ...". The reason for that:

** (totem:2670): DEBUG: Init of Python module
** (totem:2670): DEBUG: Registering Python plugin instance: BBCViewer+
** (totem:2670): DEBUG: Creating object of type BBCViewer+
** (totem:2670): DEBUG: Creating Python plugin instance
** (totem:2670): DEBUG: Init of Python module
** (totem:2670): DEBUG: Registering Python plugin instance: YouTube+
** (totem:2670): DEBUG: Creating object of type YouTube+
** (totem:2670): DEBUG: Creating Python plugin instance
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/
    self.run()
  File "/usr/lib/
    res = self.callback(
  File "/usr/lib/
    mrl = "http://
AttributeError: 'NoneType' object has no attribute 'groups'

My setup: 64-bit Ubuntu 8.10. Tried a patch from here, still not working: http://

This bug was fixed in the package totem - 2.24.2-0ubuntu4
---------------
totem (2.24.2-0ubuntu4) intrepid; urgency=low

  * debian/ - upstream change to understand the new youtube url (lp: #288494)

 -- Sebastien Bacher <email address hidden>  Fri, 24 Oct 2008 19:08:42 +0200

Will this fix be backported to totem in Ubuntu Hardy? This (or something very similar) is still causing the YouTube plugin in Hardy to not work. A backport would be appreciated.

Please fix this in Hardy. "3 years support". Thanks!

I've fixed the youtube.py; I don't know if the fix has already been released, but it is working on my Acer One. If you have the problem, try the following attachment and copy it to /usr/lib/ (don't forget to make a backup of your youtube.py).

Thank you! Confirmed, and thanks to Rui — it works. @Admins, please verify and upstream for Hardy LTS, thanks.

I confirm that Rui Oliveira's fix works.

Worked for me too. Except that Totem does not seem to buffer the video and I can not use the time slider.

Koopee: The time slider never worked in the YouTube plugin bundled with Hardy. It seems to do buffering on my PC.
Thanks Rui, it indeed seems to depend on buffering. I noticed that if I pause the YouTube stream for too long, I get a "Could not read from resource." error. I would guess that the server thinks the connection is interrupted and breaks the connection. Anyway, which component handles the buffering? I think I could do something to enhance it if I knew where to look.

Found a nice program, youtube-dl. It helps to download the whole video from YouTube. That gives enough buffering for every need...

If you think that the problem is solved then you are wrong. There is still no official backport to Hardy ("3 year support", isn't it). Downloading some files from a third party is no solution to that problem. — Mariusz Kielpinski

Mariusz Kielpinski: If you think that files from a third party are no solution, then don't download the file I've attached and stop whining. — Koopee

> Mariusz Kielpinski: If you think that files from a third party are no solution, then don't download the file I've attached and stop whining.

Worked here. Well done.

I'm sure Mariusz Kielpinski meant that it's no solution for the 99% of people who have experienced this problem. That is, the people who have experienced it and did not care enough to go hunting around for a random solution on a bug tracking site. That's not what "Linux for human beings" means. It actually took me a good 15 minutes to find this; there are many other bugs with "ffdemux_swf" that have nothing to do with this particular problem (those other bugs refer to codec issues with gstreamer, and several people from the other bugs have been misdirected and have given up because of the confusion). This isn't a simple find by any means. Don't get me wrong, the fix works perfectly, and I thank you for the work you've done, but it's been a month since the fix was made available. We don't blame you for not placing this in hardy-backports; the admins are the ones who should have done this.
Pointing out a problem is not whining, even though the problem is fairly small: it's a niche plugin for Totem, after all. I'm under the impression, however, that Ubuntu originally prided itself ideologically on shipping this plugin as the solution to the proprietary evils and such conferred by using Adobe Flash to actually go to YouTube. Touting this as the solution and then leaving it broken is slightly disingenuous, though I'm sure it wasn't done intentionally. Point is, they should keep their 3-years-support promise and pay attention.

I see your point and I agree, evolipel. I still don't understand why this bug is Confirmed when I released a valid fix that so far works for everyone. I could understand this delay if the Ubuntu devs have plans to backport a recent version of Totem that has the fix. Meanwhile I hope the people affected reach this Launchpad bug; maybe Google can be their friend.

I've been looking at this change for Hardy but it's not trivial. The code changed quite a lot between Hardy and Intrepid, and the patch which was used in Intrepid doesn't apply to the Hardy version. The updated version which is available in this bug doesn't only change the URL used but also has changes to use the API URLs, which seems to be a different issue, and it's not clear if those are required.

The change seems to be working correctly on Hardy, so I've uploaded that now.

Sebastien Bacher wrote:
> the change seems to be working correctly on hardy so I've uploaded that now

Pls., where?...

csola wrote:
> Sebastien Bacher wrote:
>> the change seems to be working correctly on hardy so I've uploaded that now
> Pls., where?...

Big, big sorry... Solved... warp

Accepted into hardy-proposed, please test and give feedback here. Please see https:/

Confirming that totem 2.22.1-0ubuntu3 from hardy-proposed fixes this issue.
This bug was fixed in the package totem - 2.22.1-0ubuntu3
---------------
totem (2.22.1-0ubuntu3) hardy-proposed; urgency=low

  * debian/ - upstream change to understand the new youtube url (lp: #288494)

 -- Sebastien Bacher <email address hidden>  Thu, 15 Jan 2009 11:03:18 +0100

I already had the totem package above installed (it came through automatic update) but it still didn't fix it. The way I had been doing it was to copy the downloaded YouTube video from the /tmp folder to the desktop and add the extension .flv. This used to work about 6 months ago but no longer. My fix: download the package youtube-dl with the Synaptic package manager. To use, open a terminal and enter:

youtube-dl <youtube video URL>

The resulting .flv file will be saved to your home folder (unless you changed directory in the terminal). I can play this flv in Totem no problem. Hope that helps.

Nothing has helped. But it is not important anymore; I totally uninstalled Totem. I use only VLC player and the mplayer frontend SMPlayer. I have dismissed Totem entirely. (Very sorry.) Thanks for the effort. Regards, Tom. So no more emails about this bug please.

Security? What? Why?

The bug described there has been fixed; don't reopen without comment.

Thanks for the information - I updated the file and now it works fine. I also stopped using 8.04 and I'm now using 9.04.

YouTube in Totem does not work again in 9.04. 1. Searching is working. 2. Playing is not. I am getting a box with the message "An error location; you might not have permission to open the file". Here is the totem log:

ivan@tachilla:~$ LANG=C totem /var/lib/
import sha
** Message: Error: "http://
gstsouphttpsrc. 404 Not Found
----------end of log----

All files are not found. Maybe it is a security feature from YouTube? HERE IS MINE.

This bug was already fixed. If further updates to the code are needed, please open a separate bug report.

Do regressions that lead to the same error need dupes on Launchpad? Why don't I just report a regression locally, i.e. here?
The bug is known upstream at http://bugzilla.gnome.org/show_bug.cgi?id=557681 and has been fixed in their SVN now.
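For readers unfamiliar with the error in the original report: the AttributeError in the traceback is the classic failure mode of calling .groups() on a failed regular-expression match. When YouTube changed its URL format, the plugin's pattern stopped matching, re.search returned None, and the code dereferenced it anyway. A minimal Python reproduction (the pattern and URLs here are made-up stand-ins, not Totem's actual regex):

```python
import re

# Hypothetical old-style pattern; assume the site later changed its URL format.
OLD_PATTERN = re.compile(r"video_id=(\w+)")

def extract_id(url):
    match = OLD_PATTERN.search(url)
    # Totem's plugin skipped this check, so a non-matching URL raised:
    #   AttributeError: 'NoneType' object has no attribute 'groups'
    if match is None:
        raise ValueError("unrecognized URL format: " + url)
    return match.groups()[0]

print(extract_id("http://example.com/watch?video_id=abc123"))  # abc123
```

With the explicit None check, a format change produces a clear error message instead of a confusing crash deep inside a worker thread.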
https://bugs.launchpad.net/ubuntu/+source/totem/+bug/288494
In this tutorial, you will learn how to add months to a date. Here, we are going to add a few months to the current date and return the date of that day. For this, we have created a Calendar instance and got a Date representing the current date. Then, using the add() method of the Calendar class, we have added 4 months to the calendar, and using the Date class, we have got the date of that day.

Example:

import java.util.*;

public class AddMonthsToDate {
    public static void main(String[] args) {
        Calendar calendar = Calendar.getInstance();
        Date today = calendar.getTime();
        System.out.println("Today's Date: " + today);
        calendar.add(Calendar.MONTH, 4);
        Date addMonths = calendar.getTime();
        System.out.println("Date after 4 months: " + addMonths);
    }
}

Output:

Today's Date: Tue Oct 09 13:08:07 IST 2012
Date after 4 months: Sat Feb 09 13:08:07 IST 2013
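One subtlety worth knowing: Calendar.add clamps the day of month when the target month is shorter, so October 31 plus four months gives February 28 rather than an invalid date. The same rule can be sketched in Python (a hand-rolled helper written for illustration, not part of the tutorial's code):

```python
from datetime import date
import calendar

def add_months(d, months):
    # Mimics java.util.Calendar.add(Calendar.MONTH, n): the day of month
    # is clamped to the last valid day of the target month.
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(add_months(date(2012, 10, 9), 4))   # 2013-02-09
print(add_months(date(2012, 10, 31), 4))  # 2013-02-28 (day clamped)
```

The integer division on month_index also handles negative offsets and year rollover, which is easy to get wrong when writing this by hand.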
http://www.roseindia.net/tutorial/java/core/addMonthsToDate.html
In the earlier versions of the .NET Framework, writing code to perform asynchronous IO operations was not possible, and hence the IO operations had to be synchronous. The problems that developers were encountering with the synchronous approach were:

1. Unresponsiveness of the UI - if the application is a thick client and has to perform file IO operations based on user actions.
2. Performance issues - in the case of background processes that have to handle large files.

In .NET Framework 4.0, asynchronous IO provisions were given for classes like StreamReader, StreamWriter, etc. through the methods BeginRead, BeginWrite, etc., involving callbacks. Though this provided a way to write asynchronous code, there was yet another drawback--the code complexity! In .NET Framework 4.5 the IO classes are packed with new Async methods using the await and async keywords, which can be used to write straightforward and clean asynchronous IO code. Below are the advantages of using these new async IO methods:

1. Responsive UI - in Windows apps, the user will be able to perform other operations while the IO operation is in progress.
2. Optimized performance due to concurrent work.
3. Less complexity - as simple as synchronous code.

In this article we look at a few examples of async IO operations in .NET Framework 4.5.

StreamReader and StreamWriter

StreamReader and StreamWriter are the widely used file IO classes for processing flat files (text, CSV, etc.). The 4.5 version of the .NET Framework provides many async methods in these classes. Below are some of them:

1. ReadToEndAsync
2. ReadAsync
3. ReadLineAsync
4. FlushAsync - Reader
5. WriteAsync
6. WriteLineAsync
7. FlushAsync - Writer

The code below reads the content from a given list of files asynchronously.
using System;
using System.Collections.Generic;
using System.IO;

namespace AsyncIOSamples
{
    class Program
    {
        static void Main(string[] args)
        {
            List<string> fileList = new List<string>() { "DataFlatFile1.txt", "DataFlatFile2.txt" };
            foreach (var file in fileList)
            {
                ReadFileAsync(file);
            }
            Console.ReadLine();
        }

        private static async void ReadFileAsync(string file)
        {
            using (StreamReader reader = new StreamReader(file))
            {
                // Does not block the main thread
                string content = await reader.ReadToEndAsync();
                // Gets called after the async call is done.
                Console.WriteLine(content);
            }
        }
    }
}

Now let us try with ReadLineAsync and read the content from a single file asynchronously.

using System;
using System.IO;

namespace AsyncIOSamples
{
    class Program
    {
        static void Main(string[] args)
        {
            ReadFileLineByLineAsync("DataFlatFile1.txt");
            Console.WriteLine("Continue with some other process!");
            Console.ReadLine();
        }

        private static async void ReadFileLineByLineAsync(string file)
        {
            using (StreamReader reader = new StreamReader(file))
            {
                string line;
                while (!String.IsNullOrEmpty(line = await reader.ReadLineAsync()))
                {
                    Console.WriteLine(line);
                }
            }
        }
    }
}

In these examples the main point to note is that these asynchronous operations do not block the main thread and are able to utilize the concurrency factor. A similar example holds good for StreamWriter as well. Here is the sample code, which reads the content from a list of files and writes it to the output files without blocking the main thread execution.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace AsyncIOSamples
{
    class Program
    {
        static void Main(string[] args)
        {
            ProcessFilesAsync();
            // Main thread is not blocked during the read/write operations in the above method
            Console.WriteLine("Do something else in the main thread meanwhile!!!");
            Console.ReadLine();
        }

        private static async Task ProcessFilesAsync()
        {
            List<string> fileList = new List<string>() { "DataFlatFile1.txt", "DataFlatFile2.txt" };
            foreach (var fileName in fileList)
            {
                string content = await ReadFileAsync(fileName);
                WriteFileAsync(content, "Output" + fileName);
            }
        }

        private static async void WriteFileAsync(string content, string outputFileName)
        {
            using (StreamWriter writer = new StreamWriter(outputFileName))
            {
                await writer.WriteAsync(content);
            }
        }

        private static async Task<string> ReadFileAsync(string fileName)
        {
            using (StreamReader reader = new StreamReader(fileName))
            {
                return await reader.ReadToEndAsync();
            }
        }
    }
}

WebClient

This class is used for data request operations over protocols like HTTP, FTP, etc. This class is also bundled with a bunch of Async methods like DownloadStringTaskAsync, DownloadDataTaskAsync and more. It doesn't end here but extends to classes like XmlReader, TextReader and many more. I will leave it to the readers to explore them. Happy reading!
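As an aside, the pattern shown above, kicking off several file reads without blocking the caller, looks much the same in other languages' async frameworks. Here is a sketch in Python's asyncio; note that Python's standard library has no native async file IO, so the blocking reads are offloaded to a thread pool, and the file names are placeholders:

```python
import asyncio

def _read_blocking(path):
    with open(path) as f:
        return f.read()

async def read_file_async(path):
    # Offload the blocking read to the default thread-pool executor,
    # analogous to awaiting StreamReader.ReadToEndAsync in C#.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, _read_blocking, path)

async def main():
    files = ["DataFlatFile1.txt", "DataFlatFile2.txt"]
    # Start both reads concurrently and wait for both results.
    contents = await asyncio.gather(*(read_file_async(f) for f in files))
    for content in contents:
        print(content)

# asyncio.run(main())
```

The event loop stays free to run other coroutines while the thread pool does the reading, which is the same responsiveness benefit the article describes.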
https://mobile.codeguru.com/csharp/.net/net_framework/supporting-asynchronous-io-operations-with-.net-framework-4.5.htm
[Here’s a shortcut to the results. But it would be best to read the post first.]

Following my previous superoptimizer post, my student Jubi and I were getting up to speed on the prerequisites — SMT solvers, LLVM internals, etc. — when Googler Peter Collingbourne contacted me saying that he had recently gotten a superoptimizer up and running and might I be interested in working with him on it? I read his code and found it to be charmingly clear and simple. Also, one of my basic principles in doing research is to avoid competing, since competition wastes resources and burns students because the resulting race to publication effectively has an arbitrary winner. So I immediately started feeding bug reports to Peter.

The new superoptimizer, Souper, makes a few simplifying assumptions:

- The only optimization candidates that it considers are the true and false values. Therefore, at present Souper only harvests expressions that compute an i1: a one-bit integer, which is how Booleans are represented in LLVM. Thus, the result of a Souper run is a collection of expressions that LLVM could have — but did not — evaluate to either true or false.
- It doesn't yet have models for all instructions or for all undefined behaviors for the instructions it does support.

These assumptions need to be relaxed. One generalization that should be pretty easy is to harvest expressions that end up as integers of arbitrary width. The interesting thing about this is that we cannot take time to check if every harvested expression evaluates to, for example, every possible value that an i32 can take. What we will do instead is ask the SMT solver to synthesize the equivalent constant. The problem is that by default, when we make an equivalence query to an SMT solver, it is an unsat result that signals equivalence, and unsat doesn't come with a model — it indicates failure to find a model.
It turns out there's a cute trick (which I learned from Nuno Lopes) involving a quantifier which flips a query around such that an equivalence results in sat, and therefore a model, from which we can pluck the synthesized constant. Consider this Z3/Python code where we're asking, for a variety of constants c, how to express i*c (where i is an integer variable) in the form i<<x + i<<y + i<<z:

from z3 import *

s = Solver()

def checkit(c):
    s.push()
    i, x, y, z = BitVecs('i x y z', 32)
    q = ForAll(i, i*c == ((i<<x) + (i<<y) + (i<<z)))
    s.add(q)
    s.add(x >= 0, x < 32)
    s.add(y >= 0, y < 32)
    s.add(z >= 0, z < 32)
    if s.check() == sat:
        m = s.model()
        print("i * " + str(c) + " == i<<" + str(m.evaluate(x)) +
              " + i<<" + str(m.evaluate(y)) + " + i<<" + str(m.evaluate(z)))
    else:
        print("i * " + str(c) + " has no model")
    s.pop()

for m in range(100):
    checkit(m)

This is just an example, but it's the kind of thing that might make sense on a small embedded processor where the integer multiply instruction is expensive or doesn't exist. The results include:

i * 28 == i<<4 + i<<3 + i<<2
i * 29 has no model
i * 30 has no model
i * 31 has no model
i * 32 == i<<4 + i<<3 + i<<3
i * 33 == i<<4 + i<<4 + i<<0

The full set of results is here. I particularly enjoyed the solver's solutions for the first three cases. So we know that the synthesis part of a superoptimizer is possible and in fact probably not all that difficult. But that's a digression that we'll return to in a later post; let's get back to the main topic.

Now I'll show you how to read Souper's output. You may find it useful to keep the LLVM instruction set reference handy. Here's an optimization report:

%0:i32 = var
%1:i32 = mul 4294967294:i32, %0
%2:i1 = eq 1:i32, %1
cand %2 0:i1

The first line tells us that %0 has type i32 -- a 32-bit integer -- corresponding to a signed or unsigned int in C/C++, and that it is a variable: an input to the superoptimized code that may hold any value.
Reasoning about any-valued variables is hard, but solvers are good at it, and that is the entire point of the superoptimizer. The second line tells us that %1 is a new i32 computed by multiplying %0 by -2. The third line tells us that %2 is a new i1 -- a Boolean or 1-bit integer -- computed by seeing if %1 is equal to 1. The last line, starting with "cand", is Souper telling us that it believes %2 will always take the value 0. If Souper tells us this when running on optimized code, it has found a missed optimization. In this case LLVM has missed the fact that multiplying an arbitrary value by an even number can never result in an odd number. Is this a useful optimization to implement in LLVM? I don't know, but GCC does it, see the bottom of this page.

Souper finds many missed optimizations that fit this general pattern:

%0:i32 = var
%1:i64 = sext %0
%2:i64 = sdiv 2036854775807:i64, %1
%3:i1 = ne 0:i64, %2
cand %3 1:i1

Here the observation is that if we divide a large constant by an arbitrary 32-bit value, the result cannot be zero. GCC does not find this one.

Some Souper output contains path constraints:

%0:i32 = var
%1:i1 = eq 0:i32, %0
pc %1 1:i1
%2:i32 = addnsw 1:i32, %0
%3:i1 = slt %2, 2:i32
cand %3 1:i1

Here, at line 3, we learn that %1 must take the value 1 in the remaining code due to a path constraint. In the original LLVM code there was a conditional branch exiting if %1 had the value 0. Since %1 has the value 1, we can infer, in the remaining code, that %0 contains 0. Thus, %2 contains 1 and the expression %2 < 2 must evaluate to true.

Finally, this charming example exploits the fact that if the product of two numbers is not zero, then neither of the numbers could have been zero:

%0:i32 = var
%1:i32 = var
%2:i32 = mul %0, %1
%3:i1 = eq 0:i32, %2
pc %3 0:i1
%4:i1 = eq 0:i32, %0
cand %4 0:i1
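The even-multiplier fact behind the first report (a multiple of an even constant can never be odd) is easy to spot-check with ordinary wrapped 32-bit arithmetic in Python. This is a sampling illustration of the candidate, not a proof; the solver is what provides the proof:

```python
import random

MASK = (1 << 32) - 1  # wrap to 32 bits, as LLVM's i32 arithmetic does

# 4294967294 is -2 interpreted as an unsigned 32-bit constant. Any multiple
# of an even number has a clear low bit, so "mul 4294967294, %0" can never
# produce 1, which is exactly Souper's candidate: %2 is always 0.
for _ in range(10_000):
    i = random.getrandbits(32)
    product = (4294967294 * i) & MASK
    assert product % 2 == 0 and product != 1

print("no counterexample found in 10,000 random trials")
```

The parity argument survives the wraparound because the modulus 2^32 is itself even.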
One more thing that you might see in the full set of results is an entry like this: %0 = block. This means (more or less) that %0 is a value that is going to pass through the code to a phi node without being otherwise used; this is useful for increasing Souper's precision.

I think that's about all you need to know in order to read the full set of results from a couple days of running Csmith, Souper, and C-Reduce in a loop. First, we wait for Souper to find a missed optimization, and second, we find a minimal C program that exhibits the missed optimization. The results have been ranked in a way that attempts to push more similar results (that are more likely to be duplicates) lower in the list.

So far, the most common pattern that comes out of Souper's findings is that LLVM needs an integer range analysis. Such an analysis would also help eliminate integer overflow checks, one of my hobby horses. LLVM also doesn't always propagate information that would be best represented at the bit level, such as the even/odd distinction required for the first optimization that I discussed. Finally, LLVM does not always learn from branches. My not-necessarily-educated guess is that all of this is a symptom of LLVM's heavy reliance on the instruction combiner, which is not so much an optimization pass as a loose federation of a few hundred peephole passes.

Some of the missing LLVM optimizations won't be hard to implement for people who have passable C++ and who have spent some time becoming familiar with the instruction combiner. But here are a few things we need to keep in mind:

- One might ask: Does it make sense to harvest missed optimizations from randomly generated code? My initial idea was that since Csmith's programs are free from undefined behaviors, the resulting optimizations would be less likely to be evil exploitation of undefined behaviors. But also I did it because it was easy and I was curious what the results would look like. My judgement is that the results are interesting enough to deserve a blog post.
  Perhaps an easier way to avoid exploiting undefined behavior would be to add a command line option telling Souper to avoid exploiting undefined behaviors.
- For each missed optimization we should do a cost/benefit analysis. The cost of implementing a new optimization is making LLVM a bit bigger and a bit more likely to contain a bug. The benefit is potential speedup of code that contains the idioms.
- Although the reduced C programs can be useful, you should look at Souper output first and the C code second. For one thing, the Boolean that Souper finds is sometimes a bit obscured in the C code. For another, the C-Reduce output is somewhat under-parenthesized -- it will test your knowledge of C's operator precedence rules. Finally, C-Reduce has missed some opportunities to fold constants, so for example we see ~1 instead of -2 in the 2nd example from the top.
- Each missed optimization found by Souper should be seen as a member of a class of missed optimizations. So the goal is obviously not to teach LLVM to recognize the specific cases found by Souper, but rather to teach it to be smart about some entire class of optimizations. My belief is that this generalization step can be somewhat automated, but that is a research problem.
- Although all of the optimizations that I've looked at are correct, there's always the possibility that some of them are wrong, for example due to a bug in Souper or STP.

This article presents some very early results. I hope that it is the beginning of a virtuous circle where Souper and LLVM can both be strengthened over time. It will be particularly interesting to see what kinds of optimizations are missing in LLVM code emitted by rustc, GHC, or llgo.
UPDATE: Here are some bugs that people have filed or fixed in response to these results:

- Optimize signed icmp of -(zext V)
- Optimize integral reciprocal (udiv 1, x and sdiv 1, x) to not use division
- ComputeMaskedBits & friends should know that multiplying by a power of two leaves low bits clear

It's very cool that people are acting on this! Please let me know if you know of more results than are listed here.

Interesting results! I filed: ComputeMaskedBits & friends should know that multiplying by a power of two leaves low bits clear. You're right that the integer range analysis in LLVM is weaker than it should be. The historical reason for this is that though many attempts were made at tackling this problem, they all tried to handle a very broad class of the issue, and crumbled under their own weight (and compile time). A simple and elegant solution that handles common cases is all we really need IMO.

How long does this take to run? If the runtime isn't unbearably long, it would be interesting to see superoptimization opportunities on, say, SPEC or maybe even real-world applications.

Joshua, right now it is very fast because the candidate search is trivial. But I plan to slow it down a whole lot! Of course we can leave the fast search available via command line options. I'll start posting more Souper results soon. I don't think there's a huge rush since there's plenty of stuff to digest from just this first run. Something that I'd really like to do is correlate missed optimizations with profile data in order to point out the ones that noticeably affect runtime.

Thanks Chris! Yeah, I saw a lot of stuff in there that I thought InstCombine already tries to handle. So hopefully this tool will be useful in fixing up some blind spots in the peephole optimizers.

I'm still somewhat surprised at how much compiler developers avoid slow optimizations. Debug builds should be fast, sure.
But for some applications, a few percent faster binaries at the expense of 10x or more compile time would be compute well spent.

How hard would it be to allow dynamically injecting peephole optimizations into LLVM? If that wouldn't be too hard, then Souper could be used to generate a custom set of optimizations for a specific source at a much lower cost than generalizing them and adding them to the base compiler. (IIRC that idea's been more or less proposed before, but it sounds like the theory side is rapidly turning into practice.)

Hi bcs, I agree that this is worth exploring. It doesn't sound very difficult at all, really, although I'd be slow to trust the resulting binaries in production. Also I think the speed issues can be largely solved by caching code fragments that have been shown to optimize (or not).

I cannot not comment that ~1 is in every way as much of a constant as -2, since in C's grammar, only positive numbers are constants. In other words, if ~1 is not a constant because it applies a unary operator to an expression, then -2 is not a constant either.

"One might ask: Does it make sense to harvest missed optimizations from randomly generated code?" Only if the resulting optimization pass can be synthesized (e.g., a new subpart of the instruction combiner). If human attention is required, you want to be looking at real programs to avoid wasting time optimizing unused corners of the language. bcs's idea is a good way to prioritize cases from real programs, too — implement the ones with the most benefit.

Jeffrey, you're not totally correct. These results are useful in pointing out things that people thought were being optimized, but aren't. Also they're useful in pointing out things that are just embarrassing not to do. But yes, of course, we want to use real application code to drive this work most of the time.
https://blog.regehr.org/archives/1146
The Javascript world has moved insanely fast the past few years. There is so much innovation, but it has come at a cost: developer convenience. It's just harder to get up and running now, but once you are up and running, your life is amazing.

AngularJS was built at a time when web technologies (Javascript, HTML, etc.) were just starting to gain critical adoption and evolve. The core ideas they implemented were ahead of their time then, and now many of them have become official standards. So they had to make a choice -- do they stay with their existing architectures and attempt to port them over to the new standards, or just start from scratch building around the new standards while allowing a reasonable amount of backwards (and forwards) compatibility?

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system. — John Gall

They ended up choosing the latter — and it was a good move. Some may feel differently, but the reality is that AngularJS ended up becoming very complex, especially considering the Angular team didn't have the new JS standards to build on top of. So what that means is that to learn Angular, we also need to learn about the new standards and technologies that it's built on top of. There are primarily four things that we'll need to be familiar with: ES6, TypeScript, RxJS/Observables, and build tools. We'll be covering RxJS and build tools within this course, but if you're not familiar with ES6 and TypeScript you'll need to fill those in first. Additionally, if you don't know AngularJS, there's no need to worry. The materials selected won't assume that you are an AngularJS developer, and no prior experience is required to get up and running with Angular.
We recommend checking out Nicolas Bevacqua's blog posts on classes, let, const, import/export, and arrow functions. Another great overview of ES6 can be found at Vegibit. A deep dive into TypeScript can be found in the Official Documentation and in Basarat Syed's Deep Dive.

Framework changes

Before moving forward, let's make note of a couple of key changes in Angular.

Curly braces now denote a one-way binding

If you recall, this is the equivalent of using ng-bind in AngularJS. In Angular you're required to use parentheses inside of brackets (known as "banana in a box") for two-way data binding. This change is largely due to the new unidirectional data flow that Angular has embraced. The example below demonstrates the change between AngularJS and Angular (note that two-way binding applies to form controls such as an input, not a plain div):

<!-- One-way binding (interpolation, same in both) -->
<div>{{ message }}</div>

<!-- Two-way binding -->
<!-- AngularJS -->
<input ng-model="message">

<!-- Angular -->
<input [(ngModel)]="message">

Woah, did you see [(ngModel)]? Is it a bird, a plane? Nope - it's a banana in a box! 🍌

Many basic directives, filters, and services do not exist until they have been imported!

In AngularJS, this was true for things like services (e.g. $http), but now we have to import basic directives like NgModel first (amongst other things). This may seem like a frustrating change, but it ultimately allows you to explicitly control the overhead of your Angular applications, which is a good thing. For example, before using the ngModel directive in a two-way data binding, you must import the FormsModule and add it to the NgModule's imports list. Learn more about the FormsModule and ngModel in the Forms guide.

import { FormsModule } from '@angular/forms'; // <--- JavaScript import from Angular

Goodbye ng-app, hello Bootstrap

We no longer use the ng-app attribute to connect an Angular app. Instead, we have to rely on a new technique known as bootstrapping. Prior to Angular, we could use the ng-app attribute directive to connect our Angular modules to a view.
This process, known as bootstrapping, has changed with Angular.

```typescript
import { bootstrap } from '@angular/platform-browser-dynamic';
import { AppComponent } from './app.component';

// Connect the component to our view
bootstrap(AppComponent);
```

With the new bootstrapping system, we skip the process of connecting modules to views and focus on connecting a Component to a view. This makes so much more sense, and we encourage you to learn more about it through the official Angular 2 QuickStart Tutorial.

Controllers no longer exist

Instead we use Components to manage our views.

And more!

Thankfully, the Angular team has constructed an amazing tutorial detailing all of the major changes seen in Angular from AngularJS. We definitely recommend checking out the link below.

AngularJS to Angular Quick Reference

And with that knowledge, we can now proceed to create our first "Hello, World" application with Angular!
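To tie the changes above together, here is a sketch of what the pieces look like in one place: a component, and an NgModule that imports FormsModule so that [(ngModel)] works and names the component to bootstrap. The file names, selector, and template are illustrative assumptions rather than code from this tutorial, and the fragment only runs inside a full Angular project:

```typescript
// app.component.ts (illustrative file name)
import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  // [(ngModel)] works only because FormsModule is imported in the module below
  template: `<input [(ngModel)]="message"><p>{{ message }}</p>`
})
export class AppComponent {
  message = 'Hello, World';
}

// app.module.ts (illustrative file name)
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';

@NgModule({
  imports: [BrowserModule, FormsModule], // FormsModule enables ngModel
  declarations: [AppComponent],
  bootstrap: [AppComponent]              // fills the role ng-app played in AngularJS
})
export class AppModule {}
```

Bootstrapping via an NgModule's bootstrap array is the later, NgModule-based style; the bootstrap(AppComponent) call shown above comes from an earlier pre-release API, but the idea is the same: a component, not an ng-app module, is connected to the view.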
https://thinkster.io/tutorials/differences-between-angular-1-and-2
Introduction

Previously we created an RSS feed reading application in Adobe AIR using Flash Builder 4. Our previous application had a simple UI which allowed us to enter a URL for the feed, and had two panes, one below the other: one to list the items in the feed, and the other to display the summary of the selected item. Note that this tutorial assumes that you have completed the previous one in the series.

Our previous application could only handle a single feed at a time, which is a rather huge limitation for a feed reading application. In this article we will develop our application further by adding support for reading multiple feeds. We will make a feed browser of sorts which will display a "tab bar" for switching between active feeds. Each "tab" will display its own list of feed items, and we will be able to select any item and read its summary.

Understanding the changes to our application

So how do we go about this exactly? Let us first take into account the visual additions, in the form of controls / components. We will need:

- A tab bar. We will simply use a button bar for this. A ButtonBar component makes it easy to add multiple buttons in a row. Each button will have a label corresponding to the title of the feed.
- A button to add a new feed. Since we have to handle multiple feeds now, we need a button for adding a new feed. We will re-purpose the "Load" button of our previous application for this.
- A close button. We should have some way to remove the currently active tab if we want. This could be a simple button labeled "X", or if you wish to make it look better you can create an icon for it.

That's about it; this will cover our needs from this application. Now about the changes inside the application: since we are now going to handle multiple feeds at the same time, we will need an array holding data for all the feeds, so each feed's data is ready when we click on its tab.
To wire everything up so that it works with minimal coding, we will do the following:

- We will bind the array of feeds to the ButtonBar. This will automatically ensure that the feeds and tabs are in sync.
- We will bind the feed data for the active tab to the list showing the feed items. This way the data will automatically change as we change the current tab.
- The HTML component is already bound to this list, so that bit will work as always.

What we need to do is write the code that will add tabs to this array of tabs and load the corresponding data. We also need code for closing a tab and removing that feed from the array. The rest will work by itself!

The UI

Our UI is not changing drastically. Our load button will now transform into an "Add Feed" button, and we will add a tab strip with a close button on the next line. Our previous code was simply a horizontal group with a text input for the URL and a load button. Right after this we will now add the following:

```xml
<s:HGroup width="100%">
    <s:ButtonBar id="feedTabs" width="100%" requireSelection="true"/>
    <s:Button label="X"/>
</s:HGroup>
```

This is another horizontal group with two components. The first is a ButtonBar which will display our tabs and let us switch between them. We have set its width to 100% so it takes up as much space as is available and pushes the close button to the edge. The second is the button to close the currently active tab, labeled simply "X". The requireSelection="true" bit in the code ensures that at least one tab is selected at any given time (if there is at least one tab, of course). This is all we need to add; however, the old "Load" button may be renamed to "Add Feed" if required.

The Code

First of all we will define a new data type for the data associated with our feeds. For each feed tab, we will need to store a title and the feed data at least. Since this is a simple example, we will not manage refreshing the feeds every few minutes; otherwise it would have been wise to store the feed URL as well.
For this we will define a new ActionScript 3 class. This is simple to do with Flash Builder 4; simply right-click on your project, and under the "New" menu select "ActionScript Class". This will pop up a new dialog box wherein you can enter the details of your new class. Here is a little about some of the relevant parameters in the dialog:

- The first option is labelled "Package". When you create new libraries of code, it is a good idea to organize them so that they can be reused later on without problems. If you are creating a new class for retrieving stock information from Yahoo called "YahooStocks" for a project called "Stock Manager", you might want to have a package such as com.stockmanager. Although it is recommended that you always place your classes inside a package, we will do without one here since this is a simple example.
- The second option is a name for your class. We are giving our class the name "FeedTab". The convention is to have a class name where each word begins with a capital, such as "MultiTouchManager" or "ArrayCollection".
- Superclass is used if you are extending an already existing class. Those familiar with object-oriented programming will understand what this means; those unfamiliar should look up inheritance in object-oriented programming. You could use this feature, for example, to create your own custom version of the "Video" class called "SubtitledVideo" to provide support for subtitles.
- Interfaces are a way of making different classes compatible with each other. For example, while the ArrayCollection and XMLListCollection classes are very different —one stores data as an unstructured array, and the other as structured XML— both can be used as a source for a list. This is because both implement the IList interface. For them to be usable in a list, they both need to provide a few features, such as moving forward and backward in the list, getting the length of the list, etc.
If you create your own custom class which implements IList, it too will be usable as a source for the list.

This is the entirety of our "FeedTab" class:

```actionscript
package
{
    import mx.collections.XMLListCollection;

    public class FeedTab
    {
        public var title:String;
        public var data:XMLListCollection;

        public function FeedTab(title:String, data:XMLListCollection)
        {
            this.title = title;
            this.data = data;
        }
    }
}
```

All it is doing is storing the data needed by our tab, which is: a title for the feed, and the feed data itself. The feed data itself contains a title; however, this makes things a little bit simpler. The FeedTab function, which serves as a constructor for this class, takes the two pieces of data this class needs and stores them in the public variables which make up this class.

Finally we will modify our main code. First of all, remove the variable feedData; we will no longer need it as we handle multiple feeds. Secondly, we will add a new variable, an array to store all our feeds' data:

```actionscript
[Bindable]
private var feeds:ArrayCollection = new ArrayCollection();
```

Now in our onFeedLoaded function, we will need to add each loaded feed to our list of feeds instead:

```actionscript
private function onFeedLoaded(event:Event):void
{
    var rssFeed:XML = XML((event.target as URLLoader).data);
    var feedData:XMLListCollection = new XMLListCollection(rssFeed.channel.item);
    var feedTab:FeedTab = new FeedTab(rssFeed.channel.title, feedData);
    feeds.addItem(feedTab);
    feedTabs.selectedItem = feedTab;
}
```

The first line remains unchanged, and in the second line we are temporarily storing the XMLListCollection made up of all the items in the RSS feed in a variable called feedData. Then we go on to create a new FeedTab item (which is the class we just created), by giving it the title of the feed (which can be found under channel.title of the RSS feed), and the feedData. We then push this FeedTab item into our list of feeds, and finally make the newly added FeedTab the new selected tab.
We should now go ahead and add "feeds" as the dataProvider for the ButtonBar, and set the ButtonBar's labelField attribute to "title", since that is the name of the field where we are storing the title of the feed which is to be used as a label for the button.

Finally, we need to handle removing tabs. This is easily done by adding a click handler for the close button with the following simple line of code:

```actionscript
feeds.removeItemAt(feedTabs.selectedIndex);
```

Here feedTabs is the id of the ButtonBar which lists the feed tabs. We are removing the item at the selectedIndex of the ButtonBar-based tab bar from the feeds array. This only works as expected because the order of items in the feeds array and the tab bar is the same.

Believe it or not, at this point we have a functional multi-tab feed browser! Go ahead and test your code. The full code for our application:

```actionscript
<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/mx">
    <s:layout>
        <s:VerticalLayout/>
    </s:layout>
    <fx:Script>
        <![CDATA[
            import mx.collections.ArrayCollection;
            import mx.collections.IList;
            import mx.collections.XMLListCollection;
            import spark.events.IndexChangeEvent;

            [Bindable]
            private var feeds:ArrayCollection = new ArrayCollection();

            private function onFeedLoaded(event:Event):void
            {
                var rssFeed:XML = XML((event.target as URLLoader).data);
                var feedData:XMLListCollection = new XMLListCollection(rssFeed.channel.item);
                var feed:FeedTab = new FeedTab(rssFeed.channel.title, feedData);
                feeds.addItem(feed);
                feedTabs.selectedItem = feed;
            }

            protected function feedItems_changeHandler(event:IndexChangeEvent):void
            {
                article.htmlText = feedItems.selectedItem.description;
            }

            protected function button1_clickHandler(event:MouseEvent):void
            {
                feeds.removeItemAt(feedTabs.selectedIndex);
            }
        ]]>
    </fx:Script>
    <s:HGroup width="100%">
        <s:TextInput width="100%"/>
        <s:Button label="Add Feed"/>
    </s:HGroup>
    <s:HGroup width="100%">
        <s:ButtonBar id="feedTabs" width="100%" dataProvider="{feeds}"
                     labelField="title" requireSelection="true"/>
        <s:Button label="X" click="button1_clickHandler(event)"/>
    </s:HGroup>
    <s:List id="feedItems" width="100%"
            dataProvider="{feedTabs.selectedItem.data}"
            change="feedItems_changeHandler(event)"/>
    <mx:HTML id="article" width="100%" height="100%"/>
</s:WindowedApplication>
```

You can download a free trial of Adobe Flash Builder 4 from the Adobe website.
http://www.digit.in/general/extending-our-air-feed-reader-application-with-flash-builder-4-5553.html
I have spent more than six hours trying to get this program to work, but I cannot get it to work. This is a skeleton and a description of it.

Lesson 10-5 Hierarchical Records

This lesson works with program Cars defined in the Prelab assignment.

Exercise 1: Augment Car in program Cars with the following two members:

sold: a Boolean variable
soldDate: if (sold), then soldDate contains the date of sale; otherwise, soldDate is undefined.

Function GetCar should initialize sold to false. Write a function CarSold that takes variables of type Date and Car and records that the car has been sold and the date. Before invoking PrintCar, write the car owner's name on the screen and ask if the car has been resold. If it has, call function CarSold and then write the car to file dataSold rather than file dataOut. Run your program using file cars.dat. Let Betty's and Alice's cars be resold.

Exercise 2: Rewrite Car so that soldDate and the new owner's name are encapsulated into a struct member soldTo. If the car has been resold, prompt for and read the new owner's name. Run your program again using cars.dat. Let Betty's car be sold to John and Alice's car be sold to Cliff.

```cpp
// Program Cars reads a record from a file and writes
// its contents back to another file with the price member
// increased by 10%.
#include <fstream>
#include <iostream>
#include <string>

using namespace std;

struct Date
{
    int month;
    int day;
    int year;
};

struct Car
{
    float price;
    Date purchased;
    string customer;
};

Car GetCar(ifstream& dataIn);
// Pre:  File dataIn has been opened.
// Post: The fields of car are read from file dataIn.

void WriteCar(ofstream& dataOut, Car car);
// Pre:  File dataOut has been opened.
// Post: The fields of car are written on file dataOut,
//       appropriately labeled.
```
```cpp
int main()
{
    Car car;
    ifstream dataIn;
    ofstream dataOut;

    dataIn.open("cars.dat");
    dataOut.open("cars.out");
    cout << fixed << showpoint;

    car = GetCar(dataIn);
    while (dataIn)
    {
        car.price = car.price * 1.10;
        WriteCar(dataOut, car);
        car = GetCar(dataIn);   // was GetCar(dataIn, car), which doesn't match the prototype
    }
    return 0;
}

//*****************************************************

Car GetCar(ifstream& dataIn)
{
    Car car;
    dataIn >> car.customer;
    dataIn >> car.price >> car.purchased.day
           >> car.purchased.month >> car.purchased.year;
    dataIn.ignore(2, '\n');
    return car;
}

//*****************************************************

void WriteCar(ofstream& dataOut, Car car)
{
    dataOut << "Customer: " << car.customer << endl
            << "Price: " << car.price << endl
            << "Purchased:" << car.purchased.day << "/"
            << car.purchased.month << "/"
            << car.purchased.year << endl;
}
```
http://www.chegg.com/homework-help/questions-and-answers/spent-six-hours-trying-get-program-work-cannot-get-work-skeleton-description--lesson-10-5--q3253400
Dearest, we managed to get a fresh RHEL 6.4 box into a state where any attempt to localinstall fails with:

ValueError: your.rpm has no attribute basepath

For instance,

yum localinstall ~/rpm/* --disablerepo=beaker-*

results in the error above. The events preceding this state consisted purely of registering to RHN channels, doing yum groupinstall, remove, update, install. No manual configuration of yum whatsoever. I will share the exact log internally. THX 4 any info

While I have no idea where the yum-checksum plugin comes from (it's neither from "yum" nor "yum-utils"), it very likely just treats YumLocalPackage instances as YumAvailablePackage instances. The following patch should fix this issue on the Yum side.

```diff
diff --git a/yum/packages.py b/yum/packages.py
index deb44e4..042a50b 100644
--- a/yum/packages.py
+++ b/yum/packages.py
@@ -1333,6 +1333,9 @@ def _rpm_long_size_hack(hdr, size):
 # which are actual rpms.
 class YumHeaderPackage(YumAvailablePackage):
     """Package object built from an rpm header"""
+
+    remote_url = '<local>'
+
     def __init__(self, repo, hdr):
         """hand in an rpm header, we'll assume it's installed
            and query from there"""
```

Zdeněk, this is a plug-in to help us verify our RPMs are what we actually expect them to be. Here is the plug-in source:

I'm not an expert; this is the first thing I've written in Python and the first thing I've written for yum, so please let me know if something is wrong. It just logs the sha1 of every installed package through yum. Thank you!

Np, the plugin code seems correct, it just does not handle local packages. pkg.remote_url is probably the only problem there. My patch from comment 3 should work around this.
Alternatively, you can fix the plugin with something like:

```diff
--- yum_checksum.py.orig	2013-10-08 15:43:05.592878389 +0200
+++ yum_checksum.py	2013-10-08 15:45:57.386388083 +0200
@@ -52,6 +52,10 @@
         for pkg in conduit.getDownloadPackages():
             # see remote_url() and checksum()
             fname = pkg.localPkg()
-            log.write('%s %s %s %s %s\n' % (hashfile(fname), fname, pkg.name, pkg.checksum, pkg.remote_url))
+            if getattr(pkg, 'pkgtype', None) == 'local':
+                url = '<local package at %s>' % fname
+            else:
+                url = pkg.remote_url
+            log.write('%s %s %s %s %s\n' % (hashfile(fname), fname, pkg.name, pkg.checksum, url))
     finally:
         log.close()
```

Zdeněk, thank you for the advice. I see value in your patch to packages.py: the fact that a package is local does not mean it should not have a URL defined, and it's easier to code plug-ins when attributes are always there. Do you think your patch will go upstream, or should I use the method you suggest above to fix yum_checksum only?

btw I think it will be most straightforward to have packages.py do:

```python
remote_url = 'file://' + urllib.pathname2url(os.path.abspath(pkg.localPkg()))
```

because otherwise url would not really be a URL, and next time I decide to write some weird plugin incarnation of non-sense it may break again.. WDYT?

If you need to track the package source, then a file:// URL would certainly help. I was just trying to return a dummy string instead. Merged upstream.;a=commitdiff;h=da3b4624.

Zdeněk, I tried your suggested patch but it fails badly for me:

```
> if getattr(pkg, 'pkgtype', None) == 'local':
> File "/usr/lib/python2.6/site-packages/yum/sqlitesack.py", line 270, in __getattr__
>   raise KeyError, str(e)
> KeyError: 'no such column: pkgtype'
```

This is RHEL6 but I need it working on RHEL5 as well. Since your other patch will not be backported to RHEL, at least for the time being, I thought it's better to have a fix locally.

Forgot to say I'm seeing the error when trying to install a package from a remote repo.
Sorry, was too quick writing this. Try to replace the above with:

```diff
-            if getattr(pkg, 'pkgtype', None) == 'local':
+            if hasattr(pkg, 'pkgtype') and po.pkgtype == 'local':
```

Usually these are equivalent, but sqlitesack overrides __getattr__ ..

Thank you!

```diff
+            if hasattr(pkg, 'pkgtype') and pkg.pkgtype == 'local':
```

..to fix the obvious copy-paste error.

Guys? Ain't we gonna close this one or something? :-)

Not yet. It seems simple enough; I'm proposing this to be fixed in RHEL 6.7 ...

Patch to backport - comment.
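The KeyError in this thread is worth unpacking: getattr's default argument only suppresses AttributeError, so a class whose __getattr__ raises KeyError (as yum's sqlitesack does for unknown database columns) sails right past the default. Here is a minimal sketch of that failure mode; SqlitePackage is a made-up stand-in for the real sqlite-backed package class, not yum code.

```python
# Sketch of the failure mode above. SqlitePackage is a made-up stand-in
# for yum's sqlite-backed package objects, whose __getattr__ raises
# KeyError('no such column: ...') for unknown attribute names.

class SqlitePackage(object):
    def __getattr__(self, name):
        # The real code maps attribute access to a database column lookup.
        raise KeyError('no such column: %s' % name)


pkg = SqlitePackage()

# getattr's third argument only suppresses AttributeError, so the
# KeyError from __getattr__ propagates despite the default.
try:
    getattr(pkg, 'pkgtype', None)
    default_helped = True
except KeyError:
    default_helped = False

print(default_helped)  # False: the default did not catch the KeyError
```

The hasattr workaround in the comments above relied on Python 2 behavior: there, hasattr swallowed any exception raised during the lookup, not just AttributeError, which is why it masked the KeyError. Python 3 tightened this so hasattr only catches AttributeError, and the KeyError would propagate there too.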
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1016148
NSI Botches Domain Transfer, Says 'Not Our Problem'

Rolan writes "Wired is carrying a story about a botched domain transfer that cost a customer "a large wad of money". In the end they say it's not their problem, even though they botched it, and lawyers say he probably can't do anything about it." It's an interesting article actually, and it doesn't sound like an isolated incident.

Re:A solution for network solutions (Score:1)

Do you NEED a 56k line or do you just want one? Shit, I "need" 106 octane gas for my street racer. Does that mean I should blow up a petroleum farm if the gas company won't sell it to me for $1/gallon?

> The same thing goes with other monopolies such
> as telephone,

Actually, my phone service was better when ATT _was_ a monopoly, and I didn't have to put up with unsolicited phone calls "inviting" me to switch my long distance service.

> electricity, gas, and others. These too need
> to be turned over to non-profit groups.

Yes you're correct. Government should control everything(?)

> You'll never get good service, fair service,
> and decent customer service when profit is
> involved.

I've experienced "free" health care when I was in the Air Force, and "for profit" health care since I got out. I'll pay every time. "Free" health care sucks. Note that doesn't mean I think monopolies are good, I just don't think that profit is evil...

> Just recently I called the *ASSHOLES* at
> Sprint/United Telephone about getting a
> 56k line and I was told it would be $236/mo
> +$600 install.

Do you NEED it or WANT it? Have you tried calling ATT or another competitor to Sprint?

Re:ummm... (Score:1)

And yes, I realize that legally Network Solutions is the one to blame (as I mentioned). However, this is a campaign against greatdomains.com/register.com as potential cybersquatters. While it is not directly related to the issue at hand (about races.com), it stems from it.

5 day waiting period?
(Score:1)

What have our priorities come to!!!!

Domain name overhaul. (Score:2)

I had to change one of the IPs of my DNS server. It took 3 weeks for the change to finally take hold. During that time, sending approximately 3 change form mails a day, it changed between the first placeholder IP, the original IP, and another IP -- never settling on the proper one. They are totally incompetent.

McLanahan wanted to build a Web business around the races.com domain name, and shelled out thousands of dollars to acquire it. So first the poor fellow gave money to a domain squatter (really, don't do that). Then he turned around, transferred it, and noticed it was now in the possession of another squatter. How many times will this happen? How many squatters are out there? How many are in cahoots with NSI? (Speculation) Since NSI is losing its monopoly, it seems to have been more tolerant of people buying names for no reason, and keeping them with nothing on them. Can't the courts step in?

---

Incorrect (Score:1)

Re:IF I where that guy... (Score:1)

The original owner is the innocent party, it seems.
- Jeff A. Campbell
- VelociNews ( [velocinews.com])

And this might not be all bad... (Score:1)

Re:They ain't uber. (Score:1)

Who has regulatory oversight? FCC? Commerce? (Score:1)

If things are as bad as comments here on slashdot indicate, perhaps a petition for a proper review sent to the top of the regulatory pyramid and/or congressperson(s) on relevant committees would result in a better implementation. OTTOMH, I offer the following ideas to protect registration clients without threatening the cashflow of legitimate businesses: I suppose if there was a worry about a time stamping server pulling something funny (like being a cybersquat front), you could send an encrypted version (openable by yourself and perhaps some impartial legal entity) for timestamping first to several time stampers, then the clear text version. Assuming that would create legal evidence. IANAL.

Re:Begging to be overthrown (Score:1)

Quite hard. In fact, you have to devise a system that is reasonably fair, reasonably open and reasonably well-organized to have a chance of getting a significant following. So far, the best attempt has been ICANN; other attempts were CORE, EDNS and others. And if ICANN is the best we've been able to do, what does that tell you....? NOT trivial.

What kind of service is that? (Score:1)

Re:The Lesson is Clear (Score:2)

What you say is certainly true. I should have said 'tied up for our field of use' instead. For this particular domain name (not mentioned here to protect the guilty), it's highly unlikely that anyone would want it except for our field of use. And even if someone wanted this domain name to sell faucets or airline tickets, I wouldn't care. If they want the domain name, and are going into our field of use, then they'll be in for a fight. We can afford lawyers too. But I'm not worried either way. The guy who has it now isn't going to use it himself, so he's got to try to find some other sucker to buy it. He'll probably just give it up after a while. And then we'll pick it up cheap. Patience is a virtue.

Anyone else having problems registering DNS hosts? (Score:1)

This is complete bullshit. The new DNS server name resolves and it points to a valid IP address for a machine offering DNS services. Where is the requirement for the domain to have been registered with NSI? It is my choice who I give my money to to register the domain, and it shouldn't prevent me from offering a vital network service.

Also, if you have a domain originally registered under InterNIC you are not allowed to use the NSI 24/7 support line. You have to call a special InterNIC tech support line that is only open during normal business hours. Why?!!?! I still have to pay NSI $35 a year... how does that make me different and worth less support? I have found, however, that the level of incompetence is consistent in both support centers.

Has anyone else had problems registering a DNS host with NSI where it was denied because you didn't register the host's top-level domain with them? DNS is what makes the Internet work, and right now NSI is deliberately breaking it.

Matt

Re:OKay. NSI bungled it.. but keep reading... (Score:1)

NOT SHOWN: GreatDomains=Register.com (Score:1)

After some digging (CmdrTaco...I beg for whois register.com@register.com ....

Registrant:
Register.com, Inc.
575 8th Avenue
11th Floor
New York, NY 10018
US

And greatdomains.com:

whois greatdomains.com@whois.networksolutions.com ...

GreatDomains.com Inc (GREATDOMAINS6-DOM)
10 Universal City Plaza, Suite 1115
Universal City, CA 91608
US

Re:A Fragile Plan? (Score:2)

1 Microsoft Way, to use your analogy, is much easier to remember and to type in than 3 Microsoft Way. Besides, everything else aside, he PAID for 1 Microsoft Way and now he's got NOTHING. Start-up companies cannot afford to purchase essential things and then not receive them or a refund.

Doug

Cybersquatting and cyberhijacking. (Score:1)

Who's Really in the Wrong (Score:2)

NSI did not lock the domain name as they should have... not once but TWICE. Big mistake on their part, but they are not to blame entirely. Register.com sold the name (apparently) to GreatDomains.com, who (again apparently) sold the name to a gentleman in the UK. No mistake here; to Register.com the domain was available. (Their mistake is in not helping to retrieve the name after finding out it wasn't REALLY available.) Now we have this gentleman in the UK who has the name and is willing to "give it up" for $500,000. Here is where to place your blame (IMHO). If, and I don't recall seeing it mentioned, this man was made aware of the mistake, he should have offered to rescind his deal with GreatDomains.com, who should rescind their deal with Register.com, who should return the domain name to NSI, who should LOCK the damn thing and complete the transfer.

But NO, this guy, who probably paid much more than $70 but much less than $500,000 for the domain, is looking to make a profit at someone else's expense. He is WRONG, WRONG, WRONG! and he is who the "polite" emails should be sent to (again, JMHO).

Re:arrogant, greedy, and inept (Score:1)

I'd _love_ to see them get fried to a crisp. I'm tired of engineering jobs being treated as marketing.

Cheers,

Re:Ways to proceed. (Score:1)

--

That's no way to run a business (Score:1)

A real winning business attitude! Hope other potential customers take the hint, and register with someone else.

Re:Begging to be overthrown (Score:1)

OpenDNS?

-Lx?

The first thing that comes to my mind... (Score:1)

hmm... (Score:1)

While I think it's unfortunate that this happened, and unfortunate that there's no way NSI can be held responsible, I also don't see what would be so bad about simply taking a different domain name - sure, "races.com" would have been nice, but there are plenty of other names out there that would work just as well.

Re:They ain't uber. (Score:1)

cybersquatter, n., one who registers domain names for the sole purpose of reselling, leasing, or renting them for a profit, usually pricing them out of the reach of a potential customer. As a result, these domains never sell but instead sit in a domain registry for two years while the squatter goes out of business.

-Lx?

Indeed (Score:1)

I'm quite sure situations like this will be common, and people will begin taking the law into their own hands.

Perhaps your Bill Gates?? (Score:1)

Poll Idea (Score:1)

Re:Did you even read the article? (Score:1)

Buyer sues Seller for non-receipt of goods paid for (I believe he does have the right to do this, as the Seller "owned" the domain (which was in NSI's hands), and sold it to the Buyer, who, due to NSI's incompetence, never received it).

Seller in turn sues NSI for court costs and damages from the Buyer suit (as the Seller gave notice to NSI to do these things, and NSI screwed it up, thus harming the Seller's reputation, as well as opening him to the lawsuit from the Buyer).

NSI can (and should) fight to get the domain back -- they obviously have proof that the domain was "in transit" and not "available" -- and as such, they should (conceivably) be able to sue the other domain registrar for the domain. Of course, the registrant could sue the other registrar, who could in turn sue NSI... In any case, it would seem to me (and I don't claim to be educated in the way these laws actually work, this is just common sense, which I realize our legal system has very little of) that NSI should take the fall for this one.

Re:Domain name overhaul. (Score:1)

Re:ummm... (Score:1)

I don't know whether to laugh or cry at that.

Alternative domain registrants (Score:1)

However - a breath of fresh air for once. I suggest that the US people lobby to have NSI thrown out and control passed to Nominet. They have done an excellent job. I have no association with them other than using them for domain registration myself.

Re:How Do I Move My Domains? (Score:2)

One poster mentioned that when buying a domain, you should make sure the transfer payment is void unless you actually receive ownership. Given the current state of affairs, I agree entirely. Since the registration services do not assume any sort of responsibility, I would want some assurance that I am not going to be left with an empty wallet and no name. Another thing that I would do if I lost a name in this fashion would be to go to ICANN, and their regulators as well.
ICANN is supposed to have a dispute resolution process too. This would sure be a good test of this process. NSI incompetence: moderate this up! (Score:2) how to foil greatdomains.com (Score:2) wesuck.com If everyone from you can do it yourself (Score:2) Re:Cybersquatting and cyberhijacking. (Score:2) I've noticed some oddities in the last few days, and just found one with inconsistent WHOIS data between NSI and InterNIC. Darned if I know what NSI would do if they registered a domain to two people... My solution... (Score:2) 1. Chalk up the US$4K to "learning experience"... I've spent more than that failing out of classes. 2. Register races.to it's a little catchier, ain't it? You can register it through register.com. It seems "racesnow.com" is taken now. Registered on December 11... looks like he's out of luck again. It's easier to look for a solution when you ain't crying or screaming "foul." Although I feel bad for this guy, playing victim and doing nothing about it isn't going to get anyone ahead. Time to move on now... learn and grow. -m in that case (Score:2) Re:BECAUSE "on-your-marks.com" SUCKS! (Score:2) I've often argued for a keyword based naming system for searching, where 'movies' 'reviews' pulls up a list of movie review sites, instead of having to use movies.com or moviereviews.com, which are a pain, don't necessarily provide the best service, and are limited, where few sites can exist with similar names. Sites could be known by something like an IPv6 IP number, something that wouldn't like current IPs do. Then the IP to Names relationship would be like the yellow pages, where you use keywords to narrow down the search, and once you find a company, you 'bookmark' it by writing down (programming) the phone number. This way, any number of sites can share the same category. If they pick obvious keywords only, their category gets found easier, but they're in a bigger list. But, no one site stands out based on having keywords that others can't have. 
Today's situation is like being able to buy the 'Sex' or 'Entertainment' section of the yellow pages, so that you're the only company there. Re:5 day waiting period?? (Score:3) So I called Mr. Hicken, who said he aquired the domain name legitimately, using standard NSI procedures, and almost immediately treatened to sue me if I tried to get the domain name back. As the company I worked for at the time had neither the time or money to waste pursiung Hicken in court, we let it drop. All I can figure is that he has, or had, friends at NSI. I don't know any other way he noticed the few-minutes (seconds?) gap between the delete and add for that domain. It certainly would not have shown up in WHOIS (updated every 24 hours!), so he shouldn't have even known that the domain was on the move. It was an inside job! NSI is just a poorly run company which found a way to latch onto the public teat. They would have been chewed up and spit out by the market without special government protection and status; what talent do they have? All they do is mismanage a system invented and set up by the NSF and Jon Postel, et al, way back when. And, unfortunately, ICANN is a joke and hasn't humbled NSI or improved the situation in the least. My Secret Recipe (Score:4) 5 day waiting period?? (Score:3) When he originally put in the forms for the transfer of the domain, NSI told him there would be a 5 day wait. A 5 DAY WAIT? For what? In my job I've registered literally hundreds of domain names, and transferred several dozen and I've never seen any notice about a 5 day wait. As anyone else ever had this happen to them? And one things for sure: if this guy had been a big corporation, NSI would have found a way to get that domain back. Re:The Lesson is Clear (Score:2) It wouldn't be a problem if NSI hadn't screwed everything up in the first place by not differentiating com, net, and org properly. Re:Mandatory ".us" country suffix (Score:2) Ways to proceed. 
(Score:3) There's also the possibility of using the new Domain Name Dispute Procedure, which works through the World Intellectual Property Organization in Geneva. That costs only $1000 to use, and might be worth a try. A Fragile Plan? (Score:3) Sure that's too bad and all, losing money through a botched process. NSI screwed up, BUT McLanahan knew the consequences. He's an MBA major. If he wants to succeed in business, he'd better toughen up. If the loss of a domain name is enough to crush a business plan, it couldn't have been much of a plan, imho. Graham Re:Disclaim all Liability (Score:2) Hmm. Nope. I don't think I can say EULA. Lessons Learned (Score:2) Lesson learned. Never use NSI; that company has to be one of the biggest cluster fscks on the Net. With as much money as they pull in per domain, you would think they could afford to mount an operation a little more efficient than your average fly-by-night company. Nearly everyone who has dealt with them probably has some horror stories about lost submissions, unreturned phone calls, unanswered e-mail, and a "sucks to be you" customer service policy. If you are using them and you are banking on fast turnaround, or even competent service, well, sucks to be you. I hope some of the other new registrars can pick up the slack on this and provide some good service. Finkployd Original policies. (Score:2) Let me recap, and paraphrase. 1 - domains were free. There was no registration fee. NSI was appointed to perform the administrative tasks of running the registry. Note this didn't mean 'owning' the DNS or anything, just someone to do the work. 2 - to get a - The application states that you may not give fraudulent information on your registration (false company names are SO common nowadays) - TRANSFER OF OWNERSHIP (phrased as 'change of registrant') was *expressly* NOT POSSIBLE, except for one condition, being when one company purchases all the assets of another (so mergers, things like that).
This was in here SPECIFICALLY to prevent the type of behavior we see today. Then.. Internic (NSI) started charging a fee for registration, claiming the US Govt did not wish to fund it anymore, as it was no longer a US-only issue (which is true). They came up with the $100/first 2 years followed by $50/year registration fee. This made sense. Domains took forever to register. It *DOES* cost money to do this. Somewhere along the line, and I'm not clear why or what happened, or who to blame, but these rules stopped being enforced. NSI encouraged people to register In short, the breakdown of the original rules caused the system to go to hell. Grabbing a pitchfork and a torch. (Score:2) I'm normally not one to advocate guerrilla tactics for anything short of the repression of human rights, but at this point, I think greatdomains.com, and to a lesser extent the NSI, are fair game for email avalanches and, what the heck, a few crudely-spelled ungrammatical aspersions cast on the genetic integrity of their ancestors. Re:Grabbing a pitchfork and a torch. (Score:2) If flames _are_ a problem, well, they deserve it anyway. NSI has cost this person, and many people like him, a lot of money and inconvenience; they can deal with the slight karmic retribution of having their mail server crash. The fact that companies like greatdomains.com exist is in my mind one of the biggest problems if not the biggest problem with the internet. The reason I am even slightly troubled by the fact that a thousand OK, maybe I'm a little bitter. Whatever. I like the nic.cx people; they're cheap, and they have strict anti-domain-squatting-for-profit regulations that actually work.
-mcc-baka INTELLECTUAL PROPERTY IS THEFT Re:The Lesson is Clear (Score:2) Remember that Altavista failed to secure its domain name, Altavista.com, when Digital was first starting the search engine, because the guy who owned it wanted something like $10,000 for it and Digital never thought it would become a commercial venture. When Compaq bought them, they realized that they needed the Altavista domain name, and ended up paying $3,000,000 for it in the end. Anyway, I don't know what your business is, and maybe the .net domain is okay. I'm just giving you a little food for thought. Perhaps Wired was more critical than they sounded. (Score:2) "[Network Solutions] offers no guarantees and won't be liable for registration gaffs"? Unlike a "gaffe", French for "social blunder", a "gaff" is (apart from the original meaning of a large fishing hook on a stick): "A trick or gimmick, especially one used in a swindle or to rig a game", ... or "Harshness of treatment; abuse".... ---- arrogant, greedy, and inept (Score:2) They don't care about domain registrants at all, and obviously in this case, even less because the guy didn't register his domain through a method that provided NSOL with the most profits. This fits nicely with their domain dispute policy which basically favours the bigger lawyer, on the theory that NSOL will get sued less often if they side with the money. DNS is a disaster now. For the last few months any request that isn't accompanied by a check written out to NSOL appears to go to It'd be great if we could arrest these guys and charge them with incompetence. Lock them up for 20 years. So...life goes on. (Score:4) Of course, us webheads know that's why you want to own microsoft.com or ibm.com. However, if something didn't go right with the domain registration, it's *not* the end of the world. I understand why somebody should be upset, since he had a "verbal contract" with NSI, but something happened.
I don't see anybody getting upset because they can't use the username mike@aol.com. They apply creativity and imagination to come up with something original. So, maybe racing.com is taken; take reallykick-assracing.com. Contrary to what you might believe, there is more to web success than a URL. Look at slashdot, freshmeat, and 32bitsonline. They don't really have beautiful URLs. You have to market the site once it's set up. That's just my 3 pfennigen. The Lesson is Clear (Score:2) I work for a company that has a .net domain name, because the .com domain was already taken. We were recently contacted by the guy who owns the .com domain name, who now wants us to pay him something like USD $10,000 for the .com name. I told our president that he should definitely not pay. The domain name speculators are just trying to leech money off others, without providing any useful service themselves. It's kind of funny... the guy (who has the .com name) says he's looking to sell it, and has got other bidders. Hah! We've got the trademark tied up in the USA, so no one else is going to touch it. We'll just wait for it to become available for the regular price. If people didn't have such a hang-up about .com domain names, there wouldn't be this kind of problem. Granted, we're not looking to start a portal site that people will hopefully stumble across by accident. But I sure as heck didn't find Slashdot by guessing at a domain name. Actually, except for major companies (like IBM) I don't usually try to guess a domain name, but use a directory instead. Even if I'm trying to start a portal, I'm not going to pay big bucks for a good name. I'll just come up with another. this is totally wrong... (Score:2) 2) He was using NSI's new "Worldnic" service, which gives you the same thing as the old registration, but costs $40 more. I'm the hostmaster at my place of employment, and the new system sucks.
Whereas before one email + one reply was sufficient to make a change, now there are 3 different login/passwd combinations that need to be used to get anything useful done. I always thought the mail-back verification was more than safe enough; but it seems to me someone could try and brute force a password in order to steal a domain if they _really_ wanted to. Not alone.. (Score:2) I have a private domain name registered, so my name is in their system. Several months ago their billing database was corrupted (at least in my case) and I became the billing contact for a random domain. The first I heard of this is when I received a bill for that domain. I checked their whois database and found that I had become the billing contact. I sent them a polite email notifying them of the mistake, but they have so far refused to correct the error. I instead was forced to contact the true owners of the domain and ask them to complain to Network Solutions. It really scares me that a company whose entire business is in keeping a database of information can't even keep their billing database accurate. Doug Disclaim all Liability (Score:3) -------------------- Re:A Fragile Plan? (Score:2) Imagine deciding to start a car dealership, purchasing a large lot of land, only to have it mysteriously sold to someone else. You've lost the money invested in the land, as well as the land itself, your proposed place of business. If that isn't enough to kill any business plan I don't know what is! Doug Re:A Fragile Plan? (Score:2) On a slightly less cynical note, a domain name is a company's best asset. On the internet geographical proximity isn't an issue and very few sites actually offer a service another site can't offer at a similar price. This means that the ONLY distinguishing mark of your company is your advertising and domain name. If your competitor's domain name is easier to remember he might end up with the entire business and you with nothing. New registry time!
(Score:2) Form another registry. We can create a new TLD and nest things underneath there, but with one important difference over other projects like AlterNIC - the option to override the root nameservers. How come? Well, I for one am sick of hearing about Multi-Mega Conglomerate of Super Corporation Enterprises Inc, Ltd. using trademark law to snap up domains even remotely similar to their own, and often unfairly. My solution: first come, first served, end of story. There will be no trademarks in the DNS system. There will be no money to be had in the system. There's a few other ideas I want to throw in, but that's the big one - root namespace overriding. I also think registration should be very easy - if the domain isn't used, click [register] and you're live after filling in the fields. The technology's there. E-mail me off slashdot, I'd like to hear what you think.. Me too, but there are alternatives! (Score:2) Hear, hear! For a list of alternative domain registrars other than NSI, check this out. [icann.org] Re:So...life goes on. (Score:2) Finkployd Re:Monopoly could be good (Score:2) Let's just get rid of all DNS and memorise IP addresses like phone numbers. +++ Mike DeMaria Want an alternate to the GPL? Find out about it here. [nand.net] It's tradition, No? (Score:2) ----------- Useful Tip (Score:2) Enclose a check with all letters you send to NSI. If you want to make a complaint, write out a nice big one for $50. If you need to get tech support, give 'em $10. If you want to change any aspect of anything, about $100 should do. Remember, bribes work! Re:New registry time! (Score:2) 1) Who pays for root nameservers? Is this service free? If so, how do you convince people to switch? 2) Trademarks are dangerous because they are *legally* protected. Very possibly you are going to end up in court a lot over this even if you shouldn't. Big dollars for lawyers. 3) Everyone wants the websites they visit now to stay that way.
Possible solution: Don't start over; create a blacklist (like for email spam) of really egregious trademark abusers (for instance companies which steal private citizens' last names etc..). This blacklist would contain the domain name in question and possibly a new forwarding address. Individual ISPs would (hopefully) choose to add this list (distributed in proper format) to their name servers (force them to answer authoritatively for these domains), therefore punishing any company engaging in this practice. This has little to no funding requirements, and since there is no single organization to sue, legal challenges become very difficult. In addition it is implemented transparently to most users (those who use their ISP's name servers). This brings up an interesting question (Score:2) Re:Ways to proceed. (Score:2) Now that there are alternative registrars, people should start boycotting them.....wait a minute. They don't care about their customers. This probably wouldn't bother them. I wouldn't be surprised if they purposely left holes like this in the system to show that there should only be one company in charge of the system....but I think they are effectively proving that such a company should not be them. ajit Re:5 day waiting period?? (Score:2) My experience with domain registration... (Score:2) Anyway, on the bright side of this, after several emails expressing our irritation to both the domain prospecting company and NSI, the domain prospectors agreed to give us back the name. (Something which surprised the heck out of me.) It really wasn't that great a name; I guess they figured it wasn't worth the hassle. Moral: Don't base your business on a domain name! (Score:4) While I feel pretty sorry for the guy who got ripped off, and am not the slightest bit surprised to see N$I acting this way, I think that if he was basing the entire business on the URL then he had the wrong attitude to begin with.
I mean, in what other field would people base their entire business plan on the NAME of the fucking company? Yes, as long as the Internet is still new to most of its users, and people still feel lost and unsure of where to go, owning a domain like buy.com or sex.com is a goldmine. But in the long run, you are on pretty thin ice if that is the base of your business (yes, I know Wall Street doesn't agree with me). The web is not, and will never be, a keyword based system. In fact, if you read TBL's original paper on WWW for CERN, he specifically mentions having developed the Web because keyword based systems are BAD. Hypertext provides the ultimate decentralized namespace, and no one can argue that people don't become less and less dependent on obvious domain names as they become more at home with the Web and the way it works. Did Ebay, Yahoo, or Slashdot need obvious domains to succeed? Does the domain not being nerdnews.com detract from Slashdot's popularity and success? I have no clue what sort of a market there actually is for the website he wanted to start, but if his business plan WAS sound, I would recommend he think of another name and go on. I'm no good at this, but why not for example on-your-marks.com or theyreoff.com? For someone more creative with words there must be hundreds of race related terms not URLed yet. I really hope that someday people will realize that the domain name is not the website. If a site is good enough it can be just as successful with some clever, easy to remember, but not generic domain, as it can if you spend millions on buying the most obvious related word you can afford... - We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way. Re: They screwed up on me too (Score:2) Hmmm... Just got this thought: Could enough reports to the Better Business Bureau Online [bbbonline.org] possibly do anything? So...
(Score:2) After all, policy must apply fairly to all. (I'm not going to do this, but I have to say that the NSI are leaving themselves -wide- open on this one, and I doubt any judge would be sympathetic to them, if they did complain.) Re:A Fragile Plan? (Score:3) Slashdot? Amazon? Ebay? Yahoo? Excite? Those are all very nondescript names.... And that's why they catch, IMO... I agree with the original guy. If his business plan can't be adapted to a new domain name, then that in itself seems to be a problem. Re:Begging to be overthrown (Score:2) Not necessarily an "innocent" victim... (Score:5) When I spotted this story on Wired this morning, I decided to look this guy up (John McLanahan) - I've had my own experiences with NSI (not quite to the same extreme as he has), and wanted to find out some more details about his situation and see if I could help somehow. Tried searching the web for him - found a 29-year-old John McLanahan from Boston who came in 134th in a half-marathon [coolrunning.com], another who is a corporate lawyer in Georgia [troutmansanders.com], and one who lived sometime in the late 1700s (from a few genealogy sites). From the Wired article, it sounded like the Boston McLanahan might be the one (right age range, into racing) but there was no e-mail address listed on the marathon results. So, I went to the NSI WHOIS server, searched for "McLanahan, John" [networksolutions.com], and found a John McLanahan with a Boston address [networksolutions.com] (actually, three or four handles with the same name and mailing address) who currently owns a number of domains related to racing (roadraces.com, sailingraces.com, runningclubs.com, raceplanning.com, raceinformation.com, coolraces.com) - sounds like the right guy... ...and then I notice the other domains this guy has registered.
It looks like he owns a number of domains that are stock-ticker symbols for .com and hi-tech companies (TalkCity, Voyager.Net, ChemDex), some life-insurance related domains (weblifeinsurance.com, lifeinsuranceinfo.com), and some more generic business-related domains (bankinginformation.com, companyinterview.com). Unless his business plan covers more than just racing, I'd say he's been in the domain-speculation game for a while himself... especially when just about every domain I tried going to said "domain for sale". Not to excuse NSI's more-than-usual incompetence, but suddenly I don't feel quite so sorry for this guy... ________________________ Re:The Lesson is Clear (Score:3) Sorry to puncture your balloon, but you appear to need a little education in trademark law. Unless you are a hugely major brand, like Coke or Disney or McDonald's, you don't have the trademark 'tied up'. You might have the trademark 'tied up' for a particular class of trade, but that doesn't mean that it couldn't be used somewhere else. Take a look at the word "Delta". I'm personally aware of three companies that call themselves "Delta" (Airlines, Faucets and Dental Insurance), and their trademarks don't conflict with each other (as long as the airline people don't try to sell faucets). So, it's entirely likely that your whatever.com address is going away and there's nothing you're going to be able to do about it. It's unlikely that you're bigger than Delta Airlines. And what if he sells it to somebody outside the U.S.? ...phil GreatDomains=Register.com (Score:2) Is it any wonder register.com won't give the name back? Their own sister company is the one who stole it. I can see it now: every time a domain expires or is released to the pool in any way, register.com/greatdomains.com decide if they want it and within minutes have it stolen. They probably have people who sit there monitoring newly available names 24/7.
It seems to me that the relationship between GreatDomains.com and register.com is totally inappropriate. It's like letting the fox guard the chickens. Quite frankly, I think selling domain names should fall under the same laws as scalping tickets. You sell them for face-value (cost of registration) or you don't sell them at all. I just did a little experiment and just starting making up names of domains that might be nice to have and checking them. At least 1/3 of the names I tried took me to web-sites offering said name for sale. If I remember right, trademark law requires that you have a product or service associated with a trademark, can't we have a similar law for domains? Can you see Title Search, Ins, Escrow fees coming? (Score:2) One wonders if the status of a given name is even maintained in a single transactional database. Or maybe they have defined name claim and release transactions, but not transfer? Can there be race conditions between competing registering businesses? Also, how can one be sure the very act of checking a name doesn't pass it to a speculator? It's apparently not encrypted, so who is in a position to snoop all those form submissions? Maybe one should be careful not to "check" unless ready to commit immediately. Hm. Are registering companies allowed to sell their server logs? What if they just extract the names being checked? Re:The first thing that comes to my mind... (Score:3) Think about it: if NSI has no blame, then there's no good solution to this. 
Register.com can't boot its customer - the domain was open for registration; that NSI had plans for it is irrelevant, since NSI didn't make that situation apparent until after Register.com was already in contract; the guy who originally held races.com and transferred it shouldn't have to pay back anything - he's out a domain name, already; and McLanahan shouldn't have to spend $500,000 to buy a domain that should be his or have to pay money for a different domain that he didn't want to begin with. NSI bungled the transfer process by failing to lock the domain name when it'd be highly trivial to do so. That constitutes liability in my mind. Re:Ways to proceed. (Score:2) Okay. NSI bungled it.. but keep reading... (Score:2) 2) NSI bungles the transfer (sucks, but they did) 3) under the new system, register.com has already sold the domain to someone. 4) NSI asks register.com (who they have NO authority over) if they can have it back, and explains what happened. 5) register.com says 'no, our customer has it under contract already. we can't back out' 6) NSI says 'I'm sorry, there is nothing we can do' Now, I see 3 main points to consider. 1) If you are going to buy a domain from someone (a horrible practice), you should make it THEIR responsibility to ensure that the domain is transferred correctly, and they should receive payment once the domain is in your possession. NOT before. 2) If Internic even mentioned to him on the phone 'okay, we messed it up once, sorry, we'll put it through again correctly' or anything to that effect, then he has a case against them. A written promise is not needed. They claimed they would do something for him, then didn't follow through, and it will cost him money. 3) The whole concept of treating domain names like property is bunk. They are *NOT* property. If they *were*, it would be easy to buy and sell them, and it isn't. Re:hmm... (Score:2) And the fact that the name races.com described the site very well.
NSI's suggestion, racesnow.com, isn't very good. I don't understand why the site would be named "Race Snow". NSI times are chaotic (Score:2) Not surprised, but disappointed (Score:2) Nope, it's not disappointment for either Network Solutions or register.com/greatdomains.com. It's disappointment for the hundreds of thousands of Slashdot members out there who, though continuously complaining that they're 'sick of cybersquatters like greatdomains.com,' do absolutely nothing about it. Guys, we can comment about it 'til the sun goes down and that's not making a damn difference. But rather than moving on and forgetting about it, why don't we do something about it? Though small in comparison to the likes of c|net or ZDNET, the userbase of Slashdot is certainly large enough to put a dent in register.com's and greatdomains.com's wallet. Or at least make them sit up and take notice. So why not, to start at least, an organized campaign boycotting greatdomains.com and register.com? I've found sportworld@msn.com (listed administrative contact) to be the most likely address to be checked - better than filling out the greatdomains.com support & bug report. I propose that each and every Slashdot member out there who is sick of these types of stories, or of having to pay $500,000 to a sleazy company who bought a domain for $70, write a letter - perhaps we could post a template of one here or, if Rob approves of this idea, on the main page - to register.com and greatdomains.com, telling them that (though it'd be inaccurate) every single one of the hundreds of thousands of Slashdot members will now be using Network Solutions (in an attempt to get them to return the domain), and will definitely NOT be registering domains from greatdomains.com - and spreading the word as well. This is only the start.
Letters could be sent to CNET, ZDNET, and just about any other electronics information site out there, publicizing this story and shining the light on what greatdomains.com does, including registering domains for cheap prices just for the purpose of reselling them for tons of cash. And of course, don't forget to mention their partnership with register.com The goal of this would be not so much to get McLanahan's domain back (though surely this is one goal), but in general to expose such companies as greatdomains.com/register.com and their motives. I am not kidding around here, I'm talking about an organized effort of every slashdot member who's sick of this sort of thing, with letters to any person or company who might seem relevant in this matter, and perhaps a website set up for our campaign. I know some (most) of you are looking right now to get back at Network Solutions for being so weakminded and "hey, it wasn't us" about this. But right now I'm having trouble placing full blame (though they probably deserve it) on Network Solutions, having just seen (for the first time) greatdomains.com. Granted, I've seen cybersquatters in the past, but never have I seen such a slick business as greatdomains.com, who try to act as just another large, respectable organization, overshadowing their unjust motives - which I feel could change if such motives are exposed to enough people publicly, and especially if such companies are boycotted by slashdot's users (their target audience, mainly), among other people. Guys, we've got an entire slashdot community and a voice. Let's use it. Skeptics of the campaign need not apply. How Do I Move My Domains? (Score:2) I've been wanting to transfer my domains OUT of NSI's purview. I don't find any such functionality on thier site. Last time I looked at register.com, there was a blurb about this to the effect that this capability had not yet been implemented. Is this currently possible? 
And if so, is it currently too damn dangerous to attempt? ====== "Rex unto my cleeb, and thou shalt have everlasting blort." - Zorp 3:16 (Score:2) 1. Complain to NSI 2. Talk to a lawyer 3. Take it to the media Guess what 4 is? (did anyone else get that insanely irritating flashing ad tile on Wired? And they say video games make you want to kill!) Joker...nuff said (Score:3) If you order tons of domains, you can get a special account that verifies based on your PGP key (not sure if GPG keys work; they should, though). Also they will bill you for your domains, as opposed to normal registration, which you pay up front. I just ordered two domains the other day from them for ~US$70 and it was great. The records were done within 24 hours and I am a happy camper. I found Joker through a suggestion of a Slashdot user. They're fast. They have an SSL encrypted process. (heLLO? Network Solutions?) Ignore the fact that they use poor English on the site (it IS their second language) and you'll be happy. The only pain was having to re-register my name servers and contact info with corenic, since Network Solutions info isn't corenic registered, but that was cool with me. When my other 5 domains expire next year, I'm rolling them over to joker.com. They're fast, simple and in short, they kick ass. Check em out. Re:NSI has a bug in their system - plain and simpl (Score:2) Perhaps the gov't should just yank away their contract and run the root nameservers themselves until a suitable replacement is devised? NSI Controls Central Registry and thus Responsible! (Score:2) *** YES, NSI IS RESPONSIBLE AND MUST FIX THEIR MISTAKE AND HERE'S WHY *** NSI is claiming that while they made a mistake, there's nothing they can do since the domain was registered by someone at Register.com. Nice try, but here's the problem: Keep in mind that NSI also controls the *central registry* for What about.... (Score:2) I think section 3 b.
that he could sue ICANN or have a court petition force the transfer of the domain to him, as long as he has proof, a receipt from the seller, that the seller did in fact transfer the domain to him and therefore it should not have been left for public sale. Just my thoughts Dictionary words (Score:2) Anybody want to slip me a copy of the zones? Re:Ouch! (Score:2) -- U.S. Circuit Court ruling on domain name ownership (Score:2) In other words, it's property. Not just a name, but something that someone can own. CNET article on the ruling [cnet.com] After reading this article, I think that McLanahan has every bit of legal ground that he needs to file criminal charges against NSI for the theft of his property. Please remember that NSI is based in Herndon, VA, right near the very Circuit Court that issued the ruling. Mr McLanahan, if you're reading this, please go for it. As for NSI, we need something better, without a doubt. itachi Re:GreatDomains=Register.com WRONG (Score:2)
https://slashdot.org/story/99/12/11/1155244/nsi-botches-domain-transfer-says-not-our-problem
Arrays are objects in Java (see Section 4.1, p. 100). A review of arrays is recommended before continuing with this section. The discussion on passing object reference values in the previous section is equally valid for arrays. Method invocation conversions for array types are discussed along with those for other reference types in Section 6.6.

public class Percolate {
    public static void main (String[] args) {
        int[] dataSeq = {6,4,8,2,1};              // Create and initialize an array.

        // Write array before percolation.
        for (int i = 0; i < dataSeq.length; ++i)
            System.out.print(" " + dataSeq[i]);
        System.out.println();

        // Percolate.
        for (int index = 1; index < dataSeq.length; ++index)
            if (dataSeq[index-1] > dataSeq[index])
                swap(dataSeq, index-1, index);    // (1)

        // Write array after percolation.
        for (int i = 0; i < dataSeq.length; ++i)
            System.out.print(" " + dataSeq[i]);
        System.out.println();
    }

    public static void swap(int[] table, int i, int j) {  // (2)
        int tmp = table[i];
        table[i] = table[j];
        table[j] = tmp;
    }

    public static void swap(int v1, int v2) {             // (3)
        int tmp = v1;
        v1 = v2;
        v2 = tmp;
    }
}

Output from the program:

 6 4 8 2 1
 4 6 2 1 8

In Example 3.5, the idea is to repeatedly swap neighboring elements in an integer array until the largest element in the array percolates to the last position of the array. Note that in the definition of the method swap() at (2), the formal parameter table is of array type. The swap() method is called in the main() method at (1), where one of the actual parameters is the array variable dataSeq. The reference value of the array variable dataSeq is assigned to the array variable table at method invocation. After return from the call to the swap() method, the array variable dataSeq will reflect the changes made to the array via the corresponding formal parameter. This situation is depicted in Figure 3.6 at the first call and return from the swap() method, indicating how values of elements at index 0 and 1 in the array have been swapped.
However, the definition of the swap() method at (3) will not swap two values. The method call swap(dataSeq[index-1], dataSeq[index]); will have no effect on the array elements, as the swapping is done on the values of the formal parameters.
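The same pass-by-value rule also explains why reassigning an array parameter inside a method has no effect on the caller: only the method's local copy of the reference changes. A minimal sketch illustrating both cases (the class and method names here are hypothetical, not part of the example above):

```java
public class RefDemo {
    // Mutating through the reference: the caller sees the change,
    // because both variables refer to the same array object.
    static void swapFirstTwo(int[] a) {
        int tmp = a[0];
        a[0] = a[1];
        a[1] = tmp;
    }

    // Reassigning the parameter: only the local copy of the
    // reference is redirected; the caller's array is untouched.
    static void reassign(int[] a) {
        a = new int[] {42, 42};
    }

    public static void main(String[] args) {
        int[] data = {1, 2};
        swapFirstTwo(data);
        System.out.println(data[0] + " " + data[1]); // prints "2 1"
        reassign(data);
        System.out.println(data[0] + " " + data[1]); // still "2 1"
    }
}
```

This is exactly why the swap() at (3) in the example fails: the method receives copies of the two int values, and swapping copies leaves the originals alone.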
https://etutorials.org/cert/java+certification/Chapter+3.+Operators+and+Assignments/3.20+Passing+Array+References/
Please note, this tutorial will focus on developing under Windows. Basically 99% of the information will be the same for Linux and Mac too; in case of differences a short note will be added.

Get the development tools

It's very probable you'll already have all the necessary tools, but just in case: first you need to get the Google Android SDK from here and the NDK from here. The SDK package already has Eclipse with the ADT plugin installed, so no additional operations will be required. The only important note is to "install" these two packages in a path not containing spaces. For example, installing the packages in a path like "C:\Program Files\Android\" will cause problems, since there are a lot of command line tools that, in case of spaces, will "interpret" the space as a parameter separator and return an error. I suggest something like "C:\Android\" just to avoid problems. The SDK contains the basic Android tools and the NDK contains the tools that allow communication between Java and C/C++ native code, called JNI. Instructions about how to configure these tools can be found on the Google site itself.

Create Android app project

We are now ready to start. The Eclipse main executable is in the path "SDK\eclipse". Start it and create a new project through the menu: File => New => Project... Then select Android application project and fill in all the information required by the wizard. We'll name our example project "AndroidTest". The app interface will be very simple: just two buttons performing basic operations for demonstration.

Create java interface to native code

Now the most interesting part. First we need a Java class to use as an interface to the native C/C++ functions we want to call. The syntax is very simple, as you can see below:

public class NativeCodeInterface {
    static {
        System.loadLibrary("NativeCode");
    }

    public native int get_int(int param);
    public native String get_string();
}

This is a standard Java class with only two important points.
The function loadLibrary, as the name suggests, loads our native code dynamic library (libNativeCode.so) at program startup. If the library is not available the app will not start, so keep this "problem" in mind in case you notice that behaviour (check the logcat debug output). Available means the library has to be in the same folder as the Android app or in some system path accessible by all apps. The second point to note is the keyword native, used to declare the functions. Get C/C++ functions format Functions exported from the C/C++ side must have a specific format to allow Java to invoke them. The rules of this format can be found in the documentation but, fortunately, we don't need to know them, since Java provides a tool able to "convert" the Java declarations to the corresponding C/C++ side. Open a command prompt and go to the project folder subdirectory: [My Eclipse workspace path]\AndroidTest\bin\classes Here execute the following command: javah -jni com.example.androidtest.NativeCodeInterface Please note that "com.example.androidtest" is the package name of our example app. In your case you'll have to type the name of your app package and the name of the class you chose as the interface. If the command executed successfully you'll get a new file in the same path called (following our naming example): com_example_androidtest_NativeCodeInterface.h This is the C/C++ header file containing the signatures of the functions to export to allow Java interfacing.
The content will be similar to the following:

/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_example_androidtest_NativeCodeInterface */

#ifndef _Included_com_example_androidtest_NativeCodeInterface
#define _Included_com_example_androidtest_NativeCodeInterface
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_example_androidtest_NativeCodeInterface
 * Method:    get_int
 * Signature: (I)I
 */
JNIEXPORT jint JNICALL Java_com_example_androidtest_NativeCodeInterface_get_1int
  (JNIEnv *, jobject, jint);

/*
 * Class:     com_example_androidtest_NativeCodeInterface
 * Method:    get_string
 * Signature: ()Ljava/lang/String;
 */
JNIEXPORT jstring JNICALL Java_com_example_androidtest_NativeCodeInterface_get_1string
  (JNIEnv *, jobject);

#ifdef __cplusplus
}
#endif
#endif

As you can see, here we have the same functions defined in the Java class we just created, but in C/C++ format. You have to use these signatures exactly as they are, without any changes, since this is the only way to allow the automatic "interface" between Java and C/C++. There is a more flexible way to reach the same result, but it is also more complex, since it involves knowing how the C/C++ code has to be "configured" to allow Java to call it. In this tutorial we will use only the easiest way. Add native C/C++ code Here we'll add some native C/C++ files to be integrated into the whole app project. Before starting, a short explanation. The Java language used in an Android app can communicate with native code using a mechanism called JNI. By "native code" we mean a dynamic library written in C/C++ and compiled using a toolchain (provided inside the NDK). This dynamic library exports the functions, with the specific syntax just generated, that can be called from the Java side. Only dynamic libraries can be used: no static libraries and no standalone applications.
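The symbol names in the generated header follow JNI's deterministic name-mangling rule: "Java_" plus the fully qualified class name with dots turned into underscores, plus the method name, where any literal underscore in a Java identifier is escaped as "_1" (which is why get_int becomes get_1int). As a rough sketch of that rule (the helper name jni_symbol is mine, and this covers only plain ASCII names without overloaded natives, which would get an extra signature suffix):

```python
def jni_symbol(qualified_class, method):
    """Derive the exported C symbol name for a Java native method.

    Covers only the common case: ASCII identifiers and no overloaded
    native methods (overloads get an extra __<signature> suffix).
    """
    def mangle(name):
        # A literal '_' in a Java name is escaped as '_1' in the symbol,
        # so that '_' itself can safely act as the package separator.
        return name.replace("_", "_1")

    return "Java_" + mangle(qualified_class).replace(".", "_") + "_" + mangle(method)

print(jni_symbol("com.example.androidtest.NativeCodeInterface", "get_int"))
# Java_com_example_androidtest_NativeCodeInterface_get_1int
```

This reproduces exactly the names javah emitted in the header above, which is a handy sanity check when an UnsatisfiedLinkError reports which symbols the runtime tried to resolve.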
If you want to use functions exported by a precompiled static library, you have to develop a dynamic library, linked against that static library, to act as a "wrapper" between Java and the functions you require. Another very important note: JNI is able to interface Java only with native code compiled with C linkage. This means that if your code is C++ you need to create wrapper code in C format to work with JNI and use this wrapper as the interface to the C++ code. Alternatively, you can use the extern "C" declaration before each C++ function you want to export to JNI. Now that we have the function prototypes, we need to add the function bodies. In your project folder create a new folder called "jni" and move the header file just generated there. Once done, create inside the jni folder a new file called "NativeCode.c" with the following content:

#include "com_example_androidtest_NativeCodeInterface.h"

JNIEXPORT jint JNICALL Java_com_example_androidtest_NativeCodeInterface_get_1int(JNIEnv *jenv, jobject jobj, jint param)
{
    return (param + 1);
}

JNIEXPORT jstring JNICALL Java_com_example_androidtest_NativeCodeInterface_get_1string(JNIEnv *jenv, jobject jobj)
{
    return (*jenv)->NewStringUTF(jenv, "String from native code");
}

The bodies are very simple. The first function returns the integer param passed from the Java side, increased by 1. The second function returns a string converted from C format to Java format. The last step is to create an additional file (again inside the jni folder) called Android.mk with the following content:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := NativeCode
LOCAL_SRC_FILES := NativeCode.c

include $(BUILD_SHARED_LIBRARY)

This file is needed by the toolchain's C/C++ compiler to know which files to include in the compilation and what the compilation result should be (shared library means dynamic library).
Once all this work is done, your Eclipse project folder should appear like the following: Configure Eclipse for compilation Now we have the native code ready, but we need an additional step. So far we created an Android project, which means Eclipse is configured to compile Java code only, and we need to "instruct" the IDE to compile native C/C++ code as well. To reach this result we need the CDT (C/C++ Development Tools) component. Browse the Eclipse main menu and select this item: Help => Install New Software... In the "Install" window just shown, in the field "Work with", type the following URL: Then press enter and wait for the data to load (please note, if your network uses a proxy for Internet access, follow this guide to configure Eclipse to go through a proxy). In case you have a different Eclipse version than Galileo you need to use the correct URL; check the Eclipse main site for additional info. In any case, once the data has loaded, open the "Programming Languages" section and select the "Eclipse C/C++ development tools" item as below: Then click "Next" and proceed with the installation. In case your Eclipse already has this component installed, the wizard will advise you and no additional action will be required. Now we have to convert our Java project into a mixed Java and C/C++ project. Select the following menu item in Eclipse: File => New => Other... In the window shown, open the C/C++ folder and select the "Convert to a C/C++ project" item as below (don't worry about the name; in this case "conversion" simply means adding the C/C++ management part): Click Next and select the Android project name to convert ("AndroidTest" in our case). In the "Project Type" field select "Makefile project" and "-- Other Toolchain --". Click Finish. Now Eclipse will ask you if you want to open this perspective now. Click Yes to have both the Java and C/C++ compilation profiles available. Configure NDK The next step is to configure Eclipse to invoke the NDK compiler for the C/C++ code.
In the Eclipse main window, right click on the project name (in the "Project Explorer" subwindow) and in the popup menu select "Properties". Select the "C/C++ Build" section. In the "Builder Settings" tab uncheck the "Use default build command" item and insert the full path of your NDK installation plus the name of the build script file "ndk-build.cmd" (on Linux and Mac systems the name is just "ndk-build") as follows: Then move to the "Behaviour" tab, uncheck the "Clean" field and clear all text from the "Build" params field as below: Click on the "Apply" button once done. Now it is necessary to set the path where the platform header files are located. In the same properties window move to the "C/C++ General" section, "Paths and Symbols" subsection. Here add the path where the header files are found to "GNU C" or "GNU C++", depending on your choice. In this example we'll use C files. The header files are inside the NDK installation folder. To be more precise, the header files are under the "platform" subfolder but, as you can note, many different choices are available; it basically depends on the Android target platform you selected for your project. In our example we use API level 18, so the path to include will be: C:\Android\NDK\platforms\android-18\arch-arm\usr\include as in the image below: Click "Apply" and then "OK". This is the basic include; however, if Eclipse reports the error "Invalid arguments Candidates are:" for some common functions like memcpy, memset, strcpy and so on, you have to insert an additional include path as follows: C:\Android\NDK\toolchains\arm-linux-androideabi-4.8\prebuilt\windows\lib\gcc\arm-linux-androideabi\4.8\include When developing under Linux, instead of the windows subfolder we'll have linux-x86 as the only difference. Develop main activity code Now the last step. We need to develop the Java code in the main activity to call our external native code functions.
The body of the activity should be like the following:

public class MainActivity extends Activity
{
    NativeCodeInterface NativeCode = new NativeCodeInterface();

    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Button GetStringBtn = (Button) findViewById(R.id.get_string);
        GetStringBtn.setOnClickListener(new OnClickListener()
        {
            public void onClick(View v)
            {
                String NativeCodeStr = NativeCode.get_string();
                Toast.makeText(getApplicationContext(), NativeCodeStr, Toast.LENGTH_LONG).show();
            }
        });

        Button GetIntBtn = (Button) findViewById(R.id.get_int);
        GetIntBtn.setOnClickListener(new OnClickListener()
        {
            public void onClick(View v)
            {
                int NativeCodeInt = NativeCode.get_int(0);
                Toast.makeText(getApplicationContext(), "Passed 0 and returned " + NativeCodeInt, Toast.LENGTH_LONG).show();
            }
        });
    }
}

As you can see, the Java code calls the native functions and shows the result in a toast widget. The first gets the string from the native code, and the second passes 0 as the function param and gets 1 in return. The activity_main.xml resource file containing the app window used in this example is a RelativeLayout holding the two buttons (ids get_string and get_int). During compilation, in the Eclipse main window you should see (in the "console" subwindow) something like this:

**** Build of configuration Default for project AndroidTest ****

C:\Android\NDK\ndk-build.cmd
Compile thumb  : NativeCode <= NativeCode.c
SharedLibrary  : libNativeCode.so
Install        : libNativeCode.so => libs/armeabi/libNativeCode.so

**** Build Finished ****

This informs you about the compilation result of the native code part. If no error occurred, the generated libNativeCode.so (in our example) will be placed in the project subfolder libs\armeabi. All the binary files present in this subfolder will be automatically included inside the generated .apk file by Eclipse.
This means that by following this approach you'll have the native code library already included in the package and placed in the same path as the Android app. The result of all this work will be the following: Pressing the two buttons will show the expected results. Hope this tutorial helps you. It's great, thanks for the post, but it gives me one error; would you like to help solve it? 07-28 17:46:31.842: E/AndroidRuntime(16430): java.lang.UnsatisfiedLinkError: No implementation found for java.lang.String com.example.hellojni.NativeCodeInterface.get_string() (tried Java_com_example_hellojni_NativeCodeInterface_get_1string and Java_com_example_hellojni_NativeCodeInterface_get_1string__) Hi, sorry, but this error is too generic to identify the problem; the causes could be many. Are you compiling your native code file in .c format instead of .cpp? Thanks, it's resolved now. And how do I generate the .h file again with updated code? About generating the .h file, the procedure is the same: simply repeat it with the updated interface calls. Good post. I've made this project and it works. The only problem: when I closed the project and reopened it, I got errors in NativeCode.c like "JNICALL could not be resolved". Hi, sorry, but I've never faced a problem like yours, so I don't know how to help you. In any case, remember that when you close an Android app the OS simply switches it to the paused state; this means that any initialization step will not be executed again when the app switches from paused back to active.
https://falsinsoft.blogspot.com/2014/07/using-eclipse-for-develop-android-app.html
I have a function which reads in a file, compares a record in that file to a record in another file and, depending on a rule, appends a record from the file to one of two lists. I have an empty list for adding matched results to:

import glob
import json
import multiprocessing

match = []

def link_match(file):
    links = json.load(file)
    for link in links:
        found = False
        for other_link in other_links:
            if link['data'] == other_link['data']:
                match.append(link)
                found = True
            else:
                pass
        else:
            print "not found"

list_files=[]
for file in glob.glob("/path/*.json"):
    list_files.append(file)

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=6)
    pool.map(link_match,list_files)
    pool.close()
    pool.join()

But when the pool finishes, the match list is still empty. When multiprocessing, each subprocess gets its own copy of any global variables defined in the main script. This means that the link_match() function in each one of them is accessing a different match list. One workaround is to use a shared list, which in turn requires a SyncManager to synchronize access to the shared resource among the processes (which is created by calling multiprocessing.Manager()). This is then used to create the list to store the results (which I have named matches instead of match) in the code below. I also had to use functools.partial() to create a single-argument callable out of the revised link_match function, which now takes two arguments, not one (the kind of function pool.map() expects).
from functools import partial
import glob
import json
import multiprocessing

def link_match(matches, file):  # note: added results list argument
    with open(file) as f:       # open the path so json.load() gets a file object
        links = json.load(f)
    for link in links:
        found = False
        for other_link in other_links:
            if link['data'] == other_link['data']:
                matches.append(link)
                found = True
            else:
                pass
        else:
            print "not found"

list_files=[]
for file in glob.glob("*.json"):
    list_files.append(file)

if __name__ == '__main__':
    manager = multiprocessing.Manager()          # create SyncManager
    matches = manager.list()                     # create a shared list
    link_matches = partial(link_match, matches)  # create one-arg callable to
                                                 # pass to pool.map()
    pool = multiprocessing.Pool(processes=6)
    pool.map(link_matches, list_files)           # apply partial to files list
    pool.close()
    pool.join()
    print(matches)
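To see the shared-list mechanics in isolation, here is a minimal sketch that is not from the original question: the function names and toy data are mine, and "matching" is stood in for by picking even numbers. Each worker process appends into the same manager-backed list, so the parent sees every result:

```python
import multiprocessing

def append_matches(shared, items):
    # Each worker appends its "matches" (here: even numbers) to the
    # manager-backed proxy list, which is shared across processes.
    for x in items:
        if x % 2 == 0:
            shared.append(x)

def run():
    manager = multiprocessing.Manager()
    shared = manager.list()  # a proxy object, safe to pass to child processes
    workers = [
        multiprocessing.Process(target=append_matches, args=(shared, chunk))
        for chunk in ([1, 2, 3], [4, 5, 6])
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Sorted because the two workers may finish in either order.
    return sorted(shared)

if __name__ == "__main__":
    print(run())  # [2, 4, 6]
```

The same proxy-list idea is what makes the Pool version above work: the appends go through the manager process instead of into each child's private copy of a global.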
https://codedump.io/share/XEJa22Xq9tVD/1/multiprocessing---calling-function-with-different-input-files
As I was expecting, some assholes started throwing around snide remarks about my inquiries yesterday. I actually embarked on a fairly large rant concerning the way people are simply not mature enough to accept that I am just trying to help out a few friends (and keep trying to figure out whether myself or my company were having difficulties due to some sort of psychiatric need for compensation), but I canned it and just posted a short(ish) disclaimer. Think what you want, say what you want - you're not hiring any of my friends that way. And if I ever want to change jobs, I'm not even going to give you a chance of hiring me. You're simply not worthy. Tuning with /proc Solaris was the first OS I came across where I did (minimal) kernel fine-tuning at runtime. Linux also lets you do some changes without rebooting the system, and this article has a nice overview of the stuff available under /proc to do that (I use /proc all the time for collecting performance counters, but some of this was new to me). Too Much Storage, Too Many Places Yes, you can have too much storage. The main issue I have with data is replication without needless waste, since the 10-20GB of data I can actually call my own (actual documentation, code and media I've created) is taking up around 130GB of disk space at several locations under the guise of various backups, snapshots and replicas. Of course there are other issues: Searching, remote access, backups, security, multi-platform support, etc. For instance, my Wintel laptop (where I have all my work e-mail - 2GB worth of it - my project documentation and references - 4 years' worth, another 5-7GB - and around 1GB sources, build trees, etc.) is partially replicated to an office share, our CVS repository, our Exchange server, our development server, and (weekly) to a Firewire HD at home. All of it synced using different protocols and tools (ssync, rsync, straight SMB, etc.), mostly by hand.
My iBook doesn't store e-mail (I aggregate everything on an IMAP server at home) or code (I have my own CVS too), and Apple's updates are diligently downloaded by hand and filed away on another box. And my Linux box... Well, that's just too hairy. What I need is a filesystem that: - Is accessible from anywhere in a reasonably secure fashion - Works properly over low-bandwidth links (SMB over any sort of VPN performs hideously) - Has minimal support for all my platforms (I can live without Mac resource forks, but Linux and Windows boxes have to read the rest) - Provides a coherent namespace across all my boxes and disk volumes - Performs transparent replication of critical data (I should just set an attribute on a directory and be certain it's stored on more than one box) - Supports some sort of "offline" or "disconnected" operation (caching, directory indexes only, whatever) Volume abstraction (not having to care about specific volume sizes and splitting files across volumes) would also be great, but I'm being realistic here - I'm not asking for Star Trek-like filing systems... Yet. So once in a while I get entirely fed up with it all and start searching for a better way to do things. However, there seems to be fairly little progress on the distributed filesystem field. Over the past 10 years or so, I've been in places where people used SMB, NFS, AFS, Coda, Microsoft's DFS, AppleShare, the works. But, as usual, there is no single solution that addresses all my needs. After seeing this discussion, I'm now revisiting stuff like Unison and OpenAFS, but nothing seems good enough. Oh well. Now where were those patches I wrote to get netatalk running under Cygwin? Hmmm... The PEG-NX70V In Hand rage got his hands on a PEG-NX70V, and the pics are in the photo album. The machine is easily the best Palm device I've handled to this day (a bit big, but the Sony industrial design and usability put most other Palm and Pocket PC devices to shame). 
My 7650 is more than enough of a PDA for me these days, but if I ever feel like buying a Palm again, this would be the kind of device I'd get. After all, it is a Sony.
http://taoofmac.com/space/blog/2003/05/17
In the last tutorial I taught you how to add color to triangles and quads. In this tutorial I will teach you how to rotate these colored objects around an axis. Using the code from the last tutorial, we will be adding to a few places in the code. I will rewrite the entire section of code below so it's easy for you to figure out what's been added, and what needs to be replaced. We'll start off by adding the two variables to keep track of the rotation for each object. We do this at the top of our program, underneath the other variables. You will notice two new lines after 'bool fullscreen=TRUE;'. These lines set up two floating point variables that we can use to spin the objects with very fine accuracy. Floating point allows decimal numbers, meaning we're not stuck using 1, 2, 3 for the angle; we can use 1.1, 1.7, 2.3, or even 1.015 for fine accuracy. You will find that floating point numbers are essential to OpenGL programming. The new variables are called rtri, which will rotate the triangle, and rquad, which will rotate the quad.

#include <windows.h>		// Header File For Windows
bool	fullscreen=TRUE;	// Fullscreen Flag Set To TRUE By Default
GLfloat	rtri;			// Angle For The Triangle ( NEW )
GLfloat	rquad;			// Angle For The Quad ( NEW )

Now we need to modify the DrawGLScene() code. I will rewrite the entire procedure. This should make it easier for you to see what changes I have made to the original code. I'll explain why lines have been modified, and what exactly it is that the new lines do. The next section of code is exactly the same as in the last tutorial.

int DrawGLScene(GLvoid)		// Here's Where We Do All The Drawing
{
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	// Clear The Screen And The Depth Buffer
	glLoadIdentity();		// Reset The View
	glTranslatef(-1.5f,0.0f,-6.0f);	// Move Into The Screen And Left

The next line of code is new. glRotatef(Angle,Xvector,Yvector,Zvector) is responsible for rotating the object around an axis. You will get a lot of use out of this command.
Angle is some number (usually stored in a variable) that represents how much you would like to spin the object. The Xvector, Yvector and Zvector parameters together represent the vector about which the rotation will occur. If you use values (1,0,0), you are describing a vector which travels in a direction of 1 unit along the x axis towards the right. Values (-1,0,0) describe a vector that travels in a direction of 1 unit along the x axis, but this time towards the left. D. Michael Traub has supplied the above explanation of the Xvector, Yvector and Zvector parameters. To better understand X, Y and Z rotation I'll explain using examples... X Axis - You're working on a table saw. The bar going through the center of the blade runs left to right (just like the x axis in OpenGL). The sharp teeth spin around the x axis (bar running through the center of the blade), and appear to be cutting towards or away from you depending on which way the blade is being spun. When we spin something on the x axis in OpenGL it will spin the same way. Y Axis - Imagine that you are standing in the middle of a field. There is a huge tornado coming straight at you. The center of a tornado runs from the sky to the ground (up and down, just like the y axis in OpenGL). The dirt and debris in the tornado spins around the y axis (center of the tornado) from left to right or right to left. When you spin something on the y axis in OpenGL it will spin the same way. Z Axis - You are looking at the front of a fan. The center of the fan points towards you and away from you (just like the z axis in OpenGL). The blades of the fan spin around the z axis (center of the fan) in a clockwise or counterclockwise direction. When you spin something on the z axis in OpenGL it will spin the same way. So in the following line of code, if rtri was equal to 7, we would spin 7 on the Y axis (left to right). You can try experimenting with the code.
Change the 0.0f's to 1.0f's, and the 1.0f to a 0.0f to spin the triangle on the X and Y axes at the same time. It's important to note that rotations are done in degrees. If rtri had a value of 10, we would be rotating 10 degrees on the y-axis.

	glRotatef(rtri,0.0f,1.0f,0.0f);		// Rotate The Triangle On The Y axis ( NEW )

The next section of code has not changed. It draws a colorful smooth blended triangle. The triangle will be drawn on the left side of the screen, and will be rotated on its Y axis causing it to spin left to right.

	glBegin(GL_TRIANGLES);			// Start Drawing A Triangle
		glColor3f(1.0f,0.0f,0.0f);	// Set Top Point Of Triangle To Red
		glVertex3f( 0.0f, 1.0f, 0.0f);	// First Point Of The Triangle
		glColor3f(0.0f,1.0f,0.0f);	// Set Left Point Of Triangle To Green
		glVertex3f(-1.0f,-1.0f, 0.0f);	// Second Point Of The Triangle
		glColor3f(0.0f,0.0f,1.0f);	// Set Right Point Of Triangle To Blue
		glVertex3f( 1.0f,-1.0f, 0.0f);	// Third Point Of The Triangle
	glEnd();				// Done Drawing The Triangle

You'll notice in the code below that we've added another glLoadIdentity(). We do this to reset the view. If we didn't reset the view and translated after the object had been rotated, you would get very unexpected results. Because the axis has been rotated, it may not be pointing in the direction you think. So if we translate left on the X axis, we may end up moving up or down instead, depending on how much we've rotated on each axis. Try taking the glLoadIdentity() line out to see what I mean. Once the scene has been reset, so X is running left to right, Y up and down, and Z in and out, we translate. You'll notice we're only moving 1.5 to the right instead of 3.0 like we did in the last lesson. When we reset the screen, our focus moves to the center of the screen, meaning we're no longer 1.5 units to the left, we're back at 0.0.
So to get to 1.5 on the right side of zero we don't have to move 1.5 from left to center then 1.5 to the right (total of 3.0); we only have to move from center to the right, which is just 1.5 units. After we have moved to our new location on the right side of the screen, we rotate the quad on the X axis. This will cause the square to spin up and down.

	glLoadIdentity();			// Reset The View
	glTranslatef(1.5f,0.0f,-6.0f);		// Move Right 1.5 Units And Into The Screen 6.0
	glRotatef(rquad,1.0f,0.0f,0.0f);	// Rotate The Quad On The X axis ( NEW )

This section of code remains the same. It draws a blue square made from one quad. It will draw the square on the right side of the screen in its rotated position.

	glColor3f(0.5f,0.5f,1.0f);		// Set The Color To A Nice Blue Shade
	glBegin(GL_QUADS);			// Start Drawing A Quad
		glVertex3f(-1.0f, 1.0f, 0.0f);	// Top Left Of The Quad
		glVertex3f( 1.0f, 1.0f, 0.0f);	// Top Right Of The Quad
		glVertex3f( 1.0f,-1.0f, 0.0f);	// Bottom Right Of The Quad
		glVertex3f(-1.0f,-1.0f, 0.0f);	// Bottom Left Of The Quad
	glEnd();				// Done Drawing The Quad

The next two lines are new. Think of rtri and rquad as containers. At the top of our program we made the containers (GLfloat rtri, and GLfloat rquad). When we built the containers they had nothing in them. The first line below ADDS 0.2 to that container. So each time we check the value in the rtri container after this section of code, it will have gone up by 0.2. The rquad container decreases by 0.15. So every time we check the rquad container, it will have gone down by 0.15. Going down will cause the object to spin the opposite direction it would spin if you were going up. Try changing the + to a - in the line below to see how the object spins the other direction. Try changing the values from 0.2 to 1.0. The higher the number, the faster the object will spin. The lower the number, the slower it will spin.

	rtri+=0.2f;	// Increase The Rotation Variable For The Triangle ( NEW )
	rquad-=0.15f;	// Decrease The Rotation Variable For The Quad ( NEW )
	return TRUE;	// Keep Going
}

Finally change the code to toggle window / fullscreen mode so that the title at the top of the window is proper.
if (keys[VK_F1])			// Is F1 Being Pressed?
{
	keys[VK_F1]=FALSE;		// If So Make Key FALSE
	KillGLWindow();			// Kill Our Current Window
	fullscreen=!fullscreen;		// Toggle Fullscreen / Windowed Mode
	// Recreate Our OpenGL Window ( Modified )
	if (!CreateGLWindow("NeHe's Rotation Tutorial",640,480,16,fullscreen))
	{
		return 0;		// Quit If Window Was Not Created
	}
}

In this tutorial I have tried to explain in as much detail as possible how to rotate objects around an axis. Play around with the code; try spinning the objects on the Z axis, the X & Y, or all three :) If you have comments or questions please email me. If you feel I have incorrectly commented something or that the code could be done better in some sections, please let me know. I want to make the best OpenGL tutorials I can.
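The glRotatef behaviour described in this lesson can also be checked numerically outside OpenGL. A rotation of θ degrees about the Y axis maps a point (x, y, z) to (x·cosθ + z·sinθ, y, −x·sinθ + z·cosθ) in OpenGL's right-handed convention. Here is a small pure-Python sketch (not OpenGL code; the function name rotate_y is mine) that rotates a vertex the way glRotatef(angle, 0.0f, 1.0f, 0.0f) would:

```python
import math

def rotate_y(point, angle_deg):
    # Same convention as glRotatef(angle, 0, 1, 0): right-handed,
    # counterclockwise when looking down the +Y axis toward the origin,
    # with the angle given in degrees as in OpenGL.
    x, y, z = point
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return (c * x + s * z, y, -s * x + c * z)

# A point on the +X axis swung 90 degrees about Y ends up on the -Z axis.
p = rotate_y((1.0, 0.0, 0.0), 90.0)
print(tuple(round(v, 6) for v in p))  # (0.0, 0.0, -1.0)
```

This also illustrates why the tutorial increments rtri each frame: feeding a steadily growing angle into this mapping sweeps the vertex smoothly around the axis, and 360 degrees brings it back to where it started.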
http://nehe.gamedev.net/tutorial/rotation/14001/
Using the Request Directly There are cases where you need to access the incoming Request object directly; keep in mind that data taken from it this way is not validated, converted or documented (with OpenAPI, for the automatic API user interface).

from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/items/{item_id}")
def read_root(item_id: str, request: Request):
    client_host = request.client.host
    return {"client_host": client_host, "item_id": item_id}

By declaring a path operation function parameter with the type being the Request, FastAPI will know to pass the Request in that parameter. Tip Note that in this case, we are declaring a path parameter besides the request parameter. Technical Details You could also use from starlette.requests import Request. FastAPI provides it directly just as a convenience for you, the developer. But it comes directly from Starlette.
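As a rough illustration of the injection idea without running a server, here is a plain-Python stub (not FastAPI itself): SimpleNamespace stands in for the real Request, which exposes the client address through the same request.client.host attribute path used above.

```python
from types import SimpleNamespace

def read_root(item_id, request):
    # Mirrors the path operation above: pull the caller's address
    # straight off the request object instead of a declared parameter.
    client_host = request.client.host
    return {"client_host": client_host, "item_id": item_id}

# A fake request object with the same attribute shape as the real one.
fake_request = SimpleNamespace(client=SimpleNamespace(host="203.0.113.7"))
print(read_root("foo", fake_request))
# {'client_host': '203.0.113.7', 'item_id': 'foo'}
```

In a real app FastAPI constructs and passes the Request for you on each call; the stub only shows that the handler itself is an ordinary function you can exercise with any object of the right shape.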
https://fastapi.tiangolo.com/zh/advanced/using-request-directly/
intersystems python client Hello, We are connecting to our client's InterSystems database with python3.8 on our Ubuntu 20.04.2 LTS, and we are having trouble getting the intersys.pythonbind module to work. We are able to install intersystems_irispython-3.2.0-py3-none-any.whl but, when we try to run the code, we always get: Traceback (most recent call last): File "./get_well_life.py", line 4, in <module> import intersys.pythonbind ModuleNotFoundError: No module named 'intersys' Please could you tell us what may be going wrong? Please also feel free to ask for any additional information. Thanks and regards!
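Not from the original post, but a quick way to narrow down a ModuleNotFoundError like this is to check whether the import system can see the package at all: importlib.util.find_spec returns None when the module is not on the running interpreter's search path, which typically means the wheel was installed into a different Python than the one executing the script.

```python
import importlib.util
import sys

def has_module(name):
    # True if the import system can locate a top-level module/package.
    return importlib.util.find_spec(name) is not None

print(sys.executable)                    # confirm which python is actually running
print(has_module("json"))                # True: stdlib is always visible
print(has_module("no_such_module_xyz"))  # False: not installed anywhere on the path
```

Running this with the same interpreter that raises the error (and comparing sys.executable with the pip used for the install, e.g. `python3.8 -m pip install ...`) usually tells you whether the problem is a missing install or a mismatched interpreter.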
https://community.intersystems.com/post/intersystems-python-client
On 4.2.2017 16:40, Ben Lipton wrote: On 01/12/2017 04:35 AM, Jan Cholasta wrote: On 11.1.2017 00:38, Ben Lipton wrote: On 01/10/2017 01:58 AM, Jan Cholasta wrote: On 19.12.2016 21:59, Ben Lipton wrote: On 12/15/2016 11:11 PM, Ben Lipton wrote: On 12/12/2016 03:52 AM, Jan Cholasta wrote: On 5.12.2016 16:48, Ben Lipton wrote: Hi Jan, thanks for the comments. On 12/05/2016 04:25 AM, Jan Cholasta wrote: Hi Ben, On 3.11.2016 00:12, Ben Lipton wrote: Hi everybody, Soon I'm going to have to reduce the amount of time I spend on new development work for the CSR autogeneration project, and I want to leave the project in as organized a state as possible. So, I'm taking inventory of the work I've done in order to make sure that what's ready for review can get reviewed and the ideas that have been discussed get prototyped or at least recorded so they won't be forgotten.

Thanks, I have some questions and comments, see below.

Code that's ready for review (I will continue to put in as much time as needed to help get these ready for submission): - Current PR: How hard would it be to update the PR to use the "new" interface from the design thread? By this I mean that currently there is a command (cert_get_requestdata), which creates a CSR from profile id + principal + helper, but in the design we discussed a command which creates a CertificationRequestInfo from profile id + principal + public key. Internally it could use the OpenSSL helper, no need to implement the full "new" design. With your build_requestinfo.c code below it looks like it should be pretty straightforward.

This is probably doable with the cffi, but I'm concerned about usability. A user can run the current command to get a (reusable) script, and run the script to get a CSR. It works with keys in both PEM files and NSS databases already.
If we change to outputting a CertificationRequestInfo, in order to make this usable on the command line, we'll need:
- An additional tool to sign a CSR given a CertificationRequestInfo (for both types of key storage).
- A way to extract a SubjectPublicKeyInfo structure from a key within the ipa command (like [1] but we need it for both types of key storage)

Since as far as I know there's no standard encoding for files containing only a CertificationRequestInfo or a SubjectPublicKeyInfo, we'll be writing and distributing these ourselves. I think that's where most of the extra work will come in.

For PEM files, this is easily doable using python-cryptography (to extract SubjectPublicKeyInfo and sign CertificationRequestInfo) and PyASN1 (to create a CSR from the CertificationRequestInfo and the signature).

I didn't realize that python-cryptography knew about SubjectPublicKeyInfo structures, but indeed this seems to be pretty straightforward:

key = load_pem_private_key(key_bytes, None, default_backend())
pubkey_info = key.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

Thanks for letting me know this functionality already existed.

I'm currently working on the step of signing the CertificationRequestInfo and creating a CSR from it. I think I have it working with pyasn1, but of course the "signature algorithm" for the CSR needs to be specified and implemented within the code since I'm not using a library that understands CSRs natively. The code I have currently always produces CSRs with the sha256WithRSAEncryption algorithm (DER-encode request info, SHA256, PKCS #1 v1.5 padding, RSA encryption), and the OID for that algorithm is hardcoded in the output CSR.
Is this ok or will we need more flexibility than that?

IMO it's OK for starters.

For NSS databases, this will be trickier and will require calling C functions, as neither certutil nor python-nss provide a way to a) address existing keys in the database by key ID, b) get SubjectPublicKeyInfo for a given key.

This can be worked around by:
1. Generating a key + temporary certificate: n=$(head -c 40 /dev/urandom | base32); certutil -S -n $n -s CN=$n -x -t ,,
2. Extracting the public key from the certificate: certutil -L -n $n -a >temp.crt (extract the public key using python-cryptography)
3. Deleting the temporary certificate: certutil -D -n $n
4. Importing the newly issued certificate: certutil -A -n $n -t ,, -a <new.crt

Oof, thanks, I'm not sure I would have been able to come up with that. Can you generate a key without a temporary certificate if you use the NSS API, or does their model require every key to belong to a cert?

I'm pretty sure it's possible, but it certainly won't be as simple as this. I gave up after a few hours of digging into NSS source code and not being able to figure out how.

As for encoding, the obvious choice is DER. It does not really matter that there is no standard file format, as we won't be transferring these as files anyway.

Agreed. I just meant there aren't tools already because this isn't a type of file one often needs to process.

Would it be ok to stick with the current design in this PR? I'd feel much better if we could get the basic functionality into the repo and then iterate on it rather than changing the plan at this point. I can create a separate PR to change cert_get_requestdata to this new interface and at the same time add the necessary adapters (bullet points above) to make it user-friendly.

Works for me.

Updated the PR to fix conflicts with master. Had some trouble with CI but creating a new PR with the same commits fixed it ().
Not sure if it's fixed permanently, so I guess I'll just keep the two PRs synchronized now, or we could close the old one.

You can close the old one. Just to make sure we are on the same page, you want this PR to be merged before submitting additional PRs built on top of it?

Yes, I would like to merge this one to have as a starting point if you're comfortable with it. I just did a force push to clean up the history, but the final diff should be the same as it was before.

OK.

I would probably just implement the adapters within the cert_build/cert_request client code unless you think having standalone tools is valuable. I suppose certmonger is going to need these features too, but I don't know how well sharing code between them is going to work.

cert-request is exactly the place where it should be :-) I wouldn't bother with certmonger until we have a server-side csrgen.

- Allow some fields to be specified by the user at creation time: idea :-)
- Automation for the full process from getting CSR data to requesting cert:, although I would prefer if this was a client-side extension of cert-request rather than a completely new command.

I did try that at first, but I struggled to figure out the interface for the modified cert-request. (Not that the current solution is so great, what with the copying of options from cert_request and certreq.) If I remember correctly, I was uncertain how to implement parameters that are required/invalid based on other parameters: the current cert-request takes a signed CSR (required), a principal (required), and a profile ID; the new cert-request (what I implemented as cert-build) takes a principal (required), a profile ID (required), and a key location (required).
I can't remember if that was the only problem, but I'll try again to merge the commands and get back to you.

To make the CSR argument optional on the client, you can do this:

def get_options(self):
    for option in super(cert_request, self).get_options():
        if option.name == 'csr':
            option = option.clone(required=False)
        yield option

IMO profile ID should default to caIPAserviceCert on the client as well.

I originally had it doing so, but changed it to a required option based on feedback in this email: "In general use I think that 'caIPAserviceCert' is unlikely to be used a majority of the time, and it is a new command so there are no compatibility issues; therefore why not make the profile option mandatory?" I guess since we're talking about cert-request now, the compatibility issues are back.

has now been updated to change the cert_request command rather than adding a new command. It seems to work now (thanks for the advice on making the argument optional), the only thing I'm having trouble with is the default for the profile_id argument. Previously, the default was applied by this code in cert_request.execute:

profile_id = kw.get('profile_id', self.Backend.ra.DEFAULT_PROFILE)

But now, in the client, I need the default to pass to cert_get_requestdata if no profile is specified. I'm not sure I can access backends from the client to get it the same way the server code does. Should I just import ipapython/dogtag.py and use the DEFAULT_PROFILE set in there? Is there a way I can give the option a default that will be seen in both the server and the client?

Just wanted to call attention to this question. The code that's currently problematic is here: (will pass None when in fact the argument default should be used).

self.get_default_of('profile_id')

Other prototypes and design ideas that aren't ready for submission yet:
- Utility written in C to build a CertificationRequestInfo from a SubjectPublicKeyInfo and an openssl-style config file.
The purpose of this is to take a config that my code already knows how to generate, and put it in a form that certmonger can use. This is nearly done and available at:

! As I said above, this could really make implementing the "new" csrgen interface simple.

- Ideally it should be possible to use this tool to reimplement the full cert-request automation (local-cert-build branch) without a dependency on the certutil/openssl tools. However, I don't think any of the python crypto libraries have bindings for the functions that deal with CertificationRequestInfo objects, so I don't think I can do this in the short term.

You can use python-cffi to write your own minimal bindings. It's fairly straightforward, take a look at FreeIPA commit 500ee7e2 for an example of how to port C code to Python with python-cffi.

Thank you for the example. I will take a look.

- Certmonger "helper" program that takes in the CertificationRequestInfo that certmonger generates, calls out to IPA for profile-specific data, and returns an updated CertificationRequestInfo built from the data. Certmonger doesn't currently support this type of helper, but (if I understood correctly) this is the architecture Nalin believed would be simplest to fit in. This is not done yet, but I intend to complete it soon - it shouldn't require much code beyond what's in build_requestinfo.c.

To me this sounds like it should be a new operation of the current helper rather than a completely new helper.

Maybe so. I certainly wouldn't call this a finished design, I just wanted to have some kind of proof of concept for how the certmonger integration could work. For what it's worth, that prototype is now available at [2].

OK.

Anyway, the ultimate goal is to move the csrgen code to the server, which means everything the helper will have to do is call a command over RPC.

- Tool to convert an XER-encoded cert extension to DER, given the ASN.1 description of the extension.
This would unblock Jan Cholasta's idea of using XSLT for templates rather than text-based formatting. I should be able to implement the conversion tool, but it may be a while before I have time to demo the full XSLT idea.

Was there any progress on this?

I have started working on implementing it with asn1c, and I'm already seeing some of the inconvenience (security issues aside) of building on the server. Libtasn1 seems like a much better model, but doesn't seem to have XER support. Anyway, I don't quite have results here yet but I think I should have the XER->DER demo with asn1c ready in a week or two.

Implementing a XER codec on top of libtasn1 shouldn't be too hard; I have a WIP which I will post soon.

It took me some experimentation to get this to work, but the solution with asn1c is actually quite simple because the tool automatically provides a sample C file that converts between different formats. So, this very basic shell script is able to do the conversion:

$ cat ExtKeyUsage.xer
<ExtKeyUsageSyntax>
  <KeyPurposeId>1.3.6.1.5.5.7.3.2</KeyPurposeId>
  <KeyPurposeId>1.3.6.1.5.5.7.3.4</KeyPurposeId>
</ExtKeyUsageSyntax>

$ cat KeyUsage.asn1
KUModule DEFINITIONS ::= BEGIN
KeyUsage ::= BIT STRING {
  digitalSignature (0),
  nonRepudiation (1),  -- recent editions of X.509 have
                       -- renamed this bit to contentCommitment
  keyEncipherment (2),
  dataEncipherment (3),
  keyAgreement (4),
  keyCertSign (5),
  cRLSign (6),
  encipherOnly (7),
  decipherOnly (8)
}
ExtKeyUsageSyntax ::= SEQUENCE SIZE (1..MAX) OF KeyPurposeId
KeyPurposeId ::= OBJECT IDENTIFIER
END

$ ./xer2der.sh KeyUsage.asn1 ExtKeyUsageSyntax ExtKeyUsage.xer 2>/dev/null | xxd
00000000: 3014 0608 2b06 0105 0507 0302 0608 2b06  0...+.........+.
00000010: 0105 0507 0304                           ......

So far I don't have a working example using libtasn1. I have something close to it, but it's hacky, as the libtasn1 API is pretty limited, and I didn't have time to work on it in the last few weeks.

I got it working, it needs just a little polishing.
It's still ugly hacky though.

So: currently on my to do list are the certmonger helper and the XER->DER conversion tool. Do you have any comments about these plans, and is there anything else I can do to wrap up the project neatly?

Thanks, Ben

Honza

[1] [2]

Thank you for the review! I just created and for the two follow-up branches I had pending (and updated with ideas from this thread and the previous PR's thread). I'm still working on converting the API to consuming SubjectPublicKeyInfo structures and producing CertificationRequestInfo ones - I have the OpenSSL flow working, but am still missing a step for the NSS flow. Specifically, after step 2 of the 4 you suggested above, I need to use NSS to use the private key in the db to sign the SubjectPublicKeyInfo before I can use python-cryptography to make it into a CSR like I'm doing with OpenSSL. I'm sure this is not very hard, but I haven't quite figured it out yet.

Sigh, NSS does not have a generic signing tool (cmsutil and signtool are not generic enough) and python-nss does not have a signing API. I got this far:

from nss import nss
nss.nss_init(db_path)
nss.set_password_callback(lambda slot, retry, password: password)
slot = nss.get_internal_key_slot()
slot.authenticate(False, db_password)
cert = nss.find_cert_from_nickname(nickname)
key = nss.find_key_by_any_cert(cert)

Unfortunately this means we will have to call C code. IMO it would be best to drop support for NSS for the time being and add it back when we know exactly what C code to call.

-- Jan Cholasta

-- Manage your subscription for the Freeipa-devel mailing list: Contribute to FreeIPA:
https://www.mail-archive.com/freeipa-devel@redhat.com/msg39652.html
Hello, I was really delighted to hear that SVG import is now supported natively in S25. But the only method I am aware of so far is via drag and drop. I would like to import an SVG file using only its path as input in a Python script. Is there a way to do that? Thank you

Hello @InterfaceGuy, thank you for reaching out to us. You can just use the existing loading and merging functions in c4d.documents. The snippet below will open a file dialog where you must select an SVG file, and it will then load that SVG file into a new scene. The example will also suppress the options dialog of the importer. There are many options with c4d.documents.LoadDocument and c4d.documents.MergeDocument; to not suppress these options dialogs, to merge with specific documents, etc. If necessary, you could also get hold of the SVG importer plugin first to set the import settings. See our documentation for details on c4d.documents.LoadDocument. And in our GitHub Python SDK repository, in the Files & Media section, you can find examples for manipulating the im- and exporter plugins.

Cheers, Ferdinand

The script:

import c4d
import os

def main():
    """Loads an svg file from a file dialog.

    You can just replace `file` with any path you would like.
    """
    file = c4d.storage.LoadDialog(title='Select a file')
    if not file or os.path.splitext(file)[1].lower() != ".svg":
        raise RuntimeError("Please select a svg file.")

    doc = c4d.documents.LoadDocument(file, c4d.SCENEFILTER_NONE)
    if doc is None:
        raise RuntimeError("Failed to load svg file.")

    c4d.documents.InsertBaseDocument(doc)
    c4d.documents.SetActiveDocument(doc)

if __name__ == '__main__':
    main()

Thank you so much!
https://plugincafe.maxon.net/topic/13617/how-to-import-svg-in-s25-from-path/2?lang=en-US
ZoomableUIView is a protocol that any UIView can conform to in order to be able to zoom and pan views without using UIScrollView. Particularly handy for views within UIScrollViews. ZoomableUIView is written in Swift 3.

Usage

The simple code to get ZoomableUIView running in your own app:

- In case you installed ZoomableUIView via CocoaPods you need to import it (add this somewhere at the top of your source code file):

import ZoomableUIView

- When you want to conform to the protocol:

class CustomView: UIView, ZoomableUIView

- Or:

extension CustomView: ZoomableUIView

- If you want to set the view zoomable:

self.setZoomable(true)

- When you want to reset the zoom:

self.reset()

- Mandatory protocol conformance

Works like UIScrollViewDelegate's viewForZooming. It may well be that you want to use self in this case, but you may want the current view to handle the zoom recognition and a subview to be zoomed; in that case return the subview in this func:

func viewForZooming() -> UIView

Set zooming options such as min and max zoom:

func optionsForZooming() -> ZoomableViewOptions

Requirements

UIKit must be imported. If you are using ZoomableUIView in an App Extension, you must add EXTENSION to your Other Swift Flags Build Settings.

Installation

ZoomableUIView is available through CocoaPods. To install it, simply add the following line to your Podfile:

pod "ZoomableUIView"

In case you don't want to use CocoaPods, just copy the files ZoomableUIView/ZoomableUIView.swift & ZoomableUIView/CGAffineTransform.swift to your Xcode project.

Credit

Author: James Coughlan

License

**ZoomableUIView** is available under the MIT license. See the LICENSE file for more info.
Latest podspec { "name": "ZoomableUIView", "platforms": { "ios": "8.0" }, "summary": "Zoomable UIView protocol to zoom UIView without a UIScrollView.", "requires_arc": true, "version": "0.1.8", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "James Coughlan": "[email protected]" }, "homepage": "", "source": { "git": "", "tag": "0.1.8" }, "frameworks": "UIKit", "source_files": "ZoomableUIView/**/*.{swift}", "pushed_with_swift_version": "3.0" } Mon, 10 Jul 2017 13:00:26 +0000
https://tryexcept.com/articles/cocoapod/zoomableuiview
Compressed data is a great thing, especially when the GPU hardware is able to decompress it for free while rendering. But what exactly does vertex data compression mean, and how do we enable it? XNA defaults to 32 bit floating point for most vertex data. For instance the VertexPositionNormalTexture struct is 32 bytes in size, storing Position and Normal as Vector3 (12 bytes) and TextureCoordinate as Vector2 (8 bytes). 32 bit floats are great for precision and range, but not all data needs so much accuracy! There are many other options to choose from:

enum VertexElementFormat
{
    Single,
    Vector2,
    Vector3,
    Vector4,
    Color,
    Byte4,
    Short2,
    Short4,
    NormalizedShort2,
    NormalizedShort4,
    HalfVector2,
    HalfVector4
}

The HalfVector formats are only available in the HiDef profile, but all the others are supported by Reach hardware as well. Generating packed values from C# code is easy thanks to the types in the Microsoft.Xna.Framework.Graphics.PackedVector namespace. For instance we could easily make our own PackedVertexPositionNormalTexture struct that would use HalfVector2 or NormalizedShort2 instead of Vector2 for its TextureCoordinate field. To compress vertex data that is built as part of a model, we must use a custom content processor that customizes the built-in ModelProcessor, automatically converting normal data to NormalizedShort4 format and texture coordinates to NormalizedShort2. This is an 8 byte saving, reducing 32 byte uncompressed vertices to 24 bytes. Note that we had to choose NormalizedShort4 format for our normals, even though these values only actually have three components, because there is no NormalizedShort3 format. That's because GPU vertex data must always be 4 byte aligned. We could avoid this wastage by merging multiple vertex channels. For instance if we had two three component data channels, a and b, we could store (a.x, a.y, a.z, b.x) in one NormalizedShort4 channel, plus (b.y, b.z) in a second NormalizedShort2 channel.
We would then have to change our vertex shader to extract this data back into the original separate channels, so this approach is more intrusive than just changing the format of existing data channels. Vertex compression often works better if you adjust the range of the input data before changing its format. For instance, NormalizedShort2 is great for texture coordinates, but only if the texture does not wrap. If you have any texture coordinate values outside the range -1 to 1, these will overflow the packed range. This can be avoided by scanning the entire set of texture coordinates to find the largest value, then dividing every texture coordinate by this maximum. The resulting data will now compress into NormalizedShort format with no risk of overflow. To render the model, you must store the value by which everything was divided, pass it into your vertex shader, and have the shader multiply all texture coordinates by this scale factor. How much you win by compressing vertex data obviously depends on how many vertices you are dealing with. For many games the gain may be too small to be worth bothering with. But when using detailed models or terrains that contain millions of vertices, the memory and bandwidth savings can be significant.

(I might have accidentally posted this twice, if so, my apologies)

I miss the old Normalized101010 format on the 360, and there's no normalized 32-bit format with XNA 4 (due to, of course, a general lack of support on the PC side of things, I'm sure). I ended up using Byte4 and doing the rescaling manually, it's less free than otherwise…though it's still a massive speed win over uncompressed vertex data for my terrain.
Moral is: even if you don't have a perfect vertex format that gets you exactly the right number, using the packed format that's the right size and doing a little work in the shader is still the way to go if you need that extra vertex bandwidth 🙂

This is actually the main reason I miss having custom shaders for XNA on Windows Phone 7. For example, my preferred implementation of terrain geomipmapping uses Short2 for [x, y] coordinates (that vertex data is shared across all patches, and rendered with an offset), and Single for the (non-shared) [z] coordinate. The vertex data gets put back together in the vertex shader. But without custom effects, this kind of compression is of course not possible. Which is ironic, as WP7 is the one place where you'd want it the most 🙂

Thanks Shawn, this may be a benefit for me. You see, in my engine I use the VertexPositionTexture format for all my models, and as a post effect I calculate the normal from the depth buffer; transparent rendering is also done as a post effect, so I simply pass the vertex and texture through the pipeline. And yes, "Which is ironic, as WP7 is the one place where you'd want it the most 🙂" - is there some way to expose the depth buffer and allow us to create post effects on the phone? I know it's all about battery, but perhaps you could add this to the marketplace ("this game consumes battery") and let the user decide whether he wants to use battery or not. Now, back on topic... I think I will try the packed thing out. Thanks, as always you are full of great tricks. Michael

Hi Shawn, Clearly I'm late to the party here, but I can't find an answer to this anywhere online and you seem to be the man to ask. Can you use Short2 for vertex position data? I'm trying to do this at the moment, but I get literally no output. If I switch to using the same values as floats in a Vector3 (as x,y,0), it works. Are the WP7 shaders not configured to accept short positions, or maybe 2D positions?
Cheers, Bob

You can use Short2 in vertex data, but this is an integer type, so your vertex shader must be written to accept integer rather than float inputs. BasicEffect takes floats, so Short2 will not work with it. NormalizedShort2 might be a better choice?
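To make the Short2 vs. NormalizedShort2 distinction above concrete, here is a small Python sketch of the usual signed-normalized 16-bit mapping (the function names are illustrative, and the exact rounding and clamping rules on a given GPU may differ slightly):

```python
def to_normalized_short(v: float) -> int:
    """Encode a float in [-1, 1] as a signed 16-bit normalized value."""
    v = max(-1.0, min(1.0, v))  # clamp to the representable range
    return int(round(v * 32767))

def from_normalized_short(s: int) -> float:
    """Decode back to a float; roughly what the GPU does for free."""
    return s / 32767.0

# A texture coordinate of 0.5 survives the round trip to within ~1/32767:
encoded = to_normalized_short(0.5)
print(encoded, from_normalized_short(encoded))
```

A plain Short2, by contrast, carries the raw integer values, which is why a shader reading it must accept integer inputs and rescale manually.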
https://blogs.msdn.microsoft.com/shawnhar/2010/11/19/compressed-vertex-data/
Here we are going to use the math module as an introduction to using modules. The math module contains a range of useful mathematical functions that are not built into Python directly. So let's go ahead and start by importing the module. Module imports are usually performed at the start of a programme.

import math

When this type of import is used, Python loads the module so that its functions and variables may be used via the module name. Here, for example, we access the value of pi through the module.

print (math.pi)

OUT:

3.141592653589793

Another way of accessing module contents is to load a function or a variable directly into Python. When we do this we no longer need to use the module name after the import. This method is not generally recommended as it can lead to conflicts of names, and it is not so clear where that function or variable comes from. But here is how it is done.

from math import pi

print (pi)

OUT:

3.141592653589793

Multiple methods and variables may be loaded at the same time in this way.

from math import pi, tau, log10

print (tau)
print (log10(100))

OUT:

6.283185307179586
2.0

But usually it is better practice to keep using the library name in the code.

print (math.log10(100))

OUT:

2.0

To access help on any Python function use the help command in a Python interpreter.

help (math.log10)

Help on built-in function log10 in module math:

log10(...)
    log10(x)

    Return the base 10 logarithm of x.

To find out all the methods in a module, and how to use those methods, we can simply type help (module_name) into the Python interpreter. The module must first have been imported, as we did for math above.

help (math)

OUT:

NAME
    math

FUNCTIONS
    ceil(...)
        ceil(x)

        Return the ceiling of x as an Integral.
        This is the smallest integer >= x.

    ...

    expm1(...)
        expm1(x)

        Return exp(x)-1.
        This function avoids the loss of precision involved in the direct
        evaluation of exp(x)-1 for small x.

    fabs(...)
        fabs(x)

        Return the absolute value of the float x.

    factorial(...)
        factorial(x) -> Integral

        Find x!. Raise a ValueError if x is negative or non-integral.

    floor(...)
        floor(x)

        Return the floor of x as an Integral.
        This is the largest integer <= x.

    ...

DATA
    pi = 3.141592653589793
    tau = 6.283185307179586

FILE
    /home/michael/anaconda3/lib/python3.6/lib-dynload/math.cpython-36m-x86_64-linux-gnu.so

So now, for example, we know that to take a square root of a number we can use the math module's sqrt() function, or use the pow() function, which can compute any power or root.

print (math.sqrt(4))
print (math.pow(4,0.5))

OUT:

2.0
2.0

In Python you might read about packages as well as modules. The two names are sometimes used interchangeably, but strictly a package is a collection of modules.
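As a quick check of a few of the math functions mentioned above:

```python
import math

print (math.floor(2.7))    # largest integer <= 2.7
print (math.ceil(2.3))     # smallest integer >= 2.3
print (math.factorial(5))  # 5! = 5 * 4 * 3 * 2 * 1
```

OUT:

2
3
120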
https://pythonhealthcare.org/2018/03/16/7-python-basics-math-module/
Just a ping to bring up the subject again. Attached is a patch to add support for the abstract namespace for Unix Domain Sockets, along with tests for the sb-bsd-sockets test suite, and a client server example using a C-based server and a Lisp based client. The connectivity test uses threads. I really would like feedback, especially if it's not acceptable. Tested on x86-64 (threads, domain sockets, and abstract namespace), win32 (none of these), and darwin PPC-32 (domain sockets). It would be interesting to see how the test suite fares on platforms that support threads and domain sockets, but lack abstract namespace support. Thanks, Matt -- "You do not really understand something unless you can explain it to your grandmother." -- Albert Einstein.
http://sourceforge.net/p/sbcl/mailman/sbcl-devel/?viewmonth=200807&viewday=15
In this tutorial, we'll be looking at how to build an SMS-to-Slack bridge using Python and Twilio. The bridge will work in such a way that every time your Twilio phone number receives an SMS message, we'll forward the content of the SMS message to a channel in Slack. Furthermore, any threaded replies in Slack to the message that was posted will automatically be sent as an SMS message to the originating number.

Technical requirements

To follow along, you'll need the following:
- A free Twilio Account. If you use this link to register, you will receive $10 credit when you upgrade to a paid account.
- A free Slack account, and a Slack workspace you have administrator access to.
- Python 3
- Ngrok. This will make the development version of our application accessible over the Internet.

Creating a Python environment

Let's create a directory where our project will reside. From the terminal, run the following command:

$ mkdir twilio_slack

The project will use the following packages:
- Twilio Python: A helper library that makes it easy to interact with the Twilio API.
- Python Slack Client: A helper library for interacting with the Slack API.
- Python-dotenv: A library for importing environment variables from a .env file.

To install all the dependencies at once, run the following command:

$ pip install flask twilio slackclient python-dotenv

Creating a Slack webhook endpoint

As part of creating the Slack bot, we need to define an endpoint where Slack can forward messages that are posted in a channel. Before we can successfully configure that endpoint, Slack needs to verify that the endpoint is valid, so we'll begin by implementing the verification portion of our Slack endpoint. To pass Slack's verification, the endpoint needs to return a response with the value of the challenge key contained in the payload Slack sends to it in the verification request. You can read more about this process here.
Create a main.py file at the root of the project's directory and add the following code to the file:

from flask import Flask, request, Response

app = Flask(__name__)

@app.route('/incoming/slack', methods=['POST'])
def send_incoming_slack():
    attributes = request.get_json()
    if 'challenge' in attributes:
        return Response(attributes['challenge'], mimetype="text/plain")
    return Response()

if __name__ == '__main__':
    app.run()

Here, we've created an endpoint with the /incoming/slack URL and specified the HTTP method to be POST. The challenge key is obtained from the request object from the Flask framework and returned back as a response in plain text, according to the verification requirements from Slack.

Run the following command in your terminal to start the Flask application:

$ python main.py

Setting up Ngrok

Since our application is currently local, there's no way Slack will be able to make a POST request to the endpoint we just created. We can use Ngrok to set up a temporary public URL so that our app is accessible over the web.
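Stripped of the Flask plumbing, the Slack handshake is just echoing one field of the JSON payload back as plain text. A stand-alone sketch of that logic (the payload values here are made up for illustration):

```python
import json

def slack_challenge_reply(raw_body: str) -> str:
    """Return the plain-text body Slack expects during URL verification."""
    attributes = json.loads(raw_body)
    if 'challenge' in attributes:
        return attributes['challenge']
    return ''

# Simulate the verification request Slack would POST to /incoming/slack:
body = json.dumps({"type": "url_verification", "challenge": "abc123"})
print(slack_challenge_reply(body))
```

Once the endpoint returns the challenge value verbatim, Slack marks the Request URL as verified and starts delivering real events to it.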
To do so, the bot needs to subscribe to an event Slack provides. Head back to the “Basic information” page, open the “Add features and functionality” dropdown and select “Event Subscriptions”. Next, toggle “Event Subscriptions” on. For the “Request URL” field, add the forwarding URL from ngrok with the /incoming/slack path for the endpoint we created above added at the end. Your Request URL should look like, but note that the first part of the ngrok domain will be different in your case. After you’ve entered the URL, Slack will send a POST request to the endpoint to verify it, so make sure both the Flask application and ngrok are running. Once the verification is achieved, scroll down to the “Subscribe to events on behalf of users” section and add the following event : - “message.channels” This allows our application to be notified whenever a message is posted to a channel in Slack. Next click “Save Changes”. After you’ve saved these settings, we need to install the bot to a Slack workspace so that an API token can be generated for us. Head back to the “Basic information” page and open the “Install your app to your workspace” dropdown. Click the “Install App to Workspace” and grant the necessary permissions. Once this is done, select “OAuth & Permissions” under the “Features” tab and take note of the “Bot User OAuth Access Token” that was generated for the bot. We’ll be needing it shortly. Setting up Twilio After you sign up for an account on Twilio, head over to your Twilio Console and click on Phone Numbers. If you already have one or more phone numbers assigned to your account, select the number you would like to use for this project. If this is a brand new account, buy a new phone number to use on this project. Note that if you are using a trial Twilio account you will need to verify the phone number you’ll be sending SMS messages to. You can do that here. 
In the phone number configuration, scroll down to the “Messaging” section and under the “A message comes in” field, use the forwarding URL from ngrok with /incoming/twilio appended. At this point, it’s important to note this endpoint doesn’t yet exist in our application; we shall be creating it shortly. Ensure the request method is set to HTTP POST and then click the “Save” button at the bottom of the page to save the settings.

On your Twilio Console, take note of your Account SID and Auth Token. We are going to need these values to authenticate with the Twilio service.

Coding our bot

In the Python project, create a .env file at the root of the project’s directory and edit the file with all the credentials and settings we’ve noted thus far:

```
SLACK_BOT_TOKEN=xxxx
TWILIO_ACCOUNT_SID=xxxx
TWILIO_AUTH_TOKEN=xxxx
TWILIO_NUMBER=xxxx
```

Next, add the following imports at the top of main.py:

```python
import os
import slack
import re

from dotenv import load_dotenv
from flask import Flask, request, Response
from twilio.rest import Client
from twilio.twiml.messaging_response import MessagingResponse

load_dotenv()

app = Flask(__name__)
```

Here, we’ve imported all the major dependencies our project will be needing. Next, add the following code just below the app variable:

```python
slack_token = os.getenv("SLACK_BOT_TOKEN")
slack_client = slack.WebClient(slack_token)
twilio_client = Client()
```

The slack_client instance will be used to interact with the Slack API, while the twilio_client will be used to interact with the Twilio API.

Forwarding SMS messages to Slack

Next, we’ll add the endpoint we configured in the Twilio SMS configuration. This function will send incoming SMS messages to Slack.
```python
@app.route('/incoming/twilio', methods=['POST'])
def send_incoming_message():
    from_number = request.form['From']
    sms_message = request.form['Body']
    message = f"Text message from {from_number}: {sms_message}"
    slack_message = slack_client.chat_postMessage(
        channel='#general',
        text=message,
        icon_emoji=':robot_face:')
    response = MessagingResponse()
    return Response(response.to_xml(), mimetype="text/html")
```

The first thing we did was to obtain the message and the phone number that sent the SMS. They both come in the payload of the POST request, with the keys Body and From respectively. A Slack message is then constructed and posted to the “#general” channel of our workspace using the slack_client instance. You can change the Slack channel according to your needs.

Twilio expects the response from this endpoint to be in TwiML, the Twilio Markup Language. But in this project we really have nothing to do on the Twilio side. Thankfully, the Twilio Python helper library comes bundled with classes that make generating an empty response easy.

Sending Slack threaded replies via SMS

We’ve handled one side of our bot’s logic. The next thing to do is to handle sending out SMS messages whenever a threaded reply is written on a Slack message that was posted by the Twilio endpoint.
Edit the send_incoming_slack() function we created earlier with the following code:

```python
@app.route('/incoming/slack', methods=['POST'])
def send_incoming_slack():
    attributes = request.get_json()
    if 'challenge' in attributes:
        return Response(attributes['challenge'], mimetype="text/plain")
    incoming_slack_message_id, slack_message, channel = parse_message(attributes)
    if incoming_slack_message_id and slack_message:
        to_number = get_to_number(incoming_slack_message_id, channel)
        if to_number:
            messages = twilio_client.messages.create(
                to=to_number,
                from_=os.getenv("TWILIO_NUMBER"),
                body=slack_message)
        return Response()
    return Response()
```

Next, add the following auxiliary functions, also in the main.py file:

```python
def parse_message(attributes):
    if 'event' in attributes and 'thread_ts' in attributes['event']:
        return (attributes['event']['thread_ts'],
                attributes['event']['text'],
                attributes['event']['channel'])
    return None, None, None


def get_to_number(incoming_slack_message_id, channel):
    data = slack_client.conversations_history(
        channel=channel,
        latest=incoming_slack_message_id,
        limit=1,
        inclusive=1)
    if 'subtype' in data['messages'][0] and data['messages'][0]['subtype'] == 'bot_message':
        text = data['messages'][0]['text']
        phone_number = extract_phone_number(text)
        return phone_number
    return None


def extract_phone_number(text):
    data = re.findall(r'\w+', text)
    if len(data) >= 4:
        return data[3]
    return None
```

Once Slack makes a POST request to the /incoming/slack endpoint and we determine it is not a verification message, the parse_message() function checks to see if there’s a thread_ts key contained in the payload. This is important, because the presence of that key indicates that the message is a threaded reply. If the key exists, the function returns the value of the thread_ts key, the text of the threaded reply message and the id of the channel where the message was posted.
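To see what these helpers do in isolation, here is a quick standalone check of the two pure functions, parse_message() and extract_phone_number(). The Slack payload values below (timestamps, channel id, phone number) are made up for illustration:

```python
import re

def parse_message(attributes):
    # Return (thread_ts, text, channel) for a threaded reply, else (None, None, None).
    if 'event' in attributes and 'thread_ts' in attributes['event']:
        return (attributes['event']['thread_ts'],
                attributes['event']['text'],
                attributes['event']['channel'])
    return None, None, None

def extract_phone_number(text):
    # The word at index 3 of the bot's message is the phone number
    # ("Text message from <number>: <body>"); \w+ drops the leading "+".
    data = re.findall(r'\w+', text)
    if len(data) >= 4:
        return data[3]
    return None

# A threaded reply carries a 'thread_ts' key inside the event payload.
reply = {'event': {'thread_ts': '1592000000.000100',
                   'text': 'On my way!',
                   'channel': 'C0123456789'}}
print(parse_message(reply))  # ('1592000000.000100', 'On my way!', 'C0123456789')

# A top-level channel message has no 'thread_ts', so nothing is forwarded.
print(parse_message({'event': {'text': 'hi', 'channel': 'C0123456789'}}))  # (None, None, None)

# The number is recovered from the parent message our bot posted earlier.
print(extract_phone_number('Text message from +2347061234567: Hello there'))  # 2347061234567
```

Note that because the regular expression keeps only word characters, the "+" prefix of the phone number is stripped during extraction.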
If the message was not a threaded reply, we return None for the three values, and with that the request has nothing else to do and ends with an empty response.

When the message is a threaded reply, the get_to_number() function is invoked. This function sends a request to the Slack API to retrieve the text of the parent message the threaded reply belongs to. A check is carried out to ensure that a subtype key exists in the payload that was returned and has a value of bot_message. This lets us know that the parent message was a message from our bot. Next, the content of the parent message is passed as an argument to the extract_phone_number() function. Using Python’s regular expression module, this function takes the message and extracts each word contained in the message into a list, while ignoring punctuation marks. Based on our bot’s messaging structure, the item at index 3 is returned, which will be the phone number the original message came from. The get_to_number() function returns this number back to the send_incoming_slack() function. To complete the action of the endpoint, an SMS message with the content of the threaded reply is sent to the phone number of the original message using the twilio_client instance.

Testing

To test the application, make sure that you are running the Flask application and ngrok. Keep in mind that ngrok creates a different URL every time it runs, so after a restart you will need to update the Twilio endpoint in the Twilio Console and the Slack endpoint in the Slack app configuration to match the new URL assigned by ngrok. This is only a concern during development, as a production deployment will use a public URL without the need for ngrok.

To test the service, send an SMS message to your Twilio phone number from your phone. Head over to the Slack workspace on which you configured the bot and you should see the message appear in the #general channel.
Next, add a threaded reply to the message in Slack, and the message should be forwarded back to your phone in an SMS!

Conclusion

In this tutorial we’ve seen how to build a Slack bot that receives and sends SMS messages using Python and Twilio. The GitHub repository with the complete code for this project can be found here.

Dotun Jolaoso
https://www.twilio.com/blog/build-sms-slack-bridge-python-twilio
Congrats for the great work you guys did on this plugin. I'm a huge fan already! I have a simple form with xtype grid to test this plugin. However, I can't manage to drop the data onto the grid.

Code:

```javascript
/*!
 * Ext JS Library 3.1.1
 * Copyright(c) 2006-2010 Ext JS, LLC
 * licensing@extjs.com
 */
Ext.onReady(function(){
    Ext.QuickTips.init();

    var data = [
        ['abc'],
        ['def'],
        ['ghi']
    ];

    teststore = new Ext.data.SimpleStore({
        fields: [
            {name: 'a'}
        ]
    });
    teststore.loadData(data);

    var simple = new Ext.FormPanel({
        labelWidth: 75, // label settings here cascade unless overridden
        url: 'save-form.php',
        frame: true,
        title: 'Csv Form',
        bodyStyle: 'padding:5px 5px 0',
        height: 300,
        width: 350,
        //defaults: {width: 230},
        //defaultType: 'textfield',
        items: [{
            xtype: 'grid',
            height: 300,
            plugins: [Ext.ux.grid.DataDrop],
            editable: true,
            store: teststore,
            columns: [
                { id: 'a', header: 'A', sortable: true, dataIndex: 'a'}
                //{ id: 'b', header: 'B', sortable: true },
                //{ id: 'c', header: 'C', sortable: true }
            ]
        }],
        buttons: [{
            text: 'Save'
        },{
            text: 'Cancel'
        }]
    });

    simple.render(document.body);
});
```

There are no errors in Firebug, the data's just not dropping. Thanks!

- Join Date: Mar 2007
- Location: Baltimore, MD
- 1,501
- Vote Rating: 8

Have you included the Override.js file?

Hello, great job. I have one question: can I use it with Ext 2.x? Thanks

I resolved the problem. I added this snippet to define "Ext.isFunction":

Code:

```javascript
Ext.isFunction = function(fn){
    return (typeof fn == 'function');
};
```

Thanks

Will this work in a window? I am trying to drag to a test grid I put in a window, and it only seems to hit the override class when I drop in a specific spot, but then I get a "lastT is undefined" error.
Code:

```javascript
me.hide();
t = Ext.get(document.elementFromPoint(x, y));
me.show();
if (!t) {
    lastT.fireEvent('mouseout'); // UNDEFINED HERE
```

<edit> Yes, I tested successfully with a grid rendered to the document body, but when trying to drop on the same grid in an Ext.Window, it doesn't capture the text; it pastes whatever text I am copying into the url bar and does a google search for that text... Weird, because from what I read about document.elementFromPoint on your blog, it should be detecting the window... I will keep looking.

<edit2> I maximized the window, and did a drop, and it worked great! I will try and figure out how it was missing my drop even though I was doing it in the center of the grid.

This is the greatest plugin. Impressive work guys.

Hi VinylFox, great plugin! Thanks for it. I'm trying to embed it in my grid and it works great. However, I'm trying to add events to it so I can listen for when the data is dropped, so I can show a modal window with the differences between the original records in the store and the possibly different records from the excel file. As the plugin is seen as an object that does not extend Observable or anything else, I can't listen for events on it. Do you have an idea about how to extend the DataDrop plugin in order to be notified when the data is dropped, so I can pop up a modal window?

kind regards,
Michael

Cool plugin! BTW, when dropping a large number of rows I noticed that each Record was being added, focused upon, then highlighted - becoming uber slow, esp when used with BufferView. As an improvement, I slightly modified the dataDropped function to add an array of Records to the store outside the for loop, vs. adding incrementally for each row:

Code:

```javascript
// on change of data in textarea, create a Record from the tab-delimited contents.
function dataDropped(e, el){
    var nv = el.value;
    el.blur();
    if (nv !== '') {
        var store = this.getStore(),
            Record = store.recordType;
        el.value = '';
        var rows = nv.split(lineEndRE),
            cols = this.getColumnModel().getColumnsBy(function(c){
                return !c.hidden;
            }),
            fields = Record.prototype.fields;
        if (cols.length && rows.length) {
            var recs = [];
            for (var i = 0; i < rows.length; i++) {
                var vals = rows[i].split(sepRe),
                    data = {};
                if (vals.join('').replace(' ', '') !== '') {
                    for (var k = 0; k < vals.length; k++) {
                        var fldName = cols[k].dataIndex;
                        var fld = fields.item(fldName);
                        data[fldName] = fld ? fld.convert(vals[k]) : vals[k];
                    }
                    var newRec = new Record(data);
                    recs.push(newRec);
                }
            }
            store.add(recs);
            var idx = store.data.length - 1;
            this.view.focusRow(idx);
            Ext.get(this.view.getRow(idx)).highlight();
            resizeDropArea.call(this);
        }
    }
})
```
https://www.sencha.com/forum/showthread.php?79511-Ext.ux.grid.DataDrop/page4
split

paddle.fluid.layers.nn.split(input, num_or_sections, dim=-1, name=None) [source]

Split the input tensor into multiple sub-Tensors.

Parameters

- input (Tensor) – A N-D Tensor. The data type is bool, float16, float32, float64, int32 or int64.
- num_or_sections (int|list|tuple) – If num_or_sections is an int, then num_or_sections indicates the number of equal-sized sub-Tensors that the input will be divided into. If num_or_sections is a list or tuple, its length indicates the number of sub-Tensors and its elements indicate the sizes of the sub-Tensors along the split dimension, in order. The length of the list must not be larger than the input's size along the specified dim.
- dim (int|Tensor, optional) – The dimension along which to split. It can be a scalar with type int or a Tensor with shape [1] and data type int32 or int64. If dim < 0, the dimension to split along is rank(input) + dim. Default is -1.
- name (str, optional) – The default value is None. Normally there is no need for the user to set this property. For more information, please refer to Name.

Returns

The list of segmented Tensors.

Return type

list(Tensor)

Example

```python
import paddle.fluid as fluid

# input is a Tensor which shape is [3, 9, 5]
input = fluid.data(
    name="input", shape=[3, 9, 5], dtype="float32")

out0, out1, out2 = fluid.layers.split(input, num_or_sections=3, dim=1)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]

out0, out1, out2 = fluid.layers.split(input, num_or_sections=[2, 3, 4], dim=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]

out0, out1, out2 = fluid.layers.split(input, num_or_sections=[2, 3, -1], dim=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]

# dim is negative, the real dim is (rank(input) + dim) which real
# value is 1.
out0, out1, out2 = fluid.layers.split(input, num_or_sections=3, dim=-2)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
```
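The section-size arithmetic behind num_or_sections (including the -1 placeholder, which is inferred from the remaining size) can be sketched in plain Python. This is only an illustration of the documented semantics, not Paddle's actual implementation:

```python
def split_sizes(dim_size, num_or_sections):
    # Compute per-section sizes along the split dimension,
    # mirroring the documented rules for num_or_sections.
    if isinstance(num_or_sections, int):
        # An int divides the dimension into equal-sized sections.
        assert dim_size % num_or_sections == 0
        return [dim_size // num_or_sections] * num_or_sections
    sizes = list(num_or_sections)
    if -1 in sizes:
        # A single -1 entry is inferred from whatever size remains.
        known = sum(s for s in sizes if s != -1)
        sizes[sizes.index(-1)] = dim_size - known
    return sizes

# Matches the shapes shown in the example above (dim 1 has size 9).
print(split_sizes(9, 3))           # [3, 3, 3]
print(split_sizes(9, [2, 3, 4]))   # [2, 3, 4]
print(split_sizes(9, [2, 3, -1]))  # [2, 3, 4]
```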
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/fluid/layers/nn/split_en.html
During my coverage testing, I scratched my head over the following case (Python 3.4). Why does the call with a separator produce an empty string instead of str_in?

```python
def simple_gen_function(str_in, sep=""):
    if sep == "":
        yield str_in[0]
        for c in str_in[1:]:
            yield c
    else:
        return str_in
        # yield from str_in

str_in = "je teste "
t = "".join(simple_gen_function(str_in))
p = "".join(simple_gen_function(str_in, "\n"))
print("%r %r" % (t, p))
# 'je teste ' ''
```

The presence of yield in a function body turns it into a generator function instead of a normal function. And in a generator function, using return is a way of saying "The generator has ended, there are no more elements." By having the first statement of a generator method be return str_in, you are guaranteed to have a generator that returns no elements.

As a comment mentions, the return value is used as an argument to the StopIteration exception that gets raised when the generator has ended. See:

```python
>>> gen = simple_gen_function("hello", "foo")
>>> next(gen)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration: hello
```

If there's a yield anywhere in your def, it's a generator!

In the comments, the asker mentions they thought the function turned into a generator dynamically, when the yield statement is executed. But this is not how it works! The decision is made before the code is ever executed. If Python finds a yield anywhere at all under your def, it turns that def into a generator function. See this ultra-condensed example:

```python
>>> def foo():
...     if False:
...         yield "bar"
...     return "baz"
>>> foo()
<generator object foo at ...>
>>> # The return value "baz" is only exposed via StopIteration.
>>> # You probably shouldn't use this behavior.
>>> next(foo())
Traceback (most recent call last):
...
StopIteration: baz
>>> # Nothing is ever yielded from the generator, so it generates no values.
>>> list(foo())
[]
```
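If you do want the value passed to return inside a generator, the supported way to capture it is yield from in a delegating generator, or reading StopIteration.value directly. A small sketch (the function names here are illustrative):

```python
def inner():
    yield 1
    return "done"          # becomes StopIteration.value

def outer():
    result = yield from inner()   # captures inner()'s return value
    yield result

print(list(outer()))  # [1, 'done']

# The same value can also be read from StopIteration directly:
g = inner()
next(g)                    # consume the single yielded item
try:
    next(g)
except StopIteration as exc:
    print(exc.value)       # done
```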
https://codedump.io/share/0SFfGMJsRMrH/1/generator-with-return-statement
I love the type hinting from Python 3, but I'm really tired of writing

```python
from typing import *
```

in every module. Is it possible to make all modules import it implicitly?

You could hijack the builtins module and put what you need there. That would make the code harder to maintain, as it would be harder to figure out where these globals are coming from, or whether they are accidentally clobbered. To be clear, it's possible, but I recommend not doing this.

The main module would need to do something like this at the top. If it's not the first thing to happen in the program then other modules won't work properly. Import order shouldn't make a difference, so if someone messes with this and it breaks the program then it will be hard to figure out why.

```python
import typing  # I assume you meant typing, not types
import builtins

vars(builtins).update({k: getattr(typing, k) for k in typing.__all__})
```

```python
# Any module could now use typing names without importing anything
def f(x: Any) -> Any:
    return x
```
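A more maintainable alternative (my suggestion, not part of the original answer) is a short aliased import, which keeps the origin of every name obvious while cutting the typing boilerplate to one line per module:

```python
import typing as t

def first(items: t.List[int]) -> t.Optional[int]:
    # The t. prefix makes clear where List/Optional come from.
    return items[0] if items else None

print(first([1, 2, 3]))  # 1
print(first([]))         # None
```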
https://codedump.io/share/d8wzmL3Z5c7P/1/is-it-possible-to-make-all-modules-import-a-module-implicitly
... It’s a difficult time for many – our hats off to the brave men and women helping heal those affected by the virus. We’ll get through this together. We are working hard to get our next major release out the door in the next 45 days. We expect to share more information on the release and what we expect to include shortly. In the meantime, here’s this month’s edition of XAF Tips & Tricks. We hope it’s of value…

We extended the SecurityStrategy class with numerous methods you can use to check if a user can perform CRUD operations: IsGrantedExtensions. For supplementary information, please review the following article: How to: Display a List of Users Allowed to Read an Object.

We have redesigned online documentation for the following important concepts and added comparison tables. We also fully documented our new ServerView and InstantFeedbackView modes. Please let us know if this information is of value or if you feel we need to add additional content to help you select the appropriate mode for specific usage scenarios.

You can now avoid ViewItem.ControlCreated and other event handlers for many common usage scenarios - simply use our DetailView.CustomizeViewItemControl method inside the OnActivated method to customize View item controls. You no longer need to unsubscribe from events or check whether controls are created. Please let us know your thoughts.

```csharp
using DevExpress.ExpressApp;
using DevExpress.ExpressApp.Editors;
using DevExpress.XtraEditors;
// ...
public class SetMinValueController : ObjectViewController<DetailView, DemoTask> {
    protected override void OnActivated() {
        base.OnActivated();
        View.CustomizeViewItemControl(this, SetMinValue);
    }
    private void SetMinValue(ViewItem viewItem) {
        SpinEdit spinEdit = viewItem.Control as SpinEdit;
        if (spinEdit != null) {
            spinEdit.Properties.MinValue = 0;
        }
    }
}
```

This GitHub repository demonstrates how to generate database updater code for security roles created via the application UI in a development environment.
This KB article explains how to prevent deletion of the last active Administrator and the last administrative role with active users.

I found that posts in the original series prompt readers to send in an app showcase. I sent an email to clientservices@devexpress.com on March 29th. How long will it take to get results or a reply? Thank you

Simplified access to detail view editors. Yes! That's a great addition. Thanks!

Thanks for the detailed Data Access modes documentation, that is really helpful. I was very enthusiastic reading through InstantFeedback, until the last point - OpenObject does not work. It is an incredibly useful and very often used action which I cannot give up - is there really no way to implement it in InstantFeedback, even if it sacrifices some performance?

Simplified ViewItem customization - big thanks for this one. I am actually tempted to refactor my entire codebase.

@he dandan: We are all doing well. Thank you for asking. We hope you, your family and your teammates are unaffected by the pandemic.

No, why?

Hello Mario, I want the OpenObject Action to be available in DataView and these new modes, like you do. Unfortunately, it works only in modes that contain real objects (Client, Server). In all other modes we have only a set of scalar values to display, and OpenObject is not easy to support in a generic way.

@Alex - thanks for your response. Yes, I understand it is not trivial; however, maybe it could be solved with some additional metadata, either using attributes or the DataView columns declaration. I will throw my idea here.
I am actually much more interested in InstantFeedback and InstantFeedbackView (and also ServerView) than DataView, and I would not mind setting an additional attribute in code, or in the Model Editor, something like ActualObjectType and ActualObjectKeyField defined for a column. This would also mean that the Oid field from the target table would have to be present in Columns (maybe with Visible set to false or Index to -1), but it would be retrieved from the database, and the type + oid information would allow you to enable the OpenObject action.

As an example, if I wanted LastName and FirstName from the Person table in Invoice_ListView, I would also add the Oid column from the Person table, and to all three columns I would assign ActualObjectType of Person (this could probably be automatic, because when I select LastName I have to expand the Person property, so you already know this column is from the Person table - you only need the Oid information for those columns, and I would assign Person.Oid for all three columns), which could allow OpenObject on all three columns.

This could be a very generic solution and no one would be forced to use it; it would be completely optional and would provide the much needed OpenObject functionality.
https://community.devexpress.com/blogs/xaf/archive/2020/04/01/expressapp-framework-tips-and-tricks-march-2020.aspx