Returns a reference to the last element of the vector.

reference back( );
const_reference back( ) const;

Return value: the last element of the vector. If the vector is empty, the return value is undefined.

If the return value of back is assigned to a const_reference, the vector object cannot be modified. If the return value of back is assigned to a reference, the vector object can be modified. When compiling with _SECURE_SCL 1, a runtime error will occur if you attempt to access an element in an empty vector. See Checked Iterators for more information.

Example:

// vector_back.cpp
// compile with: /EHsc
#include <vector>
#include <iostream>

int main()
{
   using namespace std;
   vector <int> v1;

   v1.push_back( 10 );
   v1.push_back( 11 );

   int& i = v1.back( );
   const int& ii = v1.front( );

   cout << "The last integer of v1 is " << i << endl;
   i--;
   cout << "The next-to-last integer of v1 is " << ii << endl;
}

Output:

The last integer of v1 is 11
The next-to-last integer of v1 is 10

Header: <vector>
Namespace: std
http://msdn.microsoft.com/en-us/library/0532x4xk.aspx
23 January 2008 19:30 [Source: ICIS news] HOUSTON (ICIS news)--The US economy is not in a recession, but it will slow down this year compared with 2007, Dow Chemical CEO Andrew Liveris said on Wednesday, according to a media report. In a news wire report by Reuters from the World Economic Forum, Liveris said Dow, since mid-2007, had been experiencing weaker demand. However, the situation had not worsened dramatically recently, according to the report. "It's no more acute now than it was a few months ago. Therefore, the first half (of 2008) I think will be a continuation of this, which suggests that the overall year will be a slower year than last year - there's no question about that," he said.
http://www.icis.com/Articles/2008/01/23/9095235/dow-ceo-says-us-not-in-recession-report.html
JavaRanch » Java Forums » Certification » Web Services Certification (SCDJWS/OCPJWSD)
Why does JAXB not bind java.util.Map properly when it handles java.util.List?

Ravi C Kota posted Jan 31, 2011 13:19:31

Hello all,

Somewhere in this forum I read that using collections is not recommended because JAXB will not know how to bind them. To understand the problem, I started writing some sample code. The code looks as below:

package sample;
public interface IDatabase {}

package sample;
public class Database implements IDatabase {}

package sample;
import java.util.List;
import java.util.Map;
import javax.jws.WebService;

@WebService
public class CollectionService {
    public List getList() { return null; }
    public Map getMap() { return null; }
    public Object getObject() { return null; }
    public Object[] getObjectArray() { return null; }
    public Object[][] getObjectMultiArray() { return null; }
    public IDatabase getDatabase() { return null; }
}

As seen above, I'm trying to see how the binding works for different kinds of object types. I ran wsgen against the above code, but ended up with the exception:

Caused by: com.sun.xml.internal.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
[exec] java.util.Map is an interface, and JAXB can't handle interfaces.
[exec]     this problem is related to the following location:
[exec]         at java.util.Map
[exec]         at private java.util.Map org.learning.ws.jaxws.GetMapResponse._return
[exec]         at org.learning.ws.jaxws.GetMapResponse
[exec] java.util.Map does not have a no-arg default constructor.
The exception is misleading here, as it sounds as if an interface cannot be used as a return type. For that matter, List is also an interface, and I do not see any issue for List. It also fails for getDatabase() for the same reason as for Map. Using an interface as the return type is a common and well-regarded way of coding, but apparently I'm missing something here to get it to work. Can someone point out the mistake I'm making here and a corrective action?

Thanks & regards,
Ravi C. Kota
SCJP 5.0, OCDJWS 5.0

Rajkishore Pujari posted Jan 31, 2011 13:52:20

This link may help in answering your question a bit. Here's the link:
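The distinction the tool is drawing is that JAXB ships with a built-in XML mapping for List (a repeated element) but has no default mapping for Map or for user-defined interfaces, which it cannot instantiate on its own. A loose analogy in Python (illustration only, not JAXB itself): the standard json module likewise serializes lists out of the box but refuses arbitrary objects it has no rule for.

```python
import json

class Database:
    """Stands in for an interface-typed value with no serialization rule."""
    pass

# Lists have a natural wire representation, so they serialize fine.
print(json.dumps([1, 2, 3]))  # [1, 2, 3]

# An arbitrary object has no default mapping, so the serializer refuses,
# much like JAXB refusing types it does not know how to bind.
try:
    json.dumps(Database())
except TypeError as err:
    print("cannot serialize:", err)
```

The usual JAXB-side fix is to expose a concrete type (or an adapter) rather than the bare interface.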
http://www.coderanch.com/t/525574/java-Web-Services-SCDJWS/certification/JAXB-bind-java-util-Map
Hello, below you can find my TestCafe test:

import { Selector } from 'testcafe';

fixture `Test`;

const closeCookiesWindow = Selector('#app > div.cookie-notice > div > div > button');
const startExampleVOD = Selector('#app > div.layout.layout-default > div > div.container.player-test-container > div:nth-child(3) > div:nth-child(1) > div > span');
const startExampleLiveChannel = Selector('#app > div.layout.layout-default > div > div.container.player-test-container > div:nth-child(3) > div:nth-child(2) > div > span');

test.page()('Test: Example Videos', async t => {
    await t
        .click(closeCookiesWindow)
        .wait(1000)
        .click(startExampleVOD)
        .click(startExampleLiveChannel)
        .wait(50000);
});

I start the test from the terminal: testcafe chrome,firefox,edge abc.js

On Chrome:
- the VOD test will only show the advertisement; after the ads the video won't start,
- the live channel test will never start.

On Firefox and Edge:
- the VOD test will show the advertisement, and the video will start after the ads,
- the live channel test will start immediately.

I need to start the video in Chrome to, for example, check whether the video description/subtitles are displayed on the video player. Could you give me a hint what the problem with the Chrome browser is here?

Hi Adam, I ran your site in Chrome, and none of the videos play. I see a lot of errors in the browser console, e.g.:

Failed to load resource: the server responded with a status of 403 (Forbidden)
przetestuj-usluge:1 The SSL certificate used to load resources from will be distrusted in M70. Once distrusted, users will be prevented from loading these resources. See for more information.
proxy.html:82 Calling getCart, cid=397790234 null
UXNZS-RYLJT-8VBQQ-FNZMH-TNUXW:14 [Deprecation] chrome.loadTimes() is deprecated, instead use standardized API: nextHopProtocol in Paint Timing.
@ UXNZS-RYLJT-8VBQQ-FNZMH-TNUXW:14
playerapi/product/1628/player/configuration?type=MOVIE&platform=BROWSER&4K=true:1 Failed to load resource: the server responded with a status of 403 (Forbidden)
portal.js?v=1b317c97cbfc:2 Uncaught (in promise) Error: Request failed with status code 403
    at t.exports (portal.js?v=1b317c97cbfc:2)
    at t.exports (portal.js?v=1b317c97cbfc:2)
    at XMLHttpRequest.h.(anonymous function) [as zone_symbol ON_PROPERTY readystatechange]
    at XMLHttpRequest.H (polyfills.942207901072f0395901.bundle.js:1)
    at t.invokeTask (polyfills.942207901072f0395901.bundle.js:1)
    at r.runTask (polyfills.942207901072f0395901.bundle.js:1)
    at e.invokeTask [as invoke]
    at h (polyfills.942207901072f0395901.bundle.js:1)
    at XMLHttpRequest.d (polyfills.942207901072f0395901.bundle.js:1)

It looks like Chrome cannot play videos from this web site.

Thank you for the response, Marina. This is strange, because I constantly use Google Chrome on that website and I am able to watch all the videos. Did you try to start any other video? A colleague of mine also created a similar test in Selenium, and the video started in Chrome. The problem occurs only in TestCafe's Chrome.

Adam, I tried the following page from your first post. Does it work for you? Nevertheless, would you please try TestCafe 0.20.0-alpha.4? If this does not help, please provide us with a simple test so that we can see the issue locally. If you do not want to share your data with other users, you can send a private message to me.

Both pages work for me in Chrome, but not in TestCafe's Chrome. I made another test and opened a YouTube video in TestCafe, and it does not start for me in Chrome either. It works in Firefox and Microsoft Edge. I reinstalled Chrome (Chrome 66.0.3359 / Windows 10.0.0), which didn't help.

const playButton = Selector('#movie_player > div.ytp-chrome-bottom > div.ytp-chrome-controls > div.ytp-left-controls > button');

test.page()('Test: Example Videos', async t => {
    await t
        .wait(10000)
        .click(playButton)
        .wait(20000);
});

In the test above I open a website with a YouTube video, wait 10 s for the video to start, then try to click the play button and wait another 20 s. The video does not start at any point; it is frozen as in the picture.

Hi Adam, your YouTube test passes properly on my side in Chrome if I run it with the -e argument. Nevertheless, I have found one issue when running the test, which we are going to fix in the context of the following thread: "The getRegistration method of navigator.serviceWorker should be overridden". In addition, at present, the TestCafe proxy fully supports only HTTP. We are going to add support for HTTPS soon: "Run testcafe with https protocol".
https://testcafe-discuss.devexpress.com/t/videos-dont-start-on-chrome-they-start-on-other-browsers/1211
Mugpy: a Mugsy client for Python (MIT License)

What? An API wrapper to brew coffee with the Mugsy robotic coffee maker by Matthew Oswald. In his own words: Mugsy is the world's first hackable, customizable, dead simple, robotic coffee maker.

Usage:

from mugpy.api import Mugpy

mugsy = Mugpy("a", "b", "c")
result = mugsy.coffee_now()
print(result)  # Boolean

Notes: I don't actually have a machine yet, and the API isn't live quite yet.
https://pypi.org/project/mugpy/
In development environments like Spyder or Jupyter Notebook, the following code

from sympy import *
init_printing()
eq = Eq(sympify('(a**2 + sqrt(b)) / log(x)'))
display(eq)

yields a nicely rendered formula, whereas in PyCharm you'll end up with plain text output. Is it not possible to display formulas with PyCharm in a more reasonable way?

It seems that the latest version of sympy available doesn't have a display() function, so I can't reproduce the issue using your code. Please ensure your snippet works and let me know which sympy version you are using.

I'm using sympy 1.4. display() is not part of sympy, but part of IPython (lib\site-packages\IPython\core\display.py), and therefore should be working in PyCharm. What error message do you get when you run the code snippet in PyCharm?

Yes, thanks for clarifying. It seems Jupyter and Spyder use their own libraries to print formulas. PyCharm just uses what is available in the shell. You can reproduce it in an IPython shell outside of PyCharm and it will look the same, so I guess it should be submitted as a feature request. You can do that in our issue tracker.
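As the thread establishes, display() ships with IPython rather than sympy, so a plain Python console won't have it at all. A minimal sketch of a console-safe fallback (my own illustration, not from the thread; it only assumes IPython may be absent):

```python
# display() comes from IPython, not sympy; fall back to plain print
# when running in an ordinary Python console such as PyCharm's.
try:
    from IPython.display import display
except ImportError:
    display = print

# With the fallback in place, the same call works in both environments
# (rendered output under Jupyter, plain text otherwise).
display("(a**2 + sqrt(b)) / log(x)")
```

In a plain console you can also use sympy's own text renderer, pprint(), which draws the formula with ASCII/Unicode art instead of relying on IPython.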
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360003447520-Is-it-possible-to-pretty-print-sympy-formulas-?page=1
All C# types fall into the following categories:

Value types (struct, enum)
Reference types (class, array, delegate, interface)
Pointer types

The fundamental difference between the three main categories (value types, reference types, and pointer types) is how they are handled in memory. The following sections explain the essential differences between value types and reference types. Pointer types fall outside mainstream C# usage and are covered later in Chapter 4.

Value types are the easiest types to understand. They directly contain data, such as the int type (holds an integer) or the bool type (holds a true or false value). A value type's key characteristic is that when you assign one value to another, you make a copy of that value. For example:

using System;

class Test
{
   static void Main( )
   {
      int x = 3;
      int y = x;             // assign x to y, y is now a copy of x
      x++;                   // increment x to 4
      Console.WriteLine (y); // prints 3
   }
}

Reference types are a little more complex. A reference type really defines two separate entities: an object, and a reference to that object. This example follows exactly the same pattern as our previous example, but notice how the variable y is updated, while in our previous example y remained unchanged:

using System;
using System.Text;

class Test
{
   static void Main( )
   {
      StringBuilder x = new StringBuilder ("hello");
      StringBuilder y = x;
      x.Append (" there");
      Console.WriteLine (y); // prints "hello there"
   }
}

This is because the StringBuilder type is a reference type, while the int type is a value type. Let's look at the next line:

StringBuilder y = x;

When we assign x to y, we are saying, "make y point to the same thing that x points to." A reference stores the address of an object (an address is a memory location, stored as a four-byte number). We're actually still making a copy of x, but we're copying this four-byte number as opposed to the StringBuilder object itself.
Let's look at this line:

x.Append (" there");

This line actually does two things. It first finds the memory location represented by x, and then it tells the StringBuilder object that lies at that memory location to append " there" to it. We could have achieved exactly the same effect by appending " there" to y, because x and y refer to the same object:

y.Append (" there");

A reference may point at no object, by assigning the reference to null. In this code sample, we assign null to x, but can still access the same StringBuilder object we created via y:

using System;
using System.Text;

class Test
{
   static void Main( )
   {
      StringBuilder x;
      x = new StringBuilder ("hello");
      StringBuilder y = x;
      x = null;
      y.Append (" there");
      Console.WriteLine (y); // prints "hello there"
   }
}

The stack is a block of memory that grows each time a function is entered (basically to store local variables) and shrinks each time a function exits (because the local variables are no longer needed). In our previous example, when the Main function finishes executing, the references x and y go out of scope, as do any value types declared in the function. This is because these values are stored on the stack.

The heap is a block of memory in which reference-type objects are stored. Whenever a new object is created, it is allocated on the heap, and a reference to that object is returned. During a program's execution, the heap starts filling up as new objects are created. The runtime has a garbage collector that deallocates objects from the heap so your computer does not run out of memory. An object is deallocated when it is determined that it has zero references to it. You can't explicitly delete objects in C#. An object is either automatically popped off the stack or automatically collected by the garbage collector.

A good way to understand the difference between value types and reference types is to see them side by side. In C#, you can define your own reference types or your own value types.
If you want to define a simple type such as a number, it makes sense to define a value type, in which efficiency and copy-by-value semantics are desirable. Otherwise, you should define a reference type. You can define a new value type by declaring a struct, and define a new reference type by declaring a class. To create a value-type or reference-type instance, the constructor for the type may be called with the new keyword. A value-type constructor simply initializes an object. A reference-type constructor creates a new object on the heap, and then initializes the object:

// Reference-type declaration
class PointR
{
   public int x, y;
}

// Value-type declaration
struct PointV
{
   public int x, y;
}

class Test
{
   static void Main( )
   {
      PointR a; // Local reference-type variable, uses 4 bytes of
                // memory on the stack to hold address

      PointV b; // Local value-type variable, uses 8 bytes of
                // memory on the stack for x and y

      a = new PointR( ); // Assigns the reference to the address of a new
                         // instance of PointR allocated on the heap.
                         // The object on the heap uses 8 bytes of memory
                         // for x and y, and an additional 8 bytes for
                         // core object requirements, such as storing the
                         // object's type and synchronization state

      b = new PointV( ); // Calls the value-type's default constructor.
                         // The default constructor for both PointR and
                         // PointV will set each field to its default
                         // value, which will be 0 for both x and y

      a.x = 7;
      b.x = 7;
   }
}

// At the end of the method the local variables a and b go out of
// scope, but the new instance of a PointR remains in memory until
// the garbage collector determines it is no longer referenced

Assignment to a reference type copies an object reference, while assignment to a value type copies an object value:

...
      PointR c = a;
      PointV d = b;
      c.x = 9;
      d.x = 9;
      Console.WriteLine (a.x); // Prints 9
      Console.WriteLine (b.x); // Prints 7
   }
}

As with this example, an object on the heap can be pointed at by multiple variables, whereas an object on the stack or inline can only be accessed via the variable it was declared with. "Inline" means that the variable is part of a larger object; i.e., it exists as a data member or an array member.

C# provides a unified type system, whereby the object class is the ultimate base type for both reference types and value types. This means all types, apart from the occasionally used pointer types, share the same basic set of characteristics.

Simple types are so called because most have a direct representation in machine code. For example, the floating-point numbers in C# are matched by the floating-point numbers in most processors, such as Pentium processors. For this reason, most languages treat them specially, but in doing so create two separate sets of rules for simple types and user-defined types. In C#, all types follow the same set of rules, resulting in greater programming simplicity. To do this, the simple types in C# alias structs found in the System namespace. For instance, the int type aliases the System.Int32 struct, the long type aliases the System.Int64 struct, etc. Simple types therefore have the same features one would expect any user-defined type to have. For instance, the int type has function members:

int i = 3;
string s = i.ToString( );

This is equivalent to:

// This is an explanatory version of System.Int32
namespace System
{
   struct Int32
   {
      ...
      public string ToString( )
      {
         return ...;
      }
   }
}

// This is valid code, but we recommend you use the int alias
System.Int32 i = 5;
string s = i.ToString( );

Simple types have two useful features: they are efficient, and their copy-by-value semantics are intuitive.
Consider again how natural it is to assign one number to another and get a copy of the value of that number, as opposed to getting a copy of a reference to that number. In C#, value types are defined to expand the set of simple types.

In this example, we revisit our PointV and PointR example, but this time look at efficiency. Creating an array of 1,000 ints is very efficient. This allocates 1,000 ints in one contiguous block of memory:

int[] a = new int[1000];

Similarly, creating an array of a value type PointV is very efficient too:

struct PointV
{
   public int x, y;
}

PointV[] a = new PointV[1000];

If we used a reference type PointR, we would need to instantiate 1,000 individual points after instantiating the array:

class PointR
{
   public int x, y;
}

PointR[] a = new PointR[1000]; // creates an array of 1000 null references
for (int i=0; i<a.Length; i++)
   a[i] = new PointR( );

In Java, only the simple types (int, float, etc.) can be treated with this efficiency, while in C# one can expand the set of simple types by declaring a struct. Furthermore, C#'s operators may be overloaded, so that operations that are typically applicable only to simple types, such as + and -, are applicable to any class or struct (see Section 4.5 in Chapter 4).

So that common operations can be performed on both reference types and value types, each value type has a corresponding hidden reference type, which is created when the value type is cast to a reference type. This process is called boxing. A value type may be cast to the object class, which is the ultimate base class for all value types and reference types, or to an interface it implements. In this example, we box and unbox an int to and from its corresponding reference type:

class Test
{
   static void Main( )
   {
      int x = 9;
      object o = x;   // box the int
      int y = (int)o; // unbox the int
   }
}

When a value type is boxed, a new reference type is created to hold a copy of the value type.
Unboxing copies the value from the reference type back into a value type. Unboxing requires an explicit cast, and a check is made to ensure the value type to convert to matches the type contained in the reference type. An InvalidCastException is thrown if the check fails. You never need to worry about what happens to boxed objects once you've finished with them; the garbage collector takes care of them for you. Using collection classes is a good example of boxing and unboxing. In this example, we use the Queue class with value types:

using System;
using System.Collections;

class Test
{
   static void Main( )
   {
      Queue q = new Queue( );
      q.Enqueue (1); // box an int
      q.Enqueue (2); // box an int
      Console.WriteLine ((int)q.Dequeue( )); // unbox an int
      Console.WriteLine ((int)q.Dequeue( )); // unbox an int
   }
}
http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+I+Programming+with+C/Chapter+2.+C+Language+Basics/2.4+Value+Types+and+Reference+Types/
I am with Orcon myself, so I made the call with some intentions in mind. After being on hold for a while I got the message that the retention team wasn't contactable at the moment and that they will call me back. Stay tuned for those results. What I had in mind was, seeing as they removed a service that I signed up for, I feel I should be either a) paying less (a similar service costs around $5/month) or b) getting something in return (a bump up to a faster plan). I'm also going to push for my contract break-out to be waived as per their email. If an agreement can't be reached, then it will have to be another provider's turn. Has anyone else made the call, and what were the results? PS: not sure if this is the right sub.
https://www.geekzone.co.nz/forums.asp?forumid=151&topicid=177525
‘Cookie curtain’ offers security and privacy benefits Google has announced plans to partition the HTTP cache of its Chrome browser in a move designed to protect against some forms of side-channel attack. As things currently stand, a site can run code that will check whether another site opened in Chrome on the same machine has loaded a resource or not. This behavior creates a means for a malicious site to determine whether or not a user has visited a specific site as well as opening the door to possible cross-site search attack, a class of vulnerability that’s become the focus of recent research. The cache can also be used to store cross-site super-cookies (AKA ever-cookies) as a fingerprinting mechanism, creating a tracking issue that’s exacerbated by having a common HTTP cache. These various issues involving HTTP caching are a problem for browser makers in general, as evidenced by a discussion on the topic on GitHub back in May. Partition party, people Apple partitioned the cache in Safari more than six years ago, with both Mozilla and Google planning to follow suit. Apple uses eTLD+1 to partition the cache, whereas Google is going with a slightly different architecture. “The HTTP cache is currently one-per-profile, with a single namespace for all resources regardless of origin or renderer process,” Google explains in an intent to ship notice. “This opens the browser to a side-channel attack where one site can detect if another site has loaded a resource by checking if it’s in the cache. “This feature will partition the HTTP cache using top frame origin (and also possibly the subframe origin) to prevent documents from one origin from knowing if a resource from a cross-origin document load was cached or not,” it adds. HTTP cache partitioning for Chrome will be offered for both the mobile and desktop versions of the browser software. For users the change will offer privacy and security benefits, but there will be a trade-off for some web developers, Google warns. 
“This is not a breaking change, but it will have performance considerations for some organizations,” Google said. “For instance, those that serve large volumes of highly cacheable resources across many sites (e.g., fonts and popular scripts).”
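The mechanism described above boils down to a change of cache key: instead of keying cached responses by URL alone, the partitioned cache also includes the top-frame origin in the key, so one site can no longer probe whether another site populated an entry. A toy model in Python (illustration only; Chrome's real key derivation and eviction logic are far more involved):

```python
class PartitionedHTTPCache:
    """Toy model: cache entries are keyed by (top_frame_origin, url)."""

    def __init__(self):
        self._entries = {}

    def store(self, top_frame_origin, url, body):
        # The top-frame origin is part of the key, not just the URL.
        self._entries[(top_frame_origin, url)] = body

    def lookup(self, top_frame_origin, url):
        # A document only sees entries cached under its own top frame,
        # so cross-site "was this resource cached?" probes come up empty.
        return self._entries.get((top_frame_origin, url))


cache = PartitionedHTTPCache()
cache.store("https://bank.example", "https://cdn.example/font.woff2", b"...")

# Same resource probed from a different top-level site: cache miss.
print(cache.lookup("https://evil.example", "https://cdn.example/font.woff2"))  # None
```

The performance trade-off Google mentions falls out of this model directly: a widely shared resource (a popular font, say) must now be fetched and cached once per top-level site rather than once per browser profile.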
https://portswigger.net/daily-swig/google-lifts-the-veil-on-chrome-cache-partition-plans
This tutorial is written for Django 1.6 and Python 2.x. If the Django version doesn't match, you can refer to the tutorial for your version of Django by using the version switcher at the bottom right corner of this page, or update Django to the newest version. If you are using Python 3.x, be aware that your code may need to differ from what is in the tutorial, and you should continue using the tutorial only if you know what you are doing with Python 3.x.

The development server

Let's verify this worked. Change into the outer mysite directory, if you haven't already, and run the command python manage.py runserver. You'll see the following output on the command line:

Validating models...

0 errors found
August 28, 2015 - 15:50:53
Django version 1 [...]

[...] or compiling translation files don't trigger a restart, so you'll have to restart the server in these cases.

[...] NAME should be the full absolute path, including filename, of that file. The default value, os.path.join(BASE_DIR, 'db.sqlite3'), will store the file in your project directory. If you are not using SQLite as your database, additional settings such as USER and PASSWORD [...]

[...] makes use of at least one database table, though [...] mysite/settings.py file. You'll see a message for each database table it creates, and you'll get a prompt asking you if you'd like to create a superuser account for the authentication system. Go ahead and do that.

The syncdb command will only create tables for apps in INSTALLED_APPS.

In our simple poll app, we'll create two models: Poll and Choice. Django supports all the common database relationships: many-to-ones, many-to-manys and one-to-ones.
Activating models

[...] sql polls

You should see something similar to the following (the CREATE TABLE SQL statements for the polls app):

BEGIN;
CREATE TABLE "polls_poll" (
    "id" integer NOT NULL PRIMARY KEY,
    "question" varchar(200) NOT NULL,
    "pub_date" datetime NOT NULL
);
CREATE TABLE "polls_choice" (
    "id" integer NOT NULL PRIMARY KEY,
    "poll_id" integer NOT NULL REFERENCES "polls_poll" ("id"),
    "choice_text" varchar(200) NOT NULL,
    "votes" integer NOT NULL
);
COMMIT;

Note the following:

- The exact output will vary depending on the database you are using. The example above is generated for SQLite.
- The syncdb command runs the SQL from sqlall on your database for all apps in INSTALLED_APPS that don't already exist in your database. This creates all the tables, initial data and indexes for any apps you've [...]

>>> [...] Poll, Choice   # Import the model classes we just wrote.

# No polls are in the system yet.
>>> Poll.objects.all()
[]

# Create a new Poll.
>>> p = Poll([...])

>>> p.pub_date
datetime.datetime(2012, 2, 26, 13, 0, 0, 775217, tzinfo=<UTC>)

# Change values by changing the attributes, then calling save().
>>> p.[...]

On Python 3, simply replace __unicode__ by __str__ in the following example:

from django.db import models

class Poll(models.Model):
    # ...
    def __unicode__(self):  # Python 3: def __str__(self):
        return self.question

class Choice(models.Model):
    # ...
    def __unicode__(self):  # Python 3: def __str__(self):
        return self.choice_text

It's important to add __unicode__() methods (or __str__() on Python 3) to your models, not only for your own sanity when dealing with the interactive prompt, but also because objects' representations are used throughout Django's automatically generated admin.

__unicode__ or __str__?

On Python 3, things are simpler: just use __str__() and forget about __unicode__(). If you're familiar with Python 2 [...], just remember to add __unicode__() methods to your models. With any luck, things should Just Work for you.
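The point about __unicode__/__str__ can be seen in plain Python, independent of Django. A minimal stand-in class (illustration only, not the Django model above):

```python
class Poll:
    """Plain-Python stand-in for the model, showing why __str__ matters."""

    def __init__(self, question):
        self.question = question

    def __str__(self):
        # Without this method, print(p) would show something like
        # <__main__.Poll object at 0x7f...> instead of the question text.
        return self.question


p = Poll("What's up?")
print(p)  # What's up?
```

Django's admin and the interactive prompt call str() on model instances in exactly this way, which is why the tutorial insists on defining the method.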
Note these are normal Python methods. Let's add a custom method, just for demonstration:

import datetime
from django.utils import timezone

# ...

class Poll [...]

[...] that was published this year.

>>> from django.utils import timezone
>>> current_year = timezone.now().year
>>> Poll.objects.get(pub_date__year=current_year)
<Poll: What's up?>

# Request an ID that doesn't exist; this will raise an exception.
>>> [...]

>>> [...]_recently()
True

# Give the Poll a couple of Choices. The create call constructs a new
# Choice object, does the INSERT statement, adds the choice to the set
# of available choices and returns the new Choice object. Django creates
# a set to hold the "other side" of a ForeignKey relation
# (e.g. a poll's choices) which can be accessed via the API.
>>> p = Poll.objects.get(pk=1)

# Display any choices from the related object set -- none so far.
>>> p.choice_set.all()
[]

# Create three choices.
>>> p.choice_set.create(choice_text='Not much', votes=0)
<Choice: Not much>
>>> p.choice_set.create(choice_text='The sky', votes=0)
<Choice: The sky>
>>> c = p.choice_set.create(choice_text=[...])

# [...] this year (reusing the 'current_year' variable we created above).
>>> Choice.objects.filter(poll__pub_date__year=current_year)
[<Choice: Not much>, <Choice: The sky>, <Choice: Just hacking again>]

# Let's delete one of the choices. Use delete() for that.
>>> c = p.[...]
https://docs.djangoproject.com/en/1.6/intro/tutorial01/
Visual Studio allows us to create a custom project template for adding a new project based on the settings and references in an existing one. Developers can use this approach to rename a project, with its related assembly and default namespace, when adding a new project. For existing projects in the solution, however, manually changing those names seems more practical and straightforward. The manual renaming tasks are not so hard for an ASP.NET, WPF, or Windows application, whereas some special care is needed for Silverlight web applications. This article provides a guide for how to rename Silverlight projects, reset links between projects due to name changes, and resolve some related issues so that the application will not crash even after extensive name changes.

Let's start with a demo application from my last post, which contains both regular Silverlight application projects and WCF RIA Services class library projects. You can download the source code files there and use the solution and projects as the starting point. We will change all names related to the web host project from ProductApp.Web to StoreApp.Web.

In the Visual Studio solution, click the ProductApp.Web project name twice, or right-click the project name and then select Rename. Change the name text to StoreApp.Web. Renaming a project will automatically change the name of the physical project metadata file (.csproj in C#) and update the reference in any other project in the solution that uses the renamed project.

Make sure that the StoreApp.Web is selected as the current project. Open the Quick Replace screen by clicking Find and Replace from the Edit menu. Replace all instances of ProductApp.Web with StoreApp.Web in code files for the current project.

Right-click the project name, select Properties, and then select the Application side menu. Change the current text in both the Assembly name and Default namespace fields to StoreApp.Web.
We will keep the name of the assembly and default namespace the same as that of the project for clarity and easy maintenance, although these names can be different.

Click the Show All Files icon in the Solution Explorer, and delete ProductApp.Web.dll and ProductApp.Web.pdb in the bin folder. Alternatively, we can delete the bin folder, if there is no manually entered library source file inside, and let the build process repopulate it. This step is not required, but doing so keeps the project clean.

Save all files by pressing the Ctrl + Shift + S keys and then click the Close Solution command on the File menu to close the solution.

Special note: make sure to close the solution before renaming the physical project folder and before changing solution or project metadata file contents outside Visual Studio in the steps below.

Go to the physical root folder of the solution using Windows Explorer and rename the folder ProductApp.Web to StoreApp.Web under the solution root folder.

Open the ProductApp.sln file using Notepad or any other text editor. Find the lines shown below:

Project("{-GUID number-}") = "StoreApp.Web", "ProductApp.Web\StoreApp.Web.csproj", "{-GUID number-}"
EndProject

Change the relative directory path from ProductApp.Web to StoreApp.Web in the code and then save the file:

Project("{-GUID number-}") = "StoreApp.Web", "StoreApp.Web\StoreApp.Web.csproj", "{-GUID number-}"
EndProject

Re-open the solution by selecting ProductApp.sln from Recent Projects and Solutions on the File menu. The application should work fine when running it by pressing F5.

Next, we will change all names related to the main client project from ProductApp to StoreApp.Main. There are more steps involved in renaming this type of project, and more associated issues, than for any other type.

In the Solution Explorer, change the project name ProductApp to StoreApp.Main using the same steps shown for the web host project. Make sure that StoreApp.Main is selected as the current project.
Use the Quick Replace screen to replace all instances of ProductApp with StoreApp.Main in the code files for the current project.

Right-click the project name, select Properties, and then select the Silverlight side menu. Perform the following changes. The updated Properties screen should look like this:

In the Show All Files mode, delete the Debug folder under the bin folder if there is no manually entered library source file inside. Otherwise, just delete the old ProductApp.dll, ProductApp.pdb, ProductApp.xap, and ProductAppTestPage.html. Again, this step is optional.

Expand the obj folder and delete the Debug folder under it. Visual Studio will re-populate the Debug folder during the next build. This is a very important step: if it is missed, the changed assembly name will not take effect because the cached files for the assembly in the obj\Debug folder are not refreshed, and we will get runtime errors or data binding failures. Restarting the development web server or IIS, or re-opening Visual Studio, will not help. This behavior applies to Silverlight projects only (both C# and VB), not to other project types such as ASP.NET, WPF, or Windows projects.

Save all files and close the solution (see the special note above about modifying the physical project folder name and metadata file contents outside Visual Studio).

Go to the physical root folder of the solution using Windows Explorer and rename the folder ProductApp to StoreApp.Main under the root solution folder.

Open the solution file and find the lines shown below:

Project("{-GUID number-}") = "StoreApp.Main", "ProductApp\StoreApp.Main.csproj", "{-GUID number-}"
EndProject

Change the relative directory path from ProductApp to StoreApp.Main in the code and then save the file:

Project("{-GUID number-}") = "StoreApp.Main", "StoreApp.Main\StoreApp.Main.csproj", "{-GUID number-}"
EndProject

Still in the Windows Explorer, open the StoreApp.Main.csproj project file under the StoreApp.Main folder.
Find the line:

<TestPageFileName>ProductAppTestPage.html</TestPageFileName>

Change ProductAppTestPage.html to StoreAppMain.html in the code and then save the file:

<TestPageFileName>StoreAppMain.html</TestPageFileName>

Re-open the solution. In the web server StoreApp.Web project, rename ProductAppTestPage.aspx to StoreAppMain.aspx and ProductAppTestPage.html to StoreAppMain.html. See the screenshot under the Step 12.

Since a link exists between the web and client projects, the server will automatically update the xap file name when the name is changed from the client project. However, the xap file name in the start page files is not updated. We need to open both the StoreAppMain.aspx and StoreAppMain.html files and find the first param node under the object node. Change the xap file name from ProductApp.xap to StoreApp.Main.xap in the code. The updated code is shown below:

<param name="source" value="ClientBin/StoreApp.Main.xap"/>

The old ProductApp.xap in the ClientBin folder will no longer be used; we can delete it.

In the Solution Explorer, right-click StoreAppMain.aspx and select Set As Start Page. The application should work fine when running it by pressing F5.

Next, we will change all names related to the RIA Services server project from ProductRiaLib.Web to StoreRiaLib.Web. The steps for renaming the RIA Services class library server project are similar to those for the web host server project; refer to the corresponding screenshots if needed.

Rename ProductRiaLib.Web to StoreRiaLib.Web in the Solution Explorer. This will automatically change the physical project metadata file name and update the project reference set in the web host server project, the StoreApp.Web.

Make sure that StoreRiaLib.Web is selected as the current project. Use the Quick Replace screen to replace all instances of ProductRiaLib.Web with StoreRiaLib.Web in the code files for the current project.
In the Application section of the project Properties screen, change the existing text in both the Assembly name and Default namespace fields to StoreRiaLib.Web.

Delete ProductRiaLib.Web.dll and ProductRiaLib.Web.pdb in the bin folder to keep the project clean.

Go to the physical root folder of the solution using Windows Explorer and rename the folder ProductRiaLib.Web to StoreRiaLib.Web under the root solution folder.

In the solution file, find the lines shown below:

Project("{-GUID number-}") = "StoreRiaLib.Web", "ProductRiaLib.Web\StoreRiaLib.Web.csproj", "{-GUID number-}"
EndProject

Change the relative directory path from ProductRiaLib.Web to StoreRiaLib.Web in the code and then save the file:

Project("{-GUID number-}") = "StoreRiaLib.Web", "StoreRiaLib.Web\StoreRiaLib.Web.csproj", "{-GUID number-}"
EndProject

Re-open the solution, but do not build or run the application until completing the changes in the RIA Services class library client project performed in the next section.

If the database file is in the App_Data folder in this project and an absolute path is used in the connection string, a change is needed for the connection string in the Web.config file of the web host server project, in our case the StoreApp.Web project. Change ProductRiaLib.Web to StoreRiaLib.Web in the Data Source of the connection string:

<connectionStrings>
  <add name="ProductDbContext" connectionString="Data Source=[Your-StoreApp-Solution-Path]\StoreRiaLib.Web\App_Data\ProductData.sdf" providerName="System.Data.SqlServerCe.4.0"/>
</connectionStrings>

Next, we will change all names related to the RIA Services client project from ProductRiaLib to StoreRiaLib.Client. In addition to steps similar to those used for the RIA Services class library server project, some other interventions are needed.

Rename ProductRiaLib to StoreRiaLib.Client in the Visual Studio Solution Explorer.
This will automatically change the physical project metadata file name and update the project reference set in the Silverlight client project, the StoreApp.Main.

Make sure that StoreRiaLib.Client is selected as the current project. Use the Quick Replace screen to replace all instances of ProductRiaLib with StoreRiaLib.Client in the code for the current project.

Go to the physical root folder of the solution using Windows Explorer and rename the folder ProductRiaLib to StoreRiaLib.Client in the root solution folder.

In the solution file, find the lines shown below:

Project("{-GUID number-}") = "StoreRiaLib.Client", "ProductRiaLib\StoreRiaLib.Client.csproj", "{-GUID number-}"
-some other nodes-
EndProject

Change the relative directory path from ProductRiaLib to StoreRiaLib.Client and then save the file:

Project("{-GUID number-}") = "StoreRiaLib.Client", "StoreRiaLib.Client\StoreRiaLib.Client.csproj", "{-GUID number-}"
-some other nodes-
EndProject

Open the StoreRiaLib.Client.csproj file under the project folder StoreRiaLib.Client using Notepad or any other text editor. Check whether the LinkedServerProject node points to the correct physical server project location. If not, update it with the correct path shown below:

<LinkedServerProject>..\StoreRiaLib.Web\StoreRiaLib.Web.csproj</LinkedServerProject>

Re-open the solution. Before building or running the application, we need to replace the old RIA Services namespace references with the new ones in all Silverlight client projects that consume the RIA Services. The data is always provided in the namespace of the RIA Services server project, in our updated case StoreRiaLib.Web, although the code from the client proxy file in the RIA Services client project is what is actually accessed.

Select the StoreApp.Main project as the current project and replace all instances of ProductRiaLib.Web with StoreRiaLib.Web in the code files using the Quick Replace screen. Note that the principle of this step applies to any case where the namespace is changed for a class library project referenced by a consumer project.
For example, if we have a ProductApp.Common class library with the same name for its namespace, then when renaming the project and namespace to StoreApp.Common, the namespace references or prefixes in all projects using this library need to be changed to StoreApp.Common in the code.

Run the application by pressing F5. The application should work the same as it did before the projects were renamed.

So far all the functional parts are renamed and work well. However, the solution name, the solution physical file name, and the names of any virtual solution folders inside the solution are still the old ones. Renaming the solution or a solution virtual folder can easily be done by directly changing the name text in the Solution Explorer. The solution physical file name is automatically updated when renaming the solution. Renaming a virtual solution folder is entirely independent from other parts of the solution. All final names for the demo application are shown like this.

Renaming projects and resolving the related issues for a Silverlight application is not so easy. Architects and developers should plan all project names well to avoid having to make name-related changes. But in case you need to rename your projects during development, for project expansions, or when importing projects from other sources that have naming issues, you can freely perform the renaming tasks by following this guide.
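One edit recurs throughout the walkthrough: after a project folder is renamed on disk, the relative path inside the .sln file has to be fixed by hand. That text edit can also be scripted. The sketch below is hypothetical (the function name and file names are mine, not part of the article), shown in Python for brevity:

```python
# Sketch: fix the relative project path inside a Visual Studio .sln file
# after the project folder has been renamed on disk. The names used here
# come from the demo (ProductApp.Web -> StoreApp.Web); adjust as needed.

def update_sln_project_path(sln_text, old_folder, new_folder):
    """Rewrite 'Folder\\Project.csproj' path segments in .sln Project entries."""
    # A Project entry looks like:
    #   Project("{GUID}") = "Name", "Folder\Name.csproj", "{GUID}"
    # Only the quoted path segment starts with '"Folder\', so a targeted
    # replace is enough for this simple case.
    return sln_text.replace('"' + old_folder + '\\', '"' + new_folder + '\\')

# Usage (run with the solution closed, as the article advises):
# from pathlib import Path
# p = Path("ProductApp.sln")
# p.write_text(update_sln_project_path(p.read_text(),
#                                      "ProductApp.Web", "StoreApp.Web"))
```

Remember the article's special note: close the solution in Visual Studio before rewriting the file, or the IDE may overwrite your change.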
http://www.codeproject.com/Articles/370484/Rename-Visual-Studio-Projects-and-Resolve-Related?fid=1706575
Hi all,

I cleaned up the QuestionsAndAnswers wiki page [1] and effectively removed all content, stopped inviting people to use the page as a way to ask questions and put some explaining redirects to the mailing lists (and the various helpful and searchable archives out there) in the text instead.

The reason for that is that the list of questions was completely unbalanced: some questions could point people reading them into completely wrong directions. And a lot of questions were simply unanswered, because nobody regularly monitors that page.

For ongoing questions and answers we have the mailing lists; with nabble for example, it is also very easy to start writing mails to a list (if someone is reluctant to do the normal subscription process, although that is not that hard). The wiki is simply not an effective forum. For frequently answered questions we have the official FAQ [2].

As a replacement, I listed links to the mailing list archives of Jackrabbit (the ones with a proper usability ;-)). The old content is still available in the history of the page and for the purpose of mailing list indexing I also added the contents of the old page at the end of this mail.

[1]
[2]

Regards,
Alex

--
Alexander Klimetschek
alexander.klimetschek@day.com

============================

QuestionsAndAnswers

Apache Jackrabbit Questions and Answers

This page is an alternative to the [WWW] Jackrabbit mailing lists for people who prefer using a web forum instead of a mailing list for asking questions. Please feel free to add any Jackrabbit questions or answers here. The best questions will be incorporated into our website documentation as the [WWW] Jackrabbit FAQ.

Where does the "ObjectPersistenceManager" store its data?
Question: I use "org.apache.jackrabbit.core.state.obj.ObjectPersistenceManager" for the PersistenceManager. I create a node, then upload a file. After that I can also successfully get the file back, but I want to know where the ObjectPersistenceManager saves the file; I can't find it in my file system. My system is Windows.

Answer: Please check the following:

* The [WWW] First Hops document
* The ExamplesPage

If you have ideas of good examples you'd like to see, please submit

Is there a way to use Jackrabbit without access control, or to get it working without changing JVM properties? I'm using/evaluating Apache Jackrabbit for an open source project (platypuswiki.sf.net). My application is distributable as a WAR file.

Answer: The JAAS configuration is no longer required for simple deployments. Starting from Apache Jackrabbit version 1.0 you do not need to set the JAAS login configuration options unless you want to override the default settings. See [WWW] JCR-351 for the background and resolution of this issue.

Question: Thank you for the response. I have understood that I have to use "TransientRepository" in place of "Repository" to omit the JAAS configuration. Is that true? What are the other differences (advantages/disadvantages) of using "TransientRepository" in place of the "Repository" implementation?

Supported operations

Questions:

1. I saw that NamespaceRegistry.unregisterNamespace(..) is not supported. Is this feature planned for the next releases? Or are there other ways to change the namespace URI of a registered namespace?

Answer: See this mail: [WWW]

How do I use Jackrabbit in MY projects with Maven 2?

If you are using Maven 2, happily the releases of Jackrabbit are kept in the central Maven repo. Put the following into your project's pom.xml:
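The dependency snippet itself did not survive extraction; a typical entry for the Jackrabbit core library looks like the following (the version shown is illustrative; use whichever release you need):

```xml
<!-- Jackrabbit content repository implementation; pulls in the JCR API transitively -->
<dependency>
  <groupId>org.apache.jackrabbit</groupId>
  <artifactId>jackrabbit-core</artifactId>
  <version>1.4</version>
</dependency>
```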
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200812.mbox/raw/%3Cc3ac3bad0812140802p394a08bah116e372a6d0a8a3@mail.gmail.com%3E/
I want to create a simple program that will output a string to a text file. Below is my code:

import java.io.*;

public class JavaTesting {
    public static void main(String[] args) {
        File myfile = new File("myfile.txt");
        myfile.getParentFile().mkdirs();
        PrintWriter printWriter = new PrintWriter(myfile);
        printWriter.println("Trying to write the txt using Java");
        printWriter.close();
    }
}

But jGRASP is throwing me the following error:

----jGRASP exec: javac -g JavaTesting.java
JavaTesting.java:10: error: unreported exception FileNotFoundException; must be caught or declared to be thrown
 ^
1 error
----jGRASP wedge2: exit code for process is 1.

As I am very new to Java, I am unable to understand the error. Can anybody help me resolve it?

This error occurs because FileNotFoundException is a checked exception and you are not handling it: the compiler requires you to tell it that the PrintWriter(File) constructor can throw a FileNotFoundException, which happens when the file cannot be created or opened. You must either declare the exception on the method with a throws clause or catch it. You can refer to the code below, which catches it. (Note also that myfile.getParentFile() returns null for a bare file name such as "myfile.txt", so the mkdirs() call from the original code is removed here to avoid a NullPointerException.)

import java.io.*;

public class JavaTesting {
    public static void main(String[] args) {
        File myfile = new File("myfile.txt");
        try {
            PrintWriter printWriter = new PrintWriter(myfile);
            printWriter.println("Trying to write the txt using Java");
            printWriter.close();
        } catch (FileNotFoundException ex) {
            // Insert the code to run when the exception occurs
            System.err.println("Could not open file: " + ex.getMessage());
        }
    }
}
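On Java 7 or newer, a try-with-resources statement is a tidier variant of the same fix: it closes the writer automatically even if writing fails partway, while the catch block still satisfies the compiler's checked-exception rule. A minimal sketch (the class and file names are just examples):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.PrintWriter;

public class WriteExample {
    public static void main(String[] args) {
        File myfile = new File("myfile.txt");
        // The writer is closed automatically when the try block exits,
        // whether normally or via an exception.
        try (PrintWriter printWriter = new PrintWriter(myfile)) {
            printWriter.println("Trying to write the txt using Java");
        } catch (FileNotFoundException ex) {
            System.err.println("Could not open file: " + ex.getMessage());
        }
    }
}
```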
https://kodlogs.com/34125/error-unreported-exception-filenotfoundexception-must-be-caught-or-declared-to-thrown
Apple's Macworld Looking To Corporate Users
Zonk posted more than 7 years ago | from the hi-i'm-a-business-mac dept.

Frist post (-1, Offtopic)
Anonymous Coward | more than 7 years ago | (#17476484)

Article text (-1)
Anonymous Coward | more than 7 years ago | (#17476502)

Apple's Macworld opens arms to corporate users
OS X upgrade, new iPod and possibly the oft-rumored iPhone too take center stage
By Jennifer Mears, Network World, 01/04/07
About 400 exhibitors, with more than 100 first-timers, will pack both the north and south halls of the convention center, says Paul Kent, vice president of MacWorld.

Macworld's Kent agrees, noting that the show has increased its offerings for enterprise professionals during the past four years.

Schoun Regan, who owns Mac training firm ITInstruction.com in Lexington, Ky., and is this year's chair of the MacIT Conference, says he's seeing more interest in Apple systems among Windows shops.

"This is because of [Macs now running on] the Intel chipset and because people are understanding that Mac OS X is a robust, scalable and secure operating system," he says.

As a result, IT executives who may be taking a first serious look at Macs should consider Macworld a testing ground, O'Donnell says.

I for one.... (0)
jo_ham (604554) | more than 7 years ago | (#17476538)

I think it's merely speculation at this point though, unless they introduce something for the corporate world that will really make people stand up. They have already started by essentially making all their machines Windows compatible, while still maintaining the OS X train. I think they'd need to introduce something huge to really shake the corporate spenders into moving away from Dell+Windows+Office in the cheapest possible configuration. Who knows?
I seriously doubt it will be an Office suite, put it that way. heh.

Re:I for one.... (0)
Anonymous Coward | more than 7 years ago | (#17476632)

Re:I for one.... (2, Interesting)
Total_Wimp (564548) | more than 7 years ago | (#17476978)

Re:I for one.... (0)
Anonymous Coward | more than 7 years ago | (#17478582)

Plugging an iPod into a Mac and using it as a source hard drive to upgrade that machine to the newest OS or patch level is great but that is not going to be an effective use of resources for an enterprise.

It's hopeless (-1, Flamebait)
realmolo (574068) | more than 7 years ago | (#17476592)

(2, Informative)
MBCook (132727) | more than 7 years ago | (#17476678)

Re:It's hopeless (1)
drinkypoo (153816) | more than 7 years ago | (#17477082)

get your ass sued in a corporate environment.

Re:It's hopeless (0)
Albanach (527650) | more than 7 years ago | (#17477206)

Why would it be a violation of the license to buy a new mac and an OEM copy of Windows - isn't that exactly what OEM copies are designed for, purchase with a new computer?

Re:It's hopeless (1)
drinkypoo (153816) | more than 7 years ago | (#17478638)

copies, Microsoft would just prevent them from getting OEM windows for resale.

Re:It's hopeless (2, Insightful)
AliasTheRoot (171859) | more than 7 years ago | (#17477238)

Re:It's hopeless (1)
Zaurus (674150) | more than 7 years ago | (#17477450)

There is no dispute that most custom business apps are written to Windows...

I'll dispute it. My company's been writing custom business apps on only Linux/BSD/OS X for 6 years now. Never written a single custom business app for Windows. I don't make any claims about the rest of the world, but in my sphere of influence ALL custom apps are NOT "written to Windows."

Flame on!

Apple is nowhere in servers (1)
Animats (122034) | more than 7 years ago | (#17477758)

Re:It's hopeless (5, Insightful)
Ignignot (782335) | more than 7 years ago | (#17476730)

By far the largest cost in IT is man hours.
If you drop those by a little, you can save more than an apple will cost you.

Re:It's hopeless (2, Interesting)
balsy2001 (941953) | more than 7 years ago | (#17476988)

Re:It's hopeless (0)
Anonymous Coward | more than 7 years ago | (#17477488)

Re:It's hopeless (1, Troll)
drinkypoo (153816) | more than 7 years ago | (#17477002)

cost of support because you either need people who know both platforms and are thus ostensibly worth more money (especially if there actually were any real demand for people with mac skills, which we all know there is not) or you need more people. The single biggest cost in the typical windows shops I've seen has been dealing with viruses and malware. But if you lock the systems down a bit, then you can protect them from most of that. Meanwhile the mac simply doesn't serve all your business needs, so you will need something else, and homogeneity makes life MUCH simpler in IT.

Re:It's hopeless (1)
realmolo (574068) | more than 7 years ago | (#17477156)

on the server, which gets backed up. If you need to install software on a machine, you do it with SMS, and don't even have to touch the client machines. If you want to REALLY get crazy, you give everyone a roaming profile, so any machine they login to has all their stuff. For anti-virus, you buy a Fortigate unit to block viruses and spyware at the "gateway" level. The end. Any problems, you just re-image the machine. Yes, it's a lot of work. But it's a one-time thing. And big networks NEED this kind of functionality. Not to mention they probably need Exchange/Outlook, too. I personally think Exchange sucks balls, but it does do a lot of neat stuff, and lots of companies use it. As for "internet servers"...you should use Linux in almost all cases.

Re:It's hopeless (4, Informative)
larkost (79011) | more than 7 years ago | (#17477946)
Mod parent up (0)
Anonymous Coward | more than 7 years ago | (#17478440)

Re:It's hopeless (0)
Anonymous Coward | more than 7 years ago | (#17477960)

Re:It's hopeless (2, Insightful)
towermac (752159) | more than 7 years ago | (#17478052)

Re:It's hopeless (1)
BlowChunx (168122) | more than 7 years ago | (#17478074)

Re:It's hopeless (2, Informative)
dgatwood (11270) | more than 7 years ago | (#17478168)

(1)
otis wildflower (4889) | more than 7 years ago | (#17478574)

Mail.app + iCal + AddressBook + iChat + iSync + internal Here's hoping we see that internal "enterprise

Re:It's hopeless (0)
Anonymous Coward | more than 7 years ago | (#17478382)

Everything you described is available on Mac OS X, and IMHO, easier.

You create a "standard" image of Windows for these machines, and keep the image on the network, and use Ghost (or equivalent) to push images onto the client PCs.

Mac OS X server does this easily, except you don't need Ghost, because you just boot the image over the network. And if you want to change the image, you just change it on the server and reboot the clients. If you just want to install some new software on the clients, you set up a Network Install image and they auto-discover and auto-install it. Or you can use Apple Remote Desktop if you want to schedule it. It's even better than SMS because building your own packages is not a pain in the ass. There's even an Automator action for it.

If you want to REALLY get crazy, you give everyone a roaming profile, so any machine they login to has all their stuff.

How is this crazy? Networked home folders are nothing new. Mac OS X even supports roaming profiles for Windows (it's NT4, not AD, but it still works).

Yes, it's a lot of work.

It's much less work on Mac OS X. I suggest you actually get informed [apple.com] before saying things like "Macs would be somewhat workable if you had a SMALL network, I guess."
Re:It's hopeless (2, Informative)
Graff (532189) | more than 7 years ago | (#17478428)

(1)
swb (14022) | more than 7 years ago | (#17477222)

work or fewer people. And then there are the intangible considerations -- managers who don't want reduced headcount for power/empire reasons, fears of reduced QoS from lower headcounts, more complicated time/personnel management, let alone the challenges of switching a computing platform. And then there's the issues of "general" expenses like power savings that almost nobody notices or cares about except at the most macro level, where switching platforms might not even be noticed as anything other than a statistical aberration.

Wrong, or that can be wrong (1)
goombah99 (560566) | more than 7 years ago | (#17478172)

Thus if it's the IT dept that is advising corporate decision making you get people voting for their jobs and expertise and saying they can't solve the problems on the macs. In reality if they just had a slightly bigger mac IT department that most of the time twiddled its thumbs like the maytag repairman but was ready to fight the big fires, they could overall have a smaller IT dept. There's simply no question that macs are easier to maintain on a day to day basis. But you need the depth of IT staff to fight the big fires and few mac IT depts have that. Over and over I see the same happening to the linux techs who, after being hired for unix, are sucked into the Windows vortex that consumes all IT resources, leading us to want to hire yet another unix tech.

Re:It's hopeless (0, Offtopic)
mcho (878145) | more than 7 years ago | (#17476754)

Oh crap, the flood gates are going to open...head for higher ground!

Re:It's hopeless (5, Insightful)
Anonymous Coward | more than 7 years ago | (#17476880)
Re:It's hopeless (2, Insightful)
JavaLord (680960) | more than 7 years ago | (#17477746)

Re:It's hopeless (1)
PPGMD (679725) | more than 7 years ago | (#17478110)

As a consultant I have dealt with many shops that have one admin type person for up to 100 PC's. It just requires pre-planning and a good initial infrastructure. Of course that won't solve hardware issues, but software issues can be nipped in the bud. For me setting up a new shop of about 100 PC's would be easy. And I could easily have it done by a single person on day to day activities with 3 maybe 4 servers (DC, file server and secondary DC, SMS/AV server, and an ISA server with filtering software to prevent spyware sites). But then again I have years of Windows experience, and enough Mac experience to know my way around them.

Re:It's hopeless (4, Interesting)
armada (553343) | more than 7 years ago | (#17476898)

(0, Insightful)
Anonymous Coward | more than 7 years ago | (#17477454)

Re:It's hopeless (1)
linuxpng (314861) | more than 7 years ago | (#17477464)

mac for work, and there are probably hundreds of little proprietary applications that can't be reproduced that companies use like this. I'll agree once you get a mac that has no hardware problems, you're going to have less work to do than windows. The trick is getting that mac with no hardware problems. If you haven't had one, you're lucky. I've had 5 macs with major hardware issues out of the box....and if you've had them repair your machine flawlessly, you're lucky there too. None of mine have been repaired where they didn't cosmetically damage something or just mess up the repair completely. I just don't expect to see companies dealing with high failure rates. If their published failure rate is low, I would be surprised because I've had many other and off brand PC's that haven't had these issues. Your mileage might have varied, but I can't believe I am the only one this happened to.
Re:It's hopeless (1)
armada (553343) | more than 7 years ago | (#17478416)

Re:It's hopeless (2, Insightful)
dan828 (753380) | more than 7 years ago | (#17478132)

Re:It's hopeless (0)
Anonymous Coward | more than 7 years ago | (#17478152)

Hiperbole = not a word. In English, anyways.

Re:It's hopeless (0, Redundant)
armada (553343) | more than 7 years ago | (#17478650)

Hiperbole = not a word. In English, anyways.

Re:It's hopeless (2, Informative)
enterix (5252) | more than 7 years ago | (#17476942)

(1)
Shivetya (243324) | more than 7 years ago | (#17476986)

(3, Informative)
AliasTheRoot (171859) | more than 7 years ago | (#17477188)

Re:It's hopeless (0)
Anonymous Coward | more than 7 years ago | (#17477278)

Isn't Apple's Open Directory for Mac OS X Server equivalent to Active Directory? Also GPO support is available for Macs via Centrify's DirectControl software for those that insist on living in a Microsoft Active Directory world.

Re:It's hopeless (2, Informative)
UnknowingFool (672806) | more than 7 years ago | (#17477498)

Re:It's hopeless (1)
MPHellwig (847067) | more than 7 years ago | (#17477814)

However some fine tuning will be needed to fully mimic GPO. MS did really a great job there, although GPOs are usually used to prevent uncorporate behaviour like installing unauthorized software, automatic distribution of MSI packages, and logon/off scripts to set resources. But keep in mind that the NT philosophy of user friendly is quite the opposite of unix in general, though MacOSX has made some improvements for the GUI handicapped users.

We just need customers (4, Insightful)
Soong (7225) | more than 7 years ago | (#17476634)

Re:We just need customers (2, Funny)
BlowChunx (168122) | more than 7 years ago | (#17477772)

Re:We just need customers (4, Interesting)
FellowConspirator (882908) | more than 7 years ago | (#17477790)

Mac OWNER, Windows Administrator. (-1, Flamebait)
Anonymous Coward | more than 7 years ago | (#17476704)

I also (used to) maintain Windows networks.
Active Directory, even with its flaws, is so *#%&#% powerful. The tweakery that one can implement is just awesome. The Mac is great. It MAY even work in the Corporate world provided it was used for EMAIL, SURFING and EXCEL. Beyond that, I wouldn't touch it. Of course, if you are only using it for those applications, you just paid an additional 20-40% for the same thing a Windows box can do (yet you lose the administration). The Enterprise world will never touch anything OS X related. It is incompatible with their Enterprise environment. The Medium business class could possibly use it but they don't want to pay for the additional hardware, software (if it exists) or the maintenance cost. The SMALL business could possibly use Macs. I know some small shops but it's always based on Application or Image. Rarely cost.

Re:Mac OWNER, Windows Administrator. (2)
megaditto (982598) | more than 7 years ago | (#17477178)

Totally agree here. OSX, FreeBSD, linux, and OpenVMS are for "n00bs". Everyone knows that the real "l33t h4ck3r admiz" chose Windows.

Re:Mac OWNER, Windows Administrator. (0)
Anonymous Coward | more than 7 years ago | (#17477236)

If desktops, then you are a troll. If servers, why pay more for a piece of hardware with an expensive OS when you can just get FreeBSD. Linux is for the fanboy. BSD is for the paid.

Re:Mac OWNER, Windows Administrator. (0)
Anonymous Coward | more than 7 years ago | (#17477358)

Re:Mac OWNER, Windows Administrator. (0)
Anonymous Coward | more than 7 years ago | (#17477988)

With unlimited clients for around $1000, it's even cheap compared to an average price on a windows XP server environment. nope. Can't see that anyone would ever want to set up something like that in an enterprise environment.

Re:Mac OWNER, Windows Administrator. (0)
Anonymous Coward | more than 7 years ago | (#17478532)

Re:Mac OWNER, Windows Administrator.
(1) Beer_Smurf (700116) | more than 7 years ago | (#17478044) Great strategy (2, Insightful) 140Mandak262Jamuna (970587) | more than 7 years ago | (#17476782):Great strategy (0) Anonymous Coward | more than 7 years ago | (#17476956) Umm, are you sure you're on a Mac? If there's a "Start" button in the lower left corner, the problem might be that you're actually using Windows. Re:Great strategy (1) Andrewkov (140579) | more than 7 years ago | (#17477364) Re:Great strategy (1) 140Mandak262Jamuna (970587) | more than 7 years ago | (#17477514) Re:Great strategy (5, Insightful) Bemopolis (698691) | more than 7 years ago | (#17477518). Hahahaha (0) Anonymous Coward | more than 7 years ago | (#17476784) There's no way they can match the price of a standard PC + Windows, so why bother? The world is bigger than the US, the day Apple realizes that there might be a (very) small hope. Not unless they address Corporate needs (-1, Redundant) Anonymous Coward | more than 7 years ago | (#17476798) Re:Not unless they address Corporate needs (1) EvanTaylor (532101) | more than 7 years ago | (#17477254) I've been doing a project where I may want to setup NFS or kerberos (future planning, nothing like that now) and was just interested in any problems I may run into. Basically I have the option of deploying some cool networking stuff, and may end up doing it as a learning experience. Re:Not unless they address Corporate needs (0) Anonymous Coward | more than 7 years ago | (#17478672) and the easter egg from MacWorld will be some podCAST tools and LDAP proxy that allows you to not have to double bind to get authentication services from AD and policy settings from Open Directory binding. still no DFS which is pathetic. go sell some kids some ipods apple, thats all you are really good at. 
If Apple wants corporate market penetration (1) Asshat Canada (804093) | more than 7 years ago | (#17476820) Think different; Just Say No to Apple (2, Informative) micromuncher (171881) | more than 7 years ago | (#17476932):Think different; Just Say No to Apple (1) micromuncher (171881) | more than 7 years ago | (#17478312) Where's the Windows AD Integration? (4, Insightful) Nutsquasher (543657) | more than 7 years ago | (#17477028). Re:Where's the Windows AD Integration? (1, Informative) Anonymous Coward | more than 7 years ago | (#17477272) Re:Where's the Windows AD Integration? (2, Insightful) SlamMan (221834) | more than 7 years ago | (#17477458) It's definitely a step in the right direction though. Re:Where's the Windows AD Integration? (1) PPGMD (679725) | more than 7 years ago | (#17477782) Re:Where's the Windows AD Integration? (5, Insightful) SlamMan (221834) | more than 7 years ago | (#17477412) :Where's the Windows AD Integration? (1) PPGMD (679725) | more than 7 years ago | (#17477902) Most of us are trying to cut down the crap we have to carry with us. With older laptops all I had to do was throw in the console adapter and console cable of the right type for the router and I was good to go. Now I have to carry a half dozen little adapters and such to do the same job. But I can't really blame Mac except for the removal of the modem, most PC makers are removing those ports. Re:Where's the Windows AD Integration? (1) Guy Harris (3803) | more than 7 years ago | (#17477468) How is Samba involved with this? (Only a tiny amount of OS X's client-side code to handle Microsoft protocols comes from Samba.) Re:Where's the Windows AD Integration? (0) Anonymous Coward | more than 7 years ago | (#17478084) Item (3). My MacBook battery lasts longer than my work-issued Dell Latitude by a long shot. The Dell has dark keys with blue fonts for alternate Fn features. Try to see those in low light. My MacBook has keys that light up automatically in low light.
Given a choice, I'd rather have the MacBook Pro in lieu of the Dell for work any day. Yep & you're also eligible for upgrade pricing (1) HABITcky (828521) | more than 7 years ago | (#17478380) Apple needs to offer more flexibility for business (-1, Flamebait) snuf23 (182335) | more than 7 years ago | (#17477080) We use a lot of Macs at the office but Apple's so called "Enterprise" options are a joke compared to major vendors such as HP, Dell, IBM or Sun. Re:Apple needs to offer more flexibility for busin (1, Informative) Anonymous Coward | more than 7 years ago | (#17477604) Re:Apple needs to offer more flexibility for busin (1) Ramble (940291) | more than 7 years ago | (#17478400) Re:Apple needs to offer more flexibility for busin (0) Anonymous Coward | more than 7 years ago | (#17477796) Re:Apple needs to offer more flexibility for busin (1) snuf23 (182335) | more than 7 years ago | (#17478728) Heh (0) Anonymous Coward | more than 7 years ago | (#17477104) He made a funny. Mac in widespread enterprises will happen when hell freezes over (i.e. Linux becomes widespread in home use) I'm a Mac... (0) Anonymous Coward | more than 7 years ago | (#17477106) From the Apple adverts I'm under the impression Macs can only blog and print photos. Maybe make a home movie or two... Other than resulting in all system administrators suddenly becoming good looking, young, thin and trendy, I don't see what real use Apple systems have in a corporate setting. Re:I'm a Mac... (1) Lord of Hyphens (975895) | more than 7 years ago | (#17478190) Bad Apples (0) Anonymous Coward | more than 7 years ago | (#17477116) [malfy.org] Nitpick on term "consumers" (1) noidentity (188756) | more than 7 years ago | (#17477138) Unless they're now catering to people who don't "consume" their computers, it's still a consumer-oriented show, only now they are including corporate (would-be) consumers. Hmmm, corporate consumers... a literal one of those would be nice to have around. 
Apple won't go anywhere unless (2, Insightful) Anonymous Coward | more than 7 years ago | (#17477194). Headless desktop? (0, Flamebait) Joe The Dragon (967727) | more than 7 years ago | (#17477226) Apple needs a headless desktop with desktop parts, and the Mac Pro costs too much for basic desktop uses. Also, APPLE, IF YOU REALLY WANT TO GET INTO THE CORPORATE MARKET, COME OUT WITH MAC OS X FOR ALL HARDWARE! Re:Headless desktop? (1) BasilBrush (643681) | more than 7 years ago | (#17477434) Re:Headless desktop? (1) slide-rule (153968) | more than 7 years ago | (#17477618) Re:Headless desktop? (1) DDLKermit007 (911046) | more than 7 years ago | (#17478196) Exchange (1) chiller2 (35804) | more than 7 years ago | (#17477384) Re:Exchange (0) Anonymous Coward | more than 7 years ago | (#17478596) They're running OS X Server with Kerio MailServer, which supports OTA sync with any device that uses ActiveSync. Sure, it's third party, but if Kerio can do it, Apple can do it. My rather large lumbering employer (1) gelfling (6534) | more than 7 years ago | (#17477554) If you don't believe this then why is so much IT work going to India and South America, where the pure productivity derived from projects that have to connect and communicate North America with these locations is so much worse, and so popular at the same time? Corporate car fleets are cheap-ass Fords, not Camrys. We should learn from this example. Are you guys crazy? (1) Octatonic (808510) | more than 7 years ago | (#17477656) Re:Are you guys crazy? (1) micromuncher (171881) | more than 7 years ago | (#17478116) Apple also got out of discounting hardware to developers... but that's another story. gn4a (-1, Offtopic) Anonymous Coward | more than 7 years ago | (#17478226) Group policies vs workgroup manager (1) zerofoo (262795) | more than 7 years ago | (#17478372) Love Microsoft or not, Group Policies rock. They are very flexible, and can tweak very detailed settings right out of the box.
You can even make custom ADM templates if you are so inclined. Workgroup manager is a start, but it is not very flexible (no ability for machine specific settings VS user specific settings). I expect OD and AD integration to keep getting better, but as it stands now, it isn't really ready for enterprise use. Still, Microsoft should look over its shoulder. Apple is coming to eat Redmond's lunch. The next few years should be fun to watch. -ted
http://beta.slashdot.org/story/78042
ASP.NET Core is still pretty new – at the time of writing, it's still only at Release Candidate 1. I downloaded it for the first time a few days ago to play with the sample projects, and was surprised (in a good way) by how much has changed in the default project for MVC6. Of course the standard way of using Models, Views and Controllers is still similar to how it was in recent versions of MVC – but the project infrastructure and configuration options are unrecognisably different (at least to me). One of the first things I do when I set up a new project is configure the instrumentation – namely logging. I'd read that a new feature of ASP.NET Core is that it provides built-in interfaces for logging – ILogger and ILoggerFactory. This is a nice feature and provides me with an opportunity to write cleaner code. In previous versions of MVC, if I'd injected a logger interface into my controller classes, I still needed to introduce a dependency on a 3rd party library to every class that used this interface. So even though I'm injecting a dependency using an interface, if I changed logging library, I'd have to modify each of these classes anyway. Of course I could write a wrapper library for my 3rd party logging library, but I'd prefer not to have to write (and test) even more code. Having the logging interface built into the framework gives me the opportunity to clean this up. So if I now want to add logging to my controller, I can write something like the code below. You can see this doesn't have a dependency on a 3rd party library's namespace – just a namespace provided by Microsoft.
using Microsoft.AspNet.Mvc;
using Microsoft.Extensions.Logging;

namespace WebApplication.Controllers
{
    public class HomeController : Controller
    {
        private ILogger<HomeController> _logger;

        public HomeController(ILogger<HomeController> logger)
        {
            _logger = logger;
        }

        public IActionResult Index()
        {
            _logger.LogInformation("Home controller and Index action - logged");
            return View();
        }
    }
}

For this post, I created a default MVC6 project and modified the HomeController to match the code above – the only additions are the logger field, the constructor parameter, and the LogInformation call. So how can we integrate third-party libraries into an MVC6 project?

Configure the default ASP.NET MVC6 project to use NLog

Let's configure NLog first.

1. First we'll need to install a pre-release NuGet package: Install-package NLog.Extensions.Logging -pre
2. Then we need to add a configuration file – nlog.config – to the root of our project. You can get a perfect example from GitHub here – just remember to change the file locations in this config file to directories that exist in your environment.
3. Finally, modify the Startup.cs file's Configure method by adding a couple of lines of code.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddNLog();
    env.ConfigureNLog("nlog.config");
    // ...rest of the default Configure method...
}

Now just run the project – notice I didn't need to make any changes to my HomeController class. My project created a log file named "nlog-all-2016-03-27.log" which has the text:

2016-03-27 00:27:29.3796|WebApplication.Controllers.HomeController|INFO|Home controller and Index action - logged

Configure the default ASP.NET MVC6 project to use Serilog

Let's say for whatever reason – maybe you want to use message templates to structure your logging data – you decide that you'd prefer to use the Serilog library instead of NLog. What changes do I need to make to my project to accommodate this?
Previously, if I'd wanted to change logging library, I'd have had to change every class that logged something – probably remove a namespace inclusion of "using NLog" and add a new one of "using Serilog", and maybe even change the methods used to log information. But with ASP.NET Core, I don't need to worry about that.

1. First I need to install a pre-release NuGet package for Serilog: Install-package Serilog.Sinks.File -pre
2. Next, I need to modify the Startup.cs file in a couple of places – the first change goes into the Startup method:

public Startup(IHostingEnvironment env)
{
    // For Serilog
    Log.Logger = new LoggerConfiguration()
        .WriteTo.File(@"C:\users\jeremy\Desktop\log.txt")
        .CreateLogger();
    // ...rest of the default Startup method...
}

The next change goes into the Configure method:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddSerilog();
    // ...rest of the default Configure method...
}

That's it – after running the project again, I had logs written to the file at C:\users\jeremy\Desktop\log.txt, showing the entry:

2016-03-27 00:01:46.923 +00:00 [Information] Home controller and Index action - logged

Obviously I can also safely remove the NLog packages and configuration at this point.

Conclusion

So you can see the new ASP.NET Core framework has made it super easy to swap out logging library dependencies. A big advantage for me is that the logging interface used by each file is now part of the framework that Microsoft provides, which means my classes aren't tightly coupled to an implementation.
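As an aside (not in the original post): the nlog.config referenced in the NLog steps follows NLog's standard XML schema. A minimal sketch that would produce the daily "nlog-all-<date>.log" file and the pipe-separated layout shown above might look like the following — the target name and path are illustrative placeholders, and the post's GitHub link has a fuller example:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- one file per day, e.g. nlog-all-2016-03-27.log -->
    <target xsi:type="File" name="allfile"
            fileName="c:/temp/nlog-all-${shortdate}.log"
            layout="${longdate}|${logger}|${uppercase:${level}}|${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="allfile" />
  </rules>
</nlog>
```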
https://jeremylindsayni.wordpress.com/2016/03/27/how-to-use-nlog-or-serilog-with-c-in-asp-net-core/
Hello, we are evaluating APR as a possible base for a multi-platform server project. I have successfully compiled and executed the following APR "Hello world!" program under Unix:

#include <stdio.h>
#include <apr_general.h>

int main(int argc, const char *argv[])
{
    apr_status_t rv;
    rv = apr_initialize();
    printf("Hello APR world!\n");
    apr_terminate();
    return 0;
}

Now I try to compile this with Visual Studio 8 (2005). I have successfully built the APR libs and the test suite with Studio. But when compiling the above program, I get the following error message:

------ Build started: Project: TestAPR, Configuration: Debug Win32 ------
Compiling...
try-helloworld-APR.c
d:\apr\apr\include\apr_errno.h(52) : error C2061: syntax error : identifier 'apr_strerror'
d:\apr\apr\include\apr_errno.h(52) : error C2059: syntax error : ';'
d:\apr\apr\include\apr_errno.h(52) : error C2059: syntax error : 'type'

The respective line in apr_errno.h reads

APR_DECLARE(char *) apr_strerror(apr_status_t statcode, char *buf, apr_size_t bufsize);

It seems that the macro APR_DECLARE is not defined. I have spent a day reading all the available documentation on APR and have searched the mailing list archive for messages about how to compile with APR under Windows, but I just could not find out what settings I have to specify to use the APR libs with Visual Studio. The page just says

> Integrating the Library
>
> We should tell ya'll 'bout this, no? :-)

Can you please help me? Any hints are highly appreciated.

Thank you and kind regards,
Joachim
http://mail-archives.apache.org/mod_mbox/apr-dev/200801.mbox/%3C47999FAB.5010901@mpi-sb.mpg.de%3E
When I was at Tech-Ed in Barcelona recently I met Corey Hynes, who looks after building a lot of the labs for these events and some of our internal training too. If you were at Tech-Ed and saw Windows 2008 R2 on our stand there, you were seeing some of Corey's work. He builds a lot of virtual machines and asked if I could add a feature to the codeplex library for Hyper-V, which will show up in the next version I post there. I was looking at the way he distributes his VMs and added a second thing to the library. Without giving all of his trade secrets away, Corey builds VMs making the maximum use of differencing disks, so if he has 4 machines running the same OS he will have 1 parent and 4 differencing VHDs. This doesn't give the best performance*, but these small-scale environments don't need it. Corey also makes a lot of use of snapshots to allow a VM to be rolled back – or rolled forward to the state at the end of a lab – again something you'd avoid in production to get the best performance. His technique is to export each VM, remove the duplicated parent VHD, make a secondary copy of the description files (which are destroyed in the import process), and then compact the whole lot. If anything goes wrong with the VM on the target computer, it can be deleted and re-imported just by unpacking the import description files. So I thought it would be a good idea to allow my Import-VM function to preserve the files; the line of code it needs is

if ($Preserve) {Add-ZipFile "$path\importFiles.zip" "$Path\config.xml","$path\virtual machines"}

Add-ZipFile ZipName FileNames is all very well as a function specification, but how do you write it? I'm told that, for licensing reasons, the only way to access ZIP files that Windows provides is via Explorer, so the technique is to create an empty ZIP file and then tell Explorer to add the files to it. Here's the code to make a new, empty, Zip file.
Function new-zip {
    Param ($zipFile)
    if (-not $ZipFile.EndsWith('.zip')) {$ZipFile += '.zip'}
    set-content $ZipFile ("PK" + [char]5 + [char]6 + ([string][char]0) * 18)
}

As you can see, a 22-character header marks a file as a ZIP file. The code below adds files to it.

Filter Add-ZIPfile {
    Param ($zipFile=$(throw "You must specify a Zip File"), $files)
    if ($files -eq $null) {$files = $_}
    if (-not $ZipFile.EndsWith('.zip')) {$ZipFile += '.zip'}
    if (-not (test-path $Zipfile)) {new-zip $ZipFile}
    $ZipObj = (new-object -com shell.application).NameSpace(((resolve-path $ZipFile).path))
    $files | foreach {
        if ($_ -is [String]) {$zipObj.CopyHere((resolve-path $_).path)}
        elseif (($_ -is [System.IO.FileInfo]) -or ($_ -is [System.IO.DirectoryInfo])) {$zipObj.CopyHere($_.fullname)}
        start-sleep -seconds 2
    }
    $files = $null
}

The key thing is that the Shell.Application object has a NameSpace method which takes a path and returns a folder or ZIP file as a namespace. The namespace has a "CopyHere" method, so the logic is: check for one or more file(s) passed as a parameter or via the pipe. Check that the Zip file name ends with .ZIP, and if it doesn't, add the .ZIP extension. If the file doesn't exist, create it as an empty ZIP file. Get a namespace for it and call the CopyHere method for each file passed. (If the file was a name, resolve it to a full name; if it is an object, get the full name from the object.) Easy. Now this led me to explore the Shell.Application object more, but I'll make that another post.

* Update: Corey pointed out that by sharing a VHD containing the common files, you maximize the benefit of any read cache. Differencing disks (including the ones used for snapshots) are extended when blocks on the parent change; that's the slow bit.
In a workload with few changes to the disk, a differencing disk can work out faster.

Update 2: Corey was way too polite to mention I'd misspelled his name! I've put that right.
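A side note not in the original post: the 22-character header new-zip writes ("PK", characters 5 and 6, then eighteen zero characters) is exactly an empty end-of-central-directory record, which is why Explorer — and other ZIP tools — accept the file as a valid, empty archive. Python's standard zipfile module can be used to check this:

```python
import io
import zipfile

# The same 22 bytes new-zip writes: "PK" + chr(5) + chr(6) + 18 zero bytes.
empty = b"PK\x05\x06" + b"\x00" * 18

assert len(empty) == 22
with zipfile.ZipFile(io.BytesIO(empty)) as zf:
    print(zf.namelist())  # [] -- a well-formed archive containing no files
```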
http://blogs.technet.com/b/jamesone/archive/2008/12/08/borrowing-from-windows-explorer-in-powershell-part-1-zip-files.aspx
Apache HTTP Server Request Library

#include "apreq.h"
#include "apr_buckets.h"

Brief descriptions of the param functions declared in this header:

- Gets the character encoding for this parameter.
- Sets the character encoding for this parameter.
- Url-decodes a name=value pair into a param. Returns APREQ_ERROR_BADSEQ or APREQ_ERROR_BADCHAR on malformed input.
- Url-encodes the param into a name-value pair.
- Creates a param from name/value information.
- Turns off the taint flag.
- Sets the tainted flag.
- Returns an array of parameters (apreq_param_t *) matching the given key. The key is case-insensitive.
- Returns a ", "-joined string containing all parameters for the requested key, or an empty string if none are found. The key is case-insensitive.
- Parses a url-encoded string into a param table.
- Returns the first param in req->body which has both param->v.name matching key (case insensitive) and param->upload != NULL.
- Returns a table of all params in req->body with non-NULL upload brigades.
- Upgrades args and body table values to apreq_param_t structs.
http://httpd.apache.org/apreq/docs/libapreq2/apreq__param_8h.html#0f1db12120bb2307f5e33186f094b0d6
I have exaaactly the same problem. Help?

Tried 2 diff languages, JS and PHP, same algorithm, but got 2 diff results. In JS I wasn't able to pass 91%, in PHP I got 100%. After submitting the JS version it kept failing step 02.

Hi, I'm stuck on this problem with C#... I never had to optimize my code like that before, and I don't know why it's too slow. Can you help me? Edit: Solved. P.S. I'll delete my code after a hint.

What's wrong with:
int[] array = new int[N]; <-- You don't need to resize it anymore after that
[...]
Array.Sort(array);

Thank you, I thought we couldn't do that. I still have an error, but not because my sort takes too long; I'll search for why ^^

On the last test my answer is always 70... What did I do wrong?

actuel = temp should be outside the if statement

It's driving me crazy. Java implementation with TreeSet and a single "for" loop, still does not validate test #6 (Horses in disorder). Anyone with a clue about this?

Use a TreeSet to store the Integer values. TreeSets are ordered automatically by their natural ordering. Furthermore, TreeSets do not allow duplicate entries; the add() method returns false if the value is already present in the set, which makes the rest of the process unnecessary (as equal Integers have a difference of 0). Regarding the use of a regular for-loop to compare values: you might consider using a ListIterator, as all the Pi[n] elements are in the for-loop scope until the end of the loop. Although I must admit that I haven't tested the difference. Good luck!

Hi everyone, I am stuck on this puzzle, and since I can't access its variables and other stuff, I am stuck at a whopping 90% coverage. Any help will be appreciated. Thanks, Suraj

Hey, I can't pass the Horses in disorder test either, and I can't find anything wrong with my code.
Any help is appreciated. Python 3:

n = int(input())
s = set()
for i in range(n):
    p = int(input())
    s.add(p)
ss = sorted(s)
if len(ss) > 1:
    min = ss[1] - ss[0]
    for i in range(2, len(ss)):
        if ss[i] - ss[i-1] < min:
            min = ss[i] - ss[i-1]
    print(min)
else:
    print(0)

I'm getting the same thing.

You defined your initial input collection as a set() and not a list(). What are the two main differences between a set and a list?

I covered the case where there are only horses with the same power with if len(ss) > 1; I can't think of any other case where having 2 or more horses with the same power would make a difference. Obviously I am missing something, but I have no idea what it is.

If you had 2 or more horses with the same power, what would their difference be? How would they appear in your set() before getting sorted? len(ss) is going to tell you how many horses there are in the set you have converted to a sorted list. But it doesn't tell you how many horses there are with the same value, if any. Try creating a custom test case with a list of horses like 5, 6, 8, 6, 3, 1. What should the answer be?

Why can't I see the solutions from other members? They are all locked.

You need to solve the puzzle first.

Hey guys, any suggestions on how I can improve this code to make it run faster? Thank you, much appreciated!

import java.util.*;
import java.io.*;
import java.math.*;

/**
 * Auto-generated code below aims at helping you parse
 * the standard input according to the problem statement.
 **/
class Solution {
    public static void main(String args[]) {
        Scanner in = new Scanner(System.in);
        int N = in.nextInt();
        int[] array = new int[N];
        int D = 100000, sub = 0;
        for (int i = 0; i < N; i++) {
            array[i] = in.nextInt();
        }
        for (int j = 0; j < (N-1); j++) {
            for (int k = 1; k < N; k++) {
                if (array[j] > array[k]) sub = array[j] - array[k];
                else sub = array[k] - array[j];
                if ((sub < D) && (sub != 0)) D = sub;
            }
        }
        System.out.println(D);
    }
}

Sort the array so you don't have two for loops.
I'm writing this in Java. At first I essentially used a brute force algorithm. first 2 cases are fine. Second case of course will time out. because i'm comparing every possible combination pair. So this leads to n + n+1 +n+2... (geometric sequence)=1/2n(n+1)=(n^2+n)/2 so is O(n)=n^2. with n=99999 will yield a very large number.. ok. no big deal 2nd method is to write pretty much a "quick sort" recursive algorithm function that sorts the data in order in average case n*log(n) time then there is only n-1 (hence linear time which is FAST) amount of pairs. first 2 cases are fine. now this time there is an out of memory heap error. I use 2 arrays (Not ArrayLists) where the function returns L * Pivot * R such that L and R are arrays and Pivot is one inger where the base cases are size 2 and 1 that swap positions. If I was writing this in C++ I would be able to "deallocate" my temporary Left and Right Arrays. However Java prides itself of no having to worry about heap memory and have been trouble trying to explicitly call the "garbage collector" at the end of the recursive function. Could anyone who was successful in java be so kind briefly help or tell me how they did it. Thank you in advance. I think your solution might be too complex. Remember this is an easy puzzle. There is a O(n) solution. 5 lines will be sufficient.
http://forum.codingame.com/t/horse-racing-duals-puzzle-discussion/38?page=10
I'm interested in a wide variety of problems involving modeling, strategy, and simulation. I'd heard that it takes seven shuffles to randomize a deck of cards, and I finally came across the article that is the source of this urban legend - Bayer and Diaconis (1992). For an excellent discussion of the problem, see How to Win at Poker, and Other Science Lessons. As you might expect, the measurement of randomness is key. Trefethen and Trefethen (2000) used a different measurement and determined that at six shuffles, the information content of the deck reaches a value near its minimum. Also importantly, Bayer and Diaconis argue that the deck remains essentially ordered until the fourth shuffle, when randomness increases sharply. Trefethen and Trefethen argue that by the information content definition, the randomness increases from the first shuffle, and continues more or less linearly until the fifth shuffle.

from riffle import riffle, count_rises
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

n_shuffles = 12
deck = np.arange(52)
for i in range(1, n_shuffles+1):
    riffle(deck)
    plt.subplot(3, 4, i)
    plt.plot(np.arange(52), deck, 'o')
    plt.xticks([])
    plt.yticks([])
    if i == 1:
        plt.xlabel('1 shuffle')
    else:
        plt.xlabel('{} shuffles'.format(i))
    plt.xlim(0, 53)
    plt.ylim(0, 53)
plt.suptitle('starting position (x) vs current position (y) after n shuffles', y=1.03)
plt.tight_layout()
plt.savefig('fig1.png', dpi=600, bbox_inches='tight')

# Figure 1: Starting position vs. ending position for a single deck of cards shuffled up to 12 times.
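The riffle and count_rises helpers come from a local module that isn't shown in the notebook. As a rough stand-in, a Gilbert–Shannon–Reeds-style riffle (cut the deck binomially, then drop cards with probability proportional to packet size) could be sketched like this — an assumption about what riffle does, not the author's code:

```python
import random

def gsr_riffle(deck):
    """In-place Gilbert-Shannon-Reeds riffle shuffle (a sketch).

    Cut the deck at a Binomial(n, 1/2) point, then interleave the two
    packets: the next card falls from a packet with probability
    proportional to that packet's remaining size.
    """
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2)
    left, right = list(deck[:cut]), list(deck[cut:])
    out = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    deck[:] = out  # slice assignment works for lists and numpy arrays alike

deck = list(range(52))
gsr_riffle(deck)
print(sorted(deck) == list(range(52)))  # True -- still a permutation
```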
At three shuffles, the deck looks pretty random, although you can still see a few isolated "rising sequences" - places at which you're likely to see a run of sequential cards. At n=5, the deck appears about as random as it's going to get.

deck_size = 52
n_shuffles = 12
k = 1000
results = np.zeros((n_shuffles, deck_size+1))
for j in range(n_shuffles):
    for i in range(k):
        deck = np.arange(deck_size)
        riffle(deck, j+1)
        rises = count_rises(deck)
        if np.shape(rises)[0] > 0:
            results[j, rises[:,0]] = results[j, rises[:,0]] + rises[:,1]
results = results / k

random_results = np.zeros((n_shuffles, deck_size+1))
for j in range(n_shuffles):
    for i in range(k):
        deck = np.arange(deck_size)
        np.random.shuffle(deck)
        rises = count_rises(deck)
        if np.shape(rises)[0] > 0:
            random_results[j, rises[:,0]] = random_results[j, rises[:,0]] + rises[:,1]
random_results = random_results / k

nshow = 6
plt.figure(figsize=(20,3))
for i in range(n_shuffles):
    plt.subplot(1, n_shuffles+1, i+1)
    plt.bar(np.arange(1, nshow), results[i, 1:nshow])
    plt.ylim((0, 1.1*np.max(results)))
    plt.xticks(np.arange(1, nshow).astype(int), np.arange(2, nshow+1).astype(int))
    plt.bar(np.arange(1, nshow), random_results[0, 1:nshow], color='red')
    plt.title(str(i+1))
    if i == 0:
        plt.ylabel('rising sequences per deck')
    if i > 0:
        locs, labels = plt.yticks()
        plt.yticks(locs, [])
plt.savefig('fig2.png', dpi=600)

# Figure 2: Probability distributions of "rising sequences" in a shuffled deck (blue)
# vs. a randomized deck (red). Blue has far more rising sequences than one would
# expect through about 7 shuffles. X axis is the length of rising sequences
# (e.g., 1-2-3 has a length of 3). Even a randomized deck is likely to have
# a small rising sequence (9-10).

But that's not the whole story. Figure 2 shows the distribution of these rising sequences (blue) in comparison to one drawn from a true randomization (red). The distribution of blue doesn't approach the distribution of red until about 8 or 9 shuffles.
Before that, you'll find more rising sequences in the deck than you'd expect.

n_shuffles = 12
n_decks = 10000
top_card = 255*np.ones((n_decks, n_shuffles), dtype=np.uint8)

# Shuffle each deck, and locate where the top card went. Store that result
# in an array.
for i in range(n_decks):
    deck = np.arange(52)
    for j in range(n_shuffles):
        riffle(deck)
        top_card[i,j] = np.where(deck == 0)[0][0]

for i in range(n_shuffles):
    plt.subplot(3, 4, i+1)
    hist, bin_edges = np.histogram(top_card[:,i], bins=np.arange(0.5, 51.5, 1), density=True)
    plt.plot(hist)
    plt.xticks([])
    plt.yticks([])
    if i == 0:
        plt.xlabel('1 shuffle')
    else:
        plt.xlabel('{} shuffles'.format(i+1))
plt.tight_layout()
plt.suptitle('probability location of top card after n shuffles (10,000 deck simulation)', y=1.03)
plt.savefig('fig3.png', dpi=600, bbox_inches='tight')

# Figure 3 below:
Figure 3 shows the results of tracking where the card that started on top ended up. At n=1 shuffles, there's roughly a 50% chance that the new top card is the original top card. When the deck is truly random, it should be equally likely that the original top card is at any position in the deck (1/52 chance of being anywhere, or about 1.923%). The probability distribution is certainly in that range at n=5 (varying from about 1% to 4%), but is still obviously oriented towards the top of the deck. Only at thirteen shuffles does the p value for a correlation between the original order and the distribution go above 0.05 (n=12 was p=.02, by the way). Statistical tests aside, the remaining order is pretty easy to pick out visually up to this point by the slope of the line. (Note the extent of the y-axis changes as you progress through the shuffle to highlight the remaining structure!) These results were based on a simulation of 1 million decks.

So, as you might expect, the number of shuffles to randomness depends on your definition of randomness. If I were betting on the top card, my chances of winning (though small for n>4) are better than average all the way up until the twelfth or thirteenth shuffle!

References

Bayer, D., & Diaconis, P. (1992). Trailing the dovetail shuffle to its lair. The Annals of Applied Probability, 2(2), 294-313.

Gilbert. (1995). Theory of shuffling (Technical memorandum). Bell Laboratories.

Trefethen, L. N., & Trefethen, L. M. (2000). How many shuffles to randomize a deck of cards? Proceedings of the Royal Society of London: A, 456(2002), 2561-2568.
https://nbviewer.org/github/thomaspingel/riffle-shuffle/blob/master/A%20Quick%20Look%20at%20Shuffling.ipynb
The DECnet-Plus software supports a new address format, the OSI addressing format. OSI addresses (NSAPs) can be longer than Phase IV addresses, or they can fall within the limits of Phase IV addressing. OSI addresses that fall within the limits of Phase IV addressing are referred to as Phase IV-compatible addresses. DECnet Phase V addresses that fall outside Phase IV address space are referred to as extended addresses. Refer to Section 4.5 for more information.

Making communication possible between a Phase IV node and a DECnet-Plus system depends on the address of the DECnet-Plus system and on your routing configurations. You have the following options for assigning addresses to DECnet-Plus systems:

The DECnet Phase IV network-address format consists of 16 bits (2 bytes) of information: a 6-bit area number and a 10-bit node number. This format limits the network to 63 areas and a maximum of 1023 nodes per area. In contrast, the OSI address format can be up to 20 bytes long, thus extending network addresses beyond Phase IV limits. Table 1-1 lists the differences between Phase IV and OSI addresses. Figure 1-1 shows the address parts listed in Table 1-1. For a complete description of OSI addresses and their individual parts, see Chapter 4.

Figure 1-1 Examples of Phase IV and DECnet Phase V Addresses

Phase IV nodes can communicate with DECnet-Plus systems only if the DECnet-Plus systems have Phase IV-compatible addresses. A Phase IV-compatible address is an OSI address that has a Phase IV address encoded within it. You can use link state routing with Phase IV-compatible addressing. Phase IV-compatible addresses continue to be suitable for many DECnet Phase V networks; using them can ease migration to link state routing. You can choose to use Phase IV-compatible addressing until all, or most, network servers and services are running DECnet-Plus.
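The 16-bit Phase IV address described above packs the area number into the high 6 bits and the node number into the low 10 bits (address = area x 1024 + node); Phase IV systems also derive a LAN MAC address from it by appending the two address bytes, low byte first, to the well-known DECnet prefix AA-00-04-00. This arithmetic is not spelled out in the excerpt above, so the following sketch is an illustration rather than part of the original document:

```python
def phase4_address(area, node):
    """Pack a Phase IV area.node pair into its 16-bit address.
    Valid areas are 1-63 (6 bits); valid nodes are 1-1023 (10 bits)."""
    if not (1 <= area <= 63 and 1 <= node <= 1023):
        raise ValueError("area must be 1-63 and node must be 1-1023")
    return (area << 10) | node          # same as area*1024 + node

def phase4_mac(area, node):
    """DECnet Phase IV style MAC address: the AA-00-04-00 prefix
    followed by the 16-bit address in little-endian byte order."""
    addr = phase4_address(area, node)
    return "AA-00-04-00-%02X-%02X" % (addr & 0xFF, addr >> 8)
```

For example, address 1.1 packs to 1025 (0x0401), giving the MAC address AA-00-04-00-01-04.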
1.3.2 Advantages of Using OSI Addresses That Are Also Phase IV Compatible

Use Phase IV-compatible addresses for DECnet-Plus systems that:

Use Phase V addresses that are larger than Phase IV limitations if one or more of the following apply to your network:

You can assign more than one address to a system. This practice is called multihoming. For example, you can give any DECnet-Plus system both a Phase IV-compatible OSI address and an extended address. The benefit is that you can communicate with Phase IV, Phase V, and other OSI systems. DECnet-Plus allows you to assign to a node as many as three addresses.

Multihoming also makes it easy for you to belong to more than one OSI network. This feature is particularly useful when you want to combine networks. Rather than have all the systems in both networks get new addresses that reflect the new combined network, the systems that need to participate in both networks can have an address in each one.

1.3.5 Autoconfiguration of Addresses

Two ways exist to configure network addresses: autoconfiguring them or manually configuring them. Autoconfiguration means that the adjacent router configures an end node's network address. This is the easier way to configure NETs. If you have a DECnet Phase V router adjacent to your system (on the same LAN or connected to your system by a point-to-point link), you can let the router configure your network addresses for you.

DECnet-Plus end systems can communicate with both DECnet Phase V routers and Phase IV routers. For Phase IV compatibility, DECnet Phase V routers can communicate in Phase IV routing packet format. While running the Phase IV algorithm, DECnet Phase V routers provide full routing capability for:

Like all DECnet-Plus systems, DECnet Phase V routers support OSI addressing. You can set these systems to use either DECnet Phase V link state or Phase IV routing vector for routing on either level 1 or level 2, depending on your network's needs.
Note that you can also choose between using a dedicated router in your network to perform routing and configuring your system as a host-based router. You can configure end systems with DECnet Phase V addresses beyond the limits of Phase IV addressing if your routing infrastructure supports it. For complete information about routing topology choices, see the documentation set of your routing product.

1.4.1 Interdomain Routing

DECnet-Plus supports connections to other networks through interdomain routing, the ability for a network (one routing domain) to connect to a different routing domain. The IDP of an OSI address provides a unique identifier for the network. The ISO protocols and the OSI addressing scheme enable global multivendor interoperability. When planning your transition from Phase IV to Phase V, allow for connections to expand to a global network, if appropriate for your enterprise. Also consider your network's accessibility and security needs if it were to become part of a global network.

With interdomain routing, your network can connect either to another network that is based on the DIGITAL Network Architecture (DNA) or to a network with a multivendor routing scheme. The two connected networks remain distinct. Communication takes place:

Setting DNA neighbor to false ensures that your network passes no routing information to the other. However, this configuration may not provide sufficient security for your network, because it does not stop any packets that the other network sends to systems in your network. For information about setting up connections between networks, see your network management guide.

1.4.2 Level 2 Routing Between Phase IV and Phase V Areas

All level 1 routers within an area must use the same routing protocol. At level 2, however, you can have a mix of routing vector and link state. Level 2 routers running different protocols communicate through interphase links.
An interphase link directly connects a level 2 router using routing vector with a level 2 router using link state. DECnet Phase V routers running link state use reachable-address tables to manually configure routing information across the link. For information about how to use interphase links and reachable-address tables, see your network management guide. For further details, see the router management documentation.

1.4.3 Multivendor Routers

If your network includes multivendor OSI-compliant routers, they might not support one or more of the following DECnet-Plus features:

1.5 Name Services and Time Service Considerations

An important part of planning for migration to DECnet-Plus is planning for your name services and for the DIGITAL Distributed Time Service (DECdts). The DECdns distributed namespace is no longer a requirement for DECnet-Plus, and the Local namespace is not dependent on DECdns. However, the DECdns clerk software is still required on each node.

DECnet-Plus provides access to the node name and addressing information stored in one or more name services. DECnet-Plus supports the following name services: the Local namespace, DECdns, and the Domain Name System (DNS/BIND). While configuring DECnet-Plus, the system administrator specifies one or more of these name services to use on the node: the Local namespace, DECdns, or Domain.

1.5.1 The Local Namespace

The Local namespace is a discrete, nondistributed namespace that exists on a single node and provides that node with a local database of name and addressing information. Depending on the number of address towers stored, the Local namespace is designed to scale to at least 100,000 nodes. The prefix LOCAL: (or local:) is reserved to indicate that the information for the node is stored in the Local namespace. You cannot use the DECdns Control Program (DNSCP) to manage information stored in the Local namespace. Instead, use decnet_register to manage the node name and address information stored in your namespace.
The new decnet_register tool is described in your network management guide.

1.5.2 The Name Service Search Path

At configuration time, you will be asked to specify a naming service search path. This search path applies systemwide. The search path contains a list of name service keywords. Each keyword is.

1.5.3 Name Service and Time Service Interdependencies

DECdts synchronizes the system clocks in computers connected by a network and, therefore, enables distributed applications to execute in the proper sequence even though they run on different systems. The DECnet-Plus software, your name service, and DECdts are interdependent:

DECnet-Plus for DIGITAL UNIX contains DECdns and DECdts clerk software. If you use the Local namespace, DECdts uses a local configuration script. For DECdns and DECdts planning information, see Chapters 6, 7, 8, 9, and 10.

1.5.4 Mapping Node Names to Addresses

During transition, the network maps node names to addresses in the following ways:

To ensure that Phase IV nodes and DECnet-Plus systems can communicate:

To ensure that DECnet-Plus systems can communicate with other DECnet-Plus systems using RFC1006 or RFC1859, enter the Internet addresses of the nodes in the Domain Name System (DNS/BIND).

1.6 OpenVMS Cluster Systems (OpenVMS Only)

You can operate a single OpenVMS Cluster consisting of both Phase IV and DECnet-Plus nodes, with the following restrictions:

Given these restrictions, you can migrate an existing Phase IV OpenVMS Cluster to a DECnet-Plus OpenVMS Cluster one node at a time. DECnet-Plus no longer requires that at least one node in the cluster be a routing node, but there must be a Phase V router on the network for the alias to work. The DECnet-Plus cluster alias allows a cluster to consist of all end systems.

1.7 DECnet Phase IV Applications

DECnet-Plus supports DECnet Phase IV applications as described in the following sections.
1.7.1 DECnet for OpenVMS Phase IV Applications (OpenVMS Only)

DECnet for OpenVMS Phase IV applications that use Queue I/O Request ($QIO) system service calls continue to work in DECnet-Plus without changes. You do not have to change node names to DECnet Phase V format. DECnet-Plus for OpenVMS offers both the $IPC and $QIO interfaces.

The OpenVMS Interprocess Communication ($IPC) system service is an operating system interface that is new with DECnet-Plus for OpenVMS. This interface to the Session Control layer lets you use DECnet software to perform interprocess communications. With $IPC, you can connect to the target application by specifying its full name or its NSAP address, as well as the Phase IV way of specifying node name and application object number and name. For details about these programming interfaces, refer to DECnet-Plus for OpenVMS Programming.
http://h71000.www7.hp.com/doc/73final/6495/6495pro_001.html
Hi there,

I have a 13192-EVB board that uses an MC9S08GT60 MCU. I have some code that works on this board to get temperature readings from a DS1822 digital temperature sensor. I am now trying to transmit this information wirelessly to another 13192-EVB or 13192-SARD board, and I am just looking for some general information on the options I have available to accomplish this.

I have already tried to use a sample Zigbee application and modify it to transmit the temperature information, but it doesn't seem to work. I may have to use another sample program which generates an interrupt when the temperature conversion is done and then transmits the data at that time. I was reading over the BeeStack Application Development Guide, and the example in there seems similar. It uses the onboard accelerometer data, and when the result is available from the A/D converter it then transmits the data using Zigbee. This is very similar to what I want to do, except the temperature sensor is a digital one, so I'm not using the A/D converter unit.

I have tried to modify the smac-per-tx application for this purpose, but it doesn't seem to be working. When I transmit to the other board I get all zeros for some reason. When I manually input a value to transmit, such as 26, the value shows up fine on the other end. I will post my code here and perhaps people can take a look at it and give me an idea of where I may have gone wrong. I have also shown the part of the code where the transmitting is happening and where I try to call my temperature function to transmit the result. Thanks for your help.
switch (app_status)
{
    case INITIAL_STATE:
        //Walk the LEDs
        //For TX
        LED1 = LED_OFF;    //Turn off all LEDs
        LED2 = LED_OFF;
        LED3 = LED_OFF;
        LED4 = LED_OFF;
        LED1 = LED_ON;     //Lights LED1
        for (loop = 0; loop < LED_DELAY; loop++);
        LED1 = LED_OFF;
        LED2 = LED_ON;     //Lights LED2, Turns off LED1
        for (loop = 0; loop < LED_DELAY; loop++);
        LED2 = LED_OFF;
        LED3 = LED_ON;     //Lights LED3, Turns off LED2
        for (loop = 0; loop < LED_DELAY; loop++);
        LED3 = LED_OFF;
        LED4 = LED_ON;     //Lights LED4, Turns off LED3
        for (loop = 0; loop < LED_DELAY; loop++);
        LED4 = LED_OFF;    //Turns off LED4
        LED1 = LED_ON;
        app_status = IDLE_STATE;     //Switch app status to TX_STATE;
        tx_packet.u8DataLength = 10; //Set the data length of the packet. in this case, 6.
        packet_count = 0;
        break;

    /* See if START TX has been hit */
    if ((gu16Events & KBI2_EVENT) != 0)
    {
#if BUZZER_ENABLED
        BUZZER = BUZZER_ON;
#endif
        delay(10);
#if BUZZER_ENABLED
        BUZZER = BUZZER_OFF;
#endif
        gu16Events &= ~KBI2_EVENT; /* Clear the event */
        packet_count = 0;
        setLedsMode(LED_DIGIT_MODE, (UINT16) 8421, 10, LED_NO_FLAGS);
        tx_packet.u8DataLength = 10; //Set the data length of the packet. in this case, 6.
        packet_count = 0;
        app_status = TX_STATE;
    }

I just wanted to mention as well that each Zigbee packet in this case is 10 bytes (modified from the original 18) and it requires that I transmit a '0' to begin. This is why every transmission starts with a zero and then I transmit the temperature information 9 times after that.

I should also mention that it is not necessary for me to use the smac_per_tx application to transmit the DS1822 temperature information. If you know of a simpler Zigbee application that I could use, I would gladly use that as well. I am basically looking for the simplest way to take the data from the DS1822 and transmit it via Zigbee to another 13192-EVB board. The number of transmissions I require is not more than 10 measurements per day, for example, but more measurements would not be a bad thing if that is easier to implement.
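To picture the payload layout the poster describes (10-byte packets whose first byte is 0, followed by nine copies of the temperature reading), here is a small Python sketch; it only illustrates the framing, and is not code for the MCU:

```python
def build_payload(reading, length=10):
    """Frame a one-byte reading the way the poster describes:
    a leading 0 byte, then the reading repeated to fill the packet."""
    if not 0 <= reading <= 255:
        raise ValueError("reading must fit in one byte")
    return bytes([0] + [reading] * (length - 1))

def parse_payload(payload):
    """Recover the reading; every data byte should agree."""
    readings = set(payload[1:])
    if payload[0] != 0 or len(readings) != 1:
        raise ValueError("malformed packet")
    return readings.pop()
```

A payload of all zeros on the receive side, as the poster reports, would suggest the reading byte was never written into the buffer before transmission.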
I suspect implementing 10 measurements is about the same difficulty as implementing more than 10. I realize that the MC9S08GT60 and the 13192-EVB are older hardware, but I am just looking for a rough idea of how to create a custom Zigbee application based on interrupts. Is there anyone with experience in creating custom Zigbee applications that can give me some suggestions or pointers? It may help if you just explain the general steps of how you were able to create a custom Zigbee application to transmit data wirelessly from a temperature sensor, for example, or some other device with constantly updated data. Any help would be very much appreciated. Thanks.
https://community.nxp.com/thread/92396
kalixxx (Member; 25 posts; community reputation 105, Neutral)

- works awesomely, tnx
- im sry and you are right, c#. i will try .tag and report back

kalixxx replied to kalixxx's topic in General and Gameplay Programming:
really nothing? not even a "stop looking, its not out there"?

kalixxx replied to vincero's topic in General and Gameplay Programming:
making one myself, and as far as i've seen you will need an item object with a list of effects for each one. the effects are generic and you don't even have to name them or make them all in advance; that's the work of your item creator. now the effects need nothing unless you address them, BUT you can make a new effect any time you need.

kalixxx posted a topic in General and Gameplay Programming:
well, i'll try making a new thread here when i finish some stuff. but i'm trying to make a space game with all the player/ship info on an array, and most of the components in the game need to know what's in there! i need to know how different threads can talk to the same array without bonking their heads!

- as for the b var, it cannot be an old result because there is none and the results are too consistent (no, not the exact same number). and as for the book, i made the example a tad bigger so i can see timing too!

kalixxx posted a topic in General and Gameplay Programming:
i'm thinking of making part of a game i'm starting in threads, but i don't have much experience in threading and how data moves between them. in a c# book i have, it says the next code is safe. it works, but i'm not so sure how this will scale up to big game arrays.
T t;
public int b = 0;

void button1_Click(object sender, EventArgs e)
{
    int a = 0;
    t = new T();
    new Thread(t.run).Start();
    while (a < 1000000)
    {
        a++;
    }
    label1.Text = a + "";
    label2.Text = b + "";
    t.stop = true;
    a = 0;
}

public class T
{
    public int a = 0;
    public bool stop = false;
    public void run()
    {
        while (a < 1000000 && stop == false)
        {
            a++;
            Program.f.b = a;
        }
        a = 0;
    }
}

this code just accesses public vars with no locks or anything and "freely" talks between threads. am i wrong here in some way?

kalixxx replied to kalixxx's topic in General and Gameplay Programming:
whats AABB test??

kalixxx posted a topic in General and Gameplay Programming:
ok, lets say i got a 3d grid (sectors of the same size) and there is a sphere in it (middle point and radius). i want to know what sectors are in or touching the sphere. like drawing a circle, only in 3d. i looked on the net and i think there is a name for this and i'm just asking the wrong question!

- i'm moving this, but i can't delete it, so some1 kill it or just don't post here
- i don't want to check if each node is in the sphere! i know there is an algorithm out there to do this! like drawing a circle, only in 3d.
- is this even the right place to ask this?

kalixxx posted a topic in Graphics and GPU Programming:
ok, lets say i got a 3d grid (sectors of the same size) and there is a sphere in it (middle point and radius). i want to know what sectors are in or touching the sphere. i looked on the net and i think there is a name for this and i'm just asking the wrong question!
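The grid-vs-sphere question above is usually answered with a sphere-vs-box overlap test: loop only over the cells inside the sphere's bounding box, and for each cell clamp the sphere's center to the cell to find its closest point. This Python sketch (the function name and grid layout are my own assumptions, not from the thread) shows the idea:

```python
import math

def cells_touching_sphere(center, radius, cell_size):
    """Return integer (i, j, k) indices of all axis-aligned grid cells
    of edge `cell_size` that the sphere overlaps or touches."""
    # Only cells inside the sphere's bounding box can possibly overlap it.
    lo = [math.floor((c - radius) / cell_size) for c in center]
    hi = [math.floor((c + radius) / cell_size) for c in center]
    hits = []
    for i in range(lo[0], hi[0] + 1):
        for j in range(lo[1], hi[1] + 1):
            for k in range(lo[2], hi[2] + 1):
                # Squared distance from sphere center to the closest
                # point of this cell's box (clamp per axis).
                d2 = 0.0
                for idx, c in zip((i, j, k), center):
                    lo_edge, hi_edge = idx * cell_size, (idx + 1) * cell_size
                    nearest = min(max(c, lo_edge), hi_edge)
                    d2 += (c - nearest) ** 2
                if d2 <= radius * radius:
                    hits.append((i, j, k))
    return hits
```

The per-cell check here is exactly the "AABB test" mentioned in the reply: an axis-aligned bounding box (the cell) tested against the sphere.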
https://www.gamedev.net/profile/155916-kalixxx/?tab=issues
Stefano, 11.08.2011 16:24:
> now that I've nailed Cython code, I'd like to get into something more funny.
> Currently, I'm working on a set of macros to seamlessly integrate Cython into
> CMake build process (in fact, I love CMake). But, I'd like to work also on
> something more essential, so...

Here's something truly essential that's been on our TODO list for ages, and it's not even that hard to do, given today's infrastructure (inline functions in .pxd files, function signature overloading and all that).

Basically, the idea is to use a .pxd for existing Python modules (especially stdlib modules) and to override *some* names in it with fast C functions.

Approach:

- for each normally "import"-ed module (except for relative imports), search for a corresponding .pxd file in both the PYTHONPATH and under Cython/Includes/cpython/

- if found, build a scope for it that falls through to the scope of the normally imported Python module, but looks up names in the .pxd namespace first.

For testing, write a cpython/math.pxd that contains replacements for *some* of the functions and constants in Python's math module.

There is obviously a bit more to it than the short wrap-up above. For example, if none of the signatures of a function in the .pxd matches, it would have to fall through to the module as well. That isn't all that easy to accomplish with just a name lookup. But it's also not strictly required for an initial implementation; it would just mean that your math.pxd implementation would have to provide a complete set of signatures for the functions it offers. I think the math module is particularly friendly here.

Interested?

Stefan
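The "falls through to the module scope" lookup described in the email can be pictured in plain Python; this is only an illustration of the lookup semantics, not actual Cython internals:

```python
import math

# Hypothetical set of names a math.pxd would override; in real Cython
# these would be cdef'd C implementations, not Python callables.
pxd_overrides = {"sqrt": math.sqrt, "pi": math.pi}

def lookup(name):
    """Look a name up in the .pxd namespace first, then fall through
    to the normally imported Python module."""
    if name in pxd_overrides:
        return pxd_overrides[name]   # would dispatch to the fast C version
    return getattr(math, name)       # plain Python module fallback
```

Any name not covered by the override table (or whose signatures fail to match, in the full scheme sketched above) resolves exactly as an ordinary import would.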
https://mail.python.org/pipermail/cython-devel/2011-August/001281.html
Revision history for Dist-Zilla-PluginBundle-Author-ETHER

0.087     2015-02-20 02:47:43Z
  - now requiring a more conservative minimum version for Module::Build::Tiny, when it is used.
  - fix the 'default' minting profile
  - lower the default maximum-allowed minimum perl version in built distributions from 5.008001 to 5.006.

0.086     2015-02-01 01:38:17Z
  - no longer scans inc/ for configure_requires minimum perl version
  - generated CONTRIBUTING file now mentions the existence of a TODO file in the repository (which is never packaged)
  - dropped the use of "use warnings FATAL => 'all';" in this distribution's tests, and in newly-minted distributions
  - add 'default' minting profile, which is the same as 'github'
  - switched from using [Git::NextVersion] and [PkgVersion] as the version provider and version inserter (respectively) to [RewriteVersion::Transitional] and [BumpVersionAfterRelease::Transitional]

0.085     2015-01-16 01:45:19Z
  - fix test failure with metadata mismatch created by [Run::*] 0.031

0.084     2015-01-09 21:46:59Z
  - fix tests that failed in 0.083 for anyone who did not have a ~/.pause file

0.083     2015-01-08 04:24:23Z
  - fix regression in 0.082 that caused new generated files not to be committed to git after a release
  - -remove = <plugin> now also accepts the unique plugin name, not just the class, for more targeted plugin removal

0.082     2015-01-03 20:15:37Z
  - use [Git::Describe]'s new on_package_line feature
  - (temporarily?)
revert use of 'our' syntax for $VERSION declarations (see RT#101095)

0.081     2014-12-10 02:09:34Z
  - $VERSION statements inserted into modules now use 'our' syntax rather than fully-qualified variable names
  - added more files to the list of things we will never gather from the dist, for future-proofing

0.080     2014-11-22 04:53:32Z
  - avoid failing on perl 5.21.6 where aliased.pm is warning

0.079     2014-11-15 08:27:51Z
  - now using colours in diagnostic messages

0.078     2014-10-29 00:47:46Z
  - documented the first version each configuration option became available
  - remove duplicate [MetaConfig] in tests, to avoid issues with meta merging in Dist::Zilla 5.022 (RT#99852)
  - ensure environment variables are in the right state during tests

0.077     2014-10-26 21:16:11Z
  - switch from [EOLTests] to [Test::EOL]

0.076     2014-10-18 22:07:58Z
  - bump optional dependency on [MakeMaker::Awesome] to avoid old bug with loaded modules
  - copy_file_from_release option now appends to, rather than overshadowing, the defaults, so users do not need to repeat the defaults (which may change over time)
  - reset default eumm_version in MakeMaker plugins to 0, as done in [MakeMaker] version 5.020
  - new 'changes_version_columns' configuration option, for tweaking [NextRelease] format strings

0.075     2014-10-12 01:02:55Z
  - fix tests that died when run outside a git repository

0.074     2014-10-12 00:01:59Z
  - refer to "the maintainer(s)", rather than "me", in generated CONTRIBUTING file
  - use new build_warnings option in [Git::Check]

0.073     2014-09-06 02:07:00Z
  - fixed a few small omissions in the minting profile for new Dist::Zilla plugin distributions
  - no longer performing side effects for plugins that are -remove'd in the final configuration
  - added [AuthorityFromModule] (twinned for now with [Authority], until support is added in PAUSE)
  - bump prereq on [ModuleBuildTiny::Fallback], to get fixes for interaction with [CheckBin], [CheckLib]

0.072     2014-08-20 04:33:59Z
  - fix syntax that parses badly on all perls before
5.21.1 - hey that's new enough for everyone, right?

0.071     2014-08-19 01:11:20Z
  - fix test that broke when I released version 0.09 of Dist::Zilla::Plugin::Test::NoTabs
  - remove prereq declaration for plugin that is only actually ever used for ether herself
  - now adding all plugins pulled in by the plugin bundle to the built distribution's develop prereqs (to add to the plugins outside the bundle, that were already being added by [Prereqs::AuthorDeps])

0.070     2014-08-16 18:48:19Z
  - revert git contributor change from v0.069; opened (github) CPAN-API/metacpan-web#1270 instead.
  - order contributors by number of commits, descending
  - no longer adding an $AUTHORITY variable to modules, as nothing (save Class::MOP, Moose and Moo) has ever used it - this also avoids shifting our line numbers by 3

0.069     2014-08-07 02:44:38Z
  - bump prereq on [Git::Contributors] to allow tests to pass when running outside of a git repository
  - remove hacks for [ReadmeAnyFromPod], now that the stable release is out supporting phase=release
  - fix test failure when installer's ~/.pause cannot be parsed
  - fix test failure on older Dist::Zilla where TestRunners did not dump their configuration into metadata
  - now including authors in contributor lists, except for the releaser herself

0.068     2014-08-06 04:15:24Z
  - README.pod is now generated only in the repository, after release, so it never shows up in the shipped dist
  - include a default .mailmap in the minted dist; invite the user to update .mailmap if the contributor data is not quite right
  - avoid errors in XS-based distributions where a token Makefile.PL is included and we try to generate our own over top
  - more directories added to no_index
  - now instead of installing release with [InstallRelease], switch to [Run::AfterRelease] with an author-specific PAUSE URL, to help out cpanm-reporter
  - adjust the command that updates the .latest symlink, to account for @#$!@#$ incompatibilities between implementations of `ln`
  - at last!
add [Git::Contributors]

0.067     2014-07-25 01:09:13Z
  - disabled [EnsurePrereqsInstalled] for now - it is too annoying getting travis to install everything soon enough!

0.066     2014-07-18 03:25:10Z
  - internally-hardcoded extra arguments to be passed to optional plugins are now handled in a more generic way, now encompassing all plugins that might be used
  - minted Dist::Zilla plugins now contain an extra bit of needed prereq declaration, for valid metadata production
  - airplane mode can now also be enabled via the DZIL_AIRPLANE environment variable
  - now converting the main module to README.pod (pod instead of markdown) for committing to the repository
  - fix test of expected files produced by bundle to handle the new data file put out by [Test::ReportPrereqs] 0.014.
  - XS-based distributions (*.xs files found in repository root) must now have a basic Makefile.PL provided to assist development

0.065     2014-06-09 20:36:31Z
  - fix dist.ini in minted dzil plugin dist (v0.064)
  - include tailored [MetaResources] for minted dzil plugin's dist.ini
  - inject [SurgicalPodWeaver] in develop prereqs, if selected via the surgical_podweaver option
  - include placeholder for keyword declarations in minted modules
  - xt/ tests are now once again run after t/
  - engage maximum fallbackiness!
default installers now [ModuleBuildTiny::Fallback], [MakeMaker::Fallback]
  - skip running after-build shell commands on systems with no bash, to avoid failing tests on various architectures

0.064     2014-05-21 16:56:30Z
  - one more .latest symlink fix
  - add [CheckIssues]
  - drop automatic develop prereq on Dist::Zilla <what I have>

0.063     2014-05-11 17:53:12Z
  - really fix failing tests this time (from v0.060)
  - fix updating of .ackrc file and .latest symlink in various build scenarios (from v0.061)

0.062     2014-05-08 17:02:13Z
  - remove using plugins in tests that will cause the test build to fail if users don't have develop prereqs installed

0.061     2014-05-04 21:28:33Z
  - fix ln flags for linux
  - fix filefinders used in no-tabs tests

0.060     2014-05-03 22:52:01Z
  -

0.059     2014-04-26 22:32:12Z
  - revisions to and fixes for generated CONTRIBUTING document

0.058     2014-04-19 05:38:04Z
  - minimum perl version really really fixed now
  - drop [Test::UnusedVars] - too many false positives; perl-critic is better suited to address these sorts of issues
  - Makefile.PL now checks for 'git' command, so we can get NA reports for every smoker that tried to look at us (before attempting to install all the prerequisites)
  - customized minting output if the dist is a Dist::Zilla plugin

0.057     2014-04-11 01:04:20Z
  - minimum perl version really softened to 5.10.1, except when minting distributions (which should have happened in v0.051)
  - when building as 'dzil build' or 'dzil release', add a .latest symlink to facilitate future grepping across repositories

0.056     2014-04-08 02:43:13Z
  - automatically add keywords to metadata, extracted from # KEYWORDS: comment in main module
  - switch from [Test::Version] to [CheckStrictVersion]

0.055     2014-03-24 04:03:47Z
  - fix missing prereq declaration needed from last release

0.054     2014-03-23 04:49:15Z
  - pass default_jobs => 9 option to all installer tools in use, and [RunExtraTests]
  - also ensure we never index corpus/; only list no_index directories when they actually
exist

0.053     2014-03-12 15:01:19Z
  - now using [Test::CleanNamespaces] to build this distribution as well as in newly-minted distributions
  - make airplane mode actually work (doh!)

0.052     2014-02-25 04:56:32Z
  - [VerifyPhases] added (another information-only plugin)
  - added missing prereq (from v0.051)
  - minted clean-namespaces.t test no longer skips ::Conflicts module (added in v0.019)

0.051     2014-02-23 01:10:45Z
  - fix argument passed to [Git::Commit] that no longer works now that it uses Path::Tiny instead of Path::Class
  - new "surgical_podweaver" configuration option
  - minimum perl version softened to 5.010, except for when minting distributions (still at 5.013002)

0.050     2014-02-13 17:39:49Z
  - adjusted CONTRIBUTING content for XS-based dists - I promise to include a token Makefile.PL to allow development without dzil.

0.049     2014-01-29 02:56:31Z
  - fixed tests that can fail on systems with no spelling dictionaries installed

0.048     2014-01-21 04:07:40Z
  - removed [PruneCruft], which does nothing useful with [Git::GatherDir]
  - removed [ManifestSkip] which does nothing useful when there is no MANIFEST.SKIP file
  - add an option for what files to copy from the release, for easier customization
  - added a dummy option for [Git::Check], to work around an issue in Config::MVP::Assembler::WithBundles::_add_bundle_contents

0.047     2014-01-14 05:34:39Z
  - adjusted plugin order so [PkgVersion] gets a chance to insert into the first blank line of the package (new in 5.010) before other modules insert their code lines
  - pod removed from in the middle of source is now replaced with a commented-out version of itself, to avoid altering line numbers

0.046     2014-01-11 19:23:31Z
  - ensure that .pod files are also eligible for indexing
  - minting profile switched to using a dist sharedir to store profiles, so we can now ship using Module::Build::Tiny

0.045     2014-01-06 18:32:05Z
  - fixed transposed link text in documentation (thanks, rwstauner!)
  - fixed new tests that fail when run outside a git repository

0.044     2014-01-04 18:46:04Z
  - adjust airplane mode so it performs all (non-network) pre-release checks first, before aborting the release
  - minor tweaks to CONTRIBUTING text

0.043     2013-12-14 17:34:14Z
  - stale [@Git] modules now being checked for again

0.042     2013-12-08 00:24:09Z
  - drop use of [-Encoding] Pod::Weaver plugin - no longer needed with Pod::Weaver 4
  - update generated CONTRIBUTING file to filter out local plugins when running 'dzil authordeps --missing', and to mention the irc channel when available
  - new "airplane" mode, to facilitate development while the network is not available

0.041     2013-11-29 06:02:08Z
  - add explicit dep on Pod::Markdown, to get the version that creates metacpan hyperlinks
  - add [Test::Portability]
  - no longer prompting about stale [@Git] modules

0.040     2013-11-12 17:50:39Z
  - [Git::CheckFor::MergeConflicts] is working again
  - work around failure of minting test where a development release of a plugin is required to generate all expected files

0.039     2013-11-11 02:30:24Z
  - fix config for [Git::Commit], to properly commit files that were newly added to the dist
  - now generating full starting content for CONTRIBUTING, README.md, LICENSE when minting a new distribution

0.038     2013-11-09 22:46:12Z
  - fixed typo in CONTRIBUTING file (thanks, Сергей Романов!)
  - added use of [Prereqs::AuthorDeps], for more entries in develop prereqs

0.037     2013-11-02 20:58:18Z
  - fix regexp error while creating CONTRIBUTING file

0.036     2013-11-02 20:54:09Z
  - version format in Changes file altered from appending -TRIAL to the version (which violates CPAN::Meta::Spec) to adding ' (TRIAL RELEASE)' after the timestamp
  - temporarily (?)
disable broken [Git::CheckFor::MergeConflicts] - more tweaks to CONTRIBUTING text - plugins which are only used in certain conditions are now also declared as runtime requirements, to make it easier for contributors (requested by haarg) 0.035 2013-10-31 05:50:52Z - drop the ego tag in Changes on every release 0.034 2013-10-31 02:05:20Z - more tweaks to CONTRIBUTING text 0.033 2013-10-24 01:47:28Z - now only generating README.md, LICENSE and CONTRIBUTING in the build directory, and copying back to the repository (and committing) only at release time 0.032 2013-10-17 02:26:03Z - fix some bad templates for files used in the minting profile - added [CheckSelfDependency] - all prereqs that are used based on config settings are now (guaranteed to be) included in the pluginbundle as runtime recommendations, as well as injected into the built dist as develop requirements (RT#89530) - more tweaks to CONTRIBUTING text 0.031 2013-10-13 17:43:37Z - fix discrepancy for [Test::Compile] between prereq versions in metadata, and runtime required version (RT#89429) 0.030 2013-10-12 22:09:46Z - xt_mode must be used in [Test::Compile] when xt/ tests run before t/ (because 'make'/'Build' has not been run yet) - fixed broken gathering of sharedir 0.029 2013-10-12 21:27:40Z --- a.k.a. 
"the ribasushi release" - dropped [Test::CheckDeps] - xt/ tests are now run before t/ - compile test now generated as xt/author/00-compile.t - prereqs are noisly, but non-fatally, verified in t/00-report-prereqs.t - now generating a custom CONTRIBUTING file 0.028 2013-10-06 06:17:12Z - injected t/00-check-deps.t no longer bails out when failures are encountered - the shell command used to update the local .ackrc is now POSIX shell compatible - fix unit tests that broke when Dist::Zilla::Plugin::InstallGuide 1.200001 was released (which retroactively broke the 0.027 release) 0.027 2013-09-27 03:50:26Z - minted dists once again use the default 'installer' value (now [MakeMaker::Fallback] and [ModuleBuildTiny], since 0.025) - added missing required prereqs, and now properly skipping relevant tests when optional dependencies are not installed (with guard tests to verify that all prereqs are properly declared) (RT#88977) 0.026 2013-09-25 02:00:04Z - skip relevant tests when optional dependencies are not installed 0.025 2013-09-22 22:04:46Z - added missing dependency on [MojibakeTests] (RT#88807) - fixed bad dependency for NoTabs tester that changed names - make tests run without git again (broken in 0.024) by removing all git-based plugins for testing - now generating a t/00-report-prereqs.t - 'installer' option can now be specified more than once, for stacking plugins - new installer default: MakeMaker::Fallback and ModuleBuildTiny (RT#88642) 0.024 2013-09-19 02:29:14Z - now also support the server = catagits option, for Catalyst repositories hosted at Shadowcat Systems - switch to [Test::NoTabs], also testing examples/ files - bump prereq on [Test::CheckDeps] to get CPAN::Meta::Check dependency that we can now remove from here - vim modeline added to .pm in minted dist - default 'installer' backend now defaults to 'none', forcing consumers to explicitly state a preference; minted dists specify ModuleBuildTiny, the previous default (RT#88642) 0.023 2013-09-11 01:43:22Z 
- now checking for stale prereqs at dist release time 0.022 2013-09-10 01:48:53Z - warnings tests bypassed during installation, to prevent installation issues in the presence of deprecation warnings from upstream dependencies (in this case, via Moose 2.1100) - and similar change made in test generated via minting profile - bumped dependency version for [Test::PodSpelling] and wordlists - bumped dependency on Dist::Zilla, for yaml encoding fixes 0.021 2013-09-07 19:41:06Z - disable invocation of cpanm-reporter (see RT#88367) - added [MojibakeTests], for testing file encoding at release time 0.020 2013-09-02 23:50:35Z - set new die_on_existing_version option in [PkgVersion] - after releasing and we install the dist, submit a cpantesters report 0.019 2013-08-21 19:16:27Z - now supporting dists hosted elsewhere than github (currently gitmo, p5sagit, or other), via the 'server' option - Test::Version now runs in strict mode - generated clean-namespaces.t test now skips ::Conflicts module - [Test::Kwalitee] now included in bundle, rather than adding it into the minted dist.ini separately 0.018 2013-08-16 05:11:06Z - now using [PromptIfStale] to ensure the plugin bundle is always the latest version, and all plugins are checked at release time - bring back [Test::CPAN::Changes], now that the spec has become a bit more reasonable (removed since v0.015) - Changes entries are now made with times in UTC, with a trailing 'Z' rather than the CLDR 'ZZZZ' format code 0.017 2013-08-04 16:12:17Z - update minimum version of perl required, re syntax used in templates applied during minting - inject a forced dependency on a fixed CPAN::Meta::Check - skip pluginbundle tests if .git dir is not present (too many plugins rely on git data) 0.016 2013-08-03 19:44:15Z - added basic tests for the pluginbundle and minter - clean up .ackrc editing at build time 0.015 2013-08-01 23:02:32Z - [Test::CPAN::Changes] omitted if its version >= 0.21, pending resolution of too-strict datetime formats 
(e.g. RT#87499) 0.014 2013-07-30 00:25:48Z - fix dist.ini munging done at release time (v0.013) - injected compile test now also checks files in examples/ - ExecDir now looks for installable executables in script/, for compatibility with Module::Build::Tiny 0.013 2013-07-28 23:59:41Z - inserts --ignore-dir line into .ackrc at build time - when releasing this pluginbundle, dist.ini is edited (and committed) to force a dependency on the newly-released version, and all other local dist.inis using this bundle are also edited (without committing) - issue tracking is disabled in newly-created github repositories 0.012 2013-07-17 01:32:35Z - do not require trial versions of [Test::Compile], allowing flexibility as which we choose to build with 0.011 2013-07-06 00:59:57Z - compile test will also check for warnings, when author is testing 0.010 2013-06-28 18:28:01Z - t/00-check-deps.t test now has TODO tests for 'recommends' and 'suggests' prereqs - release test for unused variables reinstated 0.009 2013-06-20 17:18:12Z - 'installer' now defaults to ModuleBuildTiny - (experimental) namespaces-are-clean release test added to minting profile, although this won't pass for many types of dists as many things are sloppy about modifying the package stash 0.008 2013-06-11 22:59:23Z - fix bad templating in minted module - during minting, push the initial commit to github after creating the remote 0.007 2013-05-30 01:48:34Z - minting profile updates: - extra kwalitee tests enabled by default - test now has 'use <main module>;', prefers done_testing, and now fails, rather than not compiling - main module now avoids the list transformer where it breaks pod coverage tests; includes more template pod - config option added for changing the installer backend (still defaults to MakeMaker, for now) 0.006 2013-05-11 00:06:41Z - support dropped for directly passing stopwords via a mvp argument -- it's easier to just use a directive right in pod (and they can be added via ConfigSlicer too, if 
needed) - minting process now prompts to create a repository on github - alter max_target_perl setting for *this dist itself only* to commit to run on 5.16.3+, not 5.8.8+ - the version of Dist::Zilla used to build this distribution is injected as a develop prerequisite for users of the plugin bundle - the version of this bundle used to mint a dist is set as the minimum version for subsequent builds of the dist - after release, the github repository is updated with the distribution's abstract as its description, and the metacpan page as its homepage. 0.005 2013-04-28 15:47:40Z - bump version for [Test::Compile] and [Test::CheckDeps] (RT#84900, RT#84904, RT#84905) 0.004 2013-04-27 03:56:11Z - fix missing .gitignore in minted dists - [-Encoding] podweaver plugin added to this distribution and minted dists - extra stopwords added, after disabling my local aspell dictionary - Pod::Weaver plugins used in this bundle are added as runtime prerequisites 0.003 2013-04-21 17:22:48Z - now building ourself with ourselves, [@Author::ETHER]. - [Test::UnusedVars] added - RT mail link cleaned up - bump prereqs for Dist::Zilla::Role::PluginBundle::PluginRemover, Dist::Zilla::Plugin::Test::CPAN::Changes - document the multiple mechanisms for adding stopwords (and add the [-Stopwords] weaver plugin, which seems to be needed sometimes) 0.002 2013-04-14 22:40:08Z - do not index our profiles/, to avoid its contents from showing up under "Documentation" on metacpan - users of [@Author::ETHER] can now -remove plugins 0.001 2013-04-14 20:52:50Z - Initial release.
https://metacpan.org/changes/distribution/Dist-Zilla-PluginBundle-Author-ETHER
iSkeletonScript Struct Reference

Skeleton script is the interface that provides animation of a skeleton. More...

#include <imesh/skeleton.h>

Inheritance diagram for iSkeletonScript:

Detailed Description

Skeleton script is the interface that provides animation of a skeleton.

Definition at line 232 of file skeleton.h.

Member Function Documentation

- Create new key frame.
- Find key frame by name.
- Get script factor.
- Get key frame by index.
- Get number of frames in the script.
- Get script loop value.
- Get script name.
- Get script speed.
- Get script duration.
- Recalculates spline for bone rotations. Needs to be called every time when new frames are added or removed.
- Remove frame by index.
- Set script factor.
- Set script loop value.
- Set script name.
- Set script speed (default = 1.0).
- Set script duration.

The documentation for this struct was generated from the following file:
- imesh/skeleton.h

Generated for Crystal Space 1.0.2 by doxygen 1.4.7
http://www.crystalspace3d.org/docs/online/api-1.0/structiSkeletonScript.html
Using The New ArangoDB Geo Index Cursor via AQL

This tutorial will show you how to import OpenStreetMap data into an ArangoDB instance and execute efficient geo queries on your database.

Requirements:

The tutorial is split into three parts:
- data acquisition and import
- creating the index
- querying ArangoDB with geo index

Import

We have chosen to search for restaurants near our headquarters in Cologne. This will give us some new ideas where to have lunch and yields easily verifiable results.

The import.sh script downloads an osm file and extracts it using bunzip2. The extracted file is then imported into a running arangod instance using places_to_eat.py, passing import as an argument. places_to_eat.py makes use of lxml, which allows event-based XML parsing. This allows us to deal with huge osm XML files. Finally, the pyarango Python driver is used to connect to the database and store the extracted information about restaurants, such as location (latitude/longitude) and name.

Index Creation

Now that the data is imported, we create a geo index to execute performant geo queries. This can be done with a single command, which creates a geo index on the fields lat and lon. You need to make sure that the data stored for latitude and longitude is given in degrees and as a floating point type. Providing location values as strings is not supported. You can then verify that the index has been created.

Using The Index

Now with data and index in place, we are ready to explore which restaurants are near us. We do this by writing a query that should look very familiar to AQL users. This query iterates over the places_to_eat collection and sorts the documents by distance to our headquarters, located at geo-coordinate (50.9316394, 6.9398916). Finally, we limit the number of results to 1.
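To make the sort-by-distance semantics concrete, here is a small self-contained Python sketch (not part of the tutorial's code) that mimics what such a query does over a handful of hardcoded documents. The haversine formula below is the standard great-circle distance; the restaurant coordinates are invented for illustration, and only the headquarters coordinate is taken from the tutorial:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in meters between two points given in degrees
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of ~6371 km

# hypothetical documents mirroring the lat/lon attributes of places_to_eat
places = [
    {"name": "far away",  "lat": 50.9000, "lon": 6.9000},
    {"name": "El Gaucho", "lat": 50.9320, "lon": 6.9400},  # coordinates made up
    {"name": "next town", "lat": 51.0000, "lon": 7.0000},
]

hq = (50.9316394, 6.9398916)  # the tutorial's headquarters coordinate

# sorting by distance and keeping one result amounts to taking the minimum
nearest = min(places, key=lambda d: haversine_m(d["lat"], d["lon"], hq[0], hq[1]))
print(nearest["name"])  # El Gaucho
```

Sorting the whole collection and applying LIMIT 1 is exactly this minimum; the point of the geo index is that the database can produce the documents in distance order without computing the distance for every document first.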
The new distance function that we have used in this query takes two pairs of geo-coordinates: one represents a fixed location, and the other the locations of the documents in the collection (via the accessors d.lat/d.lon). Now let us use the db._explain() function to see what is going on in the database. When the optimizer discovers the distance function, it replaces the enumerate collection node and the sort node with an index node. Using the index node provides the documents in sorted order, so we need only inspect as many elements as required by the LIMIT statement. The second point will be elaborated further shortly. Let us inspect the query's result first. There is El Gaucho, the restaurant with the best steaks in town, almost below our office!

Now let's understand why we need to inspect fewer documents when using the geo index. Assume that we want to query restaurants that are not too far away, so let's tweak our query to search within a certain area. This query makes the advantage very obvious if you compare the optimized and non-optimized versions of the execution plan. First, we take a look at an execution plan that does not utilise the geo index. As you can see, we need to iterate over the full collection because we retrieve the documents in an arbitrary order. Now let us inspect the optimized plan. The optimized plan does not iterate over the full collection, and it does not need a FilterNode. All information is contained in the IndexNode, and only documents within the given radius are considered.

Conclusion

Using a geo index with the new distance function will shorten the execution time of queries. This is especially true for queries that utilise sort and filter conditions, as shown in this article.
The improvement is achieved because the optimizer adjusts the execution plan so that:
- only relevant documents are inspected
- the number of nodes in the plan is reduced

We hope you enjoy the new functionality and provide us some feedback so we can further improve your experience.
https://www.arangodb.com/using-arangodb-geo-index-cursor-via-aql/
is_dead_from_error

Boolean value reflecting if the Sketch has been run and has now stopped because of an error.

Examples

import py5  # assumed here; required when running py5 in module mode
import time

def setup():
    py5.background(255, 0, 0)

print("the sketch is ready:", py5.is_ready)
py5.run_sketch()
print("the sketch is running:", py5.is_running)
py5.exit_sketch()
# wait for exit_sketch to complete
time.sleep(1)
print("the sketch is dead:", py5.is_dead)
print("did the sketch exit from an error?", py5.is_dead_from_error)

Description

Boolean value reflecting if the Sketch has been run and has now stopped because of an error. This will be True only when is_dead is True and the Sketch stopped because an exception was thrown.

Updated on September 01, 2022 16:36:02pm UTC
https://py5.ixora.io/reference/sketch_is_dead_from_error.html
Bugzilla – Bug 1056 Should document gcc 3.4.4 as known-broken on x86-64
Last modified: 2007-04-01 15:15:55
You need to log in before you can comment on or make changes to this bug.

The last file compiled during the build of llvm-gcc when it crashes:

/usr/home/jeffc/llvm-gcc/obj/gcc/xgcc -B/usr/home/jeffc/llvm-gcc/obj/gcc/ -B/home/jeffc/llvm-gcc/install/amd64-unknown-freebsd6.1/bin/ -B/home/jeffc/llvm-gcc/install/amd64-unknown-freebsd6.1/lib/ -isystem /home/jeffc/llvm-gcc/install/amd64-unknown-freebsd6.1/include -isystem /home/jeffc/llvm-gcc/install/amd64-unknown-freebsd6.1/sys-include -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -pthread -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -I. -I. -I../../gcc -I../../gcc/. -I../../gcc/../include -I./../intl -I../../gcc/../libcpp/include -I/usr/home/jeffc/llvm/include -I/home/jeffc/llvm/obj/include -DL_lshrdi3 -c ../../gcc/libgcc2.c -o libgcc/./_lshrdi3.o
WARNING: 128-bit integers not supported!
../../gcc/libgcc2.c: In function '__lshrti3':
../../gcc/libgcc2.c:412: internal compiler error: Segmentation fault: 11

This is apparently due to an optimization bug in LLVM. The crash goes away if the -O2 option is removed. The preprocessed version of the file, as well as the bytecode file produced with -emit-llvm -O0, is attached.

Created an attachment (id=520) [details]
Preprocessed version of file causing crash.

Created an attachment (id=521) [details]
Bytecode output from -emit-llvm -O0

I've tried to run all optimization passes which llvm-gcc actually runs. Unfortunately, there was no crash.

1. Does llvm-gcc crash at -O1?
2. Could you please try

./opt -verify -lowersetjmp -funcresolve -raiseallocs -simplifycfg -mem2reg -globalopt -globaldce -ipconstprop -deadargelim -instcombine -simplifycfg -prune-eh -inline -simplify-libcalls -argpromotion -raise -tailduplicate -simplifycfg -scalarrepl -instcombine -predsimplify -condprop -tailcallelim -simplifycfg -reassociate -licm -loop-unswitch -instcombine -indvars -loop-unroll -instcombine -load-vn -gcse -sccp -instcombine -condprop -dse -dce -simplifycfg -deadtypeelim -constmerge -funcresolve -internalize -ipsccp -globalopt -constmerge -deadargelim -inline -prune-eh -globalopt -globaldce -argpromotion -instcombine -predsimplify -scalarrepl -globalsmodref-aa -licm -load-vn -gcse -dse -instcombine -simplifycfg -verify failed_bytecode.bc | llc

and check whether llc crashes. (This is the full list of optimizations run by llvm-gcc4 at -O2.)

The crash does occur with -O1. Only -O0 compiles successfully. The opt command you give does crash on my machine (llc never sees any bytecode). The stack trace is:

(gdb) where
#0  0x00000030008a9a3d in ?? ()
#1  0x00007fffffffd7a0 in ?? ()
#2  0x0000000000d58e00 in ?? ()
#3  0x00007fffffffd910 in ?? ()
#4  0x000000000088f04b in (anonymous namespace)::InstCombiner::visitOr (this=0x7fffffffd7a0, I=@0xd58e00) at /usr/home/jeffc/llvm/lib/Transforms/Scalar/InstructionCombining.cpp:3481
Previous frame identical to this frame (corrupt stack?)

Looks like stack corruption.

Well, there are 4 instcombine's in the list. Could you please find which one segfaults? And after that, prepare bytecode just before that instcombine, so that just "opt -instcombine newbytecode.bc" crashes. Also, try "bugpoint -find-bugs oldbytecode.bc" to get somewhat reduced bytecode. If valgrind is available on your system you might also try to prepend "-enable-valgrind" to the bugpoint command line.

Created an attachment (id=523) [details]
Simplified bytecode file causing crash

The first -instcombine is what crashes. bugpoint reduced the bytecode significantly.
The following command reproduces the crash: opt bugpoint-reduced-simplified.bc -instcombine

But probably not on your machine. It may be reproducible only on x86_64 machines, or worse, on FreeBSD systems only.

Nope, it doesn't crash for me on 32-bit Windows built with VC++.

No crash here either (gcc/linux and gcc-mingw32/windows). Jeff, since you've got the only platform on which this fails, could you please gdb opt and set a breakpoint on InstructionCombiner::visitOr? Then step through the code until it breaks. Send me the output from that debug session and I'll see if I can figure out where it goes wrong. Anything else you learn from this might be useful too. Alternatively, can you include the output of 'opt -instcombine -debug' on the .bc file? Thanks, -Chris

Output from opt -instcombine -debug:

IC: Old = %tmp41 = getelementptr %struct.DWstruct* %tmp40, int 0, uint 0  ; <long*> [#uses=1]
    New = %tmp41 = getelementptr { { long, long } }* %w, int 0, uint 0, uint 0  ; <long*> [#uses=0]
IC: DCE: %tmp40 = getelementptr %struct.DWunion* %w, int 0, uint 0  ; <%struct.DWstruct*> [#uses=0]
IC: Old = %tmp39 = bitcast ulong %tmp39 to long  ; <long> [#uses=1]
    New = or long %tmp37, %tmp38  ; <long>:<badref> [#uses=0]
IC: DCE: %tmp39 = or ulong %tmp37, %tmp38  ; <ulong> [#uses=0]
IC: MOD = %tmp37 = bitcast ulong %tmp37 to long  ; <long> [#uses=0]
IC: DCE: %tmp37 = lshr ulong %tmp35, ubyte %tmp36  ; <ulong> [#uses=0]
IC: Old = %tmp36 = trunc int %tmp36 to ubyte  ; <ubyte> [#uses=1]
    New = trunc long %tmp36 to ubyte  ; <ubyte>:<badref> [#uses=0]
IC: DCE: %tmp36 = trunc long %tmp36 to int  ; <int> [#uses=0]
IC: DCE: %tmp35 = bitcast long %tmp35 to ulong  ; <ulong> [#uses=0]
Segmentation fault (core dumped)

Very strange. Okay, please try this:

gdb --args opt -debug -instcombine bugpoint.xxx.bc
> b InstCombiner::visitGetElementPtrInst
> r
> c
> c
> n
> n
....

Just keep nexting until you crash, it shouldn't be too long. When it crashes, please send info about the code that was just being next'ed over.
Thanks Jeff, -Chris

Created an attachment (id=526) [details]
GDB session

Two 'c' are too many. 254 'n' are needed to hit the crash after the first 'c'.

Okay, this is even more confusing :( It looks like there is a bogus instruction being put on the worklist. I can't reproduce this, and nicholas wasn't able to reproduce this with valgrind on x86. Do you have valgrind? Does it report anything when you run it on 'opt -instcombine' here? If not, can I get access to a jail or something to track this down? -Chris

I suspect that this is related to Bug 1063, which I think is GCC miscompiling LLVM when targeting x86-64. I'm not going to have time to investigate this until after the holidays unfortunately, -Chris

The problem disappears when building with gcc 4.0.4 (instead of the 3.4.4 that FreeBSD 6.1 comes with). However, there are still other problems. There appear to be some scripting bugs, though it's not clear what the consequences are:

checking whether llvm-gcc is sane... yes
test: /home/jeffc/llvm/install: unexpected operator

This isn't the only place the "unexpected operator" occurs:

checking for sin in -lm... yes
test: FreeBSD: unexpected operator
checking for library containing lt_dlopen... no

Very interesting. We should add x86-64 3.4.4 to the "known bad" list of GCCs in the getting started guide. The scripting bug is probably a real bug.

Shouldn't this be closed? GCC 3.4.x is definitely generating bad x86_64 code. GCC 4.0.x works. I already fixed the scripting bugs.

Yes, you're right, patch here:
http://llvm.org/bugs/show_bug.cgi?id=1056
lights, and connect them to a Modbus/TCP powered controller. Part of our goal in writing the SEC562 course is to provide hands-on experience understanding the security of ICS protocols such as Modbus/TCP, CIP, PROFINET, DNP3 and others. This is done through the completion of several missions, where the team of analysts has a defined goal, and has to use offensive or defensive skills to achieve the stated goal. In the case of the traffic light mission, the team has to hack their way into the CyberCity Department of Transportation (DoT) network, pivot from publicly accessible systems to restricted access systems, and use the compromised host to deliver a custom Modbus/TCP exploit that manipulates the traffic light patterns.

I'm biased, but I think these missions are SUPER FUN. Challenging, for sure, but a great opportunity to learn about a whole new realm of interesting protocols (ICS and related technology) that allow you to use hacking to interact with the kinetic world, manipulating systems that move or control things that move (like... traffic lights!). The class itself is 80% hands-on, 20% lecture, so you spend much more time DOING than listening... and falling asleep after eating too much lunch (been there). In this article, we'll take a peek at the Traffic Control CyberCity mission. I'm not going to give away everything, but we'll take a look at how we can combine useful reconnaissance and information gathering, web attacks, privilege escalation, pivoting, and Modbus/TCP exploits effectively.

CyberCity Scoring Server

In the SEC562 class, we don't just give you a target, shrug, and say "figure it out". It's not a good use of your time. Instead, we provide a well-tested environment using the NetWars Scoring Server, which identifies the mission and asks questions in a sequential order that guide you through the mission steps. If you get stuck on a question, the automated hint system brings you a little bit closer to the answer with each hint.
This way, you can work at your own pace: figure out small portions of the mission on your own, or use hints to get extra assistance where desired. When you answer a question correctly, you get all the hints automatically, just to validate your technique against what we planned for the mission. Let's jump in and start answering some questions.

Reconnaissance

Like any penetration test, you'll conduct reconnaissance analysis and information gathering before evaluating the target systems for vulnerabilities. For example:

"Leverage the FaceSpace site (facespace.co.nw) to identify three employees working for the CyberCity Department of Transportation. Enter the last name of the DoT employee whose name ends in "be"."

FaceSpace is our social networking site within CyberCity, complete with thousands of accounts and posts from various CyberCity citizens. FaceSpace is built on the Elgg open-source social networking software, acting both like Twitter and Facebook. Like other social networking sites, FaceSpace is a wealth of sensitive data that is useful for reconnaissance analysis, including the disclosure of username information, "friend" associations to identify co-workers, and other sensitive data.

Searching for various terms such as "department of transportation" and "DoT" turns up some interesting results. Looks like Jermaine Strobbe is a new hire for the DoT, and the answer for our question. Let's move on to some scanning and information gathering.

Scanning

Later in the mission, we start getting questions like this one:

"Additional informational resources about the structure of systems and traffic light control protocols are stored in non-guest accounts on the filebox.dot.city.nw site. Access these protected resources and identify the model number for the traffic light controller used by the Cyber City DoT. Enter the numeric portion of the traffic light controller model number."
The filebox.dot.city.nw site is used for sharing public resources - it's the CyberCity version of ownCloud or other cloud storage providers, and based on a private cloud system that I got to evaluate for a customer penetration test not long ago. When you visit the site, several files are accessible to guest users as shown here. When I mouse-over the link for any of the files, I see URLs that look like this:

This looks like a classically bad URL scheme, where the files are in a random-looking directory. Browsing to the directory itself, we get the following:

So, we know we have a directory browsing issue, but we want to get access to other users' files on this same system. We could mount a password guessing attack against the login screen, but that could take ages and might lock out user accounts. If we focus on the unpredictable directory portion of the URL, we see the string Z3Vlc3QK. Browsing to small variations on this string only returns 404 errors from the server. However, we can try different decoding methods to further analyze this content. While the string could just be random lower/upper alphanumeric values, let's try different decoding options. The string turns out to be valid Base64: it decodes to "guest" (with a trailing newline). Seeing this, we can use our earlier reconnaissance data of username information for directory path guessing attacks to bypass authentication and access other users' cloud files. You can do this manually, or a little shell script can speed things up:

jwright@ccgateway2 ~ $ for username in jstobbe bstobbe jdesoto bdforge rgray ; do
> enc=`echo -n $username | openssl enc -base64` # -n very important here!
> curl -sL -w "%{http_code} %{url_effective}\n" -o /dev/null
> done
404
404
200
404
404

Here, we see mostly 404's, but one wonderful 200 indicates that we found another encoded directory that was previously hidden from us. Skipping to the pillaging phase, we retrieve all the files in the directory and evaluate them to learn more about the target system.
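As a quick aside, the encoding trick behind that shell loop is easy to reproduce and verify in Python. The username list comes from the reconnaissance phase; the Base64 logic mirrors the "echo -n ... | openssl enc -base64" pipeline:

```python
import base64

# the directory token seen in the public download URLs
token = "Z3Vlc3QK"
decoded = base64.b64decode(token)
print(decoded)  # b'guest\n'

# generate candidate tokens from the usernames gathered during reconnaissance;
# b64encode of the bare name matches `echo -n $username | openssl enc -base64`
usernames = ["jstobbe", "bstobbe", "jdesoto", "bdforge", "rgray"]
candidates = {u: base64.b64encode(u.encode()).decode() for u in usernames}
for user, enc in candidates.items():
    print(user, enc)
```

Each candidate token can then be substituted into the directory portion of the URL, exactly as the shell script does.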
Among other things, we learn that the traffic light system used by the CyberCity DoT is a product from Traffic Control Systems (TCS) that includes an extended Human-Machine Interface (HMI) that allows for online reporting of traffic data. We can further validate that by browsing to the website, shown here. Looking at the page source, we find another interesting target to explore:

Exploitation

Jumping ahead a little bit, we can explore the hmi.dot.city.nw target. What we find is a straightforward login page, asking the user to enter credentials for the product management console. If we try to authenticate with a guessed password for a DoT employee, we get an error from the server. When we fail authentication, we get a <div> on the page letting us know. However, the URL now has a new parameter: msg=loginfail.html.

NOTE: Anytime you see anything that looks like a filename in the URL, try to use the parameter to access other files on the system! Even though it's 2015, it still happens all the time.

Here too, we have a straightforward Local File Include (LFI) issue that allows us to read files outside of the web root. However, the web user is limited to reading files owned by www-data, or with "world" read permission. This limits our ability to get additional access on the target system (e.g. we can't read the /etc/shadow file and start password cracking). SEC562 participants have to leverage a second vulnerability to get files uploaded to the target system, and then include those scripts in the LFI to gain a shell on the target.

Pivoting

Once we get basic shell access to the hmi.dot.city.nw box, we can start to pillage the host for information, and use it as a pivot point to attack downstream systems, including the individual Modbus/TCP traffic light controllers. Here's what we find out through pillaging the host:

- Target OS is Ubuntu 12.04.4 LTS
- Host has a public interface on the 10.21.12.0/24 network at 10.21.12.11.
- Host has a second private interface on the 10.21.22.0/24 network at 10.21.22.11.
- PLCs for traffic light controllers exist at 10.21.22.21, 10.21.22.22, 10.21.22.23, and 10.21.22.24
- All PLCs are listening on TCP/502 for Modbus/TCP connections
- TCS traffic reporting software runs from /opt/b2b/lightstatus

Using the cameras that monitor CyberCity in real time, and packet capture data from the traffic light controller PLCs, we can gain some insight into how the Modbus protocol is configured to control the traffic light patterns. Here, the master device (the Ubuntu box at 10.21.22.10) is transmitting a "Write Multiple Coils" message to the target device at 10.21.22.23. In Modbus, a coil is a binary value (on or off), while a register is an analog reading (from Y to Z). Modbus lacks any kind of authentication, encryption, or integrity protection; clearly, this was a protocol that was not written to be used on the same network as a hostile adversary. (Ed. understatement of the year)

While some Modbus/TCP attack tools exist, I typically find it is easiest to build what I want with a quick Python script instead of adapting a different tool for a specific task. First, we can experiment with setting all the bits to the ON position with:

Joshuas-MacBook-Pro-2:~ jwright$ cat lightmanip.py
#!/usr/bin/python
from pymodbus.client.sync import ModbusTcpClient
from time import sleep
import sys

i = 0
while(i < 30):
    client = ModbusTcpClient(sys.argv[1])
    client.write_coils(0, [True, True, True]*4)  # Ref #, followed by coil settings in list form
    client.close()
    sleep(1)
    i += 1
print "Done"

Here, the list element [True, True, True] represents the Red/Yellow/Green lights, repeated 4 times for the North, South, East, and West directions. When we run the script with the IP address of one of the PLCs, it should change all the traffic lights to the on position.
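Before sending anything more interesting to a live controller, that 12-coil payload layout can be sanity-checked offline. The helper below is hypothetical (it is not from the article), and the Red/Yellow/Green-per-direction ordering is simply the assumption described above:

```python
# Model the 12-coil layout: Red/Yellow/Green lamps for each of North, South,
# East, West, in that order (an assumed ordering for illustration only).
DIRECTIONS = ["N", "S", "E", "W"]
RED, YELLOW, GREEN = 0, 1, 2

def pattern(green_dirs):
    # build a write_coils()-style payload: green for the named directions,
    # red for everything else, exactly one lamp lit per direction
    coils = []
    for d in DIRECTIONS:
        lamp = [False, False, False]
        lamp[GREEN if d in green_dirs else RED] = True
        coils.extend(lamp)
    return coils

ns_go = pattern({"N", "S"})  # north/south green, east/west red
print(ns_go)

# safety check: every direction shows exactly one lit lamp
assert all(sum(ns_go[i:i + 3]) == 1 for i in range(0, 12, 3))
```

A list built this way drops straight into the write_coils(0, ...) call in the script above, which is convenient when experimenting with different patterns per PLC.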
python lightmanip.py 10.21.22.21
Done

Viewed through the traffic light reporting page, we see something like this:

This view only shows a single light on for the Quadrant 1 traffic lights (all red). This could be because the monitoring software doesn't have the logic to keep testing for other lights to be set (since that shouldn't happen in practice). Viewed through the CyberCity camera, we see a different picture:

It's a little hard to tell because the LEDs are so bright, but all 12 LEDs are shining strong because of our tool. Now, it's a simple matter of correlating the traffic lights to the individual IP addresses, and updating the script to manipulate each traffic light per the mission directive.

Conclusion

In this article we looked at some of the techniques used in the SEC562: CyberCity Hands-on Kinetic Cyber Range Exercise. For an attacker, the world of Industrial Control Systems opens up a lot of attack opportunity, both from the ease with which these protocols can be exploited, and the kinetic impact an attacker can produce. Learning about these attacks expands your skillset both as an attacker and as a defender (and we get to have a lot of fun in the process).

-Joshua Wright

The work "Main St Lights" is a derivative of "US Route 6 (2)" by Nicholas A. Tonelli, used under CC BY. "Main St Lights" is licensed under CC BY by Joshua Wright. All other images copyright Counter Hack, Inc., All Rights Reserved.

Posted December 7, 2015 at 12:23 AM
UI Frontiers - Silverlight Printing Basics By Charles Petzold | May 2011 Silverlight 4 added printing to the Silverlight feature list, and I want to plunge right in by showing you a tiny program that put a big smile on my face. The program is called PrintEllipse and that’s all it does. The XAML file for MainPage contains a Button, and Figure 1 shows the MainPage codebehind file in its entirety. using System; using System.Windows; using System.Windows.Controls; using System.Windows.Media; using System.Windows.Printing; using System.Windows.Shapes; namespace PrintEllipse { public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); } void OnButtonClick(object sender, RoutedEventArgs args) { PrintDocument printDoc = new PrintDocument(); printDoc.PrintPage += OnPrintPage; printDoc.Print("Print Ellipse"); } void OnPrintPage(object sender, PrintPageEventArgs args) { Ellipse ellipse = new Ellipse { Fill = new SolidColorBrush(Color.FromArgb(255, 255, 192, 192)), Stroke = new SolidColorBrush(Color.FromArgb(255, 192, 192, 255)), StrokeThickness = 24 // 1/4 inch }; args.PageVisual = ellipse; } } } Notice the using directive for System.Windows.Printing. When you click the button, the program creates an object of type PrintDocument and assigns a handler for the PrintPage event. When the program calls the Print method, the standard print dialog box appears. The user can take this opportunity to set which printer to use and set various properties for printing, such as portrait or landscape mode. When the user clicks Print on the print dialog, the program receives a call to the PrintPage event handler. This particular program responds by creating an Ellipse element and setting that to the PageVisual property of the event arguments. (I deliberately chose light pastel colors so the program won’t use too much of your ink.) Soon a page will emerge from your printer filled with a giant ellipse. 
You can run this program from my Web site at bit.ly/dU9B7k and check it out yourself. All the source code from this article is also downloadable, of course. If your printer is like most printers, the internal hardware prohibits it from printing to the very edge of the paper. Printers usually have an intrinsic built-in margin in which nothing is printed; printing is instead restricted to a “printable area” that’s less than the full size of the page. What you’ll notice with this program is that the ellipse appears in its entirety within the printable area of the page, and obviously this happens with minimum effort on the part of the program. The printable area of the page behaves much like a container element on the screen: It only clips a child when an element has a size that exceeds the area. Some far more sophisticated graphics environments—such as Windows Presentation Foundation (WPF)—don’t behave nearly as well (but, of course, WPF offers much more printing control and flexibility than Silverlight). PrintDocument and Events Besides the PrintPage event, PrintDocument also defines BeginPrint and EndPrint events, but these aren’t nearly as important as PrintPage. The BeginPrint event signals the beginning of a print job. It’s fired when the user exits the standard print dialog by pressing the Print button and gives the program the opportunity to perform initialization. The call to the BeginPrint handler is then followed by the first call to the PrintPage handler. A program that wishes to print more than one page in a particular print job can do so. In every call to the PrintPage handler, the HasMorePages property of PrintPageEventArgs is initially set to false. When the handler is finished with a page, it can simply set the property to true to signal that at least one more page must be printed. PrintPage is then called again. The PrintDocument object maintains a PrintedPageCount property that’s incremented following every call to the PrintPage handler. 
When the PrintPage handler exits with HasMorePages set to its default value of false, the print job is over and the EndPrint event is fired, giving the program the opportunity to perform cleanup chores. The EndPrint event is also fired when an error occurs during the printing process; the Error property of EndPrintEventArgs is of type Exception. Printer Coordinates The code shown in Figure 1 sets the StrokeThickness of the Ellipse to 24, and if you measure the printed result, you’ll discover that it’s one-quarter inch wide. As you know, a Silverlight program normally sizes graphical objects and controls entirely in units of pixels. However, when the printer is involved, coordinates and sizes are in device-independent units of 1/96th inch. Regardless of the actual resolution of the printer, from a Silverlight program the printer always appears to be a 96 DPI device. As you might know, this coordinate system of 96 units to the inch is used throughout WPF, where the units are sometimes referred to as “device-independent pixels.” This value of 96 DPI wasn’t chosen arbitrarily: By default, Windows assumes that your video display has 96 dots to the inch, so in many cases a WPF program is actually drawing in units of pixels. The CSS specification assumes that video displays have a 96 DPI resolution, and that value is used for converting between pixels, inches and millimeters. The value of 96 is also a convenient number for converting font sizes, which are commonly specified in points, or 1/72nd inch. A point is three-quarters of a device-independent pixel. PrintPageEventArgs has two handy get-only properties that also report sizes in units of 1/96th inch: PrintableArea of type Size provides the dimensions of the area of the printable area of the page, and PageMargins of type Thickness is the width of the left, top, right and bottom of the unprintable edges. Add these two together (in the right way) and you get the full size of the paper. 
My printer—when loaded with standard 8.5 x 11 inch paper and set for portrait mode—reports a PrintableArea of 791 x 993. The four values of the PageMargins property are 12 (left), 6 (top), 12 (right) and 56 (bottom). If you sum the horizontal values of 791, 12 and 12, you'll get 815. The vertical values are 993, 6 and 56, which sum to 1,055. I'm not sure why there's a one-unit difference between these values and the values of 816 and 1,056 obtained by multiplying the page size in inches by 96. When a printer is set for landscape mode, then the horizontal and vertical dimensions reported by PrintableArea and PageMargins are swapped. Indeed, examining the PrintableArea property is the only way a Silverlight program can determine whether the printer is in portrait or landscape mode. Anything printed by the program is automatically aligned and rotated depending on this mode. Often when you print something in real life, you'll define margins that are somewhat larger than the unprintable margins. How do you do this in Silverlight? At first, I thought it would be as easy as setting the Margin property on the element you're printing. This Margin would be calculated by starting with a desired total margin (in units of 1/96th inch) and subtracting the values of the PageMargins property available from the PrintPageEventArgs. That approach didn't work well, but the correct solution was almost as easy. The PrintEllipseWithMargins program (which you can run at bit.ly/fCBs3X) is the same as the first program except that a Margin property is set on the Ellipse, and then the Ellipse is set as the child of a Border, which fills the printable area. Alternatively, you can set the Padding property on the Border. Figure 2 shows the new OnPrintPage method.
void OnPrintPage(object sender, PrintPageEventArgs args) { Thickness margin = new Thickness { Left = Math.Max(0, 96 - args.PageMargins.Left), Top = Math.Max(0, 96 - args.PageMargins.Top), Right = Math.Max(0, 96 - args.PageMargins.Right), Bottom = Math.Max(0, 96 - args.PageMargins.Bottom) }; Ellipse ellipse = new Ellipse { Fill = new SolidColorBrush(Color.FromArgb(255, 255, 192, 192)), Stroke = new SolidColorBrush(Color.FromArgb(255, 192, 192, 255)), StrokeThickness = 24, // 1/4 inch Margin = margin }; Border border = new Border(); border.Child = ellipse; args.PageVisual = border; } The PageVisual Object There are no special graphics methods or graphics classes associated with the printer. You “draw” something on the printer page the same way you “draw” something on the video display, which is by assembling a visual tree of objects that derive from FrameworkElement. This tree can include Panel elements, including Canvas. To print that visual tree, set the topmost element to the PageVisual property of the PrintPageEventArgs. (PageVisual is defined as a UIElement, which is the parent class to FrameworkElement, but in a practical sense, everything you’ll be setting to PageVisual will derive from FrameworkElement.) Almost every class that derives from FrameworkElement has non-trivial implementations of the MeasureOverride and ArrangeOverride methods for layout purposes. In its MeasureOverride method, an element determines its desired size, sometimes by determining the desired sizes of its children by calling its children’s Measure methods. In the ArrangeOverride method, an element arranges its children relative to itself by calling the children’s Arrange methods. When you set an element to the PageVisual property of PrintPageEventArgs, the Silverlight printing system calls Measure on that topmost element with the PrintableArea size. This is how (for example) the Ellipse or Border is automatically sized to the printable area of the page. 
However, you can also set that PageVisual property to an element that’s already part of a visual tree being displayed in the program’s window. In this case, the printing system doesn’t call Measure on that element, but instead uses the measurements and layout already determined for the video display. This allows you to print something from your program’s window with reasonable fidelity, but it also means that what you print might be cropped to the size of the page. You can, of course, set explicit Width and Height properties on the elements you print, and you can use the PrintableArea size to help out. Scaling and Rotating The next program I took on turned out to be more of a challenge than I anticipated. The goal was a program that would let the user print any image file supported by Silverlight—namely PNG and JPEG files—stored on the user’s local machine. This program uses the OpenFileDialog class to load these files. For security purposes, OpenFileDialog only returns a FileInfo object that lets the program open the file. No filename or directory is provided. I wanted this program to print the bitmap as large as possible on the page (excluding a preset margin) without altering the bitmap’s aspect ratio. Normally this is a snap: The Image element’s default Stretch mode is Uniform, which means the bitmap is stretched as large as possible without distortion. However, I decided that I didn’t want to require the user to specifically set portrait or landscape mode on the printer commensurate with the particular image. If the printer was set to portrait mode, and the image was wider than its height, I wanted the image to be printed sideways on the portrait page. This little feature immediately made the program much more complex. If I were writing a WPF program to do this, the program itself could have switched the printer into portrait or landscape mode. But that isn’t possible in Silverlight. 
The printer interface is defined so that only the user can change settings like that. Again, if I were writing a WPF program, alternatively I could have set a LayoutTransform on the Image element to rotate it 90 degrees. The rotated Image element would then be resized to fit on the page, and the bitmap itself would have been adjusted to fit the Image element. But Silverlight doesn’t support LayoutTransform. Silverlight only supports RenderTransform, so if the Image element must be rotated to accommodate a landscape image printed in portrait mode, the Image element must also be manually sized to the dimensions of the landscape page. You can try out my first attempt at bit.ly/eMHOsB. The OnPrintPage method creates an Image element and sets the Stretch property to None, which means the Image element displays the bitmap in its pixel size, which on the printer means that each pixel is assumed to be 1/96th inch. The program then rotates, sizes and translates that Image element by calculating a transform that it applies to the RenderTransform property of the Image element. The hard part of such code is, of course, the math, so it was pleasant to see the program work with portrait and landscape images with the printer set to portrait and landscape modes. However, it was particularly unpleasant to see the program fail for large images. You can try it yourself with images that have dimensions somewhat greater (when divided by 96) than the size of the page in inches. The image is displayed at the correct size, but not in its entirety. What’s going on here? Well, it’s something I’ve seen before on video displays. Keep in mind that the RenderTransform affects only how the element is displayed and not how it appears to the layout system. To the layout system, I’m displaying a bitmap in an Image element with Stretch set to None, meaning that the Image element is as large as the bitmap itself. 
If the bitmap is larger than the printer page, then some of that Image element need not be rendered, and it will, in fact, be clipped, regardless of a RenderTransform that’s shrinking the Image element appropriately. My second attempt, which you can try out at bit.ly/g4HJ1C, takes a somewhat different strategy. The OnPrintPage method is shown in Figure 3. The Image element is given explicit Width and Height settings that make it exactly the size of the calculated display area. Because it’s all within the printable area of the page, nothing will be clipped. The Stretch mode is set to Fill, which means that the bitmap fills the Image element regardless of the aspect ratio. If the Image element won’t be rotated, one dimension is correctly sized, and the other dimension must have a scaling factor applied that reduces the size. If the Image element must also be rotated, then the scaling factors must accommodate the different aspect ratio of the rotated Image element. void OnPrintPage(object sender, PrintPageEventArgs args) { // Find the full size of the page Size pageSize = new Size(args.PrintableArea.Width + args.PageMargins.Left + args.PageMargins.Right, args.PrintableArea.Height + args.PageMargins.Top + args.PageMargins.Bottom); // Get additional margins to bring the total to MARGIN (= 96) Thickness additionalMargin = new Thickness { Left = Math.Max(0, MARGIN - args.PageMargins.Left), Top = Math.Max(0, MARGIN - args.PageMargins.Top), Right = Math.Max(0, MARGIN - args.PageMargins.Right), Bottom = Math.Max(0, MARGIN - args.PageMargins.Bottom) }; // Find the area for display purposes Size displayArea = new Size(args.PrintableArea.Width - additionalMargin.Left - additionalMargin.Right, args.PrintableArea.Height - additionalMargin.Top - additionalMargin.Bottom); bool pageIsLandscape = displayArea.Width > displayArea.Height; bool imageIsLandscape = bitmap.PixelWidth > bitmap.PixelHeight; double displayAspectRatio = displayArea.Width / displayArea.Height; double 
imageAspectRatio = (double)bitmap.PixelWidth / bitmap.PixelHeight; double scaleX = Math.Min(1, imageAspectRatio / displayAspectRatio); double scaleY = Math.Min(1, displayAspectRatio / imageAspectRatio); // Calculate the transform matrix MatrixTransform transform = new MatrixTransform(); if (pageIsLandscape == imageIsLandscape) { // Pure scaling transform.Matrix = new Matrix(scaleX, 0, 0, scaleY, 0, 0); } else { // Scaling with rotation scaleX *= pageIsLandscape ? displayAspectRatio : 1 / displayAspectRatio; scaleY *= pageIsLandscape ? displayAspectRatio : 1 / displayAspectRatio; transform.Matrix = new Matrix(0, scaleX, -scaleY, 0, 0, 0); } Image image = new Image { Source = bitmap, Stretch = Stretch.Fill, Width = displayArea.Width, Height = displayArea.Height, RenderTransform = transform, RenderTransformOrigin = new Point(0.5, 0.5), HorizontalAlignment = HorizontalAlignment.Center, VerticalAlignment = VerticalAlignment.Center, Margin = additionalMargin, }; Border border = new Border { Child = image, }; args.PageVisual = border; } The code is certainly messy—and I suspect there might be simplifications not immediately obvious to me—but it works for bitmaps of all sizes. Another approach is to rotate the bitmap itself rather than the Image element. Create a WriteableBitmap from the loaded BitmapImage object, and a second WritableBitmap with swapped horizontal and vertical dimensions. Then copy all the pixels from the first WriteableBitmap into the second with rows and columns swapped. Multiple Calendar Pages Deriving from UserControl is an extremely popular technique in Silverlight programming to create a reusable control without a lot of hassle. Much of a UserControl is a visual tree defined in XAML. You can also derive from UserControl to define a visual tree for printing! This technique is illustrated in the PrintCalendar program, which you can try out at bit.ly/dIwSsn. 
You enter a start month and an end month, and the program prints all the months in that range, one month to a page. You can tape the pages to your walls and mark them up, just like a real wall calendar. After my experience with the PrintImage program, I didn't want to bother with margins or orientation; instead, I included a Button that places the responsibility on the user, as shown in Figure 4.

Figure 4 The PrintCalendar Button

The UserControl that defines the calendar page is called CalendarPage, and the XAML file is shown in Figure 5. A TextBlock near the top displays the month and year. This is followed by a second Grid with seven columns for the days of the week and six rows for up to six weeks or partial weeks in a month.

<UserControl x:Class="PrintCalendar.CalendarPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Grid x:Name="LayoutRoot">
    <Grid.RowDefinitions>
      <RowDefinition Height="Auto" />
      <RowDefinition Height="*" />
    </Grid.RowDefinitions>
    <TextBlock Name="monthYearText" Grid.Row="0" />
    <Grid Name="dayGrid" Grid.Row="1">
      <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
      </Grid.ColumnDefinitions>
      <Grid.RowDefinitions>
        <RowDefinition Height="*" />
        <RowDefinition Height="*" />
        <RowDefinition Height="*" />
        <RowDefinition Height="*" />
        <RowDefinition Height="*" />
        <RowDefinition Height="*" />
      </Grid.RowDefinitions>
    </Grid>
  </Grid>
</UserControl>

Unlike most UserControl derivatives, CalendarPage defines a constructor with a parameter, as shown in Figure 6.
public CalendarPage(DateTime date)
{
    InitializeComponent();
    monthYearText.Text = date.ToString("MMMM yyyy");
    int row = 0;
    int col = (int)new DateTime(date.Year, date.Month, 1).DayOfWeek;

    for (int day = 0; day < DateTime.DaysInMonth(date.Year, date.Month); day++)
    {
        TextBlock txtblk = new TextBlock
        {
            Text = (day + 1).ToString(),
            HorizontalAlignment = HorizontalAlignment.Left,
            VerticalAlignment = VerticalAlignment.Top
        };
        Border border = new Border
        {
            // blackBrush is a SolidColorBrush field defined elsewhere in the class
            BorderBrush = blackBrush,
            BorderThickness = new Thickness(2),
            Child = txtblk
        };
        Grid.SetRow(border, row);
        Grid.SetColumn(border, col);
        dayGrid.Children.Add(border);

        if (++col == 7)
        {
            col = 0;
            row++;
        }
    }
    if (col == 0)
        row--;
    if (row < 5)
        dayGrid.RowDefinitions.RemoveAt(0);
    if (row < 4)
        dayGrid.RowDefinitions.RemoveAt(0);
}

The parameter is a DateTime, and the constructor uses the Month and Year properties to create a Border containing a TextBlock for each day of the month. These are each assigned Grid.Row and Grid.Column attached properties, and then added to the Grid. As you know, often months only span five weeks, and occasionally February only has four weeks, so RowDefinition objects are actually removed from the Grid if they're not needed. UserControl derivatives normally don't have constructors with parameters because they usually form parts of larger visual trees. But CalendarPage isn't used like that. Instead, the PrintPage handler simply constructs a new CalendarPage and assigns it to the PageVisual property of PrintPageEventArgs; the entire body of the handler amounts to little more than that one assignment, clearly illustrating how much work is being performed by CalendarPage. Adding a printing option to a program is so often viewed as a grueling job involving lots of code. To be able to define most of a printed page in a XAML file makes the whole thing much less frightful. Charles Petzold is a longtime contributing editor to MSDN Magazine.
His new book, "Programming Windows Phone 7" (Microsoft Press, 2010), is available as a free download at bit.ly/cpebookpdf. Thanks to the following technical experts for reviewing this article: Saied Khanahmadi and Robert Lyon
Introduction

Oops! What is MLOps? – asked the guy who read the term for the first time. There has been a lot of buzz around this word recently, hasn't there? Hold on! This article will make things simple for you through examples and a practical, code-based implementation of this topic.

In earlier times, producing classic cars and automobiles involved a great deal of craftsmanship. The workers would gather at a single place to build a model, using their tools to fit the metallic car parts together. The process would take days and months, with the cost being high, obviously. Over time the industrial revolution came in and started making things efficient, and it was Henry Ford and his company that started producing car models efficiently and at a reduced price. How did this magic happen? All thanks to the assembly line!

Taking the analogy forward, it won't be wrong to call MLOps the assembly line for Machine Learning. The term MLOps is a combination of 'Machine Learning' and 'Operations', meaning that it is a way to manage the operations of ML models, the code, the data, and everything involved from building them to using them. And not just using them once, but improving them continuously.

For you to have clarity on MLOps, you need to understand this – for a student working on an ML project individually, probably all the steps are done by him on his laptop, in a single environment, with not much complexity involved. But in a corporate or a tech company, there are several Data Scientists and Engineers working simultaneously, and they should be doing all the steps efficiently, especially to serve a customer or the use-case. Remember, Henry Ford did the same thing efficiently, and even in MLOps we do the conventional model building/deploying things in a better and more organized way.

WHAT EXACTLY IS MLOps?

Is it new software, a new library/module, or some piece of code? You can still get your answer if you imagine the 'assembly line'.
MLOps is a concept that involves a set of practices for operating all the steps involved in making Machine Learning work. The exact steps will vary, but it generally involves gathering data, making and testing models, improving the models, putting them to use, and then again improving them based on real-time feedback. For all these to happen, there should be an exact set of processes, and this is where MLOps comes in.

It's important to know that MLOps is basically derived from DevOps – the same concept that has been in use for quite a while now in the software domain. It includes a lot of concepts that make it unique, making the adoption of MLOps a dire necessity today. These are –

Focusing on the Machine Learning lifecycle – The concept of MLOps is primarily based on the idea that every Machine Learning project has to have a lifecycle. Proper implementation is possible only upon giving importance to every step in this cycle. Often data scientists and engineers (having shared roles) work on any one of these steps, and thus a proper organization & set of practices is always required. This gives rise to the concept of MLOps.

CI/CD/CT through pipelines & automation – CI stands for continuous integration. CD stands for continuous delivery (or deployment). CT stands for continuous training. MLOps focuses on continuously improving the ML model based on the evaluation and feedback received upon its testing and deployment. The code is subject to several changes across the ML lifecycle, and thus a feedback loop is in play through pipelines that automate the process to make it fast.

Making things scalable, efficient, collaborative – There's no use of MLOps if it's no better in comparison to the existing platforms and frameworks. With ML being a known concept for some years now, every team/company most probably has its own strategies for managing the work distribution amongst their people.
By adopting the concept of MLOps, collaboration becomes more scalable and efficient while consuming less time. This is especially made possible through the various platforms.

LEVERAGING MLOps PLATFORMS

If you have been eager to know how exactly MLOps is implemented, then, just like DevOps, it is possible through the various platforms that are available for this job. These allow you to make a pipeline so that you or your company can create an organized integration of ML models, code, and data. Some of the platforms by the tech leaders are Google Cloud AI Platform, Amazon SageMaker, and Azure Machine Learning. There are several open-source platforms as well, like MLflow, which we will practically implement today.

GETTING THE FLOW WITH MLflow

When it comes to implementing MLOps, MLflow is certainly a leading name. With various components to monitor your operations, it makes training, testing, tracking, and rebuilding of models easier throughout their lifecycle. It's a very useful platform to quickly set up your company projects on MLOps infrastructure so that people with different job roles can work collaboratively on a single project.

To start with, MLflow majorly has three components – Tracking, Projects, and Models. This chart, sourced from the MLflow site itself, clears the air. While 'tracking' is for keeping a log of the changes that you make, 'projects' is for creating desired pipelines. Then we have the Models feature. An MLflow model is a standard format for packaging machine learning models that can be used in a variety of downstream tools — for example, real-time serving through a REST API or batch inference on Apache Spark.

HANDS-ON WITH MLflow & Google Colab

With all things said and done, it's time to get going and set up MLflow for our own projects. As said earlier, you can get the true essence of the MLOps concept only in a real corporate environment when you are working with other people with shared roles.
Nevertheless, you can have an experience of how it works on your own laptop. We will be installing and setting up MLflow on Google Colab. You can of course do it in your local environment or Jupyter Notebook. But by using Colab, we keep it fast and shareable. We also get some things to learn, like using ngrok to get a public URL for remote tunnel access. Let's get started by opening a new notebook and trying the code below.

## Step 1 - Installing MLflow and checking the version
!pip install mlflow --quiet
import mlflow
print(mlflow.__version__)

## Step 2 - Starting MLflow, running UI in background
with mlflow.start_run(run_name="MLflow on Colab"):
    mlflow.log_metric("m1", 2.0)
    mlflow.log_param("p1", "mlflow-colab")

# run tracking UI in the background
get_ipython().system_raw("mlflow ui --port 5000 &")

## Step 3 - Installing pyngrok for remote tunnel access using ngrok.com
!pip install pyngrok --quiet
from pyngrok import ngrok
from getpass import getpass

# Terminate open tunnels if any exist
ngrok.kill()

## Step 4 - Log in on ngrok.com and get your authtoken
# Enter your auth token when the code is running
NGROK_AUTH_TOKEN = getpass('Enter the ngrok authtoken: ')
ngrok.set_auth_token(NGROK_AUTH_TOKEN)
ngrok_tunnel = ngrok.connect(addr="5000", proto="http", bind_tls=True)
print("MLflow Tracking UI:", ngrok_tunnel.public_url)

## Step 5 - Loading dataset, training an XGBoost model and tracking results using MLflow
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, log_loss
import xgboost as xgb
import mlflow
import mlflow.xgboost

def main():
    # Loading iris dataset to prepare test and train
    iris = datasets.load_iris()
    X = iris.data
    Y = iris.target
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=25)

    # enable auto logging in MLflow
    mlflow.xgboost.autolog()

    dtrain = xgb.DMatrix(X_train, label=Y_train)
    dtest = xgb.DMatrix(X_test, label=Y_test)

    with mlflow.start_run():
        # train the XGBoost model
        params = {
            "objective": "multi:softprob",
            "num_class": 3,
            "learning_rate": 1,
            "eval_metric": "mlogloss",
            "seed": 42,
        }
        model = xgb.train(params, dtrain, evals=[(dtrain, "train")])

        # evaluate model
        Y_prob = model.predict(dtest)
        Y_pred = Y_prob.argmax(axis=1)
        loss = log_loss(Y_test, Y_prob)
        acc = accuracy_score(Y_test, Y_pred)

        # log metrics
        mlflow.log_metrics({"log_loss": loss, "accuracy": acc})

if __name__ == "__main__":
    main()

By clicking on the link generated before, you should be able to see the list of experiments on the MLflow UI page. Every experiment is listed along with several parameters and metrics. You can search, filter, and compare them too. By clicking on each experiment, you get to discover a lot more details. There's also an option to download in CSV format. The results for all the experiments are tracked. You can also see the log-loss function below.

This is just a tiny example of what MLflow has to offer, and it's now for us to make the best use of it. More concept details and examples, including how to use MLflow with Colab to store and track all your models, can be found in the MLflow documentation.

BE THE BOSS WITH MLOps – THE TIME IS NOW!

A lot has already been said and done with Machine Learning, but practical implementation is what the world has to look for. Though ML & AI are wonderful concepts, using them appropriately involves multiple aspects and the combined efforts of people with diverse skillsets. Thus managing ML projects in a corporate setting is never an easy task. Using MLOps and its various platforms like MLflow is probably the best way to deal with this issue, just like a boss, and you can even make a platform of your own. The MLOps concept is just at its nascent stage and there's a lot of development expected in the coming days. So it's high time to keep our eyes and ears open (and be a little nosy with ML).
About Author

Hello, this is Jyotisman Rath from Bhubaneswar, and it's the toughest challenges that always excite me. Currently pursuing my I-MTech in Chemical Engineering, I have a keen interest in the field of Machine Learning, Data Science, and AI, and I keep looking for their integration with other disciplines like science and chemistry in my research interests.
Re: Why is a static protected field NOT protected as seems it should?
From: Lew <noone@lewscanon.com>
Newsgroups: comp.lang.java.help
Date: Thu, 28 May 2009 00:25:21 -0400
Message-ID: <gvl3ni$osj$1@news.albasani.net>

Norman wrote:
Why is a static protected field NOT protected as seems it should? The non static protected field is protected as it should be according to the book. See comments/questions below.
....
package app2;
public class classtest extends app1.Main {

The class name should follow the convention of starting with an upper-case letter and using camel case thereafter.

public void amethod(app1.Main m, classtest c) {

Notice that argument 'm' is passed in from the "outside", so to speak. That is relevant to the behavior you ask about.

System.out.printf("amethod");
// vvv Why doesn't netbeans [sic] complain about this?

It's not so much NetBeans itself as that it's following the rules of the Java compiler.

// Isn't sprotect protected in the same manner as protect?
System.out.printf("sprotect\n", app1.Main.sprotect);

This access is by class reference to a static member, through the supertype 'app1.Main'. This counts as walking up the subclass's own hierarchy from "within itself", so to speak, thus fulfilling §6.6.2:

A protected member or constructor of an object may be accessed from outside the package in which it is declared only by code that is responsible for the implementation of that object.

[ The JLS then goes on to qualify this more explicitly ]

as cited by Joshua Cranmer. The static reference is going up through this class's own inheritance as seen by its own implementation.
>     System.out.printf("pprotect\n", app1.Main.spub);
>     System.out.printf("protect\n", protect);

This instance reference is to the object's own internal instance variable 'pub':

>     System.out.printf("pubprotect\n", pub);

Now you switch to instance reference, using arguments 'm' and 'c' passed in from the "outside":

>     // vvv protect has protected access in app1.Main
>     System.out.printf("protect\n", m.protect);
>     System.out.printf("pubprotect\n", m.pub);
>     System.out.printf("protect\n", c.protect);
>     System.out.printf("pubprotect\n", c.pub);
>   }
> }

Because those last protected references are through another object, not through 'this', they are not part of the instance's own knowledge of itself. It can know these things about itself, but not about the other objects, even though they be of the same type or subtype thereof. The 'public' attribute should have raised no warning.

--
Lew
https://preciseinfo.org/Convert/Articles_Java/NetBeans_Experts/Java-NetBeans-Experts-090528072521.html
Java control statements control the order of execution in a Java program, based on data values and conditional logic. There are three main categories of control flow statements:

Selection statements: if, if-else and switch.
Loop statements: while, do-while and for.
Transfer statements: break, continue, return, try-catch-finally and assert.

We use control statements when we want to change the default sequential order of execution.

SELECTION STATEMENTS

The If Statement

The if statement executes a block of code only if the specified expression is true. If the value is false, the if block is skipped and execution continues with the rest of the program. You can have either a single statement or a block of code within an if statement. Note that the conditional expression must be a Boolean expression.

The simple if statement has the following syntax:

    if (<conditional expression>)
        <statement action>

Below is an example that demonstrates conditional execution based on an if statement condition.

    public class IfStatementDemo {
        public static void main(String[] args) {
            int a = 10, b = 20;
            if (a > b)
                System.out.println("a > b");
            if (a < b)
                System.out.println("b > a");
        }
    }

Output

    b > a

The If-else Statement

The if-else statement is an extension of the if statement. If the condition in the if statement fails, the statements in the else block are executed. You can have either a single statement or a block of code within the if and else blocks. Note that the conditional expression must be a Boolean expression.

The if-else statement has the following syntax:

    if (<conditional expression>)
        <statement action>
    else
        <statement action>

Below is an example that demonstrates conditional execution based on an if-else statement condition.
    public class IfElseStatementDemo {
        public static void main(String[] args) {
            int a = 10, b = 20;
            if (a > b) {
                System.out.println("a > b");
            } else {
                System.out.println("b > a");
            }
        }
    }

Switch Case Statement

The switch case statement, also called a case statement, is a multi-way branch with several choices. A switch is easier to implement than a series of if/else statements. The switch statement begins with the switch keyword, followed by an expression that equates to a non-long integral value. Following the controlling expression is a code block that contains zero or more labeled cases. Each label must equate to an integer constant and each must be unique. When the switch statement executes, it compares the value of the controlling expression to the values of each case label. The program will select the case label that equals the value of the controlling expression and branch down that path to the end of the code block. If none of the case label values match, then none of the code within the switch statement code block will be executed. Java includes a default label to use in cases where there are no matches. We can have a nested switch within a case block of an outer switch. Its general form is as follows:

    switch (<non-long integral expression>) {
        case label1: <statement1>
        case label2: <statement2>
        ...
        case labelN: <statementN>
        default: <statement>
    } // end switch

When executing a switch statement, if no break is present at the end of a case, the program falls through to the next case. Therefore, if you want to exit in the middle of the switch statement code block, you must insert a break statement, which causes the program to continue executing after the current code block. Below is a Java example that combines an if/else-if ladder with a switch statement to find the greatest of three numbers.
    public class SwitchCaseStatementDemo {
        public static void main(String[] args) {
            int a = 10, b = 20, c = 30;
            int status = -1;
            if (a > b && a > c) {
                status = 1;
            } else if (b > c) {
                status = 2;
            } else {
                status = 3;
            }
            switch (status) {
                case 1:
                    System.out.println("a is the greatest");
                    break;
                case 2:
                    System.out.println("b is the greatest");
                    break;
                case 3:
                    System.out.println("c is the greatest");
                    break;
                default:
                    System.out.println("Cannot be determined");
            }
        }
    }

Output

    c is the greatest
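The fall-through behaviour mentioned above is easy to see in a small, self-contained example. The class and method names here are illustrative only, not part of the tutorial's demo files:

```java
// Illustrative only: shows how execution falls through
// when a case has no break statement.
public class FallThroughDemo {

    // Returns the text produced for a given day number
    // (6 = Saturday, 7 = Sunday, anything else = a weekday).
    static String categorize(int day) {
        StringBuilder sb = new StringBuilder();
        switch (day) {
            case 6:
                sb.append("Saturday ");
                // no break here, so execution falls through into case 7
            case 7:
                sb.append("Weekend");
                break; // break stops the fall-through
            default:
                sb.append("Weekday");
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(categorize(6)); // Saturday Weekend
        System.out.println(categorize(7)); // Weekend
        System.out.println(categorize(3)); // Weekday
    }
}
```

Because case 6 has no break, a value of 6 executes both the case 6 and case 7 statements, producing "Saturday Weekend".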
https://wideskills.com/java-tutorial/java-control-flow-statements
csBoxClipper Class Reference
[Geometry utilities]

The csBoxClipper class is able to clip convex polygons to a rectangle (such as the screen).

#include <csgeom/polyclip.h>

Detailed Description

The csBoxClipper class is able to clip convex polygons to a rectangle (such as the screen).

Definition at line 78 of file polyclip.h.

Constructor & Destructor Documentation

Initializes the clipper object to the given bounding region. Definition at line 96 of file polyclip.h.

Initializes the clipper object to a rectangle with the given coords. Definition at line 99 of file polyclip.h.

Member Function Documentation

Classify some bounding box against this clipper.

Simple clipping.

Clip and compute the bounding box.

Clip and return additional information about each vertex.

Return a pointer to the array of csVector2's. Definition at line 126 of file polyclip.h.

Return number of vertices for this clipper polygon. Definition at line 122 of file polyclip.h.

Return true if given point is inside (or on bound) of clipper polygon. Definition at line 118 of file polyclip.h.

The documentation for this class was generated from the following file:
- csgeom/polyclip.h

Generated for Crystal Space 1.4.1 by doxygen 1.7.1
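The job this class performs — clipping a convex polygon to an axis-aligned rectangle — can be sketched generically with the classic Sutherland-Hodgman algorithm. The sketch below is illustrative only: the class name, method names, and the double[]{x, y} point representation are assumptions of this sketch, not the Crystal Space API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Predicate;

// Illustrative sketch of rectangle clipping (Sutherland-Hodgman).
// Points are double[]{x, y}. This is NOT the Crystal Space API.
public class BoxClipSketch {

    // Clip a convex polygon against the rectangle [xmin,xmax] x [ymin,ymax]
    // by clipping against each of the four edges in turn.
    static List<double[]> clip(List<double[]> poly,
                               double xmin, double ymin,
                               double xmax, double ymax) {
        poly = clipEdge(poly, p -> p[0] >= xmin, (a, b) -> crossX(a, b, xmin));
        poly = clipEdge(poly, p -> p[0] <= xmax, (a, b) -> crossX(a, b, xmax));
        poly = clipEdge(poly, p -> p[1] >= ymin, (a, b) -> crossY(a, b, ymin));
        poly = clipEdge(poly, p -> p[1] <= ymax, (a, b) -> crossY(a, b, ymax));
        return poly;
    }

    // Keep vertices on the inside of one clip line; where a polygon edge
    // crosses the clip line, emit the intersection point instead.
    static List<double[]> clipEdge(List<double[]> in,
                                   Predicate<double[]> inside,
                                   BiFunction<double[], double[], double[]> cross) {
        List<double[]> out = new ArrayList<>();
        int n = in.size();
        for (int i = 0; i < n; i++) {
            double[] cur = in.get(i);
            double[] prev = in.get((i + n - 1) % n);
            if (inside.test(cur)) {
                if (!inside.test(prev)) out.add(cross.apply(prev, cur));
                out.add(cur);
            } else if (inside.test(prev)) {
                out.add(cross.apply(prev, cur));
            }
        }
        return out;
    }

    // Intersection of segment a-b with the vertical line x = x0.
    static double[] crossX(double[] a, double[] b, double x0) {
        double t = (x0 - a[0]) / (b[0] - a[0]);
        return new double[] { x0, a[1] + t * (b[1] - a[1]) };
    }

    // Intersection of segment a-b with the horizontal line y = y0.
    static double[] crossY(double[] a, double[] b, double y0) {
        double t = (y0 - a[1]) / (b[1] - a[1]);
        return new double[] { a[0] + t * (b[0] - a[0]), y0 };
    }

    // Shoelace area, handy for checking results.
    static double area(List<double[]> poly) {
        double s = 0;
        int n = poly.size();
        for (int i = 0; i < n; i++) {
            double[] p = poly.get(i), q = poly.get((i + 1) % n);
            s += p[0] * q[1] - q[0] * p[1];
        }
        return Math.abs(s) / 2;
    }

    public static void main(String[] args) {
        // A square overlapping the clip rectangle [0,2] x [0,2].
        List<double[]> square = Arrays.asList(
            new double[] { -1, -1 }, new double[] { 3, -1 },
            new double[] { 3, 3 }, new double[] { -1, 3 });
        List<double[]> clipped = clip(new ArrayList<>(square), 0, 0, 2, 2);
        System.out.println(area(clipped)); // 4.0
    }
}
```

Clipping against one edge at a time is what makes the algorithm simple; the real class adds the extra bookkeeping the doc mentions (bounding boxes, per-vertex information).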
http://www.crystalspace3d.org/docs/online/api-1.4/classcsBoxClipper.html
AxKit::XSP::QueryParam - Advanced parameter manipulation taglib

Add the param: namespace to your XSP <xsp:page> tag:

  <xsp:page

And add this taglib to AxKit (via httpd.conf or .htaccess):

  AxAddXSPTaglib AxKit::XSP::QueryParam

AxKit::XSP::QueryParam is a taglib built around the Apache::Request module that allows you to manipulate request parameters beyond simple getting of parameter values.

<param:get

Get a value from the given parameter. The "name" attribute can be passed as a child element for programmatic access to parameter values. If the "index" attribute is supplied, and if multiple parameters are supplied for the same "name", then the parameter at that index is returned. If multiple values for the same parameter are given but no index is supplied, the first value is returned. Now, if you can understand that convoluted set of instructions, then you're smarter than me!

<param:set

Set a parameter value. You can use child elements for both "name" and "value". This is very useful when you want to override the parameters provided by the user.

<param:remove

Remove a parameter. Surprisingly enough, you can use child elements here as well. Are you beginning to notice a pattern?

<param:exists

Returns a boolean value representing whether the named parameter exists, even if it has an empty or false value. You can use child... oh, nevermind, you get the idea.

<param:enumerate/>

Returns an enumerated list of the parameter names present. Now, it hardly needs to be said, but unfortunately, it will be said anyway: This tag can take a "name" attribute (or, well, see above) supplying the name of the parameter you want to enumerate. Why, you may ask, is this necessary? If multiple parameters are supplied that all have an identical name, this attribute will allow you to enumerate all the appropriate name/value pairs for that key name. Its output is something like the following:

  <param-list>
    <param id="1">
      <name>foo</name>
      <value>bar</value>
    </param>
    ...
  </param-list>

<param:count

Returns the number of parameters provided on the request. If a name is provided, the number of parameters supplied for the given name is returned. If the name is left out, the total number of parameters is returned.

<param:if name="foo"> ... </param:if>

Executes the code contained within the block if the named parameter's value is true. You can optionally supply the attribute "value" if you want to evaluate the value of a parameter against an exact string. This tag, as well as all the other similar tags mentioned below, can be changed to "unless" to perform the exact opposite (a la Perl's "unless"). All options must be supplied as attributes; child elements can not be used to supply these values.

<param:if-exists name="foo"> ... </param:if-exists>

Executes the code contained within the block if the named parameter exists at all, regardless of its value.

<param:if-regex name="foo" value="\w+"> ... </param:if-regex>

Executes the code contained within the block if the named parameter matches the regular expression supplied in the "value" attribute. The "value" attribute is required.

AUTHOR

Michael A Nachbaur, mike@nachbaur.com

COPYRIGHT

Copyright (c) 2002-2004 Michael A Nachbaur. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/AxKit-XSP-QueryParam/lib/AxKit/XSP/QueryParam.pm
Check if 2 lines overlap

Project description

Ormuco Code Challenge

This project has been created only for demonstration purposes.

The challenge

Your goal for this question is to write a program that accepts two lines (x1,x2) and (x3,x4) on the x-axis and returns whether they overlap. As an example, (1,5) and (2,6) overlap, but (1,5) and (6,8) do not.

The Answer

source directory: ./overlap

usage:

- Run pip3 install cf_lines_overlap

  from overlap.overlap import overlap
  result = overlap((1, 5), (4, 25))  # Returns True if the lines overlap, False otherwise
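The interval-overlap test behind the package is small enough to sketch. The package itself is Python; the version below is an illustrative Java sketch of the same logic (class and method names are mine, not the package's). Whether touching endpoints count as overlap is not specified by the README; this sketch counts them.

```java
// Illustrative sketch: two segments on the x-axis overlap iff the
// larger of the two starts is not past the smaller of the two ends.
public class OverlapSketch {

    // Endpoints may be given in either order; touching endpoints
    // count as overlap here (an assumption, see lead-in).
    static boolean overlap(int x1, int x2, int x3, int x4) {
        int lo1 = Math.min(x1, x2), hi1 = Math.max(x1, x2);
        int lo2 = Math.min(x3, x4), hi2 = Math.max(x3, x4);
        return Math.max(lo1, lo2) <= Math.min(hi1, hi2);
    }

    public static void main(String[] args) {
        System.out.println(overlap(1, 5, 2, 6)); // true
        System.out.println(overlap(1, 5, 6, 8)); // false
    }
}
```

The max-of-starts vs min-of-ends comparison handles all cases (containment, partial overlap, disjoint) without enumerating them.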
https://pypi.org/project/cf-lines-overlap/
Product Information

Colonel Sanders on Being a No-Code Vegan

If you had a new vegan or vegetarian product and wanted to get an "influencer" to blog about it would you pick Colonel Sanders or Ronald McDonald? In this case I have been an ABAP programmer since 1999 and even before then I was interested in programming. I wrote my first code when I was 14 (1980). I have done this for so long that naturally I am going to be a bit biased and say that programming is a Good Thing.

However unless you have been hiding under a large rock lately you will have noticed that there has been a large push from SAP recently in regard to "Low Code/No Code" solutions. I have been fairly vocal on this subject in my various books and blogs, for example:

The spiel I usually come out with is that this concept is not new in the least – even back in the days when programmers used punch cards the "news" was that very shortly you would just be able to tell a program what it should do in some sort of declarative manner and the program would be created without one line of code. This message has been repeated on and off since about 1950 i.e. for over seventy years now. So the obvious question would be – is this like Nuclear Fusion? Always ten years in the future.

The Ghost of No-Code Past

Sticking with SAP – over the years there have been a series of ways to speed up development by providing tools whereby "Citizen Smith Developers" (before the term was even invented) could shoulder the burden of code writing that was normally done by ABAP developers.
In no particular order such initiatives included:

- SAP Query
- HR pseudo-code
- Variant Configuration pseudo-code
- Validation/Substitution logic in FI/CO
- BRF+
- ABAP for Key Users
- Robotic Process Automation
- Excel

Let us look at the last two in some detail.

Excel

It will come as no great shock that an incredibly large number of end users download data from SAP (and whatever other systems you may be using plus the internet) into spreadsheets and do a whole bunch of calculations. Sometimes the aim is to make pretty graphs and pie charts, sometimes the aim is to calculate data which then gets fed back into the SAP system. This is why SAP has always enabled a standard way of downloading ALV reports to Excel. From about ten years ago there was also ABAP2XLSX (ever heard of that? If not, then invent a way of uploading/downloading from ABAP to Excel and then write a blog about it on the SAP Community website every month forever).

Some of the data in the spreadsheet is just a straight download from SAP but of course you also have calculated columns – this can range from simply adding two columns together, progressing to doing a VLOOKUP between worksheets, and often ending up with mega-complicated macros in Visual Basic. Back in 1989 on my very first day in the accounts department when I was still a student doing a summer job I discovered you could do macros. Excel did not exist then – the spreadsheet was Lotus 123. Did that make me a Citizen Smith Developer? Are people who do VLOOKUPS Citizen Smith Developers? Do people who add two columns together qualify?

What am I going to be working on next month? That would be this whacking great spreadsheet developed by people in the business over a period of years. It has dozens of worksheets most of which are populated with SAP data, and the end result is fed back into SAP. My job of course is to build the same thing inside SAP.
At my company we do that sort of thing all the time, often having an "Excel In Place" front end so the users do not get a heart attack. How complicated can these spreadsheets get? The most complicated one I have ever seen was used to design concrete and I translated that into ABAP code to power the variant configuration module. How long did it take to translate that spreadsheet into SAP? Four years.

Robotic Process Automation

I have to confess that where I work we do not use the RPA company that SAP bought and rebranded. We use a competitor product, namely UIPATH. Going down a rabbit hole for a second, when one of the first use cases was being developed it was all about going into a web site, getting some information, and putting that information into SAP. The only problem was that the website popped up that "Are you a robot?" question. So the question was asked – can the robot automatically tick the box saying it is not a robot? I really, really hope the answer is "no".

In any event since RPA was a "Citizen Smith Developer" type of thing a call for volunteers was made throughout all the head office staff. Someone in accounts payable put their hand up. A few months later I was talking to that RPA champion, and he showed me some things he had built. It looked like a big VISIO flowchart to me, a bit like business workflow, where you configure the various boxes to do assorted things by clicking checkboxes and filling in fields. And guess what? If a box cannot do every single thing you want you can add code to it or call a Z function module.

After a very short time I realised that this "random end user" was actually sharp as a whip and I said to him that when (not if) he wanted to transition to getting a job as an actual ABAP programmer I would give him any advice/help that I could. The conclusion I draw from all this is that anyone who has the desire to be a "Citizen Smith Developer" in fact has the desire to be an actual "Pro-Code" developer.
The Ghost of No-Code Yet to Come

The current SAP push for no-code is "SAP AppGyver". Here is a blog which explains the concept:

I always thought the most interesting quote in that blog was "solutions being worked on by teams of 20, can now be placed in the hands of one person". If that is true then logically you can sack 95% of the programmers and everything will be fine. Some managers will believe this statement as it is what they want to hear and are most likely heading for an unpleasant surprise once the development team is gone.

The Ghost of No-Code Present

Now, despite all the negative things I have been saying about no-code solutions, one fine day I was contacted by somebody who wanted to demonstrate one such solution to me, so I could blog about it. So this is what I am doing. The gentleman in question is called Avi Mishan and his company in Israel (Home | DPROS (d-pro.biz)) has pretty much cornered the no code market. When he contacted me he had no idea that I had actually lived in Tel Aviv for two years many years back and so was most surprised when my replies included short bursts of Hebrew (one of my silly pastimes in 1998/1999 was trying to translate the Spice Girls song lyrics into Hebrew e.g. Stop Right Now, To-Dah-Ra-Bah). So here I am, Colonel Sanders, trying to be as unbiased as possible whilst talking about organic vegan soya beans taking the place of Fried Chicken.

Here are two slides from his (DPROS) presentation which state the aim of Low/No-Code. This aim has not changed since the 1950's but let us spell it out. This process is what I have always been used to and I never questioned it as it always seemed to work well to me. If I had to be brutally honest though often the end user requirement does get distorted before it gets to the developer. In any event the purpose of "Low/No-Code" is to change the process as follows. You end up with a new end user requirement live in production, potentially on the same day they ask for it. Possibly an hour after they ask for it. That sounds wonderful does it not?

At this point you probably expect me to start screaming and shouting and generally jumping up and down like a Baboon and grunting incoherently in an attempt to totally discredit the whole concept. Admittedly that would normally be the way I would proceed but as I have said I am trying to be as unbiased as I can here and so will just describe what I saw in the product demonstration.

This Episode has been Sponsored by the Letter "Z"

I have no idea if where I work is typical, but if you were to ask one of our end users what is the split between their usage of "Z" reports inside the SAP GUI and their usage of standard SAP reports they would probably say "Oh, are there standard SAP reports?" So – why are all the reports custom? It is generally because (a) standard SAP has to cater for all countries and business lines so the standard reports have to be very generic by nature and (b) though you can influence some standard reports to a limited extent via the IMG and user exits the scope of what you can do is never enough for the end users and (c) when we started most standard SAP reports still used WRITE statements as opposed to the ALV. Thus we wrote loads of bespoke ALV reports. But what if at the time we could have indeed made all the needed changes to the standard reports without writing one line of code?

Veneer Disease

Now is the time to revisit the "Open/Closed Principle" which is how you can change the behaviour of an existing application without changing the code at all. That sounds impossible at first glance until you think to yourself "What does a user exit do then?" and it all starts to make sense. The idea is to put a "veneer" on the surface of standard SAP.
You still have all the standard tables and BAPIs and unreleased function modules you love to use in the background, but on their journey to and from the user they are intercepted by another layer which alters what the end user sees and increases that user's ability to interact with the system. As far back as 1998 you had GUIXT whereby you could drag fields all around the screen on a standard SAP transaction, rename them, change drop down lists into radio buttons, add fancy graphics and so on. The underlying SAP system was not changed at all. As an experiment we made a standard transaction look like the legacy transaction on the mainframe – right down to the green and black colour scheme – so of course the end users loved it.

Nowadays you have Screen Personas doing the exact same thing, albeit with a web front end, and if you think about it, the UI5 concept is all about totally decoupling the front-end user interface from the back-end system. Of course with those three approaches – GUIXT, Screen Personas and UI5 – there is lots of code involved, even if some or even all of it is generated automatically.

Use Case / Suitcase / Suits You Sir!

In this case I can see three areas where you might want to do some sort of "no-code" development in regard to reports.

- Enhancing a standard SAP report
- Enhancing an existing "Z" report
- Creating a new report from scratch

I would also note if all an ABAP programmer ever did was create and change ALV reports then they would probably get bored very quickly and start wishing "the business" did in fact have some sort of self-service tool to make these changes, so the programmer could work on something more interesting.

Out of Work Navvy Going to a Demo

The no-code product that was demonstrated to me was called "Insight SAP" which is an SAP add-on (written in ABAP). Interestingly on the web site the very first words about the product are "Instant Spreadsheet Functionality Within SAP".
A bit like ABAP2XLSX (if you have ever heard of that) I suppose but without the code. The logic being if you could do Excel like things to the data within an SAP report you would not need to download the report to Excel and then do Excel like things on the data. Anyway with this tool what you can do to the SAP report is as follows:

- Add a new column which gets its value from the database
- Add a new calculated column
- Do conditional formatting (i.e. change cell colour based on certain criteria)
- Have columns where the user can enter/change data (which is more than CL_SALV_TABLE can, I don't know if I have ever mentioned this anywhere)
- Add in pie charts and bar charts and such based on the data in the report, like you can do with Excel and indeed ABAP2XLSX
- Display the report and associated data in the form of a Fiori App, so you can see it on your smartphone
- Send Alerts
- Add new functions on the toolbar

Basically a lot of things you can do by adding/changing code in an ALV report and some you cannot (such as the graphics) but without actually writing any ABAP code. So from the Citizen Smith Developer point of view there is not one line of code, but I presume there is a lot of code dynamically generated in the background based on what I would call the report configuration. As per the pictures above the end goal of this sort of thing is to reduce the development time from weeks to hours and knock the ABAP developer out of the equation.

Why-Z Man Say, Only Fools Rush In

As a last point my favourite quote from the slides was "If the company wanted lots of manual coding, why did they buy an ERP system in the first place?". I would say that in the year 1996 SAP was promising that a vanilla SAP system can give you everything you want, no need for your own code.
Fast forward to 2022 and SAP is saying that S/4HANA can give you everything you want, you do not need all that Z code you wrote over the last twenty odd years, because all that missing functionality is now in the standard system, bound to be. In effect I am hearing the exact same message from SAP that I heard 25 years ago i.e. stay vanilla, keep the core clean, you don't need any custom code. That message did not work so well the first time and now SAP are doing the exact same thing again and expecting different results.

Disclaimer

I would just like to state I have no financial interest in the above product whatsoever, or indeed in any "no-code" product. If anything I have a financial interest in companies not using no-code products as that supposedly safeguards my job and the jobs of all ABAP developers. In reality the latter is nonsense – no-code products have been around for decades, and all the "pro-code" developers are still employed.

Just this morning I saw on Twitter an article about how very soon no-one will have to do any manual testing anymore, testing will all be done by advanced AI programs. Yes. Of course it will. And even if that were true it most likely would not cause a headcount reduction. Starting with the very first SAP implementation I worked on in 1997 and right up until the current date the business case always includes a headcount reduction and yet after go-live you usually end up employing more people. The rule seems to be – the more you automate things the more people you need. This paradox probably has a name, but I don't know what it is. In one of our sites in Australia we "replaced" some of our laboratory staff with a giant robot, but when I went to see the robot in action I noticed no-one had actually been laid off, if anything there was an extra position that was created – someone had to watch the robot all day and when any of its arms got stuck they had to hit them with the "robot stick".

My Take

These products clearly work.
In the demo I just wrote about and what I saw with the UIPATH RPA product you can clearly build/change something "without one line of code" very quickly indeed. Also, there is clearly a market for this sort of thing. The billion million dollar question is – will the "new generation" of "no-code" products revolutionise development in the way that all the previous generations promised to do? To recap – the inventor of "SAP AppGyver" promises a 95% reduction in the number of developers you need. The "Insight SAP" people claim a more modest 15% reduction in work hours based on statistics from actual clients.

Either way I am not going to be losing any sleep. Companies will no doubt buy these products and no doubt get a benefit from them. But ABAP is rather like COBOL and Cockroaches and Excel and the SAP GUI. No matter how many atom bombs you drop on any of those they crawl out from underneath the mushroom cloud and keep on going.

Cheersy Cheers

Paul

Hahaha, that's the beauty of your titles: I KNEW it was you already from the notification, before reading the name 😀 Yup, simplification comes along from time to time but in general it seems the tradeoff is between power and convenience: if you want to be able to do everything, you'll typically have to do everything, most of the time. POWER TO THE PEOPLE!

A few random thoughts. Back in the early nineties we had a no-code solution that generated COBOL. Much fuss and noise. "No more developers". Yet two years later, there they were. The thing is, as soon as you add a loop or a conditional - you're back to programming. And a non-programmer (or a bad programmer - which is essentially the same thing) will screw things up. Or at least create an unmaintainable monster.

Back in the mid 90s, I was going through some processes with a FICO consultant. She explained that she ran one (Z) report, downloaded the results, then ran another (Z) report and copy-pasted the data in the results into the selection screen.
I was able to explain that as both are Z reports, I could simply add the functionality to call the second one directly from the first. "Oh, can you do that?". Ooo.

SAP Data Services... began life as Actaworks then got bought out by BO (a business that didn't stink, apparently) and then by SAP. Generated Z programs in production. I had a lot of fun (=$$$$) dealing with these for a client, as they were so slow. I also pointed out that the RFC Function Modules that generated the code were somewhat insecure and could be used to inject any code into the production server, which concerned the security team so much that they enforced the Dev->Test->Production process for any generated reports. There were other security issues as well (like the DS code was in the Z space instead of a /namespace/ space). I and someone who used to be reasonably well known on this site managed to highlight these to some senior bods at SAP and get them addressed. I'm pretty sure that we were on the DS team's list as the first against the wall when the revolution comes.

And then, back in 2016, while in Barcelona for TechEd, I did a workshop on developing Fiori Apps. The trainers stated "With Fiori Apps, you see, there's no need to develop anything in ABAP!". I took issue with this and explained I'd just come from a workshop entitled "Developing Fiori Apps with ABAP". Plus ça change... 😀

Hi Matthew, I discovered computer programming in 1978, when I was 15 yo, and it was Z80 assembler. I graduated as an engineer in computing science in 1986 after having learned programming in BASIC, Pascal, Fortran, COBOL, Lisp, Prolog and finally C. The only programming language I coded programs with during the beginning of my career, 1986-1992, was C. So I know a bit about what is programming and what is a programming language. And now I'm working at SAP to promote low-code/no-code, because it is the direction of the IT industry!

I started programming a little later. ZX80, when I was 11.
A year later got a ZX81 and soon after that was programming in Z80A. Loved it. I guess you always remember your first love... I can't remember how many programming languages I've been paid to program in since then (and others I've just learned for fun), but it's double digits. Nowadays, it's just ABAP and Java. There are basically three programming languages: procedural, object oriented, functional. Or maybe four with Clojure, Lisp etc.

I'm sure programming will become lovely with visual design. But it has been tried since those first "the pretty diagram compiles to COBOL" days, and never really caught on. I'm cynical because I've seen this stuff again and again - and because management will always buy on the premise that "it'll reduce IT costs", which it never does*. I'm always open to new ideas though. Perhaps this time you really have designed a better mousetrap.

Interestingly, I'm currently developing a plug-in for ADT. The e4 framework does a lot of the coding for you, via forms and configuration - leaving you just to do the interesting bits. I guess that's moving towards low-code. But it certainly isn't for Citizen Smith! One thing I have learned over the years in the IT industry is that, essentially, not much changes. In the science of computer programming however, well that's quite different. (Btw, in the notifications it shows the first sentence you originally wrote. Your secret is safe with me.)

* The best way to reduce IT costs is to employ really really good people and keep them.

Sorry, I'm already taken.

Hi Paul. Coming from the AppGyver team, need to correct you here since this is definitely not what Marko promises, and none of us are aiming to put developers out of work. It's quite the opposite - that developers will have tools available that let them build and focus on more exciting things beyond syntax. There's always a need for programming and pro-developers, which is why AppGyver's approach to no-code is through visual programming.
-Esmeé

I totally agree. As soon as you have conditions and loops -> you need a programmer!

OK, in that case what does "solutions being worked on by teams of 20, can now be placed in the hands of one person" mean? That is a direct quote from his blog. Probably I am misinterpreting that quote in some way, but at first glance it seems to indicate that a solution which previously took 20 people to develop can now be done by one person. Cheersy Cheers Paul

I don't want to speak for Marko Lehtimäki, but I think when we're talking about these tasks it's not necessarily the large complex projects - these will of course still require the involvement of pro developers. For smaller processes though, the "business user" who faces a certain problem every day can certainly create an app or automation to solve that, under the governance of IT. I'd also add that since the beginning of SAP AppGyver, we never wanted to take the "programming" out of no-code - but with no-code, teach everyone to become a (visual) programmer. The journey with non-tech users is that they often start off with simple apps, and then gradually dive deeper into the tool to more complex tasks. Just out of curiosity, have you tried out AppGyver yet? It would be interesting to hear your perspective there once you've had some experiences with the tool.

Hi Paul, FYI, and with all my respect, saying "the RPA company that SAP bought and rebranded" does not reflect the truth at all, and is misleading the reader about what SAP Intelligent RPA really was. I am from Contextor, "the company SAP bought" end of 2018, and I can tell you that SAP Intelligent RPA is (or was, as it is now deprecated) not at all a rebranding of what our product was in 2018: Contextor features and building blocks have been integrated into a brand new product SAP started developing months before acquiring Contextor, and this new product, SAP Intelligent RPA, has been launched at SAPPHIRE NOW 2019.
In a few words: what Contextor brought was a full set of connectors to automate on top of non-SAP stuff like Microsoft Excel, web browsers, legacy systems such as IBM mainframes and AS/400 through terminal emulators, Oracle etc. We also brought a Desktop Studio for bot design, and this studio has been completely recreated to offer a Cloud Studio, launched in version 2.0 at SAP TechEd 2020. And you might have seen that we recently launched SAP Process Automation, a broader tool made out of SAP Intelligent RPA (RPA, of course) and SAP Workflow Management (a BPM tool), with embedded AI capabilities such as AI Business Services to better extract data out of documents, and of course with a low-code/no-code experience for Citizen Developers, as this product is part of SAP's LCNC portfolio. Do not be afraid, Paul: attend our LCNC sessions at SAP Sapphire 2022 and give SAP Process Automation a try, you might be positively surprised.

Juergen Mueller, CTO of SAP, just shared his opinion on low-code/no-code: "Every Company Is A Technology Company". On March 18, he explained that "Low-Code/No-Code Development and Citizen Automation are the Future of Enterprise Resilience".

Back in 1997 our Software Engineering professor made the prophecy that software gets standardized by the minute and software development will become obsolete. Fast forward 25 years: nothing changed. Did you ever adapt variant configuration to the commerce cloud? Thank god if you get spared... I think my job as a developer is less in danger than that of managers promoting the miracles of no-code tools.

I experimented with iRPA Cloud Desktop for a proof of concept and I felt tortured. Since the moment you drag and drop a loop or a condition block and understand what you are doing, you become more of a developer than a "citizen". Also, the time comes, not too late, when to achieve a certain requirement it is not enough to combine a set of blocks and you end up needing to write some code.
To integrate that code with the non-code part is not the easiest task even if you are a skilled code developer. Does anyone believe that a diagram with more than 20 blocks is more intelligible than the equivalent in code? Renaming elements, moving pieces of code from one place to another, in other words refactoring, is a constant in development. Does anyone deny that this task in a diagram-like tool is an absolute nightmare? WRITING CODE IS THE LEAST OF A DEVELOPER'S PROBLEMS.

Exactly. I'm intrigued how concepts such as clean coding, SOLID etc. translate into these tools. Profoundly badly, I'd imagine. The citizen developer must be assumed to be a person without principles.

As a programmer and the creator of InsightSAP, I can tell you that by implementing the system I am now able to focus my efforts on the more complicated and interesting tasks. It doesn't eliminate the need for my programming skills but it does free resources and allows us to provide better support to our business end-users.

Give it a few years and some of those reports that the citizens have created will need to be rewritten in a more appropriate platform - probably ABAP. And it will be painful and expensive. The problem with these citizens' tools is that they are simultaneously too powerful and too limited. They're powerful enough that non-developers can screw up royally, and limited enough that they can't do everything (easily) the citizens want to write. Maybe this time it'll be a better mousetrap and actually fulfil the desires - but forgive me if I'm highly skeptical.

I've been in this business for over 30 years... there's not that many new ideas. Even HANA was touted as a new thing. "In memory database!" I first encountered such a beast back in 1994 (when the 2GB of memory required was really rather expensive!). Also bear in mind that creating something is 5% of the software lifecycle. Bug fixes + enhancements are 95% of the software lifecycle.
I am sure all these no-code solutions have automated unit tests when you make changes. I am also sure 99.9% of ABAP code does not have such tests but mine does, ha ha ha, ah ha ha ha. As Matthew says, maybe this time - after 50 years - the problem has finally been cracked. If so, great.

The above shows how to do the UI5 equivalent of the screen painter - in AppGyver. I am not so sure there is an equivalent in the general Eclipse / VS Code / BTP way of building screens.

Cheersy Cheers Paul
https://blogs.sap.com/2022/05/01/colonel-sanders-on-being-a-no-code-vegan/
How do I multiply items within a CSV

Hi, I am trying to import a CSV file then find the product of all of the numbers within the file. The code I have used so far is:

import csv
mylist = list(csv.reader(open(DATA+'mydata.csv','rb'),dialect='excel'))
from numpy import prod
prod(mylist)

However, this doesn't work, I receive the following error "TypeError: cannot perform reduce with flexible type". I think this is because of the format of how mylist is created. This list data comes out in this format:

[['1'],['2'],['3']]

How do I coerce it into the format (1,2,3)?
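One way to do this (a sketch, not part of the original question): csv.reader returns each row as a list of strings, so the nested list needs to be flattened and its string cells converted to numbers before taking the product. The hard-coded rows below stand in for the result of reading mydata.csv.

```python
# Stand-in for: rows = list(csv.reader(open('mydata.csv')))
rows = [['1'], ['2'], ['3']]

# Flatten the rows-of-cells structure and convert each string cell to a number
values = [float(cell) for row in rows for cell in row]

# Multiply everything together
product = 1.0
for v in values:
    product *= v

print(values)   # [1.0, 2.0, 3.0]
print(product)  # 6.0
```

Once the cells are plain floats, numpy's prod(values) would also work; the "cannot perform reduce with flexible type" error comes from trying to reduce over strings.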
https://ask.sagemath.org/question/10243/how-do-i-multiply-items-within-a-csv/?answer=15086
// $Id: DjVmDoc.h,v 1.12 2007/05/19 03:07:33 leonb Exp $
// $Name: release_3_5_22 $

#ifndef _DJVMDOC_H
#define _DJVMDOC_H
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#if NEED_GNUG_PRAGMAS
# pragma interface
#endif

#include "DjVmDir.h"

#ifdef HAVE_NAMESPACES
namespace DJVU {
# ifdef NOT_DEFINED // Just to fool emacs c++ mode
}
# endif
#endif

class ByteStream;
class DataPool;
class GURL;
class GUTF8String;
class DjVmNav;

/** @name DjVmDoc.h
    Files #"DjVmDoc.h"# and #"DjVmDoc.cpp"# contain implementation of the
    \Ref{DjVmDoc} class used to read and write new DjVu multipage documents.
    @memo DjVu multipage documents reader/writer.
    @author Andrei Erofeev <eaf@geocities.com>
    @version #$Id: DjVmDoc.h,v 1.12 2007/05/19 03:07:33 leonb Exp $# */
//@{

/** Read/Write DjVu multipage documents.

    The "new" DjVu multipage documents can be of two types: {\em bundled} and
    {\em indirect}. In the first case all pages are packed into one file,
    which is very like an archive internally. In the second case every page
    is stored in a separate file. Plus there can be other components,
    included into one or more pages, which also go into separate files. In
    addition to pages and components, in the case of the {\em indirect} format
    there is one more top-level file with the document directory (see
    \Ref{DjVmDir}), which is basically an index file containing the list of
    all files composing the document.

    This class can read documents of both formats and can save them under
    any format. It is therefore ideal for converting between {\em bundled}
    and {\em indirect} formats. It cannot be used however for reading
    obsolete formats. The best way to convert obsolete formats consists in
    reading them with the \Ref{DjVuDocument} class and saving them using
    \Ref{DjVuDocument::write} or \Ref{DjVuDocument::expand}.

    This class can also be used to create and modify multipage documents at
    the low level without decoding every page or component (See
    \Ref{insert_file}() and \Ref{delete_file}()).
*/
class DJVUAPI DjVmDoc : public GPEnabled
{
protected:
   // Internal function.
   DjVmDoc(void);
   void init(void);
public:
   /// Creator
   static GP<DjVmDoc> create(void);
   /** Inserts a file into the document.
       @param data ByteStream containing the file data. */
   void insert_file(
      ByteStream &data, DjVmDir::File::FILE_TYPE file_type,
      const GUTF8String &name, const GUTF8String &id,
      const GUTF8String &title=GUTF8String(), int pos=-1 );
   /** Inserts a file into the document.
       @param pool Data pool containing the file data. */
   void insert_file(
      const GP<DataPool> &pool, DjVmDir::File::FILE_TYPE file_type,
      const GUTF8String &name, const GUTF8String &id,
      const GUTF8String &title=GUTF8String(), int pos=-1 );
   /** Inserts a file described by \Ref{DjVmDir::File} structure with data
       #data# at position #pos#. If #pos# is negative, the file will be
       appended to the document. Otherwise it will be inserted at position
       #pos#. */
   void insert_file(const GP<DjVmDir::File> & f, GP<DataPool> data, int pos=-1);
   /** Removes file with the specified #id# from the document. Every file
       inside a new DjVu multipage document has its unique ID (refer to
       \Ref{DjVmDir} for details), which is passed to this function. */
   void delete_file(const GUTF8String &id);
   /** Set the bookmarks */
   void set_djvm_nav(GP<DjVmNav> n);
   /** Returns the directory of the DjVm document (the one which will be
       encoded into #DJVM# chunk of the top-level file or the bundle). */
   GP<DjVmDir> get_djvm_dir(void);
   /** Returns contents of file with ID #id# from the document. Please
       refer to \Ref{DjVmDir} for the explanation of what IDs mean. */
   GP<DataPool> get_data(const GUTF8String &id) const;
   /** Reading routines */
   //@{
   /** Reads contents of a {\em bundled} multipage DjVu document from
       the stream. */
   void read(ByteStream & str);
   /** Reads contents of a {\em bundled} multipage DjVu document from
       the \Ref{DataPool}. */
   void read(const GP<DataPool> & data_pool);
   /** Reads the DjVu multipage document in either {\em bundled} or
       {\em indirect} format.
       {\bf Note:} For {\em bundled} documents the file is not read into
       memory. We just open it and access data directly there. Thus you
       should not modify the file contents.

       @param name For {\em bundled} documents this is the name of the
       document. For {\em indirect} documents this is the name of the
       top-level file of the document (containing the \Ref{DjVmDir} with
       the list of all files). The rest of the files are expected to be
       in the same directory and will be read by this function as well. */
   void read(const GURL &url);
   //@}
   /** Writing routines */
   //@{
   /** Writes the multipage DjVu document in the {\em bundled} format into
       the stream. */
   void write(const GP<ByteStream> &str);
   /** Writes the multipage DjVu document in the {\em bundled} format into
       the stream, reserving any of the specified names. */
   void write(const GP<ByteStream> &str,
              const GMap<GUTF8String,void *>& reserved);
   /** Stored index (top-level) file of the DjVu document in the
       {\em indirect} format into the specified stream. */
   void write_index(const GP<ByteStream> &str);
   /** Writes the multipage DjVu document in the {\em indirect} format
       into the given directory. Every page and included file will be
       stored as a separate file. Besides, one top-level file with the
       document directory (named #idx_name#) will be created unless
       #idx_name# is empty.
       @param dir_name Name of the directory where files should be created
       @param idx_name Name of the top-level file with the \Ref{DjVmDir}
       with the list of files composing the given document. If empty,
       the file will not be created. */
   void expand(const GURL &codebase, const GUTF8String &idx_name);
   /** Writes an individual file, and all included files. INCL chunks
       will be remapped as appropriate. */
   void save_page(const GURL &codebase, const DjVmDir::File &file) const;
   /** Writes an individual file if not mapped, and all included files.
       INCL chunks will be remapped as appropriate. All pages saved are
       added to the #incl# map.
       */
   void save_page(const GURL &codebase, const DjVmDir::File &file,
                  GMap<GUTF8String,GUTF8String> &incl) const;
   /** Writes an individual file specified, remapping INCL chunks as
       appropriate. Included files will not be saved. */
   void save_file(const GURL &codebase, const DjVmDir::File &file) const;
   /** Writes the specified file from the given #pool#. */
   GUTF8String save_file(const GURL &codebase, const DjVmDir::File &file,
                         GMap<GUTF8String,GUTF8String> &incl,
                         const GP<DataPool> &pool) const;
   //@}
private:
   void save_file(const GURL &codebase, const DjVmDir::File &file,
                  GMap<GUTF8String,GUTF8String> *incl) const;
   GP<DjVmDir> dir;
   GP<DjVmNav> nav;
   GPMap<GUTF8String, DataPool> data;
private: // dummy stuff
   static void write(ByteStream *);
   static void write_index(ByteStream *);
};

inline GP<DjVmDir>
DjVmDoc::get_djvm_dir(void)
{
   return dir;
}

//@}

#ifdef HAVE_NAMESPACES
}
# ifndef NOT_USING_DJVU_NAMESPACE
using namespace DJVU;
# endif
#endif
#endif
http://djvulibre.sourcearchive.com/documentation/3.5.22/DjVmDoc_8h-source.html
The Ljung-Box test is a statistical test that checks if autocorrelation exists in a time series. It uses the following null and alternative hypotheses: H0: the residuals are independently distributed; HA: the residuals are not independently distributed (they exhibit serial correlation). This tutorial explains how to perform a Ljung-Box test in Python.

Example: Ljung-Box Test in Python

To perform the Ljung-Box test on a data series in Python, we can use the acorr_ljungbox() function from the statsmodels library, which uses the following syntax:

acorr_ljungbox(x, lags=None)

where:

- x: The data series
- lags: Number of lags to test

This function returns a test statistic and a corresponding p-value. If the p-value is less than some threshold (e.g. α = .05), you can reject the null hypothesis and conclude that the residuals are not independently distributed.

The following code shows how to use this function to perform the Ljung-Box test on the built-in statsmodels dataset called "SUNACTIVITY":

import statsmodels.api as sm

#load data series
data = sm.datasets.sunspots.load_pandas().data

#view first five rows of data series
data[:5]

     YEAR  SUNACTIVITY
0  1700.0          5.0
1  1701.0         11.0
2  1702.0         16.0
3  1703.0         23.0
4  1704.0         36.0

#fit ARMA model to dataset
res = sm.tsa.ARMA(data["SUNACTIVITY"], (1,1)).fit(disp=-1)

(Note that the ARMA class has been removed in newer versions of statsmodels; there, an equivalent model can be fit with sm.tsa.ARIMA.)

#perform Ljung-Box test on residuals with lag=5
sm.stats.acorr_ljungbox(res.resid, lags=[5], return_df=True)

   lb_stat      lb_pvalue
5  107.86488    1.157710e-21

The test statistic of the test is 107.86488 and the p-value of the test is 1.157710e-21, which is much less than 0.05.
Thus, we reject the null hypothesis of the test once again and conclude that the residuals are not independent. Depending on your particular situation you may choose a lower or higher value to use for the lag.
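To make the statistic itself concrete, the Ljung-Box Q value can also be computed by hand from the sample autocorrelations as Q = n(n+2) Σ_{k=1}^{h} r_k² / (n − k). The helper below is an illustrative sketch (its name is made up, and it is not part of the statsmodels API):

```python
import numpy as np

def ljung_box_stat(x, h):
    """Ljung-Box Q statistic over lags 1..h:
    Q = n * (n + 2) * sum_{k=1}^{h} r_k**2 / (n - k),
    where r_k is the lag-k sample autocorrelation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.dot(xc, xc)  # total sum of squared deviations
    q = 0.0
    for k in range(1, h + 1):
        r_k = np.dot(xc[:-k], xc[k:]) / denom  # lag-k autocorrelation
        q += r_k * r_k / (n - k)
    return n * (n + 2) * q

# A strongly trending series is heavily autocorrelated, so Q is large.
# Comparing Q against a chi-square distribution with h degrees of freedom
# is what turns the statistic into the p-value reported by acorr_ljungbox.
print(ljung_box_stat(np.arange(100), 5))
```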
https://www.statology.org/ljung-box-test-python/
The easiest place to start writing smart contracts in Solidity is within the online Remix IDE. Given it is an online IDE, no installation or development environment setup is required, you can navigate to the site and get started! Remix also provides very good tools for debugging, static analysis, and deployment all within the online environment. The source code used in this tutorial can be found here.

Before we get started, a quick reminder of what we will be building: a dApp which will allow users to issue bounties and pay out rewards to the users who fulfil them.

In Remix, create a new file by selecting the "+" icon in the upper left-hand corner. Name the file: Bounties.sol

In the first line of our Solidity Smart Contract, we tell the compiler which version of Solidity to use:

pragma solidity ^0.5.0;

This tells Solidity that the code can be compiled with Solidity compiler version 0.5.0 and above, up to version 0.6.0 (the ^ character limits the compiler version up to the next breaking change, being 0.6.0).

To create the contract class we add the following:

contract Bounties {
}

Next, we add a constructor so that our contract can be instantiated:

constructor() public {}

At this stage we have the basic skeleton of a smart contract, and we can now test that it compiles in the Remix IDE. Your Bounties.sol file should look like this:

pragma solidity ^0.5.0;

contract Bounties {
    constructor() public {}
}

In Remix, select the "Compile" tab in the top right-hand side of the screen, and start the compiler by selecting the "Start to Compile" option. If everything is ok, you should see a green label with the name of your contract: "Bounties", which indicates the compilation was successful.

Issuing a Bounty

Now that we have the basic skeleton of our smart contract, we can start adding functions; first we will tackle allowing a user to issue a bounty.

Declare state variables

What are state variables in solidity? A smart contract instance can maintain a state, which is kept in the storage area of the EVM.
This state consists of one or more variables of the solidity types. These state variables can only be modified via a function call invoked within a transaction. You can see a full list of solidity types in the solidity types documentation.

First, let's declare an enum which we'll use to keep track of a bounty's state:

enum BountyStatus { CREATED, ACCEPTED, CANCELLED }

Next, we define a struct which defines the data held about an issued bounty:

struct Bounty {
    address issuer;
    uint deadline;
    string data;
    BountyStatus status;
    uint amount;
}

What is a struct? Structs allow us to define custom composite types which allow us to aggregate/organise data.

Now, let's define an array where we will store data about each issued bounty:

Bounty[] public bounties;

Issue Bounty Function

Now that we have declared our state variables, we can add functions to allow users to interact with our smart contract:

function issueBounty(
    string memory _data,
    uint64 _deadline
)
    public
    payable
    hasValue()
    validateDeadline(_deadline)
    returns (uint)
{
    bounties.push(Bounty(msg.sender, _deadline, _data, BountyStatus.CREATED, msg.value));
    return (bounties.length - 1);
}

The function issueBounty receives a string memory _data and an integer _deadline as arguments (the requirements as a string, and the deadline as a unix timestamp).

As of Solidity version 0.5.0, explicit data location for all variables of struct, array or mapping types is now mandatory. Read more about Solidity 0.5.0 breaking changes here.

Since string is an array of bytes we must explicitly specify the data location of the argument _data. We specify memory since we do not wish to store this data when the transaction has been completed.
Solidity requires that you define the return type(s). We specify:

returns (uint)

which means we are returning a uint (the array index of the Bounty as the ID).

We define the visibility of this function as public. Read more about solidity function visibility.

In order to send ETH to our contract we need to add the payable keyword to our function. Without this payable keyword the contract will reject all attempts to send ETH to it via this function.

The body of our function just has two lines:

bounties.push(Bounty(msg.sender, _deadline, _data, BountyStatus.CREATED, msg.value));

First we insert a new Bounty struct into our bounties array, setting the BountyStatus to CREATED. In Solidity, msg.sender is automatically set as the address of the sender, and msg.value is set to the amount of Weis (1 ETH = 1000000000000000000 Weis). So we set the msg.sender as the issuer and the msg.value as the bounty amount.

return (bounties.length - 1);

Validation with Modifiers

Modifiers in solidity allow you to attach additional pieces of code to be run before or after the execution of a function. It is common practice in solidity to use modifiers to perform argument validation for solidity functions.

Validate Deadline

validateDeadline(_deadline) is added to ensure the deadline argument is in the future; it should not be possible for a user to issue a bounty with a deadline in the past.

modifier validateDeadline(uint _newDeadline) {
    require(_newDeadline > now);
    _;
}

We use the modifier keyword to declare a modifier; modifiers, like functions, can receive arguments of their own. The position of the _; symbol is key within a modifier. The body of the function being modified is inserted where this symbol appears. So the validateDeadline modifier essentially says, execute this line:

require(_newDeadline > now);

then execute the main function.

For validation, the require keyword allows for conditionals to be set; if they are not met, the execution is halted, reverted, and remaining gas returned to the user.
In general require should be used to validate user inputs, responses from external contracts, and state conditions prior to execution. You can read more about assert, require, and revert here.

So the modifier validateDeadline reads as follows: if the deadline > now, continue and execute the function body, else revert and refund remaining gas to the caller.

Has Value

hasValue() is added to ensure msg.value is a non-zero value. Even though, as previously discussed, the payable keyword ensures msg.value is set, it can still be sent as zero. Similar to validateDeadline, we use require to ensure the msg.value input is valid, e.g. > 0:

modifier hasValue() {
    require(msg.value > 0);
    _;
}

payable is actually a pre-defined modifier in solidity, and validates that ETH is sent when calling a function which requires the smart contract to be funded. You can read more about how modifiers can be used to restrict access and guard against incorrect usage in the solidity documentation.

Issue Bounty Event

It is best practice when modifying state in solidity to emit an event. Events allow blockchain clients to subscribe to state changes and perform actions based on those changes. For example, a user interface showing a list of transfers in and out of an account, for example etherscan, could listen to a "transfer" event to update the user on the latest transfers in and out of an account. Read more about solidity events here.

Since when issuing a bounty we change the state of our Bounties.sol contract, we will emit a BountyIssued event.
First, we need to declare our event:

event BountyIssued(uint bounty_id, address issuer, uint amount, string data);

Our BountyIssued event emits the following information about the bounty data stored:

- bountyId: The id of the issued bounty
- issuer: The account of the user who issued the bounty
- amount: The amount in Weis allocated to the bounty
- data: The requirements of the bounty as a string

Then in our issueBounty function, we need to emit the BountyIssued event:

bounties.push(Bounty(msg.sender, _deadline, _data, BountyStatus.CREATED, msg.value));
emit BountyIssued(bounties.length - 1, msg.sender, msg.value, _data);
return (bounties.length - 1);

Now that we have added our issueBounty function, our Bounties.sol file should look like the following:

pragma solidity ^0.5.0;

contract Bounties {

    enum BountyStatus { CREATED, ACCEPTED, CANCELLED }

    struct Bounty {
        address issuer;
        uint deadline;
        string data;
        BountyStatus status;
        uint amount;
    }

    Bounty[] public bounties;

    event BountyIssued(uint bounty_id, address issuer, uint amount, string data);

    constructor() public {}

    function issueBounty(
        string memory _data,
        uint64 _deadline
    )
        public
        payable
        hasValue()
        validateDeadline(_deadline)
        returns (uint)
    {
        bounties.push(Bounty(msg.sender, _deadline, _data, BountyStatus.CREATED, msg.value));
        emit BountyIssued(bounties.length - 1, msg.sender, msg.value, _data);
        return (bounties.length - 1);
    }

    modifier hasValue() {
        require(msg.value > 0);
        _;
    }

    modifier validateDeadline(uint _newDeadline) {
        require(_newDeadline > now);
        _;
    }
}

Deploy & interact in Remix

Now that we have our smart contract, we can deploy it to a local development blockchain running in the RemixIDE (browser), and test our issueBounty function. First, let's compile our Bounties.sol contract to ensure we have no errors. In Remix, select the "Compile" tab in the top right hand side of the screen, and start the compiler by selecting the "Start to Compile" option.

You will notice a few static analysis warnings in the IDE above the compilation result. Remix runs a set of static analysers to help avoid known security vulnerabilities and follow best practices. You can read more about Remix Analysis here. We can ignore these warnings for now and move on to deploying and interacting with our smart contract.

In Remix, select the Run tab in the top right hand side of the screen. Within the Environment dropdown section, select the Javascript VM option. The "JavaScript VM" option runs a Javascript VM blockchain within the browser; this allows you to deploy and send transactions to a blockchain within the RemixIDE in the browser. This is particularly useful for prototyping, especially since no dependencies are required to be installed locally.
You can read more about running transactions within Remix here.

Within the Run tab in Remix, with the JavaScript VM environment option selected, click the Deploy button. This executes a transaction to deploy the contract to the local blockchain environment running in the browser. We'll talk more about contract creation transactions later on in the series.

Within the RemixIDE console, which is located directly below the editor panel, you will see the log output of the contract creation transaction. The "green" tick indicates that the transaction itself was successful.

Within the "Run" tab in Remix, we can now select our deployed Bounties contract so that we can invoke the issueBounty function. Under the "Deployed Contracts" section we see a list of functions which can be invoked on the deployed smart contract. Here we have the following options:

- issueBounty: the colour of this button ("pink") indicates that invocation would result in a transaction
- bounties: the colour of this button ("blue") indicates that invocation would result in a call

To invoke the issueBounty function, we need to first set the arguments in the "issueBounty" input box. Set the string _data argument to some string "some requirements" and set the uint64 _deadline argument to a unix timestamp in the future e.g "1691452800" August 8th 2023.

Since our issueBounty function is payable we must ensure msg.value is set; we do this by setting the values at the top of the "Run" tab within the RemixIDE. Here we have the following options:

- Environment: As previously alluded to, sets the blockchain environment to interact with.
- Account: Allows the selection of an account to send the transaction from, and also to see the amount of ETH available in each account.
- Gas Limit: Set the max amount of gas to be used by execution of the transaction
- Value: The amount to send in msg.value; here you can also select the denomination in "Wei, Gwei, Finney and Ether"

So go ahead and set "Value" to some number > 0, but less than the current amount available in the selected account. In this example we'll set it to 1 ETH.

Clicking the "issueBounty" button in the "Deployed Contracts" section, within the "Run" tab, will send a transaction invoking the issueBounty function on the deployed Bounties contract. Within the console you will find the log output of the issueBounty transaction. The "Green" tick indicates the transaction was successful.

The decoded output gives you the return value of the function call, here it is 0. This should be the index of our "Bounty" data within the bounties array in our smart contract data store. We can double check the storage was correct by invoking the "bounties" method in the "Deployed Contracts" section. Set the uint256 argument of the bounties function to 0 and click the "blue" bounties button. Here we confirm that the data inputs for our issued bounty are retrieved correctly from the "bounties" array within the deployed smart contract's storage.

Try it yourself

Now that you have seen how to add a function to issue a bounty, try adding the following functions to the Bounties contract:

- fulfilBounty(uint _bountyId, string _data): This function should store a fulfilment record attached to the given bounty. The msg.sender should be recorded as the fulfiller.
- acceptFulfilment(uint _bountyId, uint _fulfilmentId): This function should accept the given fulfilment, if a record of it exists against the given bounty. It should then pay the bounty to the fulfiller.
- cancelBounty(uint _bountyId): This function should cancel the bounty, if it has not already been accepted, and send the funds back to the issuer.
https://kauri.io/article/124b7db1d0cf4f47b414f8b13c9d66e2/v9/remix-ide-your-first-smart-contract
CC-MAIN-2019-30
refinedweb
2,227
53.51
If a collection of tokens serves as a list, consider making it look like a list. As an example I'll use Ruby's class method attr_accessor, which accepts any number of arguments. Unstacked: attr_accessor :arity, :name, :original_name, :owner, :parameters, :receiver, :source_location, :super_method Stacked: attr_accessor \ :arity, :name, :original_name, :owner, :parameters, :receiver, :source_location, :super_method Much more readable, no? Note: For a user-defined method that accept many arguments, stacking may not be the best option. Consider instead whether such a method should accept an object. Nevertheless: def initialize( name:, address:, phone:, date_of_birth:, debit_card_number:, debit_card_pin:, ) end Another stack-worthy notation: %w/ apple banana orange peach persimmon pineapple scuppernong tomato / And of course pretty much everyone already does this: { :cabbages => 0, :cucumbers => 2, :peas => 200, :potatoes => 4, :radishes => 8, :sweet_potatoes => 0, } Others? In Ruby? Or in another language? Discussion (3) Oh yeah, stacking is great! I added it to my companies style guide for Python :). An useful tip to point out, use trailing commas! This way your Git diffs only highlight the lines you actually changed. Thanks, Dylan. But be careful not to put a comma on the last line of a stack of attr_accessormethod names. Causes major problems for the code that follows. I’ll keep that in mind if I ever write in Ruby. It keeps floating around my list of languages to try but it seems to solve mostly the same problems as Python so I haven’t found a good reason to play with it yet.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/burdettelamar/readable-code-stacking-tokens-4f73
CC-MAIN-2022-27
refinedweb
247
55.13
> Would you mind to include into Bionic? this commit claims to fix a problem introduced by commit 0722b359342d2a9f9e0d453875624387a0ba1be2, but that commit isn't in Bionic, unless I'm missing something. Can you provide steps for reproducing your problem? ** Changed in: systemd (Ubuntu Bionic) Status: New => Incomplete -- You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to systemd in Ubuntu. Title: namespace: make MountFlags=shared work again Status in systemd package in Ubuntu: Fix Released Status in systemd source package in Bionic: Incomplete Bug description: Systemd in Bionic fails to handle MountFlags correctly. Would you mind to include into Bionic? This bug seriously affects Docker > 18.6, see the Docker release notes for 18.09.1 (- notes/#18091). To manage notifications about this bug go to: -- Mailing list: Post to : touch-packages@lists.launchpad.net Unsubscribe : More help :
https://www.mail-archive.com/touch-packages@lists.launchpad.net/msg285846.html
CC-MAIN-2021-31
refinedweb
146
57.77
Forums Dev Problem with multiple MSP inlets/outlets in C API Viewing 3 posts - 1 through 3 (of 3 total) Hi folks, I’m an MSP newbie and got stuck with a two-in two-out object. I’ve got weird interactions between the ins and outs. When printing out the pointer addresses of the ins and outs I saw they’re the same?! Here’s an excerpt from my code (just looking at the 64bit part): void *bl_slicer_new(t_symbol *s, long argc, t_atom *argv) { t_bl_slicer *x = (t_bl_slicer *)object_alloc(bl_slicer_class); if (x) { dsp_setup((t_pxobject *)x, 2); outlet_new((t_object*)x, "signal"); outlet_new((t_object*)x, "signal"); } return (x); } void bl_slicer_dsp64(t_bl_slicer *x, t_object *dsp64, short *count, double samplerate, long maxvectorsize, long flags) { post("my sample rate is: %f, count: %d", samplerate, *count); object_method(dsp64, gensym("dsp_add64"), x, bl_slicer_perform64, 0, NULL); } void bl_slicer_perform64(t_bl_slicer *x, t_object *dsp64, double **ins, long numins, double **outs, long numouts, long sampleframes, long flags, void *userparam) { t_double *inL = ins[0]; t_double *inR = ins[1]; t_double *outL = outs[0]; t_double *outR = outs[1]; post("ins: %d %d, outs: %d %d", ins[0], ins[1], outs[0], outs[1]); } Here’s the print out: ins: 315077024 315085280, outs: 315085280 315077024 You see that ins[0]==outs[1], and ins[1]==outs[0]. But why is this? I would assume the addresses to be different. Thanks for any help… Hi there, many threads about this: Best - Luigi Hey Luigi, thank you very much! I didn’t discover these threads when searching. Now it’s all clear :) Best, Boris C74 RSS Feed | © Copyright Cycling '74
1 Using the C++ interface (oscode)

1.1 Overview

This documentation illustrates how one can use oscode via its C++ interface. Usage of oscode involves defining an equation to solve, solving the equation, and extracting the solution and other statistics about the run. The next sections cover each of these. For a complete reference, see the C++ interface reference page, and for examples see the examples directory on GitHub.

1.2 Defining an equation

The equations oscode can be used to solve are of the form

    \(\ddot{x}(t) + 2\gamma(t)\dot{x}(t) + \omega^2(t)x(t) = 0,\)

where \(x(t)\), \(\gamma(t)\), and \(\omega(t)\) can be complex. We will call \(t\) the independent variable, \(x\) the dependent variable, \(\omega(t)\) the frequency term, and \(\gamma(t)\) the friction or first-derivative term. Defining an equation amounts to giving the frequency \(\omega(t)\) and the first-derivative term \(\gamma(t)\). Both can either be given as functions explicitly, or as sequences evaluated on a grid of \(t\).

1.2.1 \(\omega\) and \(\gamma\) as explicit functions

If \(\omega\) and \(\gamma\) are closed-form functions of time, define them as

    #include "solver.hpp" // de_system, Solution defined in here

    std::complex<double> g(double t){
        return 0.0;
    };

    std::complex<double> w(double t){
        return std::pow(9999,0.5)/(1.0 + t*t);
    };

Then feed them to the solver via the de_system class:

    de_system sys(&w, &g);
    Solution solution(sys, ...); // other arguments left out

1.2.2 \(\omega\) and \(\gamma\) as time series

Sometimes \(\omega\) and \(\gamma\) will be results of numerical integration, and they will have no closed-form functional form. In this case, they can be specified on a grid, and oscode will perform linear interpolation on the given grid to find their values at any timepoint. Because of this, some important things to note are:

- oscode will assume the grid of timepoints on which \(\omega\) and \(\gamma\) are given is not evenly spaced.
- If the grids are evenly sampled, set even=true in the call to de_system(); this will speed linear interpolation up significantly.
- The timepoints grid needs to be monotonically increasing.
- The timepoints grid needs to include the range of integration (\(t_i\), \(t_f\)).
- The grids for the timepoints, frequencies, and first-derivative terms have to be the same size.
- The speed/efficiency of the solver depends on how accurately it can carry out numerical integrals of the frequency and the first-derivative terms, therefore the grid fineness needs to be high enough. (Typically this means that linear interpolation gives an \(\omega(t)\) value that is accurate to 1 part in \(10^{6}\) or so.)
- If you want oscode to check whether the grids were sampled finely enough, set check_grid=true in the call to de_system().

To define the grids, use any array-like container which is contiguous in memory, e.g. an Eigen::Vector, std::array, std::vector:

    #include "solver.hpp" // de_system, Solution defined in here

    // Create a fine grid of timepoints and
    // a grid of values for w, g
    int N = 10000;
    std::vector<double> ts(N);
    std::vector<std::complex<double>> ws(N), gs(N);

    // Fill up the grids
    for(int i=0; i<N; i++){
        ts[i] = i;
        ws[i] = std::sqrt(i);
        gs[i] = 0.0;
    }

They can then be given to the solver by feeding a pointer to their underlying data to the de_system class:

    de_system sys(ts.data(), ws.data(), gs.data());
    Solution solution(sys, ...); // other arguments left out

Often \(\omega\) and \(\gamma\) are much easier to perform linear interpolation on once their natural log is taken. This is what the optional islogw and islogg arguments of the overloaded de_system::de_system() constructor are for:

    #include "solver.hpp" // de_system, Solution defined in here

    // Create a fine grid of timepoints and
    // a grid of values for w, g
    int N = 10000;
    std::vector<double> ts(N);
    std::vector<std::complex<double>> logws(N), gs(N); // Note the log!
    // Fill up the grids
    for(int i=0; i<N; i++){
        ts[i] = i;
        logws[i] = 0.5*i;
        gs[i] = 0.0; // Will not be logged
    }

    // We want to tell de_system that w has been taken natural log of, but g
    // hasn't. Therefore islogw=true, islogg=false:
    de_system sys(ts.data(), logws.data(), gs.data(), true, false);
    Solution solution(sys, ...); // other arguments left out

1.2.2.1 DIY interpolation

For some problems, linear interpolation of \(\omega\) and \(\gamma\) (or their natural logs) might simply not be enough. For example, the user could carry out cubic spline interpolation and feed \(\omega\) and \(\gamma\) as functions to de_system. Another example of wanting to do (linear) interpolation outside of oscode is when Solution.solve() is run in a loop, and for each iteration a large grid of \(\omega\) and \(\gamma\) is required, depending on some parameter. Instead of generating them over and over again, one could define them as functions, making use of some underlying vectors that are independent of the parameter we iterate over:

    // A, B, and C are large std::vectors, same for each run
    // k is a parameter, different for each run
    // the grid of timepoints w, g are defined on starts at tstart, and is
    // evenly spaced with a spacing tinc.
    // tstart, tinc, A, B, C defined here
    std::complex<double> g(double t){
        int i;
        i = int((t-tstart)/tinc);
        std::complex<double> g0 = 0.5*(k*k*A[i] + 3.0 - B[i] + C[i]*k);
        std::complex<double> g1 = 0.5*(k*k*A[i+1] + 3.0 - B[i+1] + C[i+1]*k);
        return (g0+(g1-g0)*(t-tstart-tinc*i)/tinc);
    };

1.3 Solving an equation

Once the equation to be solved has been defined as an instance of the de_system class, the following additional information is necessary to solve it:

- initial conditions, \(x(t_i)\) and \(\dot{x}(t_i)\),
- the range of integration, from \(t_i\) to \(t_f\),
- (optional) set of timepoints at which dense output is required,
- (optional) order of WKB approximation to use, order=3,
- (optional) relative tolerance, rtol=1e-4,
- (optional) absolute tolerance, atol=0.0,
- (optional) initial step, h_0=1,
- (optional) output file name, full_output="".

Note the following about the optional arguments:

- rtol and atol are tolerances on the local error. The global error in the solution is not guaranteed to stay below these values, but the error per step is. In the RK regime (not oscillatory solution), the global error will rise above the tolerance limits, but in the WKB regime, the global error usually stagnates.
- The initial step should be thought of as an initial estimate of what the first stepsize should be. The solver will determine the largest possible step within the given tolerance limit, and change h_0 if necessary.
- The full output of solve() will be written to the filename contained in full_output, if specified.

Here's an example to illustrate usage of all of the above variables:

    #include "solver.hpp" // de_system, Solution defined in here

    // Define the system
    de_system sys(...)
    // For args see previous examples

    // Necessary parameters:
    // initial conditions
    std::complex<double> x0 = std::complex<double>(1.0,1.0), dx0 = 0.0;
    // range of integration
    double ti = 1.0, tf = 100.0;

    // Optional parameters:
    // dense output will be required at the following points:
    int n = 1000;
    std::vector<double> t_eval(n);
    for(int i=0; i<n; i++){
        t_eval[i] = i/10.0;
    }
    // order of WKB approximation to use
    int order = 2;
    // tolerances
    double rtol = 2e-4, atol = 0.0;
    // initial step
    double h0 = 0.5;
    // write the solution to a file
    std::string outfile = "output.txt";

    Solution solution(sys, x0, dx0, ti, tf, t_eval.data(), order, rtol,
                      atol, h0, outfile);

    // Solve the equation:
    solution.solve();

Here, we've also called the solve() method of the Solution class to carry out the integration. Now all information about the solution is in solution (and written to output.txt).

1.4 Using the solution

Let's break down what solution contains (what Solution.solve() returns). An instance of a Solution object is returned with the following attributes:

- times [std::list of double]: timepoints at which the solution was determined. These are not supplied by the user; rather they are internal steps that the solver has taken. The list starts with \(t_i\) and ends with \(t_f\); these points are always guaranteed to be included.
- sol [std::list of std::complex<double>]: the solution at the timepoints specified in times.
- dsol [std::list of std::complex<double>]: first derivative of the solution at the timepoints specified in times.
- wkbs [std::list of int/bool]: type of step taken at each timepoint in times; 1 if the step was WKB, 0 if it was RK.
- ssteps [int]: total number of accepted steps.
- totsteps [int]: total number of attempted steps (accepted + rejected).
- wkbsteps [int]: total number of successful WKB steps.
- x_eval [std::list of std::complex<double>]: dense output, i.e.
the solution evaluated at the points specified in the t_eval optional argument.
- dx_eval [std::list of std::complex<double>]: dense output of the derivative of the solution, evaluated at the points specified in the t_eval optional argument.
On Wed, 6 Mar 2002, Conor MacNeill wrote:

> > Everything that worked before will continue to work in exactly the
> > same way, it is completely backward compatible.
> >
> > However if a SAX2 parser is present it'll allow namespaces to be used.
>
> I haven't had time to look at the code yet but how will the namespaces
> be used? Just curious.

At this moment the helper is not doing anything with the namespaces, since
the task registration can only deal with the tag name. I don't think it's a
good idea to decide on any particular use of namespaces for ant1.5 - just to
have them available and pass this info to the task factory (assuming the
task factory proposal is accepted). User code can do whatever it wants with
it.

BTW, I think Peter was right that the SAX1 helper should remain the default,
in order to have more feedback and experience with the SAX2 one.

Costin

--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org>
#include "nsITransport.h"

This function returns a proxy object for a transport event sink instance. The transport event sink will be called on the thread indicated by the given event target. Like events are automatically coalesced. This means that, for example, if the status value is the same from event to event, and the previous event has not yet been delivered, then only one event will be delivered. The progress reported will be that from the second event.

If aCoalesceAllEvents is true, then any undelivered event will be replaced with the next event if it arrives early enough. This option should be used cautiously since it can cause states to be effectively skipped. Coalescing events can help prevent a backlog of unprocessed transport events in the case that the target thread is overworked.

Definition at line 184 of file nsTransportUtils.cpp.

    {
        *result = new nsTransportEventSinkProxy(sink, target, coalesceAll);
        if (!*result)
            return NS_ERROR_OUT_OF_MEMORY;
        NS_ADDREF(*result);
        return NS_OK;
    }
Topic: startDaemon handler in groovy
2012-10-03T16:22:09Z | Updated on 2012-10-03T21:52:21Z

Added to events.config:

    /config/handlers += [{
        "events"  : "startDaemon",
        "handler" : "startTest.groovy"
    }]

Added to startTest.groovy:

    def onStartDaemon() {
        logger.INFO { "onStartDaemon" };
    }

I did not add the onStartDaemon() method to the startTestBindings.java file. Is this necessary for sMash to find the method at startup?

Re: startDaemon handler in groovy, 2012-10-03T21:52:21Z, by nilsbru (accepted answer)

User error: I did not see the INFO level messages in the log because the trace level for the package was set to OFF. Once I changed the trace spec, I saw the desired messages in the log, so I can confirm that my code ran at startup.
A bunch of quality refactorings and code fixes that are going to improve your C# development experience in Visual Studio and remove some common pain. See the change log for changes and road map.

- Add constructor and initialize field: Ctrl+. on a field and choose "Add constructor and initialize field".
- Add/remove braces: allows you to quickly add missing braces to single statements. It works both ways and allows you to remove braces if you prefer so.
- Type declaration fixes: Ctrl+. on a type declaration (either top level or nested) and choose one of the proposed options.
- Make field readonly: suggests declaring a field as readonly if it's possible.
- Generate GUID: just type nguid where you want the new GUID to be inserted and press TAB.
- Add initialized field: Ctrl+. on a constructor parameter and choose "Add initialized field".
- Initialize field in existing constructor: Ctrl+. on a field and choose "Initialize field in existing constructor".
- Locate in Solution Explorer: there is a standard command in Solution Explorer called "Sync with Active Document". People coming from ReSharper will appreciate its Shift+Alt+L equivalent. The command is available in the code editor either from the context menu or as a shortcut.
- Update constructor name: sometimes you copy code from another class into a new one and this quick fix allows you to update the constructor name.
- Namespace analysis: analyzes if a top-level namespace does not match the path of the file where it is declared or does not start with the project's default namespace. Note that the code fix currently does not update references.
- Xunit scaffolding: if you're a fan of Xunit data-driven tests this one's going to be a little time saver for you. You can scaffold MemberData as well as InlineData. If your InlineData contains acceptable parameters they will be respected unless the test method already defines parameters (in which case neither of the scaffolding refactorings will work). Note that this feature works with Xunit 2.x only.

All features can be individually enabled or disabled.

- Type and file name analysis: analyzes if a top-level type name does not match the name of the file where it is declared and displays a warning.
It also offers to either rename the type to match the file name or rename the file to match the type name.

Check out the contribution guidelines if you want to contribute to this project. For cloning and building this project yourself, make sure to install the Extensibility Tools 2015 extension for Visual Studio which enables some features used by this project.

License: MIT
10.1.1. How can I change the default timeout used by the waitForObject function (and for similar functions, like waitForObjectItem)?

The easiest solution is to create a custom function with the same name, but with the timeout you want. Here's how to create a version with a timeout of 5 seconds; it should go at the top of the test (before the main function), and before any calls to the function are made.

Python:

    import sys # Add this line

    def wrapSquishFunction(functionName, wrappedFunctionGetter): # Add this function
        module = sys.modules["squish"]
        setattr(module, functionName,
                wrappedFunctionGetter(getattr(module, functionName)))

    def myWaitForObject(waitForObjectFunction): # Add this function
        def wrappedFunction(objectOrName, timeout=5000):
            return waitForObjectFunction(objectOrName, timeout)
        return wrappedFunction

    def main():
        startApplication("addressbook")
        wrapSquishFunction("waitForObject", myWaitForObject) # Add this line
        # ...

JavaScript:

    originalWaitForObject = waitForObject
    waitForObject = function(obj) {
        return originalWaitForObject(obj, 5000)
    }

Tcl:

    rename waitForObject originalWaitForObject
    proc waitForObject {obj} {
        return [originalWaitForObject $obj 5000]
    }

The above approach works when AUTs are run automatically. For those started using the startApplication function, all this must be done after that function is called (since only then does all of the Squish API become available). See also, How to Modify Squish Functions (Section 5.20).

(We don't currently have a solution for Perl.)
QPushButton to create a round button or to make the button change shape if pressed, this will no pose a problem as Squish's emulation is purely concerned with user input in terms of keyboard and mouse input independent of the behavior of the underlying C++ functions that are called. QPushButton In many test scenarios test engineers are only concerned with standard visible properties, like text or color, and these work out of the box for custom QObject subclasses that inherit such properties. In addition, as of Squish 4, custom QObject subclasses' properties and slots (marked via the Q_PROPERTY and slots macros) are automatically detected. This means that their properties will show up in the Spy and their custom slots (provided the signatures use standard Qt types) are callable. text color QObject Q_PROPERTY slots So in almost all cases Squish's automatic detection is sufficient. Squish 3 used a less dynamic approach using the squishidl (Section 7.4.5) tool. The tool scans C and C++ header files and provides script bindings for the functions and classes it parses. When building Squish we also use this tool to create script bindings to the Qt libraries themselves. While the tool is powerful, it can be a bit involved to set up as you must perform some build steps with the right include search paths and pre-processor macros set up as when building your application. If there is a specific object that is causing problems in a planned test case contact froglogic technical support so that we can help solve the problem. 10.1.3. Why does text entered with the type function end up garbled in a Qt application? And why does a clickButton call have no effect? type clickButton The Qt 4.3 series has a bug that mixes up the order of posted events in the internal queue. In some rare cases this breaks text input or mouse click sequences. This problem was scheduled to be solved in Qt 4.4 and possibly also in Qt 4.3.4. A source code patch is available on request. 10.1.4. 
How do I Build Squish and Qt on IRIX with gcc? To build Squish, Qt must be linked as a shared library. When compiling Qt with gcc in IRIX, it is built as a static library by default. To build Qt as a shared library, specify the -shared option when running Qt's configure script. -shared The reason, why static is the default build is because Qt doesn't link as a shared library on IRIX when being built with gcc. To fix this problem, edit the file src/Makefile after you have run configure. Add the switch -Wl,-LD_LAYOUT:lgot_buffer=100 to the Makefile's LFLAGS variable. The line should then look similar to this one: src/Makefile -Wl,-LD_LAYOUT:lgot_buffer=100 LFLAGS LFLAGS = -Wl,-LD_LAYOUT:lgot_buffer=100 -shared -Wl,-soname,libqt-mt.so.3 -Wl,-rpath,/usr/people/reggie/qt32-gcc/lib After doing this just compile Qt as usual (i.e., by running make). Then compile Squish as documented elsewhere in this manual (Installing Squish for Qt from Desktop Source Packages (Section 3.1.2)). This fix was been submitted to Trolltech support and was incorporated into Qt in version 3.3.0. 10.1.5. How do I Build Squish with Python Support on MinGW? Some Python binary distributions already contain an import library for Python that can be used with MinGW. In such cases you don't have to do anything. On the other hand, if you don't have the import library you must generate it yourself. For this you need MinGW's pexports and dlltool tools. In the following example we assume that Python is installed in C:\Python27 and that the Python DLL is C:\WINNT\system32\python27.dll. To generate the import library, do the following: C:\Python27 C:\WINNT\system32\python27.dll cd C:\Python27\libs pexports C:\WINNT\system32\python27.dll > python27.def dlltool --dllname python27.dll --def python27.def --output-lib libpython27.a For more information see the MinGW Wiki on Python extensions (just the Basic Setup section). 10.1.6. I have problems building Squish with a single-threaded Qt 3 library on Windows. 
As mentioned in Installation for testing with a single-threaded Qt library (Section 3.1.2.5), most parts of Squish can be built with a single-threaded Qt library. On Windows there is one problem with this: The single-threaded Qt library is linked against the static LIBC.LIB instead of the shared MSVCRT.LIB. This leads to problems with memory management because of different heaps when using multiple DLLs linked against Qt, and this problem affects Squish. LIBC.LIB MSVCRT.LIB To work around the problem, open the file %QTDIR%\mkspecs\win32-msvc\qmake.conf (or %QTDIR%\mkspecs\win32-icc\qmake.conf, if you use Intel C++) and change the following lines: %QTDIR%\mkspecs\win32-msvc\qmake.conf %QTDIR%\mkspecs\win32-icc\qmake.conf QMAKE_CFLAGS_RELEASE = -O1 QMAKE_CFLAGS_DEBUG = -Z7 to: QMAKE_CFLAGS_RELEASE = -O1 -MD QMAKE_CFLAGS_DEBUG = -Z7 -MDd After doing this you must regenerate Qt's Makefile and then recompile the Qt: Makefile cd %QTDIR%\src qmake qt.pro nmake clean nmake 10.1.7. Why doesn't Squish record mouse move events? Squish only records mouse move events when a mouse button is pressed—even if Qt's mouse tracking is enabled. This is because in most cases mouse move events that take place when no mouse button is pressed can be ignored since they don't need to be replayed for the widgets to function properly. Furthermore, recording these events would produce a lot of script code which would not affect the application, but would make the script harder to read and maintain. For those rare cases where it is necessary to record all the mouse move events because they are essential for the widget to work correctly, you can change Squish's behavior. For example, let's assume that we have a custom widget of type CanvasView for which we want to record all mouse move events. We can tell Squish to switch on mouse tracking for widgets of this type by creating and registering a suitable init file. 
For example, we could create a file called myinit.tcl that contained this line: CanvasView myinit.tcl setMouseTracking CanvasView true Then we can tell Squish to execute this file at startup: squishrunner --config addInitScript Qt <absolute_path>/myinit.tcl After this, when recording a test case, mouse move events on CanvasView widgets will be recorded even if no mouse button is pressed. (See also, Configuring squishrunner (Section 7.4.3.25).) 10.1.8. Why do my widgets always get names like MyWidget34 rather than more descriptive names? MyWidget34 Squish tries to create the most descriptive names for GUI objects. By default is uses the QObject name. If this property isn't unique (or is empty), other possibly unique properties are examined (window caption, button text, etc.) before falling back to <ObjectType><num>. One solution (and probably the best!) to get better names in generated scripts is to give all the widgets descriptive, short and unique QObject names in the AUT's source code. If the custom widgets have other possibly unique properties, like a label, etc., Squish can be told to try using these properties. For example, suppose that we have a custom widget called CanvasView which has a unique label property. To tell Squish about this, create a file called, for example, myinit.tcl that contains the following line: label setUniqueProperty CanvasView label Then tell Squish to interpret this file at startup: After doing this, Squish will attempt to use the CanvasView's label property for identification, if its value is unique. (See also, Configuring squishrunner (Section 7.4.3.25).) 10.1.9. Why does Squish fail to hook into my AUT on AIX? Usually, Squish hooks into the AUT without requiring any special preparations. However, in some cases—such as on AIX—this is not possible due to technical limitations of the operating system. See the chapter Using the Built-in Hook (Section 7.13.2) in the Tools Reference Manual (Chapter 7) for a solution. 10.1.10. 
Why does squishrunner fail to start with the error cannot restore segment prot after reloc: Permission denied? cannot restore segment prot after reloc: Permission denied Due to the usage of third-party code and imperfect compilers, Squish contains libraries with so called "text relocations". By themselves these are harmless but a "hardened" Linux distribution with SELinux (Security Enhanced Linux) installed may nevertheless disallow such a property. We are still investigating this problem, but meanwhile one workaround is to change the SELinux policy from enforcing to permissive—on most modern systems this can be done via a GUI configuration tool, or it can be done by manually editing the /etc/selinux/config file, and changing the SELINUX=enforcing entry to SELINUX=permissive. Whichever way the change is made, the system must be rebooted for the change to take effect. /etc/selinux/config SELINUX=enforcing SELINUX=permissive A more fine-grained solution is also possible using the chcon tool. This allows for the setting of the security context for individuals files. Here is a command that a customer gave us that should make things work: find /path/to/squish -type f -name "*.so" | xargs chcon -t textrel_shlib_t 10.1.11. Why doesn't Squish replay paintings properly in my painting application? When a mouse press event is followed by mouse move events (while the button is still pressed), Squish compresses all these events into a single mouseDrag. mouseDrag Using a mouseDrag is much more robust in the face of changes in the AUT then recording the individual mouse events. However, during the replay of the mouseDrag, there is only one mouse move event synthesized, starting from the point where the mouse button was pressed to the point where the mouse button was released. 
To enable recording to work properly for applications that depend on all of the individual mouse move events being recorded individually, there's a way to disable the creation of the mouseDrag calls for a specific widget type. See setRecordMouseDrag for further details. setRecordMouseDrag 10.1.12. Why doesn't Squish work correctly on Ubuntu 11. Ubuntu 11 ships with the Unity desktop by default. This desktop system takes a macOS-like approach with a single menubar at the top of the screen rather than associated with each application. One of the things the Ubuntu developers have done to achieve this is to patch all the GUI libraries shipped with Ubuntu. Unfortunately, many many applications—including the Squish IDE—are broken by these patches, with the result that no menubar is shown at all. See for example, Ubuntu bug 618587 and Eclipse bug 330563. A workaround is to set the following environment variable: UBUNTU_MENUPROXY=0. (This workaround is used by the SQUISHDIR/squishide script which runs the SQUISHDIR/bin/ide/squishide application, but obviously if you run the application directly you must set the environment variable yourself.) UBUNTU_MENUPROXY=0 SQUISHDIR/squishide SQUISHDIR/bin/ide/squishide
so im writing some video/image editing software and im stumped... not about the code so much cuz what im doing works fine .. im not convinced it is the most efficient use of my computer but it works. first my code...

    #include <iostream>
    #include <fstream>
    using namespace std;

    int main()
    {
        fstream obj("sample.jpg", ios::in | ios::binary);
        fstream obj2("sample2.jpg", ios::out | ios::binary);
        int x = 0, length;
        obj.seekg(0, ios::end);
        length = obj.tellg();
        char array[length];
        obj.seekg(0, ios::beg);
        while(!obj.eof())
        {
            array[x] = obj.get();
            obj2.put(array[x]);
            x++;
        }
        obj.close();
        obj2.close();
    }

okay so my question is im not sure how jpg's or .mov's any image or movie extension is packed away... i realize that it is a combination of 3 bytes.... so for black 3 characters would all have ffffff.. my question is where do i go to learn how these bits are packed away so i can manipulate them... any standard that would be published... (preferrably something simple but unlikely) thank you
In this post I'll walk you through the basic capabilities of the heap viewer features and provide an example to illustrate the type of problem the heap viewer is intended to solve.

Because of the automatic memory management features in .Net it is tempting to believe that managed applications will never have memory leaks. While it is true that the garbage collector will always free objects that are no longer referenced, it is possible to hold on to an object longer than you intend to, thereby preventing it from being collected and essentially creating a leak. Remember that the GC will not collect any object that is referenced by a "root" such as a static variable, a stack-based variable in the currently executing method and so on. So even if you're done using an object, if it is not detached from the root that is keeping it alive it won't be collected.

In the vast majority of cases, making sure an object is no longer referenced when you are done with it requires no additional work from the programmer. For example, when a method exits, all references held by the method's locals go out of scope and therefore are no longer attached to a root. However, there are a few relatively common cases in which more thought is required to make sure you're detaching an instance from its root(s); more details on these scenarios, and examples of others, can be found by searching the web for something like ".Net memory leaks".

Memory leaks typically manifest themselves in obvious ways: your application starts seeing OutOfMemoryExceptions, performance degrades over time and so on. Your application likely has a leak if the amount of memory used by the GC continues to grow over time. The best way to track the amount of memory used to store objects in the GC heap is to monitor the "Managed Bytes in use After GC" counter in RPM. Graphing the value of this counter over time using Perfmon is the easiest way to determine whether the heap is continually growing.
Now that we've seen some simple examples of memory leaks, it's easy to imagine the data that would be useful in tracking leaks down. The rest of this post describes how to use the heap snapshot feature in Remote Performance Monitor to gather this information.

Let's start by looking at how to use RPM to capture a snapshot of the heap. First, launch an application as you always have using the "Live Counters..." menu option. After your application launches, the "View GC Heap" button on the bottom toolstrip is enabled.

Each time you press "View GC Heap" a GC occurs and a new window will open to display the contents of the GC heap after the collection completes. I call this new window the "Roots" view. The primary purpose of the roots view is to show the GC root that is causing each object instance to stay alive. Once you've narrowed down the type of instance that is causing the leak (more on how to do this later), you can use the roots view to determine why the instances in question aren't being collected. The data displayed in the roots view is shown in the following three controls:
The check boxes next to the type names show which instances are displayed in the roots tree to the right. By default, the intent is to display the types that are defined by the application itself, not those defined by the .Net Compact Framework: types with namespaces starting with "Microsoft" or "System" are excluded. You can change which types are shown in the roots tree by selecting the desired boxes and choosing the "Refresh Tree" button.

Each instance in the GC heap is displayed in the roots tree. Instances are grouped by type (unless there's only one instance of a given type, in which case that instance gets a top-level node of its own). The node for a given instance indicates the size in bytes of that instance along with a unique ID you can use to identify that instance throughout the tree. By expanding the node for a given instance, you can walk your way up the reference hierarchy to see which root is causing that instance to stay alive. For example, consider the following subtree for an instance of type "Dice":

This tree shows us that our instance of Dice is referenced by an array of Dice. The array is then referenced by an instance of type Turn, which is referenced by Form1. Form1 is a root, as indicated by the word "root" and by the red color of the text.

The hierarchy shown in the roots view is "upside down" in that you're viewing the instances that reference the instance in question (the referents). While this is the most useful view for finding leaks, it can also be useful to view the graph of objects a given instance references. Such reference graphs are provided by the "References" view. To view the reference graph for a given instance, right-click the instance in the roots tree and select "Display Object References" (this menu item is also available under the "View" pull-down menu):

The References view consists of a group that displays some general graph statistics and a tree showing the graph itself.
The following screenshot displays the object graph for a main form. The reference graph for this particular form has 207 unique objects and is about 16K in size:

You can determine which instances are leaking by taking multiple snapshots of the heap over time and comparing how many instances of each type are present. If the number of instances of a given type unexpectedly increases over time, it's likely that those instances are leaking. The "Comparison" view can help you spot these trends. You can compare all open heap snapshots for a given application by selecting the "Compare Views..." item from the "View" menu on the Roots view of any snapshot. The Comparison view looks like this:

The Comparison view shows instance counts for all types that appear in all open snapshots. Each row represents a type, while each column shows the number of instances of that type that are present in a given snapshot. The snapshot columns are ordered by time to make it easy to spot trends. The table is sorted based on the delta in the number of instances between snapshots: those types that had the greatest change in the number of instances appear first, while those that didn't change at all are shown at the bottom. When the number of instances of a type changes from one snapshot to the next, a "+" or "-" sign is shown along with the delta. Rows that represent a delta are shown in red; rows with no change in the number of instances are shown in black.

Memory leaks can be spotted by analyzing the rows in the table that are displayed in red. The types at the top of the table above are all from the .Net Compact Framework base class libraries. As I look at the list, it's pretty clear that some sort of UI element is leaking. As I read down the list, I eventually get to one of my own types: Players.Stats in this case. The number of instances of Players.Stats increased by 2 each time I took a snapshot. Players.Stats is a dialog box that displays basic statistical information about a player.
Given that I create a new one of these each time a certain button is clicked, then let it go out of scope, I didn't suspect that instances of this type were leaking.

Now that I know which type is causing the leak, I can go back to the Roots view to see what GC root is causing my instances of Players.Stats to stay alive. By looking at the referents tree for Players.Stats, I can see that my instances are staying alive because I've added an event handler that I've forgotten to "free". This event handler has created a dependency between a button on my main form and my Players.Stats dialog:

Armed with this data, I can go back into my code and detach my event handler using -=, thereby breaking the dependency between my two forms and fixing my leak.

Once you've opened a snapshot, you can save it to a file for later analysis. Choose the "Save" item from the File menu to save a snapshot. Choose the "Open -> .gclog file" item from the File menu to view a previously saved snapshot.

As always, please send any feedback my way...

Thanks,
Steven

This posting is provided "AS IS" with no warranties, and confers no rights.
In our last journey, we connected GitLab to a Kubernetes cluster exploring the "Add Kubernetes Cluster" button. Now that our cluster is set up, let's put it to use!

## Running Applications

GitLab's Auto DevOps is too magical for me. I don't understand Helm well enough to use it, and I suspect that my use of Bazel's docker rules will cause problems. Instead, we'll manually add a deploy step to update our application.

### 1. Add a Helm chart

Helm charts are essentially YAML templates for Kubernetes resources. They allow you to add variables and set/override them when applying the chart... something you might traditionally do using `sed` or another bash trick.

To create a new Helm chart, run a command like:

```bash
mkdir devops; cd devops
helm create myapp
```

This will create a `myapp/` directory with all of the pieces of a Helm template inside. You can then preview the YAML output of this template using a command like:

```bash
helm install --dry-run --debug myapp
```

Helm charts can be pretty intimidating. The meat of the template is in the `templates/deployment.yaml` file. If you want, you can delete all of the fancy templating in this file and replace it with a vanilla YAML for a deployment object. For simple apps, tweak things as necessary in your `values.yaml`. I kept doing this, and comparing the `--dry-run` output to a hand-written deployment YAML, until I got it looking sort of correct.

### 2. Pulling from GitLab's container registry

**Auth**

If you're using a private container registry, like GitLab's, you will need to store some login information in a secret in your Kubernetes cluster. To begin, head to your project settings in GitLab and navigate to the Registry section. Create a Deploy Token and note the username and password. Next, follow this tutorial to upload that secret to your cluster.
I ended up using a command like:

```bash
kubectl create secret docker-registry regcred \
  --docker-server=registry.gitlab.com \
  --docker-username=gitlab+deploy-token-123456 \
  --docker-password=p@ssw0rdh3r3 \
  --docker-email=me@gmail.com
```

**NOTE:** You probably need to run this command with a `--namespace` flag that sets this secret in the same namespace that GitLab picks to run your application. If it's not set, you'll see errors trying to fetch the container.

**Helm Chart Tweak**

To use this `regcred` secret, add it to the `imagePullSecrets` section of your `values.yaml` file like this:

```yaml
image:
  repository: registry.gitlab.com/bamnet/project/image
  pullPolicy: IfNotPresent
  tag: ""

imagePullSecrets:
  - name: regcred
```

### 3. .gitlab-ci.yml updates

To apply this Helm chart as part of your CI/CD pipeline, add a job to your `.gitlab-ci.yml` file like the following:

```yaml
deploy_myapp:
  stage: deploy
  image:
    name: alpine/helm:latest
    entrypoint: [""]
  script:
    - helm upgrade --install --wait --set image.tag=${CI_COMMIT_SHA} myapp-${CI_COMMIT_REF_SLUG} devops/myapp
  environment:
    name: production
```

The most important part of this entire job is the `environment` section. GitLab only exposes Kubernetes connection information (via environment variables `helm` and `kubectl` automatically use) when the deploy stage has an environment set. Without this section, you will get errors connecting to your cluster.

There are 3 parts of the `helm upgrade` command worth noting:

- `--set image.tag=${CI_COMMIT_SHA}` overrides the `tag` portion of our deployment.yaml, passing in the git commit hash. This assumes your containers are tagged with the commit that generates them. If you don't do this, consider a static value like `latest`.
- `myapp-${CI_COMMIT_REF_SLUG}` provides the name for this deployment. If you're deploying from the `master` branch, this will be `myapp-master`. This must be unique, so tweak the `myapp-` prefix if you have multiple applications.
- `devops/myapp` at the end specifies the folder where the Helm chart files are located.
Pushing this new gitlab-ci file should trigger an automatic deployment to your Kubernetes cluster. Sit back, relax, and watch the dashboard to see it work. If this is your first push, be on the lookout for a new namespace to be created... probably something like `gitlabproject-123456-production`.

**Troubleshooting Tips**

- Locally run `helm install --dry-run` to see the planned configuration. If it doesn't look right locally, there is no way GitLab is going to get it right.
- Connect to the Kubernetes Dashboard to see why deployments fail.
- Make sure your image names and tags match between your registry hosting the things and the deployment yaml trying to create them.
- Use the `--namespace` flag to make sure your registry credentials end up in the right namespace.

## Monitoring Applications

GitLab has a one-click install of Prometheus. I am a sucker for one-click install buttons and wanted to give it a spin monitoring my Go application.

### 1. Exporting Metrics

The OpenTelemetry docs and examples are a good starting point. Prometheus needs an HTTP endpoint to grab the metrics from; a very simple Prometheus exporter looks like this:

```go
func initMeter() {
	exporter, err := prometheus.InstallNewPipeline(prometheus.Config{})
	if err != nil {
		log.Panicf("failed to initialize prometheus exporter %v", err)
	}
	http.HandleFunc("/metrics", exporter.ServeHTTP)
	go func() {
		_ = http.ListenAndServe(":2222", nil)
	}()
	fmt.Println("Prometheus server running on :2222")
}

func main() {
	initMeter()
	// Rest of your code here.
}
```

### 2. Adding Annotations

GitLab's one-click Prometheus automatically scrapes metrics from any resource that has certain annotations in place which tell it how to scrape. Add the following to your `values.yaml` file:

```yaml
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: /metrics
  prometheus.io/port: "2222"
```

That's it!

**Troubleshooting Tips**

- Manually connect to your application and see the exported metrics.
  Forward the port using `kubectl port-forward -n <gitlab-created-namespace> deployments/myapp-master 2222:2222` and point your browser at the forwarded port's `/metrics` path.
- Manually connect to Prometheus and use the web UI to see what metrics are being scraped and run queries against them. Forward the port using `kubectl port-forward -n gitlab-managed-apps service/prometheus-prometheus-server 9090:80` and point your browser at the forwarded port.
Kernel APIs, Part 2

Deferrable functions, kernel tasklets, and work queues

An introduction to bottom halves in Linux 2.6

This article explores a couple of methods used to defer processing between kernel contexts (specifically, within the 2.6.27.14 Linux kernel). Although these methods are specific to the Linux kernel, the ideas behind them are useful from an architectural perspective as well. For example, you could implement these ideas in traditional embedded systems in place of a traditional scheduler for work scheduling.

Before diving into the methods used in the kernel to defer functions, however, let's start with some background on the problem being solved. When an operating system is interrupted because of a hardware event (such as the presence of a packet through a network adapter), the processing begins in an interrupt. Typically, the interrupt kicks off a substantial amount of work. Some amount of this work is done in the context of the interrupt, and work is passed up the software stack for additional processing (see Figure 1).

Figure 1. Top-half and bottom-half processing

The question is, how much work should be done in the interrupt context? The problem with interrupt context is that some or all interrupts can be disabled during this time, which increases the latency of handling other hardware events (and introduces changes in processing behavior). Therefore, minimizing the work done in the interrupt is desirable, pushing some amount of the work into the kernel context (where there is a higher likelihood that the processor can be gainfully shared).
As shown in Figure 1, the processing done in the interrupt context is called the top half, and interrupt-based processing that's pushed outside of the interrupt context is called the bottom half (where the top half schedules the subsequent processing by the bottom half). The bottom-half processing is performed in the kernel context, which means that interrupts are enabled. This leads to better performance because of the ability to deal quickly with high-frequency interrupt events by deferring non-time-sensitive work.

Short history of bottom halves

Linux tends to be a Swiss Army knife of functionality, and deferring functionality is no different. Since kernel 2.3, softirqs have been available that implement a set of 32 statically defined bottom halves. As static elements, these are defined at compile time (unlike the new mechanisms, which are dynamic). Softirqs were used for time-critical processing (software interrupts) in the kernel thread context. You can find the source to the softirq functionality in ./kernel/softirq.c.

Also introduced in the 2.3 Linux kernel are tasklets (see ./include/linux/interrupt.h). Tasklets are built on top of softirqs to allow dynamic creation of deferrable functions. Finally, in the 2.5 Linux kernel, work queues were introduced (see ./include/linux/workqueue.h). Work queues permit work to be deferred outside of the interrupt context into the kernel process context.

Let's now explore the dynamic mechanisms for work deferral: tasklets and work queues.

Introducing tasklets

Softirqs were originally designed as a vector of 32 softirq entries supporting a variety of software interrupt behaviors. Today, only nine vectors are used for softirqs, one being the TASKLET_SOFTIRQ (see ./include/linux/interrupt.h). And although softirqs still exist in the kernel, tasklets and work queues are recommended instead of allocating new softirq vectors. Tasklets are a deferral scheme that lets you schedule a registered function to run later.
The top half (the interrupt handler) performs a small amount of work, and then schedules the tasklet to execute later at the bottom half.

Listing 1. Declaring and scheduling a tasklet

    /* Declare a Tasklet (the Bottom-Half) */
    void tasklet_function( unsigned long data );

    DECLARE_TASKLET( tasklet_example, tasklet_function, tasklet_data );

    ...

    /* Schedule the Bottom-Half */
    tasklet_schedule( &tasklet_example );

A given tasklet will run on only one CPU (the CPU on which the tasklet was scheduled), and the same tasklet will never run on more than one CPU of a given processor simultaneously. But different tasklets can run on different CPUs at the same time. Tasklets are represented by the tasklet_struct structure (see Figure 2), which includes the necessary data to manage and maintain the tasklet (state, enable/disable via an atomic_t, function pointer, data, and linked-list reference).

Figure 2. The internals of the tasklet_struct structure

Tasklets are scheduled through the softirq mechanism, sometimes through ksoftirqd (a per-CPU kernel thread), when the machine is under heavy soft-interrupt load. The next section explores the various functions available in the tasklets application programming interface (API).

Tasklets API

Tasklets are defined using a macro called DECLARE_TASKLET (see Listing 2). Underneath, this macro simply provides a tasklet_struct initialization of the information you provide (tasklet name, function, and tasklet-specific data). By default, the tasklet is enabled, which means that it can be scheduled. A tasklet can also be declared as disabled by default using the DECLARE_TASKLET_DISABLED macro. This requires that the tasklet_enable function be invoked to make the tasklet schedulable. You can enable and disable a tasklet (from a scheduling perspective) using the tasklet_enable and tasklet_disable functions, respectively. A tasklet_init function also exists that initializes a tasklet_struct with the user-provided tasklet data.

Listing 2. Tasklet creation and enable/disable functions

    DECLARE_TASKLET( name, func, data );
    DECLARE_TASKLET_DISABLED( name, func, data );

    void tasklet_init( struct tasklet_struct *,
                       void (*func)(unsigned long), unsigned long data );

    void tasklet_disable_nosync( struct tasklet_struct * );
    void tasklet_disable( struct tasklet_struct * );

    void tasklet_enable( struct tasklet_struct * );
    void tasklet_hi_enable( struct tasklet_struct * );

Two disable functions exist, each of which requests a disable of the tasklet, but only tasklet_disable returns after the tasklet has been terminated (where tasklet_disable_nosync may return before the termination has occurred). The disable functions allow the tasklet to be "masked" (that is, not executed) until the enable function is called. Two enable functions also exist: one for normal-priority scheduling (tasklet_enable) and one for enabling higher-priority scheduling (tasklet_hi_enable). Normal-priority scheduling is performed through the TASKLET_SOFTIRQ-level softirq, where high priority is through the HI_SOFTIRQ-level softirq.

As with the normal and high-priority enable functions, there are normal and high-priority schedule functions (see Listing 3). Each function enqueues the tasklet on the particular softirq vector (tasklet_vec for normal priority and tasklet_hi_vec for high priority). Tasklets from the high-priority vector are serviced first, followed by those on the normal vector. Note that each CPU maintains its own normal and high-priority softirq vectors.

Listing 3. Tasklet scheduling functions

    void tasklet_schedule( struct tasklet_struct * );
    void tasklet_hi_schedule( struct tasklet_struct * );

Finally, after a tasklet has been created, it's possible to stop a tasklet through the tasklet_kill functions (see Listing 4). The tasklet_kill function ensures that the tasklet will not run again and, if the tasklet is currently scheduled to run, will wait for its completion, and then kill it.
The tasklet_kill_immediate function is used only when a given CPU is in the dead state.

Listing 4. Tasklet kill functions

    void tasklet_kill( struct tasklet_struct * );
    void tasklet_kill_immediate( struct tasklet_struct *, unsigned int cpu );

From the API, you can see that the tasklet API is simple, and so is the implementation. You can find the implementation of the tasklet mechanism in ./kernel/softirq.c and ./include/linux/interrupt.h.

Simple tasklet example

Let's look at a simple usage of the tasklets API (see Listing 5). As shown here, a tasklet function is created with associated data (my_tasklet_function and my_tasklet_data), which is then used to declare a new tasklet using DECLARE_TASKLET. When the module is inserted, the tasklet is scheduled, which makes it executable at some point in the future. When the module is unloaded, the tasklet_kill function is called to ensure that the tasklet is not in a schedulable state.

Listing 5. Simple example of a tasklet in the context of a kernel module

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/interrupt.h>

    MODULE_LICENSE("GPL");

    char my_tasklet_data[]="my_tasklet_function was called";

    /* Bottom Half Function */
    void my_tasklet_function( unsigned long data )
    {
      printk( "%s\n", (char *)data );
      return;
    }

    DECLARE_TASKLET( my_tasklet, my_tasklet_function,
                     (unsigned long) &my_tasklet_data );

    int init_module( void )
    {
      /* Schedule the Bottom Half */
      tasklet_schedule( &my_tasklet );
      return 0;
    }

    void cleanup_module( void )
    {
      /* Stop the tasklet before we exit */
      tasklet_kill( &my_tasklet );
      return;
    }

Introducing work queues

Work queues are a more recent deferral mechanism, added in the 2.5 Linux kernel version. Rather than providing a one-shot deferral scheme as is the case with tasklets, work queues are a generic deferral mechanism in which the handler function for the work queue can sleep (not possible in the tasklet model).
Work queues can have higher latency than tasklets but include a richer API for work deferral. Deferral used to be managed by task queues through keventd but is now managed by kernel worker threads named events/X.

Work queues provide a generic method to defer functionality to bottom halves. At the core is the work queue (struct workqueue_struct), which is the structure onto which work is placed. Work is represented by a work_struct structure, which identifies the work to be deferred and the deferral function to use (see Figure 3). The events/X kernel threads (one per CPU) extract work from the work queue and activate one of the bottom-half handlers (as indicated by the handler function in the struct work_struct).

Figure 3. The process behind work queues

As the work_struct indicates the handler function to use, you can use the work queue to queue work for a variety of handlers. Now, let's look at the API functions that can be found for work queues.

Work queue API

The work queue API is slightly more complicated than tasklets, primarily because a number of options are supported. Let's first explore the work queues, and then we'll look at work and the variants.

Recall from Figure 3 that the core structure for the work queue is the queue itself. This structure is used to enqueue work from the top half to be deferred for execution later by the bottom half. Work queues are created through a macro called create_workqueue, which returns a workqueue_struct reference. You can remove this work queue later (if needed) through a call to the destroy_workqueue function:

    struct workqueue_struct *create_workqueue( name );
    void destroy_workqueue( struct workqueue_struct * );

The work to be communicated through the work queue is defined by the work_struct structure. Typically, this structure is the first element of a user's structure of work definition (you'll see an example of this later).
The work queue API provides three functions to initialize work (from an allocated buffer); see Listing 6. The INIT_WORK macro provides for the necessary initialization and setting of the handler function (passed by the user). In cases where the developer needs a delay before the work is enqueued on the work queue, you can use the INIT_DELAYED_WORK and INIT_DELAYED_WORK_DEFERRABLE macros.

Listing 6. Work initialization macros

    INIT_WORK( work, func );
    INIT_DELAYED_WORK( work, func );
    INIT_DELAYED_WORK_DEFERRABLE( work, func );

With the work structure initialized, the next step is enqueuing the work on a work queue. You can do this in a few ways (see Listing 7). First, simply enqueue the work on a work queue using queue_work (which ties the work to the current CPU). Or, you can specify the CPU on which the handler should run using queue_work_on. Two additional functions provide the same functionality for delayed work (whose structure encapsulates the work_struct structure and a timer for work delay).

Listing 7. Work queue functions

    int queue_work( struct workqueue_struct *wq, struct work_struct *work );
    int queue_work_on( int cpu, struct workqueue_struct *wq,
                       struct work_struct *work );

    int queue_delayed_work( struct workqueue_struct *wq,
                            struct delayed_work *dwork, unsigned long delay );
    int queue_delayed_work_on( int cpu, struct workqueue_struct *wq,
                               struct delayed_work *dwork, unsigned long delay );

You can also use the kernel-global work queue, with four functions that address this work queue. These functions (shown in Listing 8) mimic those from Listing 7, except that you don't need to define the work queue structure.

Listing 8. Kernel-global work queue functions

    int schedule_work( struct work_struct *work );
    int schedule_work_on( int cpu, struct work_struct *work );

    int schedule_delayed_work( struct delayed_work *dwork, unsigned long delay );
    int schedule_delayed_work_on( int cpu, struct delayed_work *dwork,
                                  unsigned long delay );

There are also a number of helper functions that you can use to flush or cancel work on work queues. To flush a particular work item and block until the work is complete, you can make a call to flush_work. All work on a given work queue can be completed using a call to flush_workqueue. In both cases, the caller blocks until the operation is complete. To flush the kernel-global work queue, call flush_scheduled_work.

    int flush_work( struct work_struct *work );
    int flush_workqueue( struct workqueue_struct *wq );
    void flush_scheduled_work( void );

You can cancel work if it is not already executing in a handler. A call to cancel_work_sync will terminate the work in the queue or block until the callback has finished (if the work is already in progress in the handler). If the work is delayed, you can use a call to cancel_delayed_work_sync.

    int cancel_work_sync( struct work_struct *work );
    int cancel_delayed_work_sync( struct delayed_work *dwork );

Finally, you can find out whether a work item is pending (not yet executed by the handler) with a call to work_pending or delayed_work_pending.

    work_pending( work );
    delayed_work_pending( work );

That's the core of the work queue API. You can find the implementation of the work queue API in ./kernel/workqueue.c, with API definitions in ./include/linux/workqueue.h. Let's now continue with a simple example of the work queue API.

Simple work queue example

The following example illustrates a few of the core work queue API functions. As with the tasklets example, you implement this example in the context of a kernel module for simplicity.
First, look at your work structure and the handler function that you'll use to implement the bottom half (see Listing 9). The first thing you'll note here is a definition of your work queue structure reference (my_wq) and the my_work_t definition. The my_work_t typedef includes the work_struct structure at the head and an integer that represents your work item. Your handler (a callback function) casts the work_struct pointer back to the my_work_t type. After emitting the work item (the integer from the structure), the work pointer is freed.

Listing 9. Work structure and bottom-half handler

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/workqueue.h>

    MODULE_LICENSE("GPL");

    static struct workqueue_struct *my_wq;

    typedef struct {
      struct work_struct my_work;
      int x;
    } my_work_t;

    my_work_t *work, *work2;

    static void my_wq_function( struct work_struct *work )
    {
      my_work_t *my_work = (my_work_t *)work;

      printk( "my_work.x %d\n", my_work->x );

      kfree( (void *)work );

      return;
    }

Listing 10 is your init_module function, which begins with creation of the work queue using the create_workqueue API function. Upon successful creation of the work queue, you create two work items (allocated via kmalloc). Each work item is then initialized with INIT_WORK, the work defined, and then enqueued onto the work queue with a call to queue_work. The top-half process (simulated here) is now complete. The work will then, at some time later, be processed by the handler shown in Listing 9.

Listing 10. Work queue and work creation

    int init_module( void )
    {
      int ret;

      my_wq = create_workqueue("my_queue");

      if (my_wq) {

        /* Queue some work (item 1) */
        work = (my_work_t *)kmalloc(sizeof(my_work_t), GFP_KERNEL);
        if (work) {
          INIT_WORK( (struct work_struct *)work, my_wq_function );
          work->x = 1;
          ret = queue_work( my_wq, (struct work_struct *)work );
        }

        /* Queue some additional work (item 2) */
        work2 = (my_work_t *)kmalloc(sizeof(my_work_t), GFP_KERNEL);
        if (work2) {
          INIT_WORK( (struct work_struct *)work2, my_wq_function );
          work2->x = 2;
          ret = queue_work( my_wq, (struct work_struct *)work2 );
        }

      }

      return 0;
    }

The final elements are shown in Listing 11. Here, in module cleanup, you flush the particular work queue (which blocks until the handler has completed processing of the work), and then destroy the work queue.

Listing 11. Work queue flush and destruction

    void cleanup_module( void )
    {
      flush_workqueue( my_wq );
      destroy_workqueue( my_wq );
      return;
    }

Differences between tasklets and work queues

From this short introduction to tasklets and work queues, you can see two different schemes for deferring work from top halves to bottom halves. Tasklets provide a low-latency mechanism that is simple and straightforward, while work queues provide a flexible API that permits queuing of multiple work items. Each defers work from the interrupt context, but only tasklets run atomically in a run-to-complete fashion, where work queues permit handlers to sleep, if necessary. Either method is useful for work deferral, so the method selected is based on your particular needs.

Going further

The work-deferral methods explored here represent the historical and current methods used in the Linux kernel (excluding timers, which will be covered in a future article). They are certainly not new (in fact, they have existed in other forms in the past), but they represent an interesting architectural pattern that is useful in Linux and elsewhere.
From softirqs to tasklets to work queues to delayed work queues, Linux continues to evolve in all areas of the kernel while providing a consistent and compatible user space experience.

Related topics

- Much of the information available on tasklets and work queues on the Internet tends to be a bit dated. For an introduction to the rework of the work queue API, check out this nice introduction from LWN.net.
- This useful presentation from Jennifer Hou (of the Computer Science department at the University of Illinois at Urbana-Champaign) provides a nice overview of the Linux kernel, an introduction to softirqs, and a short introduction to work deferral with tasklets.
- This synopsis from the Advanced Systems Programming course at the University of San Francisco (taught by Professor Emeritus, Dr. Allan Cruse) provides a great set of resources for work deferral (including the concepts that drive them).
- The seminal Linux Device Drivers text provides a useful introduction to work deferral (PDF). In this free chapter ("Timers, Delays, and Deferred Work"), you'll find a deep (though slightly dated) discussion of tasklets and work queues.
- For more information on where and when to use tasklets versus work queues, check out this exchange on the kernel mailing list.
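The work-queue pattern itself (a work_struct embedded at the head of a caller-defined structure, a queue, and a worker thread that dequeues each item and invokes its handler) is not kernel-specific. The sketch below mimics it in user space with POSIX threads. It is illustrative only: wq_create, wq_queue, and wq_destroy are invented names modeled loosely on create_workqueue, queue_work, and destroy_workqueue, not real kernel or libc APIs.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Analogue of work_struct: embed this at the head of your own type. */
struct work_item {
    void (*func)(struct work_item *);
    struct work_item *next;
};

struct workqueue {
    struct work_item *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t cond;
    pthread_t worker;
    int shutdown;
};

/* The worker thread: dequeue one item at a time and run its handler. */
static void *worker_fn(void *arg)
{
    struct workqueue *wq = arg;
    for (;;) {
        pthread_mutex_lock(&wq->lock);
        while (wq->head == NULL && !wq->shutdown)
            pthread_cond_wait(&wq->cond, &wq->lock);
        if (wq->head == NULL) {              /* shutdown and queue drained */
            pthread_mutex_unlock(&wq->lock);
            return NULL;
        }
        struct work_item *w = wq->head;
        wq->head = w->next;
        if (wq->head == NULL)
            wq->tail = NULL;
        pthread_mutex_unlock(&wq->lock);
        w->func(w);               /* handler may sleep, like a work queue */
    }
}

struct workqueue *wq_create(void)            /* cf. create_workqueue() */
{
    struct workqueue *wq = calloc(1, sizeof *wq);
    pthread_mutex_init(&wq->lock, NULL);
    pthread_cond_init(&wq->cond, NULL);
    pthread_create(&wq->worker, NULL, worker_fn, wq);
    return wq;
}

void wq_queue(struct workqueue *wq, struct work_item *w)  /* cf. queue_work() */
{
    w->next = NULL;
    pthread_mutex_lock(&wq->lock);
    if (wq->tail)
        wq->tail->next = w;
    else
        wq->head = w;
    wq->tail = w;
    pthread_cond_signal(&wq->cond);
    pthread_mutex_unlock(&wq->lock);
}

void wq_destroy(struct workqueue *wq)  /* cf. flush + destroy_workqueue() */
{
    pthread_mutex_lock(&wq->lock);
    wq->shutdown = 1;
    pthread_cond_signal(&wq->cond);
    pthread_mutex_unlock(&wq->lock);
    pthread_join(wq->worker, NULL);   /* worker drains the queue first */
    free(wq);
}

/* Usage, mirroring Listings 9 and 10: work_item first in the caller's type. */
struct my_work {
    struct work_item work;
    int x;
    int *total;
};

static void my_handler(struct work_item *w)
{
    struct my_work *mw = (struct my_work *)w;
    *mw->total += mw->x;              /* worker thread is the sole writer */
    free(mw);
}

int run_demo(void)
{
    int total = 0;
    struct workqueue *wq = wq_create();
    for (int i = 1; i <= 2; i++) {
        struct my_work *mw = malloc(sizeof *mw);
        mw->work.func = my_handler;
        mw->x = i;
        mw->total = &total;
        wq_queue(wq, &mw->work);
    }
    wq_destroy(wq);                   /* blocks until both items have run */
    return total;                     /* 1 + 2 */
}
```

Note how wq_destroy doubles as a flush: the worker keeps dequeuing until the list is empty before honoring the shutdown flag, which is the same drain-then-tear-down sequence the kernel example performs with flush_workqueue followed by destroy_workqueue.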
https://www.ibm.com/developerworks/library/l-tasklets/index.html
shmat, shmdt - shared memory operations

    #include <sys/types.h>
    #include <sys/shm.h>

    void *shmat(int shmid, const void *shmaddr, int shmflg);
    int shmdt(char *shmaddr);
    int shmdt(const void *shmaddr);

The shmat() function attaches the shared memory segment associated with the shared memory identifier specified by shmid to the data segment of the calling process.

The permission required for a shared memory control operation is given as {token}, where token is the type of permission needed. The types of permission are interpreted as follows:

    00400    READ by user
    00200    WRITE by user
    00040    READ by group
    00020    WRITE by group
    00004    READ by others
    00002    WRITE by others

See the Shared Memory Operation Permissions section of Intro(2) for more information.

When (shmflg&SHM_SHARE_MMU) is true, virtual memory resources in addition to shared memory itself are shared among processes that use the same shared memory.

When (shmflg&SHM_PAGEABLE) is true, virtual memory resources are shared and the dynamic shared memory (DISM) framework is created. The dynamic shared memory can be resized dynamically within the size specified in shmget(2). The DISM shared memory is pageable unless it is locked.

The shared memory segment is attached to the data segment of the calling process at the address specified based on one of the following criteria:

    If shmaddr is equal to (void *) 0, the segment is attached to the first available address as selected by the system.

    If shmaddr is equal to (void *) 0 and (shmflg&SHM_SHARE_MMU) or (shmflg&SHM_PAGEABLE) is true, then the segment is attached to the first available suitably aligned address. When (shmflg&SHM_SHARE_MMU) or (shmflg&SHM_PAGEABLE) is set, however, the permission given by shmget() determines whether the segment is attached for reading or reading and writing.

    If shmaddr is not equal to (void *) 0 and (shmflg&SHM_RND) is true, the segment is attached to the address given by (shmaddr - (shmaddr modulus SHMLBA)).
    If shmaddr is not equal to (void *) 0 and (shmflg&SHM_RND) is false, the segment is attached to the address given by shmaddr.

The segment is attached for reading if (shmflg&SHM_RDONLY) is true; otherwise it is attached for reading and writing.

RETURN VALUES

Upon successful completion, shmat() returns the data segment start address of the attached shared memory segment; shmdt() returns 0. Otherwise, -1 is returned, the shared memory segment is not attached, and errno is set to indicate the error.

ERRORS

The shmat() function will fail if:

    Operation permission is denied to the calling process (see Intro(2)).

    The shmaddr argument is not equal to 0, is not properly aligned, and (shmflg&SHM_RND) is false.

    The shmaddr argument is not equal to 0, is not properly aligned, and (shmflg&SHM_SHARE_MMU) is true. SHM_SHARE_MMU is not supported in certain architectures.

    Both (shmflg&SHM_SHARE_MMU) and (shmflg&SHM_PAGEABLE) are true.

    (shmflg&SHM_SHARE_MMU) is true and the shared memory segment specified by shmid had previously been attached by a call to shmat() in which (shmflg&SHM_PAGEABLE) was true.

    (shmflg&SHM_PAGEABLE) is true and the shared memory segment specified by shmid had previously been attached by a call to shmat() in which (shmflg&SHM_SHARE_MMU) was true.

    (shmflg&SHM_SHARE_MMU) is true and attaching to the shared memory segment would exceed a limit or resource control on locked memory.

    The number of shared memory segments attached to the calling process would exceed the system-imposed limit.

    The available data space is not large enough to accommodate the shared memory segment.

The shmdt() function will fail if:

    The shmaddr argument is not the data segment start address of a shared memory segment.

USAGE

Using a fixed value for the shmaddr argument can adversely affect performance on certain platforms due to D-cache aliasing.

ATTRIBUTES

See attributes(5) for descriptions of the following attributes:

SEE ALSO

Intro(2), exec(2), exit(2), fork(2), shmctl(2), shmget(2), attributes(5), standards(5)
http://docs.oracle.com/cd/E18752_01/html/816-5167/shmop-2.html
LOGICPRO ONLY - I have the CSV file (an example), also in the hyperlink. This was a solution you addressed a year ago, with the exception that the PIG latin is not required. Language is C# (sharp).

Step 1: Given a file of data, read the data and parse it based on fixed given field headers. To download the file, select the following link: Unit 4 Sample Data. We do not have to do the PIG latin in this assignment. The file is a comma delimited file with the following record structure:

Step 2: Write a complete C# XXXXX in console mode to load the data file as a sequential file using C# XXXXX:

Step 4: Next, sort the data in descending order based on the ZIP field, and display the following fields:

Step 5: Display all the records (and all their fields) for everyone that is in the state "NY."

Step 6: Submit the source code for the solution and the output screenshots for the following list. You can use any appropriate algorithm in the solution.

Angela, thank you so very much. Do you anticipate the requested professional to be online tonight?

here is what I currently have, but errors,,,,

Attachment: 2013-08-20_000636_unit4ip.docx
Attachment: 2013-08-20_000735_unit4ip.docx

Not sure how to attach... here it is pasted:

    using System;
    using System.IO;
    using System.Collections;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace ConsoleApplication24
    {
        class dataItem
        {
            // this class will hold the data from one string in the file
            string FirstName;
            string LastName;
            // etc.
        }

        class Program
        {
            static void Main(string[] args)
            {
                // create a place to store our data
                ArrayList data = new ArrayList();

                // open and read a text file
                StreamReader sr = new StreamReader("Unit4IP");
                try
                {
                    using (StreamReader sr = new StreamReader("Unit4IP"))
                    {
                        // need a string to hold the fields and a string variable for each line of text.
                        String line;
                        string[] fields;
                        while ((line = sr.ReadLine()) != null);
                        }
                        fields = line.Split(new Char [] {','});
                        Console.WriteLine(fields[0]);
                        Console.WriteLine(fields[1]);
                        // from here you can create a data to store your fields in it
                        // then insert the object into your data ArrayList.
                        // after that, you can do the sorting and output after this loop is done reading
                    }
                }
                catch (Exception e)
                {
                    Console.WriteLine("The file could not be read:");
                    Console.WriteLine(e.Message);
                }

                // Store text file contents in an appropriate data structure
                // Sort data
                // Output data

                // put this at the end to keep the console window from disappearing
                Console.WriteLine("Press enter to continue..");
                Console.ReadLine();
            }
        }
    }

please send the link for the solution to XXXXX@XXXXXX.XXX, as I can't get my XXXXX@XXXXXX.XXX account to log back in and it's my alternate email address.

I'm appreciative of the turn around time, and will rate you the highest rating once I d/l and run the file. Also, for step 6 of my project, was that completed? The algorithm piece? Thanks again - you are awesome!

Yes please, I require the algorithm. Not a problem about the alt email address; while you do that I will work to get my email address corrected for log in. Thanks LogicPro!

Thank you! Code is perfect. I did adjust the comment for step three to exclude the wording for "pig latin", but no worries. Just require the step 6 algorithm and we are set to complete. Thanks for the team work LogicPro! Let me know if you require until tomorrow, as that is fine.
http://www.justanswer.com/homework/7y3wb-logicpro.html
What are tags?

Are tags private? Who can see them?

What syntax do tags require?

- Tags should be all-lowercase; leave out punctuation and spaces—a space is used to enter multiple tags. So use "bigbrother," not "big brother."
- Numbers can appear but can't be first. Smoosh them together: "usb3," not "usb 3."
- For the opposite of a tag, prefix it with "!," e.g. "!funny" means "not funny."
- Keep your tags brief (64 characters max).

What tags are defined?

Certain tags trigger alerts to the editors, which means there are a few cases where we poach the namespace a bit. Just a few, for now:

- Dupe - Use "dupe" when a Slashdot story is an actual duplicate of a previous story, offering no new information. This alerts the editors, but is generally only useful when the story is in The Mysterious Future.
- Typo - Use "typo" when a story writeup has spelling or grammatical errors, or bad HTML (like a malformed link).

How can I see what others are tagging?

You can see some of the most popular tags at slashdot.org/tags. (You can see your own tags by visiting slashdot.org/my/tags.)

How can I see stories that have been tagged with "gadget"? Or "stevejobs"?

For any tag, you can see the stories it's been applied to by going to slashdot.org/tag/$tag (replace "$tag" with the term you're curious about). It's fun just to poke around!
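The syntax rules above can be collected into a small validator. This is just my own reading of the FAQ (the helper name and regular expression are mine, not Slashdot's code):

```python
import re

# One lowercase alphanumeric run: no punctuation or spaces, digits
# allowed but never first, 64 characters max.  A leading "!" negates
# a tag ("!funny" means "not funny").
TAG_RE = re.compile(r'^!?[a-z][a-z0-9]*$')

def is_valid_tag(tag):
    return len(tag) <= 64 and bool(TAG_RE.match(tag))

print(is_valid_tag("bigbrother"))  # True  ("big brother" smooshed together)
print(is_valid_tag("usb3"))        # True
print(is_valid_tag("usb 3"))       # False (no spaces)
print(is_valid_tag("3d"))          # False (numbers can't be first)
print(is_valid_tag("!funny"))      # True  (negated tag)
```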
http://games.slashdot.org/faq/tags.shtml
Copyright © 2005 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.

This document defines a set of simple types to describe abstract content, e.g. in an XML Information Set [XML Information Set] or in an abstract model (e.g. WSDL 2.0's component model [WSDL 2.0 Core Language]). The types are defined in order to be largely independent of the version of XML used when serializing the abstract content as an XML document.

This document is an editors' copy that has no official standing. This document is a draft intended to be reviewed by the Web Services Description Working Group for possible publication as a Working Group Note and has no formal status. The author feels that it contains useful information that is worth publishing for the benefit of others.

1 Introduction
1.1 Notational Conventions
2 Background
3 Definition of the Simple Types
3.1 string Type
3.2 Token Type
3.3 NCName Type
3.4 anyURI Type
3.5 QName Type
3.6 boolean Type
3.7 int Type
3.8 unsignedLong Type
3.9 anyType Type
4 Serialization as various versions of XML
4.1 Serialization as XML 1.0 and Relationship with XML Schema 1.0 Datatypes
4.2 Example of the serialization as different versions of XML of a simple type: stype:NCName
4.2.1 XML 1.0 serialization of an stype:NCName
4.2.2 XML 1.1 serialization of an stype:NCName
5 Using the simple types
6 Interoperability considerations
7 References
7.1 Normative References
7.2 Informative References
8 Acknowledgements

This document defines a set of simple types commonly used in Web services specifications. They are defined independently of any version of XML. This document is an example of how to allow specifications to be abstracted from a particular version of XML, in particular XML 1.0. Other types MAY be added to this document depending on the feedback received.
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC2119 [IETF RFC 2119].

This specification uses a number of namespace prefixes throughout, listed in the table below. Note that the choice of any namespace prefix is arbitrary and not semantically significant (see [XML Information Set]).

The use of XML Schema 1.0 datatypes [XML Schema: Datatypes] to define properties in a specification mandates an XML 1.0 [XML 1.0] serialization and prevents an XML 1.1 [XML 1.1] serialization, because the definitions of datatypes in [XML Schema: Datatypes] depend on XML 1.0 productions [XML 1.0]. This unfortunate side-effect of XML Schema datatypes unnecessarily prevents certain specifications from being compatible with XML version 1.1, and probably other versions of XML that the community may come up with in the future.

A previous Working Draft of WSDL 2.0 defined simple types independent of a particular version of XML to free itself from an unnecessary dependency on XML 1.0, making the XML Schema defined with [XML Schema: Structures] for WSDL 2.0 normative only for XML 1.0 serialization. However, the Working Group later took the decision that this additional layer of abstraction was too complex and decided to go back to defining its properties with XML Schema datatypes. This document captures the method which was used in the 2004-08-03 Working Draft of WSDL 2.0, explaining the objectives it was trying to reach, as it is believed that this technique to write specifications independent of a particular version of XML has merit.

This specification provides its own definition of those types, patterned after [XML Schema: Datatypes] but independent of it. This allows processors to accept descriptions serialized using a mechanism that is not compatible with [XML Schema: Datatypes], such as XML 1.1 [XML 1.1].
All types defined in this section are formally assigned to the "" namespace. All references to them in this specification are made via qualified names that use the stype prefix. It should be noted though that there is no schema (in the sense of [XML Schema: Structures]) for that namespace, because the types defined here go beyond the capabilities of XML Schema to describe.

All types listed above are such that their value spaces are a superset of the value space of the type with the same name defined by XML Schema [XML Schema: Datatypes]. In particular, the value space of the stype:string type is a strict superset of the value space of xsd:string, as shown by the one-character string consisting exclusively of the #x0 character.

Note: The small list of types provided here is believed to cover the needs of WSDL 2.0 [WSDL 2.0 Core Language] and WS-Addressing 1.0 [WS-Addressing 1.0 - Core], [WS-Addressing 1.0 - SOAP Binding]. Other simple types may be defined.

The value space of the stype:string type consists of finite-length sequences of characters in the range #x0-#x10FFFF inclusive, where a character is an atomic unit of text as specified by ISO/IEC 10646 [ISO/IEC 10646] and Unicode [Unicode].

The value space of the stype:Token type is the subset of the value space of the stype:string type consisting of strings that do not contain the line feed (#xA) or tab (#x9) characters, that have no leading or trailing spaces (#x20) and that have no internal sequences of two or more spaces.

The value space of the stype:NCName type is the subset of the value space of the stype:Token type consisting of tokens that do not contain the space (#x20) and ':' characters.

The value space of the stype:anyURI type consists of all International Resource Identifiers (IRI) as defined by [IETF RFC 3987].

The value space of the stype:QName type consists of the set of 2-tuples whose first component is of type stype:anyURI and whose second component is of type stype:NCName.
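The constraints that carve stype:Token and stype:NCName out of stype:string are simple enough to check mechanically. The sketch below is illustrative only (the function names are mine, not part of the specification) and follows the value-space wording above literally:

```python
def is_stype_token(s):
    # A Token is a string with no line feed (#xA) or tab (#x9) characters,
    # no leading or trailing spaces (#x20), and no internal run of two or
    # more spaces.
    return ('\n' not in s and '\t' not in s
            and s == s.strip(' ') and '  ' not in s)

def is_stype_ncname(s):
    # An NCName is a Token that additionally contains no space (#x20)
    # and no ':' character.
    return is_stype_token(s) and ' ' not in s and ':' not in s

print(is_stype_token("a b"))   # True  (single internal space is allowed)
print(is_stype_token(" a"))    # False (leading space)
print(is_stype_ncname("a:b"))  # False (colon)
print(is_stype_ncname("name")) # True
```

Note that this stype:NCName is deliberately more permissive than the XML 1.0 NCName production; the serialization section below adds the per-XML-version character restrictions on top.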
The value space of the stype:boolean type consists of the two distinct values true and false. An instance of a datatype that is defined as boolean can have the following legal literals {true, false, 1, 0}.

The value space of the stype:int type consists of the infinite set {…,-2,-1,0,1,2,…} representing the standard mathematical concept of the integer numbers. An instance of a datatype that is defined as int has a lexical representation consisting of a finite-length sequence of decimal digits (#x30-#x39) with an optional leading sign ("-" or "+"). If the sign is omitted, "+" is assumed.

The value space of the stype:unsignedLong type consists of the set {0,1,2,…,18446744073709551615} of integer numbers. unsignedLong has a lexical representation consisting of a finite-length sequence of decimal digits (#x30-#x39).

The stype:anyType type admits any combination of element, processing instruction, unexpanded entity reference, character, and comment information items as defined by [XML Information Set].

When serializing as other versions of XML, such as XML 1.0 [XML 1.0] or XML 1.1 [XML 1.1], the sets of characters allowed by the simple types defined in section 3 Definition of the Simple Types are restricted to the ones allowed by those versions of XML.

When serializing the information to XML 1.0 [XML 1.0], the simple types defined in section 3 Definition of the Simple Types map naturally to well-known datatypes defined in [XML Schema: Datatypes], which add additional constraints to the content serialized.

Let's consider when an stype:NCName may be serialized as XML 1.0 and as XML 1.1 as an example.

An stype:NCName MAY be serialized in an XML 1.0 document if it is only composed of the characters allowed by XML 1.0, i.e. matching the NCName production from the Namespaces in XML specification [Namespaces in XML].

An stype:NCName MAY be serialized in an XML 1.1 document if it is only composed of the characters allowed by XML 1.1, i.e.
matching the NCName production from the Namespaces in XML 1.1 specification [Namespaces in XML 1.1].

Typically, a specification with a dependency on XML 1.0 [XML 1.0] will have defined its content using types from XML Schema 1.0 Part 2 [XML Schema: Datatypes], and provided a normative XML 1.0 schema [XML Schema: Structures]. In order to allow XML versioning independence, types defined by this specification SHOULD be used. The XML 1.0 schema defined SHOULD be declared normative for XML 1.0 serializations only.

Note: This document not having gone through the W3C Recommendation Track Process, and therefore not having received a wide review, a normative reference to this document is difficult.

Conformance to a specification defined independent of any version of XML does NOT require an implementation to accept documents using all existing versions of XML, unless specifically called out. Conformance is considered for processing documents using the XML version supported by the implementation.

The original idea for defining types independent of a version of XML was proposed by Jonathan Marsh. The core content of this document is extracted from the 2004-08-03 Working Draft of WSDL 2.0 Part 1. The editors of this specification were:

Roberto Chinnici, Sun Microsystems
Martin Gudgin, Microsoft
Jean-Jacques Moreau, Canon
Jeffrey Schlimmer, Microsoft
Sanjiva Weerawarana, IBM Research

Commenters on this part of the WSDL 2.0 specification are acknowledged, as well as Richard Ishida and Felix Sasaki for their feedback.
http://www.w3.org/2005/08/17-xml-simp-types.html
Canonical Links are something that comes up quite often when building Dnn modules that host content, like blogs, forums, e-commerce sites and more. As is typical of something released and easy to implement, a lot of different solutions appeared. Quietly included in the 7.4.1 release of Dnn Platform was the ability to set the Canonical Url Link for a page by using a Page Base property, rather than by injecting your own specific link. This post will cover how to use that change to generate Canonical links in your Dnn modules.

If you’re not familiar with the term, a Canonical Link is a specific piece of html included in your <head> section. This defines, for the page, what the ‘Canonical’ Url is. Canonical just means ‘definitive’, and is used by search engines so that, if they find two pages with the same content, they know which is the correct one to include in search results. A canonical link looks like this:

<link href=”” rel=”canonical” />

If you search the world-wide-web, you’ll find many different ways of creating a canonical link in Dnn. Some of them may even be written by me. But this is now the Canonical blog post on this topic (see what I did there?).

The Dnn Page Base class defines multiple properties of the page of content being created. The namespace of this class is DotNetNuke.Framework.CDefault. This class includes the Title and the Description, and now also includes a property called CanonicalLinkUrl. If a value is found in this field, then Dnn will include the value inside a Canonical Link placed inside the <head> section of the page.

Setting the Canonical Url is then just a case of setting the value of this field. If Dnn sets the value of the Canonical Url at any point during the page lifecycle, any module that sets the value will overwrite the default value. The eventual ‘winner’ depends on who sets it the last time. Unless you have multiple modules on the same page trying to set the value, it should be your module.
The page title works in the same way – it is set according to the page properties of the current page, but any module is able to override that value and have the page title change.

Code Example

I wrote a very short example module to show how this is used. This module just sets the value of the Canonical Url to the value of the current page, unless a specific querystring is found. Here is the code:

No you can’t copy paste all of that (it’s an image), so here’s the relevant piece you want:

1) You need to define the Base Page as an accessible property:

    public DotNetNuke.Framework.CDefault BasePage
    {
        get { return (DotNetNuke.Framework.CDefault)this.Page; }
    }

2) You need to then set the CanonicalLinkUrl property with your value:

    //set the CanonicalLinkUrl property of the page
    this.BasePage.CanonicalLinkUrl = canonicalUrl;

The rest of the code is just filler to show how it works.

The code above shows four different behaviours – the first being when the page is requested as normal, with no extra parameters or querystring items. Here I request the page with the normal URL, and we get the page Url back. Note that in the code, I am doing this by calling the Friendly Url Provider with the actual TabId for the page – I do this because I want the unadorned URL of the page – if you add a ?test=whatever to the URL it will be ignored. If we just repeat what the requested URL is back out, then we are achieving nothing in terms of eliminating duplicate content. You can see that the Canonical Link element just emits the same Url as the page, which is what we want in this scenario.

However, in many cases you have a different Url for a specific piece of code, such as when you load up a Blog post based on a Content Id. Or we might load up a product in an e-commerce catalog. Sometimes these Urls get adorned with other pieces of querystring which we don’t want acting as duplicate content – perhaps they are coupon codes, tracking codes or just a bad link from somewhere.
The solution is to identify the piece of content and set the Canonical Link that you want for the page showing that content – regardless of the exact format of the Url. This example shows the result when the Canonical Url is set the way we want it. The requested Url has the querystring of ?option=2 – the code specifically states that this should be /option/2/something/else – so the Canonical Url is generated and placed into the Canonical Url link.

This simple example shows how to use the Canonical Url page property – if you are developing Dnn modules which need to be optimized for Search Engines, you need to make the changes to take advantage of this ability to directly control the Canonical Url from within the code itself. If you need to release code that spans an install base earlier than 7.4.1, you could use some simple reflection to see if the property is available – but you’d need to cache the results so you aren’t continually running expensive reflection.
http://www.dnnsoftware.com/community-blog/cid/155321
On occasion you will read or hear someone talking about C++ templates causing code bloat. I was thinking about it the other day and thought to myself, "self, if the code does exactly the same thing then the compiled code cannot really be any bigger, can it?" Here are the test cases presented, first without the use of templates, second with the use of templates. Exactly the same functionality and exactly the same code output:

    #include <iostream>

    void print(int i) {
      std::cout << i << std::endl;
    }

    void print(const std::string &s) {
      std::cout << s << std::endl;
    }

    void print(double d) {
      std::cout << d << std::endl;
    }

    void print(bool b) {
      std::cout << b << std::endl;
    }

    int main() {
      print(1);
      //Note, I have to put it in a std::string() otherwise the compiler thinks it's a const char *
      //which gets converted to an int or bool or something
      print(std::string("hello world"));
      print(4.5);
      print(false);
    }

And with the use of templates:

    #include <iostream>

    template<typename T>
    void print(const T &t) {
      std::cout << t << std::endl;
    }

    int main() {
      print(1);
      print(std::string("hello world"));
      print(4.5);
      print(false);
    }

There is no question that the templated version is smaller, easier to maintain and easier to grok than the first version (assuming a basic understanding of templates). They both produce exactly the same output:

    1
    hello world
    4.5
    0

And what about compiled code size? Each was compiled with the command g++ <filename>.cpp -O3. Non-template version: 8140 bytes, template version: 8028 bytes! The compiled size of the templated version was smaller. In the interest of full disclosure, with anything compiled less than -O3, the template version is 20 bytes larger than the non-template version. Also, build times do not vary between the two versions. Each takes approximately .623 seconds to compile. So, what are we seeing here, really? Templates do not "cause code bloat" or long compile times.
The fact is, if one were to write the same exact code with both templates and non-template versions the compile times and resulting code size would probably be very close. However, the program sources for the non-template versions would be so insurmountably large they would likely be unmaintainable. For example, if one were to use a std::vector<std::string>, std::vector<int> and std::vector<std::vector<float> > in his code, the resulting compiled code would be large indeed, as the compiler would generate no less than 4 versions of vector for him. However, if he were to hand write string_vector, int_vector, float_vector and float_vector_vector the amount of code to maintain would be huge. Bugs found in string_vector would more than likely not get fixed in int_vector, and the compiled code would almost assuredly be the same size as the standard template versions.

Careful, people normally mean

Careful, people normally mean that if you use std::vector in two separate class files that get compiled separately and then linked together, you end up compiling all of the functions for std::vector twice. In a large project with hundreds of separate class files, this can really add up to a lot of bloat and kill you when you try to link the whole thing together.

That's not necessarily true

That's not necessarily true. With g++ at least, the compiler is able to merge the linked template instantiations.

Some truth... but both compilation time and size may be smaller

This comment is true in that the code is compiled once for each instantiation thus increasing compile time. On the other hand Jason is right pointing out that the linker is responsible for eliminating redundant code.
If you list the symbols in all compilation units you will see that all template generated code is marked as a weak symbol, implying that it will be removed, without errors, if an identical normally defined (or weak) symbol exists (this is what makes the One-Definition-Rule so important: when the linker eliminates symbols it depends on the symbols being exactly the same).

Going back to compilation time, the C++ standard explicitly states that only instantiated member functions are compiled, so it might be the case that the code that is compiled is smaller in the templated version (if some of the methods of the class are never called), which can reduce both compilation time and final binary size. If you only use std::vector::push_back, std::vector::begin and std::vector::end you will never compile std::vector::operator[], std::vector::insert...

bloat due to templates

While what you've said is correct, there's another factor involved. People who know and love templates tend to avoid old C-style generic programming, such as used in the qsort() routine commonly found on UNIX systems. Such old code worked on arbitrary arguments by taking arguments from the caller specifying void*s to values, sizes in bytes if necessary, and function pointers to operate meaningfully on the memory content. It was not type safe, but it did mean the one block of compiled code could be reused on arbitrary types. Compared to this, instantiating templates for myriad types does generate a lot of executable code. On the other hand, that code might run faster as inlining and other optimisations are possible. Similarly, doing things like "template <int A, int B>" where A and B might be the sizeof a couple strings being passed to the constructor can very quickly lead to hundreds of copies of the template as A and B vary. Often, any optimisations that this allows aren't worth the bloat.
Sometimes, clever programmers will actually use a lightweight template to provide type safety, for example:

    template <class E> class X { typedef map<E, int> Map; };

where E is expected to be an enum, but it's not too painful to convert the values to ints for private Maps....

Cheers, Tony

HTML crap chopped a bit of my post:

Templates have a lot of positives

Templates have a lot of positives; I don't think fast compile times is really one of them. Sure it may compile faster the first time (as you compile only the members you need), but all template code has to live in header files (yes there is export, but it never seems to do the job). When code is being tweaked/developed/enhanced this can lead to a lot of compilation pain.

When people mention

When people mention template bloat they sometimes mean to compare the extra initiative put forth into using templates versus some other lightweight type-unsafe method (C anonymous functions vs templates). In which case templates are most definitely bloat, but safe code comes at a cost.

const T &t

The template version is small because you're passing a reference (sort of a pointer). What happens if you do the following?

executable blocking

I should also say that blocks of executable code are often aligned to 4KB or 512byte boundaries, meaning the difference in size between these two programs might not be enough to push the actual size of the output over one of these boundaries. You might have to make 100 versions of the same code (e.g. printed from another program) to see the real effect.

Use C++11 extern templates

I believe extern templates introduced by C++11 will eventually solve the problem of compilation time, or template re-compilation. But before that happens: In this approach the template definitions are still header files in the practical sense, but they are only meant to be included in the template instantiation source files. The template instantiation sources will likely include other headers from throughout the project, in order to bring in the types used as template arguments through the project.
The template instantiations sources will likely include other headers from throughout the project, in order to bring in types used as template arguments through the project. That might include splitting the standard templates library headers in this way, too, although much of the code there is inline with or without the template aspect. This approach has been known (and used?) before C++11 as "explicit template instantiation" model, but it required specific compiler options to make it work. Unfortunately this approach does not work with local types used as template arguments, becase the local types would need to be made available in the separate source file that deals with template instantiations only.
http://blog.emptycrate.com/node/307
In this section, we first introduce the Simon problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run it on a simulator and a device.

## 1. Introduction

Simon's algorithm, first introduced in Reference [1], was the first quantum algorithm to show an exponential speed-up versus the best classical algorithm in solving a specific problem. It inspired the quantum algorithms based on the quantum Fourier transform, which is used in the most famous quantum algorithm: Shor's factoring algorithm.

### 1a. Simon's Problem

We are given an unknown blackbox function $f$, which is guaranteed to be either one-to-one ($1:1$) or two-to-one ($2:1$), where one-to-one and two-to-one functions have the following properties:

- **one-to-one**: maps exactly one unique output to every input; for example, on 4 inputs: $f(1)=1,\; f(2)=2,\; f(3)=3,\; f(4)=4$
- **two-to-one**: maps exactly two inputs to every unique output; for example, on 4 inputs: $f(1)=1,\; f(2)=2,\; f(3)=1,\; f(4)=2$

The two-to-one mapping is according to a hidden bitstring $b$, where:

$$ \textrm{given }x_1,x_2: \quad f(x_1) = f(x_2) \\ \textrm{it is guaranteed }: \quad x_1 \oplus x_2 = b $$

Given this blackbox $f$, how quickly can we determine whether $f$ is one-to-one or two-to-one? Then, if $f$ turns out to be two-to-one, how quickly can we determine $b$? As it turns out, both cases boil down to the same problem of finding $b$, where the bitstring $b = 000\ldots$ represents the one-to-one $f$.

### 1b. Simon's Algorithm

#### Classical Solution

Classically, if we want to know what $b$ is with 100% certainty for a given $f$, we have to check up to $2^{n-1}+1$ inputs, where $n$ is the number of bits in the input. This means checking just over half of all the possible inputs until we find two cases of the same output. Much like the Deutsch-Jozsa problem, if we get lucky, we could solve the problem with our first two tries. But if we happen to get an $f$ that is one-to-one, or get really unlucky with an $f$ that's two-to-one, then we're stuck with the full $2^{n-1}+1$. There are known algorithms that have a lower bound of $\Omega(2^{n/2})$ (see Reference [2] below), but generally speaking the complexity grows exponentially with $n$.

#### Quantum Solution

The quantum circuit that implements Simon's algorithm uses a query function $\text{Q}_f$ that acts on two quantum registers as:

$$ \lvert x \rangle \lvert a \rangle \rightarrow \lvert x \rangle \lvert a \oplus f(x) \rangle $$

In the specific case that the second register is in the state $|0\rangle = |00\dots0\rangle$ we have:

$$ \lvert x \rangle \lvert 0 \rangle \rightarrow \lvert x \rangle \lvert f(x) \rangle $$

The algorithm involves the following steps.

1. Two $n$-qubit input registers are initialized to the zero state:
$$\lvert \psi_1 \rangle = \lvert 0 \rangle^{\otimes n} \lvert 0 \rangle^{\otimes n} $$
2. Apply a Hadamard transform to the first register:
$$\lvert \psi_2 \rangle = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^{n} } \lvert x \rangle\lvert 0 \rangle^{\otimes n} $$
3. Apply the query function $\text{Q}_f$:
$$ \lvert \psi_3 \rangle = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^{n} } \lvert x \rangle \lvert f(x) \rangle $$
4. Measure the second register. A certain value of $f(x)$ will be observed. Because of the setting of the problem, the observed value $f(x)$ could correspond to two possible inputs: $x$ and $y = x \oplus b$. Therefore the first register becomes:
$$\lvert \psi_4 \rangle = \frac{1}{\sqrt{2}} \left( \lvert x \rangle + \lvert y \rangle \right)$$
where we omitted the second register since it has been measured.
5. Apply Hadamard on the first register:
$$ \lvert \psi_5 \rangle = \frac{1}{\sqrt{2^{n+1}}} \sum_{z \in \{0,1\}^{n} } \left[ (-1)^{x \cdot z} + (-1)^{y \cdot z} \right] \lvert z \rangle $$
6. Measuring the first register will give an output only if:
$$ (-1)^{x \cdot z} = (-1)^{y \cdot z} $$
which means:
$$ x \cdot z = y \cdot z \\ x \cdot z = \left( x \oplus b \right) \cdot z \\ x \cdot z = x \cdot z \oplus b \cdot z \\ b \cdot z = 0 \text{ (mod 2)} $$

A string $z$ will be measured whose inner product with $b$ is $0$. Thus, repeating the algorithm $\approx n$ times, we will be able to obtain $n$ different values of $z$, and the following system of equations can be written:

$$ \begin{cases} b \cdot z_1 = 0 \\ b \cdot z_2 = 0 \\ \quad \vdots \\ b \cdot z_n = 0 \end{cases}$$

from which $b$ can be determined, for example by Gaussian elimination.

So, in this particular problem the quantum algorithm performs exponentially fewer steps than the classical one. Once again, it might be difficult to envision an application of this algorithm (although it inspired the most famous algorithm created by Shor), but it represents the first proof that there can be an exponential speed-up in solving a specific problem by using a quantum computer rather than a classical one.

## 2. Example

Let's see the example of Simon's algorithm for 2 qubits with the secret string $b=11$, so that $f(x) = f(y)$ if $y = x \oplus b$. The quantum circuit to solve the problem proceeds as follows.

1. Two $2$-qubit input registers are initialized to the zero state:
$$\lvert \psi_1 \rangle = \lvert 0 0 \rangle_1 \lvert 0 0 \rangle_2 $$
2. Apply Hadamard gates to the qubits in the first register:
$$\lvert \psi_2 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle_1 + \lvert 0 1 \rangle_1 + \lvert 1 0 \rangle_1 + \lvert 1 1 \rangle_1 \right) \lvert 0 0 \rangle_2 $$
3. For the string $b=11$, the query function can be implemented as $\text{Q}_f = CX_{1_a 2_a}CX_{1_a 2_b}CX_{1_b 2_a}CX_{1_b 2_b}$:
$$ \begin{aligned} \lvert \psi_3 \rangle = \frac{1}{2} ( \; & \lvert 0 0 \rangle_1 \; \lvert 0\oplus 0 \oplus 0, & 0 \oplus 0 \oplus 0 \rangle_2 &\\[5pt] + & \lvert 0 1 \rangle_1 \; \lvert 0\oplus 0 \oplus 1, & 0 \oplus 0 \oplus 1 \rangle_2 &\\[6pt] + & \lvert 1 0 \rangle_1 \; \lvert 0\oplus 1 \oplus 0, & 0 \oplus 1 \oplus 0 \rangle_2 &\\[6pt] + & \lvert 1 1 \rangle_1 \; \lvert 0\oplus 1 \oplus 1, & 0 \oplus 1 \oplus 1 \rangle_2 & \; )\\ \end{aligned} $$
Thus:
$$ \begin{aligned} \lvert \psi_3 \rangle = \frac{1}{2} ( \quad & \lvert 0 0 \rangle_1 \lvert 0 0 \rangle_2 & \\[6pt] + & \lvert 0 1 \rangle_1 \lvert 1 1 \rangle_2 & \\[6pt] + & \lvert 1 0 \rangle_1 \lvert 1 1 \rangle_2 & \\[6pt] + & \lvert 1 1 \rangle_1 \lvert 0 0 \rangle_2 & \; )\\ \end{aligned} $$
4. We measure the second register. With $50\%$ probability we will see either $\lvert 0 0 \rangle_2$ or $\lvert 1 1 \rangle_2$. For the sake of the example, let us assume that we see $\lvert 1 1 \rangle_2$. The state of the system is then
$$ \lvert \psi_4 \rangle = \frac{1}{\sqrt{2}} \left( \lvert 0 1 \rangle_1 + \lvert 1 0 \rangle_1 \right) $$
where we omitted the second register since it has been measured.
5. Apply Hadamard on the first register:
$$ \lvert \psi_5 \rangle = \frac{1}{2\sqrt{2}} \left[ \left( \lvert 0 \rangle + \lvert 1 \rangle \right) \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right) + \left( \lvert 0 \rangle - \lvert 1 \rangle \right) \otimes \left( \lvert 0 \rangle + \lvert 1 \rangle \right) \right] \\ = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle - \lvert 0 1 \rangle + \lvert 1 0 \rangle - \lvert 1 1 \rangle + \lvert 0 0 \rangle + \lvert 0 1 \rangle - \lvert 1 0 \rangle - \lvert 1 1 \rangle \right] \\ = \frac{1}{\sqrt{2}} \left( \lvert 0 0 \rangle - \lvert 1 1 \rangle \right)$$
6. Measuring the first register will give either $\lvert 0 0 \rangle$ or $\lvert 1 1 \rangle$ with equal probability.
7. If we see $\lvert 1 1 \rangle$, then:
$$ b \cdot 11 = 0 $$
which tells us that $b \neq 01$ or $10$, and the two remaining potential solutions are $b = 00$ or $b = 11$. Note that $b = 00$ will always be a trivial solution to our simultaneous equations. If we repeat steps 1-6 many times, we would only measure $|00\rangle$ or $|11\rangle$, as
$$ b \cdot 11 = 0 \qquad b \cdot 00 = 0 $$
are the only equations satisfied by $b=11$. We can verify $b=11$ by picking a random input $x_i$ and checking $f(x_i) = f(x_i \oplus b)$. For example:
$$ 01 \oplus b = 10 $$
$$ f(01) = f(10) = 11$$

## 3. Qiskit Implementation

### 3a. Experiment with Simulators

```python
# importing Qiskit
from qiskit import IBMQ, BasicAer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, execute

# import basic plot tools
from qiskit.visualization import plot_histogram
from qiskit_textbook.tools import simon_oracle
```

The function `simon_oracle` (imported above) creates a Simon oracle for the bitstring `b`. This is given without explanation here, but we will discuss the method in section 4. In Qiskit, measurements are only allowed at the end of the quantum circuit. In the case of Simon's algorithm, we actually do not care about the output of the second register, and will only measure the first register.
```python
b = '110'
n = len(b)
simon_circuit = QuantumCircuit(n*2, n)

# Apply Hadamard gates before querying the oracle
simon_circuit.h(range(n))

# Apply barrier for visual separation
simon_circuit.barrier()

simon_circuit += simon_oracle(b)

# Apply barrier for visual separation
simon_circuit.barrier()

# Apply Hadamard gates to the input register
simon_circuit.h(range(n))

# Measure qubits
simon_circuit.measure(range(n), range(n))
simon_circuit.draw()
```

```python
# use local simulator
backend = BasicAer.get_backend('qasm_simulator')
shots = 1024
results = execute(simon_circuit, backend=backend, shots=shots).result()
counts = results.get_counts()
plot_histogram(counts)
```

Since we know $b$ already, we can verify these results do satisfy $b\cdot z = 0 \pmod{2}$:

```python
# Calculate the dot product of the results
def bdotz(b, z):
    accum = 0
    for i in range(len(b)):
        accum += int(b[i]) * int(z[i])
    return (accum % 2)

for z in counts:
    print( '{}.{} = {} (mod 2)'.format(b, z, bdotz(b,z)) )
```

```
110.000 = 0 (mod 2)
110.001 = 0 (mod 2)
110.111 = 0 (mod 2)
110.110 = 0 (mod 2)
```

Using these results, we can recover the value of $b = 110$ by solving this set of simultaneous equations. For example, say we first measured $001$, this tells us:

$$ b \cdot 001 = 0 \implies b_0 = 0 $$

If we next measured $111$, we have:

$$ b \cdot 111 = 0 \implies b_2 \oplus b_1 \oplus b_0 = 0 \implies b_2 \oplus b_1 = 0 $$

Which tells us either:

$$ b_2 = b_1 = 0, \quad b = 000 $$

or

$$ b_2 = b_1 = 1, \quad b = 110 $$

Of which $b = 110$ is the non-trivial solution to our simultaneous equations. We can solve these problems in general using Gaussian elimination, which has a run time of $O(n^3)$.

### 3b. Experiment with Real Devices

The circuit in section 3a uses $2n = 6$ qubits, while at the time of writing many IBM Quantum devices only have 5 qubits. We will run the same code, but instead using $b=11$ as in the example in section 2, requiring only 4 qubits.
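Before moving on to hardware, it is worth sketching the classical post-processing described above: recovering $b$ from the measured $z$ strings. This sketch is not part of the original notebook, and for small $n$ a brute-force check over all $2^n$ candidates stands in for the Gaussian elimination mentioned in section 3a:

```python
def recover_b(measurements, n):
    """Recover the hidden bitstring b from measured z strings.

    Each measured z satisfies b . z = 0 (mod 2). For small n we simply
    test every candidate b against all measurements; for large n you
    would row-reduce the z's over GF(2) instead, as the text notes.
    """
    def dot(a, c):
        # inner product mod 2 of two bitstrings held as integers
        return bin(a & c).count('1') % 2

    rows = [int(z, 2) for z in measurements]
    for cand in range(1, 2 ** n):      # skip the trivial b = 00...0
        if all(dot(cand, r) == 0 for r in rows):
            return format(cand, '0{}b'.format(n))
    return '0' * n                     # only the trivial solution remains

# Example: the z strings observed for b = '110' in the simulation above
print(recover_b(['000', '001', '111', '110'], 3))  # prints 110
```

If too few distinct $z$ strings have been collected, several non-trivial candidates remain consistent and this sketch simply returns the smallest one, so in practice you keep sampling until the system has rank $n-1$.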
```python
b = '11'
n = len(b)
simon_circuit_2 = QuantumCircuit(n*2, n)

# Apply Hadamard gates before querying the oracle
simon_circuit_2.h(range(n))

# Query oracle
simon_circuit_2 += simon_oracle(b)

# Apply Hadamard gates to the input register
simon_circuit_2.h(range(n))

# Measure qubits
simon_circuit_2.measure(range(n), range(n))
simon_circuit_2.draw()
```

```python
# Load our saved IBMQ accounts and get the least busy backend device with at least 2n qubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2*n and
                                       not x.configuration().simulator and
                                       x.status().operational==True))
print("least busy backend: ", backend)
```

```
least busy backend:  ibmq_burlington
```

```python
# Execute and monitor the job
from qiskit.tools.monitor import job_monitor
shots = 1024
job = execute(simon_circuit_2, backend=backend, shots=shots, optimization_level=3)
job_monitor(job, interval=2)
```

```
Job Status: job has successfully run
```

```python
# Get results and plot counts
device_counts = job.result().get_counts()
plot_histogram(device_counts)
```

```python
# Calculate the dot product of the results
def bdotz(b, z):
    accum = 0
    for i in range(len(b)):
        accum += int(b[i]) * int(z[i])
    return (accum % 2)

print('b = ' + b)
for z in device_counts:
    print( '{}.{} = {} (mod 2) ({:.1f}%)'.format(b, z, bdotz(b,z), device_counts[z]*100/shots))
```

```
b = 11
11.00 = 0 (mod 2) (49.2%)
11.11 = 0 (mod 2) (32.0%)
11.10 = 1 (mod 2) (9.1%)
11.01 = 1 (mod 2) (9.7%)
```

As we can see, the most significant results are those for which $b\cdot z = 0 \pmod 2$. The other results are erroneous, but have a lower probability of occurring. Assuming we are unlikely to measure the erroneous results, we can then use a classical computer to recover the value of $b$ by solving the linear system of equations. For this $n=2$ case, $b = 11$.

## 4. Oracle

The above example and implementation of Simon's algorithm are for specific values of $b$. To extend the problem to other secret bitstrings, we need to discuss the Simon query function, or oracle, in more detail.

The Simon algorithm deals with finding a hidden bitstring $b \in \{0,1\}^n$ from an oracle $f_b$ that satisfies $f_b(x) = f_b(y)$ if and only if $y = x$ or $y = x \oplus b$, for all $x \in \{0,1\}^n$. Here, $\oplus$ is the bitwise XOR operation. Thus, if $b = 0\ldots 0$, i.e., the all-zero bitstring, then $f_b$ is a 1-to-1 (i.e., permutation) function. Otherwise, if $b \neq 0\ldots 0$, then $f_b$ is a 2-to-1 function.

In the algorithm, the oracle receives $|x\rangle|0\rangle$ as input. With regard to a predetermined $b$, the oracle writes its output to the second register so that it transforms the input to $|x\rangle|f_b(x)\rangle$ such that $f(x) = f(x\oplus b)$ for all $x \in \{0,1\}^n$. Such a blackbox function can be realized by the following procedures.

1. Copy the content of the first register to the second register:
$$ |x\rangle|0\rangle \rightarrow |x\rangle|x\rangle $$
2. (Creating a 1-to-1 or 2-to-1 mapping) If $b$ is not all-zero, then there is a least index $j$ such that $b_j = 1$. If $x_j = 0$, then XOR the second register with $b$; otherwise, do not change the second register:
$$ |x\rangle|x\rangle \rightarrow |x\rangle|x \oplus b\rangle~\mbox{if}~x_j = 0~\mbox{for the least index}~j $$
3. (Creating a random permutation) Randomly permute and flip the qubits of the second register:
$$ |x\rangle|y\rangle \rightarrow |x\rangle|f_b(y)\rangle $$

## 6. References

1. Daniel R. Simon (1997), "On the Power of Quantum Computation", *SIAM Journal on Computing*, 26(5), 1474-1483, doi:10.1137/S0097539796298637
2. Guangya Cai and Daowen Qiu, "Optimal separation in exact query complexities for Simon's problem", *Journal of Computer and System Sciences*, 97: 83-93, 2018
https://qiskit.org/textbook/ch-algorithms/simon.html
This is the mail archive of the cygwin mailing list for the Cygwin project.

Hello,

I've been trying to track down a segmentation fault that a rather large application I'm working on has been experiencing. I seem to have narrowed it down to two threads that are opening, appending, and closing files quite often. I've created a very simple example program that suffers from the same condition. Two threads are created, each of which opens its own unique file, followed by a close (no writing, in this example). The threads are both joined and then the process is repeated. After some indeterminate amount of time - millions of iterations - the application segfaults with no real useful information that I can see. Running in gdb doesn't seem to help (after compiling with debug flags, of course), as the backtrace is either corrupted or can't be followed because I'm not using a debug version of Cygwin.

Note that this problem seems to be accelerated by having the target directory open in Explorer and/or having the files highlighted. I've come across situations where even ofstream.open() will throw an exception when doing the above. The exception has been seen in both C++ and Python, which makes me think it's something fundamental in Cygwin, and possibly related to this as well.

Is there something inherently wrong with having different threads access different files at once? I have reproduced this issue across multiple machines.

Compile: g++ FileTest.cpp -lpthread -oFileTest

FileTest.cpp:

#include <fstream>
#include <string>
#include <iostream>
#include <pthread.h>

using namespace std;

struct ThreadData
{
    string fileName;
};

void *FileThread(void *arg)
{
    try
    {
        ofstream outfile;
        ThreadData *td = (ThreadData*)arg;
        string fileName = td->fileName;
        try
        {
            outfile.open(fileName.c_str(), ios_base::app);
        }
        catch(...)
        {
            cerr << "Exception during open()" << endl;
            return NULL;
        }
        try
        {
            outfile.close();
        }
        catch(...)
        {
            cerr << "Exception during close()" << endl;
            return NULL;
        }
    }
    catch(...)
    {
        cerr << "Exception while creating objects" << endl;
        return NULL;
    }
    return NULL;
}

int main(void)
{
    unsigned long long count = 0;

    ThreadData td1;
    td1.fileName = "temp1.txt";
    ThreadData td2;
    td2.fileName = "temp2.txt";

    while(1)
    {
        count++;
        if(count%5000 == 0)
            cout << "Iteration " << count << endl;

        pthread_t thread1;
        pthread_t thread2;
        pthread_create(&thread1, NULL, FileThread, &td1);
        pthread_create(&thread2, NULL, FileThread, &td2);

        void *res = NULL;
        pthread_join(thread1, &res);
        pthread_join(thread2, &res);
    }

    // Not reached
    return 0;
}

Stackdump:

Exception: STATUS_ACCESS_VIOLATION at eip=610B5FF2
eax=0D89466C ebx=006A02F0 ecx=61149C88 edx=0D89466C esi=61149C88 edi=006C05C8
ebp=0022CAC8 esp=0022CAB0
program=c:\Documents and Settings\sfilipek\test\FileTest.exe, pid 4344, thread main
cs=001B ds=0023 es=0023 fs=003B gs=0000 ss=0023
Stack trace:
Frame     Function  Args
0022CAC8  610B5FF2  (006A02F0, 00000000, 0022CAE8, 006A02F0)
0022CAE8  610B8B0D  (006A0298, FFFFFFFF, 0022CC98, 006B0508)
0022CC08  610B1E4B  (0022CC20, 0022CC94, 0022CCE8, 610935A8)
0022CC18  610779F8  (006A0298, 0022CC94, 00401150, 0022CCA0)
0022CCE8  610935A8  (00000001, 6116B798, 006A0090, 0022CC70)
0022CD98  610060D8  (00000000, 0022CDD0, 61005450, 0022CDD0)
61005450  61004416  (0000009C, A02404C7, E8611021, FFFFFF48)
34 [main] FileTest 4344 _cygtls::handle_exceptions: Error while dumping state (probably corrupted stack)

Nothing was written to stderr in the end... just the segfault. Any advice, workaround, etc. would be extremely helpful.

Regards,
Stefan Filipek

uname -a:
CYGWIN_NT-5.1 [computer name] 1.5.25(0.156/4/2) 2008-06-12 19:34 i686 Cygwin

Attachment: cygcheck.out
https://sourceware.org/legacy-ml/cygwin/2009-02/msg00613.html
Computer Programming

A category for questions about computer programming and programming languages.

Why is my server down on Eclipse?

What is a compiled program?
A compiled program is source code that has been translated to either machine code (native code) or byte code. Native machine code requires no further translation and can be executed as-is, but byte code must be interpreted in order to produce the required machine code.

Can you fix VB.NET code?
Yes, but more details about the code would be needed.

Advantages and disadvantages of electronic data processing?
Advantages include speed, accuracy, and the ability to handle large volumes of data; disadvantages include set-up cost, dependence on equipment and power, and security risks.

Where was James Thompson Marshall? ...

How are interrupts handled in an RTOS?
Interrupt service routines in an RTOS are kept as short as possible: the ISR acknowledges the hardware and defers the real work to a task, typically by posting to a semaphore or message queue, so the scheduler can run the handler task at an appropriate priority.

What is the minimum possible number of attributes of a key in the relational model? What is the maximum?
The minimum is one attribute; the maximum is all the attributes of the relation, in which case the entire tuple acts as the key.

How can you get Java through Firefox?
To test whether Java is installed and enabled in Firefox, visit one of the Java test pages ("Verify Java Version"). When you visit these pages, you will normally need to activate Java. See also the article on how to allow Java on trusted sites.

What is the last PS2 game?
They might not have made the last game; many have been released since the question was asked, including Major League Baseball 2K12 (PS2 release March 6, 2012; a 2K13 may follow) and FIFA 12 (PS2 release September 27, 2011; a FIFA 13 may follow).

Where was the Belb created?
China Town

What are long-lived file and database servers?
A session

What is the best stack to use with Anadrol?

Why has the advent of graphical user interfaces influenced the shift from procedural programming to object-oriented, event-driven programming?
Because GUIs can be of any level of complexity. Elements such as buttons and links all need to reference different objects and respond to events: if you click here, go there; if you click down there, go here. It is like a very big, complex web, and you can imagine that purely procedural code managing all of it would become unwieldy.

What is the Factor programming language all about?
The Factor programming language combines powerful language features with a full-featured library. The implementation is fully compiled for performance, while still supporting interactive development. Factor applications are portable between all common platforms and can be deployed stand-alone.

What is the difference between an array and a linked list?
An array stores its elements contiguously in memory and supports constant-time random access; a linked list stores elements in separately allocated nodes connected by pointers, so access is sequential but insertion and deletion are cheap once the position is found. Neither structure is inherently sorted.

What VBScript keyword is used to formally declare a variable before using it?
Dim, followed by the variable name; for example: Dim Var1

What program will display the even numbers in data structures and algorithms?
Java

What are the different dialog boxes in JavaScript?
There are three types of dialog boxes in JavaScript:
1. alert(): the simplest way to direct output to a dialog box; it displays an alert message to the user.
2. prompt(): allows the user to enter input.
3. confirm(): asks the user to confirm or cancel an action.

Structured types in an object-oriented database?
A structured complex-object database differs from an unstructured complex-object database in that the object's structure is defined by repeated application of the type constructors provided by the OODBMS. Hence, the object structure is defined and known to the OODBMS.

What are 3 ways SQL can access a relational database?
1. Accessing the relational database using JDBC (for example, with Spring).
2. The LIBNAME statement for relational databases.
3. Using HBase.

Describe the area of structured types in an object-oriented database and give an example of their use?
Data is stored as objects and can be interpreted only using the methods specified by its class.

Why do programmers use assembly to write programs? ...

Discuss the critical elements of object-oriented client-server design?

Why do you need common fields in a database?
A common field is a field of data that is shared among all forms in a database. Without them, it would be difficult and/or time-consuming to create other forms.

What is lex in the "system software and assembly language programming" subject in M.C.A.?
Lex (short for Lexical Analyzer) is a specialized tool that takes a set of specifications for how to scan text and find specific items in it, and generates C language source code implementing that scanner. (M.C.A. here most likely refers to the Master of Computer Applications degree.)

Which language is used for Windows XP?
Mainly C and C++, with a bit of assembly language.

What is the definition of a synthetic method?
In Java, a synthetic method is one generated by the compiler rather than declared in the source code; examples are bridge methods created for generics and accessors generated for nested classes.

Concepts of nested structure in C++?
A nested structure is simply one structure inside another. The inner structure is local to the enclosing structure.

struct A
{
    struct B {};
};

Here, we can instantiate an instance of A as normal (A a;), but to instantiate B we must qualify the type because it is local to A.
Using C++, when you ask the user to press Y or N to continue, pressing any letter other than N also continues. How do you make it continue only when Y is pressed?
Write a conditional that tests the letter the user entered: if it is Y, continue; otherwise, exit.

What are the data types in a middle-level language?
It depends on the language.

How do you convert an integer to an object in C?
There are no objects in C, so you can't. However, in C++ you can convert an integer to an object if the object's class exposes a public conversion constructor that accepts an integer argument. For example:

class X
{
public:
    X (int); // conversion constructor
    // ...
};

How the conversion is...

In which version of SharePoint was the concept of 'My Sites' introduced?
My Sites first appeared in SharePoint Portal Server 2003.

ETT results of the J&K Board 2007-09?
ETT results for J&K are available at the JKBOSE website: jkbose.co.in/resultsallnoti.php

What are the significant IT milestones presented in the movie Pirates of Silicon Valley?
The film opens with the creation of the 1984 commercial for Apple Computer, which introduced the first Macintosh. Steve Jobs (Noah Wyle) is speaking with director Ridley Scott (J. G. Hertzler), trying to convey his idea that "We're creating a completely new consciousness." Scott, however, is...

What are the operators among C tokens?
All arithmetic and logical operators are C tokens, such as: +, -, ++, --, %, &&, &, >>, <<.

How can you write a C++ program for matrix addition only?

#include<iostream>
#include<array>

template<typename T, const size_t c>
class Matrix1D
{
public:
    using data_type = std::array<T, c>;
    using iterator = typename data_type::iterator;
    using const_iterator = typename data_type::const...

Which book is good for Java programming beginners?
Here is a website with all kinds of Java programming books:

Menu-driven source code in C++ for selection sort, bubble sort and insertion sort?

#include<iostream>
#include<time.h>
#include<iomanip>
#include<string>

void swap(int& x, int& y)
{
    x^=y^=x^=y;
}

void bubble_sort(int* A, int size)
{
    while(size)
    {
        int n=0;
        for(int i=1; i<size; ...

Where can I get a PHP tourism management script?
You are unlikely to find a ready-made PHP tourism management script online; you may need to hire a developer to build one.

A hacker is associated with?
Computer professional

Similarities and differences of multitasking and multithreading?
Multithreading is a process of executing multiple threads simultaneously; the concept of threads provides a mechanism to make maximum utilization of the CPU, and a thread is basically a lightweight process. Both multithreading and multiprocessing are used to achieve ...

How do you build software?
Design the program, write its source code in a programming language, then compile (or interpret) and test it, usually with the help of an IDE or framework.

To provide fault tolerance, Active Directory utilizes what replication model?
Multi-master replication: every writable domain controller holds a replica of the directory.

Why do we need conceptual design of a Management Information System?
The conceptual design sets the direction for the management information system (MIS), so it is vital that managers participate seriously and heavily at this stage. Conceptual design is sometimes called feasibility design, gross design, or high-level design.

C program using an array to find how many numbers are even and odd?

#include<stdio.h>
#include<conio.h>

void main()
{
    int a[10], i, evc = 0, odc = 0;
    clrscr();
    printf("enter 10 no.s in the array\n");
    for(i = 0; i <= 9; i++)
    {
        scanf("%d", &a[i]);
    }
    for(i = 0; i <= 9; i++)
    {
        if(a[i] % 2 == 0)
        {
            evc = evc + 1;
        }
        else
        {
            odc = odc + 1;
        }
    }
    printf(...

What is operational data?
Operational data is the current, day-to-day transactional data an organization generates and uses to run its business, as opposed to historical or analytical data.

Write a simple Java program using exceptions?
The most common reason for catching exceptions in Java is when reading information from a file. So let's look at an example where we read and display the contents of a file.

// Note that the main method may throw a FileNotFoundException.
// This means that if the file we try to read from...

Coding in C to find the difference between two dates in days?

#include<stdio.h>
#include<math.h>

void main()
{
    int day1, mon1, year1, day2, mon2, year2;
    int ref, dd1, dd2, i;
    clrscr();
    printf("Enter first day, month, year");
    scanf("%d%d%d", &day1, &mon1...

How is scanning of programming useful to students?
It helps students think about why certain chunks of code are there and how they affect the program itself. This helps in programming: once you see a block of code that does one job, you can usually replicate it within your own work to bypass similar problems.

C program to find the occurrence of an element in a number?
Let "n" be the number and "occ" the number of occurrences of the digit "el" in "n":

i = n;
occ = 0; // Initializing "occ" to zero.
while(i > 0){
    rem = i % 10; // This gives a digit in the number.
    if(rem == el)
        occ++; // If "rem" is same as "el"...

What is the difference between a knowledge base and a database?
A database is a collection of facts, figures and statistics related to an object; data can be processed to create useful information. A knowledge base is a special kind of database for knowledge management: it provides the means for the computerized collection, organization, and...

What are the parts of the Turbo Pascal program?
Every Pascal program must follow a basic structure. While this structure is very similar to Karel programming, there are several differences. Below is the basic structure that every Pascal program must follow:

PROGRAM ProgramName;
VAR
    VariableName : VariableType;
    VariableName : ...

Give the program for finding the smallest number in the 8085 microprocessor?
You can use the following steps for finding the smallest number in the 8085 microprocessor:

XRA A      ; clear the accumulator
MVI B, 30H ; load a number to B register
MVI C, 40H ; load a number to C register
MOV A, B   ; move the content of B to A
CMP C      ; ...

How do you document development to adhere to the standards set forth for the project?

Write a C program using a function with argument and with return value to find the sum of odd and even series?
Is this what you mean?

#include<stdio.h>

int main(){
    int number;
    printf("Enter any integer: ");
    scanf("%d", &number);
    if(number % 2 == 0)
        printf("%d is even number.", number);
    else
        printf("%d is odd number.", number);
    ...

What are the methods and interfaces of the String and Vector classes?
In Java, String implements the CharSequence, Comparable and Serializable interfaces and provides methods such as length(), charAt(), substring() and indexOf(); Vector implements the List, RandomAccess, Cloneable and Serializable interfaces and provides methods such as add(), get(), remove() and size().

Program in C to find whether a number is an Armstrong number?

#include <stdio.h>
#include <math.h>

void main()
{
    int number, sum = 0, rem = 0, cube = 0, temp;
    printf("enter a number");
    scanf("%d", &number);
    temp = number;
    while (number != 0)
    {
        rem = number % 10;
        cube = pow(rem, 3);
        sum = sum + cube;
        number = ...

What programming languages do computer games use?
Commonly Java and Flash, among others.

Put these programming languages in order from oldest to newest: BASIC, C++, COBOL, FORTRAN.
From oldest to newest: FORTRAN, COBOL, BASIC, and C++.

Time-space trade-off in data structures? ...

C++ program to replace every space in a string with a hyphen?

char string[] = "this is a test";
char *p;
for (p = string; *p != '\0'; p++)
    if (*p == ' ')
        *p = '-';

What are the object features of the Pascal programming language?
In the original, none; Pascal's thing was structured programming, which was the death knell for BASIC's spaghetti code. Object-oriented features were added later, in Turbo Pascal, much the same as C morphed into C++.

What is the definition of multiple inheritance?
There are many ways to define a term, one of which is: when a class inherits the properties and behaviours of two or more classes, it is said to be multiple inheritance.

What is dividing a program into functions and modules called?
Modularisation. It could also be called refactoring, where large and complex functions are split into several simpler functions, making code easier to read. Refactoring can also help to reduce code duplication.

What is abstraction, and why is it useful in software engineering?
Abstraction to...

What are the methods in the System class?
The java.lang.System class contains several useful class fields and methods and cannot be instantiated. Facilities provided by System include the standard input, standard output and error output streams, and access to externally defined properties and environment variables.

What pays more, computer science or information technology?
I would say computer science, because it covers things in more depth. Also there is a wider choice to pick from, so you're not limited.

Excitation-transfer theory by Zillmann?
Zillmann's excitation-transfer effect.

What is the object-oriented version of the C programming language that is used to develop software for PCs such as Fractal Design Painter, Lotus 1-2-3 and games?
C++.

HTML code to reduce the screen size of an AVI file?
Use the height (and width) attribute of the embedding element.

How do you print notes saved in dates on Outlook's calendar?
What version are you using?

What is the difference between single-tasking and multi-tasking?
Multi-tasking is when more than one task is processed on a single CPU (by switching between them); single-tasking is when only a single task runs at a time on the CPU.

What are the advantages and disadvantages of customized software versus off-the-shelf software?
Customized software may be good for one task and nothing else, while general off-the-shelf software will be sufficient for most tasks but not any better for a specific one.

Why were computer programming languages invented?
Because a CPU only processes instructions written in machine language (binary), programming languages, which use words instead of numbers, were invented. They allow programmers to write applications as programming-language statements, which special software then converts into machine code.

How do you begin to become an ASP.NET programmer?
Start with the basics and you will learn it easily. Then make sites for people so they can see that you can program. Apply for a job as a web developer advanced in ASP.NET and you will get it if you did your homework.

How does the internet cause artificial intelligence?
A connection between the Internet of Things (IoT) and artificial intelligence will become a threat to mankind, provided basic questions are answered prior to considering imposing constraints on AI capabilities.

How do you calculate the average scored by a student for a subject in a C++ program?
Put all the values in an array, iterate through the array with a for loop, sum all the values, then divide by the count of the values.

Different approaches of event handling in VB.NET?
It is a bit sad that the compiler doesn't generate an error for this. You have to make it look like this:

Private Sub Form1_Load(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles MyBase.Load
    obj1 = New Class1
    AddHandler Class2.TestEvent, AddressOf ...

What are the modules of Oracle?
Oracle sells many functional modules which use the Oracle RDBMS as a back-end, notably Oracle Financials, Oracle HRMS, Oracle SCM, Oracle Projects, Oracle CRM, and Oracle Procurement.

What is the need of the OOP paradigm in Java?
To overcome the problems faced with the use of structured programming, the object-oriented programming concept was created.
OOP uses a bottom-up approach and manages increasing complexity; it can be described as data that controls access to code.

Is C++ machine independent? Yes, it is. The only exception I can think of is Itanium, and even in that case the compiler can be configured so that you do not need to worry about it.

Write a C++ program to print Floyd's triangle.

    #include <iostream>

    int main() {
        std::cout << "Floyd's triangle\n" << std::endl;
        size_t value = 0;
        size_t row = 0;
        while (row++ < 10) {
            size_t col = 0;
            while (col++ < row)
                std::cout << ++value << '\t';
            std::cout << '\n';
        }
    }

What can you do with the Visual C++ App-Wizard? The App-Wizard is the Application Wizard. You use it to create a framework for your application by choosing the type of application and which features you require. The wizard generates the source files and headers for you according to your choices; you simply need to flesh it out with your specific functionality.

Can you show the railway reservation mini project using Oracle and VB 6.0? Yes, you can show the railway reservation mini project using Oracle and VB 6.0.

Can you show the content of the railway reservation mini project using Oracle and VB 6.0? Yes, you can.

Write a C program to find the sum of 3 matrices.

Write a program in Java to search for a given number using linear search.

    // This method will search through nums for target.
    // It will return the index of target in nums, or -1 if target is not in nums.
    public static int search(final int target, final int[] nums) {
        // Linear search means start at one end and search element-by-element
        for (int i = 0; i < nums.length; ++i) {
            if (nums[i] == target) {
                return i;
            }
        }
        return -1;
    }

What are the conventions for uppercase in Java? There is no strict rule, only a convention. Most Java programmers use camel case for methods and variables; all-uppercase names are usually reserved for constants.

The minimum number of bits required to store the hexadecimal number FF is? Count them: FF (hex) = 255 (decimal) = 11111111 (binary), so 8 bits.

How do you make simple projects in basic C++? There is no such thing as "basic" C++. You probably mean standard C++, which simply means that the implementation conforms to an ISO standard; the current standard is ISO/IEC 14882:2011, informally known as C++11. Simple projects can be created in any version of C++ by creating console applications.

Is there a menu-driven program for selection sort, bubble sort and insertion sort in C? Yes.

What is garbage collection in C++? Garbage collection releases resources that were previously used by the application but are no longer needed; the component that does this is called the garbage collector. Garbage collection helps prevent memory leaks, which are a major problem with older styles of programming.

What are the uses of a database in MS Access? Microsoft Access is designed to scale to support more data and users by linking to multiple Access databases or using a back-end database such as Microsoft SQL Server. With the latter design, the amount of data and number of users can scale to enterprise-level solutions.

Write a program in Java to take a letter of the alphabet and print its ASCII value. Remember that chars in Java are just a special version of ints: cast the char as an int and you get the Unicode value for it. Fortunately, letters and digits have the same values in both encoding systems.

    for (char letter = 'a'; letter <= 'z'; ++letter) {
        System.out.println(letter + " = " + (int) letter);
    }

What are the types of system calls in process management? fork, exec, wait and exit.

Which back end will you use with an IIS server? Typically ASP.NET, although PHP can also run on IIS.

Write a program to find the sum of all elements above and below the main diagonal of a square matrix.

    #include <iostream>

    int main() {
        int i, j, s, c[10][10];
        int above = 0, below = 0;
        std::cout << "Enter size of square matrix\n";
        std::cin >> s;
        std::cout << "Enter values into matrix of side " << s << "\n";
        for (i = 0; i < s; i++)          // input matrix
            for (j = 0; j < s; j++)
                std::cin >> c[i][j];
        for (i = 0; i < s; i++)
            for (j = 0; j < s; j++)
                if (j > i) above += c[i][j];        // above the main diagonal
                else if (j < i) below += c[i][j];   // below the main diagonal
        std::cout << "Sum above diagonal: " << above << "\n";
        std::cout << "Sum below diagonal: " << below << "\n";
    }
http://www.answers.com/Q/FAQ/2096
A HUD and other UI components are normally 2D graphics and text displayed on top of the game view. A number of libraries help you to display 2D graphics: for OpenGL there are GLUI and GLUT, and for DirectX there are the sprite and font interfaces. To do it yourself, the standard way is to create a square made of two triangles defined in screen space (already transformed) and then to texture the square with the 2D graphic you wish to display. Text is normally achieved by creating a texture with all the characters in the font on it, then displaying a quad on screen for each character and using the texture co-ordinates to reference the correct letter in the font texture. Look at the way it is done in the DirectX SDK samples for an example of this. Related notes: 2D Elements (Textures, Sprites, Text).

Use vectors. Normalised vectors are great tools: they have a length of 1, so multiplying one by a scalar gives a vector of exactly that length without changing its direction. If you want to move an object toward a particular place, you calculate the vector between the two positions, normalise it, scale it by the amount to move, and add it to the current position. E.g. if you want to move from position A to position B, and both are described with vectors:

    D3DXVECTOR3 direction = B - A;
    D3DXVec3Normalize(&direction, &direction);
    // to move the object from A you would then do something like:
    D3DXVECTOR3 newPosition = A + (direction * movementAmount);

Since direction is normalised and has a length of 1, you multiply it by the amount you want the object to move during this update, in this case movementAmount.

If you want the camera to follow your character around the world, you need to take into account the direction and position of the character to be followed. You will probably want the camera to be a set distance away from the object and facing the same direction as the object. So the first thing to do is obtain the position and direction of the object.
Once you have these, you can position the camera a set distance from the object by multiplying the inverse of the object's direction vector by the distance you want the camera to be away. In code this may look something like:

    // objPos - the position of the object in the world
    // objDir - the normalised direction vector of the object in the world
    // cameraPos and cameraDir - the position and direction of the camera that we want to find

    // Position the camera 5 world units away from the object:
    D3DXVECTOR3 inverseObjDir = -objDir;
    cameraPos = objPos + (5.0f * inverseObjDir);
    // Camera direction is just set to the same as the object's:
    cameraDir = objDir;

The above will position the camera at the same height as the object; you may want the camera to always be above the object, looking down. In this case you need to raise the camera position up and then calculate a new 'downward facing' camera direction. To do this you take the vector between the camera position and the object position, which gives you the correct direction vector:

    // We want the camera to be 4 world units above the object it is looking at
    cameraPos.y += 4.0f;
    // We have to change the camera direction to look down toward the object
    D3DXVECTOR3 newDir = objPos - cameraPos;
    D3DXVec3Normalize(&newDir, &newDir);
    // now newDir is the correct camera direction

Related notes: Camera.

Often you need to find out if a point is within a polygon, e.g. when doing collision you may have detected that a ray collides with the plane of a triangle and now you need to determine if the collision point is within the triangle. This is obviously in 2D. The method I have used is from Gems IV. Note: there are a few other methods out there, so do a Google search if interested - you will find plenty. The description above the code is not mine:

    /*
    The definitive reference is "Point in Polygon Strategies" by Eric Haines
    [Gems IV] pp. 24-46. The idea is to cast a ray from the test point and
    count how many polygon edges it crosses: if the point is outside, the
    number of crossings will be even; if inside, odd. The code below is from
    Wm. Randolph Franklin <wrf@ecse.rpi.edu> with some minor modifications
    for speed. It returns 1 for strictly interior points, 0 for strictly
    exterior, and 0 or 1 for points on the boundary. The boundary behaviour
    is complex but determined; in particular, for a partition of a region
    into polygons, each point is "in" exactly one polygon. The code may be
    further accelerated, at some loss in clarity, by avoiding the central
    computation when the inequality can be deduced, and by replacing the
    division by a multiplication for those processors with slow divides.

    numPoints = number of points
    poly      = array of vectors representing each point
    x, z      = point to test
    */
    bool pnpoly(int numPoints, Vector *poly, float x, float z)
    {
        int i, j, c = 0;
        for (i = 0, j = numPoints - 1; i < numPoints; j = i++)
        {
            if ((((poly[i].z <= z) && (z < poly[j].z)) ||
                 ((poly[j].z <= z) && (z < poly[i].z))) &&
                (x < (poly[j].x - poly[i].x) * (z - poly[i].z) /
                     (poly[j].z - poly[i].z) + poly[i].x))
                c = !c;
        }
        return (c == 1);
    }

Vector can be your own vector structure / class, or D3DXVECTOR3 if you are using DirectX.

There are some issues with different processors relating to timing. Normally timeGetTime works fine on most computers; however, there are issues on some machines, so often QueryPerformanceCounter is used instead. This is a high-resolution counter but it is not supported on older machines, so the best method is to check whether this counter exists: if it does, use it, otherwise use timeGetTime. I have a class I use to wrap all this, and I have made it available for reference via this download: timeUtils.zip. You are advised to create your own so you understand what is going on.

Take a look at this link on Gamasutra, which has a number of good articles: Game Physics. Related notes: collisions.

Firstly, be sure you know what you mean when talking about the distance between vectors. Vectors have a direction and a length; however, vectors are also often used to represent a position only (point vectors).
So finding the distance between two vectors normally refers to finding the distance between two points. You can subtract one vector from another and get a new vector, but to get the distance between the points you need to determine the length of that new vector. To find the distance between two points you use Pythagoras' theorem: 'the square on the hypotenuse is equal to the sum of the squares of the other two sides'. So if you have two points P0 and P1, the distance between them is:

    float dx = p1.x - p0.x;
    float dy = p1.y - p0.y;
    float distance = sqrt(dx*dx + dy*dy);

Tip: often you just want to compare distances, so rather than use the slow sqrt (square root) function you can safely compare the squared distances. It is wise to avoid sqrt where possible in game code.

This used to be a much easier question to answer. In the past I would have said unroll loops (to avoid the loop code overhead), create look-up tables for sin, cos, tan etc. (these functions could be slow), plus many other nice tricks. However, nowadays, with the complexity of modern processors, these methods are no longer always correct. PC processors like the Pentium are now better at handling loops, and maths functions are very fast compared with what they were. You also have the issue of the graphics card working in parallel to the CPU: you want to try to maximise that parallelism so neither processor is waiting for the other. On consoles the situation is the same, and in some cases worse - the PS2 has two vector units that need to be kept busy in order to maintain high speed. So nowadays you really need to look at your algorithm for increased speed rather than low-level coding tricks. Most importantly, you should profile the code to see where the slow points are. Often programmers are wrong about the place they think is slowing the code down, and only when you run a profiler can you determine exactly where slow-downs occur.
You don't even need to buy a profiler: you can use your own code to measure the time sections of code take. Of course, the best way to optimise slow code is to not call it at all! If you can remove code by redesigning an algorithm, this is the biggest gain. I wrote a program not long ago which was spending ages creating shadows on a terrain. I profiled it so I knew exactly where the problem was, and I worked at it for many days, reducing the time by 0.5% at a go, until after a week or two I had reduced it by about 5%. A few weeks later I suddenly realised I could change the algorithm and use a completely different one which would be quicker. It took a few hours to change, did the same job as before, and ran 500% faster! All that time I spent optimising for a few meagre percent would have been better spent redesigning the algorithm. So my general suggestions for optimising slow code are: profile first so you know where the time is really going; then see whether the slow code can be avoided altogether or its algorithm redesigned; and only as a last resort look at code-level tweaks. If you get to the final step and really have to look at the code, there are a few low-level things you can do, but basically the key is to make sure you know where the slow-down is and then see if you can change the whole algorithm rather than resort to code-level tricks.

The area of a 3D triangle defined by the vertices V0, V1, V2 is half the magnitude of the cross product of the two edge vectors:

    0.5 * | V0V1 x V0V2 |

Why would you want to know this? Well, it is useful if you want to calculate your Gouraud normals so that large connecting triangles have more effect on the normal than the smaller ones. The standard way to calculate a Gouraud normal is to add up all the plane normals of the connecting triangles and divide by how many there were. This works fine in most cases but does not differentiate between large and small triangles - each is equally weighted. You often get a better effect by weighting the influence of each triangle on the normal by its size.

Converting between radians and degrees? Easy. 2*PI radians = 360 degrees, or PI radians = 180 degrees.
So 1 degree is 0.017453 radians (and 1 radian is about 57.29578 degrees). The best thing is to set up macros to do the conversions, e.g.:

    #define PI (3.141592654f)
    #define DEGTORAD(degree) ((PI / 180.0f) * (degree))
    #define RADTODEG(radian) ((180.0f / PI) * (radian))
http://www.toymaker.info/Games/html/techniques.html
My VMware ESXi 4 server appears to be under a denial-of-service attack. I am getting massive packet loss to the server (60%+) and am barely able to load any services on the VMs running on the host. I have Cacti installed but cannot load it due to the attack. I can SSH in to the VMware host. Are there any commands I can run to either determine where the attack is coming from, or block all IP addresses except mine so that I can load Cacti again to troubleshoot? I tried esxcli network firewall get but received: Unknown Object firewall in namespace network. All the VMs with network access are directly connected to the internet; that is, there is a virtual switch between the internet-facing VMs and the router.

EDIT: MDMarra had a great idea: disable the vswitch that the VMs are on. But I can't get the vSphere console to respond long enough to do this. Can this be done through SSH?

One answer: sniff the wire and filter traffic to just that host, using tcpdump or Wireshark.

Another answer: I would say first and foremost call your datacenter and see if they can block the offending IP with their equipment. Hopefully their hardware has the bandwidth to handle something like that, which will then at least allow yours to start functioning like normal.
This will allow me to find the source IPs of the attack, which can be blocked by my ISP before traffic from them enters the core network. tcpdump asked 2 years ago viewed 739 times active
http://serverfault.com/questions/411938/esxi-server-under-dos-attack-can-i-use-ssh-to-determine-where-from?answertab=oldest
IRC log of tagmem on 2008-09-24 Timestamps are in UTC. 01:45:51 [noah] noah has joined #tagmem 13:58:22 [RRSAgent] RRSAgent has joined #tagmem 13:58:22 [RRSAgent] logging to 13:58:23 [jar] jar has joined #tagmem 13:58:45 [ht] Meeting: TAG f2f, Wednesday morning 2008-09-24 13:58:51 [Stuart] Stuart has joined #tagmem 13:59:12 [timbl] timbl has joined #tagmem 13:59:51 [Norm] Norm has joined #tagmem 14:00:12 [ht] Agenda: 14:00:21 [ht] Chair: Stuart Williams 14:00:28 [ht] Scribe: Henry S. Thompson 14:00:33 [ht] ScribeNick: ht 14:01:23 [DanC_lap] DanC_lap has joined #tagmem 14:01:32 [ht] Topic: 3.5 Self Describing Web 14:01:40 [ht] 14:02:07 [ht] SW: Noah has produced a draft: 14:02:24 [DaveO] DaveO has joined #tagmem 14:02:29 [ht] NM: There are some specific things the group might want to concentrate on 14:03:08 [DanC_lap] hmm... passive voice... "Web resource representations should be published using widely deployed standards and formats. " 14:03:22 [ht] NM: We reviewed this in Bristol, we were close to agreement to publish, NW and SW appointed as reviewers after I produced another draft 14:03:27 [timbl] ""Error loading stylesheet: An XSLT stylesheet does not have an XML mimetype: """ 14:03:31 [noah] 14:04:30 [ht] NM: There's an ednote there -- we have a choice here wrt RDF-XML, N3 or both for this example 14:04:52 [ht] SW: SWBP uses N3 for their tutorials 14:05:11 [Norm] noah, I suggest that you define a prefix for .../Employees# so that the example begins e:BobSmith rather than the full URI in angle brackets 14:05:35 [ht] AM: Why not both? 14:05:40 [ht] NM: longer 14:05:47 [ht] DC: I think not 14:06:32 [DanC_lap] hmm... hard to see this as a good-practice: "Representations provided directly in RDF, or those for which automated means can be used to discover corresponding RDF, contribute to the self-describing Semantic Web. " 14:07:11 [ht] JR: N3 is easier to read 14:07:23 [ht] ... but is their a media-type registered for N3? 
14:07:33 [ht] TBL: It's in progress 14:07:52 [ht] SW: Not a great example, then 14:08:00 [ht] HT: That's a definitive 'no' for me 14:08:09 [DanC_lap] hmm... s/information/data/? "RDFa should be used to make information conveyed in HTML self-describing. " 14:08:23 [ht] DC: So that would be an unhelpful distraction at this time 14:08:35 [ht] JR: Reluctantly, agreed 14:08:59 [DanC_lap] missing quote: " > 14:09:24 [ht] RESOLUTION: Request the editor to use the XML version only at the beginning of chapter 5 14:09:56 [ht] NM: Next issue arose first at the Bristol f2f, to do with RDFa 14:10:01 [ht] ... in section 5.1 14:10:02 [Norm] -> 14:10:51 [noah] From Norm's paraphrase of the Bristol minutes: 14:10:51 [ht] NM: What is the RDFa 'follow-your-nose' story? I tried to implement the recommendation we arrived at in Bristol, but ran into trouble 14:10:56 [noah] | - The paragraph starting "Even though this document is of media type 14:10:56 [noah] | application/xhtml+xml " needs to be replaced with following your nose 14:10:56 [noah] | through: application/xhtml+xml -> RFC 3236 -> HTML M12N -> 14:10:56 [noah] | -> RDFa specification 14:11:47 [ht] [That quote my understanding of the Bristol minutes, ratified by the scribe at the time, NW] 14:12:03 [ht] s/[/NM: / 14:12:08 [ht] s/NW]/NW/ 14:13:44 [ht] NM: But the problem is that that chain is not well-supported by the documents involved 14:14:17 [ht] 14:14:56 [ht] ... and I haven't gotten a story from those exchanges which gave a full answer 14:15:46 [ht] HT: Why can't you go straight to the namespace document, from application/____+xml 14:16:39 [ht] NM: You may have uncovered another problem, because I don't see how 3023 licenses looking at namespace docs 14:16:54 [ht] HT: Then you have a problem higher up in this document, don't you, in section 4.2.3? 14:17:28 [ht] NM: Let's see -- the beginning of that is true, but not justified by 3023 14:18:51 [ht] ... 
Ah, actually, it does have a reference to namespace documents 14:19:25 [ht] ... So maybe I could/should back off from the XHTML modularization route, but I could go via 3023 14:20:05 [timbl] q+ to ask why has no link under text/html 14:20:08 [ht] JR: Does 3236 point to 3023? 14:20:13 [timbl] q? 14:20:14 [ht] NM: Checking -- yes. 14:20:41 [DaveO] q+ to say how about adding some of the proof statements, ala versioning.. 14:20:42 [ht] TBL: If we think it's necessary, we can always request changes that we need 14:21:18 [DanC_lap] ack danc 14:21:18 [Zakim] DanC_lap, you wanted to respond to concerns about various bits of prose with some notes about test cases 14:21:36 [timbl] 14:23:00 [ht] TBL: I thought there was an issue here, but I guess it's OK 14:23:44 [ht] DC: IANA are going to supply APIs to access this information: text, HTML and XML 14:23:51 [DanC_lap] (resolution? I think the editor is collecting advice. hmm.) 14:24:23 [ht] NM: So we're I'm going depends on the XHTML namespace document be updated, to document the use of RDFa within XHTML 14:24:38 [ht] TBL: I thought I suggested the _schema_ should be updated 14:25:37 [ht] DC: Namespace document not updated yet, either way 14:26:05 [DanC_lap] (hunting for a pointer to these plans...) 14:26:11 [DanC_lap] q+ 14:26:20 [timbl] ack timbl 14:26:20 [Zakim] timbl, you wanted to ask why has no link under text/html 14:26:26 [ht] NM: I plan to update the RDFa section to refer to a planned update to the namespace doc for XHTML saying that RDFa in XHTML is to be interpreted 14:26:46 [ht] ... Does that work? 14:27:13 [ht] TBL: In theory, yes. In practice, I would like to see the change to the NS doc. right away 14:27:22 [ht] DC: Should have been a CR exit criterion, but too late 14:27:50 [ht] NM: I would like to get this out now, not have it hostage to that change -- the impact on _this document_ is not great enough to justify delay 14:27:55 [DanC_lap] I 2nd the (implicit?) 
proposal to approve this finding contingent on RDFa-related edits to the satisfaction of Noah and 2 other TAG members. 14:28:13 [ht] TBL: It's a bug that this hasn't been addressed 14:28:39 [ht] DC: An important point of this finding is to impact the RDFa spec. 14:28:57 [ht] ... It's part of the TAG's job to make that happen 14:29:10 [ht] NM: They've said they will do it, I can explain to them why it matters 14:29:34 [ht] NM: Can we agree to the above proposal, wrt this finding? 14:30:10 [ht] TBL: We _also_ need an action to make sure the XHTML namespace change gets done 14:30:28 [DanC_lap] action-130? 14:30:28 [trackbot] ACTION-130 -- Tim Berners-Lee to consult with Dan and Ralph about the gap between the XHTML namespace and the GRDDL transformation for RDFa -- due 2008-04-10 -- CLOSED 14:30:28 [trackbot] 14:30:36 [DanC_lap] perhaps that's no longer done to timbl's satisfaction 14:31:19 [DanC_lap] action-130? 14:31:19 [trackbot] ACTION-130 -- Tim Berners-Lee to consult with Dan and Ralph about the gap between the XHTML namespace and the GRDDL transformation for RDFa -- due 2008-04-10 -- OPEN 14:31:19 [trackbot] 14:31:32 [timbl] Well, I consulted ... but I didn't get an action token back 14:31:35 [timbl] from Ralph 14:31:43 [DanC_lap] I thought you did, but I can't confirm 14:34:05 [ht] Proposed resolution: To instruct the editor to move to a path via 3023 and a planned update to the namespace doc for XHTML saying that RDFa in XHTML is to be interpreted to fix the RDFa section 14:34:24 [ht] TVR: That leaves actions dangling on other documents 14:34:57 [ht] NM: There is another action now, on TBL, to make the necessary changes 14:35:21 [ht] DC: I like TVR's story 14:35:24 [ht] NM: Which is? 14:35:37 [ht] DC: A pointer to the plan 14:35:48 [ht] NM: Happy to include it 14:36:16 [DanC_lap] +1 " to move to a path via 3023 ..." 
14:37:53 [ht] RESOLUTION: To instruct the editor to fix the RDFa section by moving to a path via 3023 and a planned update to the namespace doc for XHTML saying that RDFa in XHTML is to be interpreted, including a pointer to the plan 14:38:41 [ht] NM: One more issue -- the Good Practice note about using RDFa 14:39:13 [ht] ... Is this too specific too early? Should I kill it? 14:39:40 [ht] HT: I would prefer to kill it, as I would use GRDDL if I wanted to move my H 14:39:59 [ht] s/my H/my HTML towards RDFa/ 14:40:09 [ht] TBL: It's ambiguous as written 14:40:50 [ht] SW: RDFa allows you to put RDF in your document, but it's not necessarily descriptive of that document. 14:41:00 [DanC_lap] DanC: RDFa is only specified for XHTML, so "in HTML" has a lot of subtlety that I'd rather we didn't into 14:41:12 [ht] NM: This doesn't disagree, it just says you should do this one 14:42:20 [ht] JR: I think you're using 'self-describing' in two different ways: one at e.g. the level of mime-types (which isn't so much descriptive as proscriptive), and the other the usage here 14:42:42 [ht] ... The first is about how to _interpret_ this document at all, the other is quite different 14:43:16 [ht] TBL: Something with metadata at the top is 'self-describing' -- the thing here is more like 'grounded in the web' 14:43:54 [ht] ... I think JR is on to something important -- I'm not happy with e.g. the 3rd sentence in the abstract 14:44:16 [ht] ... We would be better off using 'grounded in the web' 14:44:29 [ht] NM: This is a bit late in the process to make such a sweeping change 14:45:38 [ht] TVR: Connecting back to yesterday, I'm uneasy about the reliance on a mixture of English and fully mechanically exploitable information 14:46:27 [timbl] q+ 14:46:38 [ht] JR: I think I know how to fix this 14:47:11 [ht] ... 
Leave most of the wording in the document alone, by making clear what you mean by 'self-describing' 14:47:29 [ht] NM: So can we put this to one side while we work on the rest of the document 14:50:05 [timbl] q? 14:50:24 [ht] q+ to make some points 14:50:44 [ht] ack DaveO 14:50:44 [Zakim] DaveO, you wanted to say how about adding some of the proof statements, ala versioning.. 14:51:17 [ht] DO: I share your pain wrt comments late in the day, but it's something we all have to live with that 14:51:57 [ht] ... I think the requested changes are worth it 14:53:06 [ht] DO: Maybe you should at least once go through the specs the way we did it here today to establish in detail how the prose in the relevant specs connects everything together 14:53:13 [ht] NM: In which examples? 14:53:24 [ht] DO: Microformats maybe, XML itself 14:53:32 [ht] NM: I thought I had the references 14:53:44 [ht] DO: My suggestion is to look at actually pulling in the quotes 14:53:52 [ht] NM: Useful idea, I'll have a look at doing that 14:53:56 [ht] ack DanC 14:55:00 [ht] DC: I'm concerned we're ignoring HTML5 at our peril -- the extensibility via URIs story just doesn't play with them 14:55:17 [ht] ... and they don't care about RDF either . . . 14:55:21 [ht] NM: Change required? 14:55:24 [ht] DC: Not sure 14:55:57 [ht] DC: Several of the Good Practice notes are passive voice -- in AWWW we tried hard to identify _who_ is supposed to do things 14:56:07 [ht] ... I can live with these as they are 14:56:17 [ht] NM: Should we fix this now? 14:56:20 [timbl] 14:56:25 [ht] DC: SW, NW? 14:56:38 [ht] SW: I didn't comment on that, no 14:56:56 [Norm] I can live with the passive voice. I have a hard time weeding it out of my own writing 14:56:57 [ht] JR: I did have a problem with the tone of the little boxes 14:57:48 [ht] DC: The "Representations provided directly in RDF" doesn't describe a Good Practice at all. . . 14:57:56 [timbl] q? 
14:59:26 [ht] HT: We could reframe it as "In order to contribute to the self-describing Semantic Web, provide representations directly in RDF, or those for which autoamted means can be used to discover corresponding RDF." 14:59:38 [ht] SW: Where did the middle clause come from? 14:59:47 [Norm] With regrets, I have to step away for 60 minutes or so. Back ASAP 15:00:03 [DanC_lap] q+ to note a comment from hsivonen that GRDDL goes against the principle of least power; I see LeastPower in the references section but no [LeastPower] in the body. 15:00:12 [ht] NM: The preceding discussion of the tradeoffs between direct and e.g. GRDDL-mediated provision 15:00:49 [ht] TBL: Is the distinction between what I think of as 'web-grounded' core of things, and the more extended notion of metadata in general? 15:05:03 [DaveO] q+ 15:05:04 [Stuart] q? 15:05:59 [ht] s/in general/in general there in the document/ 15:06:46 [ht] NM: 'web-grounded' is too geeky. . . 15:07:03 [DanC_lap] ah. I think I see tim's point now... self-describing is a property of the Web as a whole, not as a document. 15:07:08 [ht] TBL: JR is right that there is an important distinction here that you are blurring 15:07:25 [ht] NM: I think I got this all in the document 15:08:06 [ht] TBL: The crucial point is the the (short) list of things I have to know in advance 15:08:12 [ht] q- tbl 15:08:17 [ht] q- timbl 15:08:40 [ht] TBL: What is the basic core, for someone who understands English? 15:09:12 [ht] NM: see the end of section 2 15:10:16 [ht] TBL: That looks like a different argument -- point is not using widely deployed, but what the bare minimum is that a newcomer needs beside the pure idea of follow-your-nose 15:11:00 [ht] ... It's not like there are any engineers who are confused about this, but to bullet-proof ourselves against someone saying "Oh no, I didn't mean this as a jpeg" 15:11:06 [Stuart] q? 15:11:11 [ht] NM: So what change to the document do we need? 
15:11:35 [ht] TBL: Particularly, not applying 'self-describing' to documents, just to the Web as a whole 15:11:50 [ht] NM: Are you optimistic that what JR proposes will fix the problem you have? 15:12:06 [ht] TBL: The list of core documents isn't in that, is it 15:12:39 [ht] ... I want something like "If you have a message, the core specs [what are they], and a knowledge of English, you have all you need to interpret the message 15:12:45 [ht] s/message/message"/ 15:13:57 [ht] DC: 's-d' applies well to the Web, but doesn't apply well to documents, is what TBL has been saying 15:14:00 [ht] NM: And you? 15:14:34 [ht] DC: I have been used to that usage, but I realise calling a message self-describing is wrong, because it doesn't describe itself 15:14:41 [ht] NM: I explained it's a term of art 15:14:48 [ht] DC: Yes, but it contradicts itself 15:15:10 [ht] JR: I suggested a compromise, in order to get this published, but a bigger fix could be done 15:15:21 [noah] q? 15:15:25 [ht] NM: I'd rather get it right if it makes a big difference 15:15:33 [Stuart] ack ht 15:15:33 [Zakim] ht, you wanted to make some points 15:15:44 [DanC_lap] (surveying usage of "self-describing document" in a larger community) 15:16:22 [timbl] grounded in the web, in the sense that the apropriate interpretation of a document follows by following a 15:16:31 [raman] raman has joined #tagmem 15:17:09 [jar] noah: Self-describingness is always a matter of degree 15:17:23 [jar] ht: Self-describing may be a bad choice for a term of art 15:17:32 [DanC_lap] (ew... scary... top hit for "self-describing document", after w3.org, is some patent stuff.) 15:18:15 [jar] ht: Documents are self-contained if they don't require more than the core to ... 15:18:34 [Stuart] q? 15:18:43 [DanC_lap] (what fix did noah just refer to?) 
15:18:55 [timbl] "HTTP and other Web technologies can be used to deploy resources that are grounded in the web, in the sense that the apropriate interpretation of a document follows by following a series of references in the web. Starting with a URI, there is a standard algorithm that a user agent can apply to retrieve and interpret a representation of such resources." <-- sugegsted text for bstract 15:19:02 [jar] ht: An HTML document with RDFa is self-describing in the ordinary sense of the word 15:19:07 [noah] q? 15:19:47 [ht] TBL: You could try to use 'self-describing' in your way to a web-arch-sophisticate 15:20:24 [ht] NM: When you have an image/jpeg message and some bits, HST said that wasn't self-describing in the ordinary sense 15:20:30 [DanC_lap] (I find this abuse of "self-describing document" is reasonably wide-spread. I think it's OK in this document.) 15:20:38 [DanC_lap] ack danc 15:20:40 [Zakim] DanC_lap, you wanted to note a comment from hsivonen that GRDDL goes against the principle of least power; I see LeastPower in the references section but no [LeastPower] in the 15:20:43 [Zakim] ... body. 15:20:57 [ht] NM: But to me that feels like a matter of degree 15:21:13 [ht] TBL: But an image can't be self-describing 15:21:37 [ht] DC: Why is Least Power in the references section, but not in the body? 15:21:49 [timbl] 15:21:52 [ht] NM: I think it was, but just hasn't been pruned 15:22:17 [timbl] That is a document with a fix to the abstract and the phrase 'web-grounded" used 15:22:30 [ht] DC: Henri Sivonen points out that GRDDL means you have to run a programme to interpret your document 15:23:07 [ht] ... So you should either delete the reference, or add some explanation 15:23:12 [jar] An image can live inside a wrapper that holds metadata for the image. Customarily we don't make a distinction between the image and the container (.png file, etc) that carries it. ... 
so no the image isn't self-describing, the container isn't self-describing, but the container carries a description of the image ...
15:23:37 [ht] TBL: I have done an alternative draft with 'web-grounded':
15:24:45 [DanC_lap] (self-similar means it's elements-all-the-way-down...)
15:24:59 [ht] DO: DC once said to me "XML isn't self-describing, it's self-similar" -- the core problem is that you need a definition of self-describing that works across the whole document
15:25:12 [timbl] q+
15:25:16 [ht] ... as in the way we drilled on the terms in the versioning findings
15:25:23 [ht] q- DaveO
15:25:43 [ht] DO: And that may mean some counterexamples
15:26:08 [ht] NM: My problem is I don't yet hear clearly what the TAG is trying to get me to say
15:26:53 [ht] DO: I'm happy for you to use 'self-describing' for documents, and for the Web, but with more clarity about what this means
15:27:07 [ht] ... This is our role as educating people
15:28:52 [DanC_lap] (hmm... I missed that about "self-describing web")
15:28:59 [ht] TBL: My preferred formulation is "resources that are _grounded in the web_, in the sense that the appropriate interpretation of a document follows by following a series of references in the web."
15:29:29 [ht] NM: But that means that my own private format is grounded in the web.
15:30:17 [ht] TBL: It is, but the "use widely deployed standards" is a separate point, already made in AWWW
15:31:09 [ht] NM: So I'm willing to explore a way to make this work, but it's going to take some work
15:31:21 [ht] DC: I think we could finish this this week
15:31:29 [ht] NM: I want more time
15:32:07 [ht] DO: I don't like the proposed change, I think 'self-describing' is important not to lose. And I think the 'widely-deployed' does belong here
15:32:57 [DanC_lap] (I think a pure definition of "self-describing" can be combined with a good-practice note about popular formats. I think that's pretty close to where the finding is now.)
15:33:18 [ht] TBL: I disagree -- I'm interested in the pure question of whether it all connects up, and that's why I didn't want RDFa to go ahead until they had fixed their part in this
15:33:44 [ht] HST: Yes, I think the point that anybody who fits into the chain has to acknowledge that and take responsibility for it
15:35:44 [ht] NM: The follow-your-nose story really doesn't work unless what you hit as you go are widely deployed -- that's the main message, for me
15:36:00 [ht] SW: Break time
15:36:34 [ht] SW: I encourage TBL and NM to keep talking, and we can come back to this if we need a few minutes for a resolution
15:37:11 [ht] NM: I'm concerned that we get DO and e.g. HST on the same page
16:05:47 [ht] [Break]
16:06:20 [ht] Topic: 3.6 passwordsInTheClear-52 (ISSUE-52)
16:09:02 [ht] s/3.6 passwordsInTheClear-52 (ISSUE-52)/3.4.4 HTTP And HTML/
16:09:15 [DanC_lap] (hmm... something from the self-describing-web discussion in the break should be recorded as an action, probably. maybe later today...)
16:09:36 [ht] TVR: Based on yesterday's discussion, this issue follows on from our 3rd discussion yesterday
16:10:07 [ht] ... There are ways in which what's happening in HTML5 interacts with other standards work
16:10:42 [DanC_lap] (for reference... HTTPbis WG )
16:10:44 [ht] ... but rather than digging in to the specific technical issues, we should look at how to address the overlap/conflict problem
16:11:45 [ht] TVR: I think the http-bis IETF group are doing good work, with a good broad and well-informed membership, although they are short on representatives from the browser vendors
16:12:33 [ht] ... I don't think we have much to offer beyond what that group, and the What-WG group, have in the way of technical expertise
16:13:27 [Stuart] q?
16:13:52 [ht] DC: News to me that no browser participation in the http-bis work
16:14:08 [ht] TVR: Not sure, although pretty sure that WHAT-WG are not in there
16:14:49 [ht] DC: I got educated by the half-serious suggestion for an HTTP5, that there is tag soup in the HTTP header which the browsers fix up
16:15:16 [ht] TVR: Thinking back to the error recovery topic, there are two aspects:
16:15:46 [ht] ... 1) There are always corner-cases where a spec. isn't completely clear, and different implementations go in different ways;
16:16:24 [ht] ... 2) The case where things are clearly wrong, but accepted anyway, and then the bad drives out the good
16:16:32 [DanC_lap] Zakim, this is tag
16:16:32 [Zakim] ok, DanC_lap; that matches TAG_f2f()10:00AM
16:16:38 [DanC_lap] Zakim, who's on the phone?
16:16:38 [Zakim] On the phone I see ??P0, Norm
16:16:43 [DanC_lap] Zakim, ??P0 is KC
16:16:43 [Zakim] +KC; got it
16:17:01 [ht] ... Once we turn to the HTTP spec, the situation is better, because when there is uncertainty, people just do what Apache does
16:17:39 [ht] TVR: The hard case is at the intersection between HTTP and HTML, namely content-type sniffing
16:19:19 [ht] HST: What should we do about Content Type Sniffing?
16:19:35 [ht] DC: We have reopened the issue
16:19:58 [ht] ... Do people know what the browser vendors say when you tell them "get with the program"?
16:20:15 [ht] NM: There's lots of deployed stuff
16:20:52 [ht] DC: They will lose market share -- people will look at text/plain rendering of what's obvious (to them) HTML and say "your product is broken, get me one that works"
16:21:35 [ht] TVR: There is a lot of broken stuff out there, and that has to be acknowledged, but the market share argument is spurious
16:21:56 [ht] [scribe didn't understand why]
16:22:46 [ht] TBL: Sniffing today is mostly on text/plain, which is taken as sort of a wildcard
16:23:39 [ht] ...
Roy Fielding suggested we would be better if servers just left off the Content Type header if they didn't know the type, rather than sending text/plain
16:23:48 [ht] ... On this front, HTML5 is not unreasonable
16:24:09 [ht] TVR: But browsers sniff on more than text/plain
16:25:31 [ht] ... Sam Ruby knows the details
16:26:03 [ht] HST: I thought there was sniffing of text/html, the whole standards-mode vs. whatever stuff
16:26:11 [ht] TBL: I didn't think so
16:26:51 [ht] TBL: There's a bit in the HTML5 spec about maybe waiting 500bytes before deciding what to do
16:28:21 [ht] DC: There is some negotiation for change: There's the whole "authorative: true" proposal, and Ian Hickson is in dispute with the browser vendors about how much sniffing will go into HTML5
16:29:02 [ht] ... One option is taking the state of the art wrt content type sniffing and freeze it -- not good, but better than the alternative?
16:29:10 [ht] s/alternative/alternatives/
16:29:46 [ht] TVR: The question for this meeting is: does the W3C want to play a role in getting out of this local minimum?
16:29:54 [ht] DC: If we want to, then what?
16:30:33 [ht] TVR: There's the conflict between servers trying to do the right thing and browsers trying to do the right thing
16:30:46 [ht] ... plus the lag in updating either one, and the long tail of legacy
16:31:12 [ht] ... What the TAG can do I don't know -- it's a function of what W3C wants to do.
16:34:00 [ht] HST: Should we ask Apache to ship a "don't know, don't tell" policy wrt Content Type out of the box?
16:34:08 [ht] DC: I think they already do
16:34:57 [ht] TVR: Yes, Content Type is optional, but everybody _thinks_ Content Type is required because scripts always start "print 'text/html'"
16:36:12 [ht] DC: The problem arises when a new technology emerges, and someone puts e.g.
a foo.jar file on their server, and can't edit the server configuration, and it gets served as 'text/plain'
16:36:55 [ht] TVR: I don't think there's anything we can do, besides maybe talking to Apache, given the current structure of things
16:37:23 [ht] NM: I'm not even sure that making the "don't know don't tell" move would help -- what would clients do?
16:37:44 [ht] ... It would be bad for us if things got worse instead of better
16:38:12 [ht] DO: We need to do some due diligence research
16:38:41 [ht] DC: I think Roy F. already convinced Apache to move on the default, so we'll just have to see what happens
16:39:14 [ht] ... The other place we could try to help is wrt Microsoft's decision that sniffing is a security hole, and want to fix it
16:39:25 [ht] SW: Due diligence?
16:39:46 [ht] DO: Finding out what browsers do with no Content Type
16:40:14 [ht] DC: There's also the fact that some browsers now allow you to turn off sniffing
16:40:21 [ht] SW: I think I've done that
16:41:46 [ht] TVR: Are we assuming that the Web's growth has stopped -- this is the justification for freezing the current state of error recovery
16:41:58 [ht] ...
which will in turn ensure that the Web stops growing
16:43:08 [DanC_lap] fielding on apache defaults (not quite clear on "don't know don't tell")
16:48:28 [ht] HST: Should we be trying to help avoid this kind of problem in the next generation of (non-browser-based) distributed application development platform
16:49:05 [ht] DC: But the HTML5 WG believe that HTML5 _is_ the non-proprietary next generation distributed application platform
16:49:23 [ht] TVR: Not application development, but UI
16:49:59 [ht] [scribing partial]
16:50:18 [ht] TBL: role of SVG
16:51:09 [ht] NM: Things such as Silverlight don't have SVG (or HTML+CSS) but they are trying to achieve the SVG+HTML+CSS vision more thoroughly, wrt mutual recursion
16:51:44 [ht] TVR: We should have viewed SVG as the rendering language for HTML
16:53:23 [ht] NM: XAML has a distinction between an abstract list, a default rendering (a stack of boxes), but the potential for hugely varied actual rendering (a succession of fly-in circles, for example)
16:53:36 [ht] s/but the/and the/
16:54:31 [ht] DC: HTML5 is intended to compete for developer mindshare against that stuff, yes
16:54:45 [ht] ... CSS+HTML as a Flash killer
16:55:50 [ht] NM: Video is the real qualitative change -- when I can paint movies on a shape just like painting a gradient, we're in a new place
16:56:04 [ht] ... SVG just isn't in that place
16:58:18 [ht] SW: Are we done on this topic?
16:59:09 [ht] TBL: A common thread here -- there's a lot of investment in the development of a distributed UI/applications platform based on HTML + CSS + SVG
16:59:14 [ht] DC: SVG?
16:59:21 [ht] NM: Well SVG like
16:59:53 [ht] HST: Well, SVG doesn't figure in the HTML5 WG's grand plan
16:59:57 [ht] DC: It doesn't?
17:00:27 [ht] TVR: Well, at least not as the styling language for HTML5
17:00:44 [ht] NM: How does this relate to the canvas tag?
17:01:09 [ht] s/doesn't?/doesn't? They seem to me to go back and forth on that./
17:01:52 [Stuart] q?
17:01:54 [ht] NM: If people asked say Webkit should we use canvas or SVG, what would they say?
17:02:00 [Stuart] q- timbl
17:02:01 [ht] TVR: canvas
17:11:19 [ht] SW: We reopened [the content type sniffing issue] -- what might we do?
17:11:37 [ht] DC: We could add an appendix to the finding which says "Yeah, but what happens in practice is this: ..."
17:14:17 [ht] HST: What would we be conveying as the conclusion to draw?
17:14:25 [ht] DC: That this was unfortunate
17:14:32 [ht] NM: I don't want to undercut it
17:16:57 [ht] SW: DC, do you have an open action on this front?
17:17:10 [ht] DC: I come back to what I said about Microsoft's concerns here
17:17:18 [ht] NM: Would this finding help them?
17:21:51 [Zakim] -Norm
17:21:52 [Zakim] -KC
17:21:52 [Zakim] TAG_f2f()10:00AM has ended
17:21:52 [Zakim] Attendees were Norm, KC
17:24:41 [ht] SW: Adjourned for lunch
19:26:08 [DanC_lap] DanC_lap has joined #tagmem
19:31:42 [timbl] timbl has joined #tagmem
19:31:52 [Ashok] Ashok has joined #tagmem
19:33:29 [noah] noah has joined #tagmem
19:33:43 [noah] scribenick: noah
19:34:01 [noah] meeting: W3C TAG F2F Afternoon of 24 Sept 2008
19:34:13 [noah] scribe: Noah Mendelsohn
19:34:25 [noah] agenda:
19:34:37 [noah] topic: HTML5: Embedding And Embedability
19:34:47 [noah] See: agenda item at
19:35:42 [noah] TVR: You want to embed other languages (MathML) in HTML, but also to embed HTML in other languages (ATOM), and you want recursion. Question: do you only get to embed particular languages that the browser has planned for, or do you also get to experiment with other things.
19:37:05 [noah] TVR: In the 1996-1997 timeframe the assumption for XHTML etc. was that the more general extensibility would be supported, typically with browser plugins. Now the direction is toward centralization through one working group and a few browser vendors. I think that's a bad idea and leads to bad language design. It gets harder over time to add new things.
Question: are we going to grow linearly from here, or continue to grow exponentially.
19:37:23 [noah] TVR: Real distributed extensibility need not necessarily be in terms of namespaces.
19:37:53 [noah] TVR: Early versions of Opera provided some support for extensibility by loading CSS that styled new, nonstandard elements.
19:38:22 [noah] TVR: So, that's both the context and my personal opinion.
19:39:30 [noah] TVR: Things to be aware of socially: there is a strong community among some of the browser vendors who believe that the era of rapid growth in specs is over.
19:39:54 [noah] DC: Internet explorer is dominant, has some namespace support, and is continuing to refine the design.
19:40:11 [noah] DO: You can register handlers for namespaces.
19:40:22 [noah] HT: That's how XForms works in Firefox.
19:40:57 [noah] TVR: Browser extensions need hooks, and I don't see the browser vendors on a path to support that.
19:41:32 [noah] DC: Should canvas have been <apple:canvas>.
19:41:48 [noah] DO: The claim was it's better without namespace, because the transition to standard status is much easier.
19:42:21 [noah] TVR: I think it's flawed to let one or two or three vendors control.
19:47:08 [DanC_lap] I think SKW means this message of mine on distributed extensibility during the ARIA discussion
19:47:11 [noah] NM: Wondering if Dan has signaled an interesting middle ground, in which namespaces are there for experimentation <apple:canvas>, but HTML 6 (say) can announce that the apple prefix is no longer needed for canvas, and that <canvas> is now the preferred spelling of what was in earlier worlds <apple:canvas>. You get the ability for people to experiment, but have at least some path to moving those experiments to be part of the base language later.
19:47:59 [Stuart] q?
19:48:16 [noah] SW: Related to email ?
19:50:06 [noah] HT: There are a number of plausible approaches consistent with W3C preferred direction, e.g.
Sam Ruby's, the SVG proposal to HTML5 WG, and the media-type based namespace binding idea. This constellation of approaches will likely not be explored by the current HTML 5 WG, but I think should be taken seriously.
19:56:25 [noah] q+ noah
20:01:00 [DaveO] Here's the issue 41 raised in public-html
20:01:27 [DaveO] Henri's response:
20:19:17 [noah] SW: What's the story with SVG and MATHML? What's the preferred way from the SVG & MATHML wgs, and what's preferred by the HTML 5 folks?
20:21:03 [noah] TBL: I don't think the HTML 5 spec talks about SVG and MATHML, but it's been discussed in the group. I think at least two approaches have been mentioned: 1) pour all SVG tags into HTML or 2) use appearance of <SVG> to enable svg vocabularies in the children.
20:21:14 [noah] DC: The SVG stuff is commented out.
20:21:31 [noah] HT: The SVG group made a proposal which was basically to switch to XML mode when you hit an SVG tag.
20:22:17 [noah] DC: I had thought HTML 5 provided for drawing a circle when you saw a <circle>, but it doesn't. I think it just parses the tag.
20:22:31 [noah] NM: Is there any hook for pointing to a spec that does draw a circle.
20:22:34 [noah] DC: Not sure.
20:28:27 [DanC_lap] draft HTML WG agenda
20:30:56 [timbl] I note that the XHTML namespace document has been updated
20:32:18 [noah] JR: I'm not clear on the constituency for distributed extensibility.
20:32:29 [noah] TBL: Facebook ML?
20:34:29 [noah] TBL: Aria is an example. Facebook had to extend HTML to put FBML in.
20:35:59 [noah] JR: How do you appropriately position this for the W3C?
20:36:12 [noah] DC: The modern way to do this is to write a blog article and get 700 comments.
20:36:20 [noah] NM: We have a blog.
20:38:05 [DaveO] fbml is a namespaced language,
20:38:41 [DaveO] And XFBML is the language that gets embedded in html,
20:41:35 [noah] NM: I think, if we try to write something in this space, we need to decide whether we are being careful and trying to get to the definitive analysis that's helpful in the long term, or are we trying to start a discussion quickly, with the risk of not doing a balanced analysis? I think the choice of blog, vs. email vs. draft finding should follow from the decision on what we're trying to achieve. I think to do something "carefully", you almost have to take
20:41:47 [noah] Dan starts drafting some points in notepad using the projector.
20:45:31 [noah] TVR: I'd like to do something that's somewhat independent of particular technologies. E.g. talk about serving small communities.
21:13:09 [DaveO]
21:14:05 [raman] raman has joined #tagmem
21:14:45 [DaveO]
21:15:04 [noah] ***10 Minute Break***
21:23:34 [jar] jar has joined #tagmem
21:56:54 [DanC_lap] DanC_lap has joined #tagmem
21:59:14 [noah] After the debate there was more noodling at the whiteboard. So far no conclusive result to show for it.
22:02:23 [noah] topic: Uniform access to metadata
22:03:59 [noah] JR: The question I have now is the same I had earlier, i.e. how to proceed. There appears to be call from a number of quarters for a protocol that, given a URI, provides uniform access to metadata. The metadata may or may not come from the "owner" of the resource, and is typically in the form of a document. This comes up in many contexts, and inconsistent answers are being invented.
22:04:37 [noah] JR: The latest I've read is XRDS Simple, which I had not been aware of when I last looked at this subject. Document is from March of this year, and came up with the XRI work.
22:05:38 [DanC_lap] (is AM talking about ?)
22:05:53 [noah] AM: I'm curious regarding which approaches do you like, and why?
22:06:38 [noah] JR: I sent messages to www-tag a few days ago.
I think I sort of like the link header, and for those who are concerned about round trips a caching strategy might be possible, but I could change my mind.
22:06:46 [noah] TBL: A way forward would be to write a finding.
22:07:50 [DanC_lap] q+
22:08:42 [noah] DC: The TAG is on record as saying link is great.
22:08:53 [noah] JR: Going on longer about it might be worth doing.
22:09:53 [DanC_lap] (ah... "a primer on the use of Link: for uniform access to metadata". hmm.)
22:11:35 [noah] NM: I think a finding like this should start by setting out the perceived requirements and needs of various communities of interest. When a technical solution is proposed, we should indicate which of those requirements are or are not addressed.
22:12:08 [DanC_lap] trackbot, status
22:12:21 [noah] ACTION: Jonathan to prepare initial draft of finding on uniform access to metadata.
22:12:22 [trackbot] Created ACTION-178 - Prepare initial draft of finding on uniform access to metadata. [on Jonathan Rees - due 2008-10-01].
22:12:49 [noah] HT: I think it's worth looking at the ark work, as it has some attractive characteristics.
Call Graph Visualisation with AspectJ and Dot
27 September 2008
Posted in Coding, Visualisation

One of my favourite tools to render graphs is GraphViz Dot and in an earlier entry I described how to use it to visualise Spring contexts. Today I want to showcase a different application.

Call graphs show how methods call each other, which can be useful for a variety of reasons. The example I use here is the graph rooted in a unit test suite, and in this case the graph gives an understanding of how localised the unit tests are, how much they are real unit tests or how close they are to mini-integration tests. In an ideal case the test method should call the method under test and nothing else. However, even with mock objects that's not always practical. And if, like myself, you fall into the classicist camp of unit testers, as described by Martin Fowler in Mocks aren't Stubs, you might actually not be too fussed about a few objects being involved in a single test. In either case, looking at the call graph shows you exactly which methods are covered by which unit tests.

There are several ways to generate call graphs and I'm opting for dynamic analysis, which simply records the call graph while the code is being executed. A good theoretical reason is that dynamic analysis can handle polymorphism but a more practical reason is that it's actually really easy to do dynamic analysis; provided you use the right tools. The approach I describe in this article uses Eclipse AJDT to run the unit tests with a simple Java aspect that records the call graph and writes it out into a format that can be rendered more or less directly with Dot. Of course, this technique is not limited to creating graphs for unit tests; it only depends on weaving an AspectJ aspect into a Java application.

Let's start with the result. The following diagram shows the call graph for a few simple unit tests in the test suite for the CruiseControl dashboard.
The test methods are in the leftmost column and method invocations occur from left to right. It's clearly visible that the tests are quite localised.

The next example shows a section that's a bit more intertwined. Some tests exercise the same method, some tests call more than one method, and some methods are called indirectly through different paths. All in all pretty reasonable, though.

Now, I don't want to showcase only the good parts. There are also sections in the graph that show how some of the tests and dependencies are, well, a bit messy. In fairness, though, this is partly caused by some domain objects that are used across multiple tests.

How hard can it be to create these graphs? Not very, as it turns out.

Step 1 is to create an aspect that intercepts all interesting method calls, which normally means calls to methods that are part of the project. It's also possible, though, to include calls to libraries and frameworks. Depending on your experience with AspectJ this is as simple as follows:

package com.example.callgraph;

public aspect CallInterceptor {

    pointcut allInterestingMethods():
        !within(CallInterceptor) && !within(CallLogger) &&
        execution(public * com.example..*(..));

    before() : allInterestingMethods() {
        CallLogger.INSTANCE.pushMethod(thisJoinPointStaticPart.getSignature());
        CallLogger.INSTANCE.logCall();
    }

    after() : allInterestingMethods() {
        CallLogger.INSTANCE.popMethod();
    }
}

The first part of the aspect is the definition of the pointcut: it matches all public methods in "com.example" and sub-packages but excludes calls to the two classes that form part of the call graph logger itself.

The before advice, which is run before each method that matches the pointcut, places the signature of the current method on a stack maintained by the graph logger class. With the current method signature as the topmost element and the one that called it as the second from the top, the advice asks the logger to log the corresponding call.
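Before looking at the real logger, the push/log/pop bookkeeping the advices rely on can be sketched in plain Java, without AspectJ. The class and method names below are illustrative only; enter() plays the role of the before advice and exit() the role of the after advice:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

// Plain-Java sketch of the stack-based edge recording; not part of the
// original article, names are hypothetical.
public class CallStackSketch {
    private final Deque<String> stack = new ArrayDeque<>();
    private final Set<String> edges = new LinkedHashSet<>();

    public void enter(String method) {
        stack.push(method);                  // corresponds to pushMethod(...)
        if (stack.size() >= 2) {             // corresponds to the logCall() guard
            Iterator<String> it = stack.iterator();
            String callee = it.next();       // top of stack: the current method
            String caller = it.next();       // next element: whoever called it
            edges.add("\"" + caller + "\" -> \"" + callee + "\"");
        }
    }

    public void exit() {                     // corresponds to popMethod()
        stack.pop();
    }

    public Set<String> edges() {
        return edges;
    }

    public static void main(String[] args) {
        CallStackSketch log = new CallStackSketch();
        log.enter("FooTest.testBar");   // a test method starts
        log.enter("Foo.bar");           // the test calls Foo.bar
        log.enter("Baz.qux");           // Foo.bar calls Baz.qux
        log.exit();
        log.exit();
        log.exit();
        log.edges().forEach(System.out::println);
    }
}
```

Each distinct caller/callee pair comes out once, already in the "caller" -> "callee" shape that Dot expects for a graph edge.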
The after advice simply pops the topmost call from the stack. This is almost all the excitement. The implementation of CallLogger deals with maintaining the stack and writing out the call in the format required by Dot. The complete implementation with some comments follows below.

package com.example.callgraph;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.Stack;
import java.util.HashSet;
import java.util.Set;

import org.aspectj.lang.Signature;

public class CallLogger {

    public static CallLogger INSTANCE = new CallLogger();

An instance is made available as a public singleton. The main reason for using a singleton, rather than creating an instance in the aspect, is that if coherent logging is required across multiple projects in Eclipse it's easiest to add a copy of the aspect to each project and have all aspects use the same logger instance.

    private Stack<String> callStack = new Stack<String>();
    private Set<String> callLog = new HashSet<String>();
    private Writer writer;

The fields hold the actual stack of method names, a set that contains all calls that have been logged already, and a writer to which the actual output is written. The set is a simple optimisation to prevent the same call from being written over and over again.

    public CallLogger() {
        try {
            writer = new BufferedWriter(new FileWriter("calls.txt"));
        } catch (IOException e) {
            throw new RuntimeException("Cannot open 'calls.txt' for writing.", e);
        }
    }

    public void pushMethod(Signature s) {
        String type = s.getDeclaringType().getName();
        String method = type.substring(type.lastIndexOf('.') + 1) + "." + s.getName();
        callStack.push(method);
    }

If anyone knows a better way to turn the AspectJ signature into a pretty string, please leave a comment.
    public void popMethod() {
        callStack.pop();
    }

    public void logCall() {
        if(callStack.size() < 2) return;
        String call = "\"" + top(1) + "\" -> \"" + top(0) + "\"";
        if(!callLog.contains(call)) {
            write(call);
            callLog.add(call);
        }
    }

Following a guard statement that protects the code from trying to log a call when only one method is on the stack, the code creates a description of the call in the format required by Dot for a graph edge, ie. the origin node in double quotes, followed by a stylised arrow, followed by the target node, again in double quotes. If this call is not in the call log, the code writes it to the writer and then stores the call in the log so that it does not get written again.

    private String top(int i) {
        return callStack.get(callStack.size() - (i + 1));
    }

    private void write(String line) {
        try {
            writer.write(line + "\n");
            writer.flush();
        } catch(Exception e) {
            throw new RuntimeException(e);
        }
    }
}

Running some code, the unit tests in this example, will now create a file named "calls.txt" that contains all calls to methods that match the pointcut description in the aspect. The calls are written in Dot format but to make the output a valid input file for Dot a header and a closing bracket at the end are required. It would be easy to write the required header when the writer is initialised but realistically it's not possible to know when logCall is called for the last time, and thus it's not possible to write the closing bracket from the Java code. Therefore...

Step 2 is a small shell script, or a script in your favourite scripting language, that wraps the required header and footer around the raw graph edges.
#!/bin/bash

cat <<-EOS
strict digraph G {
graph [
ratio="auto compressed",
rankdir="LR",
ranksep=2,
remincross="true",
];
node [
shape=box,
style=filled,
fillcolor=white,
fontname=helvetica,
fontsize=10,
fontcolor=black
];
EOS

cat calls.txt

cat <<-EOS
}
EOS

I have an admission to make: This script is simplified and would not create the output I've shown further up. With a graph definition like this Dot would not put all unit test nodes into the leftmost column but instead it would move them just left of the method they call. If, for example, test 1 called method A, which in turn called method B, and test 2 called method B directly, then test 1 would be in the first column, method A and test 2 in the second column and method B in the third column. The following UNIX command line wizardry, when added just below the cat calls.txt line in the script above, solves this problem. The explanation is left as an exercise to the reader...

echo '{ rank = same; '
fgrep Test.test calls.txt | awk -F " -> " '{ print $1 }' | sort -u | awk '{ printf "%s; ", $1 }'
echo '}'

I hope this article shows how two simple but powerful tools combined make it easy to create a really useful visualisation; and how clever but cryptic UNIX command lines can be.
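For the curious, here is a self-contained run of that rank pipeline against a tiny hand-written calls.txt; the method names are made up, but the commands are exactly those from the script above:

```shell
# Create a tiny sample of raw graph edges, as CallLogger would write them.
cat > calls.txt <<'EOF'
"FooTest.testBar" -> "Foo.bar"
"Foo.bar" -> "Baz.qux"
"FooTest.testQux" -> "Baz.qux"
EOF

# Collect the caller side of every edge whose caller is a test method
# and pin those nodes to the same (leftmost) rank.
echo '{ rank = same; '
fgrep Test.test calls.txt | awk -F " -> " '{ print $1 }' | sort -u | awk '{ printf "%s; ", $1 }'
echo '}'
```

This emits a `{ rank = same; "FooTest.testBar"; "FooTest.testQux"; }` block, which is what pins the test nodes to the first column. The complete script output can then be rendered with something like `dot -Tpng callgraph.dot -o callgraph.png` (assuming GraphViz is installed).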
I load an XML file into an XElement object from a String(Reader) and then convert it back to a string; before every newline (\n) a carriage-return (\r) is inserted.

If you run the test program I have attached under Mono 3.2.8 (32-bit Linux) you will get the output 3C-4D-61-69-6E-3E-0A-20-20-61-0A-20-20-62 (first line is the ascii encoding of the original XML string, the other two are the output of XElement.ToString(), loaded with different options). This shows the \r inserted in front of every newline.

This is a regression. In mono 2.10.2 (also 32-bit linux) the output is as should be expected.

Created attachment 6354 [details]
Test program demonstrating the issue

I'd like to add that, even though that may sound like a minor issue, if you use mono to create and verify XMLDSIG signatures it is a very critical bug.

I just found the reason for the problem. It is not in the parsing but in the String output of XElement (as inherited by XNode). In this commit the default value of NewLineHandling in XmlWriterSettings (used by XNode.ToString()) was changed from None to Replace. According to MSDN, the new behaviour is the correct one, and it was a bug before.
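The mechanism the last comment describes can be illustrated with a short C# sketch (mine, not the reporter's): ToString() goes through an internal XmlWriter whose default NewLineHandling rewrites newlines, whereas writing through an explicit XmlWriter with NewLineHandling.None should preserve the original characters.

```csharp
using System;
using System.Text;
using System.Xml;
using System.Xml.Linq;

class NewLineDemo
{
    static void Main()
    {
        var element = XElement.Parse("<Main>\n  a\n  b</Main>",
                                     LoadOptions.PreserveWhitespace);

        // ToString() uses an internal XmlWriter; with the
        // NewLineHandling.Replace default described above, each "\n"
        // is emitted as "\r\n" (0D-0A in the hex dump).
        Console.WriteLine(BitConverter.ToString(
            Encoding.ASCII.GetBytes(element.ToString(SaveOptions.DisableFormatting))));

        // Writing through an explicit XmlWriter with NewLineHandling.None
        // leaves the newline characters untouched.
        var settings = new XmlWriterSettings
        {
            NewLineHandling = NewLineHandling.None,
            OmitXmlDeclaration = true
        };
        var sb = new StringBuilder();
        using (var w = XmlWriter.Create(sb, settings))
        {
            element.WriteTo(w);
        }
        Console.WriteLine(BitConverter.ToString(Encoding.ASCII.GetBytes(sb.ToString())));
    }
}
```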
April 24, 2006

Programmatic Skinning in Flex 2 Beta 2

Since I have created so many skins you might think to yourself that I have created a theme. This is fairly accurate, but I have not packaged these skins as such. I'll leave that for a future article.

Disclaimer

I am not an artist. I will not be offended if you think these skins are unattractive. The point is to learn how to do skinning, not to make a MOMA piece. You can call this collection of skins my Orange Period.

Specifying Skins

We'll start with a top-down approach and look at how you specify the skins and relate them to the components. I've done this with styles, but you can also place these styles on individual component definitions - it just depends on what you want skinned. This is the style definition for a Button:

Button {
    upSkin: ClassReference("skins.MyButtonSkin");
    overSkin: ClassReference("skins.MyButtonSkin");
    downSkin: ClassReference("skins.MyButtonSkin");
    disabledSkin: ClassReference("skins.MyButtonSkin");
}

The names of the skins are found in the Flex component API under the style section. The ClassReference style attribute names the ActionScript class which defines that skin. You can see that I have specified the same ActionScript class for each of the skins listed for the Button. When you see the code for the class it will be more obvious why I did it that way. Many times a skin for each state or part of a component shares the same code and it often makes sense to use the same class. But you do not have to do that, you can write a separate class for every skin. This is the code for the skins.MyButtonSkin class.
package skins
{
    import mx.skins.ProgrammaticSkin;
    import flash.display.*;

    public class MyButtonSkin extends mx.skins.ProgrammaticSkin
    {
        private var _backgroundFillColor:uint;
        private var _lineThickness:int;

        public function MyButtonSkin()
        {
            _backgroundFillColor = 0xFFBB00;
            _lineThickness = 1;
        }

        override protected function updateDisplayList( w:Number, h:Number ) : void
        {
            var g:Graphics = graphics;

            if( getStyle("lineThickness") )
            {
                _lineThickness = getStyle("lineThickness");
            }
            if( getStyle("backgroundFillColor") )
            {
                _backgroundFillColor = getStyle("backgroundFillColor");
            }

            switch( name )
            {
                case "upSkin":
                    break;
                case "overSkin":
                    // make lighter
                    _backgroundFillColor = 0xFFCC00;
                    break;
                case "downSkin":
                    // make darker
                    _backgroundFillColor = 0xFFAA00;
                    break;
                case "disabledSkin":
                    _backgroundFillColor = 0x999999;
                    break;
            }

            g.clear();

            // now draw the real button
            g.beginFill(_backgroundFillColor,1.0);
            g.lineStyle(_lineThickness,0x000000,0.4);
            g.moveTo(w,0);
            g.lineTo(10,0);
            g.curveTo(0,0,0,10);
            g.lineTo(0,h);
            g.lineTo(w-10,h);
            g.curveTo(w,h,w,h-10);
            g.lineTo(w,0);
            g.endFill();
        }
    }
}

I have two class variables: _backgroundFillColor and _lineThickness which I initialize in the class constructor function. Perhaps the most important piece of the skin is the updateDisplayList function. This function is declared as an override because it is replacing the function of the same signature in the base class, mx.skins.ProgrammaticSkin. It is in this function that the skin is drawn. Flex will call upon this function whenever it needs to have the skin drawn.

Within the updateDisplayList function you can see where the name of the skin is being tested. Look back to the Style section and you can see those names: "upSkin", "downSkin", and so forth. These names correspond to the case clauses in the switch statement. The only difference between the skins is the color. For "upSkin" and "downSkin" I tried to lighten and darken the color a bit. For "disabledSkin" I made it gray.
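Since the skin reads lineThickness and backgroundFillColor through getStyle, those custom style attributes can be supplied in the same style block that names the skin classes. The values below are illustrative, not part of the original article:

```css
Button {
    upSkin: ClassReference("skins.MyButtonSkin");
    overSkin: ClassReference("skins.MyButtonSkin");
    downSkin: ClassReference("skins.MyButtonSkin");
    disabledSkin: ClassReference("skins.MyButtonSkin");
    lineThickness: 2;
    backgroundFillColor: #FF9900;
}
```

With these set, the getStyle calls in updateDisplayList pick up the custom values and they override the defaults from the constructor.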
The button graphic is drawn using the drawing functions of the graphics property of the component. The graphics property does have some handy drawing functions, such as drawRect and drawCircle. You can read more about this class under flash.display.Graphics. Another thing to notice in the updateDisplayList function is how the color and line style are set. While I did initialize these in the class constructor, I test the component's style for settings that would override the defaults. The rest of the skins work in a similar fashion and you should have no problem adapting them. Just be sure to read all of the API information for the components you want to skin. Skins for scrollbars are often referenced via custom styles. One example is the Panel which you can see in the Style section. Skinning in Flex 2 is fairly straightforward and once you get the hang of it you will be able to create innovative skins. Just let your imagination run wild. Posted by pent at April 24, 2006 10:21 PM
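The heart of the technique is that a single drawing routine is parameterized by the skin's name. As a language-neutral illustration, the switch statement above boils down to a lookup like the following Python sketch (the function name and default colors simply mirror the example; none of this is Flex API):

```python
def skin_fill_color(skin_name, up_color=0xFFBB00):
    # Mirrors the switch in updateDisplayList: one routine, keyed by the
    # skin/state name, with the "up" color as the default case.
    overrides = {
        "overSkin": 0xFFCC00,      # a bit lighter
        "downSkin": 0xFFAA00,      # a bit darker
        "disabledSkin": 0x999999,  # gray
    }
    return overrides.get(skin_name, up_color)
```

The same table-driven idea is what makes it practical to point all four skin styles at one class.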
http://weblogs.macromedia.com/pent/archives/2006/04/programmatic_sk.cfm
#157 – You Can Set Standard CLR Properties from XAML

If you create a custom class, you can instantiate instances of that class from XAML by adding the object to a resource dictionary.

    <Window.Resources>
        <m:Person x:Key="..." FirstName="..." LastName="..." />
    </Window.Resources>

You might wonder whether your properties have to be WPF dependency properties in order to set their values from XAML. It turns out that the properties on the custom class do not have to be dependency properties in order to be set from XAML. They can be standard CLR properties.

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }

        public Person() { }
    }

However, you can't use the binding syntax to bind another object's properties unless your custom Person object properties were implemented as Dependency Properties.
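The reason plain CLR properties work here is that the XAML loader just calls the class's default constructor and then sets each attribute by name. That instantiation step can be sketched in a few lines of Python (illustrative only, not WPF code; the class and attribute names are invented):

```python
class Person:
    def __init__(self):
        self.first_name = None
        self.last_name = None

def instantiate(cls, attributes):
    # XAML-style: create with the default constructor, then set plain
    # properties by name; no special property system is required.
    obj = cls()
    for name, value in attributes.items():
        setattr(obj, name, value)
    return obj

p = instantiate(Person, {"first_name": "Ada", "last_name": "Lovelace"})
```

Data binding is different: it needs change notification, which is what dependency properties provide and plain properties do not.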
https://wpf.2000things.com/2010/12/16/157-you-can-set-standard-clr-properties-from-xaml/
Capturing User Input in Unity3D to Change Behavior/Movement

Although it isn't important to the content of this article, you may—or may not—have been following along with my series of Unity3D articles. In these articles, we covered creating a scene, animation, gravity, and building a UWP application from Unity3D. Reading the previous articles isn't necessary to follow this article; however, what you'll learn here will round out those lessons quite nicely.

Today, you'll look at building a small scene where the elements react to each other in small ways. This interaction will be limited for the scope of this article, but the technique used to achieve that interaction will be of great use when expanding your own game.

Building the Unity3D Scene

Start Unity3D. If you don't have Unity3D, you can download a copy from its Web site. Once started, create an empty 3D project, and give it a useful name:

Figure 1: The Unity3D Create project dialogue

Be sure to select '3D,' and click the Create project button. Before looking at adding the elements and scripts to the example scene, it is worth taking a look at the finished product. Figure 2 shows the main viewport: a Plane with two spheres and a camera.

Figure 2: The Unity3D scene as shown once created

In the image, you can see the camera's field of view indicated by the thin lines coming from the camera. If you select the camera, you'll also see what the camera sees via the preview window at the bottom right of the scene view.

Figure 3: The camera preview window (visible when the camera is selected)

What this camera sees here is very important. It's what the user will see when the game is running. There are a number of actions that will occur when the scene shown in Figure 2 is complete. Clicking the left mouse button on the sphere closest to the camera will send it towards the second sphere. The second sphere, using an attached script of its own, will do something when hit by the first sphere.
For this article, the second sphere will be destroyed when the first collides with it. However, a Plane will need to be added so the spheres don't fall off the bottom of the scene when the game is played. To add the Plane, select 3D Object→Plane from the GameObjects menu. (In Unity 4, this would be Create Object→Plane from the GameObjects menu.)

Figure 4: Adding a Plane to our scene

Once the Plane is created, go ahead and create two spheres, which will need to be positioned. If you look at Figure 5, you'll see the settings I've entered for the first sphere.

Figure 5: Sphere One's starting position

The settings for the second sphere, which I have renamed to 'TargetSphere,' are shown in Figure 6.

Figure 6: TargetSphere's position settings

If you run the scene at this point, you should see the two spheres from the camera's field of view, but they will not fall despite the gap between them and the Plane below. This is because we need to add a component named 'Rigidbody' to each of the spheres. It is this component that will deal with gravity and enable movement later in this article. To do this, select a sphere—be sure to have the Inspector tab open—and click add component, then physics, and then Rigidbody. Please use Figure 7 as a reference.

Figure 7: Adding a 'Rigidbody' to your selected sphere

Do the same for the other sphere. You may have noticed in Figure 2 that there are two scripts in the scene showing on the assets panel.

Figure 8: The two game scripts

These two scripts will contain the code to push the first sphere, and add behaviour to the second, 'Target,' sphere. To add a script, right-click the assets panel and click Create, then C# Script. I've named my two scripts 'GameScript' and 'TargetBehaviourScript.' The first will be attached to the camera, and the second to the sphere that will be hit; this will be covered in a moment. Firstly, look at the code in these scripts.
The 'GameScript' code looks like this:

    public class GameScript : MonoBehaviour
    {
        public float force = 5f;

        // Use this for initialization
        void Start () {
        }

        // Update is called once per frame
        void Update () {
            if (Input.GetMouseButtonUp(0)) {
                Ray ray = Camera.main.ScreenPointToRay (Input.mousePosition);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit)) {
                    if (hit.collider.name == "Sphere") {
                        var sphere = hit.collider.gameObject;
                        sphere.GetComponent<Rigidbody>().AddForce (
                            new Vector3(0.0f, 0.0f, force),
                            ForceMode.VelocityChange);
                    }
                }
            }
        }
    }

In this script, which will be attached to the camera, mouse button 0 (which is usually the left button) is being watched for a press. When it's been pressed, a ray is drawn from the point of the click through the game scene. From this ray, you want to detect if any colliders are hit; specifically, you are looking for the collider attached to the first sphere, named 'Sphere,' being hit by the ray. If it's been hit, you know it was the first sphere hit by the click and now you can add some force to make it move across the Plane. For this article, the sphere will only move in one direction. You should be able to work out how to make the sphere travel along the direction of the click in your own good time.

Moving on, look at the code that is to be attached to the second sphere. It looks something like the following:

    public class TargetBehaviourScript : MonoBehaviour
    {
        // Use this for initialization
        void Start() {
        }

        // Update is called once per frame
        void Update() {
        }

        void OnCollisionEnter(Collision collision) {
            if (collision.collider.name == "Sphere") {
                DestroyObject(this.gameObject);
            }
        }
    }

Because 3D objects have a collider added out of the box (by default), you can make use of the OnCollisionEnter method, which is passed the collision object, from which you can work out what collided with the object the script is attached to.
Then, all you have to do is check to see if it was the first sphere that collided with this second sphere, and then destroy the second sphere if the check comes back as true. If all has gone according to plan, you now have the code needed to make this scene work. Next, attach the scripts to their respective components. You can use Figure 9 as a reference to do that now.

Figure 9: Attaching a script to a component

Looking at 1 on Figure 9, drag the 'GameScript' to the main camera, indicated by 2. If the action was successful, and you select the camera, you'll see the script on the list of components added to the object indicated by 3. Do the same with the 'TargetBehaviourScript,' but this time attach it to the 'TargetSphere.' And, that's it! Run the game by pressing the play button found at the top of the Unity desktop; click the first sphere and observe the results.

Figure 10: The game in play mode after the first sphere was clicked

Conclusion

What you've seen in this article is a very simple implementation of cause and effect. You have a sphere which strikes another, causing it to disappear. Although this is very simple, this is the basis of much of the behaviour you might see in a game; it is enough to open a world of interaction within your own scene. If you have any questions, you can always find me on Twitter @GLanata
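For intuition about what Physics.Raycast is doing under the hood, the ray-versus-sphere case reduces to solving a quadratic. Here is a self-contained Python sketch of that geometry (this is not Unity API, just the underlying math):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t >= 0.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False  # the ray's line misses the sphere entirely
    root = math.sqrt(disc)
    t1 = (-b - root) / (2 * a)
    t2 = (-b + root) / (2 * a)
    # A hit only counts if it is in front of the ray origin.
    return t1 >= 0 or t2 >= 0
```

A sphere behind the camera yields only negative solutions, which is why it is never "hit" even though the ray's line passes through it.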
https://www.codeguru.com/csharp/csharp/cs_graphics/capturing-user-input-in-unity3d-to-change-behaviormovement.html
A screamingly fast Python WSGI server written in C.

@stuz5000 concurrent connections != threading != shared memory. What are you trying to accomplish? bjoern can easily support a large number of concurrent connections. It doesn't support threads or multiprocessing. However you can spawn multiple bjoern workers to listen on the same port with SO_REUSEPORT, or you can fork multiple workers (see tests/fork.py)
https://gitter.im/jonashaag/bjoern?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
CC-MAIN-2021-17
refinedweb
221
66.44
- Advertisement TheLifelessOneMember Content Count7 Joined Last visited Community Reputation104 Neutral About TheLifelessOne - RankNewbie Looking For Engine Examples TheLifelessOne replied to TheLifelessOne's topic in For Beginners's ForumNope, but I'll take a look at that. Thanks. I suppose that's true. I'll have to see about buying one of those. Which one would you recommend purchasing first? Looking For Engine Examples TheLifelessOne replied to TheLifelessOne's topic in For Beginners's ForumI've had a look at Orge, but it was my understanding that it only handles the rending if the graphics, and is not a full engine itself. I'm mostly looking for examples of source file structures, techniques on various aspects of a game (window / state management, networking, etc), and example rendering code using pure OpenGL 2.1 shaders (no SDL, SFML, Orge, etc). Looking For Engine Examples TheLifelessOne posted a topic in For Beginners's ForumDoes anyone know of any open-source game engines I can use as an example? I'd like to study as many as possible to see common practices and techniques. Thanks. [web] Verlet Water in HTML5 TheLifelessOne replied to SillyCow's topic in General and Gameplay ProgrammingYou might have better performance if you used requestAnimationFrame. Also, you might want to look into using an canvas animation library. They are usually much more optimized than what is normally hand-written (not saying your code is bad), and they provide a lot of objects that will allow you to do the same thing in less time, and usually more efficiently. Only libraries I know off the top of my head are Processing.js and Three.js, though I'm sure a great many more exist. [web] PHP Framework TheLifelessOne replied to jimmybrion's topic in General and Gameplay ProgrammingIf I understand you correctly, then yes, as long as they don't collide namespaces. 
[web] Moving object with Javascript TheLifelessOne replied to bugeja's topic in General and Gameplay ProgrammingYou shouldn't pass strings. I remember reading somewhere that they are eval'd, and this can cause security problems. It's better to pass an anonymous function: setTimeout(function() { slideOverImage(id); }, 90); It's easier to read, to work with, and is overall good practice (also, why are you putting the integer in a string? 'Tis not needed.). - Advertisement
https://www.gamedev.net/profile/185507-thelifelessone/
CC-MAIN-2019-22
refinedweb
386
53.21
CsoundExpr.Tutorial.Composition Contents Description Module CsoundExpr.Base.Score provides functions to construct csound's score section - Prev : CsoundExpr.Tutorial.Intro - Next : CsoundExpr.Tutorial.Orchestra Synopsis - exmpEventList :: EventList Double Irate - exmpScore :: Score String - exmpScoFunctor :: MediaUnit Dur () SignalOut - exmpScoMonad :: MediaUnit Dur () Irate - exmpScoTemporal :: Dur - exmpScoStretchable :: MediaUnit Dur () Irate - exmpScoArrangeable :: Score String - exmpScoTemporalFunctor :: Score SignalOut - main :: IO (). csd function takes in EventList Double SignalOut. Double is type of time-marks. SignalOut represents instrument structure. Score's instances Score is a Functor, Monad, Temporal, Stretchable, Arrangeable and TemporalFunctor Functor It makes possible to represent csound's instrument as a function from note representation to SignalOut. To play on instrument means to apply instrument to Score of its notes. -- oscillator instrument instr :: Irate -> SignalOut instr x = out $ oscilA [] (num 1000) (cpspch x) $ gen10 4096 [1] exmpScoFunctor = fmap instr $ line $ map (note 1) [d 0, f 0, a 0, d 1] Monad Gives way to more structured composition. return a makes note of a that lasts for 1 sec. ma >>= f is better understood by its join function. ma >>= f = joinScore $ fmap f ma joinScore :: Score (Score a) -> Score a is a tree. Nodes represent sequent/parallel composition and leaves represent value Score a a or rest that lasts for some time t. joinScore takes in Score that contains some more Score 's in its leaves, and builds one tree by substituting values of Scores by Scores. Note that while substituting it stretches duration of Score by duration of value. type ChordType = [Irate] majC, minC :: ChordType majC = [0, 0.04, 0.07] -- in csound 0.01 is one half-tone minC = [0, 0.03, 0.07] arpeggi :: (Irate, ChordType) -> Score Irate arpeggi baseNote chordType = line $ map return (pchs ++ pchs) where pchs = map ((+ baseNote) . (chordType !! 
)) [0, 1, 2, 1, 2, 1] harmony = line $ map return [(e 0, minC), (a (-1), minC), (d 0, majC), (g 0, majC), (c 0, majC), (f 0, minC), (b (-1), majC), (e 0, minC)] sco = harmony >>= arpeggi exmpScoMonad :: MediaUnit Dur () IrateSource Temporal There are two methods defined on Temporal objects. none :: Dur -> a -- construct rest dur :: a -> Dur -- ask for duration Stretchable Arrangeable Constructing things in sequent '(+:+)' and parallel ways '(=:=)' TemporalFunctor There is class called TemporalFunctor with method tmap. First argument of tmap's function means function from duration of value t and value itself a to new value b. class Dur t => TemporalFunctor f where tmap :: (t -> a -> b) -> f a -> f b It allows to construct instruments that can rely on note duration. instr :: Dur -> Irate -> SignalOut instr t vol = out $ (env t <*> ) $ fst $ se1 $ unirandA vol where env t | t < 1 = lineK 1 idur 0 | otherwise = exponK 1 idur 0 v1 = 1.5 * v0 v0 = 5000 sco = tmap instr $ line [note 0.5 v1, note 0.5 v0, rest 1, note 2 v1] Note : stretch t (tmap instr sco) =/= tmap instr (stretch t sco) Example radiohead - weird fishes (intro), see src
http://hackage.haskell.org/package/csound-expression-0.0.2/docs/CsoundExpr-Tutorial-Composition.html
CC-MAIN-2015-22
refinedweb
483
63.29
Falcon AST IASNode toString and -dump-ast=true mxmlc options? MXML: the Falcon compiler turns MXML AST into straight ABC instructions. "I don't see why you would need to transform the MXML AST into yet another model. There are already two models for MXML and they should be sufficient. The syntactic model produced from the MXML tokens is MXMLData. Like any XML DOM, it simply represents the nesting of the tags, their attributes, and the text inside, without attributing any meaning to anything. The semantic model for MXML is the MXML AST, which has determined what every tag, attribute, and piece of text means. For example, it has understood that when you write <s:Button you are creating an instance of spark.components.Button, setting the label property to "OK", setting the color style to 0xFF0000, and adding an event handler to handler the click event. The MXML tree captures every piece of semantic information that was in the MXML file." "But if you prefer to turn MXML into JavaScirpt source code, it should be straightforward to do that. The MXMLClassDirectiveProcessor is a simple recursive descent through the MXML AST. The main complexity is that it can't simply generate a continuous stream of code in tree order. For example, an attribute value that is a databinding expression has to generate all sorts of extra stuff like Watcher instances in other places." FalconJS FalconJS is a project started by Alex Harui. It is currently the most complete implementation and has (limited?) MXML parsing. The FalconJS compiler will take an MXML and AS project and output a valid HTML/JS application. FalconJS does depend on a custom AS framework (i.e. won't work with the Flex SDK) and corresponding JS framework. This framework goes by the name of FlexJS. Read more about it on the Wiki:\ "I may be missing something, but I think the MXMLClassDirectiveProcessor is walking the AST not some intermediate model. 
What it does is a bit tricky because it is trying to flatten the tree into the data array and use the same APIs (of passing around Contexts). I haven't looked at your walker recently, but it might be simpler to implement in the walker/visitor pattern." FalconJx FalconJx is the 'alternative' project from Michael Schmalle. It uses an alternative approach to AS3 compilation (don't ask for details, I have no clue about the innards of that code). One of it's main selling points (from my humble point of view) is that it has a very flexible architecture for outputting different flavours of JS. The status of this project is that we are working on getting complete AS3 language feature coverage in place. This means that we are working towards ~100% translation of AS into JS. I'm using the Google Closure Tools to augment standard JS to try and match the original AS language features. This is coming along nicely, but I'm sure the devil will be in the details. Read more on the 'goog' way here:\ FalconJx future: once we have AS (and hopefully MXML, at some point) translating into JS and have functional tests in place, the challenge will become to come up with both AS and JS framework to actually allow for application development. I'm silly enough to still cling to the idea that we'll be able to use (most of) the Flex SDK and create a compatible JS library... but I'm sure others will declare me insane for just dreaming about that :-) What FalconJx is at the moment is a cross compiler for ActionScript business logic to JavaScript business logic. FalconJx is being designed to be able to compile ActionScript3 source code business logic to JavaScript business logic. "When I say business logic, I mean, no views, no ui components, no flash display list to javascript DOM conversations. Just business logic." The parser ------------------- AST stands for Abstract Syntax Tree avery complicated way of saying Objects that have parents and children. 
When an .as file is parsed by Falcon, the parser creates "blocks" of things it recognizes from the string tokens feed to it as it's running through the source file. A token is a String defined my the AS3 language spec IE "class", "if", "{", "foo" etc. As the parser accepts these tokens from the stream, it recognizes patterns like "public" is a namespace if it happens just before the "class" token. So as the parser runs through these rules and finds matches, it creates IASNode instance such as IClassNode, IIdentifier node etc. Each node in the tree represents a part of the ActionScript file. If you actually look at an .as you can see how it's very nature is heiracle, that is the AST tree and the IASNode subclasses represent that tree. We can also call this tree a DOM or Document Object Model. So when we say AST we also mean ActionScript DOM because there is only one semantic way that DOM will be constructed by the ASParser's grammar. It's exactly what a Browser does with the HTML DOM, there are rules and the only way the browser's parser will create an element in the DOM is if you source code matches the rules. The walker ------------------------- Just as you would create a recursive function to traverse HTML with JavaScript, we can do the same thing with the AS3 DOM (AST) which I have done. The only hard part about this is you have to be familiar with the "grammar" or API of the DOM to make sure you recursively traverse it in the correct order and get everything. That is why Erik and I have over 500 unit tests, this is making sure we are producing the whole language correctly. So once the DOM is created, we call walke.visitFile(fileNode), which the IFileNode is the root of the tree, just like window is the root of the HTML DOM. Visit file will then abstractly call visitPackage(), then visitClass(), then visitMethod(), visitBlock(), visitIf(), visitBinary() etc. The visit methods are called through the node switch strategy class. 
This is kindof complicated but not really, it's kindof a juggling act back and forth so the whole node handler traversing calls the correct visit method based on the current node's type, is it a function, expression etc. You see, that when each of the nodes in the tree are visited recursively, the String emitter is called to emit the javascript or actionscript source code into the buffer. The backend ---------------------------- The backend stuff is going to be refactored and I have some ideas about the compiler setup but basically, the way the actual compiler is setup and the design of FalconJx, it allows tremendous flexibility on overriding things for different JavaScript output, the final string that gets written to disk with the .js extension. Like right now Erik has a Goog and AMD output that are separate implementations but use the same core framework. On a final note, this framework was first written by me to create valid ActionScript. So what is actually happening is we are taking advantage of the fact JavaScript is ECMA just like AS, so we are only overriding the parts of the two languages that differ for source code production. In the end, FalconJx is actually an ActionScript emitter first, other languages next. FlexJS GPU rendering (Starling?) The FalconJS compiler in compiler.js, plus the FlexJSTest_again example and the framework in the asjs develop branch already take MXML and result in an HTML DOM. The 'goog' way AMD - RequireJS 1 Comment Anonymous Is this (ASJS) dead now? No changes since January...
https://cwiki.apache.org/confluence/display/FLEX/ASJS+-+From+Flash+Player+to+web+native
Newbie in Python, and I do not quite understand how I can compute the average relative error of approximation:

    import math

    import pandas as pd
    from sklearn import svm
    from sklearn import preprocessing

    df = pd.read_csv('file1.csv', ';', header=None)
    x_train = df.drop([16, 17], axis=1)
    y_train = df[16]

    test_data = pd.read_csv('file2.csv', ';', header=None)
    x_test = test_data.drop([16, 17], axis=1)
    y_test = test_data[16]

    normalized_x_train = preprocessing.normalize(x_train)
    normalized_x_test = preprocessing.normalize(x_test)

    xgb_model = svm.SVR(kernel='linear', C=1000.0)
    cl = xgb_model.fit(normalized_x_train, y_train)
    predictions = cl.predict(normalized_x_test)

Is there a ready-made function to get this error, or only a loop? If a loop, do I need to normalize y_test (the real values)?

Answer 1 (authority 100%)

You can:

    from sklearn.metrics import mean_absolute_error

    mape = mean_absolute_error(y_test, y_predicted) / y_test.abs().sum()

If you need percentages, then MAPE must be multiplied by 100.

P.S. It is also worth mentioning that this metric is rarely used in practice. It can cause division by zero.
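For reference, the textbook mean absolute percentage error averages |y - y_hat| / |y| per sample, which is slightly different from the one-liner in the answer above. A dependency-free sketch (and it shows the division-by-zero hazard directly):

```python
def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent.
    # Undefined whenever a true value is zero.
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, predictions off by 10% on every sample give a MAPE of 10.
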
https://computicket.co.za/python-python-medium-relative-approximation-error-regression/
QR.Flutter

- Built on QR - Dart
- Automatic QR code version/type detection or manual entry
- Supports QR code versions 1 - 40
- Error correction / redundancy
- Configurable output size, padding, background and foreground colors
- Supports image overlays
- Export to image data to save to file or use in memory
- No internet connection required

Installing

Note: If you're using the Flutter master channel, if you encounter build issues, or want to try the latest and greatest then you should use the master branch and not a specific release version. To do so, use the following configuration in your pubspec.yaml:

    dependencies:
      qr_flutter:
        git:
          url: git://github.com/lukef/qr.flutter.git

Keep in mind the master branch could be unstable. After adding the dependency to your pubspec.yaml you can run:

    flutter packages get

or update your packages using

Getting started

To start, import the dependency in your code:

    import 'package:qr_flutter/qr_flutter.dart';

Next, to render a basic QR code you can use the following code (or something like it):

    QrImage(
      data: "1234567890",
      version: QrVersions.auto,
      size: 200.0,
    ),

Depending on your data requirements you may want to tweak the QR code output. The following options are available:

Examples

There is a simple, working, example Flutter app in the /example directory. You can use it to play with all the options. Also, the following examples give you a quick overview on how to use the library.
A basic QR code will look something like:

    QrImage(
      data: 'This is a simple QR code',
      version: QrVersions.auto,
      size: 320,
      gapless: false,
    )

A QR code with an image (from your application's assets) will look like:

    QrImage(
      data: 'This QR code has an embedded image as well',
      version: QrVersions.auto,
      size: 320,
      gapless: false,
      embeddedImage: AssetImage('assets/images/my_embedded_image.png'),
      embeddedImageStyle: QrEmbeddedImageStyle(
        size: Size(80, 80),
      ),
    )

To show an error state in the event that the QR code can't be validated:

    QrImage(
      data: 'This QR code will show the error state instead',
      version: 1,
      size: 320,
      gapless: false,
      errorStateBuilder: (cxt, err) {
        return Container(
          child: Center(
            child: Text(
              "Uh oh! Something went wrong...",
              textAlign: TextAlign.center,
            ),
          ),
        );
      },
    )

Has it been tested in production? Can I use it in production?

Yep! It's stable and ready to rock. It's currently in use in quite a few production applications including:

Outro

Credits

Thanks to Kevin Moore for his awesome QR - Dart library. It's the core of this library. For author/contributor information, see the AUTHORS file.

License

QR.Flutter is released under a BSD-3 license. See LICENSE for details.
https://pub.dev/documentation/qr_flutter/latest/
ftputil 2.5 is now available from .

Changes since version 2.4.2
---------------------------

- As announced over a year ago [1], the `xreadlines` method for FTP file objects has been removed, and exceptions can no longer be accessed via the `ftputil` namespace. Only use `ftp_error` to access the exceptions. The distribution contains a small tool `find_deprecated_code.py` to scan a directory tree for the deprecated uses. Invoke the program with the `--help` option to see a description.

- Upload and download methods now accept a `callback` argument to do things during a transfer. Modification time comparisons in `upload_if_newer` and `download_if_newer` now consider the timestamp precision of the remote file, which may lead to some unnecessary transfers. These can be avoided by waiting at least a minute between calls of `upload_if_newer` (or `download_if_newer`) for the same file. See the documentation for details [2].

- The `FTPHost` class got a `keep_alive` method. It should be used carefully though, not routinely. Please read the description [3] in the documentation.

- Several bugs were fixed [4-7].

- The source code was restructured. The tests are now in a `test` subdirectory and are no longer part of the release archive. You can still get them via the source repository. Licensing matters have been moved to a common `LICENSE` file.

[1]
[2]
[3]
[4]
[5]
[6]
[7]

Stefan
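The "timestamp precision" caveat is easy to reason about in the abstract: if the remote server only reports modification times to the minute, the remote file may really be up to a minute newer than reported, so a correct comparison has to be conservative. A sketch of that decision (this is illustrative pseudologic, not ftputil's actual code; the function and parameter names are invented):

```python
def needs_upload(local_mtime, remote_mtime, remote_precision):
    # The remote mtime is only known to within `remote_precision` seconds
    # (FTP directory listings often give minute precision), so treat the
    # remote file as possibly that much newer than reported. Upload only
    # when the local file is newer even under that pessimistic assumption.
    return local_mtime > remote_mtime + remote_precision
```

This is why waiting at least a minute between calls for the same file avoids the spurious re-uploads mentioned above.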
https://mail.python.org/pipermail/python-announce-list/2010-October/008628.html
Is there a benefit to using one over the other? In Python 2, they both seem to return the same results:

    >>> 6/3
    2
    >>> 6//3
    2

In Python 3.0, 5 / 2 will return 2.5 and 5 // 2 will return 2. The former is floating point division, and the latter is floor division, sometimes also called integer division. In Python 2.2 or later in the 2.x line, there is no difference for integers unless you perform a from __future__ import division, which causes Python 2.x to adopt the behavior of 3.0. Regardless of the future import, 5.0 // 2 will return 2.0 since that's the floor division result of the operation. You can find a detailed description at
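The behaviour is easy to check directly under Python 3 semantics. One detail worth remembering is that floor division rounds toward negative infinity, not toward zero:

```python
results = {
    "true_div": 5 / 2,         # 2.5  -- always real division in Python 3
    "floor_div": 5 // 2,       # 2    -- floor division on ints gives an int
    "float_floor": 5.0 // 2,   # 2.0  -- floor division keeps the float type
    "negative": -5 // 2,       # -3   -- floors toward negative infinity
}
```

So -5 // 2 is -3, not -2, which trips up people coming from languages whose integer division truncates toward zero.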
https://codedump.io/share/PHf8oqtleQnI/1/in-python-2-what-is-the-difference-between-3939-and-3939-when-used-for-division
jackalope

This module is no longer actively maintained. Have a look at: if you liked it

    npm install jackalope

Jackalope: Javascript with antlers

Introduction

Jackalope is a class extension system with an API based entirely on Moose, the post-modern class system for Perl. Jackalope is written in Coffeescript, and has primarily been written with a server-side environment in mind. That being said, Coffeescript is just javascript, so Jackalope should be fairly portable. The implementation is kept as close to the Moose API as is sane and possible in Javascript. Fortunately, surprisingly little trickery is needed to implement a nearly identical vocabulary and appearance using basic Javascript techniques.

Usage

Jackalope comes in two flavours: as a base class for coffeescript objects to extend from, and as a set of mixins that extend your object on the fly. Jackalope.Class offers a base class whose constructor initializes attributes when needed. The Jackalope.extend function mixes Jackalope features into your object. It does not mess with your constructor or any other properties of your object, but offers a factory method for constructing instead, as well as new object methods to declare attributes.
Using Jackalope.Class as a base class in Coffeescript:

    Jackalope = require('../lib/Jackalope')

    class Circle extends Jackalope.Class
      @has 'radius',
        isa: 'Int'
        required: true
        writer: 'setRadius'
        trigger: ( value ) -> @clearCircumference()

      @has 'circumference',
        isa: 'Number'
        lazy_build: true
        clearer: 'clearCircumference'

      _build_circumference: () ->
        return (2 * @diameter()) * Math.PI

    # built in constructor
    circle = new Circle( radius: 23 );
    cf = circle.circumference();
    circle.setRadius 90
    cf2 = circle.circumference();

Alternative using mixin (also works for non-Coffeescript classes):

    Foo = ()->
      # your things in constructor

    Jackalope.extend Foo

    Foo.has 'name',
      isa: 'Str'
      required: true

    foo = Foo.create( name: 'Harry' )

Typeconstraints

Jackalope implements type constraints similar to Moose, but javascript flavoured. Basic types are equal to Javascript's built-in types, but with additional checks for null and NaN.

- Str: Javascript 'string' type, not null or undefined
- Bool: Javascript 'boolean', not null or undefined
- Number: Javascript 'number', not NaN, null or undefined
- Int: Javascript 'number', without any decimal points
- Object: Anything that answers to typeof === 'object' and not null
- Function: Anything that answers to typeof === 'function'
- Instance: Value must be an instanceof this type

Example of the Instance type:

    class Point extends Jackalope.Class
      @has 'x',
        isa: 'Int'
        required: true
      @has 'y',
        isa: 'Int'
        required: true

    class Line extends Jackalope.Class
      @has 'start',
        isa: Point
        required: true
      @has 'end',
        isa: Point
        required: true

    # now do:
    [a, b] = [new Point( x: 0, y: 0 ), new Point( x: 10, y: 5 )]
    line = new Line( start: a, end: b )
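The lazy_build / clearer / trigger trio maps cleanly onto other languages: cache a derived value, invalidate the cache when a dependency changes, and rebuild on the next read. A rough Python analogue of the Circle example (illustrative only; not Jackalope or Moose code, and using the standard circumference formula):

```python
import math

class Circle:
    def __init__(self, radius):
        self._radius = radius
        self._circumference = None  # lazily built cache

    def set_radius(self, value):       # 'writer'
        self._radius = value
        self.clear_circumference()     # 'trigger' fires on write

    def clear_circumference(self):     # 'clearer'
        self._circumference = None

    def circumference(self):
        if self._circumference is None:            # 'lazy_build'
            self._circumference = 2 * math.pi * self._radius
        return self._circumference
```

The point of the pattern is that callers never see a stale value: the trigger guarantees the cache is dropped whenever the radius changes.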
https://www.npmjs.org/package/jackalope
18 October 2010 12:44 [Source: ICIS news]

LONDON (ICIS)--UBS on Monday downgraded Wacker Chemie from “buy” to “neutral” but said it remains a "favourite" solar stock.

The shares had been flagged up by the bank earlier this year based on the Germany-based producer's strong polysilicon position and the quality of its chemicals portfolio. The shares have risen 23% to date this year and are now close to fair value, UBS said.

The bank said it expected Wacker to announce a 10,000 tonne polysilicon plant.

UBS’s European analysts expect slowing momentum in chemicals in 2010, however. “We believe restocking is complete in the silicones and polymers divisions,” they said in a note to clients.

The bank expects Wacker’s silicones division sales to rise by about 20% this year, with polymers division sales up 11% and biosolutions sales growth of 25%.

Wacker Chemie shares were down 2.91% at €145.30 at 13:16 Central European Time on Monday.

($1 = €0.71)
http://www.icis.com/Articles/2010/10/18/9402383/ubs-eases-advice-on-wacker-chemie-on-slowing-sales-momentum.html
You must have noticed that there are many occasions while developing an ATL component when you feel that you should trace some messages at some point. For such purposes, the first thing almost everyone tries is ATLTRACE or ATLTRACE2, both of which trace messages to the debug output. That is fine, but the problem is that you can trace only while you are in debug mode. You cannot trace anything while you are running the application. One option is to look at the NT Event Logger (if you are using Windows NT), but the problem is that there are so many different events there that you feel like giving up. Moreover, you need to learn the APIs used to write anything to the NT Event Logger. So I thought of developing a simple ATL component which you can use to log events. Remember, this is just the first version. Based on feedback I am planning to enhance the logger and make some more functions available, like saving the log to a database or file, sorting by time, etc.

When you extract the zip file you will get three directories:

\ATLLogApp: This is the log server, i.e., it is responsible for logging events. If you want, you can straightaway compile this DLL and then use it in your application. This DLL hosts one component, CoLogger, which exposes the ICoLogger interface. It has the following methods:

Initialize()
Log([in] BSTR Message)
UnInitialize()

\ATLClientForTracer: This is a sample ATL client for the server, which just forwards calls from the MFC client.

\MFCClient: This is an MFC client which logs messages every second. There is a timer which fires the events.

In your application (MFC or ATL), just type

#import "..\ATLLogApp.tlb" no_namespace named_guids // specify the path to the tlb

and you will get two files, .tlh and .tli, in your source output directory (generally Debug). Now, to use it, just declare a smart pointer of type ICoLoggerPtr, e.g.:

CYourclass : public ...
{
    ICoLoggerPtr m_Log;
    ...
};

In the cpp file, create the instance of the logger and then call Initialize():

// implementation file...
const HRESULT hrCreate = m_Log.CreateInstance(__uuidof(CoLogger));
if(FAILED(hrCreate))
    return E_FAIL;

const HRESULT hrInit = m_Log->Initialize();

Now call the Log method on this interface wherever you want to. Here is a sample screen shot when you run the application. Press Alt-S. You get this screen after some time.

Any suggestions are welcome.
http://www.codeproject.com/KB/atl/atllog.aspx
I found out about HivePlots this past summer, and although I thought they looked incredibly useful and awesome, I didn't have a personal use for them at the time, and therefore put off doing anything with them. That recently changed when I encountered some particularly nasty hairballs of force-directed graphs. Unfortunately, the HiveR package does not create interactive hiveplots (at least for 2D), and that is particularly important for me. I don't necessarily want to be able to compare networks (one of the selling points made by Martin Krzywinski), but I do want to be able to explore the networks that I create. For that reason I have been a big fan of the RCytoscape Bioconductor package since I encountered it, as it allows me to easily create graphs in R, and then interactively and programmatically explore them in Cytoscape. So I decided last week to see how hard it would be to generate a hive plot that could be visualized and interacted with in Cytoscape. For this example I'm going to use the data in the HiveR package, and actually use the structures already encoded, because they are useful.

Load Data

require(RCytoscape)
require(HiveR)
require(graph)
options(stringsAsFactors = F)
dataDir <- file.path(system.file("extdata", package = "HiveR"), "E_coli")
EC1 <- dot2HPD(file = file.path(dataDir, "E_coli_TF.dot"), node.inst = NULL,
    edge.inst = file.path(dataDir, "EdgeInst_TF.csv"),
    desc = "E coli gene regulatory network (RegulonDB)",
    axis.cols = rep("grey", 3))

## No node instructions provided, proceeding without them

str(EC1)

## List of 5
## $ nodes :'data.frame': 1597 obs. of 6 variables:
## ..$ id : int [1:1597] 1 2 3 4 5 6 7 8 9 10 ...
## ..$ lab : chr [1:1597] "pstB" "hybE" "fadE" "phnF" ...
## ..$ axis : int [1:1597] 1 1 1 1 1 1 1 1 1 1 ...
## ..$ radius: num [1:1597] 1 1 1 1 1 1 1 1 1 1 ...
## ..$ size : num [1:1597] 1 1 1 1 1 1 1 1 1 1 ...
## ..$ color : chr [1:1597] "transparent" "transparent" "transparent" "transparent" ...
## $ edges :'data.frame': 3893 obs. of 4 variables:
## ..$ id1 : int [1:3893] 932 612 932 1510 1510 413 528 652 1396 400 ...
## ..$ id2 : int [1:3893] 832 620 51 525 797 151 5 1058 1396 1559 ...
## ..$ weight: num [1:3893] 1 1 1 1 1 1 1 1 1 1 ...
## ..$ color : chr [1:3893] "red" "red" "green" "green" ...
## $ desc : chr "E coli gene regulatory network (RegulonDB)"
## $ axis.cols: chr [1:3] "grey" "grey" "grey"
## $ type : chr "2D"
## - attr(*, "class")= chr "HivePlotData"

Process Data

So here we have the data. The nodes slot is a data frame with the id, a label describing the node, which axis the node belongs on, its radius (how far out on the axis the node should be), and a size. These are all modifiable attributes that can be changed depending on how one wants to map different pieces of data. This of course is the beauty of hive plots, because they result in networks that are dependent on attributes that the user decides on. In this case, we have a transcription factor regulation network. I am going to point you to the previous links as to why a normal force-directed network diagram is not really that informative for these types of networks. I'm not out to convince you that HivePlots are useful; if you don't get it from the publication and examples, then you should stop here. This is more about how to do some calculations to lay them out and work with them in Cytoscape.

Bryan has implemented some nice functions to work with this type of network and perform simple calculations to assign axes and locations based on properties of the nodes. For example, it is easy to locate nodes on an axis based on the total number of edges.

EC2 <- mineHPD(EC1, option = "rad <- tot.edge.count")
sumHPD(EC2)

## E coli gene regulatory network (RegulonDB)
## This hive plot data set contains 1597 nodes on 1 axes and 3893 edges.
## It is a 2D data set.
##
## Axis 1 has 1597 nodes spanning radii from 1 to 434
##
## Axes 1 and 1 share 3893 edges

And then to assign the axis to be plotted on based on whether edges are incoming (sink), outgoing (source), or both (manager). These are the types of decisions that influence whether you get anything insightful or useful out of a HivePlot, and changing these options can of course change the conclusions you will make on a particular network.

EC3 <- mineHPD(EC2, option = "axis <- source.man.sink")
sumHPD(EC3)

## E coli gene regulatory network (RegulonDB)
## This hive plot data set contains 1597 nodes on 3 axes and 3893 edges.
## It is a 2D data set.
##
## Axis 1 has 45 nodes spanning radii from 1 to 83
## Axis 2 has 1416 nodes spanning radii from 1 to 11
## Axis 3 has 136 nodes spanning radii from 2 to 434
##
## Axes 1 and 2 share 400 edges
## Axes 1 and 3 share 21 edges
## Axes 3 and 2 share 3158 edges
## Axes 3 and 3 share 314 edges

We also remove any nodes that have zero edges.

EC4 <- mineHPD(EC3, option = "remove zero edge")

##
## 125 edges that start and end on the same point were removed

And finally re-order the edges (not sure how this would affect plotting using Cytoscape).

edges <- EC4$edges
edgesR <- subset(edges, color == "red")
edgesG <- subset(edges, color == "green")
edgesO <- subset(edges, color == "orange")
edges <- rbind(edgesO, edgesG, edgesR)
EC4$edges <- edges
EC4$edges$weight = 0.5

Calculate Node Locations

In this case we have three axes, so we are going to calculate the axes locations as 0, 120, and 240 degrees. However, we need to use radians, because the conversion from spherical to cartesian coordinates involves using cosine and sine, which in R is based on radians.

r2xy <- function(inRad, inPhi) {
    x <- inRad * sin(inPhi)
    y <- inRad * cos(inPhi)
    cbind(x, y)
}
# contains cartesian coordinates

Create GraphNEL

Initialize the graph with the nodes and the edges.
hiveGraph <- new("graphNEL", nodes = as.character(EC4$nodes$id), edgemode = "directed")
hiveGraph <- addEdge(as.character(EC4$edges$id1), as.character(EC4$edges$id2), hiveGraph)

We also want to put information we know about the nodes and edges in the graph, so that we can modify colors and stuff based on those attributes. For example, in this case we might want to modify the node color based on the axis it is on. Using attributes means we are not stuck using the colors that we previously assigned.

nodeDataDefaults(hiveGraph, "nodeType") <- ""
attr(nodeDataDefaults(hiveGraph, "nodeType"), "class") <- "STRING"
nodeTypes <- c(`1` = "source", `2` = "man", `3` = "sink")
nodeData(hiveGraph, as.character(EC4$nodes$id), "nodeType") <- nodeTypes[as.character(EC4$nodes$axis)]

edgeDataDefaults(hiveGraph, "interactionType") <- ""
attr(edgeDataDefaults(hiveGraph, "interactionType"), "class") <- "STRING"
interactionType <- c(red = "repressor", green = "activator", orange = "dual")
edgeData(hiveGraph, as.character(EC4$edges$id1), as.character(EC4$edges$id2), "interactionType") <- interactionType[EC4$edges$color]

Transfer to Cytoscape

ccHive <- CytoscapeWindow("hiveTest", hiveGraph)
displayGraph(ccHive)

## [1] "nodeType"
## [1] "label"
## [1] "interactionType"

Now let's move those nodes to their positions based on the Hive Graph calculations.
setNodePosition(ccHive, as.character(EC4$nodes$id), nodeXY[, 1], nodeXY[, 2]) fitContent(ccHive) setDefaultNodeSize(ccHive, 5) ## [1] TRUE And set the colors based on attributes: nodeColors <- hcl(h = c(0, 120, 240), c = 55, l = 45) # darker for the nodes edgeColors <- hcl(h = c(0, 120, 60), c = 45, l = 75) # lighter for the edges setNodeColorRule(ccHive, "nodeType", c("source", "man", "sink"), nodeColors, "lookup") setNodeBorderColorRule(ccHive, "nodeType", c("source", "man", "sink"), nodeColors, "lookup") setEdgeColorRule(ccHive, "interactionType", c("repressor", "activator", "dual"), edgeColors, "lookup") setNodeFontSizeDirect(ccHive, as.character(EC4$nodes$id), 0) redraw(ccHive) fitContent(ccHive) saveImage(ccHive, file.path(imgPath, "hive_nonScaled.png"), "PNG") This view doesn't help us a whole lot, unfortunately. What if we normalize the radii for each axis to use a maximum value of 100? useMax <- 100 invisible(sapply(c(1, 2, 3), function(inAxis) { isCurr <- EC4$nodes$axis == inAxis currMax <- max(EC4$nodes$radius[isCurr]) scaleFact <- useMax/currMax EC4$nodes$radius[isCurr] <<- EC4$nodes$radius[isCurr] * scaleFact }))) setNodePosition(ccHive, as.character(EC4$nodes$id), nodeXY[, 1], nodeXY[, 2]) fitContent(ccHive) redraw(ccHive) fitContent(ccHive) saveImage(ccHive, file.path(imgPath, "hive_scaledAxes.png"), "PNG") This looks pretty awesome! And I can zoom in on it, and examine it, and look at various properties! And I get the full scripting power of R if I want to do anything else with, such as select sets of edges or nodes and then query who is attached to whom. Disadvantages We don't get the arced edges. This kind of sucks, but from what little I have done with these, that actually is not that big a deal. Would be cool if there was a way to do that, however. I do see that the web version of Cytoscape does allow you to set a value for how much “arcness” you want on an edge. This does mean that any plot with only two axes would need special consideration. 
Instead of doing two axes end to end (using 180 deg), it might be better to make them parallel to each other. With more than three axes, line crossings may become a problem. In that case, it may be worth looking to see if there are ways to tell Cytoscape in what order to draw edges. I don't know if that is possible using the XMLRPC pipe that is used by RCy. RCy Tip If you want to know how the image will look when saving a network to an image, use showGraphicsDetails(obj, TRUE). Other Visualizations Of course, I had just wrapped my head around using HivePlots in my own work, when I encountered ISBs BioFabric. Given how they are representing this, could we find a way to draw this in Cytoscape?? deleteWindow(ccHive) ## [1] TRUE Session Info Sys.time() ## [1] "2013-09-16 14:53:05 EDT"] HiveR_0.2-16 RCytoscape_1.10.0 XMLRPC_0.3-0 ## [4] graph_1.38.2 samatha_0.3 XML_3.98-1.1 ## [7] RJSONIO_1.0-3 markdown_0.6.1 knitr_1.3 ## [10] stringr_0.6.2 ## ## loaded via a namespace (and not attached): ## [1] BiocGenerics_0.6.0 digest_0.6.3 evaluate_0.4.4 ## [4] formatR_0.8 grid_3.0.0 parallel_3.0.0 ## [7] plyr_1.8 RColorBrewer_1.0-5 RCurl_1.95-4.1 ## [10] stats4_3.0.0 tcltk_3.0.0 tkrgl_0...
http://www.r-bloggers.com/hive-plots-using-r-and-cytoscape/
Create a Greasemonkey script to highlight search entries relative to nearby content

Level: Introductory

Nathan Harrington, Programmer, IBM

12 Aug 2008

Native text-search capabilities in Firefox provide useful highlighting of contiguous search terms and phrases. Additional Firefox extensions are available to incorporate regular-expression searches and other text-highlighting capabilities. This article presents the tools and code needed to add your own text-searching interface to Firefox. With a Greasemonkey user script and some custom algorithms, you'll be able to add grep -v functionality to text searches — that is, highlighting a first search term where a second one is not located nearby.

Requirements

Hardware

Text searches on typical Web pages with older (pre-2002) hardware are nearly instantaneous. However, the code presented here is not designed for speed and may require faster hardware to perform at a user-friendly speed on large Web pages.

Software

The code was developed for use with Firefox V2.0 and Greasemonkey V0.7. Newer versions of both will require testing and possibly modifications to ensure their functionality. As a Greasemonkey script, the code presented here should work on any operating system that supports Firefox and Greasemonkey. We tested on Microsoft® Windows® and Linux® Ubuntu V7.10 releases.

Greasemonkey and Firefox extensions

User modification to Web pages is the role Greasemonkey fulfills, and the code presented here uses the Greasemonkey framework to search for and highlight the relevant text.
See Resources for the Greasemonkey Firefox extension.

Examples of what this Greasemonkey script is designed to do

Those familiar with the UNIX grep command and its common -v option know how indispensable grep is for extracting relevant lines of text from a file. Text files conforming to the UNIX tradition of simplicity generally store their text in a line-by-line format that makes it easy to find words close together. The -v option prints lines where the specified text is not found.

Unlike text files, Web pages generally divide text with tags and other markers rendered into lines by the browser. A wide variety of browser window sizes makes it difficult to isolate nearby text based on expected line positions. Tables, links, and other text markup also make it difficult to isolate text that is in the same "line." Algorithms in this article are designed to address some of these difficulties by providing a simple grep-like functionality piped to a function that works like grep's -v option. This allows the user to find a certain word of text, then only highlight entries where a different word is not nearby. Figure 1 shows what this can look like. In the top portion of the image, the search text of "DOM" is highlighted by the script. In the bottom portion, notice how only the first three "DOM" entries are highlighted because the second search text of "hierarchy" is found in close proximity to the third "DOM."

Consider Figure 2. The first portion of the image shows all the 2008 entries, while the second portion only shows the before-noon entries due to the -v keyword of PM. Read on for full details and further examples of how to implement this functionality.

The greppishFind.user.js Greasemonkey user script

An introduction to the unique aspects of the Greasemonkey programming environment is beyond the scope of this article. Familiarity with Greasemonkey, including how to install, modify, and debug scripts, is assumed.
Consult the Resources for more information about Greasemonkey. Generally speaking, the greppishFind.user.js user script is started on a page load, provides a text area after a specific key combination is entered, and performs highlighting searches based on user-entered text. Listing 1 shows the beginning of the greppishFind.user.js user script.

// ==UserScript==
// @name          greppishFind
// @namespace     IBM developerWorks
// @description   grep and grep -v function-ish for one or two word searches
// ==/UserScript==

var boxAdded = false;       // user interface for search active
var dist = 10;              // proximity distance between words
var highStart = '<high>';   // begin and end highlight tags
var highEnd = '</high>';
var lastSearch = null;      // previous highlight text

window.addEventListener('load', addHighlightStyle, 'true');
window.addEventListener('keyup', globalKeyPress, 'true');

After defining the required metadata that describes the user script and its function, the global variables, and the highlighting tags, the load and keyup event listeners are added to process user-generated events. Listing 2 details the addHighlightStyle function called by the load event listener.

function addHighlightStyle(css) {
  var head = document.getElementsByTagName('head')[0];
  if( !head ) { return; }
  var style = document.createElement('style');
  var cssStr = "high {color: black; background-color: yellow; }";
  style.type = 'text/css';
  style.innerHTML = cssStr;
  head.appendChild(style);
}//addHighlightStyle

The function creates a new node in the current DOM hierarchy with the appropriate highlighting information. In this case, it's a simple black-on-yellow text attribute. Listing 3 shows the code of the other event listener, globalKeyPress, as well as the boxKeyPress function.
function globalKeyPress(e) {
  // add the user interface text area and button, set focus and event listener
  if( boxAdded == false && e.altKey && e.keyCode == 61 ) {
    boxAdded = true;
    var boxHtml = "<textarea wrap='virtual' id='sBoxArea' " +
        "style='width:300px;height:20px'></textarea>" +
        "<input name='btnHighlight' id='tboxButton' " +
        "value='Highlight' type='submit'>";
    var tArea = document.createElement("div");
    tArea.innerHTML = boxHtml;
    document.body.insertBefore(tArea, document.body.firstChild);

    tArea = document.getElementById("sBoxArea");
    tArea.focus();
    tArea.addEventListener('keyup', boxKeyPress, true );

    var btn = document.getElementById("tboxButton");
    btn.addEventListener('mouseup', processSearch, true );
  }//if alt = pressed
}//globalKeyPress

function boxKeyPress(e) {
  if( e.keyCode != 13 ){ return; }
  var textarea = document.getElementById("sBoxArea");
  textarea.value = textarea.value.substring(0, textarea.value.length-1);
  processSearch();
}//boxKeyPress

Catching each keystroke and listening for a specific combination is the purpose of globalKeyPress. When the Alt+= keys are pressed (that is, hold Alt and press the = key), the user interface for the search box is added to the current DOM. This interface consists of a text area for entering the keywords and a Submit button. After the new items are added, the text area needs to be selected by the getElementById function to set the focus correctly. Event listeners are then added to process the keystrokes in the text area, as well as executing the search when the Submit button is clicked.

The second function in Listing 3 processes each keystroke in the text area. If the Enter key is pressed, the text area's value has the newline removed and the processSearch function executed. Listing 4 details the processSearch function.
function processSearch() {
  // remove any existing highlights
  if( lastSearch != null ) {
    var splitResult = lastSearch.split( ' ' );
    removeIndicators( splitResult[0] );
  }//if last search exists

  var textarea = document.getElementById("sBoxArea");
  if( textarea.value.length > 0 ) {
    var splitResult = textarea.value.split( ' ' );
    if( splitResult.length == 1 ) {
      oneWordSearch( splitResult[0] );
    }else if( splitResult.length == 2 ) {
      twoWordSearch( splitResult[0], splitResult[1] );
    }else {
      textarea.value = "Only two words supported";
    }//if number of words
  }//if longer than required

  lastSearch = textarea.value;
}//processSearch

Each search is stored in the lastSearch variable so that its highlights can be removed each time processSearch is called. After the removal, the search query is highlighted using oneWordSearch if there is only one query word, or the twoWordSearch function if the grep -v functionality is desired. Listing 5 shows the details of the removeIndicators function.
Listing 6 is the getHtml function that shows in detail how to find the appropriate parent node. getHtml innerHTML function getHtml( tempNode ) { // walk up the tree to find the appropriate node var stop = 0; while( stop == 0 ) { if( tempNode.parentNode != null && tempNode.parentNode.innerHTML != null ) { // make sure it contains the tags to be removed if( tempNode.parentNode.innerHTML.indexOf( highStart ) != -1 ) { // make sure it's not the title or greppishFind UI node if( tempNode.parentNode.innerHTML.indexOf( "<title>" ) == -1 && tempNode.parentNode.innerHTML.indexOf("btnHighlight") == -1) { return( tempNode ); }else{ return(null); } // the highlight tags were not found, so go up the tree }else{ tempNode = tempNode.parentNode; } // stop the processing when the top of the tree is reached }else{ stop = 1; } }//while return( null ); }//getHtml While walking up the DOM tree in search of the innerHTML with the highlighting tags inserted, it is important to disregard two specific nodes. The nodes containing title and btnHighlight should not be updated, as changes in these nodes cause the document to display incorrectly. When the correct node is found, regardless of the number of parents up the DOM tree it is, the node is returned and the highlighting removed. Listing 7 is the first of the functions that adds highlighting to the document. title btnHighlight function oneWordSearch( ) { highlightAll( textNode, textIn ); }//if word found }//for each text node }//oneWordSearch Again using XPath, oneWordSearch processes each text node to find the query. When found, the highlightAll function is called, as shown in Listing 8. 
highlightAll function highlightAll( nodeOne, textIn ) { if( nodeOne.parentNode != null ) { full = nodeOne.parentNode.innerHTML; var reg = new RegExp( textIn, "g"); full = full.replace( reg, highStart + textIn + highEnd ); nodeOne.parentNode.innerHTML = full; }//if the parent node exists }//highlightAll function highlightOne( nodeOne, wordOne, wordTwo ) { var oneIdx = nodeOne.data.indexOf( wordOne ); var tempStr = nodeOne.data.substring( oneIdx + wordOne.length ); var twoIdx = tempStr.indexOf( wordTwo ); // only create the highlight if it's not too close if( twoIdx > dist ) { var reg = new RegExp( wordOne ); var start = nodeOne.parentNode.innerHTML.replace( reg, highStart + wordOne + highEnd ); nodeOne.parentNode.innerHTML = start; }//if the distance threshold exceeded }//highlightOne Similar to the removeIndicators function, highlightAll uses a regular expression to replace the text to be highlighted with markup, including the highlighting tags and the original text. Function highlightOne, used later in the twoWordSearch function, checks that the first word is sufficiently far away from the second word, then performs the same replacement. Word distance checks need to take place in the rendered text as returned from the XPath statement; otherwise, various markup, such as <b>, will affect the distance calculations. Listing 9 shows the twoWordSearch function in detail. 
highlightOne <b> function twoWordSearch( wordOne, wordTwo ) { // use XPath to quickly extract all of the rendered text var textNodes = document.evaluate( '//text()', document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null ); var nodeOne; var foundSingleNode = 0; for (var i = 0; i < textNodes.snapshotLength; i++) { textNode = textNodes.snapshotItem(i); // if both words in the same node, highlight if not too close if( textNode.data.indexOf( wordOne ) != -1 && textNode.data.indexOf( wordTwo ) != -1 ) { highlightOne( textNode, wordOne, wordTwo ); foundSingleNode = 0; nodeOne = null; }else { if( textNode.data.indexOf( wordOne ) != -1 ) { // if the first word is already found, highlight the entry if( foundSingleNode == 1 && nodeOne.parentNode != null && nodeOne.parentNode.innerHTML.indexOf( wordTwo ) == -1 ) { highlightAll( nodeOne, wordOne ); }//if second word is in the same parent node // record current node found nodeOne = textNode; foundSingleNode = 1; }//if text match if( textNode.data.indexOf( wordTwo ) != -1 ){ foundSingleNode = 0; } }//if both words in single node }//for each text node // no second word nearby, highlight all entries if( foundSingleNode == 1 ){ highlightAll( nodeOne, wordOne ); } }//twoWordSearch Walking through each text node as retrieved from the XPath call is done the same way as in the oneWordSearch function. If both words are found within the current text node, the highlightOne function is called to highlight the instances of wordOne where it is sufficiently distant from wordTwo. wordOne wordTwo If both words are not in the same node, the foundSingleNode variable is set on the first match. On subsequent matches, the highlightAll function is called when the single node is detected again before a second node match. This ensures that each instance of the first word is highlighted — even those that do not have the second word nearby. 
Upon a loop, a final check is made to run highlightAll if the last wordOne match was isolated and still needs to be highlighted. foundSingleNode Save the file created with the above code as greppishFind.user.js and read on for installation and usage details. Installing the greppishFind.user.js script Open your Firefox browser with the Greasemonkey V0.7 extension installed and enter the URL to the directory where greppishFind.user.js is located. Click on the greppishFind.user.js file and you should see the standard Greasemonkey install pop up. Select install, then reload the page to activate the extension. Usage examples Once the greppishFind.user.js script is installed into Greasemonkey, you can mimic the examples shown in Figure 1 by entering dom inspector as a search query at. When the results page appears, press Alt+= to activate the user interface. Type the query DOM (case-sensitive) and press Enter to see all entries of DOM highlighted. Change the query to DOM hierarchy, and you'll see how only the first three entries of DOM are highlighted, as shown in Figure 1. dom inspector DOM DOM hierarchy Choose a directory listing such as or to show entries like those listed in Figure 2. You may want to experiment with changes to the distance parameter or highlighting style to achieve results tailored to your searches. Conclusion, further additions With the code above and your completed greppishFind.user.js program, you now have a baseline for implementing your own text-search capabilities in Firefox. Although this program focuses on specific cases of certain words appearing in close proximity to others, it provides a framework for further text-searching options. Consider adding color changes for highlighted words based on how close the secondary terms are. Expand the number of grep -v words to eliminate entries gradually. Use the code here and your own ideas to create new Greasemonkey user scripts that further enhance users' abilities to find text. 
Download Resources About the author Nathan Harrington is a programmer working with Linux at IBM. Check out his recently published articles. Rate this page Please take a moment to complete this form to help us better serve you. Did the information help you to achieve your goal? Please provide us with comments to help improve this page: How useful is the information?
http://www.ibm.com/developerworks/opensource/library/os-customsearch-firefox/index.html
COM programming requires lots of housekeeping and infrastructure-level code to build large-scale, enterprise applications. To make it easier to develop and deploy transactional and scalable COM applications, Microsoft released Microsoft Transaction Server (MTS). MTS allows you to share resources, thereby increasing the scalability of an application. COM+ Services were the natural evolution of MTS. While MTS was just another library on top of COM, COM+ Services were subsumed into the COM library, thus combining both COM and MTS into a single runtime. COM+ Services have been very valuable to the development shops using the COM model to build applications that take advantage of transactions, object pooling, role-based security, etc. If you develop enterprise .NET applications, the COM+ Services in .NET are a must. In the following examples, rather than feeding you more principles, we'll show you examples for using major COM+ Services in .NET, including examples on transactional programming, object pooling, and role-based security. But before you see these examples, let's talk about the key element, attributes, that enables the use of these services in .NET.

Attributes are the key element that helps you write less code and allows an infrastructure to automatically inject the necessary code for you at runtime. If you've used IDL (Interface Definition Language) before, you have seen the in or out attributes, as in the following example:

HRESULT SetAge([in] short age);
HRESULT GetAge([out] short *age);

IDL allows you to add these attributes so that the marshaler will know how to optimize the use of the network. Here, the in attribute tells the marshaler to send the contents from the client to the server, and the out attribute tells the marshaler to send the contents from the server to the client. In the SetAge( ) method, passing age from the server to the client will just waste bandwidth.
Similarly, there's no need to pass age from the client to the server in the GetAge( ) method. While in and out are built-in attributes the MIDL compiler supports, .NET allows you to create your own custom attributes by deriving from the System.Attribute class. Here's an example of a custom attribute:

using System;

public enum Skill { Guru, Senior, Junior }

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Field |
                AttributeTargets.Method | AttributeTargets.Property |
                AttributeTargets.Constructor | AttributeTargets.Event)]
public class AuthorAttribute : System.Attribute
{
  public AuthorAttribute(Skill s)
  {
    level = s;
  }
  public Skill level;
}

The AttributeUsage attribute that we've applied to our AuthorAttribute class specifies the rules for using AuthorAttribute.[9] Specifically, it says that AuthorAttribute can prefix or describe a class or any class member.

[9] You don't have to postfix your attribute class name with the word "Attribute", but this is a standard naming convention that Microsoft uses. C# lets you name your attribute class any way you like; for example, Author is a valid class name for your attribute.

Given that we have this attribute, we can write a simple class to make use of it. To apply our attribute to a class or a member, we simply make use of the attribute's available constructors. In our case, we have only one and it's AuthorAttribute( ), which takes an author's skill level. Although you can use AuthorAttribute( ) to instantiate this attribute, .NET allows you to drop the Attribute suffix for convenience, as shown in the following code listing:

[Author(Skill.Guru)]
public class Customer
{
  [Author(Skill.Senior)]
  public void Add(string strName) { }

  [Author(Skill.Junior)]
  public void Delete(string strName) { }
}

You'll notice that we've applied the Author attribute to the Customer class, telling the world that a guru wrote this class definition.
This code also shows that a senior programmer wrote the Add( ) method and that a junior programmer wrote the Delete( ) method. You won't see the full benefits of attributes until you write a simple interceptor-like program, which looks for special attributes and provides additional services appropriate for these attributes. Real interceptors include marshaling, transaction, security, pooling, and other services in MTS and COM+. Here's a simple interceptor-like program that uses the Reflection API to look for AuthorAttribute and provide additional services. You'll notice that we can ask a type, Customer in this case, for all of its custom attributes. In our code, we ensure that the Customer class has attributes and that the first attribute is AuthorAttribute before we output the appropriate messages to the console. In addition, we look for all members that belong to the Customer class and check whether they have custom attributes. If they do, we ensure that the first attribute is an AuthorAttribute before we output the appropriate messages to the console.
using System;
using System.Reflection;

public class interceptor
{
  public static void Main( )
  {
    Object[] attrs = typeof(Customer).GetCustomAttributes(false);
    if ((attrs.Length > 0) && (attrs[0] is AuthorAttribute))
    {
      Console.WriteLine("Class [{0}], written by a {1} programmer.",
        typeof(Customer).Name, ((AuthorAttribute)attrs[0]).level);
    }
    MethodInfo[] mInfo = typeof(Customer).GetMethods( );
    for (int i=0; i < mInfo.Length; i++)
    {
      attrs = mInfo[i].GetCustomAttributes(false);
      if ((attrs.Length > 0) && (attrs[0] is AuthorAttribute))
      {
        AuthorAttribute a = (AuthorAttribute)attrs[0];
        Console.WriteLine("Method [{0}], written by a {1} programmer.",
          mInfo[i].Name, (a.level));
        if (a.level == Skill.Junior)
        {
          Console.WriteLine("***Performing automatic " +
            "review of {0}'s code***", a.level);
        }
      }
    }
  }
}

It is crucial to note that when this program sees a piece of code written by a junior programmer, it automatically performs a rigorous review of the code. If you compile and run this program, it will output the following to the console:

Class [Customer], written by a Guru programmer.
Method [Add], written by a Senior programmer.
Method [Delete], written by a Junior programmer.
***Performing automatic review of Junior's code***

Although our interceptor-like program doesn't intercept any object-creation and method invocations, it does show how a real interceptor can examine attributes at runtime and provide necessary services stipulated by the attributes. Again, the key here is the last boldface line, which represents a special service that the interceptor provides as a result of attribute inspection. In this section, we'll show you that it's easy to write a .NET class to take advantage of the transaction support that COM+ Services provide. All you need to supply at development time are a few attributes, and your .NET components are automatically registered against the COM+ catalog the first time they are used.
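As an aside, the attribute-plus-interceptor idea is not specific to .NET. The following is a hedged Python sketch of the same pattern (all names here are made up for illustration): function attributes stand in for custom attributes, and a plain loop over the class stands in for the reflection-based interceptor.

```python
# Rough analogue of the AuthorAttribute/interceptor example above: metadata is
# attached to methods, and an "interceptor" inspects it at runtime to decide on
# extra services (here: flagging junior-authored code for review).

GURU, SENIOR, JUNIOR = "Guru", "Senior", "Junior"

def author(level):
    def tag(func):
        func.author_level = level  # plays the role of a custom attribute
        return func
    return tag

class Customer:
    @author(SENIOR)
    def add(self, name):
        pass

    @author(JUNIOR)
    def delete(self, name):
        pass

def interceptor(cls):
    reviews = []
    for name in dir(cls):
        member = getattr(cls, name)
        level = getattr(member, "author_level", None)
        if level is not None:
            print("Method [%s], written by a %s programmer." % (name, level))
            if level == JUNIOR:
                reviews.append(name)  # the "special service", as in the C# version
    return reviews

flagged = interceptor(Customer)  # flags ['delete']
```

The decorator only records metadata; all decisions happen in the inspecting loop, which mirrors how COM+ reads attributes out of the metadata and configures services accordingly.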
Put differently, not only do you get easier programming, but you also get just-in-time and automatic registration of your COM+ application.[10]

[10] Automatic registration is nice during development, but don't use this feature in a production environment, because not all clients will have the administrative privilege to set up COM+ applications.

To develop a .NET class that supports transactions, here's what must happen:

Your class must derive from the ServicedComponent class to exploit COM+ Services.

You must describe your class with the correct Transaction attribute, such as Transaction(TransactionOption.Required), meaning that instances of your class must run within a transaction.

Besides these two requirements, you can use the ContextUtil class (which is a part of the System.EnterpriseServices namespace) to obtain information about the COM+ object context. This class exposes the major functionality found in COM+, including methods such as SetComplete( ), SetAbort( ), and IsCallerInRole( ), and properties such as IsInTransaction and MyTransactionVote. In addition, while it's not necessary to specify any COM+ application installation options, you should do so because you get to specify what you want, including the name of your COM+ application, its activation setting, its versions, and so on. For example, in the following code listing, if you don't specify the ApplicationName attribute, .NET will use the module name as the COM+ application name, displayed in the Component Services Explorer (or COM+ Explorer). For example, if the name of the module is crm.dll, the name of your COM+ application will be crm. Other than this attribute, we also use the ApplicationActivation attribute to specify that this component will be installed as a library application, meaning that the component will be activated in the creator's process:

[assembly: ApplicationName(".NET Framework Essentials CRM")]
[assembly: ApplicationActivation(ActivationOption.Library)]

The rest should look extremely familiar.
In the Add( ) method, we simply call SetComplete( ) when we've successfully added the new customer into our databases. If something has gone wrong during the process, we will vote to abort this transaction by calling SetAbort( ).

[Transaction(TransactionOption.Required)]
public class Customer : ServicedComponent
{
  public void Add(string strName)
  {
    try
    {
      Console.WriteLine("New customer: {0}", strName);
      // Add the new customer into the system
      // and make appropriate updates to
      // several databases.
      ContextUtil.SetComplete( );
    }
    catch(Exception e)
    {
      Console.WriteLine(e.ToString( ));
      ContextUtil.SetAbort( );
    }
  }
}

Instead of calling SetComplete( ) and SetAbort( ) yourself, you can also use the AutoComplete attribute, as in the following code, which is conceptually equivalent to the previously shown Add( ) method:

[AutoComplete]
public void Add(string strName)
{
  Console.WriteLine("New customer: {0}", strName);
  // Add the new customer into the system
  // and make appropriate updates to
  // several databases.
}

Here's how you build this assembly:

csc /t:library /out:crm.dll crm.cs

Since this is a shared assembly, remember to register it against the GAC by using the GAC utility:

gacutil /i crm.dll

At this point, the assembly has not been registered as a COM+ application, but we don't need to register it manually. Instead, .NET automatically registers and hosts this component for us in a COM+ application the first time we use this component. So, let's write a simple client program that uses this component at this point.
As you can see in the following code, we instantiate a Customer object and add a new customer:

using System;

public class Client
{
  public static void Main( )
  {
    try
    {
      Customer c = new Customer( );
      c.Add("John Osborn");
    }
    catch(Exception e)
    {
      Console.WriteLine(e.ToString( ));
    }
  }
}

We can build this program as follows:

csc /r:crm.dll /t:exe /out:client.exe client.cs

When we run this application, COM+ Services automatically create a COM+ application called .NET Framework Essentials CRM to host our crm.dll .NET assembly, as shown in Figure 4-5. In addition to adding our component to the created COM+ application, .NET also inspects our metadata for provided attributes and configures the associated services in the COM+ catalog. A pool is a technical term that refers to a group of resources, such as connections, threads, and objects. Putting a few objects into a pool allows hundreds of clients to share these few objects (you can make the same assertion for threads, connections, and other objects). Pooling is, therefore, a technique that minimizes the use of system resources, improves performance, and helps system scalability. Missing in MTS, object pooling is a nice feature in COM+ that allows you to pool objects that are expensive to create. Similar to providing support for transactions, if you want to support object pooling in a .NET class, you need to derive from ServicedComponent, override any of the Activate( ), Deactivate( ), and CanBePooled( ) methods, and specify the object-pooling requirements in an ObjectPooling attribute, as shown in the following example:[11]

[11] Mixing transactions and object pooling should be done with care.
See COM and .NET Component Services, by Juval Löwy (O'Reilly).

[Transaction(TransactionOption.Required)]
[ObjectPooling(MinPoolSize=1, MaxPoolSize=5)]
public class Customer : ServicedComponent
{
  public Customer( )
  {
    Console.WriteLine("Some expensive object construction.");
  }

  [AutoComplete]
  public void Add(string strName)
  {
    Console.WriteLine("Add customer: {0}", strName);
    // Add the new customer into the system
    // and make appropriate updates to
    // several databases.
  }

  override protected void Activate( )
  {
    Console.WriteLine("Activate");
    // Pooled object is being activated.
    // Perform the appropriate initialization.
  }

  override protected void Deactivate( )
  {
    Console.WriteLine("Deactivate");
    // Object is about to be returned to the pool.
    // Perform the appropriate clean up.
  }

  override protected bool CanBePooled( )
  {
    Console.WriteLine("CanBePooled");
    return true; // Return the object to the pool.
  }
}

Take advantage of the Activate( ) and Deactivate( ) methods to perform appropriate initialization and cleanup. The CanBePooled( ) method lets you tell COM+ whether your object can be pooled when this method is called. You need to provide the expensive object-creation functionality in the constructor, as shown in the constructor of this class. Given this Customer class that supports both transaction and object pooling, you can write the following client-side code to test object pooling. For brevity, we will create only two objects, but you can change this number to anything you like so that you can see the effects of object pooling. Just to ensure that you have the correct configuration, delete the current .NET Framework Essentials CRM COM+ application from the Component Services Explorer before running the following code:

for (int i=0; i<2; i++)
{
  Customer c = new Customer( );
  c.Add(i.ToString( ));
}

Running this code produces the following results:

Some expensive object construction.
Activate
Add customer: 0
Deactivate
CanBePooled
Activate
Add customer: 1
Deactivate
CanBePooled

We've created two objects, but since we've used object pooling, only one object is really needed to support our calls, and that's why you see only one output statement that says Some expensive object construction. In this case, COM+ creates only one Customer object, but activates and deactivates it twice to support our two calls. After each call, it puts the object back into the object pool. When a new call arrives, it picks the same object from the pool to service the request. Role-based security in MTS and COM+ has drastically simplified the development and configuration of security for business applications. This is because it abstracts away the complicated details for dealing with access control lists (ACL) and security identifiers (SID). All .NET components that are hosted in a COM+ application can take advantage of role-based security. You can fully configure role-based security using the Component Services Explorer, but you can also manage role-based security in your code to provide fine-grain security support that's missing from the Component Services Explorer. In order to demonstrate role-based security, let's add two roles to our COM+ application, .NET Framework Essentials CRM. The first role represents Agent, who can use the Customer class in every way but can't delete customers. You should create this role and add to it the local Users group, as shown in Figure 4-6. The second role represents Manager, who can use the Customer class in every way, including deleting customers. Create this role and add to it the local Administrators group. Once you create these roles, you need to enable access checks for the .NET Framework Essentials CRM COM+ application. Launch the COM+ application's Properties sheet (by selecting .NET Framework Essentials CRM and pressing Alt-Enter), and select the Security tab.
Enable access checks to your COM+ application by providing the options, as shown in Figure 4-7. Once you have enabled access checks at the application level, you need to enforce access checks at the class level, too. To do this, launch Customer's Properties sheet, and select the Security tab. Enable access checks to this .NET class by providing the options shown in Figure 4-8. Here, we're saying that no one can access the Customer class except for those that belong to the Manager or Agent role. Now, if you run the client application developed in the last section, everything will work because you are a user on your machine. But if you uncheck both the Manager[12] and Agent roles in Figure 4-8 and rerun the client application, you get the following message as part of your output:

[12] Since you're a developer, you're probably an administrator on your machine, so you need to uncheck the Manager role, too, in order to see an access violation in the test that we're about to illustrate.

System.UnauthorizedAccessException: Access is denied.

You're getting this exception because you've removed yourself from the roles that have access to the Customer class. Once you've verified this, put the configuration back to what is shown in Figure 4-8 to prepare the environment for the next test that we're about to illustrate. We've allowed anyone in the Agent and Manager roles to access our class, but let's invent a rule allowing only users under the Manager role to delete a customer from the system (for lack of a better example). So let's add a new method to the Customer class; we'll call this method Delete( ), as shown in the following code. Anyone belonging to the Agent or Manager role can invoke this method, so we'll first output to the console the user account that invokes this method. After doing this, we'll check to ensure that this user belongs to the Manager role.
If so, we allow the call to go through; otherwise, we throw an exception indicating that only managers can perform a deletion. Believe it or not, this is the basic premise for programming role-based security:

[AutoComplete]
public void Delete(string strName)
{
  try
  {
    SecurityCallContext sec;
    sec = SecurityCallContext.CurrentCall;
    string strCaller = sec.DirectCaller.AccountName;
    Console.WriteLine("Caller: {0}", strCaller);
    bool bInRole = sec.IsCallerInRole("Manager");
    if (!bInRole)
    {
      throw new Exception("Only managers can delete customers.");
    }
    Console.WriteLine("Delete customer: {0}", strName);
    // Delete the new customer from the system
    // and make appropriate updates to
    // several databases.
  }
  catch(Exception e)
  {
    Console.WriteLine(e.ToString( ));
  }
}

Here's the client code that includes a call to the Delete( ) method:

using System;

public class Client
{
  public static void Main( )
  {
    try
    {
      Customer c = new Customer( );
      c.Add("John Osborn");
      // Success depends on the role
      // under which this method
      // is invoked.
      c.Delete("Jane Smith");
    }
    catch(Exception e)
    {
      Console.WriteLine(e.ToString( ));
    }
  }
}

Once you've built this program, you can test it using an account that belongs to the local Users group, since we added this group to the Agent role earlier. On Windows 2000 or XP, you can use the following command to launch a command window using a specific account:

runas /user:DEVTOUR\student cmd

Of course, you should replace DEVTOUR and student with your own machine name and user account, respectively. After running this command, you will need to type in the correct password, and a new command window will appear. Execute the client under this user account, and you'll see the following output:

Add customer: John Osborn
Caller: DEVTOUR\student
System.Exception: Only managers can delete customers.
at Customer.Delete(String strName)

You'll notice that the Add( ) operation went through successfully, but the Delete( ) operation failed, because we executed the client application under an account that's missing from the Manager role. To remedy this, we need to use a user account that belongs to the Manager role; any account that belongs to the Administrators group will do. So, start another command window using a command similar to the following:

runas /user:DEVTOUR\instructor cmd

Execute the client application again, and you'll get the following output:

Add customer: John Osborn
Caller: DEVTOUR\instructor
Delete customer: Jane Smith

As you can see, since we've executed the client application using an account that belongs to the Manager role, the Delete( ) operation went through without problems.
http://etutorials.org/Programming/.NET+Framework+Essentials/Chapter+4.+Working+with+.NET+Components/4.3+COM+Services+in+.NET/
Y B (14,135 Points)

Got it right but have a quick question

I got the objective right with the below, but I'm confused about a general property of Python. I'm not sure how the delorean function can access the starter variable without it being an argument to the function, so wouldn't I need two arguments (int, starter)?

import datetime

starter = datetime.datetime(2015, 10, 21, 16, 29)

def delorean(int1):
    timedif = datetime.timedelta(hours=int1)
    return starter + timedif

3 Answers

Vittorio Somaschini (33,367 Points)

Hello Y B, the starter variable in this case can be accessed by all the functions that you write, as its scope is global. It is not nested inside another function; it is (let's say) at the top level. This means that this variable can be accessed everywhere else as it is, with no need to specify it in the function's arguments. It is not just related to Python; it is something that happens across the programming world as far as I understood. Have a good day. Vittorio

Y B (14,135 Points)

Ok, thanks. I was getting confused after going through classes and having a class variable (is that the right name?) which is accessed within a method via 'self' (i.e. that's how it is passed in for use in the method). Can a method inside a class access global variables (assuming they are in the same script)? I haven't done the functional programming course yet, but I heard that functional programming shouldn't have state changes. I'm not entirely sure what that means, but does it mean it shouldn't change global variables? Is that correct?

Y B (14,135 Points)

Thanks, ok so effectively act as if every variable is immutable, or only affect variables as if they were immutable. Anyway, I'll wait until I do the course to find out more. Thanks.

Kenneth Love (Treehouse Guest Teacher)

Yeah, pretty much. If you have a Pro membership, I did a Python functional programming workshop last month that's available there.
Kenneth Love (Treehouse Guest Teacher)

Variables that belong to self are instance variables. Variables at a higher scope can usually be accessed. And, yeah, functional programming has, as a core belief, that you shouldn't mutate any variables in place. No side effects!
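Pulling the thread's points together, here is a small runnable sketch (with hypothetical names) showing both cases: a module-level (global) name read inside a plain function, and an instance variable reached via self inside a method, which can also read the global.

```python
import datetime

# Module-level (global) name: readable from inside any function in this
# file without being passed as an argument.
starter = datetime.datetime(2015, 10, 21, 16, 29)

def delorean(hours):
    # 'starter' is resolved through the enclosing module (global) scope.
    return starter + datetime.timedelta(hours=hours)

class TimeCircuits:
    def __init__(self):
        # Instance variable: attached to this object, reached via self.
        # Methods can read module-level globals too.
        self.destination = starter

    def jump(self, hours):
        # Both the global 'starter' and the instance variable are visible here.
        return self.destination + datetime.timedelta(hours=hours)

print(delorean(1))             # 2015-10-21 17:29:00
print(TimeCircuits().jump(1))  # same result, via an instance variable
```

Note that reading a global works without any declaration; only *rebinding* a global name inside a function would require a `global` statement.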
https://teamtreehouse.com/community/got-it-right-but-have-a-quick-question
Uncategorized

A Tale of Contrasts

August 26, 2011
John Scalzi
21 Comments

This picture does in fact illustrate the difference in personality between these two animals. Photo once again by Athena.

“A Tale of Contrasts” Does that not illustrate the personality difference between most dogs and most cats? I think it does.

The standard cliche is, “a picture is worth a thousand words”. John, I believe this picture beats that by far.

“What?”
“Nothing.”
“What’s wrong with you?”
“I said, Nothing.”
“You know what you need? You need to roll in the grass. Watch!”
“Whatever.”
“C’mon! Try it! You’ll love it. Here, I’ll show you again.”
“Mm-hmm.”
“Why won’t you try it?”
“Excuse me? I think the more relevant question is, Why should I?”
“Because it feels good! That’s why.”
“Oh, I hardly think so.”
“How do you know unless you try it?”
“That you even ask that question shows that you know nothing.”
“Well I don’t care. I’m going to roll in the grass again!”

This is the photographic equivalent of that six-word Hemingway story (though gleeful rather than mournful). Awesome.

I think #3 has it nailed.

Amen, brother! :-)

You must be getting less rain than here in S.E. Michigan. Your grass looks grey; we’ve had to mow multiple times in the last few weeks. Or the picture is from earlier in the year, during that heat wave.

A new type of interpretive dance, IMO.

I personally would not roll on the ground, thus exposing my jugular, if a cat looked at me like that (even a small cat). I suspect that the cat is thinking “Food, food, food … why am I not a lion?” Will

Looks like Daisy is trying to incriminate Lopsided Cat.

“What are you doing?”
“Can’t talk. You killed me. You’re really gonna get it when Master finds out.”
“But you’re still breathing.”
“No thanks to you, murderer. Can’t wait to see what Master does to you. I mean, if I was still alive to see.”
“(Sigh) Not to be heartless, but if you’re dead can I have your tennis ball?”
“OH BOY, BALL! PLAY BALL!”

One of my cats actually enjoys rolling around in mud puddles. It’s unfortunate because he’s a beautiful, cream colored, long hair cat. However, you can almost never tell that because he’s covered in filth. The neighbors have nicknamed him trash cat… :(

I believe it should be “A Tail of Contrasts”

Lopsided Cat is looking at Athena. The gaze, to me, says “Now *this* is what I’ve been telling you about. I know, I can’t believe it either. Daisy is such a goof.” Also, great animal action shot, Athena.

Gal @ 10: One of my mother’s cats — a longhaired mostly-white cat — was the same way, except her passion was concrete. Which probably makes sense — it’s cool and has a texture she found suitable for scratching her back. But that cat really had problems caring for her coat (and hated everyone except Mom, so grooming her was awful).

Great picture, Athena!

#include "ObCaptions.h"

Daisy prostrates herself to SN 2011 FE *
unimpressed cat is not impressed
________
(*) I’m guessing that the supernova would have been below the local horizon, so yeah, “prostrates” rather than “wriggles defenselessly on her back”.

Ah yes, the famous Bruce Lee “One Inch Punch” – have seen it before.

I would like to subscribe to The Tao of Daisy.

ben @16: I’ll go with The Te of Lopsided Cat, myself. ;-)

Dogs have ADHD, cats are OCD. Great shot, Athena.

Shirt seen at con: I have CDO. It’s like OCD but the letters are alphabetically ordered AS THEY SHOULD BE.

Is the Vang about to attack your poor dog??????

Becca @ 13: My cat does the same, she loves rolling on concrete.
http://whatever.scalzi.com/2011/08/26/a-tale-of-contrasts/
The method charAt(int index) returns the character at the specified index. The index value should lie between 0 and length()-1. For example, s.charAt(0) would return the first character of the string s. It throws IndexOutOfBoundsException if the index is less than zero or greater than or equal to the length of the string (index < 0 || index >= length()).

Example: In this example we are fetching a few characters of the input string using the charAt() method.

public class CharAtExample {
    public static void main(String args[]) {
        String str = "Welcome to string handling tutorial";
        char ch1 = str.charAt(0);
        char ch2 = str.charAt(5);
        char ch3 = str.charAt(11);
        char ch4 = str.charAt(20);
        System.out.println("Character at 0 index is: "+ch1);
        System.out.println("Character at 5th index is: "+ch2);
        System.out.println("Character at 11th index is: "+ch3);
        System.out.println("Character at 20th index is: "+ch4);
    }
}

Output:

Character at 0 index is: W
Character at 5th index is: m
Character at 11th index is: s
Character at 20th index is: n

Comment: Could you please explain a function to retrieve the first letter from a group of words in Java?
http://beginnersbook.com/2013/12/java-string-charat-method-example/
Felix,

I agree with your earlier posting about (extend-procedure) - I think that having a property-list on procedures for things like docstrings, parameter-lists, maybe even source-code, would be really useful. Are there namespace-clash issues present or foreseeable?

Also, I really like the way Javadoc and Doxygen work with allowing you to embed tags in the documentation. We could go for something low-tech like

(define (myfn)
  "Does <a href="/stuff/index.html">some stuff</a>."
  ...)

*or* we could maybe go for something a little cooler ;-) like

(define (myfn)
  #doc("Does" (a (@ (href "/stuff/index.html")) "some stuff") ".")
  ...)

i.e. embedding arbitrary (S)XML as the docstring! (Don't offhand know how easy this would be for Chicken to parse, or whether it's compatible with the usual reader-macro setup...)

Then a hypothetical offline-documentation tool could grovel over code, extracting all the #doc() forms, wrapping them up in skeleton (S)XML, and sending the whole thing through a stylesheet of some kind and out to TeX/HTML/whatever...

If support for the raw syntax for documenting scheme entities was put into chicken, I'd make it a priority to spend time going through all the code in the system putting in appropriate docstrings.

(... maybe instead of #doc(), i.e. a reader macro, a normal macro would suffice. In fact, that might be a better way of doing it: that way you don't need to change chicken at all, you just have your (documentation) macro write appropriate entries out to file at compile time... hmm, means the documentation isn't so interactive anymore though... Oh wait, also, how would the (documentation) macro know which function it was currently documenting? There's no way to get at the forms enclosing the macro... doh. There goes that idea.)

Tony

--
Monkeys high on math -- some of the best comedy on earth
- Tom Lord, regarding comp.lang.scheme
http://lists.gnu.org/archive/html/chicken-users/2002-08/msg00135.html
12 April 2007 16:24 [Source: ICIS news]

By Joseph Chang

NEW YORK (ICIS news)--Dow Chemical may still entertain a buyout, even as it dismissed two executives for having unauthorised discussions with third parties, sources in the financial community said on Thursday.

“Dow has done the right thing, but it doesn’t mean the deal isn’t going to happen eventually,” a source said. “I suspect everything we’ve read in the newspapers was a trial balloon from private equity to have someone at Dow take a look at a potential deal. It’s very unusual for private equity to go hostile.”

The source said that a private equity firm would most likely seek to do a deal with a strategic partner rather than on its own. Dow has said it is not in leveraged buyout talks.

“I can’t believe that private equity would buy Dow, run it for three years and take it public again,” he said.

Dow has been rumoured to be a target of private equity firms as well as [...]

“I suspect they may have been talking to the Saudis and Qataris as well as Reliance,” says the source. “Reliance has capital but not much else to offer. SABIC [Saudi Basic Industries Corp] would give Dow access to the feedstocks and fit well with its asset-light strategy.”

“It’s not surprising that [Pedro] Reinhard was behind this,” another source added. “He didn’t get the top job, but it was clear from Wall Street’s perspective that he was the guy pulling all the strings.”

“We don’t usually get this kind of soap opera,” said another source.

Earlier on Thursday, Dow sacked Reinhard, a senior adviser and member of the board, as well as Romeo Kreinberg, an officer of the company, for “unauthorised discussions with third parties about the potential acquisition of the company”.

“From our understanding, we believe only these two individuals were involved,” said Dow spokesman Chris Hunt.
http://www.icis.com/Articles/2007/04/12/9020113/dow-may-still-seek-buyout-despite-firings.html
Brute Force for the Win! Sort Of.

Just give me like 55 years.

True Story Follows

I'm in my house, bro'ing out. My wife is watching the Christmas special of Downton Abbey, 2 hour special. So naturally I go on Twitter and get slapped in the face by a gauntlet with this:

So I checked out the problem on the internets at RealMode.com. My eyes glazed over reading the problem and I almost gave up right there and then. The short story of the problem is this: generate 7 bit representations of the capital letters A-Z where each character can be represented by two different 7 bit bytes. In this way, you should be able to overwrite any character to get to a different state of any other character by overwriting just 1 bit.

Solving the Problem

Here's the spoiler alert: I solved the problem. Sort of. I think. We'll see.

Essentially (or at least as far as I can tell) this boils down to a brute force problem, but the number of combinations to try is insanely large. Before explaining any more, you can see my solution:

import math

# for every letter in the alphabet, find 2 unique 7 bit representations for it
# where every other letter in the alphabet has a 2nd state that can be reached
# from the 1st combination

def generate_possible_bit_arrays():
    possibles = []
    base = (False, False, False, False, False, False, False)
    for value in xrange(64):
        bit_val = 64
        temp_byte = list(base)
        while bit_val >= 2:
            bit_val /= 2
            if value >= bit_val:
                value -= bit_val
                index = bit_val and int(math.log(bit_val, 2))
                temp_byte[index] = True
        possibles.append(tuple(temp_byte))
    return possibles

def can_transition(source_byte_array, dest_byte_array):
    for index, value in enumerate(dest_byte_array):
        if not value and source_byte_array[index]:
            return False
    return True

def binomial(n, k):
    c = [0] * (n + 1)
    c[0] = 1
    for i in range(1, n + 1):
        c[i] = 1
        j = i - 1
        while j > 0:
            c[j] += c[j - 1]
            j -= 1
    return c[k]

if __name__ == "__main__":
    byte_combos = generate_possible_bit_arrays()
    print binomial(64, 52)

    def attempt_7bit_encoding(indexes_to_try):
        non_overwrite_symbol_indexes = indexes_to_try[:26]
        overwrite_symbol_indexes = indexes_to_try[26:]
        combination_works = True
        for source_index in non_overwrite_symbol_indexes:
            for dest_index in overwrite_symbol_indexes:
                if not can_transition(byte_combos[source_index], byte_combos[dest_index]):
                    combination_works = False
        if combination_works:
            print "HEY THIS COMBO WORKS!"
            print indexes_to_try

    combos = generate_index_combinations(64, 26 + 26, attempt_7bit_encoding)

My idea is essentially this: what characters map to which byte combination is irrelevant. We just need to have 26 byte combinations that can each successfully map to 26 other unique byte combinations. From there the characters they represent can be arbitrary. So all I did was generate all possible byte combinations, then iterate over all possible index combinations in chunks of 52 (26 + 26 letters of the alphabet) of the 64 possible byte combinations and see if the rules passed. The problem is that there are 3,284,214,703,056 possible combinations unless I'm wrong, which I'm not. That number is from N choose K for N=64 and K=52 (64 is the number of permutations for 7 bits, 52 is the number of character representations we need).

As kind of an aside, getting all of the possible index combinations is a classic N choose K style problem, so I wrote a nifty recursive function that can be generically applied (let's say, for instance, when you're at the pizza shop and you can choose 3 of any 10 different toppings, and you want to know and list all of the 120 possible pizzas that can be created). Rather than deal with inevitable memory pressure as you append trillions of items onto a list, I instead just took a callback function as a parameter that gets called with an index combination.
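The recursive, callback-driven N-choose-K generator described just above can be sketched as follows. This is one possible implementation, not the author's exact code; the name and signature generate_index_combinations(n, k, callback) are taken from the call in the listing.

```python
def generate_index_combinations(n, k, callback):
    # Visit every k-element combination of indexes drawn from range(n), in
    # increasing order, calling `callback` once per combination instead of
    # accumulating trillions of lists in memory.
    def recurse(start, chosen):
        if len(chosen) == k:
            callback(tuple(chosen))
            return
        # Upper bound prunes branches that can no longer reach k elements.
        for i in range(start, n - (k - len(chosen)) + 1):
            chosen.append(i)
            recurse(i + 1, chosen)
            chosen.pop()
    recurse(0, [])

# The pizza example from the text: 10 toppings, choose 3 -> 120 pizzas.
pizzas = []
generate_index_combinations(10, 3, pizzas.append)
```

Because only the callback sees each combination, peak memory stays proportional to k rather than to the (astronomically large) number of combinations.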
You can verify that the function is correct with a corresponding function that outputs the number of possible combinations for N choose K: def binomial(n, k): c = [0] * (n + 1) c[0] = 1 for i in range(1, n + 1): c[i] = 1 j = i - 1 while j > 0: c[j] += c[j - 1] j -= 1 return c[k] So anyway, I timed the brute force matching to see how long it takes to try one iteration of a combination. That ended up being 0.000536 seconds. And I need to try 3284214703056 combinations max. That comes out to 55 years. Dang. But hey, maybe I’ll get lucky and it’ll short circuit in a few minutes. I’ll take another gander at the problem and see if there are other ways to constrain the problem and short circuit it. One probably crucial detail that I didn’t mention was that for a given character, you didn’t need to worry about mapping to its second representation (because for example, there’s no reason to overwrite ‘A’ with ‘A’). Anyway, it’s an interesting problem. And when I solve it, I will get so many high fives.
http://scottlobdell.me/2014/12/brute-force-win-sort-just-give-like-20-days/
A priority_queue is a data structure useful in problems where you need to rapidly and repeatedly find and remove the largest element from a collection of values.

An everyday example of a priority queue is the to-do list of tasks waiting to be performed that most of us maintain to keep ourselves organized. Some jobs, such as clean desktop, are not imperative and can be postponed arbitrarily. Other tasks, such as finish report by Monday or buy flowers for anniversary, are time-crucial and must be addressed more rapidly. Thus, we sort the tasks waiting to be accomplished in order of their importance, or perhaps based on a combination of their critical importance, their long-term benefit, and the fun we will have doing them, and choose the most pressing.

A more computer-related example of a priority queue is the list of pending processes maintained by an operating system, where the value associated with each element is the priority of the job. For example, it may be necessary to respond rapidly to a key pressed at a terminal before the data is lost when the next key is pressed. On the other hand, the process of copying a listing to a queue of output waiting to be handled by a printer is something that can be postponed for a short period, as long as it is handled eventually. By maintaining processes in a priority queue, those jobs with urgent priority are executed prior to any jobs with less urgent requirements.

Simulation programs use a priority queue of future events. The simulation maintains a virtual clock, and each event has an associated time when the event will take place. In such a collection, the element with the smallest time value is the next event that should be simulated.

These are only a few instances of the types of problems for which a priority_queue is a useful tool. You probably have encountered others, or you soon will.

Some developers may feel the term priority queue is a misnomer. The data structure is not a queue in the sense that we used the term in Chapter 10, since it does not return elements in a strict first-in, first-out sequence. Nevertheless, the name is now firmly associated with this particular datatype.

Programs that use the priority queue data abstraction should include the queue header file:

#include <queue>
http://stdcxx.apache.org/doc/stdlibug/11-1.html
In this tutorial we will check how to send a HTTP POST from OBLOQ to a Python Flask server.

Introduction

In this tutorial we will check how to send a HTTP POST from OBLOQ to a Python Flask server. For a detailed tutorial on how to send HTTP POST requests with OBLOQ, please check here. In order to interact with OBLOQ, we will be using a Serial to USB converter and the Arduino IDE serial monitor. Please check this previous post for the connection diagram.

The Python code

The first thing we need to do is import the functionality we will need: the Flask class, to set up the server, and the request object, which gives access to the body and headers of the received request.

```python
from flask import Flask, request
```

Then, we need to create an instance of the Flask class, which we will use to configure the routes of our server.

```python
app = Flask(__name__)
```

We will have a single route, called "/post", which will only listen to HTTP POST requests.

```python
@app.route('/post', methods=["POST"])
def post():
```

The route handling function will be very simple. We will print the body of the request and then the headers, by accessing the data and headers members of the request object we imported at the beginning of the code.

```python
    print(request.data)
    print(request.headers)
```

We will finish the handling function implementation by returning a "Received" string to the client.

```python
    return 'Received'
```

Finally, we start our server by calling the run method on our app object, passing as input the IP address and the port where it should be listening for incoming requests. As IP, we pass the string '0.0.0.0', which means the server should listen on all the available interfaces of the machine. I will use port 8090, but you can use another, as long as it is not being used by another application.

```python
app.run(host='0.0.0.0', port=8090)
```

The final code can be seen below.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/post', methods=["POST"])
def post():
    print(request.data)
    print(request.headers)
    return 'Received'

app.run(host='0.0.0.0', port=8090)
```

Note that you can use a tool such as Postman to send a request to the server, making sure it is working correctly before trying to integrate with OBLOQ. Figure 1 below shows an example of the expected result when testing with Postman.

Figure 1 – Testing the Flask server with Postman.

Note that, to contact the server, we will need to obtain the local IP address of the machine that is running the Flask server. This can be done with the ipconfig command on the Windows command line, or the ifconfig command in case you are using a Linux operating system.

The OBLOQ commands

As usual, we need to start by connecting the OBLOQ to the WiFi network, so we can reach the server. The command to send to the device is indicated below.

|2|1|yourNetworkName,yourNetworkPassword|

Once the connection is successful, we can send the HTTP POST request to the Flask server. This assumes that you already have the server up and running. The command to send is the following one:

|3|2|destinationURL,postBody|

Note that the destination URL to reach is the following, where you should change #yourFlaskMachineIp# by the local IP of the machine that is running the Flask server:

You can check in figure 2 how the command looks. Note that I'm sending an arbitrary POST body (a string with the value "test"), since the server will simply print it and not interpret it.

Figure 2 – Sending the POST request to the Flask server.

The expected result is shown in figure 3. The response body is the same string we defined in the Flask server.

Figure 3 – Answer to the POST request.

Finally, if we go back to the Python console where the Flask app is running, we should get the body of the request and the headers, as shown in figure 4.

One important note to take into consideration is that the content-type header is set to "application/json", even though the content of the request is just plain text. This means that the OBLOQ firmware version used (v3.0) sets the content-type to "application/json" without checking the actual body sent in the command. This is problematic if the server tries to decode the body as JSON, because it would fail. So, when sending POST requests from OBLOQ in a real application scenario, we need to make sure that the body is valid JSON.

Figure 4 – Body and headers of the OBLOQ request printed to the Python console.
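The closing caveat about the mismatched content-type can be demonstrated without any hardware: a plain-text body fails JSON decoding, which is exactly what a server calling a JSON parser would hit. A small standalone sketch (no Flask or OBLOQ required):

```python
import json

# OBLOQ (firmware v3.0) labels every POST body as application/json, even a
# plain string like the "test" body used above. A server that tries to decode
# such a body as JSON will fail:
body = "test"
try:
    json.loads(body)
except ValueError as exc:
    print("not valid JSON:", exc)

# Wrapping the payload as real JSON avoids the problem:
body = '{"sensor": "obloq", "value": 42}'
print(json.loads(body)["value"])  # 42
```

The field names here are made up; the point is only that whatever the device sends must parse as JSON if the server intends to decode it.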
https://techtutorialsx.com/2018/08/04/uart-obloq-sending-http-post-request-to-flask-server/
Package Details: nest 2.18.0-1

Dependencies (9)
- gsl
- libtool (libtool-git)
- python (python-dbg)
- cmake (cmake-git) (make)
- cython (cython-kivy, cython-git) (make)
- ipython (ipython-7) (optional)
- python-matplotlib (python-matplotlib-git) (optional)
- python-numpy (python-numpy-openblas, python-numpy-mkl) (optional)
- python-scipy (python-scipy-openblas, python-scipy-mkl) (optional)

Latest Comments

dercolamann commented on 2016-04-26 09:38

Hi, I got this error when importing nest:

    File "/usr/lib/python3.5/site-packages/nest/__init__.py", line 53, in <module>
        import lib.hl_api_helper as hl_api
    ImportError: No module named 'lib'

Maybe there are some dependencies missing? I don't know where lib.hl_api_helper should come from.

Edit: I was wrong. For some reason I had to add /usr/lib/python3.5/site-packages/nest manually to the sys.path

Thanks for the package, Leo

brk0_0 commented on 2016-01-08 12:10

Done! Thanks for the tip.

davidmcinnis commented on 2016-01-07 22:06

Hi, cool package. Anyway, the hyperlink for the upstream URL points to:. You probably want it to point to:

Also, the software seems to build and install fine on my i686 laptop.

-Dave
https://aur.archlinux.org/packages/nest/
Website scraper script for flight comparison - Repost - open to bidding

Budget ₹600-1500 INR

I require a script that will allow the user to input data into a number of fields for finding a flight. The script will then search and scrape a number of airline websites and return with flight information and prices (in ascending order by price). The user can then click on a link and be forwarded to the booking page of the airline. The link may be an affiliate link, so please ensure this is possible.

Input fields:
1. Departure Airport (dropdown list)
2. Destination Airport (dropdown list)
3. Single trip or return (dropdown list)
4. Departure Date (calendar)
5. Return Date (if the user selects return flight) (calendar)
6. Number of Adults (input field)
7. Number of Children (aged 2 - 12 years old) (input field)
8. Number of Infants (under 2 years old) (input field)

The script must then:
1. Search from the list of airlines (scroll down for a list)
2. Find all flights that match the input criteria above
3. List these flights in ascending price order, with the flight price displayed
4. Have a direct link to the booking page, with links opening in a new window (remember, this link may be an affiliate link)

Other:
1. Error messaging when the script returns 0 available flights or if all the fields are not completed correctly.
2. A "loading" bar or sign is necessary whilst the script is working.
3. It must be easy to add/delete airlines and destinations/departure points. Not all the airlines below will fly to the destinations listed, so the script must be able to handle this.

Airlines to] At present the script will be for flights within the UK only. A list of UK airports can be found here: [url removed, login to view]

There may be problems with the script searching airline websites if the airline edits their search function or website coding. This issue must be addressed to ensure the script can be embedded into a website.

It will be web based and can run on any platform, providing my host supports it.
https://www.freelancer.com/projects/Engineering/Website-scraper-script-for-flight/
Created on 2009-02-10 11:45 by drj, last changed 2014-03-08 17:54 by python-dev. This issue is now closed.

When using the wave module to output wave files, the output file cannot be a Unix pipeline. Example: the following program outputs a (trivial) wave file on stdout:

```python
#!/usr/bin/env python
import sys
import wave

w = wave.open(sys.stdout, 'w')
w.setnchannels(1)
w.setsampwidth(1)
w.setframerate(32000)
w.setnframes(0)
w.close()
```

It can create a wave file like this:

    $ ./bugex > foo.wav

When used in a pipeline we get:

    $ ./bugex | wc
    Traceback (most recent call last):
      File "./bugex", line 9, in <module>
        w.close()
      File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 437, in close
        self._ensure_header_written(0)
      File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 458, in _ensure_header_written
        self._write_header(datasize)
      File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 465, in _write_header
        self._form_length_pos = self._file.tell()
    IOError: [Errno 29] Illegal seek
    Exception exceptions.IOError: (29, 'Illegal seek') in <bound method Wave_write.__del__ of <wave.Wave_write instance at 0x71418>> ignored
          0       1       8

The wave module has almost all it needs to work around this problem. The wave module will only seek the output if it needs to patch the header. If you use setnframes to record the correct number of frames before writing them with writeframesraw, then the header will not be patched upon calling close. However...

The problem is that the "tell" method is invoked on the output stream (to record where the header is, in the event that we need to patch it); the "tell" method fails with an exception when the output is a pipeline (see example, above). Exceptions from "tell" when writing the header initially (in _write_header) should be ignored. If _patchheader is later invoked, it will fail due to the lack of a recorded position.

Attached is a patch which is a diff from this version of wave.py: *checkout*/python/trunk/Lib/wave.py?rev=54394

On 10 Feb 2009, at 12:28, Guilherme Polo wrote:
>
> Guilherme Polo <ggpolo@gmail.com> added the comment:
>
> Wouldn't it be better if you only ignored the 'illegal seek' error
> instead of ignoring any ioerror (should it even be always discarded)?

No.

> I get a 'bad file descriptor' under Windows 7, but, again, can it be
> always discarded?

Yes. To expand: observe that the exception is raised when we are writing the header for the first time. The exception is not raised when we attempt to seek to patch the header; it is raised when we are recording the file position so that we can seek to it later. We record the file position even though we might not use it later (the file position is only needed if we need to patch the header).

So if we don't need to patch the header, we do not need the file position. So we can clearly ignore any error in attempting to get the file position. If we do need to patch the header, then we need the file position. If we do not have the file position (because the earlier attempt to get it failed), then patching the header will fail when it attempts a seek. This seems reasonable to me.

> You can also reproduce the problem without using wave:
>
> >>> import sys
> >>> sys.stdout.tell()

That does not reproduce the problem. The problem is not that tell raises an exception; the problem is that tell raises an exception and it is only being used to get some information that may not be needed later. Therefore the exception should be ignored, and a problem should only be raised if it turns out that we did need the information that we couldn't get.

> I'm really unsure about the proposed patch.

Noted. I also note that my patch can be improved by removing its last 11 lines.

On 10 Feb 2009, at 12:28, Guilherme Polo wrote:
>
> Guilherme Polo <ggpolo@gmail.com> added the comment:
>
> I'm really unsure about the proposed patch.

Perhaps my example was too trivial. The point is that if you call setnframes then you can get wave.py to avoid patching the header, so it does not need to seek on the output file. However, that *still* doesn't let you pipe the output, because of the "tell" problem. That's what the patch is for.

Here is a (slightly) less trivial example:

```python
#!/usr/bin/env python
import sys
import wave

w = wave.open(sys.stdout, 'w')
w.setnchannels(1)
w.setsampwidth(1)
w.setframerate(2000)
w.setnframes(100)
for _ in range(50):
    w.writeframesraw('\x00\xff')
w.close()
```

(The wave file that it outputs is 100ms of 1000 Hz sine wave, by the way.)

Note the call to setnframes *before* the data is written. That's what means the header does not need to be patched. With my patch applied, the output of this program can be fed to a pipe. If you remove the call to setnframes then the header will need to be patched, and this still (correctly, usefully) raises an error with my patch applied.

On 10 Feb 2009, at 13:02, David Jones wrote:
>
> David Jones <drj@pobox.com> added the comment:
>
> I also note that my patch can be improved by removing its last 11
> lines.

Er, no it can't. What was I thinking?

I see what you want to do, but I feel really uncomfortable with totally ignoring IOError. I could get a bad file descriptor under Linux too, and I wouldn't like to see it discarded for no reason. Now, is there some problem if we remove the calls to the "tell" method in _write_header? See patch attached (tests are very welcome too).

On 10 Feb 2009, at 21:15, David Jones wrote:
>
> David Jones <drj@pobox.com> added the comment:
>
> Ahem. Pardon me for answering you without reading your patch.

I have now read your patch and it does more than just remove the calls to "tell". In fact it looks very fine. It makes wave.py more like sunau.py in that it "just knows" what the offsets into the header are. I think I like that (especially the way you use the struct format string to compute the second offset).

It also removes that nagging question at the back of my mind: "why does wave.py use tell when it could simply just know the offsets, which are constant anyway?". And it works. How cool is that?

I had changed my project to use sunau anyway, because that worked with pipes already. Tests, you say... Nice.

I said tests in hope wave gets more tests, since right now there is a single one. I will see if I can produce something.

The following program does a very basic do-i-get-back-what-i-wrote test. sunau can't cope; I am investigating.

```python
#!/usr/bin/env python
# $Id$
# Audio File Tests

import aifc
import sunau
import wave

import struct
import sys
from StringIO import StringIO

frames = struct.pack('256B', *range(256))
log = sys.stderr

# Basic test of reproducability.
# We test that a set of frames (an entirely artifical set, see `frames`,
# above) can be written to an audio file and read back again to get the
# same set of frames.
# We test mono/stereo, 8-bit/16-bit, and a few framerates.
# As of 2009-02-12 sunau does not pass these tests, so I recommend that
# you remove it.

for af in (aifc, sunau, wave):
    for nchannels in (1, 2):
        for sampwidth in (1, 2):
            for framerate in (11000, 44100, 96000):
                print >> log, "%s %d/%d/%d" % (af.__name__, nchannels, sampwidth, framerate)
                f = StringIO()
                w = af.open(f, 'w')
                w.setnchannels(nchannels)
                w.setsampwidth(sampwidth)
                w.setframerate(framerate)
                w.writeframesraw(frames)
                w.close()
                s = f.getvalue()
                f = StringIO(s)
                w = af.open(f)
                assert w.getnchannels() == nchannels
                assert w.getsampwidth() == sampwidth
                assert w.getframerate() == framerate
                assert w.readframes(len(frames)//nchannels//sampwidth) == frames
                assert w.readframes(1) == ''
```

On 12 Feb 2009, at 09:00, David Jones wrote:
>
> David Jones <drj@pobox.com> added the comment:
>
> The following program does a very basic do-i-get-back-what-i-wrote
> test. sunau can't cope; I am investigating.

I see. sunau uses mu-law compression by default, which makes it non-invertible. How stupid. Inserting:

    w.setcomptype('NONE', 'Pointless Argument')

just after setframerate fixes the tests so that all 3 modules pass. Of course, this is still only the most basic test that one might want to do. And it doesn't cover the case mentioned in this bug report anyway.

(drat, just found this, should've sent it yesterday)

Is this still a problem with 2.7-3.2? GP, what state do you think either patch is in?

> Now, is there some problem if we remove the calls to the "tell" method
> in _write_header? See patch attached (tests are very welcome too).

Yes, there is a problem. The user can pass an already open file to wave.open(), and the file position can be non-zero at the start of the WAVE file. But you can do it with only one tell(). Note the magic number 36 in many places of the code; this is struct.calcsize(wave_header_format).

A test is needed, as well as a documentation change. I think this is rather a new feature and should be added only in 3.4. Actually, the current behavior is documented: "If *file* is a string, open the file by that name, otherwise treat it as a seekable file-like object."

Here is a corrected patch (it uses relative seek()) with a lot of tests.

Oh, I forgot to attach the patch. In any case it is already slightly outdated.

After looking at other audio modules I think David's approach is better. It is also used in the chunk module. Here is an updated patch with tests (tests are not final; issue18919 provides better tests).

Here is a simplified patch, updated to tip.

New changeset 6a599249e8b7 by Serhiy Storchaka in branch 'default':
Issue #5202: Added support for unseekable files in the wave module.

New changeset b861c7717c79 by R David Murray in branch 'default':
whatsnew: Wave_write handles unseekable files. (#5202)
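The changesets above landed in Python 3.4, where Wave_write tolerates streams whose tell/seek fail, provided the frame count is declared up front so the header never needs patching. A sketch of that fixed behavior, assuming Python 3.4+ (the Unseekable wrapper is my own construction, simulating a pipe with an in-memory buffer):

```python
# Demonstrate writing a wave file to an unseekable stream (Python 3.4+).
import io
import wave

class Unseekable(io.BytesIO):
    """An in-memory buffer that behaves like a pipe: no seek, no tell."""
    def seekable(self):
        return False
    def seek(self, *args):
        raise io.UnsupportedOperation("seek")
    def tell(self):
        raise io.UnsupportedOperation("tell")

buf = Unseekable()
w = wave.open(buf, 'wb')
w.setnchannels(1)
w.setsampwidth(1)
w.setframerate(8000)
w.setnframes(4)                    # declared in advance: no header patching
w.writeframesraw(b'\x00\x7f\xff\x7f')
w.close()                          # succeeds; tell() failures are ignored
print(buf.getvalue()[:4])          # b'RIFF'
```

Omitting the setnframes call (or writing a different number of frames than declared) still raises an error at close, because patching the header requires a seekable stream.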
http://bugs.python.org/issue5202
To convert UTF-8 to Unicode, we create a String object which takes as parameters the UTF-8 byte array and the charset that the array of bytes is in, i.e. UTF-8.

Let us see a program to convert UTF-8 to Unicode by creating a new String object.

```java
public class Example {
    public static void main(String[] args) throws Exception {
        String str = "hey\u6366";
        byte[] charset = str.getBytes("UTF-8");
        String result = new String(charset, "UTF-8");
        System.out.println(result);
    }
}
```

Output:

    hey捦

Let us understand the above program. Firstly, we converted a given Unicode string to UTF-8, for future verification, using the getBytes() method:

```java
String str = "hey\u6366";
byte[] charset = str.getBytes("UTF-8");
```

Then we converted the charset byte array back to Unicode by creating a new String object as follows:

```java
String result = new String(charset, "UTF-8");
System.out.println(result);
```
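As a side note (not part of the original example), the round trip only works if the charset passed to new String matches the one used with getBytes; decoding the same bytes with a different charset such as ISO-8859-1 produces garbled text. A small sketch (the class name is mine):

```java
// Decoding UTF-8 bytes with the wrong charset corrupts multi-byte characters,
// which is why the charset names in getBytes(...) and new String(...) must match.
import java.nio.charset.StandardCharsets;

public class CharsetMismatch {
    public static void main(String[] args) {
        String str = "hey\u6366";
        byte[] utf8 = str.getBytes(StandardCharsets.UTF_8);

        String good = new String(utf8, StandardCharsets.UTF_8);
        String bad = new String(utf8, StandardCharsets.ISO_8859_1);

        System.out.println(good.equals(str));  // true
        System.out.println(bad.equals(str));   // false: \u6366 became 3 chars
    }
}
```

Using the StandardCharsets constants instead of string names like "UTF-8" also avoids the checked UnsupportedEncodingException.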
https://www.tutorialspoint.com/convert-utf-8-to-unicode-in-java
There will be multiple questions, sounds, animations and more. You can either play along and build something similar for your own ends, or you can take each lesson as it comes and apply it to another project. Either way, I do recommend that you read part one first. You can find that here.

Also, fair warning: this is not all going to be easy. By the end, we'll be working with strings, arrays, nested if statements… you name it. I'm sure a lot of you won't have the patience to build this whole thing, but in that case you can tell from the headings what each section is about and just learn the things you're interested in. If you are playing along, then grab a cup of coffee, put on some Daft Punk and let's get to work! Oh, and you can find all the resources and code on GitHub here.

Adding pretty colored buttons

Straight out the gate, let's add something easy that looks good. That way, we'll have an early win in our pockets. Just add this line to the button widgets in activity_questions.xml:

    style="@style/Widget.AppCompat.Button.Colored"

Note: you need to add this line twice, once for each button.

If you recall, we previously edited the file 'colors.xml' and defined values for 'colorPrimaryDark' and 'colorAccent' using the palette we created at Paletton. This means that when you make your buttons colored, they should automatically match the color scheme you've been using, and it looks pretty great. It's certainly much more professional looking than the default 'plain' buttons we had before.

This was nice and easy, but don't be deceived. It's going to get a LOT more difficult… But fun too. Definitely fun…

Decorating our questions page

Next up, it's time to add in a fancy animation. The toast message is nice and all, but it's not a terribly attractive way to congratulate our users for getting the right answer. We want to make something with a little polish!

To accomplish this, first we need to create a new 'ImageView'. This is simply a type of view that shows an image.
It is aptly named… If you remember, activity_questions.xml used both a vertical and horizontal linear layout. This is going to go after the first linear layout closes, but before the second one closes:

    <ImageView android:

'Weirdtick' is another image I made. It's a weird tick that is supposed to be in keeping with the rest of this app's design. This will go in our 'drawables' folder with the logo from part 1. If you've done this right, then the screen should now have a little tick just below the buttons in the center. The 'id' for this image view is 'tickcross'. That will make sense in a moment…

Below that, we're going to add some text congratulating our winner:

    <TextView android:

And finally, let's put a button just below that so they can progress to the next question:

    <Button android:

So now you might be wondering: 'wait… what?' Currently we're saying 'correct' before the user has actually written anything. That is obviously not what we want… So now you're going to change that by going back to the Java for this page (questions.java) and inserting these three lines of code:

```java
findViewById(R.id.tickcross).setVisibility(View.INVISIBLE);
findViewById(R.id.correctornot).setVisibility(View.INVISIBLE);
findViewById(R.id.nextbutton).setVisibility(View.INVISIBLE);
```

This will go right underneath 'onCreate', within the curly brackets. This means that as soon as the activity appears, those views are going to disappear so that we can't see them. This will happen so fast that no one will possibly see them.

Notice that we are now changing attributes of our layout programmatically. This will come in handy a lot, so it pays to remember that your xml files are really only setting the starting conditions for your UI.

And can you guess what happens when the user gets the answer right? They appear again!
To test this, you can simply find the 'Right!' toast message in questions.java and replace it with these three lines:

```java
findViewById(R.id.tickcross).setVisibility(View.VISIBLE);
findViewById(R.id.correctornot).setVisibility(View.VISIBLE);
findViewById(R.id.nextbutton).setVisibility(View.VISIBLE);
```

So now, when the user gets the answer right, these congratulatory views will spring up. But that's not very pretty now, is it?

Fancy animations

What we need is a fancy animation to make this a little nicer. We can do this pretty easily in our questions.java by adding this code after we set 'tickcross' to visible:

```java
TranslateAnimation animation = new TranslateAnimation(0, 0, 2000, 0);
animation.setDuration(1000);
findViewById(R.id.tickcross).startAnimation(animation);
```

All you really need to know is that this creates an animation that affects our tick. To talk you through it a little: we create the new animation and define how it's going to work in the top line. 'Translate' means that the animation is moving (as opposed to spinning or fading), while the four numbers in the brackets are coordinates that relate to its current position. The first two refer to the 'x' coordinate: where it is moving to and where it is moving from, respectively (with 0 being the current position). The latter two numbers are the same thing but for the 'y' coordinate. Here we are moving along the Y axis from 2000 (far down the screen) to the starting position.

Note: you will need to import TranslateAnimation by clicking on it and then pressing alt + return when instructed to.

This is how the animation will look when we're done…
But wouldn’t it look better if the text and the ‘next’ button appeared only once the tick reached its final resting place? (Weirdly ominous phrasing there, sorry…) We can do this by adding an ‘animationListener’. What this means is that your app is now observing the animation and will know when it starts, ends and repeats (we haven’t told it to repeat, so we don’t need to worry about this). To use one, you want to add this line underneath ‘setDuration’ and before you start the animation: animation.setAnimationListener(new Animation.AnimationListener() When you do this, you should find that Android Studio automatically ads in some extra code for you with a curly bracket. If it doesn’t, then the code should look look like this: animation.setAnimationListener(new Animation.AnimationListener() { @Override public void onAnimationStart(Animation animation) { } @Override public void onAnimationEnd(Animation animation) { } @Override public void onAnimationRepeat(Animation animation) { } }); What we’re interested in is the ‘onAnimationEnd’ part, which fires once the animation has finished (one second after you hit ‘Okay’). Move the code around so that the text and button are set to visible in this event and that way, they’ll pop up once the tick is nicely in position. It all just looks a whole lot nicer. After this, you’re then starting the animation on the view. 
So the whole thing looks as follows:

```java
if (answer.equals(correctanswer)) {
    findViewById(R.id.tickcross).setVisibility(View.VISIBLE);
    TranslateAnimation animation = new TranslateAnimation(0, 0, 2000, 0);
    animation.setDuration(1000);
    animation.setAnimationListener(new Animation.AnimationListener() {
        @Override
        public void onAnimationStart(Animation animation) {
        }

        @Override
        public void onAnimationEnd(Animation animation) {
            findViewById(R.id.correctornot).setVisibility(View.VISIBLE);
            findViewById(R.id.nextbutton).setVisibility(View.VISIBLE);
        }

        @Override
        public void onAnimationRepeat(Animation animation) {
        }
    });
    findViewById(R.id.tickcross).startAnimation(animation);
} else {
    Toast toasty = Toast.makeText(getApplicationContext(), "Nope!", Toast.LENGTH_SHORT);
    toasty.show();
}
```

Run the app and see for yourself what a difference that makes! Remember, it's the little details that make your app look and feel more professional.

Creating a method

So that's what happens when our users get the answer right. How about when they get it wrong? In this case, you want to do the exact same thing, except you're showing a cross and you're not telling them they're correct. In fact, it would be great if we could show the right answer so they learn for next time.

First, let's get the 'wrong' button to do the same thing as the right button; then we can tweak the specifics. Before you set about copying and pasting though, know that this isn't good coding practice, as it's unnecessarily lengthy. It's okay, you weren't to know. Ideally, when programming you want to avoid doing anything more than once if at all possible. Programming is one aspect of life where laziness is encouraged.

As such, the best way for us to go about this is to take everything we just wrote and drop it into a separate method (also called a function). This is a separate 'event' we can trigger from anywhere else in our code whenever we need a certain sequence to happen.
To do this, you will create a new public void just like the onClick listeners and place it anywhere within questions.java, as long as it's not inside another method (so it will be inside the 'public class' curly brackets but not inside any 'public void' curly brackets). This will look like so:

```java
public void answersubmitted() {
}
```

Don't worry about the brackets for now; just know that you always need them when you create a new method. You can now put any code you like inside those brackets and then run that code from within other functions. So paste all of the code that made the views become visible and that handled our animation into here. In other words, all the code from within the if statement that checked if the answer given equals the correct answer.

And now, where that code used to be (in the onClick method), you can just write 'answersubmitted();' to make the same thing happen. That means we can also put this line where we used to have the toast message for incorrect answers, rather than writing everything out twice.

```java
if (answer.equals(correctanswer)) {
    answersubmitted();
} else {
    answersubmitted();
}
```

But by calling answersubmitted when the answer is wrong, the same thing happens whether the user gets the answer right or wrong. We can change that by manipulating our views from within the code again. This time, we're finding the views the 'proper' way, by creating new 'TextView' and 'ImageView' references so that we can mess around with their specific properties. Then we're just going to change the text and the image before running the animation. This looks like this:

    ();
    }

Note: you may need to import TextView by clicking on it and then pressing alt + return when instructed to. You'll also notice that the way we change the answer for the wrong response is a little different. This allows us to show the correct answer using the 'correctanswer' string we made earlier, as well as some text.
By doing it this way, we'll be able to have the correct answer change as the question changes and we won't have to rewrite any code. Likewise, we're setting the drawable either to the 'weirdtick' or to a 'weirdcross', the latter of which is another image I've created for the drawable folder. It's a cross. And it's weird.

I also think that we should make everything consistently capitals. Remember in part 1 we set the answer to lower case? Now we're going to change that by setting the answer and the question to upper case (this also means we don't need to worry about using the correct case when we add to strings.xml). Swap that lower case code with these two lines:

correctanswer = correctanswer.toUpperCase();
answer = answer.toUpperCase();

So now when you get an answer wrong, the same thing happens except the image and text are different to indicate you didn't get it right. We're still a little way off though, as there's currently only one question and you can keep putting in different answers to get different responses. So in the next section, we'll be introducing variables!

Introducing booleans

A variable is something you can use to carry data. In math, you might remember using variables like 'x' and 'y' for equations, where those letters would have represented numbers.

x + y = 13
x – y = 7
Find x and y

Sound familiar? We've already used one type of variable when we used strings. Strings are variables that can 'stand in' for characters rather than numbers. Now we're going to use another variable type called a 'boolean'. Essentially, a boolean is a variable that can be either a '1' or a '0', which in computer speak means 'true' or 'false'. In this case, we're going to use a boolean to record and test whether or not the question has been answered.
So just above the 'onCreate' method, add this line:

private boolean done;

This boolean will be 'false' by default (all variables equal zero when you create them) but after the user clicks 'Okay', we're going to set it to 'true'. The 'Okay' button will only work the first time, when it is 0, as everything inside the 'onClick' will also be inside an if statement. It should look like this:

public void onAnswerClick(View view) {
    if (done == false) {
        String answer = ((EditText) findViewById(R.id.answer)).getText().toString();
        String correctanswer = getString(R.string.A1);
        //gets the answer and correct answer from the edit text and strings.xml respectively
        answer = answer.toLowerCase();
        //makes sure that the strings are lower case

        if (answer.equals(correctanswer)) {
            TextView t = (TextView) findViewById(R.id.correctornot);
            t.setText("CORRECT!");
            ImageView i = (ImageView) findViewById(R.id.tickcross);
            i.setImageDrawable(getDrawable(R.drawable.weirdtick));
            answersubmitted();
        } else {
            TextView t = (TextView) findViewById(R.id.correctornot);
            t.setText("The correct answer was: " + correctanswer);
            ImageView i = (ImageView) findViewById(R.id.tickcross);
            i.setImageDrawable(getDrawable(R.drawable.weirdcross));
            answersubmitted();
        }

        done = true;
    }
}

Run the app and you'll find that after your first answer, the app now won't accept any more input. Now we want to make the 'Next' button clean things up a bit. The next bit you should be familiar with. We're adding an 'onClick' to our 'Next' button and by now you've done this a few times for different widgets. First, hop back into 'activity_questions.xml' and insert this line anywhere in the next button widget:

android:onClick="onNextClick"

Now return to questions.java and add your onClick method. You know the drill, it's:

public void onNextClick(View view) {

}

And you can put this anywhere, as long as it's not inside another method. This will run whenever we click that button and the first thing we're going to do is to clear the answer and images away and refresh all the text. Again, you should know how most of this code is working at this point:

if (done) {
    findViewById(R.id.tickcross).setVisibility(View.INVISIBLE);
    findViewById(R.id.correctornot).setVisibility(View.INVISIBLE);
    findViewById(R.id.nextbutton).setVisibility(View.INVISIBLE);
    ((EditText) findViewById(R.id.answer)).setText("");
    done = false;
}

Notice that we're also setting 'done' to false – which lets people click the 'Okay' button again with their new answer. The whole thing is also inside an 'if (done)' statement, which means the user can't accidentally click 'Next' while it's invisible before they've answered the question. The eagle-eyed among you will also have noticed that I didn't write 'if (done == true)'.
That's because booleans let you skip that bit. If 'done' is true, then that if statement is true. Choose the names for your booleans wisely and your code can read like plain English, making it easier to look through later. For instance 'If (userhasclickedexit) { finish() }'.

This is a pretty short experience for our users at the moment, so now we need to start adding extra questions. This is where things get a little more complicated. You ready? Sure?

And now: arrays!

At this point, hitting next after submitting your answer simply returns you to the position you were in to begin with and lets you do the first question again. Obviously that's not what we want and this is where we're going to need two more types of variables: an 'integer' (just called 'int') and an 'array'. We'll be looking at the array first.

An array is essentially a variable that contains multiple other variables and assigns each one an index. We're making an array of strings and this is going to allow us to retrieve the string we want by using its corresponding number. Probably best if I just show you…

So open up strings.xml. You should remember that this is where we stored our questions, hints and answers as strings. Now though, we're adding in some arrays. This will look like so:

<string-array name="Questions">
    <item>What is the letter A in the phonetic alphabet?</item>
    <item>What is the letter B in the phonetic alphabet?</item>
    <item>What is the letter C in the phonetic alphabet?</item>
</string-array>

<string-array name="Answers">
    <item>alpha</item>
    <item>bravo</item>
    <item>charlie</item>
</string-array>

<string-array name="Hints">
    <item>A tough, domineering bloke</item>
    <item>Well done!</item>
    <item>Snoopy\'s mate</item>
</string-array>

That's three different arrays – 'Questions', 'Answers' and 'Hints' – and each one has three different strings inside it. Notice the '\' in the third hint; you need to insert a backslash first whenever you use an apostrophe to differentiate it from opening or closing your quotes.
Now to grab these strings, we need to create a string array in our java and then say which string from that array we want to retrieve. A string array is written as 'String[]' and when retrieving strings, you put the index inside those square brackets. But because this wasn't complicated enough already, there's an extra caveat you need to keep in mind: arrays are indexed from zero. This means that the second string has an index of one. So if you have 7 strings, the index of the last string is '6'.

Right, so if we add these lines to our 'Next' button's 'onClick' method in questions.java, we can see this in action:

String[] questions = getResources().getStringArray(R.array.Questions);
TextView t = (TextView) findViewById(R.id.question);
t.setText(questions[1]);

You will probably see an error for R.id.question; that is because during part 1 we didn't give the TextView that shows the question an ID. So jump over to activity_questions.xml and add the following line to the TextView that is used to display strings/Q1:

android:id="@+id/question"

Now, when you click 'Next', everything will clear and the question will change to question two (stored in the first position). Study that code for a moment and make sure you can see how it's all working.

Fun with integers

There's a problem with this though, which is that we're having to manually tell our app which string to grab and at the moment it sticks at '2'. Instead, we want it to move from question 1 to question 2 and beyond all on its own. This is where our 'integer' comes in. This is a variable that simply stores a single whole number (i.e. no decimal points). We're going to create our integer and stick it up the top of questions.java underneath our 'done' boolean. I'm calling mine 'QuestionNo'.

As QuestionNo represents a number, that means you can replace:

t.setText(questions[1]);

With:

t.setText(questions[QuestionNo]);

At the moment, QuestionNo equals '0', so this will just send the user back to the start again.
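The zero-based indexing rule described above is easy to check in plain Java, outside of Android. This standalone snippet (the class and method names are illustrative, not from the tutorial) shows that with three elements the last valid index is length - 1:

```java
public class ArrayDemo {
    // Returns the last element: with N items, the last valid index is N - 1
    static String last(String[] items) {
        return items[items.length - 1];
    }

    public static void main(String[] args) {
        String[] questions = {"Question A", "Question B", "Question C"};
        System.out.println(questions[0]);    // the first element lives at index 0
        System.out.println(last(questions)); // the last element lives at index 2
    }
}
```

Asking for questions[3] here would throw an ArrayIndexOutOfBoundsException, which is exactly the crash the bound check in the next step is there to prevent.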
To fix that, we want to increase the value of our question number each time the user clicks 'Next'. We can do this by inserting the following line in the onNextClick method, right before we use the variable:

QuestionNo = QuestionNo + 1;

Now the value of the question number goes up by one each time, meaning that the next question will be shown from the array on each refresh. You can also write this as 'QuestionNo++;' which is shorthand for when you want to incrementally increase an integer.

There's one more problem though, which is that our app will crash once the user gets past question three. We need another 'if' statement then, this time checking the following:

if (QuestionNo < (questions.length - 1)) {

Here, 'questions.length' will return an integer that corresponds to the number of questions in your array. We can treat it just like any other integer, just as some lines of code previously stood in for strings. We're now comparing the length of our array with 'QuestionNo' and want to stop once the value of QuestionNo is one less. Remember: the last filled position is '2', not '3'.

Now the whole thing should look like this:

public void onNextClick(View view) {
    if (done) {
        String[] questions = getResources().getStringArray(R.array.Questions);
        if (QuestionNo < (questions.length - 1)) {
            QuestionNo = QuestionNo + 1;
            TextView t = (TextView) findViewById(R.id.question);
            t.setText(questions[QuestionNo]);
            findViewById(R.id.tickcross).setVisibility(View.INVISIBLE);
            findViewById(R.id.correctornot).setVisibility(View.INVISIBLE);
            findViewById(R.id.nextbutton).setVisibility(View.INVISIBLE);
            ((EditText) findViewById(R.id.answer)).setText("");
            done = false;
        }
    }
}

Hey, I told you it wasn't easy! Just to recap though, this code fires when the user clicks 'Next'. It then clears up all our UI elements and increases the QuestionNo to the next question (up until the last question). At the moment though, the correct answer is always going to be 'alpha', which we don't want! To fix this little problem, we need to refer to our other arrays to get the hints and the answers elsewhere in the code.
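Before moving on, the bound check can be exercised on its own in plain Java, no Android required. The class below is purely illustrative (not part of the tutorial's code) and advances a counter only while a next question exists, mirroring the condition above:

```java
public class QuestionCounter {
    int questionNo = 0;

    // Mirrors: if (QuestionNo < (questions.length - 1)) { QuestionNo = QuestionNo + 1; }
    // Returns true if we advanced, false if we were already on the last question.
    boolean next(int totalQuestions) {
        if (questionNo < totalQuestions - 1) {
            questionNo++;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        QuestionCounter c = new QuestionCounter();
        System.out.println(c.next(3));    // true  (0 -> 1)
        System.out.println(c.next(3));    // true  (1 -> 2)
        System.out.println(c.next(3));    // false (index 2 is the last question)
        System.out.println(c.questionNo); // 2
    }
}
```

With three questions the counter stops at index 2, so the array access can never go out of bounds.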
'onAnswerClick' now looks like this:

public void onAnswerClick(View view) {
    if (done == false) {
        String answer = ((EditText) findViewById(R.id.answer)).getText().toString();
        String[] answers = getResources().getStringArray(R.array.Answers);
        String correctanswer = answers[QuestionNo];
        //gets the answer and correct answer from the edit text and strings.xml respectively
        correctanswer = correctanswer.toUpperCase();
        answer = answer.toUpperCase();

        if (answer.equals(correctanswer)) {
            TextView t = (TextView) findViewById(R.id.correctornot);
            t.setText("CORRECT!");
            ImageView i = (ImageView) findViewById(R.id.tickcross);
            i.setImageDrawable(getDrawable(R.drawable.weirdtick));
            answersubmitted();
        } else {
            TextView t = (TextView) findViewById(R.id.correctornot);
            t.setText("The correct answer was: " + correctanswer);
            ImageView i = (ImageView) findViewById(R.id.tickcross);
            i.setImageDrawable(getDrawable(R.drawable.weirdcross));
            answersubmitted();
        }

        done = true;
    }
}

And 'onHintClick' looks like this:

public void onHintClick(View view) {
    String[] hints = getResources().getStringArray(R.array.Hints);
    Toast toasty = Toast.makeText(getApplicationContext(), hints[QuestionNo], Toast.LENGTH_SHORT);
    toasty.show();
}

I've also opted to create the question programmatically in my 'onCreate' method. In other words, I don't want to manually define the first question in 'activity_questions.xml' any more, but rather set it by using this again:

String[] questions = getResources().getStringArray(R.array.Questions);
TextView t = (TextView) findViewById(R.id.question);
t.setText(questions[QuestionNo]);

This means that you should be able to delete all references to 'Q1', 'A1' and 'H1' throughout your code and in your strings.xml. It's just a bit tidier and it means if you want to change the questions later on, you only have to change them in that one place.

Finishing touches – sound and orientation

The cool thing about the way we've structured this app is that you can add as many questions to the array as you like and it will adapt with no changes to the code. Just make absolutely sure that you have the same number of hints and answers to go along with those questions. One thing you might notice that still isn't quite right though, is that rotating the app makes us lose our place and go back to the first question.
This is because activities are essentially recreated every time you rotate the screen, and to fix this you'll need to either freeze the orientation of the activity or learn about app life-cycles and saveInstanceState. I've given you the links so you can start doing your own research, but the most logical way for us to go about this is to lock the orientation. We do this by opening 'AndroidManifest.xml' and adding this line to the two activities:

android:screenOrientation="portrait"

I've also taken the liberty of adding some sound effects to the app as well. To do this, I created a new folder called 'raw' in the 'res' directory (just using Windows Explorer) and I put two '.wav' files in there (created with Bfxr). One of these is called 'right.wav' and one is called 'wrong.wav'.

Have a listen and see what you think. If you think they're horrid, you can make your own. If you don't think they're horrid… then you're wrong.

I then added these two lines to the 'onAnswerClick' method where the 'correct' sequence of events are:

MediaPlayer mp = MediaPlayer.create(getApplicationContext(), R.raw.right);
mp.start();

We can also do the same but with 'R.raw.wrong' for the 'incorrect' sequence:

if (answer.equals(correctanswer)) {
    TextView t = (TextView) findViewById(R.id.correctornot);
    t.setText("CORRECT!");
    MediaPlayer mp = MediaPlayer.create(getApplicationContext(), R.raw.right);
    mp.start();
    ImageView i = (ImageView) findViewById(R.id.tickcross);
    i.setImageDrawable(getDrawable(R.drawable.weirdtick));
    answersubmitted();
} else {
    TextView t = (TextView) findViewById(R.id.correctornot);
    t.setText("The correct answer was: " + correctanswer);
    MediaPlayer mp = MediaPlayer.create(getApplicationContext(), R.raw.wrong);
    mp.start();
    ImageView i = (ImageView) findViewById(R.id.tickcross);
    i.setImageDrawable(getDrawable(R.drawable.weirdcross));
    answersubmitted();
}

Remember to import MediaPlayer as well, as prompted by Android Studio.

Where to next?

Okay, so as you can see, programming can be complex, but it isn't impossible. Hopefully you're still with me and hopefully you managed to take something helpful from this tutorial.
Don't worry if it doesn't work at first, just carefully read through the code and double check everything – normally the answer is staring you in the face. And remember, you can just copy and paste from my code here and reverse engineer it.

There are lots more things I'd like to add to this, but I think we've covered more than enough for one post. It would be good to add in some kind of message congratulating the user when they get to the end, for example. Giving them the opportunity to start again would also make sense, and to do this you could create a new activity or use dialogs. It would also be cool to have more than one set of questions and maybe to let the user create their own questions too (using OutputStreamWriter perhaps). You could also add some animations to the text when the next question loads. And how about keeping tabs on a score?

This is where the fun bit comes in – deciding what you want to do next and then looking up the best way to do it. Copy and paste the examples you find and expect a little trial-and-error to get it to run. Gradually, you'll start to understand how it's all working and you'll find yourself adding more and more elaborate features. Once you've Googled and implemented your first line of code, you're officially an app developer. Welcome to the club!
http://www.androidauthority.com/build-an-android-app-part-2-676322/
Building a great WordPress plugin begins with careful planning. Whether you're building one from scratch or based on a boilerplate, following well-documented best practices is absolutely essential. In this tutorial, you'll learn how to build a simple WordPress plugin the right way. If you want to review the final source code as you read along, you can find it here.

Start with a plan.

First, let's list the features our plugin will have and outline exactly what it needs to do. The plugin we're building will allow site visitors to save content to read later. For registered users, we'll store the list in the database, and for anonymous users, we'll save the list using cookies. Below is an outline of the features and functionalities that our plugin will provide.

Settings Screen

- The ability for admins to add the "Save Item" button to the end of the content.
- The ability to choose the type of posts where we want this button added.
- Offer users the option to decide whether they want to use our predefined styling or not.
- Provide an option to enable the functionality only for logged in users.
- Provide an option to change the messages that appear on the visitor facing part of the plugin.

Saving the Content

- If the user is logged in, save content to a custom user field.
- If the user is not logged in, save content to cookies.

Messages

The messages below will appear on-screen in response to a visitor's interaction with the plugin or as labels on actionable items:

- "Save item."
- "Unsave item."
- "Saved. See saved items."
- "You don't have any saved items."

Saved Screen

This is where visitors view the list of posts they've saved.

- Show a list of saved items.
- Create a Saved page on activation of the plugin.
- Delete the Saved page on deactivation of the plugin.

Shortcode

With a shortcode, the Saved page can be rendered wherever it is added.

Use a boilerplate.

This is the best boilerplate I've found. It's well-structured, object-oriented, and efficient.
It follows every best practice. And it's fast and light. You can use this page to generate a plugin codebase based on this WordPress Plugin Boilerplate:

You should get a .zip file. Extract it, and put it in your WordPress installation folder: wp-content/plugins/. If you open up your WordPress Dashboard and go to plugins, you'll see that your plugin is listed there. Don't activate it just yet.

Handle activation and deactivation.

It's important for our plugin to properly handle activation and deactivation. When our plugin is activated, we'll create a page named "Saved," which will hold the user's saved items in it. While creating that page, we will add a shortcode for our saved items into the content of that page. At the end, we'll save the page, get its ID, and store it in the database, so we can access it later on deactivation of the plugin.

When our plugin is deactivated, we will get the "Saved" page ID from the database, and then delete the "Saved" page, removing any trace of the plugin itself. We can do all of this in includes/class-toptal-save-activator.php and includes/class-toptal-save-deactivator.php. Let's start with the activation process:

<?php

// includes/class-toptal-save-activator.php

// ...

class Toptal_Save_Activator {

    /**
     * On activation create a page and remember it.
     *
     * Create a page named "Saved", add a shortcode that will show the saved items
     * and remember page id in our database.
     *
     * @since 1.0.0
     */
    public static function activate() {

        // Saved Page Arguments
        $saved_page_args = array(
            'post_title'   => __( 'Saved', 'toptal-save' ),
            'post_content' => '[toptal-saved]',
            'post_status'  => 'publish',
            'post_type'    => 'page'
        );

        // Insert the page and get its id.
        $saved_page_id = wp_insert_post( $saved_page_args );

        // Save page id to the database.
        add_option( 'toptal_save_saved_page_id', $saved_page_id );

    }

}

The activate() function is called when the plugin is activated.
It creates a new page using the wp_insert_post() function and saves the page ID to the database using add_option(). Now, let's proceed with the plugin deactivation.

<?php

// includes/class-toptal-save-deactivator.php

// ...

class Toptal_Save_Deactivator {

    /**
     * On deactivation delete the "Saved" page.
     *
     * Get the "Saved" page id, check if it exists and delete the page that has that id.
     *
     * @since 1.0.0
     */
    public static function deactivate() {

        // Get Saved page id.
        $saved_page_id = get_option( 'toptal_save_saved_page_id' );

        // Check if the saved page id exists.
        if ( $saved_page_id ) {

            // Delete saved page.
            wp_delete_post( $saved_page_id, true );

            // Delete saved page id record in the database.
            delete_option( 'toptal_save_saved_page_id' );

        }

    }

}

The deactivate() function, which is called when the plugin is deactivated, retrieves the page ID using the get_option() function, removes the corresponding page from the database using wp_delete_post(), and removes the saved ID from the options table using delete_option(). If we activate our plugin and go to pages, we should see a page called "Saved" with a shortcode in it. If we were to deactivate the plugin, that page would be removed. Since we used true as an argument in our wp_delete_post() call, this page won't go in the trash, but rather, it will be deleted completely.

Create a plugin settings page.

We can create our settings page inside the admin/class-toptal-save-admin.php file, and the first thing we need to do in that file is remove or comment out the call to wp_enqueue_style() inside the enqueue_styles() function and the call to wp_enqueue_script() inside the enqueue_scripts() function if we won't be adding any CSS/JS to the admin screen. However, if we are going to add some styling, then I recommend we load those files only on the settings page of our plugin, rather than on all WordPress admin pages.
We can do that by placing the following code directly above the lines we would have commented:

if ( 'tools_page_toptal-save' != $hook ) {
    return;
}

wp_enqueue_style( $this->plugin_name, plugin_dir_url( __FILE__ ) . 'css/toptal-save-admin.css', array(), $this->version, 'all' );

if ( 'tools_page_toptal-save' != $hook ) {
    return;
}

wp_enqueue_script( $this->plugin_name, plugin_dir_url( __FILE__ ) . 'js/toptal-save-admin.js', array( 'jquery' ), $this->version, false );

If you're wondering where I got that 'tools_page_toptal-save' part from, here's the thing: I know that I'm going to create a settings page with a slug of toptal-save, and I also know I'm going to add it to the Tools (tools.php) screen. So, putting those two together, we can tell that the value of the variable $hook is going to be 'tools_page_toptal-save' – a concatenation of the two values. If we're not on our plugin settings page, we use return to immediately terminate the execution of the function we are in.

Since I won't be adding any custom styling to my admin screen – because I want my plugin screen to look like a native WordPress screen – I won't add that code. Now, we can proceed with creating our settings page. We're going to start by adding a simple method to the Toptal_Save_Admin class that will call the add_submenu_page() function.

/**
 * Register the settings page for the admin area.
 *
 * @since 1.0.0
 */
public function register_settings_page() {

    // Create our settings page as a submenu page.
    add_submenu_page(
        'tools.php',                             // parent slug
        __( 'Toptal Save', 'toptal-save' ),      // page title
        __( 'Toptal Save', 'toptal-save' ),      // menu title
        'manage_options',                        // capability
        'toptal-save',                           // menu_slug
        array( $this, 'display_settings_page' )  // callable function
    );

}

That's quite a handful of arguments we are passing to the add_submenu_page() function. Here is what each of them means.

Parent slug: The slug name for the parent menu (or the filename of a standard WordPress admin page).
You can see the full list of parent slugs here.

Page title: The text to be displayed in the title tags of the page when the menu is selected.

Menu title: The text to be used for the menu title.

Capability: The capability required by the user for this menu to be displayed to them. We have used "manage_options", which allows access to Administration Panel options. You can read more about Roles and Capabilities over here.

Menu slug: The slug name to refer to this menu.

Callable function: The function to be called to output the content for this page.

Since we have defined the name of our callable function, we need to create it. But before we do: we used $this to reference an instance of a class from within itself. Here's what the PHP documentation has to say about it:

The pseudo-variable $this is available when a method is called from within an object context. $this is a reference to the calling object (usually the object to which the method belongs, but possibly another object, if the method is called statically from the context of a secondary object).

Next, we will add another method to the class:

/**
 * Display the settings page content for the page we have created.
 *
 * @since 1.0.0
 */
public function display_settings_page() {

    require_once plugin_dir_path( dirname( __FILE__ ) ) . 'admin/partials/toptal-save-admin-display.php';

}

This callable function includes our template that is going to show our settings page. You can see that we are referencing a file located in admin/partials called toptal-save-admin-display.php. Now, if you go to Tools, you won't see that screen. Why? Because we haven't hooked our register_settings_page() method to the admin_menu hook. We can do that by opening up our includes/class-toptal-save.php file and adding this chunk of code inside the define_admin_hooks() method, right below where the $plugin_admin = new Toptal_Save_Admin( $this->get_plugin_name(), $this->get_version() ); part is.
/**
 * Register all of the hooks related to the admin area functionality
 * of the plugin.
 *
 * @since 1.0.0
 * @access private
 */
private function define_admin_hooks() {

    $plugin_admin = new Toptal_Save_Admin( $this->get_plugin_name(), $this->get_version() );

    $this->loader->add_action( 'admin_menu', $plugin_admin, 'register_settings_page' );

    $this->loader->add_action( 'admin_enqueue_scripts', $plugin_admin, 'enqueue_styles' );
    $this->loader->add_action( 'admin_enqueue_scripts', $plugin_admin, 'enqueue_scripts' );

}

Don't worry about the calls to add_action(), since that's something that we're going to cover later on. For now, simply open up the Tools page, and you'll be able to see the Toptal Save page. If we open it, it works, but we see a blank screen since there's nothing on it. We're making some progress, but hey, we need to display some settings here, so let's do that.

We're going to start creating the fields, and that's something that we are going to do with the help of the WordPress Settings API. If you're not familiar with it, it allows us to create form fields that we can use to save our data.

/**
 * Register the settings for our settings page.
 *
 * @since 1.0.0
 */
public function register_settings() {

    // Here we are going to register our setting.
    register_setting(
        $this->plugin_name . '-settings',
        $this->plugin_name . '-settings',
        array( $this, 'sandbox_register_setting' )
    );

    // Here we are going to add a section for our setting.
    add_settings_section(
        $this->plugin_name . '-settings-section',
        __( 'Settings', 'toptal-save' ),
        array( $this, 'sandbox_add_settings_section' ),
        $this->plugin_name . '-settings'
    );

    // Here we are going to add fields to our section.
    add_settings_field(
        'post-types',
        __( 'Post Types', 'toptal-save' ),
        array( $this, 'sandbox_add_settings_field_multiple_checkbox' ),
        $this->plugin_name . '-settings',
        $this->plugin_name . '-settings-section',
        array(
            'label_for'   => 'post-types',
            'description' => __( 'Save button will be added only to the checked post types.', 'toptal-save' )
        )
    );

    // ...

}

Inside the register_settings() function we can add and configure all the fields. You can find the complete implementation of the function here. We have used the following in the function shown above:

register_setting(): Registers a setting and its sanitization callback.
add_settings_section(): Adds a new section to a settings page.
add_settings_field(): Adds a new field to a section of a settings page.

Whenever we used one of those three functions, a sanitization callback was provided. This allows the data to be sanitized and, if it is a field, to show the appropriate HTML element (checkbox, radio, input, etc). Also, we have passed an array of data to those callbacks, such as label_for, description or default as necessary. Now, we can create those sanitization callbacks. You can find the code for those callbacks here.

This is all fine; however, we still need to attach our fields to the admin_init hook so they are registered and shown. We will do that with add_action(), which ties a callback to a hook – a specific point that WordPress core fires during execution, or when a specific event occurs. admin_init is triggered before any other hook when a user accesses the admin area. First, we need to add an action in the includes/class-toptal-save.php file.
* * @since 1.0.0 * @access private */ private function define_admin_hooks() { $plugin_admin = new Toptal_Save_Admin( $this->get_plugin_name(), $this->get_version() ); // Hook our settings page $this->loader->add_action( 'admin_menu', $plugin_admin, 'register_settings_page' ); // Hook our settings $this->loader->add_action( 'admin_init', $plugin_admin, 'register_settings' ); $this->loader->add_action( 'admin_enqueue_scripts', $plugin_admin, 'enqueue_styles' ); $this->loader->add_action( 'admin_enqueue_scripts', $plugin_admin, 'enqueue_scripts' ); } Next, in admin/partials/topal-save-admin-display.php, we need to provide a view for the admin area of our plugin: <?php /** * Provide a admin area view for the plugin * * This file is used to markup the admin-facing aspects of the plugin. * * @link * @since 1.0.0 * * @package Toptal_Save * @subpackage Toptal_Save/admin/partials */ ?> <!-- This file should primarily consist of HTML with a little bit of PHP. --> <div id="wrap"> <form method="post" action="options.php"> <?php settings_fields( 'toptal-save-settings' ); do_settings_sections( 'toptal-save-settings' ); submit_button(); ?> </form> </div> The settings_fields() function is used to output nonce, action, and option_page fields for a settings page. It’s followed by the do_settings_sections() which prints out all settings sections added to a particular settings page. Finally, a submit button is added using the provided text and appropriate class(es) using the submit_button() function. Now, if we take a look at our page, it will look like this: This is all we have to do in our admin area. Let’s begin working on the public part of our plugin. Create the plugin functionality. Here comes the interesting part. We need to create multiple functions to separate our functionality: - A function that will show the “Save Item” button. This needs to check if the current user has already saved that item or not, depending on that, we’ll show different text as well as color. 
- A function that will save/unsave an item (AJAX).
- A function that will show all saved items.
- A function that will generate our shortcodes.

So let's get started with showing the button. We'll be doing all of this in public/class-toptal-save-public.php. While doing this, we'll need to create some additional helper functions to take care of certain things like:

- Creating a unique cookie name for the website
- Creating a cookie
- Getting the cookie value
- Getting the membership status from the settings

The code for these helper functions can be found here. The get_unique_cookie_name() function will help us generate a unique cookie name from the website URL, website name, and our custom defined suffix. This is so that the generated cookie name will not conflict when used in multiple WordPress sites under the same domain. The toptal_set_cookie() and toptal_get_cookie() functions will create and get the value of our cookies respectively. The get_user_status() function will get the status of our membership checkbox in the settings (returning 1 when checked, 0 otherwise).

Now, the juicy part: creating the function that will be responsible for showing the save button. The implementation for our show_save_button() function can be found here. We have used some new functions from the WordPress API here:

get_queried_object_id(): Retrieves the ID of the current queried object.
is_user_logged_in(): Checks if the current visitor is a logged in user.
get_user_meta(): Retrieves a user metadata field for a user.
wp_create_nonce(): Creates a cryptographic token tied to a specific action, user, user session, and window of time.

Now, let's create a function that will append our button to the end of the content. Here, we have two key requirements:

- Make sure that the button is shown only on the post type(s) selected in the settings.
- Make sure that the checkbox for appending the button is checked.

/**
 * Append the button to the end of the content.
 *
 * @since 1.0.0
 */
public function append_the_button( $content ) {

    // Get our item ID
    $item_id = get_queried_object_id();

    // Get current item post type
    $current_post_type = get_post_type( $item_id );

    // Get our saved page ID, so we can make sure that this button isn't being shown there
    $saved_page_id = get_option( 'toptal_save_saved_page_id' );

    // Set default values for options that we are going to call below
    $post_types = array();
    $override   = 0;

    // Get our options
    $options = get_option( $this->plugin_name . '-settings' );

    if ( ! empty( $options['post-types'] ) ) {
        $post_types = $options['post-types'];
    }

    if ( ! empty( $options['toggle-content-override'] ) ) {
        $override = $options['toggle-content-override'];
    }

    // Let's check if all conditions are ok
    if ( $override == 1 && ! empty( $post_types ) && ! is_page( $saved_page_id ) && in_array( $current_post_type, $post_types ) ) {

        // Append the button
        $custom_content = '';

        ob_start();

        echo $this->show_save_button();

        $custom_content .= ob_get_contents();
        ob_end_clean();

        $content = $content . $custom_content;

    }

    return $content;

}

Now, we need to hook this function to the the_content hook. Why? Because the_content is used to filter the content of the post after it is retrieved from the database and before it is printed to the screen. With this, we can add our save button anywhere in the content. We can do that in the define_public_hooks() method of includes/class-toptal-save.php, like this:

/**
 * Register all of the hooks related to the public-facing functionality
 * of the plugin.
 *
 * @since 1.0.0
 * @access private
 */
private function define_public_hooks() {

    $plugin_public = new Toptal_Save_Public( $this->get_plugin_name(), $this->get_version() );

    // Append the button to the content
    $this->loader->add_filter( 'the_content', $plugin_public, 'append_the_button' );

    $this->loader->add_action( 'wp_enqueue_scripts', $plugin_public, 'enqueue_styles' );
    $this->loader->add_action( 'wp_enqueue_scripts', $plugin_public, 'enqueue_scripts' );

}

Now, if we go to the plugin settings, check posts and pages, and enable appending the button, we'll see on any blog post that the button is shown. From here, we should go ahead and style that button. We can do that in public/css/toptal-save-public.css. Find the updated CSS file here. Now, let's create a function that will actually save the item.
We're going to do this in our public class, and we're going to do it with AJAX. The code is here. Let's hook this function into WordPress AJAX, using the wp_ajax_ and wp_ajax_nopriv_ actions. You can read more about AJAX in plugins here.

Before we finish this part, we need to do two more things:

- Localize a script.
- Create our AJAX call in public/js/toptal-save-public.js.

Localizing a script will be done via the wp_localize_script() function inside our public/class-toptal-save-public.php file. While we're at it, we'll also make sure the CSS and JS files are loaded depending on the state of our "use our style" checkbox.

/**
 * Register the stylesheets for the public-facing side of the site.
 *
 * @since 1.0.0
 */
public function enqueue_styles() {

    $options = get_option( $this->plugin_name . '-settings' );

    if ( ! empty( $options['toggle-css-override'] ) && $options['toggle-css-override'] == 1 ) {
        wp_enqueue_style( $this->plugin_name, plugin_dir_url( __FILE__ ) . 'css/toptal-save-public.css', array(), $this->version, 'all' );
    }
}

/**
 * Register the JavaScript for the public-facing side of the site.
 *
 * @since 1.0.0
 */
public function enqueue_scripts() {

    wp_enqueue_script( $this->plugin_name, plugin_dir_url( __FILE__ ) . 'js/toptal-save-public.js', array( 'jquery' ), $this->version, false );

    // Get our options
    $options = get_option( $this->plugin_name . '-settings' );

    // Get our text
    $item_save_text   = $options['text-save'];
    $item_unsave_text = $options['text-unsave'];
    $item_saved_text  = $options['text-saved'];
    $item_no_saved    = $options['text-no-saved'];

    $saved_page_id  = get_option( 'toptal_save_saved_page_id' );
    $saved_page_url = get_permalink( $saved_page_id );

    wp_localize_script( $this->plugin_name, 'toptal_save_ajax', array(
        'ajax_url'         => admin_url( 'admin-ajax.php' ),
        'item_save_text'   => $item_save_text,
        'item_unsave_text' => $item_unsave_text,
        'item_saved_text'  => $item_saved_text,
        'item_no_saved'    => $item_no_saved,
        'saved_page_url'   => $saved_page_url
    ) );
}

Now, we can proceed with our AJAX call. Our front-end script will look for elements with the class "toptal-save-button." A click handler will be registered to all matching elements, which will perform the API call and update the UI accordingly. You can find the code here and the necessary CSS here. I have also added a function that will handle the notification when the item is added. Here's how it all works.

Next, we need to create a shortcode for users to insert wherever they want. We can do that in public/class-toptal-save-public.php:

/**
 * Create Shortcode for Users to add the button.
 *
 * @since 1.0.0
 */
public function register_save_unsave_shortcode() {
    return $this->show_save_button();
}

We also need to register it, since the function by itself won't do anything. In includes/class-toptal-save.php, add this code after the line where we appended our button:

// Add our Shortcodes
$this->loader->add_shortcode( 'toptal-save', $plugin_public, 'register_save_unsave_shortcode' );

Now, this isn't going to work because we haven't yet loaded the add_shortcode() method inside our loader class. Here is the full code of the includes/class-toptal-save-loader.php file. I have added a new protected variable called shortcodes; then, in the constructor method of the class, I've turned it into an array.
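For context, the loader's shortcodes array ultimately delegates to WordPress's own add_shortcode() function. Stripped of the loader abstraction, shortcode registration is just this (the names below are illustrative, not the plugin's):

```php
<?php
// Bare shortcode registration (illustrative names). This is what the
// loader class ends up calling for each entry in its $shortcodes array.
function myprefix_save_button_shortcode( $atts ) {
    // A shortcode callback must RETURN its markup;
    // echoing it would print the output in the wrong place.
    return '<button class="myprefix-save-button">Save</button>';
}
add_shortcode( 'myprefix-save', 'myprefix_save_button_shortcode' );
```

With that registered, placing [myprefix-save] in any post or page body renders the button.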
On line 104, I've added a function that is responsible for registering our shortcodes; you can see that it's pretty much the same as the add_filter() function above it, except that "filter" is changed to "shortcode" and "filters" to "shortcodes." Also, in the run() method, I have added another foreach that goes through our shortcodes array and registers each one with WordPress.

That was easy. Remember, in the beginning, we used the shortcode [toptal-saved], so let's create a method that will show all of our saved items. Find the full code for this method here. Now, as always, we need to register the shortcode in includes/class-toptal-save.php:

// Add our Shortcodes
$this->loader->add_shortcode( 'toptal-save', $plugin_public, 'register_save_unsave_shortcode' );
$this->loader->add_shortcode( 'toptal-saved', $plugin_public, 'register_saved_shortcode' );

We have two more things to do here:

- Style our saved items page.
- Make sure that when a user removes a saved item, it disappears from the saved items page.

For the first task, you can find the necessary CSS code here. The second one involves a bit of front-end scripting; the full JavaScript code for that can be found here. As you'll see on line 52, I searched for the div with the class "toptal-saved-item." Then, on lines 70-75, we check if that parent div has the class toptal-saved-item. If it does, we hide the item with fadeOut and then, after the animation is over, completely remove it from the screen.

Now, let's move on to the more difficult part: making it modular.

Make the plugin modular.

The basic definition of a modular plugin is: extensible, or modular, code is code that can be modified, interacted with, added to, or manipulated, all without ever modifying the core code base.

Now, when it comes to this plugin, I would make sure that users can change the HTML inside the saved item on the saved items page.
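The mechanism behind this kind of extensibility is WordPress's apply_filters() function: build the default markup, then pass it through a named filter so third-party code can replace it before output. In generic, illustrative form (the plugin's actual filter is toptal_saved_item_html; the names below are placeholders):

```php
<?php
// Generic filterable-markup pattern (illustrative names).
function myprefix_render_saved_item( $item_id ) {

    // Build the default inner markup: first assignment uses "="...
    $inner_html_to_return  = '<h3>' . esc_html( get_the_title( $item_id ) ) . '</h3>';

    // ...and later lines append with ".=".
    $inner_html_to_return .= '<a href="' . esc_url( get_permalink( $item_id ) ) . '">Read more</a>';

    // Hand the markup to any hooked callbacks before returning it;
    // extra arguments (here $item_id) are passed along to them.
    return apply_filters( 'myprefix_saved_item_html', $inner_html_to_return, $item_id );
}
```

Any theme or plugin can then hook myprefix_saved_item_html with add_filter() and return modified markup, without touching the plugin's core code.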
So, we're going to need to make a few changes in our register_saved_shortcode() method:

- Change $html_to_return to $inner_html_to_return wherever we want users to be able to change the HTML. Make sure that the first assignment to our $inner_html_to_return variable uses "=" without a preceding dot.
- Use the apply_filters() function to register our filter.

With these two changes, you should end up with something like this. Now, if users want to interact with our code, they can add something like this inside their functions.php file:

<?php
add_filter( 'toptal_saved_item_html', 'change_toptal_saved_item_html' );

function change_toptal_saved_item_html( $inner_html_to_return ) {
    // Some custom code
    return $inner_html_to_return;
}

Generate translation files.

Translation is very important because it allows WordPress community members and polyglots to translate your plugin, making it accessible to non-English sites. That being said, let's dive into some technical details about how WordPress handles translations.

WordPress uses the GNU gettext localization framework for translation. In this framework, there are three types of files:

- Portable Object Template (POT)
- Portable Object (PO)
- Machine Object (MO)

Each of these files represents a step in the translation process. To generate a POT file, we need a program that will search through the plugin's code and extract all the text passed to the translation functions, such as __() and _e(). You can read more about the translation functions here. To translate, we take the text from the POT file, save both the English source and our translation in a PO file, and then convert the PO file into an MO file.

Doing this manually would take a lot of time, since you would have to write a few lines of code for each translatable file you have in your plugin. Fortunately, there's a better way, using a handy little plugin called Loco Translate. Once you install and activate it, go to Loco Translate > Plugins > Toptal Save.
From there, click Edit template, then Sync and Save. This will update the toptal-save.pot file inside our languages folder. Now the plugin is available for translation.

Build your WordPress plugin now.

We have built a rather simple plugin in this article, but in the process we followed the practices and standards that allow us to maintain and extend it easily. We've used WordPress functionality in ways that won't hamper the overall performance of the platform. Whether it is a simple plugin or a complicated one, planning and following best practices is key to building a robust plugin.