text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
I learned something interesting about about Groovy recently. I was tasked with building a tool for advanced admin users, to provide them an easy way to run batch jobs. It was designed so that users could point their browser at either /service/algorithm1 or /service/algorithm2. The request is handled by a controller, written in Groovy, which simply verifies that the parameter is either “algorithm1” or “algorithm2”. If that condition is met, the controller simply delegates to a service-layer class to actually process the request. The Strings “algorithm1” and “algorithm2” also happen to be the names of methods provided by the service class. So, in an attempt to be clever and avoid a bunch of conditional statements, I thought it’d be cool to implement the controller logic like this: if (param in [‘algorithm1’, ‘algorithm2’]) { service.”${param}”() } How slick is that?! I was tempted to take the rest of the day off after coming up with that beauty. I was feeling prettay, prettay proud of myself. I submitted a pull request and waited for the code-review kudos to come rolling in. And the approvals did come, except from one guy on the team who was not as big a fan of Groovy as I am. He was concerned that the line service.”${param}”() amounted to a security hole. He worried that someone could invoke arbitrary code by exploiting that line of code. Instead of my beautiful “dynamic” code, he preferred something like: if (param == ‘algorithm1’) { service.algorithm1() } else if (param == ‘algorithm2’) { service.algorithm2() } else { /* throw an error */ } Boring! Now, this teammate is a smart guy whose opinion I respect, so I was a bit troubled by his suggestion. My initial, almost visceral, reaction was to dismiss his concern as fear-of-the-unfamiliar (since he once said he was not very familiar with Groovy). I tried to assuage his concern. My reasoning went something like this: 1) The expression ${param} would have to evaluate to a String whose value matches the name of a method of the service class. It couldn’t cause arbitrary code to be executed – only a method of the service class could be invoked. 2) Besides, it’s guarded with the if param in ['algorithm1', 'algorithm2'] clause. Chill, brah! Ultimately, I figured, his suggestion/concern was just a code-style preference. But I wasn’t quite comfortable with my shallow argument-from-arrogance. I decided to fire up the Groovy console to test my claims. It’s a good thing I did, because they were not correct! While my implementation did guard against proceeding when the param value is anything other than “algorithm1” or “algorithm2”, my first argument was dead wrong. Let’s look at an example: class Greeting { void sayHello() { println "Hello" } void sayGoodbye() { println "Goodbye" } } def greeting = new Greeting() Here we just create a class with two simple methods and then create an instance of that class. Now let’s see if we can execute arbitrary code using the dynamic method invocation technique. First, let’s create a GString that contains arbitrary Groovy code but that should ultimately resolve to the String “sayHello”: def param = "${ new File('/Users/me/Documents/Financial').eachFile { file -> println('Sending this file to malicious server: ' + file.name) }; 'sayHello' }" Great, now let’s see what happens when we try to execute it: greeting."${param}"() Whoa, it works! My teammate was right, I was wrong, I’ll take my crow breaded and fried, please. Go ahead, fire up groovyConsole and try it for yourself. You’ll want to change the /Users/me/Documents/Financial bit to a directory path on your system. You should see that for each file in the specified directory, the message Sending this file to malicious server: <fileName> is printed out. Finally, the greeting.sayHello() method is invoked. While my initial implementation guarded against this kind of attack with the if param in ['algorithm1', 'algorithm2'] condition, I decided to use the approach suggested by my teammate instead. It’s conceivable that some unsuspecting developer may come in later and remove that guard or change it to something like if(param.contains('sayHello')) { greeting."${param}"() } My takeaway: Don’t let the appeal of clever techniques cloud your judgement with respect to more important concerns, such as security and performance. I should have recognized the danger of code like greeting.”${param}”. Doing anything with user-input should be treated with the utmost suspicion, even in a “secure” environment such as the system I was working with. Thankfully, I had an alert teammate who called me on it. UPDATE: The main assumption in this article is that you’re starting with a GString. Now, hopefully your web framework would not plop user-input into GStrings, but into plain old Strings. If that is the case, then the “danger” mentioned in the article isn’t really there. For example, this code: String plainString = new String(“new File(‘/Users/me/Documents/Financial’).eachFile { file -> println(‘Sending this file to malicious server: ‘ + file.name) }; ‘sayHello'”) assert plainString.class == java.lang.String greeting.”${plainString}”() should result in groovy.lang.MissingMethodException, as I initially expected. So…. nevermind? Heh, maybe, but I think it’s still an interesting thing to watch out for. One thought on “Flirting with disaster: A dangerous use of Groovy’s dynamic method invocation” Don’t give up: Map service = [sayHello: { println ‘hello’ }, sayGoodbye: {println ‘good bye’}] def methods = service.keySet() as List def param = ‘sayGoodbye’ def id = methods.indexOf(param) service.”${methods[id]}”() You can articulate your service methods as a list and then interrogate the list. After working with Clojure and Groovy for awhile, I have begun to think of classes as just typesafe maps. Thanks Ben! That’s an interesting approach. If I had more than just 2 service methods to deal with, probably would have been a bit more stubborn, and would have tried something like you suggest. Actually, I’d probably prefer not to essentially “redefine” the service interface in some collection (sorry if I misunderstood your point). Instead I’d be inclined to try using respondsTo(). I actually just tried using respondsTo like so: if (greeting.respondsTo(param)) { println “Oh gosh, the code in the ‘param’ was executed” greeting.”$param”() } And what happens is that the ‘param’ GString gets evaluated before it can be passed to respondsTo, which means the nasty code embedded in it gets executed. So, that’d be bad. So now you’ve caused me to learn something else 😉 . Maybe this a red-herring – maybe the ‘param’ could never actually be a GString, but a regular String? I don’t know… Wow, that does look dangerous. Kurt would be terrified.
https://objectpartners.com/2016/01/05/flirting-with-disaster-a-dangerous-use-of-groovys-dynamic-method-invocation/
CC-MAIN-2019-04
refinedweb
1,109
56.96
The hidden beauty of @protocol namespaces Interview by Tyler McNierney Introduction At the surface level, the @protocol appears to be a simple, robust software development infrastructure that magically provides a backend service for apps that are built on top of it. Beyond the list of verbs (e.g. put, getKeys), its monitoring service, and libraries that provide @protocol functionalities, The @ Company’s Internet protocol at its highest abstraction layer doesn’t really reveal too much to the software developer (and rightly so to make their programming lives easier!). Diving deeper into the @protocol, you will soon realize that there is a lot at play, from various rounds of data encryption to device-to-cloud synchronization. Rather than absorbing the entirety of the @protocol in one go, it’s best to begin by taking the @protocol one step at a time. Today, we will introduce a critical piece of the @protocol puzzle — application namespaces — through an interview with The @ Company co-founders Colin Constable (CTO) and Kevin Nickels (CPO). Tyler M: Let’s start from a big-picture perspective. What was your vision for apps that are developed on the @protocol? Kevin: First and foremost, we wanted apps focused on delivering new user experiences that weren’t possible before the @protocol. Rather than just repurposing old apps, we wanted developers to think differently. Part of that is predicated on the notion that all your data should be accessible through any app. We also have libraries that allow you to embed services and widgets that are available to any app (e.g. any app can add chats). Being able to inherit context as well as knowledge from other apps and using it is very important. Colin: Since the beginning, what we wanted to do was allow people to own their data in a simple fashion, with a data model that was a little bit more human-like. If you ask a computer a question, you’ll get the correct answer every time, but if you ask a human a question, depending on who’s asking it, you’ll get a different answer. Before the @protocol, there was no way of doing that in computer science at the protocol level. By moving this interaction down the stack (from the application layer), the Internet can effectively log into you and ask the same questions. You can then identify who’s asking you questions and give different answers. Once the @protocol was developed, we alone couldn’t imagine what could come out of it. The reason why we’re participating in hackathons, appathons, and speaking to as many people who will listen to us is because it sparks new neurons, opens new pathways, and causes new apps to just emerge. Kevin and I find that incredibly exciting — the fact that we can create a new technology, know it’s important, and not know what amazing things somebody’s going to do with it. Tyler M: So, what exactly is a namespace, and how is it used in software development? Colin: A namespace is a place to put a set of strings or characters together. Most people are familiar with DNS (domain name system): for example, if you type “cnn.com”, “fox.com”, or “bbc.com”, you get news sites. But you can’t just type in “news” and expect the Internet to tell you which particular flavor of news you want. We need to create namespaces so that humans can remember the name and computers can translate it to Internet protocol. Once there is a namespace like “bbc.com,” you can reliably know that somebody owns that particular space, and it needs to be managed so that there are no clashes. For instance, you don’t want to type “bbc.com” and get sent to Amazon’s home page. That’s why they have to be unique, and we at The @ Company created a new namespace with @Namespace. Tyler M: How exactly do namespaces work with @protocol applications? Kevin: Ooh, my favorite! Let’s say you have an app called “spacesignal.” The goal of the app is to chat with a complete stranger that you randomly connect with. You can decide whether you really want to chat with them or not, and the entirety of those interactions live in the “spacesignal” namespace for chats. You can have another app called “attached,” which is using exactly the same backend and libraries, but is designed for intimate conversations with your significant other. Again, same backend, same libraries, but the two apps are for diametrically opposite use cases. The only differences between these applications are their namespaces and user experiences. Namespaces allow you to differentiate your data and your user experiences for widely varying things while keeping it very simple to build those applications. Colin: The @protocol namespace is a little bit like an iceberg. You may only see “phone@colin” in the namespace, but there are a whole bunch of things in that namespace (e.g. specifying to whom you’re giving permission for your phone number). We tried to make it as simple as possible for the developer, but behind the scenes there are a lot more complications, so a lot of engineering happened to make sure that the namespace is accurate and the right people get the right data at the right time. Tyler M: That brings me to my final question concerning the security of @protocol namespaces. What precautions and safety features were considered in its design? Colin: A lot of things! The namespace is really just a pointer to data. Behind the scenes, the @protocol relies on a key-value pair — the key is the namespace, and the value is whatever data is in that namespace (and every bit of data also has metadata!). But here’s the catch: that data that I’ve shared (my phone number if Tyler asked for it) is encrypted with Tyler’s public key. We’ve got cryptography going on here that relies on public/private key crypto as well as symmetric key encryption. So only Tyler, who I’ve shared the data with, can actually decrypt that data. If you somehow broke into my phone and looked through my data, all of that is encrypted for the individuals that I’m going to share them with. Since that data is encrypted with another set of keys, if I have 2000 connections on my @protocol app, you’d have to break into 2001 phones to access all of it. There is in fact additional security on top of the crypto which is already incredibly difficult to break (RSA and AES are what we use), and if we come up with more security layers, we’ll add those too. Kevin: From a namespace perspective, “phone@colin” is something that you can look up. As Colin said, there are additional things under the cover (like “phone@colin” for who) that make these namespaces polymorphic because there are different values for different combinations of “phone@colin” (e.g. “phone@colin:@tyler” or “phone@colin:@kevin”). Tyler McNierney is a consultant at The @ Company as well as a UC Berkeley student studying electrical engineering and computer science. He is a firm believer in The @ Company’s mission statement and decided to remain with the company after the end of his summer internship. In his free time, Tyler M. loves to play piano, write music, and play Super Smash Bros. Ultimate on his Nintendo Switch. Learn more about The @ Company from our GitHub repo.
https://atsigncompany.medium.com/the-hidden-beauty-of-protocol-namespaces-6f5fab7f7a09
CC-MAIN-2022-21
refinedweb
1,252
61.77
stat - get file status #include <sys/types.h> #include <sys/stat.h> int stat(const char *path, struct stat *buf); The stat() function obtains information about the named file and writes it to the area pointed to by the buf argument. The path argument points to a pathname naming a file. Read, write or execute permission of the named file is not required, but all directories listed in the pathname leading to the file must be searchable. An implementation that provides additional or alternate file access control mechanisms may, under implementation-dependent conditions, cause stat() to fail. In particular, the system may deny the existence of the file specified by path. The buf argument is a pointer to a stat structure, as defined in the header <sys/stat.h>, into which information is placed concerning the file. The stat() function updates any time-related fields (as described in the definition of File Times Update in the XBD specification), before writing into the stat structure.. Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. The stat() function will fail if: - [EACCES] - Search permission is denied for a component of the path prefix. - [EIO] - An error occurred while reading from the file system. - : - [ENAMETOOLONG] - Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}. - [EOVERFLOW] - A value to be stored would overflow one of the members of the stat structure. None. None. None. fstat(), lstat(), <sys/stat.h>, <sys/types.h>. Derived from Issue 1 of the SVID.
http://pubs.opengroup.org/onlinepubs/007908799/xsh/stat.html
CC-MAIN-2018-34
refinedweb
259
57.57
Couchbase Lite is an embedded NoSQL database for mobile devices that runs locally on the device with full CRUD and query capabilities. In this article, we discover how to integrate Couchbase Lite with Android Studio. Android for mobile devices currently comes with an inbuilt local database—SQLite. This is an RDBMS based lightweight database that is available by default on all Android operating systems and provides CRUD operations to efficiently power your apps. SQLite is really a great choice when the requirement is just to have a simple database for your app to manage structured data. However, when the need is to store semi-structured or unstructured data and also to handle complex queries at scale without worrying about the schema of tables, then SQLite may not suit all of the developer’s requirements. A NoSQL database can be a better fit with these scaling requirements. Comparisons between an SQL and a NoSQL database have fuelled many debates, but both complement each other rather than compete with each other. In this article, we start by discussing the general database requirements for mobile devices, followed by NoSQL’s prominence in today’s mobile world and, finally, look at integrating a NoSQL database called Couchbase Lite with Android Studio. Databases for mobile devices Deciding on a database for mobile devices requires us to consider various factors like memory constraints, the user experience, a lightweight UI, etc – parameters that are very different compared to what would be required for a desktop or Web environment. So before we jump into integrating Couchbase Lite with Android, let us first check out the various requirements for databases in the mobile environment, which are listed below. - Unlike desktops or servers, mobile devices tend to have a lower battery life and relatively slower CPUs. Databases should hence not be performance intensive and should be able to effectively perform frequent operations like searches and updates. - Mobile databases should have a smaller memory footprint. In certain ways, higher memory requirements would also lead to increased CPU cycles, where the kernel tries to intensively search for available memory space in the RAM. Lower footprint demands not only lead to lower CPU cycles but also ensure other mobile apps don’t get impacted. - All the winning mobile apps are high performance with a fast loading time. Apps that freeze constantly are always on the backbench. Data consistency is another requirement when talking about these local databases on the mobile. If one were to go with a distributed database, the data may become inconsistent with respect to its remote counterpart if not taken care of, and the device might even discover this only on connecting to the Internet. A mobile developer with a cloud based database backend, like say Firebase, needn’t worry about most of these constraints and requirements, but needs to include these factors in the equation when opting for a local database. NoSQL’s relevance in the mobile world With the increased usage of mobile devices, a tremendous amount of data is being generated these days. This fact, clubbed with technology proliferation into new spaces like mobility, IoT and analytics, has led to a demand for mobile devices and apps to handle this high volume of data at a high speed. Besides, the nature of data (especially when it comes from IoT devices for which data exchange is in real-time) is continuous and either semi-structured or the application needs to cater or adapt to various schemas. Some of the fundamental philosophies that NoSQL brings in address these challenges in the mobile space, as discussed below. - The very fact that NoSQL is schema-less will help developers handle the data that lacks schema or structure. Besides, this property will also let them scale to changing or evolving requirements of data. The change in schema or structure could be done with ease at any point in time, in an independent way, without affecting the existing code. All of this will directly result in the agile delivery of apps, a quick turnaround time to market as against the time consuming process of design considerations, and limited scope for scalability and modularity in code when using a relational database. - The distributed architecture of NoSQL databases ensures that they perform better than RDBMS. Besides, NoSQL doesn’t have complex join operations and normalised data nor does it include complex queries. These factors give it the upper hand when it comes to database performance. Effective performance directly results in the better user experience of mobile apps because of the reduced load time of UI components and activities. This directly improves the battery life too. - Security is another aspect that should never be ignored while trying to achieve all these goals. The databases should be able to communicate to the server over secure channels. Besides, the channels that communicate with mobile devices over the Internet demand low latency for an improved mobile user experience. Also,exchange of data on the network should be lightweight to meet these performance requirements. All that said, we still may not be able to eliminate SQL databases which complement NoSQL in many ways. NoSQL doesn’t guarantee atomicity or integrity of data which an RDBMS is capable of. So, it is the developer’s needs at the end of day that decide which database to go with. Integrating Couchbase Lite with Android Studio Couchbase Lite is an open source project available under Apache License 2.0. It is an embedded JSON database that can work as a standalone, in a P2P network, or as a remote endpoint for a Sync Gateway. In this article, we explain how to power your Android apps with Couchbase Lite. Before getting onto integration, let us check out a few key features about this database. - Couchbase Lite stores and manages all the data locally on your mobile device in a lightweight JSON format. - Considering the requirements for a mobile database, Couchbase Lite qualifies with its lower memory footprint, built-in security with user authentication, AES based data based encryption, and transport to the server through TLS. - Couchbase Lite provides CRUD and query support through native APIs, and also works well with existing REST architectures with its programmatic access through REST API. - Stream and batch APIs from Couchbase Lite enable the transfer of real-time data in batches with low network latency and throughput, thereby addressing the exact demands of mobile apps. Let us now go through the steps of installing Couchbase Lite and the other basic operations to get started. It is assumed that readers are already conversant with the Android Studio IDE for developing Android apps. Integrating Couchbase dB with Android is straightforward. You could start off by adding the dependency elements given below in your application’s build.gradle: dependencies { compile 'com.couchbase.lite:couchbase-lite-android:+' } In the Java part of the application, you would need the following basic set of packages to start with: import com.couchbase.lite.*; import com.couchbase.lite.android.AndroidContext; Now that you are all set to use CouchBase APIs in your Android app, I’d like to illustrate a sample code, used for creating a database, and do an insert, update and delete of a document in it as mentioned in (1), (2), (3) and (4), respectively. // (1) Get the database or create it if it doesn’t already exist. Manager manager = new Manager(new JavaContext(),Manager.DEFAULT_OPTIONS); Database db = manager.getDatabase("couchdB"); // (2) Create a new document (a record) in the database. Document doc = db.createDocument(); Map properties = new HashMap(); properties.put("firstName", "OSFY"); doc.putProperties(properties); // (3) Update a document. doc.update(new Document.DocumentUpdater() { @Override public boolean update(UnsavedRevision newRevision) { Map properties = newRevision.getUserProperties(); properties.put("firstName", "Johnny"); newRevision.setUserProperties(properties); return true; } }); // (4) Delete a document. doc.delete();
https://opensourceforu.com/2017/08/integrate-couchbase-lite-android-studio/
CC-MAIN-2018-26
refinedweb
1,296
52.6
Opened 15 years ago Last modified 11 years ago #1955 defect new — at a forgotten resetTimeout in twisted.protocols.postfixInitial Version Description I think after sent code to client, daemon should reset timeout. If data source is a big and slow object, getting and sending result to client may consume a lot of time, if you didn't reset the timer, the connection maybe lost before client reply to daemon. --- postfix.py.orig Tue Jul 25 00:57:49 2006 +++ postfix.py Tue Jul 25 01:08:13 2006 @@ -48,6 +48,7 @@ def sendCode(self, code, message=): "Send an SMTP-like code with a message." self.sendLine('%3.3d %s' % (code, message or )) + self.resetTimeout() def lineReceived(self, line): self.resetTimeout() Change History (0) Note: See TracTickets for help on using tickets.
https://twistedmatrix.com/trac/ticket/1955?version=0
CC-MAIN-2022-05
refinedweb
133
57.16
The problem Assign cookies Leetcode Solution provides with two arrays. One of the arrays represents the size of the cookies and the other represents the greediness of the children. The problem states that you are the parent of children, and you want the maximum number of children to be content. You can only give one cookie to a child. Find the maximum number of content children. Let’s take a few examples first. g = [1,2,3], s = [1,1] 1 Explanation: We have two cookies of size 1 each, and three children with different content values. We can make only a single child content that has greediness value 1. Since the other two children require larger cookies. g = [1,2], s = [1,2,3] 2 Explanation: We can make both of the children feel content because the cookies that we have are greater or equal to the required sizes. Thus the output is 2. Approach for Assign Cookies Leetcode Solution The problem Assign Cookies Leetcode Solution asks us to find the maximum number of content children if we can give only a single cookie to a child. Thus the best approach is to greedily try to make the least greedy child feel content with the smallest cookie that can be used. So, to simulate this process we sort both the given arrays and try to assign the cookies to the children. If we can’t assign the current cookie to the current child then we try the next cookie until we satisfy the current child. Because if we can not satisfy the current child, we cannot satisfy the next children with the current cookie. Code C++ code for Assign Cookies Leetcode Solution #include <bits/stdc++.h> using namespace std; int findContentChildren(vector<int>& g, vector<int>& s) { sort(g.begin(), g.end()); sort(s.begin(), s.end()); int numberOfChildren = g.size(), numberOfCookies = s.size(); int cookie = 0, answer = 0; for(int i=0;i<numberOfChildren && cookie<numberOfCookies;){ if(s[cookie] >= g[i]){ i++; answer++; } cookie++; } return answer; } int main(){ vector<int> g = {1, 2, 3}; vector<int> s = {1, 1}; cout<<findContentChildren(g, s); } 1 Java code for Assign Cookies Leetcode Solution import java.util.*; import java.lang.*; import java.io.*; class Ideone { public static int findContentChildren(int[] g, int[] s) { Arrays.sort(g); Arrays.sort(s); int numberOfChildren = g.length; int numberOfCookies = s.length; int cookie = 0, answer = 0; for(int i=0;i<numberOfChildren && cookie<numberOfCookies;){ if(s[cookie] >= g[i]){ i++; answer++; } cookie++; } return answer; } public static void main (String[] args) throws java.lang.Exception { int[] g = {1, 2, 3}; int[] s = {1, 1}; System.out.println(findContentChildren(g, s)); } } 1 Complexity Analysis Time Complexity O(NlogN), because of the time required to sort the given arrays. Here N represents the number of elements in given arrays. Space Complexity O(NlogN), because of the space required to sort the given arrays. Again the N represents the number of elements in the arrays.
https://www.tutorialcup.com/leetcode-solutions/assign-cookies-leetcode-solution.htm
CC-MAIN-2021-04
refinedweb
496
56.66
Spark Language Package – Preview 1spark, visual studio December 19th, 2008. Download Now Update: This link is obsolete – please visit the download page for the latest release bits. I’ve tested installing and uninstalling which will remove the files and registry entries. Eventually there may be a single installer that contains the VSIP files in addition to the things currently in the zip download – but for now they’re entirely separate. The copy of Spark.dll installed by the msi is used used by the language service for parsing and colorizing. It won’t conflict with the Spark.dll in the zip download, and that’s the file you should continue to use as an assembly reference in your projects. As always feedback is welcomed. Update: There’s been a report of “no effect” on XP sp2. I should also mention the installer has no user interface, so it’s normal for it for a small dialog box to show a progress bar move back and forth a bit then disappear without confirmation. Also the spark file will need to be “Open With…” the “Source Code (Text) Editor” in case you’re using the xml or html editor. <use namespace="System.Collections.Generic"/> <use namespace="System.Linq"/> <use namespace="System.Web.Mvc.Html"/> <use namespace="MyApp.Models"/> December 20th, 2008 at 2:02 pm it does not pickup and extensions methods where does it look for the using namespaces because i do not have the spark configuration i the web.config file but in a spark.config file December 22nd, 2008 at 9:28 am [...] DeJardin has posted a preview of Spark Language integration with Visual Studio. Spark is a view engine for ASP.NET MVC and [...] December 23rd, 2008 at 2:09 am Awesome. Seems to work well. I can’t seem to get intellisense to work if I have R# intellisense enabled though. I have to switch back to VS intellisense. Makes sense I suppose, but it’d be nice if I could get both. Also, I don’t auto complete closing tags.. would it be possible to support intellisense in an unclosed tag? This is great though, keep it up. December 31st, 2008 at 2:21 am @Subnus try preview 2 – the config information should be used now @Aaron I noticed that too – though the hotkeys (ctrl+space, ctrl+shift+space) still seemed to work even with resharper enabled. That’s definitely a high priority. Which type of tags do you mean? Html tags or the ${} syntax? December 31st, 2008 at 3:14 am I’m referring to both ${} and tags. If you type ${foo. then intellisense doesn’t work, but if you type ${} and then go back and type foo. it seems to. December 31st, 2008 at 1:55 pm Try grabbing preview 2 – the } should be added automatically for you. To be honest I’m not really sure what direction to take the tags. The big drawback to making a language service is you lose the VS support in the Html/Xml editors for schema, outlining, format-document, completion of end tags and attribute quotes. Some of those can be annoying – colorized text is a bit extreme. I’ve always liked how the Xml editor’s indentation rules for formatting helped to validate the structure was correct. January 2nd, 2009 at 4:47 am [...] [...] January 2nd, 2009 at 6:21 am Heh – reminds me of the old joke – you know you’ve been working on computers for too long when 256 seems like a nice round number January 8th, 2009 at 8:23 pm I hear Spark from Haaked’s ASP.NET MVC Northwind Demo Using the Spark View Engine ;The Spark bring us what ? how about it’s performance ? January 10th, 2009 at 1:57 pm All of the .spark files being rendered are used to generate and load a csharp class with a RenderView() method, so the performance is very good. The type is re-used for the same view/master each time so all it’s creating an instance, setting context, and calling RenderView. The code like ${product.Name} and each=”var item in Items” turns into compiled code like Output.Write(H(product.Name)); and foreach(var item in Items) {…}. So really the view engine doesn’t have any language or interpreter built into it at all – it’s a class generator. One benefit it brings is a smaller, text/html friendly file format. There are other features like declaring helper functions (macro) in the template language instead of in csharp language. December 7th, 2013 at 12:11 am I dream to visit the various nations, to get acquainted with interesting people. Interventional radiology treatment may include a CT scan or ultrasound guidance in detecting and aspirating the pus from the abscess. Tumors and abnormal cell mass can cause UTs obstruction such as bladder and kidneys that can lead to urinary tract infection, in a study of “Occurrence of urinary tract infection in children with significant upper urinary tract obstruction” by Roth CC, Hubanks JM, Bright BC, Heinlen JE, Donovan BO, Kropp BP, Frimberger D.
http://whereslou.com/2008/12/19/spark-language-package-preview-1
CC-MAIN-2014-10
refinedweb
853
74.59
Video Tutorial and example for the most searched topic "Java Read File Line By Line" This video tutorial explains you the steps to create a program in Java programming language to read a text file line by line. The "Java Read File Line by line" is most searched topics by the Java developers. This means Java developers are using the Java for reading the file line by line. Reading a text file line by line is provided many benefits. Java provides the API for reading the text file line by line efficiently. There are many situations where it is required to read the large text file ranging from 1 GB to 10 GB in size. In this situation you must read the file line by line and the process the one line at a time as per your business needs. Commonly developers are using the BufferedReader class to read the file line by line. The BufferedReader class reads the characters in a buffer thus increases the performance of the application. So, in your program it is advisable to use this feature of Java. Following is the video tutorial that shows you how you can write code to read a text file in Java line by line. Steps to create a test program for reading file line by line: Step 1: Copy or create your text file and save in a directory Step 2: Create a java file and save on your computer. Step 3: Add the following code into your program: import java.io.*; public class ReadTextFileLineByLine{ public static void main(String args[]){ try{ FileInputStream fstream = new FileInputStream("myfile.txt"); DataInputStream in = new DataInputStream(fstream); BufferedReader br = new BufferedReader( new InputStreamReader(in)); String line=""; while((line = br.readLine()) != null){ System.out.println(line); } in.close(); }catch(Exception e){ System.out.println("Error while reading the file:" + e.getMessage()); } } } Step 4: Save the above file and compile and run the code. The important line of the program is line = br.readLine(), which reads the data from text file line by line. This program is very useful as it can be used to read the data file of big size also. Developers are trying to find the example code for "Java Read file line by line". Code explained here is good enough to use in production environment with little modification to process large text file also. Developers used the java.io.BufferedReader class to read files in console, JSP, Servlet, Struts, Spring etc.. based applications. Most of the developers follows the following way to read the file line by line: String line=""; while((line = br.readLine()) != null){ System.out.println(line); } In the above code BufferedReader class is used, which has the readLine() method to read one line at a time from input stream. The readLine() method of the BufferedReader class reads one line of text from the input stream and returns as a String. Here is the use of readLine() method: String oneLine = br.readLine(); Import methods of the BufferedReader class: close() - The close() method is used to close the input stream. read() - The read() is used to read a single character from the stream. readLine() - The readLine() method is very useful and it is used to read one line at a time. Other method of reading file in Java line by line The java.util.Scanner class can also be used to read a file in Java line by line. Liked it! Share this Tutorial Ask Questions? Discuss: Video Tutorial - Java Read File Line By Line Post your Comment
http://roseindia.net/java/javafile/Video-Tutorial-Java-Read-File-Line-By-Line.shtml
CC-MAIN-2014-15
refinedweb
589
72.56
Angie ORM About Angie ORM is a feature-complete database relationship manager designed for NodeJS. Current Database Support - MySQL via the node-mysql package - sqlite3 via the node-sqlite3 package Planned Database Support - Firebase - MongoDB - Couchbase - Postgres Usage npm i -g angie-orm angie-orm help Building databases is easy! In a file called AngieORMFile.json: { "databases": { "default": { "type": "sqlite3", "name": "angie.db" }, "test_site": { "type": "mysql", "alias": "angie_site", "username": "root" } } } You can make models in several ways global.app.Model('test', function($Fields) { let obj = {}; obj.test = new $Fields.CharField({ default: () => 'test ' + 'test' }); obj.many = new $Fields.ManyToManyField('test2', { name: 'test' }); return obj; }); @Model class test { constructor($Fields) { this.test = new $Fields.CharField({ default: () => 'test ' + 'test' }); this.many = new $Fields.ManyToManyField('test2', { name: 'test' }); } } These two models are functionally equivalent. To actually build your databases: angie-orm syncdb [name] Where if no name is specified, the default database will be synced. This will also automatically migrate your database, but a command for migrating databases is also available: angie-orm syncdb [name] [--destructive] Where destructive will force stale tables and columns to be deleted. All of the above is done for you in your AngieFile.json if you are building an Angie application. The first argument provided the function passed to the model will be $Fields, an object containing all of the available field types. Individual fields can also be required from their source in angie-orm/src/models/$Fields: import {CharField} from 'angie-orm/src/models/$Fields'; // or require('angie-orm/src/models/$Fields').CharField; Available Field Types Include: CharField IntegerField KeyField ForeignKeyField ManyToManyField Additionally, Fields can be passed configuration options on instantiation: minValue maxValue minLength maxLength nullable unique default In that order as arguments or in an Object. Foreign key and many to many fields require as a first argument a related table with which a reference is made. Additionally, these fields support nesting and deepNesting options. Many to many fields require a name be passed as reference. If you have a need for a Field type that is not included, feel free to make a Pull Request (and follow the current format), or ask. All queries return a Promise with a queryset global.app.Models.test.all().then(function(queryset) { return queryset[1].update({ test: 'test' }).then(function() { process.exit(0); }); }); querysets are extended Arrays, which include an index list of records with extended methods, a list of unmodified results ( queryset.results), and methods Model methods include: all: Fetch all of the rows associated with a Models fetch: Fetch a certain number of rows filter: Filter a queryset. Supports conditionals. create: Create a record delete: Delete a record update: Update a record exists: Does the filterquery return any results (passes a boolean to the resolve) All supports no arguments. The other READ methods on the tables support an Object as an argument with the following keys: - values: The list of fields you would like to see - ord: Order 'ASC' or 'DESC' - int: The number of rows you would like to have returned - each key in the table, with a WHERE value (eg: id>1would be { id: '>1' }) Create/Update queries require all non nullable fields to have a value in the arguments object or an error will be thrown. All queries support the database argument, which will specify the database that is hit for results. Update queries are available on the entire queryset as well as each row. Additionally, methods to retrieve the first and last row in the returned records are available. Many to many fields have the added functionality of fetching all related rows, fetching, filtering related rows, and adding, and removing related rows. The arguments to these methods must be existing related database objects..
https://doc.esdoc.org/github.com/angie-framework/angie-orm/
CC-MAIN-2021-17
refinedweb
619
55.95
The path to reproducing a scenario for a custom is sometimes a long-ish one. I needed to include a custom assembly in an Azure Function App which I describe here “How to add assembly references to an Azure Function App” and documented this one here "Could not load file or assembly" along the way here as well. Here are the steps you need to do, I also include how to add the reference to a Console applictio as well. - Create a Class Library project - Add the code, build and compile to make the DLL - Create a consumer Console application - Add the code and run it Create a Class Library project I used Visual Studio 2015 community, selected File –> New –> Project as shown in Figure 1. Figure 1, create a DLL in C# for referencing from a Console or Azure Function App Click the OK button and the pro´ject and solution are created for you. Add the code, build and compile to make the DLL Add / modify the existing code so that it resembles the following. The name space will be the name you provided for the project, leave that as is, it will also be the name of your DLL, for example, here the dll will be called benjamin.dll. using System; namespace benjamin { public class Greetings { public static string Hello(string name) { return $"Greetings {name}"; } } } Name the class and add a method which does something. The class I named as Greetings which contained a method name Hello that accepts a string and returns the Greetings message. Presss CTRL + SHIFT+ B, F6 or select Build – Build Soluiont from the menu. Depending wheter you created a Debug or Release version of the dll, navigate to the project directory and you will find the dll, as shown in Figure 2. Figure 2, create a DLL in C# for referencing from a Console or Azure Function App Create a consumer Console application To consume the method in the dll from another program, you will need to add a reference to the assembly. But first, create the application that will consume the method. For example, create a Console application, as shown in Figure 3. Figure 3, consume a DLL in C# for referencing from a Console Then, as seen in Figure 4, add the reference to the dll by right-clicking on the Reference folder in the Console application than, Browse –> Browse… button –> navigate to the location of the dll –> select it and press the Add button. Figure 4, how to reference a DLL in C# from a Console Add the code and run it Lastly, add the using reference from within the console application and call the method, similar to that shown here. using System; using static System.Console; using benjamin; namespace consume_benjamin { class Program { static void Main(string[] args) { WriteLine("What is your name? "); var name = ReadLine(); if (name != null) { WriteLine(benjamin.Greetings.Hello(name)); } else { WriteLine("You must enter a name, good bye"); } ReadLine(); } } } Press F5, enter a Name and the code is called and displyed as seen in Figure 5- Figure 5, how to reference a DLL in C# from a Console
https://blogs.msdn.microsoft.com/benjaminperkins/2017/04/13/how-to-make-a-simple-dll-as-an-assembly-reference-just-for-fun/
CC-MAIN-2018-39
refinedweb
524
62.41
I have to compute the average in this program, but the average cannot consists of zero which ends the program. For example if I enter 1, 2, 3, -1, then 0 to exit, the average will be the sum (5) over 4 and not 5 since the last number, the zero does not go in the sum. Any help? Thanks. public class Problddm1 { public static void main (String []args){ Scanner scan = new Scanner(System.in); // Object declaration int number = 1; int sum = 0; float average = 0; float count = 0; int posnum =0; int negnum =0; // While loop do{ count++; System.out.print("Enter a number, program exits on 0: "); number = scan.nextInt(); sum = sum + number; average = sum / count; if (number <0) { negnum++; } else if (number >0) { posnum++; } } while (number != 0); // Results System.out.println("The number of positives is: " +posnum); System.out.println("The number of negatives is: " +negnum); System.out.println("The total is: " + count); System.out.println("The average is: " +average); } }
http://www.javaprogrammingforums.com/whats-wrong-my-code/5585-something-wrong-computing-average.html
CC-MAIN-2017-47
refinedweb
163
58.08
Chat messaging is everywhere today. We can talk to customer support personnel through a web app that allows them to see our request and respond in real-time. We can interact with our friends and family, no matter where we are, through apps like WhatsApp and Facebook. There are multitudes of instant messaging apps, for many use cases, available today, even some that allow you to customize for a particular community or team (e.g Slack), however, you still may find you need to create your own real-time messaging app in order to reach and interact with a particular audience. This may be a social app for language learners or an app for a school to interact with students and parents. And you might be wondering, “...how do I do this?”. There are many options available for building real-time applications, however, in this post, I'll show you how to use Stream Chat API with its custom React components to build a messenger style app. In addition, we will add authentication to the application using Auth0. Using these managed services helps us focus on building the application, leaving the concern of server management and scaling to the provider. The application we're going to build by the end of this post will support: - A conversation list where a user can see their chat history. - A typing indicator to tell who is typing. - Message delivery status. - A message thread to keep the discussion organized. - Online/Offline statuses for users. - Emoji support. - File Attachment and Link preview. And it's going to behave like this: In the next post, we will add functionality to make phone calls, so stick around 😉. To follow along with this tutorial, you'll need to have knowledge of React.js, Node.js and npm installed (npm is distributed with Node.js – which means that when you download Node.js, you automatically get npm installed on your machine). Alternatively, you can use yarn with any of the commands. Getting Started With The React App To save time on setup and design, we will be using create-react-app to create our React project. Open your command line application and run the following commands: npx create-react-app react-messenger cd react-messenger This will set up the React project and install necessary dependencies. We used npx, which is a tool that gets installed alongside npm (starting from version 5.2). Setting Up Auth0 We will be using Auth0 to handle user authentication and user management. Auth0 is an Authentication-as-a-Service (or Identity-as-a-Service) provider that provides an SDK to allow developers to easily add authentication and manage users. Its user management dashboard allows for breach detection and multifactor authentication, and Passwordless login. You need to create an application on Auth0 as a container for the users of this messenger app. You'll need some API keys to use the SDK. To create an application on Auth0, visit Auth0's home page to log in. Once you have logged in, click on the big button at the upper right-hand corner that says New Application. This should show a modal asking for an application name and a type. Give it the name react-messenger, select Single Page Web Application, and then click the Create button. This should create an application on Auth0 for you. Next, we need to set up an API on Auth0. In the side menu, click on APIs to show the API dashboard. At the upper right corner of the page, click the big Create API button. This shows a modal form asking for a name and an identifier. Enter react-messenger-api as the name, and as the identifier. This will create an API for us. Click on the Settings tab and it should display the id, name, and identifier of the API. We will need this identifier value later on, as the audience parameter on authorization calls. To learn more about this parameter, check out the documentation. Secure the React App With Auth0 Now that we have our application setup in Auth0, we need to integrate it with React. We will create a class that will handle login, log out, and a way for the app to tell if the user is authenticated. In the src directory, add a new file auth/config.js with the content below: export default { clientId: "your auth0 clientId", domain: "yourauth0domain.auth0.com", redirect: "", logoutUrl: "", audience: "" }; Replace the placeholder for domain and clientId with the data in your Auth0 application dashboard. In the settings page of the Auth0 application, update the fields Allowed Callback URLs with, and Allowed Logout URLs with to match what we have in config.js. The Allowed Callback URLs setting is the URL that the Auth0 Lock widget will redirect to after the user is signed in. The other setting, Allowed Logout URLs, is the URL to redirect to after the user is logged out. Create another file src/auth/service.js and add the code below to it: import config from "./config"; import * as Auth0 from "auth0-js"; class Auth { auth0 = new Auth0.WebAuth({ domain: config.domain, clientID: config.clientId, redirectUri: config.redirect, audience: config.audience, responseType: "id_token token", scope: "openid profile email" }); authFlag = "isLoggedIn"; userProfileFlag = "userProfile"; localLogin(authResult) { localStorage.setItem(this.authFlag, true); localStorage.setItem( this.userProfileFlag, JSON.stringify(authResult.idTokenPayload) ); this.loginCallback(authResult.idTokenPayload); } login() { this.auth0.popup.authorize({}, (err, authResult) => { if (err) this.localLogout(); else { this.localLogin(authResult); } }); } isAuthenticated() { return localStorage.getItem(this.authFlag) === "true"; } getUserProfile() { return JSON.parse(localStorage.getItem(this.userProfileFlag)); } } const auth = new Auth(); export default auth; In the code above, we used the Auth0 client-side library, which we will add later as a dependency. We initialized it using details from the config.js. We have the login() function which, when called, will trigger a pop-up window where users can log in or signup. The localLogin() function stores some data to localStorage so that we can access them on page refresh. The loginCallback function will be set later in src/App.js so it can use the authentication result for some other operations. The idTokenPayload has information such as email, name, and user id. We are also going to build our logout functionality here. This will clear whatever we stored in localStorage from the previous section, as well as sign the user out of the system. Add the following code to the class we defined in the previous section: localLogout() { localStorage.removeItem(this.authFlag); localStorage.removeItem(this.userProfileFlag); this.logoutCallback(); } logout() { this.localLogout(); this.auth0.logout({ returnTo: config.logoutUrl, clientID: config.clientId }); } Working With Our Auth Service With the authentication service class complete, we will now use it in the React component. We will install the Auth0 dependency used earlier and add bootstrap, to beautify the UI a little. Open your terminal and run npm install --save bootstrap auth0-js to install those dependencies. Then, open src/index.js and add import 'bootstrap/dist/css/bootstrap.css to include the bootstrap CSS on the page. Open src/App.js and update it with the following code: import React, { Component } from "react"; import authService from "./auth/service"; import Conversations from "./Conversations"; import Users from "./Users"; class App extends Component { constructor(props) { super(props); authService.loginCallback = this.loggedIn; authService.logoutCallback = this.loggedOut; const loggedIn = authService.isAuthenticated(); this.state = { loggedIn, page: "conversations" }; } loggedIn = async ({ email, nickname }) => { this.setState({ loggedIn: true }); }; loggedOut = () => { this.setState({ loggedIn: false }); }; switchPage = page => this.setState({ page }); render() {>{/* content goes here */}</div> </div> ); } } export default App; What this component does is render a page with a navigation header. When the user is not logged in we show the login button which, when clicked, calls the login function from the auth service. If they're logged in, they get two links to switch between the two pages for this application and a logout button. Since it's a small app we'll be using a boolean variable to determine what to display in the main content area below the navigation header. When the login button is clicked, it pops out a new window with a page asking the user to log in or sign up. When they're done with signup or login, it will redirect to the URL we set for Allowed Callback URLs in the application's settings page in Auth0's dashboard, which is. At the moment we don't have that page so we'll set it up. Add a new file in the root public folder named close-popup/index.html with the content below: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" content="font-src: 'self' data: img-src 'self' data: default-src 'self'" /> <title></title> <script src=""></script> </head> <body> <script type="text/javascript"> const webAuth = new auth0.WebAuth({ domain: "yourname.auth0.com", clientID: "your client id" }); webAuth.popup.callback(); </script> </body> </html> You should replace the two lines indicating domain and clientID with your Auth0 application credentials. This will close the window once the page gets redirected here. Adding Stream Chat Messaging For Real-Time Conversation So far we have our app set up to allow users to log in and log out. Now we need to allow them to chat with each other. We're going to build this functionality using Stream Chat’s messaging SDK. The awesomeness of using this is that it provides a Chat SDK with an easy-to-work-with API for building real-time messaging applications. Some of its features include: - Chat threads to provide a good way to reply to specific messages. - Emoji chat reactions just like you would on Facebook or Slack. - Ability to send emojis and file attachments. - Direct and group chats. - Search function for messages or conversations. Another interesting addition is that it provides UI components that you can use in your app to speed up development. At the time of this writing, it's only available for React Native and React. We will be using the React UI component to add messaging functionality to our React application. This is because out of the box it provides components to view a list of existing conversations, send and receive messages in real-time, chat threads and message reactions. To get started using Stream messaging SDK, you'll need to sign up and sign in to the dashboard. Then, click the Create App button at the upper right corner of the page. Enter the app name react-messenger, select your preferred server location, and whether it's a production app or in development. Once created, you should see the secret, key, and region it's hosted on. Copy the app's key as you'll be needing this soon. Open your command line and run npm install --save stream-chat-react. This package contains the Stream Chat React component which we will use and also install the stream chat SDK stream-chat. We're going to use the stream-chat module to create a chat client and connect to the Chat server. Add a new file src/chat/service.js and paste the content below in it: import { StreamChat } from "stream-chat"; const tokenServerUrl = ""; const chatClient = new StreamChat("API_KEY"); const streamServerFlag = "streamServerInfo"; let isClientReady = localStorage.getItem(streamServerFlag) !== null; export const initialiseClient = async (email, name) => { if (isClientReady) return chatClient; const response = await fetch(tokenServerUrl, { method: "POST", mode: "cors", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ email, name }) }); const streamServerInfo = await response.json(); localStorage.setItem(streamServerFlag, JSON.stringify(streamServerInfo)); chatClient.setUser( { id: streamServerInfo.user.id, name: streamServerInfo.user.name, image: streamServerInfo.user.image }, streamServerInfo.token ); isClientReady = true; return { chatClient, user: { ...streamServerInfo.user } }; }; export const getClient = () => { const streamServerInfo = JSON.parse(localStorage.getItem(streamServerFlag)); chatClient.setUser( { id: streamServerInfo.user.id, name: streamServerInfo.user.name, image: streamServerInfo.user.image }, streamServerInfo.token ); return { chatClient, user: { ...streamServerInfo.user } }; }; export const isClientInitialised = () => isClientReady; export const resetClient = () => { localStorage.removeItem(streamServerFlag); }; The code we added allows us to create a chat client and set the user for the client. It is with this chat client that the application will interact with the stream chat server. To initialize the chat client you need the API key which you copied from the Stream dashboard. We’ll then call chatClient.setUser() to set the current user. The setUser() function takes two parameters. An object which contains the user's name and id, and the token needed to authenticate the client. That information will be coming from a server we will add later. We call into that server with the name, and Adding The User List Page With our chat service done, we're going to add a page that will list the users in the application and a user can select who to chat with. Add a new file src/Users.js with the content below: import React, { Component } from "react"; export default class Users extends Component { constructor(props) { super(props); this.state = { users: [] }; } async componentDidMount() { const { users } = await this.props.chatClient.queryUsers({ id: { $ne: this.props.user.id } }); this.setState({ users }); } startConversation = async (partnerId, partnerName) => { const userId = this.props.user.id; const userName = this.props.user.name; const filter = { id: { $in: [userId, partnerId] } }; const channels = await this.props.chatClient.queryChannels(filter); if (channels.length > 0) { alert("chat with this user is already in your conversation list"); } else { const channel = this.props.chatClient.channel("messaging", userId, { name: `Chat between ${partnerName} & ${userName}`, members: [userId, partnerId] }); await channel.create(); this.props.switchPage("conversations"); } }; render() { return ( <div> <div class="list-group"> {this.state.users.map(user => ( <button onClick={() => this.startConversation(user.id, user.name)} key={user.id} {user.name} {": "} {user.online ? "online" : `Last seen ${new Date(user.last_active).toString()}`} </button> ))} </div> </div> ); } } We've created a component that'll receive the chat client as props from a parent container. It queries the stream chat server for users using chatClient.queryUsers({ id: { $ne: this.props.user.id } }). The queryUsers function allows you to search for users and see if they are online/offline. The filter syntax uses Mongoose style queries and queryUsers takes in three parameters. The first argument is the filter object, the second is the sorting and the third contains any additional options. Above, we used queryUsers to query for all users except the currently logged in user. As an aside, because this function doesn't run MongoDB in the background, only a subset of its query syntax is available. You can read more in the docs. The startConversation function is called when a user is selected from the rendered user list. It checks if a conversation between those two users exists, and if not, it creates a conversation channel for them. To start the conversation we create a channel by calling chatClient.channel() and passing it the type of channel and the channel id, as well as an object specifying the channel name and it's members (if it's a private channel), as the third argument. This object can contain any custom properties but the ones we've used, in addition to an image field are reserved fields for Stream Chat. We used the logged in user's id as the channel id and, because we're building a messenger style app, I've set the channel type (see below) to messaging. There are 5 built-in channel types. They are: - Livestream: Sensible defaults in case you want to build chat like Twitch or football public chat stream. - Messaging: Configured for apps such as Whatsapp or Messenger. - Gaming: Configured for in-game chat. - Commerce: Good defaults for building something like your own version of Intercom or Drift. - Team: For if you want to build your own version of Slack or something similar. While those are the custom defined channel types, you can also create your own and customize it to fit your needs. Check the documentation for more info on this. When we initialize a channel by calling chatClient.channel(), it returns a channel object. Then, the app creates the channel by calling await channel.create(), to create it on the server. When that's completed, switchPage("conversations") is called, to take the user back to the conversation screen where they see a list of their conversations and chats with other users. Adding The Conversation Page Next up is to create the conversation page. We're going to make a new React component. We will use the components from the stream-chat-react library. Add a new file src/Conversations.js and update it with the content below: import React from "react"; import { Chat, Channel, ChannelList, Window, ChannelHeader, MessageList, MessageInput, Thread } from "stream-chat-react"; import "stream-chat-react/dist/css/index.css"; const App = props => { const filters = { type: "messaging", members: { $in: [props.userId] } }; return ( <Chat client={props.chatClient} theme={"messaging dark"}> <ChannelList filters={filters} /> <Channel> <Window> <ChannelHeader /> <MessageList /> <MessageInput /> </Window> <Thread /> </Channel> </Chat> ); }; export default App; Here we have used eight components from stream-chat-react library. The <Chat /> component creates a container to hold the chat client and the theme which will be passed down to child components, as needed. The <ChannelList /> component is used to render a list of channels. The <Channel /> component is a wrapper component for a channel. It has two required props which are channel and client. The client prop will be set automatically by the Chat component while the channel prop will automatically be set by the <ChannelList /> component when a channel is selected. When a channel is selected, we want to render a view where users can see the list of messages for that conversation/channel, enter messages, and respond to message threads. For this we've used the <ChannelHeader />, <MessageList />, <MessageInput />, and <Thread /> components. Using these components automatically gives us the following features: - URL preview (Try sending a link to a Youtube video to see this in action) - Video Playback - File uploads & Previews - Slash commands such as /giphy and /imgur. - Online Presence – Who is online - Typing Indicators - Message Status Indicators (sending, received) - Emoticons - Threads/Replies - Reactions - Autocomplete on users, emoticons, and commands With these components ready, we need to render them in App.js when the user is logged in and navigates pages using the links in the navigation header. Open src/App.js and import the chat service as follows: import { getClient, initialiseClient, isClientInitialised, resetClient } from "./chat/service"; Then update line 18 (in the constructor) to: if (loggedIn && isClientInitialised()) { const { chatClient, user } = getClient(); this.state = { loggedIn, page: "conversations", chatClient, user }; } else this.state = { loggedIn, page: "conversations" }; This will call getClient() to create a chat client using the info we already have from the token server. We will also update the loggedIn and loggedOut function to initialize the chat client and invalidate the chat client respectively. loggedIn = async ({ email, nickname }) => { const { chatClient, user } = await initialiseClient(email, nickname); this.setState({ loggedIn: true, chatClient, user }); }; loggedOut = () => { resetClient(); this.setState({ loggedIn: false }); }; We will update our render() function to add new variables used in determining the page to show as follows: const showConversations = this.state.loggedIn && this.state.page === "conversations"; const showUsers = this.state.loggedIn && this.state.page !== "conversations"; Then replace the comment {\* content goes here *\} with the following: { showConversations && ( <Conversations chatClient={this.state.chatClient} userId={this.state.user.id} /> ); } { showUsers && ( <Users chatClient={this.state.chatClient} user={this.state.user} switchPage={this.switchPage} /> ); } With all these modifications the App.js file should look exactly like this: import React, { Component } from "react"; import authService from "./auth/service"; import Conversations from "./Conversations"; import Users from "./Users"; import { getClient, initialiseClient, isClientInitialised, resetClient } from "./chat/service"; class App extends Component { constructor(props) { super(props); authService.loginCallback = this.loggedIn; authService.logoutCallback = this.loggedOut; const loggedIn = authService.isAuthenticated(); if (loggedIn && isClientInitialised()) { const { chatClient, user } = getClient(); this.state = { loggedIn, page: "conversations", chatClient, user }; } else this.state = { loggedIn, page: "conversations" }; } loggedIn = async ({ email, nickname }) => { const { chatClient, user } = await initialiseClient(email, nickname); this.setState({ loggedIn: true, chatClient, user }); }; loggedOut = () => { resetClient(); this.setState({ loggedIn: false }); }; switchPage = page => this.setState({ page }); render() { const showConversations = this.state.loggedIn && this.state.page === "conversations"; const showUsers = this.state.loggedIn && this.state.page !== "conversations";> {showConversations && ( <Conversations chatClient={this.state.chatClient} userId={this.state.user.id} /> )} {showUsers && ( <Users chatClient={this.state.chatClient} user={this.state.user} switchPage={this.switchPage} /> )} </div> </div> ); } } export default App; Adding The Token Server Now our frontend is done and we're close to completion! Next, we need to add the token server we mentioned earlier, which is needed to generate a user token and other data for use with the stream chat client. We won't build this from scratch but rather clone a project from GitHub which will do this for us. The project repository can be found on GitHub. Follow the instructions below to set it up: - Open your terminal and run git clone && cd stream-chat-boilerplate-apito clone the repository. - Run npm installto install the Node.js dependencies. - Once that's done, add a new file .envwith the content below. NODE_ENV=development PORT=8080 STREAM_API_KEY=your_api_key STREAM_API_SECRET=your_app_secret Replace the value for STREAM_API_KEY and STREAM_API_SECRET with what you find in your Stream Chat dashboard. Then start the token server by running npm start. This will start the token server and display Running on port 8080 in development mode. 🚀 in the console. Running And Testing The App We have the token server running. Now we run the React app by running npm start. This will open the browser and navigate us to localhost:3000. Then you need to login and try out the app! Try running it from different browsers with different users. Use the /giphy command and freely share videos. Add messages reaction and try out the features I mentioned earlier! That's A Wrap 🚀 Almost everything in today's world happens in real-time. You receive a real-time notification if someone you follow starts a live video on Instagram. You can send messages in real-time through WhatsApp and get the other individuals response within milliseconds. You may have the need to add real-time messaging to your app, build a Slack competitor or some other social app allowing users to communicate in real-time. In this post, I showed you how to build a messenger style chat application in React using the Stream Chat React SDK, and the Stream Chat React components. You have tested the application and have seen how rich it is with just a few lines of code. We also added security to the app using Auth0. With this knowledge, you can start building a messaging app under few hours and shipping your prototype in a short time. While we focused on text in this post, in the next one we'll add video call feature to the app. So, don't miss the next one!! 😎 Here's the link to the repository for what we built on GitHub. For more information on, you’ll enjoy the API tour here. Discussion (1) Hello peter, love the tutorial and tried the app but i came across a problem i have been trying to fix dont know if you could help
https://dev.to/pmbanugo/secure-react-chat-messaging-app-with-auth0-4893
CC-MAIN-2022-05
refinedweb
3,829
58.58
Pro Micro RP2040 Hookup Guide Introduction The Pro Micro RP2040 is a low-cost, high-performance board with flexible digital interfaces featuring the Raspberry Pi Foundation's RP2040 microcontroller. The board uses well known Pro Micro footprint with castellated mounting holes. Required Materials To follow along with this tutorial, you will need the following materials. You may not need everything though depending on what you have. Add it to your cart, read through the guide, and adjust the cart as necessary. USB 3.1 Cable A to C - 3 FootCAB-14743 Tools You will need a soldering iron, solder, and general soldering accessories for a secure connection when using the plated through hole pads. Solder Lead Free - 15-gram TubeTOL-09163 Prototyping Accessories Depending on your setup, you may want to use IC hooks for a temporary connection. However, you will want to solder header pins to connect devices to the plated through holes for a secure connection. Breadboard - Self-Adhesive (White)PRT-12002 Break Away Headers - StraightPRT-00116 IC Hook with PigtailCAB-09741 Photon Stackable Header - 12 PinPRT-14322 For those that want to take advantage of the Qwiic enabled devices, you'll want to grab a Qwiic cable. SparkFun Qwiic Cable KitKIT-15081 Qwiic Cable - 100mmPRT-14427 Qwiic Cable - 500mmPRT-14429 Qwiic Cable - Breadboard Jumper (4-pin)PRT-14425 Suggested Reading If you aren't familiar with the Qwiic system, we recommend reading here for an overview if you decide to take advantage of the Qwiic connector. We would also recommend taking a look at the following tutorials if you aren't familiar with them. How to Solder: Through-Hole Soldering Serial Communication Serial Peripheral Interface (SPI) Pulse Width Modulation Logic Levels I2C Analog vs. Digital Hardware Overview Old School to New School The Pro Micro RP2040 design uses the original Pro Micro and Qwiic Pro Micro USB C's footprint. The Pinout All of the Pro Micro RP2040's GPIO and power pins are broken out to two, parallel headers. Some pins are for power input or output, other pins are dedicated GPIO pins. Further, the GPIO pins can have special function depending on how they are multiplexed Here's a map of which pin is where and what special hardware functions it may have. Power There are a variety of power and power-related nets broken out to connectors and through hole pads. Each pad has a castellated edge. The back of the board also has the USB pins broken out for power These nets consist of the following: - V is the voltage provided from the USB connector. - + is the raw, unregulated voltage input for the Pro Micro RP2040. If the board is powered via USB (i.e. V), the voltage at this pin will be about 4.8V (USB's 5V minus a Schottky diode drop). On the other hand, if the board is powered externally, through this pin, the applied voltage can be up to 5.3V. - 3.3V is the voltage supplied to the on-board RP2040. We suggest using regulated 3.3V when connecting to this pin. If the board is powered through the raw "+" pin, this pin can be used as an output to supply 3.3V other devices. - RST can be used to restart the Pro Micro RP2040. There is a built-in reset button to reset the board. However, the pin is broken out if you need to access this pin externally. This pin is pulled high by a 10kΩ resistor on the board, and is active-low, so it must be connected to ground to initiate a reset. The Qwiic Pro Micro will remain "off" until the reset line is pulled back to high. - GND, of course, is the common, ground voltage (0V reference) for the system. USB Pins On the back of the board you can access the USB data pins and power for either USB 1.1 Host or Device. GPIO Pins The Pro Micro RP2040 breaks out the GPIO pins to plated through holes pads on the edge of the board. Each pad is castellated as well. The Pro Micro's GPIO pins — 20 in all — (if you include the two pins on the Qwiic connector as well) are multi-talented. Every pin can be used as a digital input or output, for blinking LEDs or reading button presses. These pins are referenced via an integer value between 0 and 29. Four pins feature analog to digital converters (ADCs) and can be used as analog inputs. These are useful for reading potentiometers or other analog devices. All pins can be set with the pulse width modulation (PWM) functionality, which allows for a form of analog output. The RP2040 can only provide a total of up to 16 controllable PWM outputs. There are hardware UART (serial), I2C, and SPI pins available as well. These can be used to interface with digital devices like serial LCDs, XBees, IMUs, and other serial sensors. The RP2040 has 26 external interrupts, which allow you to instantly trigger a function when a pin goes either high, low, or changes state. Qwiic Connector The board includes a Qwiic connector to easily connect Qwiic enabled I2C devices to the board. SCL is connected to GPIO17 while SDA is connected to GPIO16 Qwiic Cable - Breadboard Jumper (4-pin)PRT-14425 Qwiic Cable - Female Jumper (4-pin)CAB-14988 On-Board LEDs There are two LEDs on the Pro Micro RP2040. The red LED indicates whether power is present. The other is the addressable WS2812 RGB LED. The addressable LED is connected to GPIO25. You'll to define functions or use the WS2812 library to control that LED. There is a pad on the back of the board that is connected to the WS2812's DO pin. If you decide to daisy chain more LEDs, you'll want to solder to this pad. External Flash Memory The Pro Micro RP2040 includes a W25Q128JVPIM, which adds 128Mb (16MB) of flash memory externally. Boot Button The boot button is connected to the external flash memory. Pressing this button forces USB boot mode so that the board shows up as a USB mass storage device. Reset Button As explained earlier, there is a reset button to reset the RP2040. This adds the option of forcing the RP2040 into bootloader mode without needing to unplug/replug the board back into your USB port. To keep the board size at a minimum, the buttons are not labeled. To distinguish between the two buttons, just remember that the reset button is on the same side of the board as the reset pin. SWD Pins For advanced users, there are two pins (i.e. D for data/SWDIO and C for clock/SWCLK) on the back broken out for SWD programming on the back of the board. You'll need to solder wire to connect to these pins. Board Dimensions The board measures 1.3" x 0.7". Keep in mind that the USB-C connector is not flush with the board and will protude about 0.05" from the edge of the board. Not included in the image below is the PCB thickness, which is 0.8mm. This is thinner than a majority of PCBs used for SparkFun original designs. Hardware Hookup Header pins were left off the Pro Micro RP2040. For advanced users, you could also design a PCB to take advantage of the castellated edges for a lower profile. Here are a few tutorials to connect to the pads depending on your personal preference. How to Solder: Through-Hole Soldering September 19, 2013 How to Solder: Castellated Mounting Holes May 12, 2015 In order to power and upload to the board, you will simply need a USB C cable connected to your computer. Connecting to GPIO There a few methods of connecting to the GPIO. For a temporary connection when prototyping, you can use IC hooks to connect your circuit to a breadboard. Below is an example of a basic 5mm LED connected to a 330Ω current limiting resistor on a breadboard. For a secure connection to the Pro Micro RP2040's GPIO, it is recommended to solder header pins. If you designed the board using the footprint, you can also solder the board using the castellated headers. In this case, two 1x12 male header pins are being soldered to the board with the help of a breadboard. While originally intended to connect Qwiic-enabled devices, you can also use the SDA and SCL pins as a quick, visible test to see if the board is working. In this case, the SCL pin was used as a GPIO pin. Similar to the circuit with the IC hooks, the pin was connected to a basic 5mm LED and a 330Ω current limiting resistor on a breadboard. Qwiic Enabled Device You can also easily connect a Qwiic enabled device to the Qwiic connector. Below is an example of the Qwiic VL53L1X distance sensor and Qwiic Micro OLED connected to the Qwiic Pro Micro. UF2 Bootloader The Pro Micro 2040 Overview. Examples (MicroPython). Press and hold the boot button down with one hand. Press the reset button momentarily. Release the boot button. The board should appear. Once you open it in Thonny, adjust the pin number to the GPIO that the LED is connected to. Since we are using the example that was written specifically for the Pico as opposed to the Pro Micro RP2040, we'll need to adjust the pin. In this case, we'll use the RP2040's GPIO17 that is connected to the Qwiic connector so that we do not have to solder pins to the board. This is assuming that we are connecting a current limiting resistor and an LED between the GPIO17 and GND. language:python # ========== DESCRIPTION========== # The following code was originally written by # the Raspberry Pi Foundation. You can find this # example on GitHub. # # # # Note that the GPIO for the LED was adjusted # for the Pro Micro RP2040. Make sure to use # a current limiting resistor with the LED. from machine import Pin, Timer led = Pin. WS2812 If you have the MicroPython examples saved, head to the following folder in your downloads .../pico-micropython-examples/pio/neopixel_ring/neopixel_ring.py . Your code should look like the following. Of course, you can also copy and paste the code provided after the next paragraph as well. Once you open it in Thonny, adjust the pin number to the GPIO that the LED is connected to for PIN_NUM. Since we are using the example that was written specifically for the Pico as opposed to the Pro Micro 2040, we'll need to adjust the pin. In this case, we'll use the RP2040's GPIO25 that is connected to the WS2812. We will also adjust the NUM_LEDs to 1. language:python # ========== DESCRIPTION========== # Example using PIO to drive a set of WS2812 LEDs. # # The following code was originally written by # the Raspberry Pi Foundation. You can find this # example on GitHub. # # # # Note that the 'NUM_LEDs' was adjusted to 1. Also # the GPIO for the addressable WS2812 RGB LED called # `PIN_NUM` was adjusted for the Pro Micro RP2040. import array, time from machine import Pin import rp2 # Configure the number of WS2812 LEDs. NUM_LEDS = 1 PIN_NUM = 25 brightness = 0.2 @rp2.asm_pio(sideset_init=rp2.PIO.OUT_LOW, out_shiftdir=rp2.PIO.SHIFT_LEFT, autopull=True, pull_thresh=24) def ws2812(): T1 = 2 T2 = 5 T3 = 3 wrap_target() label("bitloop") out(x, 1) .side(0) [T3 - 1] jmp(not_x, "do_zero") .side(1) [T1 - 1] jmp("bitloop") .side(1) [T2 - 1] label("do_zero") nop() .side(0) [T2 - 1] wrap() # Create the StateMachine with the ws2812 program, outputting on pin sm = rp2.StateMachine(0, ws2812, freq=8_000_000, sideset_base=Pin(PIN_NUM)) # Start the StateMachine, it will wait for data on its FIFO. sm.active(1) # Display a pattern on the LEDs via an array of LED RGB values. ar = array.array("I", [0 for _ in range(NUM_LEDS)]) ########################################################################## def pixels_show(): dimmer_ar = array.array("I", [0 for _ in range(NUM_LEDS)]) for i,c in enumerate(ar): r = int(((c >> 8) & 0xFF) * brightness) g = int(((c >> 16) & 0xFF) * brightness) b = int((c & 0xFF) * brightness) dimmer_ar[i] = (g<<16) + (r<<8) + b sm.put(dimmer_ar, 8) time.sleep_ms(10) def pixels_set(i, color): ar[i] = (color[1]<<16) + (color[0]<<8) + color[2] def pixels_fill(color): for i in range(len(ar)): pixels_set(i, color) def color_chase(color, wait): for i in range(NUM_LEDS): pixels_set(i, color) time.sleep(wait) pixels_show() time.sleep(0.2) def wheel(pos): # Input a value 0 to 255 to get a color value. # The colours are a transition r - g - b - back to r. if pos < 0 or pos > 255: return (0, 0, 0) if pos < 85: return (255 - pos * 3, pos * 3, 0) if pos < 170: pos -= 85 return (0, 255 - pos * 3, pos * 3) pos -= 170 return (pos * 3, 0, 255 - pos * 3) def rainbow_cycle(wait): for j in range(255): for i in range(NUM_LEDS): rc_index = (i * 256 // NUM_LEDS) + j pixels_set(i, wheel(rc_index & 255)) pixels_show() time.sleep(wait) BLACK = (0, 0, 0) RED = (255, 0, 0) YELLOW = (255, 150, 0) GREEN = (0, 255, 0) CYAN = (0, 255, 255) BLUE = (0, 0, 255) PURPLE = (180, 0, 255) WHITE = (255, 255, 255) COLORS = (BLACK, RED, YELLOW, GREEN, CYAN, BLUE, PURPLE, WHITE) print("fills") for color in COLORS: pixels_fill(color) pixels_show() time.sleep(0.2) print("chases") for color in COLORS: color_chase(color, 0.01) while True: print("rainbow") rainbow_cycle(0) Hit the "Run current script" button. Once the code runs, it will display each color at "0.2" brightness. The LED will then animate. Since there is only one LED attached, it will look like it will be blinking through the colors. Once the board jumps into the while loop, the LED will begin cycling between colors smoothly. Remember, you can have the board run the example every time the board is powered up by following the note provided in an earlier example. If you are looking to simplify the code, you can also use the library written for the WS2812. This saves some of the functions in a separate file. Just make sure to adjust the GPIO to connect to the WS2812(s) and the number of LEDs. Resources and Going Further For more information, check out the resources below: - Schematic (PDF) - Eagle Files (ZIP) - Board Dimensions (PNG) - Graphical Datasheet (PDF) -
https://learn.sparkfun.com/tutorials/pro-micro-rp2040-hookup-guide/all
CC-MAIN-2022-40
refinedweb
2,399
64
public class | source import History from 'flarum/utils/History.js' History The History class keeps track and manages a stack of routes that the user has navigated to in their session. An item can be pushed to the top of the stack using the push method. An item in the stack has a name and a URL. The name need not be unique; if it is the same as the item before it, that will be overwritten with the new URL. In this way, if a user visits a discussion, and then visits another discussion, popping the history stack will still take them back to the discussion list rather than the previous discussion. Constructor Summary Member Summary Method Summary Public Methods public canGoBack(): Boolean source Check whether or not the history stack is able to be popped.
https://api.flarum.dev/js/v0.1.0-beta.5/class/js/forum/src/utils/History.js~History.html
CC-MAIN-2020-29
refinedweb
137
69.01
Controlling Your Lights with Your PC In this article I'll examine part of Amazon's web service functionality and use the new Pocket Outlook API to create a useful application for the Windows Mobile 5.0 Smartphone. The current version of the application lets the user search Amazon's inventory, view book details, create and edit a cart, and checkout from the device. A Pocket Outlook task is used to store the cart's checkout link. I wrote Amazon Mobile using Visual Studio 2005 Professional and a Motorola Q Smartphone running Windows Mobile 5.0. Windows Mobile devices use Microsoft's ActiveSync software to synchronize with a desktop, so you'll need to have it installed to set up the environment on the phone. Next you'll need the .NET Compact Framework, since this application is written in managed code. Lastly, download the appropriate software development kit (SDK) for your platform. I installed the Windows Mobile 5.0 SDK for Smartphone because I am writing code for the Q. If you are using a Pocket PC device you should download and install the Pocket PC version. The rest of this article will assume that you are writing a Smartphone application, but if that is not the case you will find it easy enough to follow along using a Pocket PC device. Readers who are not familiar with Amazon's web service may find it helpful to review Peter Bernhardt's introduction in this previous Coding4Fun article. I used ECS version 4.0 for this project. You won't have to download an SDK as in version 3.0, but you will need to create an account with Amazon to get a unique access key. Make sure to replace the “Your Access Key Here” string in Constants.cs with your own access key before running my code. Once you've set up your dev environment, start by creating a new project for your application. Select New > Project from Visual Studio's File menu. You'll find the Device Application template under Visual C# > Smart Device > Windows Mobile 5.0 Smartphone in the Project Types tree on the left side of the New Project dialog. Figure 1: The New Project dialog. Visual Studio will use the default resolution and DPI settings of the current “form factor” to create the form you see in the designer. If you open a project written for a device with a 320x240 screen (such as the project I wrote for this article) before changing the settings, the IDE will move or resize the contents of the forms to make them fit the smaller 176x220 resolution. You can prevent this by changing the designer's form factor to match that of the Q. Choose Tools > Options, and then expand “Device Tools” in the tree on the left side of the dialog and select “Form Factors.” Find “Windows Mobile 5.0 Smartphone” in the menu and edit its properties. Clear the “Enable rotation support” checkbox and make sure that both the horizontal and vertical resolutions are set to 131 pixels per inch. Now clear the “Show Skin” checkbox and enter 320 for the screen width and 240 for the screen height. You can save this form factor with a new name, such as “MotoQ,” and make it your default if you prefer. Now let's get started in the code. First you'll need to add a reference to Amazon's Web Services Definition (Description) Language (WSDL). The web reference wizard will use the WSDL to create a proxy class that you'll use to access the service's functionality. Right-click on “Web References” in Solution Explorer, select “Add Web Reference” and enter the address of the WSDL as shown in the figure below. Figure 2: Feel free to change the unwieldy name that Visual Studio assigns your Web reference. Visual Studio has created a startup form that, by default, is the entry and exit point for your application. This form is a great place to accept search criteria from the user and display the results you get back from the web service. Find the Toolbox and add a TextBox control to your form to capture the user's input. You'll also need a ListView control to display the results. The softkey menu that's built into the default Smartphone form is a convenient way to let the user start the search. Add a MenuItem and handle its click event to call the search method you're about to write. Figure 3: The SearchForm. Make sure you've set a reference to the namespace created by the WSDL wizard in a using statement at the top of your code. To perform a keyword search against Amazon's product database, start by creating an ItemSearchRequest object and setting a few of its properties. ItemSearchRequest is declared as a partial class in the web reference's Reference.cs file, along with all of the other ECS classes. Here is the code I used to set up the search: Imports AmazonMobile.com.amazon.webservices ... Private Sub PerformSearch(ByVal searchText As String) Dim aws As New AWSECommerceService() ' set up the request Dim itemSearchRequest As New ItemSearchRequest() itemSearchRequest.Keywords = searchText itemSearchRequest.SearchIndex = "Books" itemSearchRequest.ResponseGroup = New String() {"ItemAttributes"} itemSearchRequest.ItemPage = "1" Dim itemSearch As New ItemSearch() itemSearch.Request = New ItemSearchRequest(0) {itemSearchRequest} itemSearch.SubscriptionId = Constants.AMAZON_ACCESS_KEY using AmazonMobile.com.amazon.webservices; ... private void PerformSearch(string searchText) { Cursor.Current = Cursors.WaitCursor; // clear out old results resultsView.Clear(); AWSECommerceService aws = new AWSECommerceService(); // set up the request ItemSearchRequest itemSearchRequest = new ItemSearchRequest(); itemSearchRequest.Keywords = searchText; itemSearchRequest.SearchIndex = "Books"; itemSearchRequest.ResponseGroup = new string[] { "ItemAttributes" }; itemSearchRequest.ItemPage = "1"; ItemSearch itemSearch = new ItemSearch(); itemSearch.Request = new ItemSearchRequest[1] { itemSearchRequest }; itemSearch.SubscriptionId = Constants.AMAZON_ACCESS_KEY; Notice that I've set the ResponseGroup property to a string array containing the string “ItemAttributes.” The web service uses the values in this array to determine the subset of fields it should populate in the response object. You'll find a list of all possible options in the web service's documentation, a helpful guide to the functionality that the service exposes. Now you can ask the web service to perform the search. // perform the search ItemSearchResponse itemSearchResponse = aws.ItemSearch(itemSearch); Items[] itemsResponse = itemSearchResponse.Items; 'perform the search Dim response As ItemSearchResponse = aws.ItemSearch(itemSearch) Dim itemsResponse As Items() = response.Items Almost every interaction with ECS will follow the straightforward pattern you've just seen. First, create and set the properties of one or more request objects. Next, create the appropriate search object and use its properties to wrap one or more request objects, as well as your ECS access key. Finally, pass the search object to the appropriate web service method. The results of your request will be encapsulated by the object that is returned from the web service. You'll use the response object's properties to access the information that you want to display. Loop through the response object's Item collection, adding a new ListViewItem to the ListView's Items collection for each of the results. You can use the ListViewItem's Tag property to store the item's Amazon Standard Identification Number (ASIN) for future reference. In most cases, a book's ASIN is simply its ISBN. The next step is creating a form to lookup any offers that might be associated with a search result. Right-click on the project node in Solution Explorer and select Add > Windows Form from the context menu. Give your form a name and click “Add” to create it in your project. For the offer form you'll probably want a ListView control to display the results of your lookup operation, as well as a few menu items at the bottom of the form for navigation. If you add a back button to the menu to return to the previous form the button's event handler should call the form's Close method. In my code I construct an OfferForm using an ASIN string, which simply sets an instance variable to track the book associated with the current instance of the form. Your application will use a book's ASIN to lookup offers. You already know the item's ASIN, so you should create an ItemLookupRequest, in this case, instead of the ItemSearchRequest that you used to perform the initial search. You'll need to set the properties of one or more of the request objects and then set the ItemLookup's Request property as I did above. Finally, call the ItemLookup method on your instance of the web service and use the resulting ItemLookupResponse object to populate the ListView object. I suggest doing all this in a method that can be called by the form's Load event handler rather than the constructor. That way you'll avoid any conflicts that might occur if the form has not finished initializing its controls when you begin setting their properties. Next you need a method to create a cart when the user selects an offer and since you don't want to create a new cart every time, you'll also want to be able to add an offer to a cart that already exists. A mechanism for writing information about a cart to the file system would be helpful. These are the functions of the Settings and SettingsManager classes that I've created to store the cart's ID and HMAC key. Take a look at the OnAddItem, CreateCart and GetCart methods to see how I implemented add and edit operations for the cart. Amazon cart response objects have a PurchaseURL property that contains a URL string used to access the cart's checkout page. Once you have a cart with at least one item you should provide the user with access to this URL. I wanted to make use of the new Pocket Outlook API, so I added a Checkout method that stores the cart's purchase URL in the Body property of a new Pocket Outlook task. The Task class is part of the Microsoft.WindowsMobile.PocketOutlook namespace, so you'll need to add another reference to your project. Once you've done so, creating and adding a new task is simple: Imports Microsoft.WindowsMobile.PocketOutlook ... Private Sub Checkout() ' warn the user that the cart will no longer be accessible If MessageBox.Show(Constants.MESSAGE_CHECKOUT_CONFIRMATION, Constants.CAPTION_CHECKOUT, MessageBoxButtons.OKCancel, MessageBoxIcon.None, MessageBoxDefaultButton.Button1) = Windows.Forms.DialogResult.OK Then ' Outlook Session implements IDisposable... Dim session As New OutlookSession() Try Dim task As New Task() task.Body = String.Format(Constants.MESSAGE_TASK_BODY, _purchaseUrl) task.Subject = String.Format("{0}, {1}", Constants.CAPTION_CHECKOUT, DateTime.Now.ToLocalTime()) task.Complete = False session.Tasks.Items.Add(task) Finally session.Dispose() End Try ' clear the contents of the local cart LocalCartReset() ' reset the user interface OnCartEmpty(Constants.MESSAGE_CHECKOUT_COMPLETED) End If End Sub ' Checkout using Microsoft.WindowsMobile.PocketOutlook; … private void Checkout() { // warn the user that the cart will no longer be accessible if (DialogResult.OK == MessageBox.Show(Constants.MESSAGE_CHECKOUT_CONFIRMATION, Constants.CAPTION_CHECKOUT, MessageBoxButtons.OKCancel, MessageBoxIcon.None, MessageBoxDefaultButton.Button1)) { // Outlook Session implements IDisposable using (OutlookSession session = new OutlookSession()) { Task task = new Task(); task.Body = string.Format(Constants.MESSAGE_TASK_BODY, _purchaseUrl); task.Subject = string.Format("{0}, {1}", Constants.CAPTION_CHECKOUT, DateTime.Now); task.Complete = false; session.Tasks.Items.Add(task); } // clear the contents of the local cart LocalCartReset(); // reset the user interface OnCartEmpty(Constants.MESSAGE_CHECKOUT_COMPLETED); } } I reset the local cart's contents after storing the task because Amazon recommends that a cart no longer be used once a purchase is submitted. I don't know when my user will complete the transaction, so I make sure that my application can't access the cart again by clearing any saved information. It's worth noting that, for performance reasons, Microsoft suggests reusing the OutlookSession object for the life of the application rather than taking the above approach. My code has infrequent need of the task collection, however, so I've chosen to release the session's resources instead. Once you have written the code to create a cart and edit its contents you should go back to your search form and handle the ListView's ItemActivate event. When someone selects an item from the list, your event handler should instantiate a new form to lookup offers for the item. In this case, it is best to do this modally, which means that execution in the search form halts until the OfferForm has been closed. Instantiate a new OfferForm and call ShowDialog: using (OfferForm offerForm = new OfferForm(_asin)) { // the next line of code shows the OfferForm modally offerForm.ShowDialog(); } Using offerForm As OfferForm = New OfferForm(_asin) offerForm.ShowDialog() End Using When the OfferForm is closed the search form will exit the using statement, calling the Dispose method on the OfferForm to release its resources. If you have a Motorola Q or any other Windows Mobile 5.0 Smartphone with a landscape resolution, debugging your code is as simple as connecting the device and then selecting “Windows Mobile 5.0 Smartphone Device” from the dropdown in the Debugging toolbar. When you start debugging, the IDE will deploy both your code and, depending on your configuration, the .NET Compact Framework to the device through ActiveSync. If you don't have the hardware you want to use yet, don't worry. You can use an emulator to create a virtual device for testing your code. The emulator that ships with Visual Studio supports only portrait Smartphone layout, however, so the first thing you'll need is Microsoft's Emulator Image for Windows Mobile 5.0 Smartphone with 320x240 (Landscape) Screen. Next you'll want to download the Q skin plug-in, which you should be able to find at, and you may find the MOTO Q Developer Guide useful for the installation. Your emulator will not have a network connection when first installed but this post will help you get one configured. Select your emulator from the Debug toolbar dropdown and press F5 when you're ready to debug. Figure 4: Amazon Mobile in action on the Q emulator. As we've seen, the .NET Compact Framework makes distributed applications such as this one as easy to write for the mobile device as they are for the desktop. With much of the processing and storage done server-side, the scaled down processor and memory of the phone aren't a problem for most applications. Check back soon – my next version (v2.0) of Amazon Mobile will hopefully feature the device's camera as an ISBN scanner using the framework's new CameraCaptureDialog class.
https://channel9.msdn.com/coding4fun/articles/Amazon-Mobile-Book-Shopping-from-your-Smartphone
CC-MAIN-2017-39
refinedweb
2,422
56.05
to convert byte [] to String? Sarone Thach Ranch Hand Joined: Jun 25, 2003 Posts: 89 posted Nov 20, 2003 19:32:00 0 Hi, I'm stuck, can some please tell me how to convert a byte array into a string . byte [] buffer;// pretend there is real data in this. String sText = (String)buffer; does not work funnily enough. thanks in advanced. Sarone Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24199 34 I like... posted Nov 20, 2003 19:42:00 0 Use String string = new String(buffer); This will convert the bytes into characters according to the platform's default encoding; in the us-english locale, this is UTF-8, so one byte is one character. [Jess in Action] [AskingGoodQuestions] Sarone Thach Ranch Hand Joined: Jun 25, 2003 Posts: 89 posted Nov 20, 2003 20:00:00 0 thanks that does work. if I want an int out of the byte array do i need to convert it to String then to an Integer, then get the int value? is there a better way than this? byte [] buffer = new byte[4];//pretend there is valid data here String sIntValue = new String(buffer); Integer IValue = new Integer(sIntValue); int iValue = IValue.intValue(); Also if there is material that i can read to understand what is a byte and how it relates to char, int and String. I would greatly appricate this. thanks. Wayne L Johnson Ranch Hand Joined: Sep 03, 2003 Posts: 399 posted Nov 21, 2003 11:43:00 0 If you are looking to create an "int" based on the value of four "byte"-s, then you may have to start using the shift and boolean operators. This is getting into more advances stuff, but look at this example: public class TestStuff { public static void main(String args[]) { byte[] bValue = { 1, 2, 3, 4 }; int iValue = ((bValue[0] & 0x000000FF) << 24) | ((bValue[1] & 0x000000FF) << 16) | ((bValue[2] & 0x000000FF) << 8) | ((bValue[3] & 0x000000FF)); System.out.println("iValue: " + iValue); System.out.println(" hex1: " + Integer.toString(iValue, 16)); System.out.println(" hex2: " + Integer.toHexString(iValue)); byte[] bValue2 = { '1', '2', '3', '4' }; String sValue2 = new String(bValue2); System.out.println(" Value: " + Integer.parseInt(sValue2)); } } In the first part you have the numeric values and you want the new int to be the result of the four bytes. The "&" does a logical AND of the bits; the "<<" shifts them; and the "|" ORs the results. The end result is that you get a 1 in the first bit, 2 in the second bit, 3 in the third bit, and 4 in the fourth bit. Which gives you the number 16909060. In the second example you have the String "1234" broken into four bytes and you are converting it, which leads to the number 1234. So the answer to your question is, it depends on what you are trying to do. With your second questions, if you google on "Java primitive" you'll find out a lot of information on Java primitives. "byte" is an 8-bit signed values (-128 to 127). "char" is an unsigned 16-bit value (0 to 65535?). "int" is a signed 32-bit value. String is an object (not a primitive) that happens to be based on Unicode. Wayne L Johnson Ranch Hand Joined: Sep 03, 2003 Posts: 399 posted Nov 21, 2003 12:07:00 0 Better answer here: how to convert byte [] to String? Sarone Thach Ranch Hand Joined: Jun 25, 2003 Posts: 89 posted Nov 22, 2003 18:06:00 0 this answer is very convenient for me, exactly what i want to do. But I'm confused why you left shift the first byte by 24, 2nd by 16, 3rd by 8? I would do it this way and end up with the right number: number = ((buffer[3] & 0x000000FF) << 32) | ((buffer[2] & 0x000000FF) << 16) | ((buffer[1] & 0x000000FF) << 8) | ((buffer[0] & 0x000000FF)); is your 24 supposed to be 32? if it is then we are doing the reverse from one another. I think its some thing to do with where the least significant bit is supposed to be. This makes me wonder how do i determine which end is the least significant bit. Is the least significant bit fixed or changeable?? Well that is my thought, can anyone confirm that or give me another explanation? By the way I'm getting the byte array by another program, written by someone else ages ago. So It could be how they place the data in the byte array. Sarone Thach Ranch Hand Joined: Jun 25, 2003 Posts: 89 posted Nov 22, 2003 18:29:00 0 In regards to the original question, if I had a byte array containing: byte [] bytesArray = {49,50,49,0,0,0}; String sNumber = new String(bytesArray); The numbers are ascii coded numbers. So sNumber is '121'. You think that you will get a null terminated String, but when i set the sNumber to a JTextField ,I can see 121 with alot of squares. The lenght of the string is 6. I would have thought the lenght is supposed to be 3. Can someone explain what is going on here, is there a remedy to emit all the nulls from being displayed in the textfield. Sarone Thach Ranch Hand Joined: Jun 25, 2003 Posts: 89 posted Nov 22, 2003 18:34:00 0 Well i found a remedy, but i still don't understand why it does not elminate all the nulls, when i convert it to the new String. sNumber = new String(byteArray); sNumber = sNumber.substring(0, sNumber.indexOf(0)); sNumber.trim(); [ November 22, 2003: Message edited by: Sarone Thach ] I agree. Here's the link: subject: how to convert byte [] to String? Similar Threads encrypt data in database String into byte buffer FBNS: convert readUTF to NIO equivalent how to convert a Bytearray input stream to String how to convert Byte to the String All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/394963/java/java/convert-byte-String
CC-MAIN-2015-48
refinedweb
1,003
79.4
I have a list like the following (sample): But this list is just a sample for logic. The list should be dynamic. List field may have more than 3 and The list can have a collection of the child (data format like json) I want to convert nested ul-li html tags. I think, I can do that with reflection like following. But I am using to the reflection first time. My code is like following for now. What should I do for that? public static string ConvertToHtml<T>(IEnumerable<T> list) where T : class { StringBuilder html = new StringBuilder(); foreach (var item in list) { Type itemType = item.GetType(); if (itemType.IsClass) { FieldInfo[] fieldInfo = itemType.GetFields(BindingFlags.Public | BindingFlags.Instance); // Field? if (fieldInfo.Any()) { foreach (var field in fieldInfo) { var name = field.Name; var value = field.GetValue(item); } } PropertyInfo[] propertyInfo = itemType.GetProperties(BindingFlags.Public | BindingFlags.Instance); // Property? if (propertyInfo.Any()) { foreach (var property in propertyInfo) { var name = property.Name; var value = property.GetValue(item); } } } } return string.Empty; } Most likely you have chosen a wrong approach to do the task. The reflection is typically used for querying the runtime code structures like types, their fields, properties and methods. The most common use case for that is creating a method for serialization/deserialization of arbitrary types. In your case it does not look like the arbitrary structure - you have quite strict data structure even though it is supports infinite (conceptually) nested levels (like JSON). In other words, you have "tree". To traverse it several algorithms exists: and of course for most of them you can easy find the sample implementation: Implementing Depth First Search into C# using List and Stack But the problem is a bit tricky than you can expect as you first need to understand the concept. The tree traversal algorithms are typically recursive. So in order to do it right you have to get into this concept as well. After that the code for building the list is quite simple: public class Node { string Name { get; set; } IList<Node> Subnodes { get; private set; } } private void BuildList(Node node, StringBuilder sb) { sb.Append("<ul>"); foreach (var n in node.Subnodes) { sb.Append("<li>" + n.Name); BuildList(n, sb); sb.Append("</li>"); } sb.Append("</ul>"); } public string BiuldList(Node root) { var sb = new StringBuilder(); BuildList(root, sb); return sb.ToString(); } EDIT Using the given code it would generate empty <ul></ul> tags inside the <li></li> items who don't have children. So I did a slightly change adding a condition to only create the sub list when there are children. Code: private void BuildList(Node node, StringBuilder sb) { if(node.Subnodes.Count > 0){ sb.Append("<ul>"); foreach (var n in node.Subnodes) { sb.Append("<li>" + n.Name); BuildList(n, sb); sb.Append("</li>"); } sb.Append("</ul>"); } }
https://codedump.io/share/M8nSdQ7tF9FM/1/convert-to-html-nested-ul-li-from-list-in-c
CC-MAIN-2017-13
refinedweb
465
59.3
21 April 2010 16:39 [Source: ICIS news] TORONTO (ICIS news)--Borealis seeks to raise up to €200m ($270m) in a bond issue this month to help finance projects in the Middle East and Europe, the Austria-based polyolefins major said on Wednesday. The seven-year bond – Borealis’ first such corporate bond transaction – would be placed with Austrian private and institutional investors, it said. Austrian banks Erste Group and UniCredit Bank Austria would act as the issue’s lead managers, it added. The bond's tentative coupon rate was 5.5% per year and its expected value date was 30 April, Borealis said. The final coupon rate and issue price would be determined shortly before the subscription period started and would depend on the capital market situation, it added. Erste said it expected Borealis' bond to meet with very strong demand from private investors, partly due to a lack of alternative investing opportunties. Borealis said its petrochemicals joint venture in ?xml:namespace> An additional 2.5m tonne/year expansion was scheduled for 2014, it added. In related news on Wednesday, a Borouge executive earlier told ICIS news in an interview that his firm saw the polyolefins outlook as "very positive in the long term", although there could variations in the short run. Borealis is 64%-owned by Abu Dhabi’s International Petroleum Investment Co (IPIC), with Austria’s oil and petrochemicals major OMV owning the remaining 36%. ($1 = €0.74) For more on Borealis
http://www.icis.com/Articles/2010/04/21/9352674/borealis-to-raise-200m-to-support-mideast-europe-projects.html
CC-MAIN-2015-22
refinedweb
244
53.51
Both the client and the server are internal, each has a certificate signed by the internal CA and the CA certificate. I need the client to authenticate the server's certificate against the CA certificate it has. It also should send its certificate to the server for authentication. The urllib2 manual says that server authentication is not performed. PycURL is a natural alternative but its license is not approved yet. I would also prefer not having to compile the library from the source code but to use RPM instead. I went over a bunch of libraries like requests, httplib2 and don't see what I need. There is also the ssl module but I don't feel like implementing http myself if I don't absolutely must. Python 2.6 on RHEL 5.7 well, the winner (almost) is httplib2 v0.7. Starting from this version it supports SSL certificate authentication. Here's the sample code import httplib2 client = httplib2.Http(ca_certs='ca.crt') client.add_certificate(key='client_private_key.pem', cert='cert_client.pem', domain='') headers, resp = client.request(query) Note the domain='' parameter, it didn't work for me otherwise. PS. unfortunately this simple solution does not work for me as I forgot to mention additional requirement - having RPM installation for RHEL 5.7 & Python 2.6.
https://codedump.io/share/a5dJOHAKQM4K/1/how-to-create-a-dual-authentication-https-client-in-python-without-lgpl-libs
CC-MAIN-2017-04
refinedweb
216
59.19
Date.cpp:Date.cpp: #ifndef DATE_H #define DATE_H class Date { private: int m_nMonth; int m_nDay; int m_nYear; Date() { } // private default constructor public: Date(int nMonth, int nDay, int nYear); void SetDate(int nMonth, int nDay, int nYear); int GetMonth() { return m_nMonth; } int GetDay() { return m_nDay; } int GetYear() { return m_nYear; } }; #endif #include "Date.h" // Date constructor Date::Date(int nMonth, int nDay, int nYear) { SetDate(nMonth, nDay, nYear); } // Date member function void Date::SetDate(int nMonth, int nDay, int nYear) { m_nMonth = nMonth; m_nDay = nDay; m_nYear = n As you can see I junked it up quite a bit moving things around and then realized putting all this into a properly structured c++ project with headers for all the classes would fix a lot of my issues which seemed to be forward declaration related. But the code is here enough that I think a good C++ programmer could visualize the way it should be. There are other problems I'm sure and maybe even some missing code but I can't take this any further until I've done some re-organizing. Open in new window Experts Exchange Solution brought to you by Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial Notice the line 15 in your header Open in new window This is the constructor. Only the default constructor is made private to prevent the situation when anyone tries to create "uninitialized" object in the sense of its functionality. In other words, the date object must be given some date when created. It cannot be postponed. The implementation of the constructor (its body) is located at the .cpp file. The compiler actually compiles the .cpp files. The .h files are included as text by the #include directive. The result is the same as if you copied it to that place. The accessor GetMonth() etc should actually be made inlines to make the code possibly more efficient.
https://www.experts-exchange.com/questions/27628919/Confused-with-C-header-files.html
CC-MAIN-2018-47
refinedweb
336
59.74
The Felgo 3.9.2 update shows you how to create a messenger app like WhatsApp and brings many new features for Felgo Live and the Felgo SDK. The highlights of this update include a new way to clear the Felgo Live project cache, and more control over the app navigation flow. - Create a Messenger App UI like WhatsApp - Clear the Felgo Live Project Cache with the Live Server - Control the App Navigation Flow and Block Page Changes - Updated Facebook Plugin and Firebase Plugin for Android - More Fixes and Improvements Create a Messenger App UI like WhatsApp There's a reason why WhatsApp became the most popular messenger: It looks great and is simple to use. To see how you can create a similar app UI, have a look at the new Messenger App Demo. This demo is a prominent Felgo app template and now provides a user interface just like WhatsApp: Try the Felgo Developer App to browse all the latest demos and explore the Felgo SDK. The full source code of the demos is also available for you on GitHub. Clear the Felgo Live Project Cache with the Live Server Felgo Live with QML Hot Reload is the fastest way to develop your app. Each time you modify your code and hit save, QML Hot Reload instantly applies the code changes on all connected test devices. .gif?width=924&name=hot-reload-3%20(1).gif) The Felgo SDK includes all you need to get started. This includes the Felgo Live Server and the Felgo Live Client for development on Desktop. You can also connect your mobile device by downloading the Developer App for iOS or Android. To clear the local project cache of the mobile app you can use the clear cache option in the settings. Advanced users can now also empty the cache remotely with the Felgo Live Server. Follow these steps to do so: 1) Open the Felgo Live Server settings and enable the "Advanced options" 2) Click the "Clear Cache" button in the Felgo Live Server whenever you want to clear the project cache. 3) The Felgo Live Client resets to the main screen. Press the connect button to reconnect to the Felgo Live Server and continue developing. With this addition you no longer have to navigate to the client app settings each time you need to clear the project cache. Make use of this option in the Felgo Live Server to save time whenever you want to test a fresh state of your project. Control the App Navigation Flow and Block Page Changes Felgo 3.9.2 makes it possible to run custom code when the user attempts to change the active NavigationItem or pop a page from the NavigationStack. You can also block the event and thus stop the action of being performed. This is useful to keep users on the page and inform of local changes that might be lost when the page is removed. The Page::willPop signal triggers whenever the Page is about to be popped from its NavigationStack. If you want to stop the pop from being executed, you can set event.accepted to false. import Felgo 3.0 import QtQuick 2.0 App { id: app Navigation { NavigationItem { icon: IconType.heart title: "Home" NavigationStack { Page { id: page title: "Main Page" AppButton { anchors.centerIn: parent text: "Push Detail Page" onClicked: page.navigationStack.push(subPageComponent) } } } } } Component { id: subPageComponent Page { id: subPage title: "Detail Page" // Signal to disable pop and show dialog instead property bool allowPop: false onWillPop: { if(!allowPop) { event.accepted = false InputDialog.confirm(app, "Pop?", function(ok) { if(ok) { allowPop = true subPage.navigationStack.pop() } }) } } } } } Similarly, the NavigationItem::willChange signal triggers whenever the current menu item is about to be changed. To stop the change from being executed, set event.accepted to false. import Felgo 3.0 App { Navigation { navigationMode: navigationModeTabsAndDrawer NavigationItem { id: firstItem icon: IconType.heart title: "First" // Signal to disable changing the NavigationItem property bool isLocked: true onWillChange: event.accepted = !firstItem.isLocked NavigationStack { Page { id: page title: "First" AppButton { text: "Navigation Locked: " + firstItem.isLocked onClicked: firstItem.isLocked = !firstItem.isLocked anchors.centerIn: parent } } } } NavigationItem { icon: IconType.suno title: "Second" NavigationStack { Page { title: "Second" } } } } } This new feature allows more precise control over the navigation flow and can prevent users from leaving a certain page. Navigation changes that are triggered by the native back button on Android or swipe-back gesture on iOS are covered as well. Updated Facebook Plugin and Firebase Plugin for Android The Facebook Plugin now uses the Facebook SDK v8.2.0 on Android. This lets the Facebook Login use Chrome custom tabs instead of a WebView. This update is required until October 5th, 2021. You can read more about this change on the Facebook Developer portal. To better control the Facebook Plugin's internal behavior, you can now use the new properties Facebook::autoLogAppEventsEnabled and Facebook::advertiserIdCollectionEnabled. The Firebase Plugin can now use the latest version of the Firebase gradle plugin on Android. Using earlier plugin versions is not recommended and can produce warnings at the initialization of Firebase. To update the used Firebase gradle plugin, open your android/build.gradle and increase the version to 4.3.10. Also move the "apply plugin" line from the very bottom to the other plugins. buildscript { dependencies { // update version here: classpath 'com.google.gms:google-services:4.3.10' } } apply plugin: 'com.android.application' // move this from the very bottom up to the android application plugin: apply plugin: 'com.google.gms.google-services' See the Felgo Plugin Demo for example integrations of all plugins with the latest supported versions. More Features and Improvements Felgo 3.9.2 includes many more improvements, for example: - The ImagePicker has new APIs that give more control over the current photo selection. Use the ImagePicker::select() or ImagePicker::deselect() methods to add or remove certain photos from the selection. You can also implement your own selection handling by disabling ImagePicker::autoSelectClickedPhotos. - Disable the swipe back gesture on iOS with the Page::backSwipeEnabled property. Navigating back with the default back button in the navigation bar is still possible. - You can now use the Qt::AA_EnableHighDpiScaling setting on Android without scaling issues. NativeUtils::safeAreaInsets and NativeUtils::statusBarHeight report the correct height for custom code and Felgo components like the navigation.9.1: Publish Qt Apps as Android App Bundle on Google Play and Configure Icon Sizes in the Felgo Theme Release 3.9.0: Improved UX with Overlays & Tooltips and WebAssembly Browser Communication Release 3
https://blog.felgo.com/updates/release-3.9.2-whatsapp-messenger-demo-clear-felgo-live-cache-control-app-navigation
CC-MAIN-2022-33
refinedweb
1,080
56.96
How to call HttpHandler from .cs file asp.net I have created a http handler fro my Jquery ajax call. which is working fine the jquery call is mentioned below $.ajax({ url: "Services/name.ashx", contentType: "text/plain", data: { CustMobile: a, CustName: b, CustEmail: c }, contentType: "application/json; charset=utf-8", success: function (data) { $("#loading").hide(); }, error: function () { $("#loading").hide(); } }); Now my problem is this can I call the name.ashx handler through my code behind. I am using asp.net with C#. I know that it can be called by Response.write("") writing the whole script part. Ans also Register Script of page method. But Is there any way by which I can send request to handler and get back response from the handler using c#. Thanks. Answers You can call any http resource from code behind using HttpWebRequest (System.Net namespace) Sample HttpWebRequest request = (HttpWebRequest) WebRequest.Create("<YourDomain>/Services/name.ashx?CustMobile=ValueOfA&CustName=ValueOfB&CustEmail=ValueOfC"); HttpWebResponse response = (HttpWebResponse)request.GetResponse(); You need an absolute path but you can get your domain or server url from the HttpContext if you dont want to hardcode the domain More Information I found another way of doing this. If you want to access it from the same project it's is very easy. Steps to use it in code behind - Basically it create a class with a class name. - You can create a object of that class. - Then you can call ProcessRequest. - It will execute that handler. Suppose I have created a handler as follows public class HandlerName : IHttpHandler { public void ProcessRequest(HttpContext context) { //some code } public bool IsReusable { get { return false; } } } So you can use it as follows HandlerName obj=new HandlerName(); obj.ProcessRequest(HttpContext); Note that you can get the current context and you need to pass it in the process request. More about HttpContext [1 2] Edit 1 You can also overload the ProcessRequest method in case if you need to do this. public class HandlerName : IHttpHandler { public void ProcessRequest(HttpContext context) { // some code } public void ProcessRequest(HttpContext context, string someString) { // do your coding here } public bool IsReusable { get { return false; } } } If you don't want to override the method the you can pass the values as follow You can add the value in the HttpContext.Current.Items HttpContext.Current.Items["ModuleInfo"] = "Custom Module Info" and get it as follow in the ProcessRequest Method public class HandlerName : IHttpHandler { public void ProcessRequest(HttpContext context) { string contextData = (string)(HttpContext.Current.Items["ModuleInfo"]); } public bool IsReusable { get { return false; } } } Need Your Help In Swift 4, how do I remove a block-based KVO observer? swift key-value-observing swift4If I store an observer like this:
http://www.brokencontrollers.com/faq/11535754.shtml
CC-MAIN-2019-51
refinedweb
445
58.48
Hello all. Before I start asking I will say. I am not asking a question about how to get started with python, I am asking what I need to learn to make a game. I already know the basics of python, so for example I am ok with getting a program to read a file, then to write to a different file, then to go and download a file, so on and so forth. My problem at hand is getting my knowledge into a game; I am not sure what modules I should be learning. Should I learn pyaudiogame, or learn about pygame, What I am trying to find is different options for getting started with making games. Also if any one has any good options, how would I learn about that module or library? Thanks, and best regards Hello all. Well I've never really herd of pyaudiogame, if that's even a thing. I'd recommend pygame, at least that's what minefield re write and a couple of my other test python projects are made with. so, let me get this you use mixer for your sound? and you use pygame keyboard module? But what do you use to make the window, Also to my knowlige the mixer does not work with 3 D audio. thanks, The window is usually created by pygame itself too and the mixer module works just fine for simply playing some sounds. But yes, you're right, the mixer doesn't support 3D audio. If you really want to have 3D audio, you should prefer OpenAL (or more advanced BASS wrappers, since there is a HRTF-based 3D audio add-on already in the works for that). Best Regards. Hijacker thanks. By the way how would i go about learning pygame's sintacks? thanks. also, @2 pyaudiogame is a thing, It lets you create games with only 1 module. however it's not the best, but that's just going of what i have seen with it. If any one has any more info about pyaudiogame that would be grate also! I guess I should have been more clear. It's not made completely with pygame, pygame just handles the simple stuff like window, keyboard handling, etc. For sound I use a combination of sound_lib and amerikranian's awesome sound_pool, other things like menu were used with private modules I have access to. ok, would you be able to shair any open source code or some thing. so I could have an idea how it all works? thanks. HI. Well you don't really learn syntax with pygame, it uses regular python syntax, you just need to read the pygame manual to get familiar with how pygame works. The mixer in pygame is very basic, it plays sounds but never 3d. Like others have said if you want 3d sound there are other options than pygame. You could try using piglet too, it has some unique features for handling gaming things. I wouldn't recommend using pyaudiogame, it has good features, but it is very outdated. It's your choice, but like I said, I would stick with pygame, piglet, or whatever else you find useful. Hth. What has been created in the laws of nature holds true in the laws of magic as well. Where there is light, there is darkness, and where there is life, there is also death. Aerodyne: first of the wizard order thanks, I will learn a bit of pygame thanks for the help, I all ready now how to use the mixer and the key input. going to try to learn about open al You may find my [OpenAL Examples] useful, as they cover 3D positional Audio, Recording, HRTF, Effects, and Filters. If you want screen reader output you could also go with Tolk, though the download last checked had some issues. There's a test pack [here] with the necessary files however. Other libraries besides Tolk would be Pyttsx or Accessible_Output2. Window handling in Pygame is also fairly straight forward, here's a simple example with a main loop: import pygame from pygame import mixer import sys def Example(): #initialize pygame pygame.init() #initialize sound mixer mixer.init() #create display window = pygame.display.set_mode([640,480]) #load sound sound = mixer.Sound('tone5.wav') ) #update window display pygame.display.update() Example() -AudiMesh3D v1.0.0: Accessible 3D Model Viewer
https://forum.audiogames.net/topic/29352/python-from-the-brain-to-the-keyboard/
CC-MAIN-2019-39
refinedweb
733
80.62
Locating Resources Using JNDI (Java Naming and Directory Interface) This material is from Chapter 13, Locating Resources Using JNDI, from the book JavaServer Pages Developer's Handbook (ISBN: 0-672-32438-5) written by Nick Todd and Mark Szolkowski, published by Sams Publishing. Chapter 13: Locating Resources Using JNDI This chapter introduces the concepts surrounding the Java Naming and Directory Interface (JNDI). It discusses the need for naming services, and the purposes for which Web applications use them. Directory services are also described, and by the time you have read this chapter you will be able to distinguish between the two types of service. You will then be introduced to JNDI and its architecture before seeing the specifics of using JNDI in a Web application. In the next chapter (Chapter 14, "Databases and JSP") there are examples of using JNDI to locate JDBC datasources. In Chapter 15, "JSP and EJB Interaction," you will see Web applications that use JNDI to locate Enterprise JavaBeans (EJBs). Note - If you want to run the demonstration applications for this chapter, you will need the chapter13.ear and chapter13.jar files from the book Web site (). The source code for the standalone examples is in the file chapter13.jar in the CommandLineJNDI folder. Naming and Directory Services This part of the chapter discusses naming services, and then directory services. After you have read them you will know what each is, as well as the differences between them. You have probably already come across several such services, such as DNS and NDS. Overview of Naming Services A naming service is quite simply a software application that associates a name with the location of information or services. This means that the software you write can utilize objects without any knowledge of where those objects are located. The objects need not even reside on your local machine, but can live on any machine that is accessible on the network. Another benefit to using a naming service is that for most people it is much easier to remember a logical name rather than a URL or some other object reference. For example, you can associate a logical name with a JDBC datasource. It is much easier to remember a name like CUSTOMER_ADDRESSES than a JDBC URL such as jdbc:mysql://localhost:3306/ADDRESS! This really is not that much different from many examples in day-to-day life. For example, if you want to make a telephone call to somebody whose number you don't know, you normally look that number up in a telephone book. Conversely, you can register your own telephone number with the producers of the telephone book so that other people can look you up. The only tricky part about looking up somebody's number in a telephone book (assuming that they are listed) is making sure that you are looking in the correct telephone book. You have a similar problem to overcome when writing computer software that uses a naming service, in that you can only lookup an object if you search the correct naming service. The term given to this is that you must obtain a context. When you then use a context to retrieve information from a naming service, you are said to perform a lookup. The act of storing the name/resource pair in the naming service in the first place is known as binding. However, when people use the term a binding, they are referring to the association between an object and its name. After an object has been registered by name in the naming service, a client can retrieve a reference to the object by specifying the same name. Figure 13.1 shows the basic architecture involved with using a naming service. The diagram depicts a client that retrieves an object by specifying a name that was previously used to bind an object into the naming service. You can see that the naming service associates a name with an object (a binding). Figure 13.1 The architecture of a naming service. You have just read that a context is a set of name/resource pairs. A naming system contains a set of related contexts that have the same naming convention. It is this naming system that provides the naming service to clients. The set of names in a naming system is known as a namespace. Several common naming services are The CORBA Common Object Services (COS) Naming Service provides a hierarchical directory in which you can store object references, in a way that is comparable to directories in file systems. The COS Naming Service is widely used in Java-based distributed environments as a way of storing information about the location of remote objects. You can find further information on the COS Naming Service at. Domain Name Service (DNS) is the Internet naming service that identifies hosts on a network by performing a translation between host names and Internet addresses. All Internet clients use DNS, for example, Web browsers. More information on DNS can be found online at. Network Information Service (NIS) from Sun Microsystems provides system-wide information about users, files, printers, machines, and networks. You will normally encounter NIS when working with systems that use the Solaris operating system. There are, however, other systems such as Linux and other Unix operating systems that support NIS. Novell Directory Services (NDS) provides information about network services such as printers and files. NDS is mainly found in environments where Novell provides the main networking software. File systems in general. File and directory objects are bound to names and are generally stored in a hierarchical form. The RMI registry is a simple server-side bootstrap naming facility that enables remote clients to obtain a reference to a remote object. One example of a binding is a file that is bound to its filename. Another is an IP address that is bound to a hostname in DNS or WINS. At the very least, a naming service must provide the capability to bind objects to a name and support the retrieval of those objects by name. However, the way in which the naming service can store the objects can differ. For example, the actual resource might be stored inside or outside the naming service. A naming service that does not store the resource directly is DNS. DNS simply maps a logical name such as to an IP address (165.193.123.117), but does not store the remote host itself. This situation also arises when the object that is associated with the name is large, and you do not want to store it in the naming service. In this case you can store a reference to the object instead. An example of a naming service that can store objects internally is the file system provided by Microsoft Windows NT. For efficiency, NTFS stores files that are smaller than about 1KB in the Master File Table (MFT). Anything larger than this is stored externally. It is possible to overwrite an existing binding by specifying the same name, but a different resource. This is known as rebinding. In the previous telephone number example, this is analogous to moving and being allocated a new number by the telephone company. Other things that you can do with a naming service include renaming a bound object, and unbinding it completely so that it is no longer available to clients.JNDI also supports the notion of federated namespaces. This is when a resource is identified by a name that spans multiple naming systems. For example, consider the name myhost.somedomain.com/documents/manual.txt. The first part of this name (myhost.somedomain.com) is a host name that must be resolved via DNS, and the rest of the name (documents/manual.txt) is a file system name. For details of how this works, see the JNDI tutorial at. Overview of Directory Services A directory service is similar to a naming service in that it enables you to associate names with objects. However, a major distinction between the two is that a directory service enables you to store information about the object (these pieces of information are known as attributes), in addition to providing mechanisms for searching for objects based on that information. For example, if you need to print out a color photograph, you could use a directory service to find the locations of color printers in your office building. See Figure 13.2 for a diagram of a generic directory service. Figure 13.2 The architecture of a directory naming service. Going back to the real-world telephone book example, using a directory service is similar to using the Yellow Pages phone directory. Instead of simply listing the name of a business along with a contact telephone number, the Yellow Pages directory often includes advertisements that contain additional information that add value to the entry. For example, a business might list location maps, professional qualifications, and even affiliated organizations. The fact that a directory service enables you to search for objects based on the values of these attributes means that you can, for example, search for all plumbers who operate a 24-hour emergency service in your neighborhood. Page 1 of 7 This article was originally published on June 2, 2003
https://www.developer.com/java/data/article.php/2215571/Locating-Resources-Using-JNDI-Java-Naming-and-Directory-Interface.htm
CC-MAIN-2020-40
refinedweb
1,543
54.12
04-30-2017 11:02 PM Hi team, I am doing GeoMedia programming on GeoMedia 16. From the GeoMedia Object Reference, all unit of measure objects are comming from the dll PCSS_tlb but I can't find this in my system (GeoMedia 16 installed). I believe the document is out of date. If someone knows, please give me the latest information on this. Kind regards, Solved! Go to Solution. 05-01-2017 01:06 PM Hi Bob, this was changed back in the version 2015 (or even 2014). If you look into GeoMedia Object Reference, you can see a new section named "Coordinate System .NET API". If you expand it and click any object, CoordSystem for example, you can see that it is now in the Intergraph.CoordSystem namespace. Basically it means that you should remove the old reference to PCSS.tlb (or PCSS_tlb.dll) and replace it by the reference to Intergraph.CoordSystems namespace, which you find in the .NET section of Add Reference dialog. Then if you add the following line into the top of your source code (for C#, it will be different for VB.NET): using PCSS = Intergraph.CoordSystems; then your code should compile almost without changes. The exception is if you work with MapView control. In this case, you will also need the namespace Intergraph.CoordSystems.Interop and sometimes cast between the same objects from the other library. Pavel
https://community.hexagongeospatial.com/t5/Developer-Discussions/Dll-PCSS-tlb-dll-in-GeoMedia-16/td-p/13140
CC-MAIN-2019-47
refinedweb
234
68.77
NAME Sub::Lexical - implements lexically scoped subroutines)); DESCRIPTION Using this module will give your code the illusion of having lexically scoped subroutines. This is because where ever a sub is lexically declared it will really just turn into brave. Another advantage is you can use them as truly private methods in packages, thereby realising the dream of true encapsulation so many have dreamed of. Your code will be automatically parsed on include (this is a filter module after all) so the methods listed below are provided so you can filter your own code manually. METHODS - new Typical constructor will return a Sub::Lexical object. Must be called as a class method at the moment - subs_found Returns an ArOH of the form [ { 'code' => '{ ... }', 'extra' => '() : attrib', 'name' => 'foo' } ] - filter_code It takes one argument which is the code to be filtered and returns a copy of that code filtered e.g my $f = Sub::Lexical->new(); $filtered = $f->filter_code($code); CAVEATS If you have a sub called foo it will clash with any variable called LEXSUB_foo within the same scope, as all subs have 'LEXSUB_' appended to them so as to avoid namespace clashes with other variables (any suggestions for a cleaner workaround are very much welcome). SEE ALSO perlsub, Regex::Common, Filter::Simple THANKS Damian Conway and PerlMonks for giving me the skills and resources to write this AUTHOR by Dan Brook <broquaint@hotmail.com> Copyright (c) 2002, Dan Brook. All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the same terms as Perl itself.
https://metacpan.org/pod/Sub::Lexical
CC-MAIN-2015-32
refinedweb
261
51.99
Using NetBeans to Develop a JavaFX Desktop Application Creating the DeltaTitleRect.fx ClassStarting with this section, you will develop the application's graphical components. Every node that represents a graphical component is a JavaFX file, and at this point you know how to create such a file. Per the previous example for creating the DeltaTitleRect.fx file, follow these steps (use this approach to create all the files of the DeltaCars application): - In the Projects view, expand DeltaCars -> Source Packages and select the deltacars node. - Right-click on the deltacars node and select the New -> Empty JavaFX File option (from the same contextual menu, you can choose to create a new JavaFX Class or a JavaFX Stage). - In the New Empty JavaFX File wizard, type DeltaTitleRect as the file name (don't type the .fx extension). - Press the Finish button. Now the DeltaTitleRect.fx file is located under the deltacars node. Open DeltaTitleRect.fx in the editor, where you will transform it into a JavaFX Node. Because a graphical component is just an element in a scene graph, it has to extend the CustomNode class. You accomplish this task from the Applications Palette by dragging a CustomNode item on the editor surface under the comment "place your code here." You should see something like Figure 3. Figure 3: Creating a JavaFX Node Author's Note: All the JavaFX classes developed in the section to follow should extend the CustomNode class and should be declared public. The next step is implementing the graphical design of the DeltaTitleRect.fx class. This class should map some text (Text node) and two colored rectangles (Rectangle node). As you can see in Figure 4if you use a little imaginationbefore the Text is set in its final place, it is transformed with the scale and translate transformations. Figure 4: Scale and Translate a Text Node To accomplish these transformations, you will use a keyframe animation. This kind of animation controls changes in properties such as scale, translate factor, or opacity over time by defining the property's values at key times and interpolating the values in between. To define a keyframe animation, you need to define the following objects: - Timeline: This object represents a list of key frames that are controlled through a start function (play) and a stop function (stop). In addition, the animation can be repeated using the repeatCount attribute. - KeyFrame: This object represents a set of end states of various object values at a certain time instant relative to the start of the animation. It uses these values along with interpolation specifications to calculate the in-between values relative to the previous frame. - KeyValue: This object represents a target, which is a reference to an object to animate, a value for the object to take on at the KeyFrame time, and an Interpolator. The supported interpolations types are: - Interpolator.LINEAR (The default): Uses simple linear interpolation. - Interpolator.EASEOUT: Starts changing the value slowly after the KeyFrame. - Interpolator.EASIN: Slows the rate of change as the KeyFrame is reached. - Interpolator.EASEBOTH: Smoothes the rate of change through the KeyFrame. Now, that you know what a keyframe animation is, follow these steps to implement one in DeltaTitleRect.fx: - Expand the Animation Palette. - Drag a Timeline item right below the create function definition. - Drag a Values item right below the canSkip attribute of the Timeline. - Modify the generated code (the left side of Figure 5 ) to conform to the right side of Figure 5. Figure 5: Modifying a Generated Timeline Now that you have a keyframe animation, it is time to define the graphical components for the DeltaTitleRect.fx class. Both types of components can be added from the Basic Shapes Palette by dragging two Rectangle items and two Text items inside the Group content. After dropping them inside the default Group, you should adjust them to conform to the code in Listing 1 (to add Scale and Translate nodes, use the Transformations Palette). Creating the DeltaCarLogo.fx ClassLike any serious company, Delta Automotive needs a logo. Their logo features a car image named logoCar.bmp, which is stored in the deltacars/img folder (you can find this image in the source code download for this article). The image will be displayed in the upper-left corner of the Scene through a fade transition. The javafx.animation.transition package contains JavaFXs transition functionality, which includes path, parallel, rotate, scale, fade, and other transitions. Before applying the fade transition, you need to define the ImageView node for logoCar.bmp. For this, expand the Basic Shapes Palette and drag-and-drop the Image icon right below the create function definition. In the generated code, modify the URL to {__DIR__}img/logoCar.bmp and then place this code under a JavaFX variable as follows: var logoCar = ImageView { image: Image { url: "{__DIR__}img/logoCar.bmp" } }It is time to implement the fade transition (i.e., create a fade effect animation that spans the duration of the effect). Do this by updating the opacity variable of the node at regular intervals. Unfortunately, NetBeans Palette doesn't offer a set of transitions, so you have to insert one manually, like this: FadeTransition { duration: 20s node: logoCar fromValue: 0.0 toValue: 1.0 repeatCount:1 autoReverse: true }.play();Finally, you add the logoCar Node in the default Group content. The final code of DeltaCarLogo.fx should be: package deltacars; import javafx.scene.CustomNode; import javafx.scene.Group; import javafx.scene.Node; import javafx.scene.image.ImageView; import javafx.scene.image.Image; import javafx.animation.transition.FadeTransition; // place your code here public class DeltaCarLogo extends CustomNode { public override function create(): Node { var logoCar = ImageView { image: Image { url: "{__DIR__}img/logoCar.bmp" } } FadeTransition { duration: 20s node: logoCar fromValue: 0.0 toValue: 1.0 repeatCount:1 autoReverse: true }.play(); return Group { content: [logoCar] }; } } Creating the DeltaMenu.fx ClassAs you can see from Figure 6, the DeltaMenu.fx class should provide a nice menu for your application. It's a little hard to explain its design in words, but the main idea is to display a menu made up of: - A set of six white rectangles using the fade transition (this time implemented for an array of Nodes) - A set of six Text nodes over these rectangles - A set of six "wheel" images, which use the path transition with a random cubic curve (the "wheel" image is named wheel.png and is stored in deltacars/img foldersee Figure 6.) Figure 6: The DeltaMenu.fx Menu In addition, when the cursor touches the rectangle, the corresponding "wheel" rotates 180 degrees once using a rotate transition. Start by creating the DeltaMenu.fx class and then declaring the following set of variables: var rectArray : Rectangle[]; //an array of Rectangle var wheelArray : ImageView[]; //an array of ImageView var pathArray : Path[]; //an array of Path var tranArray : PathTransition[]; //an array of PathTransition var ycoord=[180,220,260,300,340,380]; //an array of Integer Next, add an array of six Rectangle nodes and implement mouse-click and mouse-enter events for every Rectangle. While you can add Rectangle nodes from the Basic Shapes Palette, you can add the mouse events from the Actions Palette using drag-and-drop. Encapsulating everything in a for statement, you should get something like this: ... for (i in [0..5]) { insert Rectangle { x: 10, y: bind ycoord[i] - 10, width: 140, height: 20, arcWidth: 10, arcHeight: 10 fill: Color.WHITE //on mouse clicked onMouseClicked: function( e: MouseEvent ):Void { println("Clicked on: {wheelArray[i]} "); } //on mouse enter onMouseEntered: function( e: MouseEvent ):Void { var rotTransition = RotateTransition { duration: 1s node: wheelArray[i] byAngle: 180 repeatCount:1 autoReverse: false } rotTransition.play(); } } into rectArray; } ...Next, add a fade transition for the above Rectangle array. You can insert one manually, like this: ... for (i in [0..5]) { var fadTransition = FadeTransition { duration: 10s fromValue: 0.3 toValue: 1.0 node: rectArray[i] repeatCount:1 autoReverse: true } fadTransition.play(); } ...Now, define the wheelArray array elements. This array contains six ImageView nodes, as follows (use the Basic Shapes Palette to insert the ImageView node): ... for (i in [0..5]) { insert ImageView { image: Image { url: "{__DIR__}img/wheel.png" } } into wheelArray; } ...Next, you have to define six Paths and store them in the pathArray array. Unfortunately, you can't add a Path through the Palette, so you have to insert them manually. After that, manually define six PathTransition elements (one for every Path) and "play" them: ... for (i in [0..5]) { insert Path { elements: [ MoveTo { x: 700 y: rnd.nextInt( 450 ) }, CubicCurveTo { controlX1: rnd.nextInt( 500 ) controlY1: rnd.nextInt( 500 ) controlX2: rnd.nextInt( 500 ) controlY2: rnd.nextInt( 500 ) x: 40 y: bind ycoord[i] } ] } into pathArray; } for (i in [0..5]) { insert PathTransition { duration: 20s node: wheelArray[i] path: AnimationPath.createFromPath(pathArray[i]) orientation: OrientationType.ORTHOGONAL_TO_TANGENT repeatCount:1 autoReverse: false } into tranArray; } for (trans in tranArray) { trans.play(); } ... Finally, populate the default Group with the defined rectangles and "wheels," and add some text over the rectangles. Assembling everything and adding the corresponding imports, you should have the DeltaMenu.fx class in Listing 2. Page 3 of 5
https://www.developer.com/java/other/article.php/10936_3827981_3/Using-NetBeans-to-Develop-a-JavaFX-Desktop-Application.htm
CC-MAIN-2017-51
refinedweb
1,510
58.48
#include <ieee1284.h> cc files... -lieee1284 The libieee1284 library is a library for accessing parallel port devices. The model presented to the user is fairly abstract: a list of parallel ports with arbitrary names, with functions to access them in various ways ranging from bit operations to block data transfer in one of the IEEE 1284 sanctioned protocols. Although the library resides in user space the speed penalty may not be as bad as you initially think, since the operating system may well provide assistance with block data transfer operations; in fact, the operating system may even use hardware assistance to get the job done. So, using libieee1284, ECP transfers using DMA are possible. The normal sequence of events will be that the application Usually a port needs to be claimed before it can be used. This is to prevent multiple drivers from trampling on each other if they both want to use the same port. The exception to this rule is the collection of IEEE 1284 Device IDs, which has an implicit open-claim-release-close sequence. The reason for this is that it may be possible to collect a Device ID from the operating system, without bothering the device with it. When ieee1284_find_ports is first called, the library will look for a configuration file, /etc/ieee1284.conf. Comments begin with a '#' character and extend to the end of the line. Everything else is freely-formatted tokens. A non-quoted (or double-quoted) backslash character '\' preserves the literal value of the next character, and single and double quotes may be used for preserving white-space. Braces and equals signs are recognised as tokens, unless quoted or escaped. The only configuration instruction that is currently recognised is "disallow method ppdev", for preventing the use of the Linux ppdev driver. You can enable debugging output from the library by setting the environment variable LIBIEEE1284_DEBUG to any value. /etc/ieee1284.conf parport(3), parport_list(3), ieee1284_find_ports(3), ieee1284_free_ports(3), ieee1284_get_deviceid(3), ieee1284_open(3), ieee1284_close(3), ieee1284_claim(3), ieee1284_release(3), ieee1284_data(3), ieee1284_status(3), ieee1284_control(3), ieee1284_negotiation(3), ieee1284_ecp_fwd_to_rev(3), ieee1284_transfer(3), ieee1284_get_irq_fd(3), ieee1284_set_timeout(3) Tim Waugh <twaugh@redhat.com>
http://www.makelinux.net/man/3/L/libieee1284
CC-MAIN-2014-52
refinedweb
358
51.99
Disclaimer If you’re looking for a flame post - this is not one of them. I love both languages and I’ll simply compare some of their features and possible uses. Prelude. Installation Linux/Unix installation If you’re using a Linux distribution or some other Unix derivative such as *BSD or Solaris you’ll probably be able to install Ruby and Python through the operating system’s software management system. For instance on Debian Linux systems(Ubuntu is a popular Debian derivative) you can use apt to install them. Run the following commands as root or with sudo: $ apt-get install ruby $ apt-get install python On Red Hat based distros like Fedora, CentOS, etc you can use yum instead: $ yum install ruby $ yum install python You should keep in mind the fact that both Ruby and Python have two version that are commonly used at the moment. Ruby’s current version is 1.9.2 and Python’s is 3.2. For various reasons(like backward compatibility for instance), however, the current versions are not widely deployed yet(especially Python 3). In most Linux distributions the package ruby will actually be Ruby 1.8.x and the package python will be Python 2.7.x. If your distribution is one of those - look for packages named ruby19(or similar) and python3: $ apt-get install ruby19 $ apt-get install python3 or on a Red Hat system: $ yum install ruby19 $ yum install python3 Using the distribution package management system is a simple solution, but in the case of Ruby it might not be best one. Most Ruby hackers favour a powerful bash script called RVM(Ruby Version Manager) that allows you to install several different version(or flavours of Ruby) and switch easily between them. Please refer to the official RVM documentation for installation and usage instructions. Windows installation Installing Ruby on Windows used to be a pretty hard task, but this is no longer the case now thanks to the RubyInstaller for Windows. This is a self-contained Windows-based installer that includes the Ruby language, an execution environment, important documentation, and more. It has two editions one for the older 1.8.x Ruby branch and one for the current 1.9.x. Python has several installation options for Windows - the most obvious being the official Python installer for Windows. ActiveState’s ActivePython is another popular option packed with more features, but you should keep in mind that although the Community Edition is free ActivePython is not an open-source project. Personally I prefer ActivePython. Other prebuilt Python binaries for Windows are also available, but are not commonly used. OS X installation Ruby is generally preinstalled on OS X, but OS X users can also install it via homebrew or RVM(as mentioned in the Linux section). The is an official Python package for OS X available. Most users will probably prefer using homebrew, however. Syntax & code structure Ruby makes heavy use of braces and keywords(like do/then/end) to delimit blocks of code. Python relies simply on indentation. def fact(n): return reduce(lambda x, y: x * y, range(1, n + 1)) Same thing in Ruby: def fact(n) (1..n).reduce(:*) end I personally prefer the Python approach since it enforces the code semantics based on the code structure alone without imposing special syntax. As a side node you might take under consideration that the Ruby method definition doesn’t have an explicit return value. The value of the last expression in the method’s body becomes automatically the method’s return value. Lisp developers will find this familiar. Java and C# developers will probably find it a bit confusing. There is a return in Ruby, though, it’s just rarely used. Both languages have support for nested function definitions. Both languages have support for “top-level” functions - that live(or seem to live) outside classes and modules(something not possible in Java for instance). This makes them good for general purpose scripting. While I would still prefer to do my system administration with shell and Perl scripts - Ruby and Python offer a solid alternative. Python has a richer system administration library so I’d prefer it over Ruby for such tasks. Ruby has a lot of crust(“heritage”) from Perl - like a myriad of special variables that are now more or less deprecated. It also has much syntactic sugar - for instance do/end is commonly replaced by {} for blocks that are only one line long, there is special syntax for hashtables, whose keys are symbols, etc. Since special symbols(non alphanumeric) are allowed in Ruby identifiers Ruby uses them to impose some naming conventions to make the source code a bit more readable in certain scenarios - for instance predicate methods(those that return true or false) have names that end with ?(usually) like even?, odd?, prime?, etc. Methods that mutate the object on which they were invoked generally have the ! suffix - sort!, map!, etc. I find this a nice decision. In Ruby you generally have many ways to achieve the same result: ruby-1.9.2-p0 > 1.even? => false ruby-1.9.2-p0 > arr = [1, 2, 3] => [1, 2, 3] ruby-1.9.2-p0 > arr.map { |x| x * 2 } => [2, 4, 6] ruby-1.9.2-p0 > arr => [1, 2, 3] ruby-1.9.2-p0 > arr.map! { |x| x * 2 } => [2, 4, 6] ruby-1.9.2-p0 > arr => [2, 4, 6] ruby-1.9.2-p0 > (1..5).reduce(:*) => 120 ruby-1.9.2-p0 > (1..5).reduce { |x, y| x * y } => 120 ruby-1.9.2-p0 > (1..5).reduce do |x, y| ruby-1.9.2-p0 > x * y ruby-1.9.2-p0 ?> end => 120 No such things in Python, however. Python’s philosophy is one of simplicity - no excessive syntax sugar, one true way of doing things. Both languages have powerful features for organising code in libraries. I would not go into any details on the subject here, but I’ll share with you the fact that I like Python’s more. Both languages come with a REPL in which you can do some exploratory programming. Ruby’s REPL(irb) allows you to do TAB smart completion(amongst other things) by default. To get TAB completion in the Python REPL you’d have to execute this bit of code first: >>> import readline, rlcompleter >>> readline.parse_and_bind("tab: complete") Alternative you can just stick this code snippet in the ~/.pythonrc.py file(create it if it doesn’t exist). If you are using Windows adjust accordingly(you will have to figure out where pythonrc.py is located there). Ruby does not have statements - only expressions. This basically means that everything(objects, method calls) evaluate to some value(though the value might not be helpful always). In Python there are some statements such as assignment and if. One thing I dislike about the Python REPL is that it doesn’t print None values. Compare this bit of Python code: >>> print("this is a test") this is a test to this Ruby snippet: ruby-1.9.2-p0 > puts "this is a test" this is a test => nil In the Python version we see only the side-effect(the printing), but not the return value. Python also ships with a minimalistic IDE called IDLE. If you don’t have it by default after a python installation on Linux probably you’re vendor decided to package IDLE as a separate package. IDLE offers basic features like syntax highlighting, code completion and integration with a debugger. It’s a good tool for exploratory programming, but I advise you to pick another tool for serious development. Naming conventions The naming conventions for both Ruby and Python are mostly the same which is good if you’re using them both on a daily basis - less room for confusion. - variable and method names consisting of more then one word are written in lowercase with underscores separating the individual words like_this. - Class names start with capital letter and follow the camel case naming convention LikeThis. Some of Python’s core classes, however, violate this convention. - Constants are generally written in all caps with underscores separating the individual words LIKE_THIS. As I mentioned earlier it’s customary to add ? as a suffix to predicate methods and ! to mutator methods in Ruby. This convention is not always followed unfortunately, even in Ruby’s standard library. OOP support Both Ruby and Python are famous members of the family of object oriented languages. Unlike languages such as Java and C#, however, Ruby and Python are pure OOP languages. There is no distinction between primitive types(such as numbers, characters and booleans) and reference types(classes). Everything in them is an object and in that sense it’s an instance of some class: >>> type(1) <type 'int'> >>> type("string") <type 'str'> >>> type(20.07) <type 'float'> >>> type([1, 2, 3]) <type 'list'> >>> type((1, 2, 3)) <type 'tuple'> And in Ruby: ruby-1.9.2-p0 > 10.class => Fixnum ruby-1.9.2-p0 > "string".class => String ruby-1.9.2-p0 > [].class => Array As you can see core Ruby classes tend to have a bit more standard and descriptive names than their Python counterparts. You can also notice that in Python for some task we use built-in functions like type, instead of method calls. The built-in function will eventually result in a method invocation(like “string”.class in the case of type(“string”)), but I find this irregularity in the syntax a bit irritating. Method invocation is more flexible in Ruby - you can omit braces in some scenarios. This is handy when you’re designing a DSL or you’re trying to implement the uniform access patterns(data should be accessed through fields and methods in the same manner(read this as with no braces in method invocations)). On the other hand Python’s uniform syntax makes it easier to spot method invocations. Both languages don’t have operators - just methods. Ruby’s OO support seems to be a bit more mature and polished(at least to me), but Python’s has some touches as well. I particularly like the explicit self references that are required when you try to access class members. I’m not too found of the use of special sigils in Ruby to mark instance(@) and class members(@@). Though they make them visually distinctive I think we could have lived without them - I’m generally not a fan of non-uniform syntax rules(and you guessed it - my favourite language is Lisp). Ruby’s metaprogramming model arguably gives it an edge in the OOP department, but I won’t be discussing the metaprogramming issues here since they are quite lengthy. Functional programming support Functional programming has been on the rise lately and it’s useful to examine what kind of support both languages provide for it. Both have support for lambda functions and respectively - higher-order functions(functions that accept functions as parameters). Ruby has code blocks, Python has list comprehensions(generally favoured over higher-order functions). Both languages lack in their standard libs the immutable data structures that generally are the code of most functional programming languages. Here’s a few example related to filtering a sequence based on some predicate: >>> filter(lambda x: x % 2 == 0, range(1, 11)) [2, 4, 6, 8, 10] >>> [x for x in range(1, 11) if x % 2 == 0] [2, 4, 6, 8, 10] Ruby: ruby-1.9.2-p0 > (1..10).select { |x| x.even? } => [2, 4, 6, 8, 10] ruby-1.9.2-p0 > (1..10).select &:even? => [2, 4, 6, 8, 10] Ruby’s functional programming support seems to be better to me, but this is of course subjective. GUI programming Python has tkinker by default - a wrapper around the Tk library(which sucks in my humble opinion). Ruby doesn’t have even this much. Both have binding for the popular GUI toolkits such wxwidgets, GTK, QT. From my experimentation with them I can tell you that you’ll be much better off with Python in that department. It’s not wonder that many GTK+ applications these days are implemented in Python. Most Ruby bindings projects seem to be in a state of disarray, abandonment - I guess we have to thank Rails for that. Most people think of Rails as the only use of Ruby which is sad… Ruby devs shouldn’t despair however. JRuby(Ruby’s port to the JVM) has an excellent support for the superb Swing GUI framework and MacRuby has great support for building Cocoa Apps for OS X. I personally think that JRuby is the best Ruby distribution out there, but that’s the point of another post entirely. 3rd party library availability and installation It’s not secret that part of Python’s philosophy is that it comes with batteries included - meaning that it’s standard library is vast and covers a lot of common tasks. In case you can’t find what you’re looking for in it you’re left with an number of third party libraries for Python whose number can only be described by the word epic and that cover every task conceivable. PyPi maintains an up-to-date list of Python packages. You have several options in the department of python package management. If you’re using ActivePython you can use the excellent PyPM tool, which provides quick installation of thousands of packages for many Python versions and platforms for ActivePython distributions. EasyInstall is another popular solution that works with the standard Python distribution. Like PyPM it allows you easily search for and install Python packages from the PyPI that are bundled in Python’s standard egg format(Jave developers might think of eggs as jars). EasyInstall has splendid documentation so I won’t go into any details here. pip is a replacement for easy_install. It uses mostly the same techniques for finding packages, so packages that were made easy_installable should be pip-installable as well. pip is meant to improve on EasyInstall.. Linux users will also find a great selection of Python libraries prepackaged for use with the distribution’s package manager. Ruby has an application that is more or less equivalent to EasyInstall called RubyGems(gems are the standard way to distribute Ruby libraries). Linux users can of course install Ruby libraries with the distribution’s package manager as well. RubyGems has the following - Easy to use building and publishing of gem packages Using RubyGems, you can: - download and install Ruby libraries easily - not worry about libraries A and B depending on different versions of library C - easily remove libraries you no longer use - have power and control over your Ruby platform! A reader pointed out that about 23000 packages are available for installation through RubyGems and 15000 through PyPI. This, however, cannot be considered as a certain sign that there are more libraries available for Ruby than for Python. Although tools like EasyInstall and RubyGems are easy to use and quite handy, I as a long-time Linux users dislike them a bit, since they circumvent the distributions native package handling. Unfortunately package maintainers cannot find the time to package every Python and Ruby library available so I guess EasyInstall and RubyGems won’t be going anywhere soon and of course we have to consider Windows users for whom such applications are of great value given the lack of unified package management on Windows. One thing to keep in mind about installing eggs and gems is that some of them are implemented in C(usually for performance reasons) and are locally built prior to their installation - an operation bound to fail if you don’t have a C compiler installed. Misc In terms of performance of the default interpreters CPython and MRI Ruby Python is the clear winner. One should note, however, is that there are many how quality implementation of Ruby and Python for different platforms where the performance situation differs wildly. For instance Jython is much slower that JRuby. With the addition of invoke_dynamic in JDK7(basically bytecode level support for dynamic method dispatch) the performance of JRuby and Jython could potentially be improved greatly. In terms of overall usage, market share, job offers and sheer size of the community and available libraries Python is ahead of Ruby as well. One of the main supporters of Python is after all none other than the mighty Google. Ruby has also the unfortunate luck to be living in the shadow of a single application written in Ruby - Ruby on Rails, that is arguably more popular than the language itself. Tooling for dynamic languages is currently not as advance as the one for static languages. People often joke that Python(Ruby) > Java(C#), but Python + any python IDE < Java + IntelliJ IDEA(Eclipse/NetBeans). Python and Ruby IDEs seem to be mostly on par currently. I do all of my Ruby and Python coding in Emacs, but I do like RubyMine and PyCharm. Since they are often used for the creation of webapps one should consider the deployment issue. Most web hosting companies provide cheap Python hosting, but very few companies provide Ruby hosting. The Python 3 problem Python 3 was a great undertaking that improved on a lot of aspects of the language(for example Unicode support) and the standard library. To do this it dropped backward compatibility, an act that slowed it’s adoption immensely. Three years have passed since it’s release and still most hosting providers, Linux distros and Python projects hold on to the older 2.7.x Python branch. This is a bit of tragedy since Python 3 is truly a great improvement over Python 2. I mention this because most recent books are written with Python 3 in mind, but if you land a Python jobs somewhere chances are you’ll have to use Python 2.x.x for the foreseeable future. Epilogue Ruby and Python are two beautifully engineered languages capable of just about everything. If you don’t know any of them you’ll do well to learn at least one. If you know only one it might not be a terrible idea to learn the other. I haven’t touched on many language features(like Python generators and Ruby mixins), but to be honest I’m just tired of typing. One good place to start your journey to Python is “Dive into Python 3”. For Ruby beginners I’d recommend a copy of “The Ruby Programming Language”. P.S. If you like exploring different programming languages and you’re currently shopping for ideas on the subject of which language to learn next you might find my recent article “Programming languages worth learning” interesting as well.
https://batsov.com/articles/2011/05/03/ruby-or-python/
CC-MAIN-2020-24
refinedweb
3,135
62.58
I have a sketch for a Binary Clock which I have modified. The original sketch used the RTClib.h Library, for the DS1307 I have modified the sketch to use the DS1302RTC.h Library. When I compile the sketch I get the message "‘RTC_DS1302’ does not name a type. Here is the relevant piece of code: #include <Wire.h> #include <DS1302RTC.h> RTC_DS1302 RTC; // this causes an error. int datapin = 2; int clockpin = 3; int latchpin = 4; int datapin2 = 8; int clockpin2 = 9; int latchpin2= 10; void setup() { Serial.begin(57600); Wire.begin(); RTC.begin(); Can I just change the libraries and is this the correct way to cfreate an instancde?
https://forum.arduino.cc/t/binety-clock/525062
CC-MAIN-2022-21
refinedweb
111
76.82
Agenda See also: IRC log <fsasaki> looking at <fsasaki> summary issue 74 - 89 <fsasaki> summary issue 106 - 123 <tadej> ACTION: felix to review draft for XLIFF mapping [recorded in] <trackbot> Created ACTION-504 - Review draft for XLIFF mapping [on Felix Sasaki - due 2013-05-14]. <tadej_> ACTION: dF to correct XLIFF mapping (change prefix its to itsx for certain attributes) [recorded in] <trackbot> Created ACTION-505 - Correct XLIFF mapping (change prefix its to itsx for certain attributes) [on David Filip - due 2013-05-14]. <tadej_> ACTION: Felix to set up the URI placeholder for ITS2-XLIFF mapping () [recorded in] <trackbot> Created ACTION-506 - Set up the URI placeholder for ITS2-XLIFF mapping () [on Felix Sasaki - due 2013-05-14]. <Arle> ACTION: Dave to look at the XLIFF 2.0 change tracking module for provenance [recorded in] <trackbot> Error finding 'Dave'. You can review and register nicknames at <>. <Arle> ACTION: dlewis6 to look at the XLIFF 2.0 change tracking module for provenance [recorded in] <trackbot> Created ACTION-507 - Look at the XLIFF 2.0 change tracking module for provenance [on David Lewis - due 2013-05-14]. <Arle> ACTION: Yves to examine XLIFF resource data module with respect to external resouce. [recorded in] <trackbot> Created ACTION-508 - Examine XLIFF resource data module with respect to external resouce. [on Yves Savourel - due 2013-05-14]. <Arle> ACTION: davidF to make notes on extraction/merging behavior for target pointer [recorded in] <trackbot> Error finding 'davidF'. You can review and register nicknames at <>. <Arle> ACTION: david to make notes on extraction/merging behavior for target pointer [recorded in] <trackbot> 'david' is an ambiguous username. Please try a different identifier, such as family name or username (e.g., dlewis6, dfilip). <Arle> ACTION: dfilip to make notes on extraction/merging behavior for target pointer [recorded in] <trackbot> Created ACTION-509 - Make notes on extraction/merging behavior for target pointer [on David Filip - due 2013-05-14]. <Marcis> Hi All <Marcis> We are waiting for the GotoMeeting Organiser to arrive <Arle> scribe:Arle Due to connectivity problems the following log is based on offline scribing, done by Arle Topic: XLIFF [Portion missed] David: First public review of XLIFF 2.0 will be done 29.May. Those interested in mapping need to comment now. There is a commenting facility on the web. Felix provided the needed links. ... Table on XLIFF in the wiki has a 2.0 column Felix: What version should we link to? David: Public review contains links to the public review draft. That would be a good source for the link. ... We expect at least two public review drafts, a second one in June. But we can review to public review draft 1. That's better than referencing an editorial draft. ... Felix, it should all be in the email you got. Also the first page of the spec contains the citation format. ... We've concentrated so far on XLIFF 1.2, but 2.0 solutions are marked in most cases. In many cases it is easier to map to 2.0. ... We tried to use the core vocabulary of XLIFF where possible, which leads to some inconsistent solutions when 1.2 can't handle something in ITS 2.0. ... <mrk> is a generic marker in XLIFF, and we use it to map metadata. One marker can carry more than one piece of metadata, e.g., a "translate" marker can also carry information about terminology ... Does anyone have a concern about that? Dave: Is that best practice in all cases? Felix: Can you show an example? Dave: What happens when they change in a workflow, e.g., through resegmentation? David: We have <sm> and <em> for cases where you can't have well-formed <mrk> in a segment. Dave: So you don't preclude using nested <mrk>s if that's more appropriate? David: It depends on what is in the original document. If the spans are all the same in the source, you wouldn't usually want to split them, although you could without violating syntax. Dave: I'm thinking of cases where you do text analytics after you have extracted the text and need to insert your own. Felix: This is what Marcis needs to know. David: Use the exiting markers if possible, otherwise insert your own. We need some language to explain this. ... The table started mapping categories, but now we need explanation. Dave: Example of "The City of London", where you might get two annotations on "City of London" but then realize one needs to be moved to "The City of London" David and Dave: Discussion about solutions David: In 2.0 it's easy, but in 1.2 the solution is ugly. Yves: There is no implementation of 2.0. David: We're working on implementation statements now. When we are past the review we will gather them. Felix: Marcis has generated 1.2 markup, and that is what he needs to know. Dave: We still need to look at localization quality rating for XLIFF. Phil needs that. David: terminology markup shows where alongside its:term. Yves: That duplicates information. David: And creates the possibility for conflicting information. Dave: There are XLIFF-only and ITS-only processors, plus processors that would know about the mapping. Felix: Actually the third one is ITS-only since it would use global rules. David: We should catch places where ITS does not make sense ... Yves: I've look at the example from Marcis and he generated an invalid XML file due to some linking [?] [missed discussion about processing between Yves and David] Felix: You could do a mapping of its: to itsx: Felix: Someone on IRC please assign an action to David. ... Marcis is implementing to the end of the month. Terminology, language, elements within text, and domain are what he needs to do. ... David asked if we could have a namespace on the W3C server. Do we still need that? Yves: Yes. We need the URI. (Prefix is unimportant.) Felix: Could we agree on what the URI would look like? ... http: ... That would be the mapping URI with the prefix itsx ... Please assign me an action to do this. David: Does Marcis need the 2.0 part as well? Yves: He is doing 1.2 only. David: There is an issue with needing more than one reference? Yves: The issue is with the mapping. We don't have term-info locally in ITS? ... There was a problem that in some cases in XLIFF you can have information about the term without referring to anything, but you can't have that in mapping from XLIFF to ITS. ... We have the termInfo pointer globally, but you can point to extra information about the term and put it locally in XLIFF, but the reverse is not possible since you cannot create a global rule. David: if the original processor knows about global rules, the extractor can do the matching and knows it comes from a global rule, it handles the mapping. Yves: But what happens if the markup comes from XLIFF not the original file so there is no local markup to attach it to. ... For example, if you use Enrycher. David: So you need to know about the source format to map it back. ... Maybe you do it if you can or you send an error message. Yves: Without implementations we can't tell if it will work or not. ... Some cases we can't do anything. ... Solution is that we may need a termInfo attribute in ITS, but we decided early on not to do this. David: In the case of domain we discussed the need for local mechanisms in ITS. ... I wouldn't be opposed, but we are in the last call. Yves: Who does it and would we really need it? We don't know. If you start with an XLIFF file from an original document you want to mark the terms up, but you probably don't want to inject them back in the original document. Felix: Tilde use case ends in May, so we don't want to make changes that may render it nonconformant. Yves: Let's discuss these issues in best practice David: The big things are ITS prefix, domain. Dave: Let's put a complex example in domain to be more realistic. David: I think domain is resolved. ... Same issue as for termInfo. If you introduce it in XLIFF, you need somewhere to put it globally when you get back. Dave: On domain, for consistency, we have an attribute <domains> with an S. The domain should be comma-separated since there an be more than one. It's not in the spec, but in the suite. Should we use domains for consistency? Felix: Domains attribute contains a comma-separated list. David: So you suggest itsx-domains? Dave: Yes. For RDF mapping it is important too. Felix: Let's discuss that tomorrow. David: For elements within text, we suggest a separate unit in both cases. David: For language information, mtype="x-itsLang". We just need to remove the question marks in the wiki. Yves: How about making the mtype as generic and then use the data categories you need? The only problem would be when you use term and things like that. David: mtype becomes "x-its" in this case. For XLIFF 2.0 it takes the same mechanism (UPDATED IN WIKI) David: Now for locale filter, it is basically more sophisticated version of translate, so it should be easy. ... You can use the same mechanism. Yves: We haven't thought about this one. In XLIFF you can extract without a target locale. So what do you do when you have some entries for some locales but not others. ... You may do things after extraction, but you need to know information about locales and preserve it after extraction. An XLIFF doc may have no targets. You don't want to reprocess it for every single locale. ... You might need information at the trans-unit level. David: Can the processor work at the inline level? Yves: That would certainly be possible. ... If we use an extension, you have to understand it to process the file. Felix: Maybe Dave can create an example we can look at. David: The mechanism will be the same in any case since it is an extension, like translate. It would be on a unit. Does it make sense to have it at the segment level? Yves: You don't extract segments, but units. You need at least one segment (which can be a paragraph if no further segmentation is done). Dave created examples in the wiki for locale filter. [missed portion of a few minutes] David: Ruby is gone David: text analysis uses mtype="phrase". Felix: Change this to use should instead of must since confidence may not be needed. David: If you do use it, we need guidance on how. Tadej: In the spec if uses a MUST, but it applies only when confidence information is used. Yves: Does mtype="phrase" make sense here? Does it have a specific meaning? It may not be a phrase. What is the meaning of phrase? David: There is no defined meaning. Dave: There is some ISO meaning [?] David: Even if you can't read the ITS portion, using phrase gives you a notion that there is an idiomatic value, for the generic XLIFF processor that doesn't know ITS. ... Now on to provenance, which is Dave's. Dave: We only support it in stand-off mode. David: It works in 1.2, but for 2.0, maybe look at change tracking mechanisms. [OBSCURED because already added: Action: dlewis6 to look at the XLIFF 2.0 change tracking module for provenance] David: externalResource Yves: should be externalResourceRef. I also found that <mrk> doesn't work when implementing. Because you are not referring to an object, but rather the content in the object [?] ... But we have cases where you can't put the mrk inside where it needs to go. David: So here we should use something other than mrk? Yves: Probably <ph> David: What if the external resource is a TM, term database? Yves: Those don't apply. It refers to video, images, etc., stuff from the original document. David: We need to change mrk for generic inlines. In 2.0 we need to look at the resource data module. ... You have skeleton, original data, and resource data module. So you think this is always just an inline? Yves: For 1.2 it is clear. David: Let's use inlines the inline level in 2.0. But use the skeleton for the structural level? Yves: The skeleton doesn't make sense to me, but otherwise, the external module for reference is relevant. We need to look at it. David: I'll look it. Yves: I'm not OK with that, because it would force people to add support for a module for something pretty simple. Felix: The category was suggested by Shaun and dealt with source content to show what belongs to the file and is source-content focused. Yves: I think that is partially what Microsoft intended in their module. ... Someone need to look at it anyway. Felix: What is the timeline? Yves: Don't know. Dave: So what are we using for 1.2? David: generic inline markup. Same for 2.0, but a different set. [OBSCURED because already added: Action: Yves to examine XLIFF resource data module with respect to external resouce.] David: for id value inline, we are not addressing this use case. (Explain that NA is after deliberation.) Yves: To do it we would need an extension. David: Would require unique IDs within the doc. But I think we can drop it for now and revisit if needed. David: Mārcis asked why we have both its- and itsx- Felix: Can you clarify it for me too? David: Sometimes the local markup does not exist in XLIFF, so you have to use its. In some cases the local markup exists, but some information is lost in the mapping, so the attributes no longer make sense and depend on the data that was mapped. ... For that we use itsx-. The URI is actually the its-xliff namespace, but we use itsx- prefix. ... The namespace has been sort of defined. Felix: Point Tilde to There is nothing there, but there will be. David: preserve space ... XLIFF uses xml:space, so what do we say about inline? ... There shouldn't be different whitespace behavior inline, right? ... so I would suggest to say that preserveSpace doesn't need to be defined for inline. Yves: You could, but I'm not sure tools can actually do it. But you could have an element in a paragraph where you need to preserve space. David: A code sample inline might need that. Can you put xml:space on that? Yves: You could. No idea what tools would do, but follow the example of xml:lang here. David: Agreed. Use generic xits- and xml:space ... Localization quality issue Dave: We decided to do this only in standoff mode. ... you can do it on unit, source, or target, depending on where the issue is and where it relates. <Yves_> s\\\ David: Localization quality issue Dave: Not finding anything on this in the spec. David: Not sure if the rating makes sense for inlines. ... Maybe in 1.2 if mtype="seg" ... In 1.2 both unit and seg are structural. But talking at that fine-grained level doesn't make sense. Yves: can we have it on a span element in HTML? ... If so, then we need to support it. Dave: We can't anticipate what people might do. David: We should have a note to say that it is discouraged to do it on spans and other inline elements. Dave: a lot of things like mtConfidence work at fine granularity. David: This is analogical. It's the same thing: a score. Dave: Would you apply these to an alt-trans? David: Alt-trans don't have any equivalent in the target document. Arle: I think it makes sense to support it because you might need it for your selection mechanism between multiple alt-trans elements. David: Agreed. <scribe> ACTION: dlewis6 to make LQI and LQR similar to mtConfidence in structure. [recorded in] <trackbot> Created ACTION-510 - Make LQI and LQR similar to mtConfidence in structure. [on David Lewis - due 2013-05-14]. David: mtConfidence. We don't have much for 2.0. <scribe> ACTION: dfilip to create mtConfidence examples for XLIFF 2.0 [recorded in] <trackbot> Created ACTION-511 - Create mtConfidence examples for XLIFF 2.0 [on David Filip - due 2013-05-14]. David: for origin, it doesn't tell you what it should look like. Don't want to conflict with simple and advanced usages. Dave: the reasoning for using it as a flag was so that you'd know how to treat things in scenarios, but you can't reserve that phrase. David: We could recommend that origination is used for a URI. Then it would be like annotatorsRef, but it would be overloaded. ... Maybe we use origin for annotators ref and then we could have—wouldn't need a flag then—because the ref would say it mtConfidence. ... We get rid of annotatorsRes in alt-trans, but put it origin and then it is overloaded. Yves: Doesn't sound good to me. Tools already use origin and want to put values there. ... I use it to tell me that it comes from Microsoft ... We have matchQuality. Why not use annotatorRef like we do elsewhere? Dave: annotatorsRef then becomes the flag. Yves: I don't use annotatorsRef. It shouldn't behave differently depending on where it comes from. ... I already use it. You don't need to overload origin. It doesn't bring anything to the information. You already have confidence. Felix: The tools processing origin and mtConfidence don't need to understand both. Yves: Using it will create problems since we already use it. Not just mine, so we can't reuse it for an override. ... We would lose information. David: So shall we leave origin out of the example? Dave: Yes, it takes care of the problem. Felix: The conclusion of those in the room is not to confuse origin and annotatorsRef Dave: The only issue is that matchQuality doesn't have restrictions on the value. David: Could be anything: TM, MT, etc. ... Agree we can use mtConfidence as the flag for the processor. We don't have to specify what to do with origin. Resolved. ... Allowed characters. This is only just now stable in terms of the regex. The mechanism is to use the its:allowedCharacters, in both XLIFF 1.2 and 2.0. ... StorageSize in 1.2 is clear. but need action about native attribute in 1.2. ... maxWidth. Yves: We actually decided we can't use local markup. David: 1.2 is resolved, but 2.0 needs checking. Yves: The example currently shows transunit, but normally it would apply to the element with the content that has the limitation. David: Make it always on source and target, in 2.0 as well. Yves: Should be the same for allowedCharacters. David: Agreed. <scribe> Scribe: pnietoca scribe pnietoca Pablo: Felix is summarising his discussion with Robin Berjon about the handling of HTML translate ... Yves: Which of the 3 options choose? Jirka: Whatever you want! Karl: It should be the most supported Yves: Can support a bit of the three <Arle> \me darobin, what was the ID number for the GTM session you used. Pablo: Robin has just joined the meeting Yves: Talking in general about HTML5 Robin: The text on HTML5.1 is going in the right direction ... The reference should be HTML5.1 David: We understand we need to be in sync with HMTL5.1, but we need a normative reference <fsasaki> Robin: You can normative reference to the above link <darobin> Felix: Encourage to look at the latest version of translate HTML ... in the spec Yves: Looking at the text that is currently there, the attributte style ... is like a script ... why not a list of script attributtes there? Robin: It's terrible practice Felix: Putting the last reference from Robin in the spec ... it's ok with everybody? ... Agreed ... Talking with Robin about Ruby in HTML5 and the dropped one in ITS ... Would it be good to have a reference to the HTML spec for the moemnt? <darobin> Robin: They are going to update it, so look to the latest version Yves: Still something to do, correct the text about the use of translate XML vs HTML Felix: Showing how to point to HTML spec from the data category table ... Felix changing the spec with the reference and needed text about translate attribute ... The case of translate is similar to language information, too many different behaviours ... We need to summarise the behaviour in a note of as of writing Karl: Provide examples of the predefined list of attributes ... Suggests not to have two subsequent links to HTML5 in the translate section of spec Felix: Seggests to change from TEI to another interchange format for the next spec <fsasaki> ACTION: daveL to ping lerory to update tests for translate in html5 and elements within text [recorded in] <trackbot> Created ACTION-512 - Ping lerory to update tests for translate in html5 and elements within text [on David Lewis - due 2013-05-14]. Felix: New issues forbidden :) <Marcis> Hi Felix <Marcis> Now I do not hear you <fsasaki> we are restarting gotomeeting <fsasaki> from a different computer <fsasaki> can you leave the call and dial in the call? 682-416-317 <fsasaki> sorry for the inconvinience <Marcis> We can try <Marcis> (Skype) <Marcis> Tatiana's Skype name is kumeliite_1 <fsasaki> tatiana: how you see the most recent version of the showcase <fsasaki> .. implements all of the required functionality Tatiana: Show Tilde's demo of ITS enryched Term <fsasaki> .. annotating plain text example <fsasaki> demo shows how to upload a file to be annotated <fsasaki> tatiana: choose one of three languages. Now showing English <fsasaki> tatiana: first annotation option is statistical annotation. 2nd is term bank based annotation <fsasaki> tatiana: orange term candidates <fsasaki> .. source shows how these are mapped to ITS. You can download annotated document <fsasaki> .. html "script" element contains term list <fsasaki> .. the html body contains references to terms from that list <fsasaki> marcis: termConfidence not given is term is comming from term bank <fsasaki> .. in html5 document you have a tbx format term entry <fsasaki> marcis: now example of annotation with html5 as input <fsasaki> now xliff examples <fsasaki_> now q/a <fsasaki_> dave: do you prioritze term bank over statistical system <fsasaki_> .. what do you do if you get a clash? <fsasaki_> tatiana: you can switch <fsasaki_> .. e.g. having results only proceeded by statistical tool <fsasaki_> .. or you can have only euroterm bank results <fsasaki_> dave: what do you do if a word is matched in both? <fsasaki_> marcis: whenever you select term bank <fsasaki_> .. there is a filtering step <fsasaki_> .. not to put a heavy burden on the term bank <fsasaki_> .. in order to narrow down search space in term bank <fsasaki_> .. we select term candidates with statistics, then do term search in the term bank <fsasaki_> .. a user can select whether she wants to see the results of statistics or also term bank <fsasaki_> dave: the schema for the term entries in html? <fsasaki_> marcis: that are tbx entries <fsasaki_> dave: had you planned to offer this as a restful web service? <fsasaki_> marcis: it can be accessed as a web service api <fsasaki_> .. what you see is just a visual interface for humans <fsasaki_> .. everything what you send to a web page calls the service <fsasaki_> dave: did you document the web service interface? can the other partners use that? <fsasaki_> marcis: the documentation will contain the API description as well <fsasaki_> tatiana: this is just an interface to showcase the solution <fsasaki_> tatiana: example of the agricultural domain <fsasaki_> .. that annotation gives quite a lot of terminology tagged This is scribe.perl Revision: 1.138 of Date: 2013-04-25 13:59:11 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/iits-xliff/its-xliff/ Succeeded: s/"set"/"seg"/ Found Scribe: Arle Inferring ScribeNick: Arle Found Scribe: pnietoca Inferring ScribeNick: pnietoca Scribes: Arle, pnietoca ScribeNicks: Arle, pnietoca Present: Karl arle dF dave felix jirka karl mauricio milan pablo tadej yves ankit(IRC) Agenda: Got date from IRC log name: 07 May 2013 Guessing minutes URL: People with action items: dave davel david davidf df dfilip dlewis6 felix yves[End of scribe.perl diagnostic output]
http://www.w3.org/2013/05/07-mlw-lt-minutes
CC-MAIN-2014-52
refinedweb
4,121
76.42
Member Since 5 Months Ago 2,600. tha07 left a reply on Using UUIDs Over Auto Increment @DARIUSIII - Thank you. I'll check this out :) tha07 left a reply on Using UUIDs Over Auto Increment @BOBBYBOUWMANN - Thank you. Yep. I have not done in any projects. Maybe I should start from this one :) tha07 started a new conversation Using UUIDs Over Auto Increment I have an admin and guest interface in my application. If I want to implement UUIDs only for guest interfaces what is the approach? Or do I need to stick with auto increment IDs. (Guest interfaces will only include "get" requests) tha07 left a reply on Prompt To Change Password If Default Password tha07 started a new conversation Prompt To Change Password If Default Password I'm creating a function to check the authenticated users password with the app's default password and prompt user to change password. I think the best way is to do it via a middleware. But was not able to achieve it. How can I check it from a middleware and redirect user to a specific route to force to enter a password tha07 started a new conversation My First Big App - Structuring Issue Hi, I'm new to Laravel and loving it. I'm on my way to create my first big app for a laboratory. I'm in the plan stage and have this issue. I have planned several tables. customers, tests, customer_tests, test_properties, customer_test_property_results Example for a test is "high blood pressure". Each test have several properties like white blood cell count, red blood cell count with there necessary max values and min values. this properties vary according to the test. After customer register for a test and the test is done by the lab, the test results are entered by the app. I will create this form with every test property to enter there respected values. The problem is How can I validate the form data? The form values are varying with each test types. So the controller function which test data results are validating don't know what test is coming to do the validations. Is there any dynamic way to do this? tha07 started a new conversation Problem In Structuring Models, Controllers And Tables Let's say I have Clients and Tests tables. A client will register for many tests. And keep registering for the same test again. The normal procedure I use is creating a model called ClientTest and also a controller for model as ClientTestController. Also a route named clienttest to implement the crud functionalities for the table. Is it the best practice or is there any better way to handle it? tha07 left a reply on Route For Different User Types Thank you :) tha07 started a new conversation Route For Different User Types I have a project that is been setup for middleware as this way. This routes are available only for role 1 user role. I want them available for both role 1 & 2 user roles. I tried adding 'auth', 'Role:1', 'Role:2' but it didn't work my routes\web.php file Route::group(['prefix' => ADMIN, 'as' => ADMIN . '.', 'middleware'=>['auth', 'Role:1']], function() { Route::get('/', 'Dashboar[email protected]')->name('dash'); Route::resource('users', 'UserController'); Route::resource('businesses', 'BusinessController'); }); my App\Http Kernal File protected $routeMiddleware = [ ... 'Role' => \App\Http\Middleware\Role::class, ]; my App\Http\Middleware Role File public function handle($request, Closure $next, $role) { // Not Logged if (!Auth::check()) { return redirect('/login'); } // Not allowed if ($request->user()->role != $role) { return abort(404); } return $next($request); } tha07 left a reply on Can You Explain This Code Thank you very much tha07 left a reply on Can You Explain This Code what is the $role here? tha07 left a reply on Can You Explain This Code In Kernal.php I have, protected $routeMiddleware = [ ... ... 'Role' => \App\Http\Middleware\Role::class, ]; In Role.php I have, public function handle($request, Closure $next, $role) { // Not Logged if (!Auth::check()) { return redirect('/login'); } // Not allowed if ($request->user()->role < $role) { return abort(404); } return $next($request); } tha07 started a new conversation Explain This Code I have this code in the routes web file. I find it difficult to understand. Route::group(['prefix' => 'admin', 'as' => 'admin'. '.', 'middleware'=>['auth', 'Role:0']], function() { Route::get('/', '[email protected]')->name('dash'); Route::resource('users', 'UserController'); }); Does it mean that the Role 0 users only have the access to the following Routes? tha07 left a reply on $ Is Not Defined - Jquery Not Found thank you for your feedbacks. I think I should read more about mix. Actually this is a template. I go this error while compiling it. Manged to solve it by importing jquery in the app.js file like this. import $ from 'jquery'; window.jQuery = $; window.$ = $; tha07 left a reply on $ Is Not Defined - Jquery Not Found here how my bootstrap.js file looks like. import './masonry'; import './charts'; import './popover'; import './scrollbar'; import './search'; import './sidebar'; import './skycons'; import './vectorMaps'; import './chat'; import './datatable'; import './datepicker'; import './email'; import './fullcalendar'; import './googleMaps'; import './utils'; import './sweetalert2'; import './select2'; so you're telling that i have to add jquery to this file? tha07 left a reply on $ Is Not Defined - Jquery Not Found It still doesn't work tha07 started a new conversation $ Is Not Defined - Jquery Not Found In my Laravel project (laravel 5.6) I installed Jquery from npm. Then I added it to webpack.mix.js. mix.webpackConfig(webpack => { return { plugins: [new webpack.ProvidePlugin({ $: "jquery", jQuery: ["jquery", "$"], "window.jQuery": "jquery", Popper: ["popper.js", "default"] })] }; }); after compiling the assets and trying to use jquery it shows "Uncaught ReferenceError: $ is not defined" I am using my custom JavaScript file after loading the mix file in my view. ```php <script src="{{ mix('/js/app.js') }}"></script> <script type="text/javascript" src="/js/tests/tests.js"></script> In my custom JavaScript file I added the following code to check Jquery. ```javascript $("#myCheckButton").click(function(e) { console.log(test); }); I tried changing the webpack.min.js webPackconfig settings but was not able to solve it. Most questions like this recommended to put the custom js files after the mix. I think I got it right in my case
https://laracasts.com/@tha07
CC-MAIN-2019-13
refinedweb
1,036
66.23
iSndSysManager Struct ReferenceThis is the sound manager for Crystal Space. More... [Sound system] #include <isndsys/ss_manager.h> Inheritance diagram for iSndSysManager: Detailed DescriptionThis is the sound manager for Crystal Space. Its only purpose is to keep track of loaded sounds. Definition at line 57 of file ss_manager.h. Member Function Documentation Create a new sound wrapper. Find a sound wrapper by name. Get the specified sound. Return the number of sounds. Remove a sound wrapper by index from the sound manager. Remove a sound wrapper from the sound manager. Remove all sound wrappers. The documentation for this struct was generated from the following file: - isndsys/ss_manager.h Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/structiSndSysManager.html
CC-MAIN-2015-35
refinedweb
119
55.61
I am making a game having different scenarios. It is an endless runner with four different characters . After certain score amount I want to change the sprite in background and also the character itself .I have written a code to spawn background one after other . using UnityEngine; using System.Collections; public class SpawnerScript1 : MonoBehaviour { public GameObject background; public Transform bga; public Transform bgb; float finalpos; public float spawnTime; // Use this for initialization void Start () { Spawn (); finalpos = bgb.position.x - bga.position.x; } void Spawn() { bgb.position = new Vector3 (bgb.position.x + finalpos, bgb.position.y , bgb.position.z); Instantiate (background, b.position, Quaternion.identity); Invoke ("Spawn", spawnTime); } public void Stopinvokation() { CancelInvoke ("Spawn"); } } Now , How can I change background after some time .So that this same code snippet is used but with other background on it .Just Keep in mind that I have to change the sprite from other script not this one , from the script in which i am calculating the score . Maybe add a public reference to the instantiated background in your class, and see if you can access it using your score script e.g. public GameObject theBackground; then in spawn, theBackground = Instantiate(background, b.position, Quaternion.identity); You should be able to access in your score script using GameObject accessedFromHere = Spawnerscript1.theBackground; If you still can't get access, try changing "public GameObject theBackground;" to public static GameObject theBackground;. Component without gameObject 2 Answers Assigning to global transforms of current gameobject 1 Answer If you change the rigidBody2D.position, you also need to change the gameObject.transform.position? 0 Answers Are .gameObject and .transform both using GetComponent() in the background? 1 Answer How to get the height of a gameobject at a given point ? 1 Answer
https://answers.unity.com/questions/963197/how-to-change-background-sprite-at-run-time.html
CC-MAIN-2020-40
refinedweb
289
50.73
#include "byte.h" /* If you need to compare a password or a hash value, the timing of the * comparison function can give valuable clues to the attacker. Let's * say the password is 123456 and the attacker tries abcdef. If the * comparision function fails at the first byte without looking at the * other bytes, then the attacker can measure the difference in runtime * and deduce which byte was wrong, reducing the attack space from * exponential to polynomial. */ int byte_equal_notimingattack(const void* a, size_t len,const void* b) { size_t i; const unsigned char* x=(const unsigned char*)a; const unsigned char* y=(const unsigned char*)b; unsigned char res=0; for (i=0; i<len; ++i) { res |= (x[i]^y[i]); } return res==0; }
https://git.lighttpd.net/mirrors/libowfat/src/commit/c4f30cc2c387c30572c8edf7f9b175e970e113bb/byte/byte_equal_notimingattack.c
CC-MAIN-2022-27
refinedweb
123
52.53
Hide Forgot Description of problem: Over the past 2-3 weeks we have had a repeating issue where all 3 HA routers go into backup state. No Master is negociated and the router is unusable. Looking at a tcpdump of the namespace for the routers, I see VRRP traffic arriving at to all routers and so all 3 routers remain in backup. On a working router (for comparison) I see only traffic to 2 routers .. as expected. Interestingly the VRRP traffic in the broken routers case appears to be orginating from the *other* router in the tenant! Version-Release number of selected component (if applicable): openstack-neutron-10.0.6-0.20180317014607.93330ac.el7.centos.noarch openstack-neutron-sriov-nic-agent-10.0.6-0.20180317014607.93330ac.el7.centos.noarch python-neutron-10.0.6-0.20180317014607.93330ac.el7.centos.noarch puppet-neutron-10.4.1-0.20180216212754.249bdde.el7.centos.noarch openstack-neutron-openvswitch-10.0.6-0.20180317014607.93330ac.el7.centos.noarch python2-neutronclient-6.1.1-1.el7.noarch python-neutron-lbaas-10.0.2-0.20180313085330.dfc0e24.el7.centos.noarch openstack-neutron-metering-agent-10.0.6-0.20180317014607.93330ac.el7.centos.noarch python-neutron-lib-1.1.0-1.el7.noarch openstack-neutron-common-10.0.6-0.20180317014607.93330ac.el7.centos.noarch openstack-neutron-ml2-10.0.6-0.20180317014607.93330ac.el7.centos.noarch openstack-neutron-lbaas-10.0.2-0.20180313085330.dfc0e24.el7.centos.noarch How reproducible: Happens every few days. Not sure how to reproduce what I've noticed is tenant has more than one router and only quick and dirty solution is to delete *all* routers (not just affected one) and re-create them. Steps to Reproduce: 1. Not sure 2. 3. Actual results: Router is created and shortly after all 3 routers become backup Expected results: Router is created with one master and 2 backup Additional info: Hi Kieran, The keepalived configuration pasted in comment 1 seems correct, router replicas are supposed to be set up with the same configuration. Next time this reproduces, can you please attach sosreports from all controllers but more importantly grab someone from the Networking team such as Brian, Jakub, Slawek or Bernard? Hi Assaf, The router config in comment 1 is for two different routers in the same tenant. That doesn't appear correct to me ... is it? From what I can see the two separate routers above are put in the same VRRP group and so we have a situation with 6 routers made up of 1 master and 5 backups, instead of 2 separate routers comprised of 1 master + 2 backups each. Maybe I totally on the wrong track? Kieran and I had a call this morning. We can confirm there is an issue in vr_id allocation as the second tenant router has same vr_id as the first router. There is only a single vr_id allocation for two routers. We created third router and it got a new allocation correctly. Fortunately, we do have debug logs from the time second router was created so after looking into those we shall hopefully find the cause. Turns out the logs were not in DEBUG mode as we recently did a deploy that overwrote out DEBUG mode settings! These logs didn't show anything obvious causing this issue. We haven't see a repeat of this for the past 4 days. I will enable debugging again, and see if the problem happens again. OSP11 is now retired, see details at
https://bugzilla.redhat.com/show_bug.cgi?id=1570136
CC-MAIN-2021-21
refinedweb
585
53.81
#include <UUID.h> Type to represent UTC as a count of 100 nanosecond intervals since 00:00:00.00, 15 October 1582. Default constructor. Destructor. Obtain the system time in UTC as a count of 100 nanosecond intervals since 00:00:00.00, 15 October 1582 (the date of Gregorian reform to the Christian calendar). ACE_Time_Value is in POSIX time, seconds since Jan 1, 1970. UUIDs use time in 100ns ticks since 15 October 1582. The difference is: 15 Oct 1582 - 1 Jan 1600: 17 days in Oct, 30 in Nov, 31 in Dec + 17 years and 4 leap days (1584, 88, 92 and 96) 1 Jan 1600 - 1 Jan 1900: 3 centuries + 73 leap days ( 25 in 17th cent. and 24 each in 18th and 19th centuries) 1 Jan 1900 - 1 Jan 1970: 70 years + 17 leap days. This adds up, in days: (17+30+31+365*17+4)+ (365*300+73)+ (365*70+17) or 122192928000000000U (0x1B21DD213814000) 100 ns ticks. Get the time of day, convert to 100ns ticks then add the offset. Initialize the UUID generator The locking strategy prevents multiple generators from accessing the UUID_state at the same time. Get the locking strategy. Set a new locking strategy and return the old one. Initalization state of the generator. The system time when that last uuid was generated.
http://www.dre.vanderbilt.edu/Doxygen/Stable/libace-doc/a00926.html
CC-MAIN-2013-20
refinedweb
221
74.49
GameFromScratch.com This entry in the Closer Look series is a bit different than normal. First, Blade Engine is very much a work in progress, so expect bugs and flaws and minimal documentation. Second, it’s actually built over top of an existing game engine, LibGDX. Finally, it’s a game engine focused on one very specific genre – adventure games. Given the popularity of hidden object games on mobile these days, there are no doubt a number of people looking for an appropriate engine. So without further adieu, I present the Bladecoder Adventure Engine, an open source cross platform LibGDX based game engine and editor for creating adventure games. As always there is an HD video version available here. Blade engine consists of two parts, the underlying game engine and the editor that is layered on top of it. It is designed in such a way that you can work entirely in the editor and never once right a line of source code. You assemble your game from a collection of Chapters, Scenes and Actors and added events and actions in the form of verbs. If you want to modify the fundamental structure of the game itself, you are going to have to jump into the underlying source code. Fortunately that is an option, as Bladecode Engine is hosted on Github and the source is available under the incredibly liberal Apache 2 license. Blade Engine Features at a Glance: The heart of Bladecoder is ultimately the editor, so let’s focus there after we cover getting started. To get started with Bladecoder you need to have Java and git installed and properly configured. Bladecoder uses the JavaFX ui library so you will have to use JDK 8 or newer or be prepared to have to configure JavaFX manually in the build process. You will also require an internet connection for the build process to succeed the first time. To start, from a terminal or command line, change to the folder you want to install Bladecoder and enter: git clone cd bladecoder-adventure-engine gradlew build gradlew run There is an example repository, including the work in progress game The Goddess Robbery available in the repository. You should probably clone this repository too, as this is perhaps the single biggest documentation source available right now. Assuming the compilation process went without issue above, you should now see the Adventure Editor, where the bulk of your work will occur. Your game is composed of a collection of Chapters, which in turn contain Scenes. Scenes in turn are a collection of Actors and organized in layers: Game Props enables you to set global properties of your game: Resolution enables you to quickly create scaling modes for supporting multiple device resolutions ( think Retina ): While Assets enables you to import multiple defined assets include audio and music files, texture atlases, 3D models, images and more. You organize your scene using the editor available in the center of the window: You can place actors on different layers, define walk paths, etc. Click the Test button to preview that scene in action. The actual logic of your game is defined on the right hand side of the editor. Here you can set properties of your actors: Create and edit dialogs: Define sounds and animations: Clicking the edit icon will bring up the appropriate editor: While selecting an animation will preview it in the scene: Finally Verbs are the heart of your application: You can think of verbs an analogous to event handlers, and they can be applied at the world, scene or actor level. There are also default verbs that will be fired if unhandled. Think the generic “I don’t know how to use that” messages from adventure games from the past. Let’s look at an example from the Scene, handling the Init verb which is fired when the scene is ready. This verb causes the sequence of actions shown at the bottom part of the above image to be fired when the scene init verb is called. This causes the player to move, a dialog sequence, the player is scripted to drop an item, a state value is changed, etc. You can create new elements by clicking the + icon: And filling out the resulting form. Each element has a different form associated with it. Here for example is the result of the Say element: Once complete simply click the play or package button: Play launches the standard loader: This screen can obviously be customized to each individual game. While package brings up a form enabling you to build your game for a variety of platforms: And that essentially is it. This is certainly a weak point of the Bladecoder engine, it’s the result of a single coder, there is minimal help available and if you don’t know how to debug Java code, you will probably end up in trouble, at least at this point in it’s lifecycle. There is currently no community or forum available for this engine but perhaps that will change in the future. I spoke with the developer a few times however and he was very responsive and quick with fixes and answers. He is also on twitter at @bladerafa if you want status updates on the project. For now documentation consists of a minimal wiki although for the most part the best source of documentation is going to be from following the examples. Make no mistakes, this is very much an under development engine so expect things to blow up spectacularly at any time. When it does, you are probably going to be on your own figuring out why as there is no community to fall back on. All that said this is a surprisingly robust tool that makes the process of creating an adventure game exceedingly simple. Once the engine matures a little bit it will be an excellent tool for even a non-programmer interested in making adventure games. For now though if you are competent in Java and interested in making an adventure game, this engine takes care of a hell of a lot of work for you and provides full source code for when it doesn’t. Plus at the end of the day, the price is certainly good too! Design Art Programming Engine 2D Review LibGDX Java In this Closer Look At we look at take a look at the jMonkeyEngine. The Closer Look At game engine series is a cross between an overview, a review and a getting started tutorial to help you decide if a game engine is the right fit for you. The jMonkeyEngine engine is a Java based, open sourced, cross platform 3d game engine that runs on most Java supported platforms and can target Windows, Linux, Mac and Android, with iOS and Oculus VR support currently being tested. jMonkeyEngine is available as both a game library, or as a set of tools built on top of the NetBeans IDE. For this closer look, we will focus on the full SDK experience. This closer look is also available in HD video format here. Although we are going to focus on the complete set of tools including in the jMonkeyEngine SDK, keep in mind it can be used in library form if you prefer working in Eclipse or IntelliJ. You will however lose access to some very convenient tools. As I mentioned earlier, jMonkeyEngine ships in two forms, as a set of libraries, or as a complete SDK build on top of the Netbeans IDE. You can download load the SDK for Windows, Mac or Linux right here. As of writing, 3.0 is the current released version, while 3.1 is available in development on Github. This version marks the first public release using the Github platform. jMonkeyEngine has a few prerequisites before installing, but they basically boil down to having an OpenGL 2 compatible video card and JDK 6 or higher installed. Once downloaded and installed simply run the jMonkeyEngine SDK application. This is jMonkeyEngine: As mentioned earlier, this is actually a preconfigured version of the Netbeans IDE with a set of plugins and extensions to support jMonkeyEngine development. This means in addition to the various jME tools you get a complete modern Java development environment, meaning code completion, project management, refactoring tools, debugging and more. I won’t be specifically covering Netbeans functionality in this guide. If you’ve got prior experience in Eclipse or IntelliJ, you should feel right at home. Personally I rate the Netbeans experience somewhere between the two, with IntelliJ being quite a bit better, while Eclipse is many many many times worse. That all said, that is purely opinion, each platform has it’s strength and weakness, it’s fans and haters. If you prefer to use Eclipse or IntelliJ you can. It is often easiest to start with a simple project, so let’s do exactly that. Select File->New Project A New Project wizard will appear. All of the standard project types supported by Netbeans are available, but also the new jMonkeyEngine templates are available too. Select BasicGame and click Next. Pick a name and location and click Finish. Your project will now be created. You can have several projects open in the IDE at the same time, just be sure to select the right one in the Projects panel: The wizard will have automatically created a project hierarchy for you: It’s optional to use this layout, but you are making life more difficult for yourself if you do not. File paths for textures in imported models are absolute, forcing your hand somewhat in how you import your data. Again, you can code around this design, but you are making your life more complicated. For the most part I found the layout fairly logical, but the suggestion to import your models into the Textures folder then relocating them to Models ( well discuss this more later ), well that simply a gross kludge. The New Project wizard also generated a default source file for us, Main.java, with the following contents: package mygame; import com.jme3.app.SimpleApplication; import com.jme3.material.Material; import com.jme3.math.ColorRGBA; import com.jme3.math.Vector3f; import com.jme3.renderer.RenderManager; import com.jme3.scene.Geometry; import com.jme3.scene.shape.Box; /** * test * @author normenhansen */ public class Main extends SimpleApplication { public static void main(String[] args) { Main app = new Main(); } @Override public void simpleRender(RenderManager rm) { //TODO: add render code } } The code is all pretty straight forward. You game code extends the class SimpleApplication, which in turn implements Application plus implements some “out of the box” behaviour like key mappings for exiting the application and implementing a camera. These default behaviours can easily be overridden as we will see shortly. SimpleApplication exposes three critical methods as part of your games life cycle, simpleInitApp(), called when your app is created, then simpleUpdate() and simpleRender() called over and over by the game event loop. Basically stick your setup code in the init() method, your update code in the update() method and drawing code in the render() method. If these methods start getting overly complex, you can refactor your design to use States, something we will cover later on. You can run or debug your project using the toolbar: Or via the Run menu: Once launched you will see a configuration Window. Select your preferred configuration and click Continue. You may be asking, can I get rid of this damned window? The answer is yes you can, but you have to use code to do it. I can’t really fathom why there isn’t a “Remember my settings” check box. Once you click Continue, your first app will run. As you move the mouse cursor around, the camera implemented in SimpleApplication is moving the camera position around. You may also notice the debug details and of course that startup window. As said earlier, this can all be override, let’s look at how. First we can get rid of the configuration window ( which I admit, gets old very quickly ) and set a default resolution using the following code: public static void main(String[] args) { Main app = new Main(); // Dont show window app.showSettings = false; // Create a new app settings loaded with defaults AppSettings appSettings = new AppSettings(true); // Override resolution appSettings.put("Width",720); appSettings.put("Height",480); // Add a title, just because appSettings.put("Title", "Super Awesome Megagame 9000!"); app.setSettings(appSettings); app.start(); } Next in our init we add the following logic to disable the camera and debug info. These need to be called after app.start(), thus why they are in init. @Override public void simpleInitApp() { // Disable fly cam this.flyCam.setEnabled(false); // Turn off debug info and FPS window this.setDisplayFps(false); this.setDisplayStatView(false);); } Now when you run your game, you should no longer see the config window, nor display stats when running. Instead you should see: One of the first things I do when testing a new engine is check to see how hard it is to get a 3D model imported. In jMonkeyEngine you have a couple of options, you can import to their native format, use a Blender plugin, support an OBJ file, or import files converted using the Ogre XML toolchain, which is also available as a Blender plugin as well as several other packages. I will use the native format (j3o) later, for now, let’s look at the process of importing a Blender model, since jMonkeyEngine has solid Blender integration built in. In fact, jMonkeyEngine actually ships with a copy of Blender as part of the SDK install, currently version 2.69 (as of writing, 2.75 is the most current version). When you run Blender from within jMonkeyEngine, this included version is the one that is run. (Note, for performance, you should always prefer using the native binary format unless you have a very good reason not to). You can add a new textured Blender cube (you don’t have to by the way), right click the desired location and select File->New->Other… Then select Blender->Box prepared for UV texturing. Name it and confirm the location, then click Finish. This will run a copy of Blender and set up a cube with textures defined for you. What’s extremely odd here is the configured cube isn’t actually ready to go. You still need to UV unwrap the cube, attach a texture and set the UVmap. You can see the entire process in the video if you need more details. You can confirm that the blend file works fine, right click the blend and select View Model. This will open the Viewer. Be sure to click the light icon (top left) to enable lighting in the viewer. Now that we know the Blender file works, let’s move over to the code to load a Blender file. there is a bit of a challenge first, Blender support is actually added as a plugin, we need to add it in first. Right click Libraries and select Add Library… Select jme3-libraries-blender then click Add Library. We need to add a light to the scene or the model isn’t going to show up. Simply drag and drop SunLight from to the drop of the simpleInitApp() code and it will drop all the code we need. package mygame; import com.jme3.app.SimpleApplication; import com.jme3.light.DirectionalLight; import com.jme3.math.ColorRGBA; import com.jme3.math.Vector3f; import com.jme3.renderer.RenderManager; import com.jme3.scene.Spatial; public class Main extends SimpleApplication { public static void main(String[] args) { Main app = new Main(); app.start(); } @Override public void simpleInitApp() { /** A white, directional light source */ DirectionalLight sun = new DirectionalLight(); sun.setDirection((new Vector3f(-0.5f, -0.5f, -0.5f)).normalizeLocal()); sun.setColor(ColorRGBA.White); rootNode.addLight(sun); Spatial blenderModel = assetManager.loadModel("Models/demoBox.blend"); rootNode.attachChild(blenderModel); } @Override public void simpleUpdate(float tpf) { //TODO: add update code } @Override public void simpleRender(RenderManager rm) { //TODO: add render code } } And run it: So other than Blender configuration, getting a model into a jMonkeyEngine app is fairly straight forward. Code Palette We briefly saw the Palette in action in the previous example. This is a selection of code snippets you can drag and drop into the editor. One major gotcha however, many of these samples depend on a library, jme3-test-data, that isn’t included by default oddly enough. We saw earlier when we set up the Blender plugin the process of adding a library. 3D File Importer While jMonkeyEngine supports the Ogre XML format and Blend files, working with a game oriented file format is almost always the best performing option. Fortunately jMonkeyEngine provides just such a format, j3o. These files can be created easily using the menu File->Import Model menu: Then select the model Material/Shader Editor You can easily create shaders right clicking an Asset folder such as Materials, New->Other… Then Material->Empty Material file You can then define a shader using a UI tool. You can also set a template that other materials inherit from. 3D Scene Composer The Scene Composer can be use to assemble and create 3D scenes. There is also a corresponding scene graph: A variety of game nodes can be created here: Terrain Editor In addition to the Scene Composer, there is also a 3d terrain tool: You can create terrain visually. Easily pull and push terrain into shape, paint with multiple textures. The generated terrain can be used in the scene composer. We only briefly touched upon the code capabilities of the jMonkeyEngine due to time and space restraints. jMonkeyEngine is a full functioning engine with the following functionality, excerpted from their website. jMonkeyEngine is well documented, with a comprehensive collection of tutorials and guides available on the wiki. I encountered a few entries that were out of date or invalid, but for the most part the document was solid and easy to follow. There is also a good reference in the form of the JavaDoc. I may not always be the biggest Java fan, but I almost always love JavaDoc generated references! Until recently the forums for jMonkeyEngine were pretty terrible, but thankfully they’ve recently transitioned to an improved forum. There is an active community and questions rarely go unanswered. They have also recently transitioned the source code to Github. There are two books available for jMonkeyEngine. Programming Engine Java 3D Review EDIT -- Paradox is now Xenko Game Engine and is available at xenko.com. Everything else should remain the same however. In this Closer Look At we look at take a look at the Paradox Game Engine. The Closer Look At game engine series is a cross between an overview, a review and a getting started tutorial to help you decide if a game engine is the right fit for you. The Paradox game engine is a C# based, open sourced, cross platform 2d/3d game engine that runs on Windows and can target Windows platforms, iOS, Android and soon the PlayStation 4. Now let’s jump right in and see if Paradox is the right engine for you. This review is also available in HD video right here. First off, if you are just starting out, let me save you some time. No, Paradox is not the right engine for you. At least, not yet it isn’t. The same is true if you aren’t willing to deal with an unstable and developing API or less than great documentation. This is very much an engine under development and it shows. There is however much here to love as you will soon see. As mentioned earlier, Paradox is a Windows based game engine capable of targeting Windows, Windows Universal, Windows Phone, plus iOS and Android using Xamarin. PlayStation 4, Mac and Linux support are listed as coming soon. Paradox provides an editor but is built primarily on interop with Visual Studio which for small teams or individuals is now thankfully free. I tested with both Visual Studio 2013 and Visual Studio 2015 without issue. The engine itself is impressively full featured: Paradox is currently available for free. The source code is also available on Github under the GPL license. Please note the GPL license is incredibly restrictive in what you can do using the source ( must release all changes and derived source code! ) and is by far the open source license I like the least. You do not however have to release source code if you link to the binary versions. They also negotiate source licenses if the GPL doesn’t work for you. I do believe that the source license is under review, or at least was. Getting started with Paradox is easy, start by downloading the installer available here. When you run the launcher it will install the newest version of the SDK as well as the Visual Studio plugin. You can update the SDK and re-install the plugin using the launcher. Assuming you are running for the first time, click the big purple “Start” button in the top left corner. By default the launcher will be left open unless you click “Close the launcher after starting Paradox” option. Next you will be taken to the New/Open Project dialog: As you can see, there are a wealth of samples you can get started with, or you can create your own New Game or Package. We will be creating a new package. These samples however are absolutely critical, as they are probably the primary source of reliable/current documentation when using Paradox. Let’s select New Game: Fill in the relevant information and click Select Next we select the Platforms we want to support as well as our targeted graphic fidelity and default orientation. Your project will now be created, bringing you to Paradox Studio. Meet Paradox Studio: The above screenshot demostrates the default starting scene that will be generated for you when creating a new project. There are a few things to realize right away… first, none of the above is required. You could remove everything and create your game entirely in code. Of course you will generally be creating more work for yourself if you do. Let’s run our default application. Choose your application and press the play or debug icon: When you hit either button, Visual Studio is invoked via MSBuild and your project is compiled and run (Visual Studio does not need to be open, but it must be installed). And here is the default application running: Now let’s take a look at the various components of Paradox Studio. This is the scene graph of your world. Using the * icon you can instance new entities: You can create hierarchies of entities by selecting one then creating a child using the * icon: In Paradox all items that exist in the game’s scene graph derive from the Entity class. Paradox is a component based game engine and the entity class is basically a component container coupled with spatial information. Below the Scene Graph is the Solution Explorer: Somewhat interestingly, the Paradox game engine uses the Visual Studio Solution (sln) as their top level project format. If you look in the project folder you will see YourProj.sln, then a folder hierarchy containing your code, assets, etc. Outside of creating new packages and folders, there isn’t a ton of reason to use the Solution Explorer, at least so far as I can figure out. Next up is the Asset View: This is where you can see and select the various assets that compose your game. They can be organized into folders to keep the mess to a minimum. You can instantiate and asset by simply dragging it to the 3D view. You can also import existing assets and create new assets using this window. Creating a new asset once again involves clicking the * icon: The views across the right are context sensitive. If you select an asset in the Asset View, it’s properties (if any) will be exposed in the Property Grid: The above show a portion of the settings that can be configured for a Material. Below the Property Grid is the References/Preview/History panel. References shows you either all of the objects that reference, or are referenced by the selected object: Action History is simply a recently performed task history: While Asset Preview enables you to see your asset in action, for example your material applied to teapot: It’s a fully zoom/pan-able 3D view. The property grid however performs double duty. When selecting an instanced object ( from either the scene graph or the 3D view ) as opposed to a template from the Asset View you will have complete different options available: This is where you configure or add new components on your entity. Public properties of a component will have the appropriate fields available. Click Add Component to add a new component to your entity: Keep in mind that components that were already added ( such as Light and Transform in this case ) will not be displayed. Components can in turn have Assets attached to them: Click the hand icon and the asset chooser dialog is shown: Finally we have the 3D View. The 3D View can be used to create and position the various entities in your scene. As you can see in the image above, the 3D view provides the traditional transformation widgets from the 3D graphics world. It also uses the traditional QWERT hot keys for selection,transform, rotate and scale that Maya made famous. The view can be zoomed using the mouse wheel or Alt + RMB, panned with middle mouse button and orbited using RMB. You also have the ability to toggle between local, camera and global coordinates as well as snap to the grid. One oddly missing component however is axis markers, making navigation a bit more difficult than it should be. For the most part the editor does it’s job. Occasionally it can become a bit unresponsive and I’ve had to restart it a few times to sync changes between it and Visual Studio. The primary purpose of the editor is to add and manage assets in your game, to position them in space. As I mentioned earlier, usage is entirely optional. There are however a few glaringly missing features, such as the ability to see and manipulate collision volumes ( you can create them, just oddly not see them ) or the ability to create nav visibility meshes. So far we’ve only seen the project creation and configuration components of Paradox3D, however it’s when you leave Paradox Studio that is either going to make you love or loathe Paradox. As I mentioned earlier, the ultimate project type of Paradox is a Visual Studio solution. Paradox is designed to work hand in hand with Visual Studio. In the main editor you should see this button: Clicking the Visual Studio logo will automatically open your project in Visual Studio, in my case Visual Studio 2015 (which is supported, along with 2013 and I believe 2010). Here is our default project: Hmmm… not a lot of code here… in fact there is only one file, our platform specific bootstrap. Obviously there would be one such project for each platform you selected when you created your game. The code contained within is certainly not huge: using SiliconStudio.Paradox.Engine; namespace DemoOh { class DemoOhApp { static void Main(string[] args) { using (var game = new Game()) { game.Run(); } } } } This file implements your specific platform’s main and simply creates an instance of the Game class, then calls Run(). Well then, how exactly do we code our game? Well that’s where the component based nature of our game comes in. In your Game project, add a new class like I did here with ExampleScript.cs: There are a couple choices of the type of script you can create depending on your needs. For a game object that is updated by the gameloop you have a choice between an asynchronous script using C# async functionality, or a more traditional synchronous script, which implements an update() method that is called each frame. There is also a Startup Script that is called when the engine is created but not on a frame by frame basis. I’ll implement a simple SyncScript for this example, as it’s the most familiar if you are from another game engine. using System; using SiliconStudio.Paradox.Engine; namespace DemoOh { public class ExampleScript :; } } } } } There are enough common bugs in Paradox Studio that a Reload will fix. It’s annoying but quickly becomes second nature. Now that your script is attached to a Script component, you can run the application and see you can now update the sphere position using the arrow keys. Also not, you don’t have to run from Paradox Studio, after you make the edits in Studio, make sure to save your project, then in you can also run in Visual Studio using F5 or by hitting the Start/Debug toolbar. As mentioned earlier, this is where it all starts to go a bit wrong with Paradox. There is full documentation, including getting started guides and reference materials, all available online only. There is also a forum as well as a stack overflow style answers site. The biggest challenge is with the engine being beta and under active development, much of the documentation is simply wrong. What remains is often sparse at best. Frankly the samples are going to be your primary learning source for now. Of course the game engine is open source and available on github, just be sure to read up on the license thoroughly. I think the Paradox Engine has the potential to be a great engine. It is certainly not for beginners, not by a mile. All of the functionality you require to create a game is in there, with a few glaring exceptions. The rendering engine is extremely nice and I personally liked the programming model, of course I like component based engines, so I was bound to enjoy it. The documentation however is… yeah, not good. I did however enjoy Paradox enough that I think I am going to do a tutorial series to help others get started with it. Of course, I will suffer the same problems that Paradox do, a changing code base is going to break my work constantly. So I am going to try and focus on smaller more bite sized tutorials. Let me know if you are interested. Programming Engine Review
https://www.gamefromscratch.com/?tag=/Review&page=3
CC-MAIN-2019-51
refinedweb
5,103
61.77
A support library, containing OLE2 functions, required for MS-Excel support in gnumeric, the GNOME spreadsheet. WWW: NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered. No installation instructions: this port has been deleted. The package name of this deleted port was: libole2 libole2 No options to configure Number of commits found: 41 Abandoned upstream, incomplete, not depend on Cleanup plist Remove old patches and add INSTALL_TARGET=install-strip devel/ patch-xy patches to reflect the files they modify. - INSTALLS_SHLIB -> USE_LDCONFIG. Found by: portlint (cports.sh) portlint: -Use DATADIR in pl. Fix build on -CURRENT. Reported by: walt <wa1ter@myrealbox.com> GNOME has just changed the layout of their FTP site. This resulted in making all the distfiles unfetachable. Update all GNOME ports that fetch from MASTER_SITE_GNOME to fetch from the correct location. Convert USE_GLIB into USE_GNOMENG+USE_GNOME. Backout previous change - it seems that new revision of the patch doesn't apply everywhere. Don't filter libc_r on 5-CURRENT. Update to 0.2.4. Update to 0.2.3 Update to 0.2.2.2.0 Update to 0.1.7 Convert category devel to new layout. Implement USE_GLIB. Update to 0.1.6 Update to 0.1.5, fixing potential library namespace conflict with WINE. libole2 contains OLE support functions, currently aimed at the Excel[tm] plugin for math/gnumeric, but extensible. Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD 17 vulnerabilities affecting 46 ports have been reported in the past 14 days * - modified, not new All vulnerabilities
http://www.freshports.org/devel/libole2/
CC-MAIN-2015-22
refinedweb
258
53.68
Opened 8 years ago Closed 8 years ago #9593 closed (invalid) permalink breaks when using include() in urls.py Description If you use include() to include urls in another urls.py then include will fail to generate the proper urls for urls in the included urls.py if the urls fall in a sub-path. In the given urls.py ... urlpatterns = patterns('', (r'', include('core.urls')), (r'^blog/(.*)', include('blog.urls')), ) ... and models.py ... class Post(models.Model): """Post model.""" title = models.CharField(_('title'), max_length=200) slug = models.SlugField(_('slug'), unique_for_date='publish') author = models.ForeignKey(User, blank=True, null=True) body = models.TextField(_('body')) publish = models.DateTimeField(_('publish')) def __unicode__(self): return u'%s' % self.title @permalink def get_absolute_url(self): # return '/blog/%s/%s/%s/%s' % (self.publish.year,self.publish.strftime('%b').lower(),self.publish.day,self.slug) return ('blog_detail', None, { 'year': self.publish.year, 'month': self.publish.strftime('%b').lower(), 'day': self.publish.day, 'slug': self.slug }) ... using the @permalink decorator will work for core.urls but it would not work for blog.urls. The permalink decorator simply returns an empty string. Change History (1) comment:1 Changed 8 years ago by mtredinnick - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Resolution set to invalid - Status changed from new to closed Note: See TracTickets for help on using tickets. The only bug here is in the URL pattern you've created. You're capturing everything from blogs/ to the end of the string at the top level, so nothing will be seen by the second-level URLConf. Leave off the (.*) part and you'll find things work a lot better.
https://code.djangoproject.com/ticket/9593
CC-MAIN-2016-30
refinedweb
278
54.49
My solo flipped and destroyed the propellers on my first attempt at flying it. Contacted solo .they wanted my logs I sent them 3 times and have been waiting for days to hear from someone at solo. After paying $1500.00 for the solo, I'm starting to get a little p'd. Can anyone tell me what to do?w Views: 4442 <![if !IE]>▶<![endif]> Reply to This This is an issue with people who buy them who do not read enough and research the technology and requirements. Do some homework... The term RTFM comes to mind :-) <![if !IE]>▶<![endif]> Reply well yes lol . IT has to be. Again find it so hard to believe its anything else since my DIY quad flies amazing... for 1 year.. <![if !IE]>▶<![endif]> Reply Hah, three+ years and not counting anymore :-) <![if !IE]>▶<![endif]> Reply I agree with Tony, but to an extent. I would say Tony you got lucky if you built a quad and it flew with no issues for a year. I have an MS in engineering and I'm a full scale flight test professional however, it took me 3+ months to get a DIY quad in the air, and another 2 months of development to make it do something useful more than a couple times. Granted I tried to do everything the right way and put a lot of time and research into every single part I bought, but even then there are always little bugs. I'm also still standing on the shoulders of the folks who developed both the autopilot software AND the hardware. There are also inherently more risks in going the DIY route since you have to research each component which introduces more human error into the equation. It seems like 3DR is doing everything they can to help Jack out, great to hear there is great customer service coming from that company. When doing trade studies on components for my own setup I had a column for customer service/support as a pivot. Why? Because when you invest as much time and money as we all have into something you care about you want to make sure someone is going to care WHEN it malfunctions. For all intents and purposes, this guy did do his homework. He researched and bought a 3DR Solo, and here, in this forum he found people that cared about his issue with it. And he is getting taken care of by 3DR. Everyone is so tied up in understanding the difference between DIY and commercial products instead of realizing we are all trying to accomplish roughly the same goal. My DIY quad is a collection of commercially bought products. I had to research them all. Jack bought a commercially built quad. He researched that product and found others had a couple minor issues which the manufacturer has fixed. I think we should mark this as {SOLVED}. <![if !IE]>▶<![endif]> Reply Yes i totally agree with you. And to add, i also took my time and build my quad within 2-3 months after reading forums for months and months. But i would not say he did his homework. There has to be more passion than just write on a forum and contact 3DR. Like with anything else i personally do. I have alot of other hobbies. From guns to motorcycles and photography, scuba diving. I never just grab stuff and expect it to just WORK. Thats just me though. <![if !IE]>▶<![endif]> Reply You two are a match made in heaven. <![if !IE]>▶<![endif]> Reply It seems the new owner who has no quad flying experience are the one had trouble. In one video, the operator did not know how to stop the quad. <![if !IE]>▶<![endif]> Reply <![if !IE]>▶<![endif]> Reply @Scott, could you attach log files to let us study your case ? Is your Solo controlled by Pixhawk Pro and AutoPilot Pro version ? <![if !IE]>▶<![endif]> Reply Darius that question shows that you are not qualified to troubleshoot our system. <![if !IE]>▶<![endif]> Reply Did you recalibrate the sticks? <![if !IE]>▶<![endif]> Reply I am just building Knowledge Database for 100+ drone unusual incidents, crash cases and 100+ for correctly commanded drones, made of log file analysis. All I need to know if firmware comes with an old FSF GNU licence to read (as below). No warranty ... to fit for any purpose... or Not fir for any purpose - licence has been removed from Firmware PRO My question is for real under previous legislation by FAA on personal drone aircraft registration and pending Senate legislation on the testing and certification of commercial drones. I would like to know (get assured) if ArduPilot , ArduDrone can get ever certified by FAA, featuring: Not fit for any pupose licence for software firmware. /// -*- tab-width: 4; Mode: C++; c-basic-offset: 4; indent-tabs-mode: nil -*- /* 24 state EKF based on work by Paul Riseborough Converted from Matlab to C++ by Andrew Tridge>. */ #include <AP_Math.h> class Kalman24 { public: // constructor Kalman24() {} void CovariancePrediction(Vector3f deltaAngle, Vector3f deltaVelocity, float dt, bool onGround); private: // note that some of these are temporary variables, but we don't // want huge spikes in stack usage, so better to make these class variables float _states[24]; float _P[24][24]; float _predP[24][24]; float _SF[21]; float _SG[8]; float _SQ[11]; float _SPP[13]; float _processNoise[24]; float P(uint8_t i, uint8_t j) { return _P[i-1][j-1]; } float &predP(uint8_t i, uint8_t j) { return _predP[i-1][j-1]; } float &SF(uint8_t i) { return _SF[i-1]; } float &SG(uint8_t i) { return _SG[i-1]; } float &SQ(uint8_t i) { return _SQ[i-1]; } float &SPP(uint8_t i) { return _SPP[i-1]; } float &processNoise(uint8_t i) { return _processNoise[i-1]; } float &states(uint8_t i) { return _states[i-1]; } void predPzeroRows(uint8_t first, uint8_t last); void predPzeroCols(uint8_t first, uint8_t last); }; <![if !IE]>▶<![endif]> Reply <![if !IE]>▶<![endif]> Reply to Discussion
https://diydrones.com/forum/topics/solo-crash-1?commentId=705844%3AComment%3A2218058
CC-MAIN-2020-05
refinedweb
997
71.55
In many languages, you can create list literals, which allow you to create a list with elements already pre-defined inside of it. But what about Java? List<Integer> list = new ArrayList<Integer>(1, 2, 3, 4); If you come from any language that supports list literals, you would expect the snippet above to work. Unfortunately, the snippet doesn’t compile. The alternative and obvious solution would be to write : List<Integer> list = new ArrayList<Integer>(); list.add(1) list.add(2) list.add(3) list.add(4) While this approach does work, it has a drawback. The drawback is that you cannot use anonymous lists as a parameter for a method. Since you can’t initialize and fill a list with data at the same time, it follows that you can’t create an anonymous list and fill it with data either. public void printList(List<Integer> list){ for(int i = 0; i < list.size(); i++){ System.out.println(list.get(i)); } } The printList method can’t be used with an anonymous list. Since you can’t insert any data into an anonymous list, the printList method is completely useless. The Solution Double brace initialization allows you to create a List and fill it with data at the same time. List<Integer> list = new ArrayList<Integer>() {{ add(1); add(2); add(3); add(4); }}; With double brace initialization, you can now use an anonymous list as input. printList(new List<Integer>(){{ add(1); add(2); add(3); add(4); }}); Output : 1 2 3 4 EDIT: The above can be more concisely done with : printList(Arrays.asList(1,2,3,4)); You can also use for loops with double brace initialization, as well as declare variables inside. For example : printList(new ArrayList<Integer>(){{ add(1); add(1); int x = 1; int y = 1; int z = 1; for(int i = 0; i < 8; i++){ x = y; y = z; z = y + x; add(z); } }}); In the code above, we are simply filling a list with the first 10 numbers of the Fibonacci sequence and printing them out. The results : 1 1 3 5 8 13 21 34 How Double Brace Initialization Works The first set of curly braces creates an inner class for the anonymous list. The second set of curly braces will create an instance initialization block, which is a block of code that will run before the constructor is called. Here’s an example of what that looks like : public class A { { System.out.println("Hello World!"); } } public class B { public static void main(String[] args){ A a = new A(); } } Output : Hello World! So when we do double brace initialization, what we’re really doing is creating a new List, making an inner class with the first set of braces, making an instance initialization block with the second set of braces, and then filling the list with data inside the instance initialization block. Overall, double brace initialization is a quick and nifty trick that will save you the agony of actually making named lists and manually adding in the defined data one at a time, when you can simply insert an anonymous list with all the defined data instantly. 2 thoughts on “Double Brace Initialization” Hi Henry. The thing here is that a new annonymous class is created, polluting the class space, hence it is not recommended. In your particular example, the same can be achieved by using Arrays.asList(1,2,3,4) see…- Thanks. The Arrays.asList tip is a great idea! I actually never thought about that, but it pretty much makes my example obsolete. I’ll make a note that my example was poor and add a different one. However, as for the pollution of the namespace issue, it seems to be a bit of a pre-optimization. The performance difference is negligible, so it can effectively be ignored unless you are creating huge amounts of anonymous classes with double brace initialization.
https://henrydangprg.com/2016/05/16/double-brace-initialization/?replytocom=10
CC-MAIN-2022-27
refinedweb
656
61.26
Im View Complete Post Hi, I am using the "Microsoft.Practices.EnterpriseLibrary.Data" version "5.0.414.0". I am using VS 2010. I have added connectionstring entry in the web.config, and I am able to access the database using SQL Server 2008 R2. I have added the below line in my code to use the database: using System.Data; using Microsoft.Practices.EnterpriseLibrary.Data; using System.Data.Common; public class A1 { public DataSet GetTableData() { Database db = null; db = DatabaseFactory.CreateDatabase("Connection String Name"); } } However, I am getting an error as: "Activation error occured while trying to get instance of type Database, key". It breaks in CreateDatabase line. How to fix this? Thank you I am using Enterprise Library 5.0,when am run the console application ,i got this error " Activation error occured while trying to get instance of type Database,key 'xxx' ". Thanks Deivendran
http://www.dotnetspark.com/links/1085-activation-error-occured-while-trying-to.aspx
CC-MAIN-2015-35
refinedweb
147
53.68
Type: Posts; User: vasantharam #include <iostream> using namespace std; class A { public: int a; A() : a(1234) {}; }; Oh the smily was a random pick, didnt realize it means mad-avatar!!! If there is a reason to have two foo classes, it probably means, they are two different flavors of something. Say class... well the point is the specific constructs in c++ questioned may have been designed better(or even left out). I want to know if I am wrong, if I am where? This forum is for such open discussions... 1. Why could C++ have not disabled a default copy constructor and why shouldnt it have demanded a explicit copy constructor, if such a use case of the class exist, it would have avoided...
http://forums.codeguru.com/search.php?s=00307e971b5cde438824b6077fb6a7bc&searchid=7002425
CC-MAIN-2015-22
refinedweb
124
67.89
Skype4Py 1.0.35 Skype API wrapper for Python. - Introduction - Community - Usage - Projects using Skype4Py - Troubleshooting - Running unit tests - Making a release - Trademark notification - Changelog Introduction Skype4Py is a Python library which allows you to control Skype client application. It works on Windows, OSX and Linux platforms with Python 2.x versions. Community Support and issues on Github. Skype4Py is not a Skype™, not associated with Microsoft or Skype. For questions you can also use stackoveflow.com with skype4py tag. Do not go for developer.skype.com for support. Orignal author: Arkadiusz Wahlig Maintainer: Mikko Ohtamaa Usage Everything that you should ever need is available as aliases in the Skype4Py package. Import it using the standard form of the import statement: import Skype4Py Importing the whole package into your script’s namespace using from Skype4Py import * is generally discouraged. You should also not access the modules in the package directly as they are considered an implementation detail and may change in future versions without notice. The package provides the following: - Classes Skype4Py.Skype, an alias for Skype4Py.skype.Skype Skype4Py.CallChannelManager, an alias for Skype4Py.callchannel.CallChannelManager - Constants Everything from the Skype4Py.enums module. platform, either 'windows', 'posix' or 'darwin' depending on the current platform (Windows, Linux, Mac OS X). - Errors Skype4Py.SkypeError, an alias for Skype4Py.errors.SkypeError Skype4Py.SkypeAPIError, an alias for Skype4Py.errors.SkypeAPIError The two classes exposed by the Skype4Py package are the only ones that are to be instantiated directly. They in turn provide means of instantiating the remaining ones. They are also the only classes that provide event handlers (for more information about events and how to use them, see the EventHandlingBase class. Every Skype4Py script instantiates at least the Skype4Py.Skype class which gives access to the Skype client running currently in the system. Follow the Skype4Py.skype.Skype reference to see what you can do with it. Warning! While reading this documentation, it is important to keep in mind that everything needed is in the top package level because the documentation refers to all objects in the places they actually live. Quick example This short example connects to Skype client and prints the user’s full name and the names of all the contacts from the contacts list: import Skype4Py # Create an instance of the Skype class. skype = Skype4Py.Skype() # Connect the Skype object to the Skype client. skype.Attach() # Obtain some information from the client and print it out. print 'Your full name:', skype.CurrentUser.FullName print 'Your contacts:' for user in skype.Friends: print ' ', user.FullName Note on the naming convention Skype4Py uses two different naming conventions. The first one applies to interfaces derived from Skype4COM, a COM library which was an inspiration for Skype4Py. This convention uses the CapCase scheme for class names, properties, methods and their arguments. The constants use the mixedCase scheme. The second naming convention is more “Pythonic” and is used by all other parts of the package including internal objects. It uses mostly the same CapCase scheme for class names (including exception names) with a small difference in abbreviations. Where the first convention would use a SkypeApiError name, the second one uses SkypeAPIError. Other names including properties, methods, arguments, variables and module names use lowercase letters with underscores. Troubleshooting Segfaults If you get segfault on OSX make sure you are using 32-bit Python. Debugging segmentation faults with Python. Related gdb dump: Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x0000000001243b68 0x00007fff8c12d878 in CFRetain () (gdb) bt #0 0x00007fff8c12d878 in CFRetain () #1 0x00000001007e07ec in ffi_call_unix64 () #2 0x00007fff5fbfbb50 in ?? () (gdb) c Continuing. Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x0000000001243b68 0x00007fff8c12d878 in CFRetain () Skype4Py on for OSX 64-bit (all new OSX versions) Currently Skype4Py must be installed and run using arch command to force compatibility with 32-bit Skype client application. To install: arch -i386 pip install Skype4Py Also when you run your application using Skype4Py prefix the run command with: arch -i386 Crashing on a startup on Ubuntu server Segfault when starting up the bot: File "build/bdist.linux-i686/egg/Skype4Py/skype.py", line 250, in __init__ File "build/bdist.linux-i686/egg/Skype4Py/api/posix.py", line 40, in SkypeAPI File "build/bdist.linux-i686/egg/Skype4Py/api/posix_x11.py", line 254, in __in it__ Skype4Py.errors.SkypeAPIError: Could not open XDisplay Segmentation fault (core dumped) This usually means that your DISPLAY environment variable is wrong. Try: export DISPLAY=:1 or: export DISPLAY=:0 depending on your configuration before running Sevabot. Running unit tests Here is an example: virtualenv-2.7 venv # Create venv source venv/bin/activate python setup.py develop # Install Skype4Py in development mode cd unittests python skype4pytest.py # Execute tests Making a release Example: virtualenv-2.7 venv # Create venv source venv/bin/activate # Bump version in setup.py python setup.py develop # Install Skype4Py in development mode pip install collective.checkdocs pthon setup.py checkdocs # Check .rst syntax easy_install zest.releaser fullrelease Trademark notification Skype™, associated trademarks and logos and the “S” logo are trademarks of Skype. Skype4Py Python project is not affiliate of Skype or Microsoft corporation. Changelog 1.0.35 (2013-05-25) Fixed Issue #16 [prajna-pranab] The Skype API generally responds to ALTER commands by echoing back the command, including any id associated with the command e.g. -> ALTER VOICEMAIL <id> action <- ALTER VOICEMAIL <id> action For some reason the API strips the chat id from the ALTER CHAT command when it responds but the code in the chat.py _Alter() method was expecting the command to be echoed back just as it had been sent. Updated Skype main window classname under Windows for Skype versions 5 and higher, to detect whether Skype is running [suurjaak] 1.0.34 (2013-01-30) Reworked release system and egg structure to follow the best practices [miohtama] Merged all fixed done in a fork [miohtama] Use standard pkg_distribution mechanism to expose the version numebr [miohtama] Skype4Py.platform Easy detection of what platform code Skype4Py is using currently. May be one of ‘posix’, ‘windows’ or ‘dar Fixed CHANGES syntax so that zest.releaser understands it [miohtama] 1.0.33 (2013-01-30)). - Author: Mikko Ohtamaa - License: BSD License - Platform: Windows,Linux,MacOS X - Provides Skype4Py - Package Index Owner: yak, miohtama - Package Index Maintainer: miohtama - DOAP record: Skype4Py-1.0.35.xml
https://pypi.python.org/pypi/Skype4Py
CC-MAIN-2017-22
refinedweb
1,062
58.38
#include <stdint.h> #include <rte_mbuf.h> #include "rte_eventdev.h" Go to the source code of this file. RTE Event Ethernet Tx Adapter The event ethernet Tx adapter provides configuration and data path APIs for the ethernet transmit stage of an event driven packet processing application. These APIs abstract the implementation of the transmit stage and allow the application to use eventdev PMD support or a common implementation. In the common implementation, the application enqueues mbufs to the adapter which runs as a rte_service function. The service function dequeues events from its event port and transmits the mbufs referenced by these events. The ethernet Tx event adapter APIs are: The application creates the adapter using rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext(). The adapter will use the common implementation when the eventdev PMD does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability. The common implementation uses an event port that is created using the port configuration parameter passed to rte_event_eth_tx_adapter_create(). The application can get the port identifier using rte_event_eth_tx_adapter_event_port_get() and must link an event queue to this port. If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT flags set, Tx adapter events should be enqueued using the rte_event_eth_tx_adapter_enqueue() function, else the application should use rte_event_enqueue_burst(). Transmit queues can be added and deleted from the adapter using rte_event_eth_tx_adapter_queue_add()/del() APIs respectively. The application can start and stop the adapter using the rte_event_eth_tx_adapter_start/stop() calls. The common adapter implementation uses an EAL service function as described before and its execution is controlled using the rte_service APIs. The rte_event_eth_tx_adapter_service_id_get() function can be used to retrieve the adapter's service function ID. The ethernet port and transmit queue index to transmit the mbuf on are specified using the mbuf port struct rte_mbuf::hash::txadapter:txq. The application should use the rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get() functions to access the transmit queue index, using these macros will help with minimizing application impact due to a change in how the transmit queue index is specified. Definition in file rte_event_eth_tx_adapter.h. This flag is used when all the packets enqueued in the tx adapter are destined for the same Ethernet port & Tx queue. Definition at line 303 of file rte_event_eth_tx_adapter.h. Function type used for adapter configuration callback. The callback is used to fill in members of the struct rte_event_eth_tx_adapter_conf, this callback is invoked when creating a RTE service function based adapter implementation. Definition at line 122 of file rte_event_eth_tx_adapter.h. Create a new ethernet Tx adapter with the specified identifier. Create a new ethernet Tx adapter with the specified identifier. Free an ethernet Tx adapter Start ethernet Tx adapter Stop ethernet Tx adapter Add a Tx queue to the adapter. A queue value of -1 is used to indicate all queues within the device. Delete a Tx queue from the adapter. A queue value of -1 is used to indicate all queues within the device, that have been added to this adapter. Set Tx queue in the mbuf. This queue is used by the adapter to transmit the mbuf. Definition at line 266 of file rte_event_eth_tx_adapter.h. Retrieve Tx queue from the mbuf. Definition at line 282 of file rte_event_eth_tx_adapter.h. Retrieve the adapter event port. The adapter creates an event port if the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the ethernet Tx capabilities of the event device. Enqueue a burst of events objects or an event object supplied in rte_event structure on an event device designated by its dev_id through the event port specified by port_id. This function is supported if the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set. The nb_events parameter is the number of event objects to enqueue which are supplied in the ev array of rte_event structure. The rte_event_eth_tx_adapter_enqueue() function returns the number of events objects it actually enqueued. A return value equal to nb_events means that all event objects have been enqueued. Definition at line 352 of file rte_event_eth_tx_adapter.h. Retrieve statistics for an adapter Reset statistics for an adapter. Retrieve the service ID of an adapter. If the adapter doesn't use a rte_service function, this function returns -ESRCH.
http://doc.dpdk.org/api/rte__event__eth__tx__adapter_8h.html
CC-MAIN-2022-21
refinedweb
667
65.52
Unit testing with JMUnit and NetBeans IDE This article needs to be updated: If you found this article useful, please fix the problems below then delete the {{ArticleNeedsUpdate}} template from the article to remove this warning. Reasons: hamishwillee (talk) (24 Jul 2013) Instructions are not clear and it is not obvious what versions of Netbeans and JMUnit and what devices have been tested on. It is likely that this is very out of date. Unit testing is common practice in professional software development. Especially in the Java™ Platform, Enterprise Edition (Java EE) space it's widely used to ensure stable high quality software. While unit testing is a methodology and doesn't depend on a special framework it's a good idea to use a mature framework. One of the frameworks available in the Java Platfrom, Micro Edition (Java ME) space is JMUnit. Here I will give you a quick overview of the steps involved to use JMUnit and NetBeans 6.1 to easily write and run Java Platfrom, Micro Edition (Java ME) unit tests and how to create a release build without any test artifacts contained. When you have created a Mobile Java Appication right click on your desired package node and choose "New / Empty JUnit Test". Now create your Test-Class. It's a good idea to have a naming convention like "*Test" for test classes. NetBeans 6.1 automatically adds a dependency on JMUnit4CLDC10. If you intend to use CLDC 1.1 you should remove that dependency and use JMUnit4CLDC11. Don't forget to change the import statement of the generated class. Now you have a new test class consisting of a constructor and a method named "test(int testNumber)". In the constructor there's a call to the super constructor. Here you have to give the total number of tests to run. In the test method you should dispatch the call to the actual test method. Example: public class NewEmptyJMUnitTest extends TestCase { public NewEmptyJMUnitTest() { //The first parameter of inherited constructor is the number of test cases super(1,"NewEmptyJMUnitTest"); } public void test(int testNumber) throws Throwable { switch(testNumber){ case 0: testOne(); break; } } private void testOne() throws Exception{ // here we run our test, use assertXXX methods to check results String tmp = "Hello World"; assertEquals(tmp, "Hello World!"); } } Now it's time to run the test. Right click on the project node, choose "Properties". Choose "Application Descriptor". Activate "MIDlets" and click "Add...". Choose the newly created test class and give it a good name indicating that it's the testrunner. When you launch the application there are two MIDlets to choose from: Your original MIDlet and the Test-Runner-MIDlet. If there was any error it's details are dumped to the console. For the above test the output should look like this: Assert Equals failed. Expected Hello World, but was Hello World! jmunit.framework.cldc11.AssertionFailedException at jmunit.framework.cldc11.Assertion.fail(Assertion.java:1066) at jmunit.framework.cldc11.Assertion.fail(Assertion.java:1054) at jmunit.framework.cldc11.Assertion.assertEquals(Assertion.java:150) at jmunit.framework.cldc11.Assertion.assertEquals(Assertion.java:594) at hello.NewEmptyJMUnitTest.testOne(+9) at hello.NewEmptyJMUnitTest.test(NewEmptyJMUnitTest.java:26) at jmunit.framework.cldc11.TestCase.test(TestCase.java:65) at jmunit.framework.cldc11.Screen.run(Screen.java:157) Now it's time to correct the error and add tests to the test class. Don't forget to add the method call to the "test(int)" method and raise the number of tests in the call to the super constructor. When everything works fine and you are about to release your work you most likely don't want to ship the testing stuff with it. To do this you can create an additional project configuration. Right click on the project node and choose "Properties". Choose "Add configuration..." from the project configuration combo box. Choose a good name for the configuration. (e.g. "Release") In the release configuration choose MIDlets. Uncheck the "Use values from "DefaultConfiguration"" and remove the test runner MIDlet. Choose "Build/Libraries & Resources" and remove the JMUnit dependency. Choose "Build/Sources Filtering" and check the "Exclude Test Souces". If there are still test resources left, uncheck them too. You might wish to change other settings (e.g. obfuscation level) for the release, too. Now you can build the release e.g. via NetBeans' batch build. You will find the release binaries in ".../dist/Release".
http://developer.nokia.com/community/wiki/index.php?title=Unit_testing_with_JMUnit_and_NetBeans_IDE&oldid=203726
CC-MAIN-2014-41
refinedweb
732
59.7
. Great ! What about the Win x64 builds ? What are the target builds for Windows in 5.0.2 ? MinGW x86+VS2010 x86+VS2010 x64+VS2012 x64 ? @Robin Lobel: 64-bit Windows builds work, there is just not yet a ready-made binary installer available. We are testing both VS2010 and VS2012 64-bit regularly, and will add more binary installers later on. @Tuukka Turunen: You mean, more binary installers for 5.0.1, or should we have to wait for 5.0.2 to be released ? @Robin Lobel: It may be that we can add 64-bit Windows binary installer for 5.0.1 still – we dropped those out, but it was most likely build machine issue causing problems. But latest in 5.0.2. And, as before with 5.0.0, building this yourself is possible. Why were x64 binaries dropped in the first place? I downloaded a couple of them from last week and both failed to produce a working executable, projects compiled fine, but upon startup immediately exited with some negative code. Anyway, looks like I will be building 5.0.1 as well. Do you consider launching multiple versions – with desktop OpenGL, without the webkit, without any openGL dependency and so on? It will save many collective hours of doing this ourselves. Binary Windows 64bit installers would be really nice. I think we all could build our own 64bit versions, but image how much energy is saved if we don’t have to. 😉 Another vote for VS2012 installers, I was really looking forward for the 5.0.1 release for this reason so I’m a bit disappointed now… +1 previous authors… Could you please announce preliminary dates on VS2012 x64 support? We then could plan our projects. Thanks. A desktop OpenGL build would be great. It seems to perform much better for qml rendering but is a hassle to build every time. Not a big deal to build a X64 version for Qt5.0.1. I send my notes about this. REM *** be sure to use Visual Studio X64 Win64 Command prompt as Administrator Install Perl, Python and add to PATH Install MySQL 64bit version and add inlude and lib paths for your installation ( might be 5.5.6 ) INCLUDE=%INCLUDE%C:\Program Files\MySQL\MySQL Server 5.5\include; or INCLUDE=C:\Program Files\MySQL\MySQL Server 5.5\include; set LIB=%LIB%C:\Program Files\MySQL\MySQL Server 5.5\lib; or LIB=C:\Program Files\MySQL\MySQL Server 5.5\lib; QT_QPA_PLATFORM_PLUGIN_PATH=F:\Qt\qt-everywhere-opensource-src-5.0.1\qtbase\plugins\platforms configure -opensource -hostprefix -debug-release -shared -mp -qt-sql-mysql -opengl desktop nmake Qt PlugIn for Qt 5.0.x and Visual Studio 2010+ available Why missing ‘configure.exe’ is a known and un-fixed issue? @srazi: It is expected that Windows users take the .zip which configure.exe is part of. There are a few other formats of source packages mainly intended to other than Windows, and thus not contain the configure.exe. @srazi the page explicitly says “The source code is available as a zip (270 MB) file for Windows users” … or you didnt find configure.exe in the .zip package ? There is not configure.exe in main directory, where is directory “bin” for example. In previous version there was this file. MinGW is the first but not the last new pre-build binary installer which we are going to bring along on later Qt 5.0.x releases. ———– Who will be the second? still not have x64-bit package? Anyway, congrats! Please, make installation of QtCreator optional. It’s mandatory for technical reasons. E.g. the Qt package registers itself into Qt Creator, and just expects Qt Creator to be there. We might overcome this in future versions, but really, Qt Creator is just 140MB, so having it installed shouldn’t really hurt you. 140 MB dont hurt me, but i install mingw and msvc versions of SDK. And now i have twice of 140 MB. Okay. Feel free to file a feature request to , so it’s not forgotten. Very simple (?) question: how can those of us who already have Qt 5.0.0 installed on their system upgrade to Qt 5.0.1? I naively thought that it could be done through the maintenance tool, but all it seems to offer is to uninstall Qt 5.0.0…!? So?… So uninstall 5.0.0, install 5.0.1 You clearly missed my point… @Alan: Sometimes it is beneficial to have parallel Qt versions, so by default we allow it. By default?… So, is there an actual way to upgrade our copy of Qt 5.0.0 to version 5.0.1? I can certainly appreciate that, but then… why do you have a maintenance tool which offers you to “update [your] components” if, when selecting that option, all it does is tell you that it will uninstall everything?… So, my point is that even though I wish there was a way to upgrade one’s version of Qt, I can appreciate that you guys have taken a different view, but then I don’t see the point of that ‘maintenance’ tool… I can’t see a difference between “upgrading from Qt 5.0.0 to Qt 5.0.1” and “uninstall Qt 5.0.0, install Qt 5.0.1”, the result is the same, is it ? Seriously?!… Sure, the end result is the same, but having to do that on 7 different machines (running either Windows 7, Ubuntu 12.04 32-/64-bit or OS X 10.8.2) is a waste of my time while a ‘proper’ upgrade would have potentially saved me quite a bit of time. This is not to mention that that you have to download the full installers while a ‘proper’ maintenance tool would have downloaded whatever is necessary for the upgrade. Don’t complain, installing a new Qt binary is nowhere nearly as tedious as compiling it from the ground up, like I have to do, you should be happy there is a ready to use binary that fits your needs. We plan to come up with online installers with all the goodies (installing/deinstalling individual packages, updating packages etc) at one point. See maintenancetool.exe as a first step to get there (again) Thanks Kai, I am certainly looking forward to it. I guess you guys spoiled us (me, at least) with the Qt SDK 1.x… How do you check binary compatibility, internal tools ? @Jeff .. test is run for most of the modules in the CI system (Testing tst_Bic) I guess you would like to check bic: Can’t wait for the native 64 bit version of Webkit for VS2010+ x86_64 Can you (or anybody else) explain what this means? I’m trying to figure whether QWebViews I create in Qt 5 are already using Webkit5 and V8, or whether this is a build-time or runtime configuration issue. I can’t seem to find any documentation (other than the Qt 5 ‘what’s new’ description to prove whether this is true. Any ideas how to tell? Any news about PySide support? PySide is a different stuff maintained by different people than Qt .. please ask them here Hello Sir, Good to see the first Qt 5 build with mingw. I have a question for you that is, when are you going to release the Qt for android as it ios my first need today? Hpoe to see the answer from you. Thankyou. @Sonu Lohani: We are actively working with both Android and iOS in the ‘dev’ branch and it currently looks that we will be able to support both to a good extent in the next minor release of Qt (not fully yet, though). If you are interested to try these out, it is possible to pull the stuff from dev already now. Development is fully open and and we are also interested in contributions for these, naturally. If you need a released version, it is Qt 5.1.0. We’ll talk more on the status of the new ports later on in the blog, so keep posted Could I know the current development states of the QML components for ios and android? I do not mean those basic items like Rectangle, Text or MouseArea but something like the desktop components of QML Or ios and android would not develop those components? @terry: Each of these (iOS, Android and Qt Quick Components) is an item we are actively working with. While iOS will take some more time complete, we expect that both Android and Qt Quick Components are in good shape in Qt 5.1. And something will be available also for iOS at that time. We will blog about all these during February, so keep posted. If you have time to follow Qt Project mailing lists and repositories, you are welcome to join us in the development as well. @terry: Sorry, I misread you question. You meant if there are Qt Quick Components specific for Android and iOS? The Qt Quick Components (for Desktop) will work also in these, though most are obviously better suited for desktop. What comes to styling and mobile specific components, let’s see. These ports are still young, and first priority is to get the basic things working well. Thank you sir for giving us some time. I really appreciate with you quality of work that you’re doing. Love to see more updates in qt. Thank you once again And all the best. Where are the release notes? This is the changelog for qtbase: did you read the blog post ? “For detailed list of changes in Qt 5.0.1, please have a look into the changes file included to each module – or check the three most important ones: qtbase, qtdeclarative and qtmultimedia.” .. there are three links there I too miss the release notes. We check these against possible code changed we made to circumvent bugs (that may now be fixed and our fixes may break!) Having to check file changes for each module is tedious and extreme. I should not have to search you website to find this link to get this information. This was a change to your release procedure that is greatly missed and a disservice to your customers. Everyone could build binary packages themselves, If Nokia SDK is dead, why not create a new Qt Project SDK, with qt5,qt484,ecc ? new release of creator? Use Sdk to upgrade… That’s our intend … Hi, On Qt5.0 Can the QML app run on directfb platform? I know on eglfs platform is OK. Now the QWebView Control is back in 5.0.1, so I use QTCreator to create a qtgui app, added a QWebView control by drag and drop (its the only control I added) Then I set the url Property to Hey fine I see the google site live in the QTCreatot designer. So I try to build it and get the following error: /home/user/dev/cpp/qt/qtgui-build-Desktop_Qt_5_0_1_GCC_64bit-Debug/ui_mainwindow.h:44: Error:undefined reference to `QWebView::QWebView(QWidget*)’ and /home/user/dev/cpp/qt/qtgui-build-Desktop_Qt_5_0_1_GCC_64bit-Debug/ui_mainwindow.h:48: Error:undefined reference to `QWebView::setUrl(QUrl const&)’ hmm and I am doomed again I downloaded qt 5.0.1 for Linux 64 bit and installed it just now to gie it a try, 3 minutes later I got my showstopper again. Understand me well I am great fan of KDE and so QT (but never used it up to now), but honestly … this is a beta version isn’t it? Anton your code is beta .. did you add “QT += webkitwidgets” to your .pro file ? @Anton: No, it is not a beta. WebKit works, but it seems that you need to do something differently in your application. Please use the Qt Project mailing lists and forums to get help for your program, in case you are an open-source user – or Digia support in case you have a commercial license. There may still be bugs you need to work around, but most use cases work just fine. And in a case of a bug, please list it to so that is can be fixed in the upcoming versions. You’ve to link against the Qt5WebKitWidgets library, by e.g. adding QT+=webkitwidgets to your .pro file. very good! I love mingw,but why it is so big?! I was used the “mingw-builds Qt5” The mingw-builds qt 5 package just contains the Qt libraries. The installer from the download page also ships qt creator, the mingw toolchain itself, the qt and qt creator help and the qt sources. First of all, nice to have a MinGW version now. Thank you. But: Why is it that you do not care about the size of a project? You did link it to the 17 Megabytes large ICU library, and many other stuff, which is definitely not necessary ! In Qt 4.8.4 it was much smaller. If I distribute an application with Qt5 the size of my (zipped) Install package increases by about 5 to 10 Megabytes. WTF guys? Please do care about the size of a project too. Qt was always on the larger side, but now it’s out of hand.. I will have to recompile it myself or stay with Qt4.. does work too. I thought, maybe you would fix this with 5.0.1 but, no, you didn’t… Using a static build with desktop opengl and without the webkit is tolerable – a qtbse+qtgui+qtwidgets executable compiles to about 8 mb without any external dependencies. Sadly, static builds require to either open your source or have a commercial license. I tried Qt 5.0.0 and had even more problems… I had even to install a DirectX Runtime to be able to run Qt, is was a 100MB that took an eternity from Microsoft, and another eternity to install. So releasing a Qt5 software is simple a nightmare… No way end users will download and install almost all the Internet, just to be able to run a Qt5 application +1 I’m happy to see a MinGW version but I’m really sad about this. I’m distributing a free program on a free web site This absurdly huge runtime dependency will make this impossible. (The space available on my website will be not enough) In general the size of the needed runtime’s libraries, to me is definitely overloaded. Hope is not a strategy to make life more difficult for no commercial users, because this way we have more chance to have it fixed. The size of qt project is too large!!! If I use WebKit and opengl, the qt library will be 30M + size (Zipped size), Even JRE is only 20M+. I cant install mingw version in windows8 Enterprise It just stuck at “Installing component Qt patcher”…. !! Should i uninstall Qt 4.8 ?! Fixed . Is there a way to upgrade (i.e. download only what is needed/has changed)? It’s a waste of time/bandwidth to download the sdk and creator again. Not everyone has fast connections. @me: At the moment no. We are working with online installers for Qt 5. Currently they are only available for Qt 4.8 commercial users. Online installer makes upgrade more convenient, but there will still be a lot to download when changing to a new version of Qt. Today (2/3/2013) I uninstalled my old Qt Creator (2.4.1 with 4.8.1) and tried installing Qt 5.01 (qt-windows-opensource-5.0.1-mingw47_32-x86-offline.exe) from the downloads page () but the installation failed. I tried downloading the Qt 5.0.1 for Windows 32-bit (MinGW 4.7, 823 MB) installer seveal times and and tried not only a default install but also a full install on windows 7. Every time it seems to install (i.e., it builds the diurectories) except it will not launch Creator! I also tried starting Qt Creator from both the start menu and by going directly to tools but nothing happens. Currently I’m at a loss as I have neither my old Qt development system nor Qt Creator for Qt 5.01. Any suggestions? Things you can try when lauching creator: – launch creator from command line with ‘-settingspath C:\Temp’ argument to block it from reading old configuration. Does it launch then? – download & start dbgview () before launching qt creator. Does it print anything suspicious? Also, ‘the installation failed’ – what exactly did fail, the whole installation or just the launching of creator? If the installation failed, could you create a bug report with the installation log (InstallationLog.txt in installation dir, or output when launching installer from command line with ‘-v’ ) ? Kai thanks for the prompt reply. I still cannot get qtcreator to work. Specifically here are the results of your suggestions and questions. By “failed” I mean qtcreator does not start. Everything appears to be installed (i.e., directories 5.01, Licenses, Tools, vcedist plus numerous files). In fact ….\Tools\QtCreator\bin\qtcreator.exe exists and is 971 KB in size. During the installation process I selected all components to install. At the end of the install process the README.txt file is successfully launched (via notepad) automatically but qtcreator doesn’t start even though it is selected in the setup wizard. (I also tried running from the start menu and directly from the Tools directory; neither worked ) As suggested I tried starting qtcreator.exe in a cmd.exe window with the the command ….\Tools\QtCreator\bin\qtcreator.exe ‘-settingspath C:\Temp’ That didn’t work. Also as suggested I downloaded dbgview and started that before running qtcreator. DebugView listed no activity (note: I ran dbgview with other programs and it appears to work fine so the problem must be in qtcreator). qtcreator just appears to be a no-op program even through it takes up 971 KB of disk space. Try co clean up your user&system PATH, old records might be messing up your installation. Also you might try a tool called Dependecny Walker to check if there is anythying that Qt Creator misses. Whatever; Thanks for the dependency walker tip. I downloaded and tried it and found that qtcreator requies several dll modules were either missing or incorrect on my windows 7 op (e.g., IEShims.dll, gpsvc.vc, sysntfy among others ). Some of these are in my system32 search path but they don’t work. Some of these exist on my windows 7 laptop only as amd 64bit versions. Now I’m stuck as apparently the interface to my windows 7 dll modules are not compatible with qtcreator. I found via a google search that this appears to not be an uncommon problem. Regarding IESHIMS.DLL: Those you listed are very often not necessary, when using Dependency Walker, look for those directly referenced from your application and its DLLs. @QT-Team: Please try to decrease the runtime dependencies on Windows. It’s way too big! As Clemens said some missing ‘ie’ libs are harmless. What you can try though is launch qt creator through the built-in ‘Profiler’ of Dependency Walker: Load qtcreator.exe, and select “Profile”, “Start Profiling”. The output will be probably long, so you should really create a task in the bugtracker Thanks for the hard work, and just in time for the spring festival in China Is it possible to provide seperate download links the dll files needed for an application created with the binary package “Qt 5.0.1 for Windows 32-bit (MinGW 4.7, 823 MB)” you provide? I mean, could you provide one discrete link to download Qt5Core.dll and so on (one link for each needed dll file)? I am not a lawyer, but I hope that It would be helpful for people who don’t want to distribute these dll files with their executable! I would (for example) consider the possibility of distributing only the exe (dynamically linked) of my application(s), and instruct the users to download the dll files needed from you. Well…for me will be better than they provide smaller minimum dependency for QtCore and QtGUI increasing the number of runtime libraries required for each module used in the project. I dislike to rely on an “external link” to distribute a working copy of a software. See other comments: for now, the minimum runtime library required seem absurdly huge! And THIS IS the problem! +1 Compare the size with a classic MFC application or with other GUI toolkits.. it’s ridiculous. . Oh, btw. it would also be great, if everything except QtCore was loaded dynamically at runtime if needed. Same way it works for Plugins. So, every application developer could just skip everythings besides the necessary stuff. What do you think? A kind of “delay load” on runtime needs basis. Yes, it could be very interesting to save resources until the program’s execution need it. But I think this behaviour should be a choice of the coder and, very important, a consistency check should be made at the startup of the application to be sure nothing will go wrong when each dependent library will be entirely loaded (version check, required symbols presence…) I’m not a guru in computer programming but this is what came to me in mind on a first steps. And… +1 for your avatar. 😉 We already have that, it’s called Java 😉 The biggest chunk of the required libs is actually ICU, and we are looking into ways of slimming it down. Already now you should be able to replace the default icudt49.dll e.g. by recompiling your own tailored version (see e.g.). So I have hopes we slim the minimum size of runtime libs down quite a bit … Hi Stavros, sorry, I guess I miss the point of your request. Are there any technical reasons why you don’t want to ship the Qt libs together with your binary? Or do you have legal concerns? Qt::TranslucentBackground widgets bug still here. Qt is unusable with custom UI… There’s a task open about that: If you have information that’s not contained in the task, please add it there. Sir, one thing i want to know is that do you have any plan of releasing qt 5 with VS 2008? If yes, then when it will be? @Sonu Lohani: VS2008 binary installer is not currently planned. It is already rather old compiler. If you need to use it, better to compile yourself. MacOS X binary install is not usable for me. After building a simple application, I run it and get “Failed to load platform plugin “cocoa”. Available platforms are: ” Then it aborts. I added a comment to about this. Tried debug and release builds, on Mountain Lion 10.8.2 Since QTBUG-28155 is marked as fixed for 5.0.0 the problem you have is probably not the same as the bug described in there (although the symptom is the same). You should rather file a separate bug report. Heya i’m for the primary time here. I came across this board and I to find It really helpful & it helped me out a lot. I hope to give one thing back and aid others like you aided me. Hello, I can see you are nearing Qt 5.1 and its QML desktop components. Will there be an analogue of QDockWidget for QML? Any program I try to run give me the following qtcreator_ctrlc_stub: Command line failed: mingw32-make.exe 03:28:06: The process “mingw32-make.exe” crashed. Error while building/deploying project Sync (kit: Desktop Qt 5.0.1 MinGW 32bit) When executing step ‘Make’ Hi Yazwas, try to run mingw32-make by hand from a console / in a clean environment. Does it still crash? It’s really disappointing to see that QTBUG-28097 is still not fixed. Looking forward to the next update! BenH Great post. I was checking constantly this web blog and I am impressed! Extremely useful information specifically the last section I care for such information much. I was looking for this particular information for a lengthy time. Thanks and best of luck. I am trying to set the background image in a “MainWindow” in QT 5.01. In my “main.cpp” I do this: #include #include "mainwindow.h" ... MainWindow w; w.setStyleSheet("background-image: url(myimage.png);"); ... But the image is not being shown there. The same code works for QT 4.8. What is the reason? Thanks.. Can’t build for Windows. I’ve downloaded the .zip and uncompredded it into d:\Qt501 . then I open VS2012’s command and I write configure. This is the result: /d/Qt501/D:/Qt501_x64/qtbase/configure not found. Did you forget to run “init-repository”? at D:\Qt501\configure line 88. Oh, and if I try to run nmake from qtbase dir i get this error when compiling: ..\..\..\include\QtXml\qdom.h(1) : fatal error C1083: Cannot open include file: ‘../../../../../d:/Qt501/qtbase/src/xml/dom/qdom.h’: Invalid argument
http://blog.qt.io/blog/2013/01/31/qt-5-0-1-released/
CC-MAIN-2015-48
refinedweb
4,205
75.1
Is there a good place we can add code to prevent all users from having access to BDM data through the REST API? It'd be nice to both be able to only display search results/objects for users who should have access to them (using custom code to filter them out) as well as only display fields that they should be able to see (in case we have internal fields that shouldn't be exposed.) Thank you! Hi , I'm trying to access the BDM from a script in Groovy. After several attempts and tests can not I find what is wrong. Attached diagram and log. DAO_error I have a java.lang.StackOverflowError error when working with DBO containing reference to the same type of BDM. Is this a bug ? I use the following Groovy script to retrieve data stored in the BDM. In the BDM I defined a "business Object" called "Rutas" that has two "attributes", "nombreRuta" and "valor". The script I use is: import com.company.model.Rutas List rtSubir = rutasDAO.findByNombreRuta("rutaSubir",0,100) return rtSubir But I fails. I think that left me something, but I can not to find out what it is. Any suggestions?. Thank you.) Hi, how to update business data model with RestAPI? I tried: Request url:{businessDataType} Request method: PUT Request payload: A business data (in JSON format) But It does not work! Thanks Manolo Hello Team, I have a New Account Process that has a total of three sub process. The first pool is the new request process, which sends task to the second level of approval process and then to the final approval process that create the user to role access. I'm newer to the BDM so I'm having some issues accessing the existing records. I'm able to pass initialize the BDM (see below code) and pass the data from step to step within the same pool and thru the contract. However, I'm seeing two things. Hi, I'm trying to get access to a row in a Businnes Object from an external process (I cannot access from context) but I cannot do this, I get the following exception: Hello. My first test process with Bonita 7 uses a business object to model its main data elements, and uses contracts for tasks pages. In all the tasks pages, the business object is available, using a variable of type "external API" like "../{{context.request_ref.link}}" where "request" is my process object, and forms fields can refer to the needed elements through the variable.
https://community.bonitasoft.com/tags/bdm?page=6
CC-MAIN-2019-51
refinedweb
428
71.85
Retrieving Accelerometer Input (Windows Phone) This topic demonstrates how to detect and use accelerometer input in an XNA Game Studio game on Windows Phone. Complete Sample The code in this topic shows you the technique. You can download a complete code sample for this topic, including full source code and any additional supporting files required by the sample. Accelerometer input on Windows Phone OS 7.1 is not directly supported by XNA Game Studio 4.0 Refresh, but it is easy to add to an existing project. Adding Microsoft.Devices.Sensors to Your Application Accelerometer input on Windows Phone OS 7.1 is handled by the Microsoft.Devices.Sensors assembly, which must be referenced by your assembly before any of its types, events, or methods can be used in your application. To add Microsoft.Devices.Sensors to your game - In Solution Explorer, right-click your game project's References node, and then select Add Reference. - Select Microsoft.Devices.Sensors from the list, and then click OK. - Add using Microsoft.Devices.Sensors;to the top of any source file that will use the accelerometer classes and methods. Adding Accelerometer Support to Your Game Once you have a reference to Microsoft.Devices.Sensors in your project and an associated using statement in your source files, you can begin adding code to support accelerometer input. To add accelerometer support Add a data member to your game to hold accelerometer data. Accelerometer data is returned in an AccelerometerReadingEventArgs class. You can declare an instance of this structure in your class, declare an instance of a similar structure (such as Vector3), or create separate data members to read AccelerometerReadingEventArgs's X, Y, and Z members. Add an event handler for the ReadingChanged event. Add a method to your class that returns void, and has two parameters: an object representing the sender, and an AccelerometerReadingEventArgs to get the accelerometer reading. Associate your event handler with the ReadingChanged event. Start the accelerometer sensor. The accelerometer must be started before it begins calling your event handler. This may raise an exception, so your code should handle the case where the accelerometer cannot be started. // Start the accelerometer try { accelSensor.Start(); accelActive = true; } catch (AccelerometerFailedException e) { // the accelerometer couldn't be started. No fun! accelActive = false; } catch (UnauthorizedAccessException e) { // This exception is thrown in the emulator-which doesn't support an accelerometer. accelActive = false; } Get accelerometer readings. After it is started, the accelerometer calls your event handler when the ReadingChanged event is raised. Update your stored AccelerometerReadingEventArgs class (previously shown in the event handler code), and then use its data in your game's Update method. Stop the accelerometer sensor. To avoid having your event handler called repeatedly when your game is not actually using the accelerometer data, you can stop the accelerometer when the game is paused, when menus are being shown, or at any other time by calling the Stop method. Like Start, this method can throw an exception, so allow your code to handle the AccelerometerFailedException.
https://msdn.microsoft.com/en-us/library/ff604984.aspx
CC-MAIN-2017-51
refinedweb
500
57.27
. Category: Programming With PHP. Important Skills Colleges Never Teach Programmers Hi!! Lynda vs Pluralsight Developer Training Let us look at Lynda vs Pluralsight Deverloper Training. Today, I want to give my brief comparison of Lynda.com and Pluralsight.net – both of which provide training and courses to software developers and newbies alike. Starting a week ago, I signed up for a free trial on Lynda to learn Photoshop and color matching for web design because I recently took heat for not being good at it(I have been told that I have one of those things where you cannot easily identify colors – oh wait, it is called color blind ): ).? Object Oriented Programming In Java – Inheritance Object oriented programming in java is very powerful and a clear understanding is important. While I was trying to dig deeper, I realized that there is more to it than I actually knew. So, I looked around the web and spent some time on StackOverflow! Now let us create a simple file here Base.java [java] public class Base{ public static void main(String[] args]){ //do some cool stuff here } } [/java]: Object-oriented Programming In PHP – Part II Yesterday: Object-Oriented Programming In PHP – Part I Welcome. Writing Your Own Functions – PHP Functions exist in most programming languages and they separate code that performs a single well-defined task. This makes it easier to read and reuse the code. A function is a self-contained module of code that prescribes a calling interface, performs some task and optionally returns a result. There are several reasons to write your own functions in PHP; for instance if you want to perform a task that other built-in functions cannot help you do. Let us quickly look at an example of a function call before we do function definition. [php] <?php #call a function in php function_name();
http://simpledeveloper.com/category/programming-with-php/
CC-MAIN-2019-35
refinedweb
309
54.63
running into I tried replacing the BoxView with a StackLayout (or any other layout for that matter), which fixes the part that the background gets rendered. The problem is that even though the layout covers the whole page, it can be clicked through. Here's a code that reproduces the error. It works as it should on iOS, where the ListView can't be clicked on through the overlay, but on Android it doesn't matter hwat value is InputTransparent set to, it just doesn't work. public class TransparencyPage : ContentPage { public TransparencyPage() { var layout = new RelativeLayout(); var list = new ListView { ItemsSource = new[] { "hello", "xamarin", "forms" } }; var modal = new StackLayout { BackgroundColor = Color.FromRgba(0, 0, 0, 0.4), InputTransparent = false }; layout.Children.Add(list, () => Bounds); layout.Children.Add(modal, () => Bounds); Content = layout; } } Setting the value of InputTransparent on the ListView also has no effect, and it also doesn't seem to matter if I set it to true or false on any of them. I have checked this issue and able to reproduce. To reproduce this issue I have followed the code provided in the bug description. Stets to reproduce: 1. Create a Xamarin.Forms application in XS NewSolution=>C#=>Mobile Apps=>Blank App(Xamarin.Forms.Portable) 2. Add a file to shared project RightClick=>Add=>NewFile=>Forms=>Forms ContentPage Xaml=>give filename and click 'New' 3. Go to Filename.cs page and paste the above code 4. Call the "Filename.cs" page in App.cs 5. Set "iOS" as startup project, Set "InputTransparent = true" in filename.cs page code 6. Run application, observe that user is able to select or click on ListView items. 9. Set "InputTransparent = false" in filename.cs page code 10. Run application, observe that user is not able to select or click on ListView items. 11. Set "Android" as startup project, Set "InputTransparent = true" in filename.cs page code 12. Run application, observe that user is able to click on ListView items. 13. Set "InputTransparent = false" in filename.cs page code and run the application 14. Observe that user is able to click on ListView items while setting "InputTransparent = false" I observed that "InputTransparent" in working fine in "iOS", while it is not working in "Android". it does't affect I set it to true or false any of them, and user is able to select or click on ListView items. Screencast: Environment Info: === Xamarin Studio ===.1 (5085) Build 5B1008 === Xamarin.Mac === Version: 1.10.0.10 (Enterprise Edition) === Xamarin.Android === Version: 4.16.0 (Enterprise) === Xamarin.iOS === Version: Just a small update, a possible workaround is to add a TapGestureRecognizer with no action attached to it, which will cause the view to consume the tap events and they won't propagate to the views below it, such as the following. layout.GestureRecognizers.Add(new TapGestureRecognizers()); Fixed in branch, may or may not be in next release depending on risk evaluation. This is going to be held for the 1.3 release
https://xamarin.github.io/bugzilla-archives/22/22796/bug.html
CC-MAIN-2019-43
refinedweb
497
60.31
device consists of a hierarchy of objects called views — every element of the screen is a View. The View class represents the basic building block for all UI components, and the base class for classes that provide interactive UI components such as buttons, checkboxes, and text entry fields. Commonly used View subclasses described over several lessons include: TextViewfor displaying text. EditTextto enable the user to enter and edit text. Buttonand other clickable elements (such as RadioButton, CheckBox, and Spinner) to provide interactive behavior. ScrollViewand RecyclerViewto display scrollable items. ImageViewfor displaying images. ConstraintLayoutand LinearLayoutfor containing other Viewelements and positioning them. The Java code that displays and drives the UI is contained in a class that extends Activity. An Activity is usually associated with a layout of UI views defined as an XML (eXtended Markup Language) file. This XML file is usually named after its Activity and defines the layout of View elements on the screen. For example, the MainActivity code in the Hello World app displays a layout defined in the activity_main.xml layout file, which includes a TextView with the text "Hello World". In more complex apps, an Activity might implement actions to respond to user taps, draw graphical content, or request data from a database or the internet. You learn more about the Activity class in another lesson. In this practical you learn how to create your first interactive app—an app that enables user interaction. You create an app using the Empty Activity template. You also learn how to use the layout editor to design a layout, and how to edit the layout in XML. You need to develop these skills so you can complete the other practicals in this course. What you should already know You should be familiar with: - How to install and open Android Studio. - How to create the HelloWorld app. - How to run the HelloWorld app. What you'll learn - How to create an app with interactive behavior. - How to use the layout editor to design a layout. - How to edit the layout in XML. - A lot of new terminology. Check out the Vocabulary words and concepts glossary for friendly definitions. What you'll do - Create an app and add two Buttonelements and a TextViewto the layout. - Manipulate each element in the ConstraintLayoutto constrain them to the margins and other elements. - Change UI element attributes. - Edit the app's layout in XML. - Extract hardcoded strings into string resources. - Implement click-handler methods to display messages on the screen when the user taps each Button. The HelloToast app consists of two Button elements and one TextView. When the user taps the first Button, it displays a short message (a Toast) on the screen. Tapping the second Button increases a "click" counter displayed in the TextView, which starts at zero. Here's what the finished app looks like: In this practical, you design and implement a project for the HelloToast app. A link to the solution code is provided at the end. 1.1 Create the Android Studio project - Start Android Studio and create a new project with the following parameters: - Select Run > Run app or click the Run icon in the toolbar to build and execute the app on the emulator or your device. 1.2 Explore the layout editor Android Studio provides the layout editor for quickly building an app's layout of user interface (UI) elements. It lets you drag elements to a visual design and blueprint view, position them in the layout, add constraints, and set attributes. Constraints determine the position of a UI element within the layout. A constraint represents a connection or alignment to another view, the parent layout, or an invisible guideline. Explore the layout editor, and refer to the figure below as you follow the numbered steps: - In the app > res > layout folder in the Project > Android pane, double-click the activity_main.xml file to open it, if it is not already open. - Click the Design tab if it is not already selected. You use the Design tab to manipulate elements and the layout, and the Text tab to edit the XML code for the layout. - The Palettes pane shows UI elements that you can use in your app's layout. - The Component tree pane shows the view hierarchy of UI elements. Viewelements are organized into a tree hierarchy of parents and children, in which a child inherits the attributes of its parent. In the figure above, the TextViewis a child of the ConstraintLayout. You will learn about these elements later in this lesson. - The design and blueprint panes of the layout editor showing the UI elements in the layout. In the figure above, the layout shows only one element: a TextView that displays "Hello World". - The Attributes tab displays the Attributes pane for setting properties for a UI element. Tip: See Building a UI with Layout Editor for details on using the layout editor, and Meet Android Studio for the full Android Studio documentation. In this task you create the UI layout for the HelloToast app in the layout editor using the ConstraintLayout features. You can create the constraints manually, as shown later, or automatically using the Autoconnect tool. 2.1 Examine the element constraints Follow these steps: - Open activity_main.xmlfrom the Project > Android pane if it is not already open. If the Design tab is not already selected, click it. If there is no blueprint, click the Select Design Surface button in the toolbar and choose Design + Blueprint. - The Autoconnect tool is also located in the toolbar. It is enabled by default. For this step, ensure that the tool is not disabled. - Click the zoom in button to zoom into the design and blueprint panes for a close-up look. - Select TextView in the Component Tree pane. The "Hello World" TextViewis highlighted in the design and blueprint panes and the constraints for the element are visible. - Refer to the animated figure below for this step. Click the circular handle on the right side of the TextViewto delete the horizontal constraint that binds the view to the right side of the layout. The TextViewjumps to the left side because it is no longer constrained to the right side. To add back the horizontal constraint, click the same handle and drag a line to the right side of the layout. In the blueprint or design panes, the following handles appear on the TextView element: - Constraint handle: To create a constraint as shown in the animated figure above, click a constraint handle, shown as a circle on the side of an element. Then drag the handle to another constraint handle, or to a parent boundary. A zigzag line represents the constraint. - Resizing handle: To resize the element, drag the square resizing handles. The handle changes to an angled corner while you are dragging it. 2.2 Add a Button to the layout When enabled, the Autoconnect tool automatically creates two or more constraints for a UI element to the parent layout. After you drag the element to the layout, it creates constraints based on the element's position. Follow these steps to add a Button: - Start with a clean slate. The TextViewelement is not needed, so while it is still selected, press the Delete key or choose Edit > Delete. You now have a completely blank layout. - Drag a Button from the Palette pane to any position in the layout. If you drop the Buttonin the top middle area of the layout, constraints may automatically appear. If not, you can drag constraints to the top, left side, and right side of the layout as shown in the animated figure below. 2.3 Add a second Button to the layout - Drag another Button from the Palette pane to the middle of the layout as shown in the animated figure below. Autoconnect may provide the horizontal constraints for you (if not, you can drag them yourself). - Drag a vertical constraint to the bottom of the layout (refer to the figure below). You can remove constraints from an element by selecting the element and hovering your pointer over it to show the Clear Constraints button. Click this button to remove all constraints on the selected element. To clear a single constraint, click the specific handle that sets the constraint. To clear all constraints in the entire layout, click the Clear All Constraints tool in the toolbar. This tool is useful if you want to redo all the constraints in your layout. The Attributes pane offers access to all of the XML attributes you can assign to a UI element. You can find the attributes (known as properties) common to all views in the View class documentation. In this task you enter new values and change values for important Button attributes, which are applicable to most View types. 3.1 Change the Button size The layout editor offers resizing handles on all four corners of a View so you can resize the View quickly. You can drag the handles on each corner of the View to resize it, but doing so hardcodes the width and height dimensions. Avoid hardcoding sizes for most View elements, because hardcoded dimensions can't adapt to different content and screen sizes. Instead, use the Attributes pane on the right side of the layout editor to select a sizing mode that doesn't use hardcoded dimensions. The Attributes pane includes a square sizing panel called the view inspector at the top. The symbols inside the square represent the height and width settings as follows: In the figure above: - Height control. This control specifies the layout_heightattribute and appears in two segments on the top and bottom sides of the square. The angles indicate that this control is set to wrap_content, which means the Viewwill expand vertically as needed to fit its contents. The "8" indicates a standard margin set to 8dp. - Width control. This control specifies the layout_widthand appears in two segments on the left and right sides of the square. The angles indicate that this control is set to wrap_content, which means the Viewwill expand horizontally as needed to fit its contents, up to a margin of 8dp. - Attributes pane close button. Click to close the pane. Follow these steps: - Select the top Buttonin the Component Tree pane. - Click the Attributes tab on the right side of the layout editor window. - Click the width control twice—the first click changes it to Fixed with straight lines, and the second click changes it to Match Constraints with spring coils, as shown in the animated figure below. As a result of changing the width control, the layout_width attribute in the Attributes pane shows the value match_constraint and the Button element stretches horizontally to fill the space between the left and right sides of the layout. - Select the second Button, and make the same changes to the layout_widthas in the previous step, as shown in the figure below. As shown in the previous steps, the layout_width and layout_height attributes in the Attributes pane change as you change the height and width controls in the inspector. These attributes can take one of three values for the layout, which is a ConstraintLayout: - The match_constraintsetting expands the Viewelement to fill its parent by width or height—up to a margin, if one is set. The parent in this case is the ConstraintLayout. You learn more about ConstraintLayoutin the next task. - The wrap_contentsetting shrinks the Viewelement's dimensions so it is just big enough to enclose its content. If there is no content, the Viewelement becomes invisible. - To specify a fixed size that adjusts for the screen size of the device, use a fixed number of density-independent pixels ( dpunits). For example, 16dpmeans 16 density-independent pixels. Tip: If you change the layout_width attribute using its popup menu, the layout_width attribute is set to zero because there is no set dimension. This setting is the same as match_constraint— the view can expand as much as possible to meet constraints and margin settings. 3.2 Change the Button attributes To identify each View uniquely within an Activity layout, each View or View subclass (such as Button) needs a unique ID. And to be of any use, the Button elements need text. View elements can also have backgrounds that can be colors or images. The Attributes pane offers access to all of the attributes you can assign to a View element. You can enter values for each attribute, such as the android:id, background, textColor, and text attributes. The following animated figure demonstrates how to perform these steps: - After selecting the first Button, edit the IDfield at the top of the Attributes pane to button_toast for the android:idattribute, which is used to identify the element in the layout. - Set the backgroundattribute to @color/colorPrimary. (As you enter @c, choices appear for easy selection.) - Set the textColorattribute to @android:color/white. - Edit the textattribute to Toast. - Perform the same attribute changes for the second Button, using button_count as the ID, Count for the textattribute, and the same colors for the background and text as the previous steps. The colorPrimary is the primary color of the theme, one of the predefined theme base colors defined in the colors.xml resource file. It is used for the app bar. Using the base colors for other UI elements creates a uniform UI. You will learn more about app themes and Material Design in another lesson. One of the benefits of ConstraintLayout is the ability to align or otherwise constrain elements relative to other elements. In this task you will add a TextView in the middle of the layout, and constrain it horizontally to the margins and vertically to the two Button elements. You will then change the attributes for the TextView in the Attributes pane. 4.1 Add a TextView and constraints - As shown in the animated figure below, drag a TextViewfrom the Palette pane to the upper part of the layout, and drag a constraint from the top of the TextViewto the handle on the bottom of the Toast Button. This constrains the TextViewto be underneath the Button. - As shown in the animated figure below, drag a constraint from the bottom of the TextViewto the handle on the top of the Count Button, and from the sides of the TextViewto the sides of the layout. This constrains the TextViewto be in the middle of the layout between the two Buttonelements. 4.2 Set the TextView attributes With the TextView selected, open the Attributes pane, if it is not already open. Set attributes for the TextView as shown in the animated figure below. The attributes you haven't encountered yet are explained after the figure: - Set the IDto show_count. - Set the textto 0. - Set the textSizeto 160sp. - Set the textStyleto B (bold) and the textAlignment to ALIGNCENTER(center the paragraph). - Change the horizontal and vertical view size controls ( layout_widthand layout_height) to match_constraint. - Set the textColorto @color/colorPrimary. - Scroll down the pane and click View all attributes, scroll down the second page of attributes to background, and then enter #FFFF00 for a shade of yellow. - Scroll down to gravity, expand gravity, and select center_ver (for center-vertical). textSize: The text size of the TextView. For this lesson, the size is set to 160sp. The spstands for scale-independent pixel, and like dp, is a unit that scales with the screen density and user's font size preference. Use dp units when you specify font sizes so that the sizes are adjusted for both the screen density and the user's preference. textStyleand textAlignment: The text style, set to B (bold) in this lesson, and the text alignment, set to ALIGNCENTER(center the paragraph). gravity: The gravityattribute specifies how a Viewis aligned within its parent Viewor ViewGroup. In this step, you center the TextViewto be centered vertically within the parent ConstraintLayout. You may notice that the background attribute is on the first page of the Attributes pane for a Button, but on the second page of the Attributes pane for a TextView. The Attributes pane changes for each type of View: The most popular attributes for the View type appear on the first page, and the rest are listed on the second page. To return to the first page of the Attributes pane, click the icon in the toolbar at the top of the pane. The Hello Toast app layout is nearly finished! However, an exclamation point appears next to each UI element in the Component Tree. Hover your pointer over these exclamation points to see warning messages, as shown below. The same warning appears for all three elements: hardcoded strings should use resources. The easiest way to fix layout problems is to edit the layout in XML. While the layout editor is a powerful tool, some changes are easier to make directly in the XML source code. 5.1 Open the XML code for the layout For this task, open the activity_main.xml file if it is not already open, and click the Text tab at the bottom of the layout editor. The XML editor appears, replacing the design and blueprint panes. As you can see in the figure below, which shows part of the XML code for the layout, the warnings are highlighted—the hardcoded strings "Toast" and "Count". (The hardcoded "0" is also highlighted but not shown in the figure.) Hover your pointer over the hardcoded string "Toast" to see the warning message. 5.2 Extract string resources Instead of hard-coding strings, it is a best practice to use string resources, which represent the strings. Having the strings in a separate file makes it easier to manage them, especially if you use these strings more than once. Also, string resources are mandatory for translating and localizing your app, because you need to create a string resource file for each language. - Click once on the word "Toast"(the first highlighted warning). - Press Alt-Enter in Windows or Option-Enter in macOS and choose Extract string resource from the popup menu. - Enter button_label_toast for the Resource name. - Click OK. A string resource is created in the values/res/string.xmlfile, and the string in your code is replaced with a reference to the resource: @string/button_label_toast - Extract the remaining strings: button_label_countfor "Count", and count_initial_valuefor "0". - In the Project > Android pane, expand values within res, and then double-click strings.xml to see your string resources in the strings.xmlfile: <resources> <string name="app_name">Hello Toast</string> <string name="button_label_toast">Toast</string> <string name="button_label_count">Count</string> <string name="count_initial_value">0</string> </resources> - You need another string to use in a subsequent task that displays a message. Add to the strings.xmlfile another string resource named toast_messagefor the phrase "Hello Toast!": <resources> <string name="app_name">Hello Toast</string> <string name="button_label_toast">Toast</string> <string name="button_label_count">Count</string> <string name="count_initial_value">0</string> <string name="toast_message">Hello Toast!</string> </resources> Tip: The string resources include the app name, which appears in the app bar at the top of the screen if you start your app project using the Empty Template. You can change the app name by editing the app_name resource. In this task, you add a Java method for each Button in MainActivity that executes when the user taps the Button. 6.1 Add the onClick attribute and handler to each Button A click handler is a method that is invoked when the user clicks or taps on a clickable UI element. In Android Studio you can specify the name of the method in the onClick field in the Design tab's Attributes pane. You can also specify the name of the handler method in the XML editor by adding the android:onClick property to the Button. You will use the latter method because you haven't yet created the handler methods, and the XML editor provides an automatic way to create those methods. - With the XML editor open (the Text tab), find the Buttonwith the android:idset to button_toast: <Button android: - Add the android:onClickattribute to the end of the button_toastelement after the last attribute and before the />end indicator: android: - Click the red bulb icon that appears next to attribute. Select Create click handler, choose MainActivity, and click OK. If the red bulb icon doesn't appear, click the method name ( "showToast"). Press Alt-Enter (Option-Enter on the Mac), select Create ‘showToast(view)' in MainActivity, and click OK. This action creates a placeholder method stub for the showToast() method in MainActivity, as shown at the end of these steps. - Repeat the last two steps with the button_count Button: Add the android:onClickattribute to the end, and add the click handler: android: The XML code for the UI elements within the ConstraintLayout now looks like this: <Button android: <Button android: <TextView android: - If MainActivity.javais not already open, expand java in the Project > Android view, expand com.example.android.hellotoast, and then double-click MainActivity. The code editor appears with the code in MainActivity: package com.example.android showToast(View view) { } public void countUp(View view) { } } 6.2 Edit the Toast Button handler You will now edit the showToast() method—the Toast Button click handler in MainActivity—so that it shows a message. A Toast provides a way to show a simple message in a small popup window. It fills only the amount of space required for the message. The current activity remains visible and interactive. A Toast can be useful for testing interactivity in your app—add a Toast message to show the result of tapping a Button or performing an action. Follow these steps to edit the Toast Button click handler: - Locate the newly created showToast()method. public void showToast(View view) { } - To create an instance of a Toast, call the makeText()factory method on the Toastclass. public void showToast(View view) { Toast toast = Toast.makeText( } This statement is incomplete until you finish all of the steps. - Supply the context of the app Activity. Because a Toastdisplays on top of the ActivityUI, the system needs information about the current Activity. When you are already within the context of the Activitywhose context you need, use thisas a shortcut. Toast toast = Toast.makeText(this, - Supply the message to display, such as a string resource (the toast_messageyou created in a previous step). The string resource toast_messageis identified by R.string. Toast toast = Toast.makeText(this, R.string.toast_message, - Supply a duration for the display. For example, Toast.LENGTH_SHORTdisplays the toast for a relatively short time. Toast toast = Toast.makeText(this, R.string.toast_message, Toast.LENGTH_SHORT); The duration of a Toast display can be either Toast.LENGTH_LONG or Toast.LENGTH_SHORT. The actual lengths are about 3.5 seconds for the long Toast and 2 seconds for the short Toast. public void showToast(View view) { Toast toast = Toast.makeText(this, R.string.toast_message, Toast.LENGTH_SHORT); toast.show(); } Run the app and verify that the Toast message appears when the Toast button is tapped. 6.3 Edit the Count Button handler You will now edit the countUp() method—the Count Button click handler in MainActivity—so that it displays the current count after Count is tapped. Each tap increases the count by one. The code for the handler must: - Keep track of the count as it changes. - Send the updated count to the TextViewto display it. Follow these steps to edit the Count Button click handler: - Locate the newly created countUp()method. public void countUp(View view) { } - To keep track of the count, you need a private member variable. Each tap of the Count button increases the value of this variable. Enter the following, which will be highlighted in red and show a red bulb icon: public void countUp(View view) { mCount++; } If the red bulb icon doesn't appear, select the mCount++ expression. The red bulb eventually appears. - Click the red bulb icon and choose Create field ‘mCount' from the popup menu. This creates a private member variable at the top of MainActivity, and Android Studio assumes that you want it to be an integer ( int): public class MainActivity extends AppCompatActivity { private int mCount; - Change the private member variable statement to initialize the variable to zero: public class MainActivity extends AppCompatActivity { private int mCount = 0; - Along with the variable above, you also need a private member variable for the reference of the show_countTextView, which you will add to the click handler. Call this variable mShowCount: public class MainActivity extends AppCompatActivity { private int mCount = 0; private TextView mShowCount; - Now that you have mShowCount, you can get a reference to the TextViewusing the ID you set in the layout file. In order to get this reference only once, specify it in the onCreate()method. As you learn in another lesson, the onCreate()method is used to inflate the layout, which means to set the content view of the screen to the XML layout. You can also use it to get references to other UI elements in the layout, such as the TextView. Locate the onCreate()method in MainActivity: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); } - Add the findViewByIdstatement to the end of the method: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); mShowCount = (TextView) findViewById(R.id.show_count); } A View, like a string, is a resource that can have an id. The findViewById call takes the ID of a view as its parameter and returns the View. Because the method returns a View, you have to cast the result to the view type you expect, in this case (TextView). - Now that you have assigned to mShowCountthe TextView, you can use the variable to set the text in the TextViewto the value of the mCountvariable. Add the following to the countUp()method: if (mShowCount != null) mShowCount.setText(Integer.toString(mCount)); The entire countUp() method now looks like this: public void countUp(View view) { ++mCount; if (mShowCount != null) mShowCount.setText(Integer.toString(mCount)); } - Run the app to verify that the count increases when you tap the Count button. Tip: For an in-depth tutorial on using ConstraintLayout, see the Codelab Using ConstraintLayout to design your views. Android Studio project: HelloToast The HelloToast app looks fine when the device or emulator is vertically oriented. However, if you switch the device or emulator to horizontal orientation, the Count Button may overlap the TextView along the bottom as shown in the figure below. Challenge: Change the layout so that it looks good in both horizontal and vertical orientations: - On your computer, make a copy of the HelloToast project folder and rename it to HelloToastChallenge. - Open HelloToastChallenge in Android Studio and refactor it. (See Appendix: Utilities for instructions on copying and refactoring a project.) - Change the layout so that the Toast Buttonand Count Buttonappear on the left side, as shown in the figure below. The TextViewappears next to them, but only wide enough to show its contents. (Hint: Use wrap_content.) - Run the app in both horizontal and vertical orientations. Challenge solution code Android Studio project: HelloToastChallenge View, ViewGroup, and layouts: - All UI elements are subclasses of the Viewclass and therefore inherit many properties of the Viewsuperclass. Viewelements can be grouped inside a ViewGroup, which acts as a container. The relationship is parent-child, in which the parent is a ViewGroup, and the child is a Viewor another ViewGroup. - The onCreate()method is used to inflate the layout, which means to set the content view of the screen to the XML layout. You can also use it to get references to other UI elements in the layout. - A View, like a string, is a resource that can have an id. The findViewByIdcall takes the ID of a view as its parameter and returns the View. Using the layout editor: - Click the Design tab to manipulate elements and the layout, and the Text tab to edit the XML code for the layout. - In the Design tab, the Palettes pane shows UI elements that you can use in your app's layout, and the Component tree pane shows the view hierarchy of UI elements. - The design and blueprint panes of the layout editor show the UI elements in the layout. - The Attributes tab displays the Attributes pane for setting properties for a UI element. - Constraint handle: Click a constraint handle, shown as a circle on each side of an element, and then drag to another constraint handle or to parent boundary to create a constraint. The constraint is represented by the zigzag line. - Resizing handle: You can drag the square resizing handles to resize the element. While dragging, the handle changes to an angled corner. - When enabled, the Autoconnect tool automatically creates two or more constraints for a UI element to the parent layout. After you drag the element to the layout, it creates constraints based on the element's position. - You can remove constraints from an element by selecting the element and hovering your pointer over it to show the Clear Constraints button. Click this button to remove all constraints on the selected element. To clear a single constraint, click the specific handle that sets the constraint. - The Attributes pane offers access to all of the XML attributes you can assign to a UI element. It also includes a square sizing panel called the view inspector at the top. The symbols inside the square represent the height and width settings. Setting layout width and height: The layout_width and layout_height attributes change as you change the height and width size controls in the view inspector. These attributes can take one of three values for a ConstraintLayout: - The match_constraintsetting expands the view to fill its parent by width or height—up to a margin, if one is set. - The wrap_contentsetting shrinks the view dimensions so the view is just big enough to enclose its content. If there is no content, the view becomes invisible. - Use a fixed number of dp( density-independent pixels) to specify a fixed size, adjusted for the screen size of the device. Extracting string resources: Instead of hard-coding strings, it is a best practice to use string resources, which represent the strings. Follow these steps: - Click once on the hardcoded string to extract, press Alt-Enter (Option-Enter on the Mac), and choose Extract string resources from the popup menu. - Set the Resource name . - Click OK. This creates a string resource in the values/res/string.xmlfile, and the string in your code is replaced with a reference to the resource: @string/button_label_toast Handling clicks: - A click handler is a method that is invoked when the user clicks or taps on a UI element. - Specify a click handler for a UI element such as a Buttonby entering its name in the onClickfield in the Design tab's Attributes pane, or in the XML editor by adding the android:onClickproperty to a UI element such as a Button. - Create the click handler in the main Activityusing the Viewparameter. Example: public void showToast(View view) {/...}. - You can find information on all Buttonproperties in the Button class documentation, and all the TextViewproperties in the TextView class documentation. Displaying Toast messages: A Toast provides a way to show a simple message in a small popup window. It fills only the amount of space required for the message. To create an instance of a Toast, follow these steps: - Call the makeText()factory method on the Toastclass. - Supply the context of the app Activityand the message to display (such as a string resource). - Supply the duration of the display, for example Toast.LENGTH_SHORTfor a short period. The duration can be either Toast.LENGTH_LONGor Toast.LENGTH_SHORT. - Show the Toastby calling show(). The related concept documentation is in 1.2: Layouts and resources for the UI. Android developer documentation: - Android Studio - Build a UI with Layout Editor - Build a Responsive UI with ConstraintLayout - Layouts View Button TextView - Android resources - Android standard R.color resources - Supporting Different Densities - Android Input Events - Context Other: The next codelab is Android fundamentals 1.2 Part B: The layout editor.
https://codelabs.developers.google.com/codelabs/android-training-layout-editor-part-a?hl=en
CC-MAIN-2020-45
refinedweb
5,328
63.7
Bugzilla – Bug 5650 Unable to start thread during AppDomain.ProcessExit event Last modified: 2012-07-19 13:10:28 EDT The following program hangs on Mono (and runs as expected on .NET): using System; using System.Threading; class Test { static void Main() { AppDomain.CurrentDomain.ProcessExit += OnProcessExit; Console.WriteLine("should exit!"); } static void Dummy() { } static void OnProcessExit(object sender, EventArgs e) { Console.WriteLine("starting shutdown hook"); Thread t = new Thread(Dummy); t.Start(); t.Join(); Console.WriteLine("done"); } } While it's arguably not a great idea to start a new thread during exit, this is the mechanism that Java uses for application cleanup handlers and I've implemented it as-is in IKVM and it works on .NET. If there is a good reason not to support this, I'd be happy to add a workaround to IKVM, but maybe it is easy to fix in the Mono runtime. This is probably a mono bug, we run the ProcessExit handler during shutdown, when thread creation is disabled. Its not very easy to fix. As I said, I'm fine with it not being fixed, but I would like to suggest two changes: - Thread.Start() should throw an exception under these conditions - Thread.Join() should return immediately I guess we need to give ProcessExit the same treatment we did with the appdomain unload event and make it run as an independent step of the shutdown process. But, yeah, it's a non-trivial change to an already convoluted part of the runtime, shutdown.
https://bugzilla.xamarin.com/show_bug.cgi?id=5650
CC-MAIN-2014-10
refinedweb
251
56.35
Transaction Processing Namespace What is the namespace within the Microsoft .Net framework which provides the functionality to implement transaction processing called as? neeraj - Mar 14th, 2015 System.data AndrewMcAlister - Aug 8th, 2010 There are at least 4 transaction namespaces.System.Transactions for scope type transactions, and for database specific transactions under System.Data.SqlClient, System.Data.OracleClient and System.Data.ODBCClient What is the difference between data reader and data adapter ? Read Best Answer Editorial / Best Answer Answered by: Madhukar Singh -. nikhiljain27 - Jul 13th, 2013 Datareader- works in fast forward read only mode to fetch data from the database. It works in connected environment. DataAdapter- Works as a bridge between the database and Ado.net and Fills the dataset. It works as a bridge to work in a disconnected environment. Deepak - Feb 15th, 2013 1. A DataReader works in a connected environment, whereas DataSet works in a disconnected environment. 2. A DataSet represents an in-memory cache of data consisting of any number of inter related DataTable objects. A DataTable object represents a tabular block of in-memory data. What is Partial class Read Best Answer Editorial / Best Answerbb.geetha - Member Since May-2008 | Jun 12th,... Data Table in ADO.NET What is Data table in .net and how it is different from table in database (sqlserver) nikhiljain27 - Jul 13th, 2013 Datatable is one table collection of Rows and columns in ADO.Net. It is a member of the System.Data namespace in the .NET Framework. DataSet is also a combination of multiple DataTable. It works in Disconnected environment. You can Add/ update/ delete records in the Datatable at runtime. Trapti - Sep 14th, 2011 The DataTable class is a member of the System.Data namespace within the .NET Framework class library. You can create and use a DataTable independently or as a member of a DataSet, and DataTable object... How can we load multiple tables in to Dataset? TheDentist - Apr 23rd, 2013 I know the DataAdapter provides this functionality but I prefer to create a SQL Statement that performs the merge and creates a single DataSet. The servers are usually high powered machines so by let... Varung5 - Jun 15th, 2010 You can pass multiple select statements in data adapter separated by “;”.SqlDataAdapter dataAdapter = new SqlDataAdapter(“select * from table1;select * from table2”, connection);DataSet ds = new DataSet();dataAdapter.Fill(ds); Difference between ado.net data set and ADO Record set Read Best Answer Editorial / Best Answerramakrishnag1982 - Member Since Nov-2006 | Aug 12th, 2007 1) A DataSet can represent an entire relational database in memory, complete with tables, relations, and views, A Recordset can not. 2) A DataSet is designed to work without any continuing connection to the original data source; Recordset maintains the contentious connection with the original data source. 3) There's no concept of cursor types in a DataSet, They are bulk loaded, while Recordset work with cursors and they are loaded on demand. 4) DataSets have no current record pointer, you can use For Each loops to move through the data. Recordsets have pointers to move through them. Kishore - Jan 24th, 2013 Please need some more related information about the each topic..It is not sufficient to answer in Interviews. sathin - Oct 29th, 2009 To get the data from a database and put it on a data control In ADO they used recordset In ADO.Net they used dataset Record set is a connection oriented architecture whereas dataset supports a c... What is main difference between ADO and ADO.Net ponraman m - May 5th, 2012 Classic asp is a connection mode. asp.net is disconnected mode and most efficient to work with web application asp is using recordset in a single table and asp.net is using dataset in multiple table DEEPIKA - Dec 7th, 2011 ADO use only OLEDB and it is COM based technology.ADO use tabel log and user log ADO.NET is Built-in technology.it provide Data binding link between Data source controls. How do you update a Dataset in ADO.Net and How do you update database through Dataset brajesh gautam - Feb 6th, 2012 In ADO.Net if you use the disconnect mode then local copy are already update but if we want to update a database type this program in visual studio sqlconncection = new sqlconection (" connection str... lata negi Code - Nov 4th, 2011 - SqlConnection conn = new SqlConnection("Server=__; DataBase=__; UID: __; Password: __"); - DataAdapter da; - DataSet ds = new DataSet(); - da = new SqlDataAdapter("select * from employee", conn); - ds = da.fill(ds); - GridView.DataSource=ds; - GridView.DataBind(); Difference between SqlCommand and SqlCommandBuilder Read Best Answer Editorial / Best Answersahu - Member Since Dec-2005 | Mar 29th, 2007 Hi All, Ans: a) SQLCommand is used to execute all kind of SQL queries like DML(Insert, update,Delete) & DDL like(Create table, drop table etc) b)SQLCommandBuilder object is used to build & execute SQL (DML) queries like select, insert, update & delete. Lijo - Nov 10th, 2011 SqlCommand: SqlCommand is used to execute Query involving Select, Update, Delete operation and to Execute SQL Stored procedures SqlCommandBuilder: SqlCommandBuilder provides the feature of refl... abhi_la2006 - Jul 13th, 2011 Sql command use to only execute dml like update delete insert but sql command builder use the dml as well as the ddl like create database alter etc. How do you bind columns manually in datagrid ? lata negi Code - Nov 4th, 2011 - asp:GridView ID="GridView1" DataSourceID="SqlDataSource1" AutoGenerateColumns="False" - - <Columns> - <asp:BoundField - <asp:BoundField - <asp:BoundField - <asp:BoundField - <asp:BoundField - </Columns> - </asp:GridView> ptharak - Nov 11th, 2010 Set autogenerate columns ="false" How do you connect to SQL Server Database without using sqlclient Read Best Answer Editorial / Best Answerknreddy221 - Nov 10th, 2006 Using System.Oledb;OledbConnection con= new OledbConnection ("Provider=SQLOLEDB.1;User Id=sa;Database=Northwind");OledbCommand cmd= new OledbCommand ("select * from table1",con);oledbdatareader dr= new oledbdatareader ();dr= cmd.ExecuteReader();datagrid1.datasource=dr;datagrid1.databind(); lata negi - Nov 4th, 2011 We have 4 Data Providers to connect to our database: SqlClient = Used only for Sql Server database. OracleClient = Used only for Oracle database. Oledb = Used for any kind of databases either new or o... Kamal Kant Verma - Dec 18th, 2006 You can connect sql using oledbname space . How to compare the two data sets in Ado.net ? AJ999 - Sep 21st, 2011 Using Diffgram Can we connect two datareader to same data source using single connection at same time? No, we cann't since once connection to database is opened must be closed before you reopen again . Read Best Answer Editorial / Best Answer Answered by: Sathiyavathi - Mar 24th, 2006 We can do it in ADO.Net 2.0 as In your connection string there is an additional attribute named as MARS and set it as true. MARS stands for Multiple Active Result Sets. Ex : string connectionString = "Data Source=MSSQL1;" + "Initial Catalog=AdventureWorks;Integrated Security=SSPI" + "MultipleActiveResultSets=True"; rajankvns - Sep 18th, 2011 Yes, 100% you can connect. help of-- Multiple Active Result Sets (MARS). more details with Example visit below Microsoft official website. --.... Ravi - Sep 16th, 2011 Yes, you can connect two datareader to the same datasource, but one main thing is close the first datareader before using second one then only it's possible. The point is, we can have any number of d... - what is "View"? what is "Stored procedure"? what is "delegate"? "> What is "Connection string"?what is "View"?what is "Stored procedure"?what is "delegate"? rohit chittora - Aug 20th, 2011 view : it is a mirror image or shadow of table with some modification so that only the essential information is display hidden some information from end user. Store procedure : this are group of exec... Praveen Chandra - May 19th, 2011 1.Connection sting contains servername, database and credentials of db, without connection string we cannot working with a backend application2.View is similar to a table, user can modify the data in ... handler which can be configured in the IIS to accept and handle...). Database Method If we are not returning any records from the database, which method is to be used? Read Best Answer Editorial / Best AnswerPendurti - Member Since Sep-2008 | Sep 11th, 2008 There is a method called Execute Non Query. This method executes the Update, Delete etc.This does not return any rows but will give the number of rows affected. karthimo - Nov 30th, 2009 Executenonquery() rakesh_mwisp - Nov 4th, 2009 ExecuteNonQuery - this method returns no data at all. It is used mainly with Inserts and Updates of tables.... ADO.NET Interview Questions Ans
http://www.geekinterview.com/Interview-Questions/Microsoft/Ado-NET
CC-MAIN-2017-13
refinedweb
1,408
58.08
Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant indentation. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.[29] Python is dynamically-typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly, procedural), object-oriented and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library.[30] Python was created in the late 1980s, and first released in 1991, by Guido van Rossum as a successor to the ABC programming language. Python 2.0, released in 2000, introduced new features, such as list comprehensions, and a garbage collection system with reference counting, and was discontinued with version 2.7 in 2020.[31] Python 3.0, released in 2008, was a major revision of the language that is not completely backward-compatible and much Python 2 code does not run unmodified on Python 3. With Python 2's end-of-life (and pip having dropped support in 2021[32]), only Python 3.6.x[33] and later are supported, with older versions still supporting e.g. Windows 7 (and old installers not restricted to 64-bit Windows). Python interpreters are supported for mainstream operating systems and available for a few more (and in the past supported many more). A global community of programmers develops and maintains CPython, a free and open-source[34] reference implementation. A non-profit organization, the Python Software Foundation, manages and directs resources for Python and CPython development. As of January 2021[update], Python ranks third in TIOBE’s index of most popular programming languages, behind C and Java,[35] having previously gained second place and their award for the most popularity gain for 2020.] Van Rossum shouldered sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from his responsibilities as Python's Benevolent Dictator For Life, a title the Python community bestowed upon him to reflect his long-term commitment as the project's chief decision-maker.[40] He now shares his leadership as a member of a five-person steering council.[41][42][43] In January 2019, active Python core developers elected Brett Cannon, Nick Coghlan, Barry Warsaw, Carol Willing and Van Rossum to a five-member "Steering Council" to lead the project.[44] Guido van Rossum has since then withdrawn his nomination for the 2020 Steering council.[45] Python 2.0 was released on 16 October 2000 with many major new features, including a cycle-detecting garbage collector and support for Unicode.[46] Python 3.0 was released on 3 December 2008. It was a major revision of the language that is not completely backward-compatible.[47] Many of its major features were backported to Python 2.6.x[48] and 2.7.x version series. Releases of Python 3 include the 2to3 utility, which automates (at least partially) the translation of Python 2 code to Python 3.[49] Python 2.7's end-of-life date was initially set at 2015 then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3.[50][51] No more security patches or other improvements will be released for it.[52][53] With Python 2's end-of-life, only Python 3.6.x[54] and later are supported. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of its features support functional programming and aspect-oriented programming (including by metaprogramming[55] and metaobjects (magic methods)).[56] Many other paradigms are supported via extensions, including design by contract[57][58] and logic programming.[59] Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management.[60] It also features dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the Lisp tradition. It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions.[61] The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML.[62] The language's core philosophy is summarized in the document The Zen of Python (PEP 20), which includes aphorisms such as:[63] - Beautiful is better than ugly. - Explicit is better than implicit. - Simple is better than complex. - Complex is better than complicated. - Readability counts..[37] Python strives for a simpler, less-cluttered syntax and grammar while giving developers a choice in their coding methodology. In contrast to Perl's "there is more than one way to do it" motto, Python embraces a "there should be one—and preferably only one—obvious way to do it" design philosophy.[63] Alex Martelli, a Fellow at the Python Software Foundation and Python book author, writes that "To describe something as 'clever' is not considered a compliment in the Python culture."[64] Python's developers strive to avoid premature optimization, and reject patches to non-critical parts of the CPython reference implementation that would offer marginal increases in speed at the cost of clarity.[65][66]—and in occasionally playful approaches to tutorials and reference materials, such as examples that refer to spam and eggs (from a famous Monty Python sketch) instead of the standard foo and bar.[67][68].[69][70] Users and admirers of Python, especially those considered knowledgeable or experienced, are often referred to as Pythonistas.[71][72].[73] Indentation Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block.[74],[75] which encloses a code block within a context manager (for example, acquiring a lock before the block of code is run and releasing the lock afterwards, or opening a file and then closing it), allowing Resource Acquisition Is Initialization (RAII)-like behavior and replaces a common try/finally idiom.[76] - The breakstatement, exits from returnstatement, used to return a value from a function. - The importstatement, which is used to import modules whose functions or variables can be used in the current program. There are three ways of using import: import <module name> [as <alias>]or from <module name> import *or from <module name> import <definition 1> [as <alias 1>], <definition 2> [as <alias 2>], .... - The print()function in Python 3. The assignment statement ('=') operates by binding a name as a reference to a separate, dynamically-allocated object. Since the name's storage location doesn't contain the indicated value, it is improper to call it a variable. Names may be subsequently rebound at any time to any object. Since a name is a generic reference holder it is unreasonable to associate a fixed data type with it. However at a given time, a name will be bound to some object, which will have a type. This association is referred to as dynamic typing. Python does not support tail call optimization or first-class continuations, and, according to Guido van Rossum, it never will.[77][78] However, better support for coroutine-like functionality is provided in 2.5, by extending Python's generators.[79] Before 2.5, generators were lazy iterators; information was passed unidirectionally out of the generator. From Python 2.5, it is possible to pass information back into a generator function, and from Python 3.3, the information can be passed through multiple stack levels.[80].[81] Python also added the **operator for exponentiation. - From Python 3.5, the new @infix operator was introduced. It is intended to be used by libraries such as NumPy for matrix multiplication.[82][83] - From Python 3.8, the syntax :=, called the 'walrus operator' was introduced. It assigns values to variables as part of a larger expression.[84] - In Python, ==compares by value, versus Java, which compares numerics by value[85] and objects by reference.[86] .[61] - Anonymous functions are implemented using lambda expressions; however, these are limited in that the body can only be one expression. - Conditional expressions in Python are written as x if c else y[87] (different in order of operands from the c ? x : yoperator common to many other languages). - Python makes a distinction between lists and tuples. Lists are written as .[88] -.[89] -}'.[90] -".[90] -).[91].[92][93] and from Python 3.5, the syntax of the language allows specifying static types but they are not checked in the default implementation, CPython. An experimental optional static type checker named mypy supports compile-time type checking.[94] @ .[98] These operators work like in traditional math; with the same precedence rules, the operators infix ( + and - can also be unary to represent positive and negative numbers respectively). The division between integers produces floating-point results. The behavior of division has changed significantly over time:[99] - division..[100] Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses round to even: round(1.5) and round(2.5) both produce 2.[101] Versions before 3 used round-away-from-zero: round(0.5) is 1.0, round(-0.5) is −1.0.[102] Python allows boolean expressions with multiple equality relations in a manner that is consistent with general use in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c.[103] C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c.[104] Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision and several rounding modes.[105] The Fraction class in the fractions module provides arbitrary precision for rational numbers.[106] Due to Python's extensive mathematics library, and the third-party library NumPy that further extends the native capabilities, it is frequently used as a scientific scripting language to aid in problems such as numerical data processing and manipulation.[107][108] Programming examples Hello world program: print('Hello, world!') Program to calculate the factorial of a positive integer: n = int(input('Type a number, and its factorial will be printed: ')) if n < 0: raise ValueError('You must enter a non negative integer') fact = 1 for i in range(2, n + 1): fact *= i print(fact) Libraries Python's large standard library, commonly cited as one of its greatest strengths,[109] provides tools suited to many tasks. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. It includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals,[110] manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications (for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333[111]), but most modules are not. They are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules need altering or rewriting for variant implementations. As of January 2020,[update] the Python Package Index (PyPI), the official repository for third-party Python software, contains over 287,000[112] packages with a wide range of functionality, including: - Automation - Data analytics - Databases - Documentation - Graphical user interfaces - Image processing - Machine learning - Mobile App - Multimedia - Computer Networking - Scientific computing - System administration - Test frameworks - Text processing - Web frameworks - Web scraping[113] Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL), permitting them to function as a command line interpreter for which the user enters statements sequentially and receives results immediately. Other shells, including IDLE and IPython, add further abilities such as improved auto-completion, session state retention and syntax highlighting. As well as standard desktop integrated development environments, there are Web browser-based IDEs; SageMath (intended for developing science and math-related Python programs); PythonAnywhere, a browser-based IDE and hosting environment; and Canopy IDE, a commercial Python IDE emphasizing scientific computing.[114] Implementations Reference implementation CPython is the reference implementation of Python. It is written in C, meeting the C89 standard with several select C99 features.[115] It compiles Python programs into an intermediate bytecode[116] which is then executed by its virtual machine.[117] CPython is distributed with a large standard library written in a mixture of C and native Python. It is available for many platforms, including Windows (starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8;[118][119] Windows XP was supported until Python 3.5) and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, with experimental installer) and unofficial support for e.g. VMS.[120] Platform portability was one of its earliest priorities,[121] during the Python 1 and 2 time-frame, even OS/2 and Solaris were supported[122] support has since been dropped for a lot of platforms. Other implementations - PyPy is a fast, compliant interpreter of Python 2.7 and 3.6.[123] Its just-in-time compiler brings a significant speed improvement over CPython but several libraries written in C cannot be used with it.[124][125] - Stackless Python is a significant fork of CPython that implements microthreads; it does not use the call stack in the same way, thus allowing massively concurrent programs. PyPy also has a stackless version.[126] - MicroPython and CircuitPython are Python 3 variants optimized for microcontrollers, including Lego Mindstorms EV3.[127] - Pyston is a variant of the Python runtime that uses just-in-time compilation to speed up the execution of Python programs.[128],[129]..[130]++.[131] - Pythran compiles a subset of Python 3 to C++.[132][133][134] - Pyrex (latest release in 2010) and Shed Skin (latest release in 2013) compile to C and C++ respectively. - Google's Grumpy (latest release in 2017) transpiles Python 2 to Go.[135][136][137] - IronPython (now abandoned by Microsoft) allows running Python 2.7 programs on the .NET Common Language Runtime. - Jython compiles Python 2,[138] Transcrypt[139][140].[141] Python's performance compared to other programming languages has also been benchmarked by The Computer Language Benchmarks Game.[142] Development Python's development is conducted largely through the Python Enhancement Proposal (PEP) process, the primary mechanism for proposing major new features, collecting community input on issues and documenting Python design decisions.[143] Python coding style is covered in PEP 8.[144] Outstanding PEPs are reviewed and commented on by the Python community and the steering council.[143] Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues are discussed in the Roundup bug tracker hosted at bugs.python.org.[145] Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017.[146].[147][148] They are largely compatible but introduce new features. The second part of the version number is incremented. Each major version is supported by bugfixes for several years after its release.[149] - Bugfix releases,[150] which introduce no new features, occur about every 3 months and are made when a sufficient number of bugs have been fixed upstream since the last release. Security vulnerabilities are also patched in these releases. The third and final part of the version number is incremented.[150].[151] The major academic conference on Python is PyCon. There are also special Python mentoring programmes, such as Pyladies. Pythons 3.10 deprecates wstr (to be removed in Python 3.12; meaning Python extensions[152] need to be modified by then),[153] and also plans to add pattern matching to the language.[154] API documentation generators Python API documentation generators include: Naming Python's name is derived from the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture;[155] for example, the metasyntactic variables often used in Python literature are spam and eggs instead of the traditional foo and bar.[155][156] The official Python documentation also contains various references to Monty Python routines.[157][158] The prefix Py- to Python respectively; and PyPy, a Python implementation originally written in Python. Uses Since 2003, Python has consistently ranked in the top ten most popular programming languages in the TIOBE Programming Community Index where, as of February 2020[update], it is the third most popular language (behind Java, and C).[159] It was selected Programming Language of the Year in 2007, 2010, and 2018.[160]++".[161] Large organizations that use Python include Wikipedia, Google,[162] Yahoo!,[163] CERN,[164] NASA,[165] Facebook,[166] Amazon, Instagram,[167] Spotify[168] and some smaller entities like ILM[169] and ITA.[170] The social news networking site Reddit is written entirely in Python.[171] Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.[172]. Libraries such as NumPy, SciPy and Matplotlib allow the effective use of Python in scientific computing,[173][174] with specialized libraries such as Biopython and Astropy providing domain-specific functionality. SageMath is a mathematical software with a notebook interface programmable in Python: its library covers many aspects of mathematics, including algebra, combinatorics, numerical mathematics, number theory, and calculus.[175] OpenCV has python bindings with a rich set of features for computer vision and image processing.[176] Python is commonly used in artificial intelligence projects and machine learning projects with the help of libraries like TensorFlow, Keras, Pytorch and Scikit-learn.[177][178][179][180] As a scripting language with modular architecture, simple syntax and rich text processing tools, Python is often used for natural language processing.[181],[182] Inkscape, Scribus and Paint Shop Pro,[183] and musical notation programs like scorewriter and capella. GNU Debugger uses Python as a pretty printer to show complex structures such as C++ containers. Esri promotes Python as the best choice for writing scripts in ArcGIS.[184] It has also been used in several video games,[185][186] and has been adopted as first of the three available programming languages in Google App Engine, the other two being Java and Go.[187] Many operating systems include Python as a standard component. It ships with most Linux distributions,[188] AmigaOS 4 (using Python 2.7), FreeBSD (as a package), NetBSD, OpenBSD (as a package).[189][190] Most of the Sugar software for the One Laptop per Child XO, now developed at Sugar Labs, is written in Python.[191] The Raspberry Pi single-board computer project has adopted Python as its main user-programming language. LibreOffice includes Python, and intends to replace Java with Python. Its Python Scripting Provider is a core feature[192] since Version 4.0 from 7 February 2013. Languages influenced by Python Python's design and philosophy have influenced many other programming languages: - Boo uses indentation, a similar syntax, and a similar object model.[193] - Cobra uses indentation and a similar syntax, and its Acknowledgements document lists Python first among languages that influenced it.[194] - CoffeeScript, a programming language that cross-compiles to JavaScript, has Python-inspired syntax. - ECMAScript/JavaScript borrowed iterators and generators from Python.[195] - GDScript, a scripting language very similar to Python, built-in to the Godot game engine.[196] - Go is designed for the "speed of working in a dynamic language like Python"[197] and shares the same syntax for slicing arrays. - Groovy was motivated by the desire to bring the Python design philosophy to Java.[198] - Julia was designed to be "as usable for general programming as Python".[25] - Nim uses indentation and similar syntax.[199] - Ruby's creator, Yukihiro Matsumoto, has said: "I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python. That's why I decided to design my own language."[200] - Swift, a programming language developed by Apple, has some Python-inspired syntax.[201] Python's development practices have also been emulated by other languages. For example, the practice of requiring a document describing the rationale for, and issues surrounding, a change to the language (in Python, a PEP) is also used in Tcl,[202] Erlang,[203] and Swift.[204] See also References - ^ a b Guttag, John V. (12 August 2016). Introduction to Computation and Programming Using Python: With Application to Understanding Data. MIT Press. ISBN 978-0-262-52962-4. - ^ "Python 3.9.2 and 3.8.8 are now available". 19 February 2021. Retrieved 19 February 2021. - ^ "Python 3.10.0a5". 2 February 2021. Retrieved 9 February 2021. - ^ "Why is Python a dynamic language and also a strongly typed language - Python Wiki". wiki.python.org. Retrieved 27 January 2021. - ^ . - ^ - ^ "Python Developer's Guide — Python Developer's Guide". devguide.python.org. Retrieved 17 December 2019. - ^ "History and License". Retrieved 5 December 2016. "All Python Releases are Open Source" - ^ TIOBE index (December 2020). "TIOBE Index for December 2020". TIOBE.com. Retrieved 20 December 2020. - ^ "index | TIOBE - The Software Quality Company".. Retrieved 2 February 2021. Python has won the TIOBE programming language of the year award! This is for the fourth time in the history, which is a record! The title is awarded to the programming language that has gained most popularity in one year. - ^. - ^ "Steering Council nomination: Guido van Rossum (2020 term)". Discussions on Python.org. 27 November. - ^ "2to3 – Automated Python 2 to 3 code translation". docs.python.org. Retrieved 2 February 2021. - ^ "PEP 373 -- Python 2.7 Release Schedule". python.org. Retrieved 9 January 2017. - ^ "PEP 466 -- Network Security Enhancements for Python 2.7.x". python.org. Retrieved 9 January 2017. - ^ "Sunsetting Python 2". Python.org. Retrieved 22 September 2019. - ^ "PEP 373 -- Python 2.7 Release Schedule". Python.org. Retrieved 22 September 2019. - ^ "Python Developer's Guide — Python Developer's Guide". devguide.python.org. Retrieved 17 December 2019. - ^. - ^ "Python Culture". ebeab. 21 January 2014. Archived from the original on 30 January 2014. - ^ machines today (November 2000) use IEEE-754 floating point arithmetic, and 2 February 2021. - ^ Ebrahim, Mokhtar (5 December 2017). "Python web scraping tutorial (with examples)". Like Geeks. Retrieved 27 January 2021. - ^. - ^ "Changelog — Python 3.9.0 documentation". docs.python.org. Retrieved 8 February 2021. - ^ "Download Python". Python.org. Retrieved 13 December 2020. - ^ "history [vmspython]".. Retrieved 4 December 2020. - ^ "An Interview with Guido van Rossum". Oreilly.com. Retrieved 24 November 2008. - ^ "Download Python for Other Platforms". Python.org. Retrieved 4 December 2020. - ^ . - ^ Yegulalp, Serdar (29 October 2020). "Pyston returns from the dead to speed Python". InfoWorld. Retrieved 26 January 2021. - ^ "Plans for optimizing Python". Google Project Hosting. 15 December 2009. Retrieved 24 September 2011. - ^ "Python on the Nokia N900". Stochastic Geometry. 29 April 2010. - ^ . - ^ - ^ "google/grumpy". 10 April 2020 – via GitHub. - ^ "Projects". opensource.google. - ^ - ^ "Brython". brython.info. Retrieved 21 January 2021. - ^ "Transcrypt - Python in the browser". transcrypt.org. Retrieved 22 December 2020. - ^ - ^". - ^ "PEP 602 -- Annual Release Cycle for Python". Python.org. Retrieved 6 November 2019. - ^ "Changing the Python release cadence [LWN.net]". lwn.net. Retrieved 6 November 2019. - ^ Norwitz, Neal (8 April 2002). "[Python-Dev] Release Schedules (was Stability & change)". Retrieved 27 June 2009. - ^ a b Aahz; Baxter, Anthony (15 March 2001). "PEP 6 – Bug Fix Releases". Python Enhancement Proposals. Python Software Foundation. Retrieved 27 June 2009. - ^ "Python Buildbot". Python Developer’s Guide. Python Software Foundation. Retrieved 24 September 2011. - ^ "1. Extending Python with C or C++ — Python 3.9.1 documentation". docs.python.org. Retrieved 14 February 2021. - ^ "PEP 623 -- Remove wstr from Unicode". Python.org. Retrieved 14 February 2021. - ^ "PEP 634 -- Structural Pattern Matching: Specification". Python.org. Retrieved 14 February 2021. - ^. S2CID 206457124. - ^. - Russell, Stuart J. & Norvig, Peter (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-604259-4. Further reading - Downey, Allen B. (May 2012). Think Python: How to Think Like a Computer Scientist (Version 1.6.6 ed.). ISBN 978-0-521-72596-5. - Hamilton, Naomi (5 August 2008). "The A-Z of Programming Languages: Python". Computerworld. Archived from the original on 29 December 2008. Retrieved 31 March 2010. - Lutz, Mark (2013). Learning Python (5th ed.). O'Reilly Media. ISBN 978-0-596-15806-4. - Pilgrim, Mark (2004). Dive into Python. Apress. ISBN 978-1-59059-356-1. - Pilgrim, Mark (2009). Dive into Python 3. Apress. ISBN 978-1-4302-2415-0. - Summerfield, Mark (2009). Programming in Python 3 (2nd ed.). Addison-Wesley Professional. ISBN 978-0-321-68056-3. External links
https://wiki2.org/en/Python_(programming_language)
CC-MAIN-2021-10
refinedweb
4,130
50.73
Profiling a Werkzeug (flask) app When trying to optimize a web app, finding bottleneck is key (where we spent a lot of time). In other type of app it can be finding memory or CPU usage. This optimization should not be done to early, no need to optimize an unused or to-be-refactored piece of code. Werkzeug have a build-in middleware that can profiles a request with python cProfile. It allow to follow exactly the execution graph of a request. To enable this middleware just initialize your app with it: from werkzeug.contrib.profiler import ProfilerMiddleware from myapp import app # This is your Flask app app.wsgi_app = ProfilerMiddleware(app.wsgi_app) app.run(debug=True) # Standard run call Now each request will be profiled so make sure to remove this middleware when your optimization process is finished because it will drastically degrade your app response time! Without any further options a cProfile output is printed: PATH: '/my/path' 87052 function calls (81433 primitive calls) in 0.203 seconds Ordered by: internal time, call count List reduced from 711 to 30 due to restriction <30> ncalls tottime percall cumtime percall filename:lineno(function) 4523 0.088 0.000 0.088 0.000 {method 'recv' of '_socket.socket' objects} 2088/26 0.011 0.000 0.022 0.001 /opt/lib/python2.7/site-packages/schema.py:104(validate) 161 0.011 0.000 0.105 0.001 /opt/lib/python2.7/socket.py:406(readline) 38 0.003 0.000 0.003 0.000 /opt/lib/python2.7/json/decoder.py:371(raw_decode) 4661 0.003 0.000 0.003 0.000 {method 'write' of 'cStringIO.StringO' objects} ... It’s a good start but this is not really easy to read and to follow the call graph. When developping a C/C++ app, callgrind (the reference call-graph profiler) provide a very nicer third party graphical app called kcachegrind (or qcachegrind for the Qt version). It can be installed on Mac via brew. Here is what a debug session looks like: The problem which is not really one is that the call-graph session must be a callgrind one not a cProfile. This can be easily solved by using a small python script called pyprof2calltree (just pip install it in your virtualenv). The Werkzeug middleware must be initialized with the profile_dir option in order to store profiling session as a file (beware, if a directory is specified, it must exist or an error is raised). Then the cProfile output can be converted using pyprof2calltree: $> pyprof2calltree \ -i GET.my.path.000230ms.1450347684.prof \ -o callgrind.GET.my.path.000230ms.1450347684.prof The generated callgrind.* file can now be opened with kcachegrind. To get callgraph ploting work the dot binary must be installed. You may experience $PATH issue like me on Mac when opening kcachegrind directly from the Finder, to solves this just open it from a terminal. Checkout kcachegrind documentation and callgrind documentation for more details on usage.
http://www.alexandrejoseph.com/blog/2015-12-17-profiling-werkzeug-flask-app.html
CC-MAIN-2017-47
refinedweb
497
65.22
31876/nameerror-nitesh1995-defined-error-while-creating-bucket I am getting this error NameError: name 'nitesh1995' is not defined I am using BOTO3 with python for creating a S3 bucket. import boto3 s3_resource = boto3.resource('s3') s3_resource.create_bucket(Bucket= nitesh1995) This is code I am using. Can someone help please? Hello @nitesh, You are getting the error because when you pass the name of the bucket it is string. Here in your program it is not treated as string. import boto3 s3_resource = boto3.resource('s3') s3_resource.create_bucket(Bucket= 'anikets3bucket') Use the name within the quotes to mark it as string. Hope this helps! You are getting the error because the ...READ MORE This error occurs when your build does ...READ MORE You can run this command below on ...READ MORE AWS provides naming standards when naming The error says you have entered a ...READ MORE Hello @jino, You are not using the command ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/31876/nameerror-nitesh1995-defined-error-while-creating-bucket
CC-MAIN-2020-16
refinedweb
162
69.99
Essentials All Articles What is LAMP? Linux Commands ONLamp Subjects Linux Apache MySQL Perl PHP Python BSD ONLamp Topics App Development Database Programming Sys Admin From data storage to data exchange and from Perl to Java, it's rare to write software these days and not bump into XML. Adding XML capabilities to a C++ application, though, usually involves coding around a C-based API. Even the cleanest C API takes some work to wrap in C++, often leaving you to choose between writing your own wrappers (which eats up time) or using third-party wrappers (which means one more dependency). Adopt the Xerces-C++ parser and you can skip these middlemen. This mature, robust toolkit is portable C++ and is available under the flexible Apache Software License (version 2.0). Xerces' benefits extend beyond its C++ roots. It gives you a choice of SAX and DOM parsers, and supports XML namespaces. It also provides validation by DTD and XML schema, as well as grammar caching for improved performance. This article uses the context of loading, modifying, and storing an XML config file to demonstrate Xerces-C++'s DOM side. My first example shows some raw code for reading XML. Then I revise it a couple of times to address deficiencies. The last example demonstrates how to modify the XML document and write it back out to disk. Along the way, I've made some helper classes that make using Xerces a little easier. My next article will cover SAX and validation. I compiled the sample code under Fedora Core 3/x86 using Xerces-C++ 2.6.0 and GCC 3.4.3. The Document Object Model (DOM) is a specification for XML parsing designed with portability in mind. That is, whether you're using Perl or Java or C++, the high-level DOM concepts are the same. This eases the learning curve when moving between DOM toolkits. (Of course, implementations are free to add special features and convenience above and beyond the requirements of the spec.) DOM represents an XML document as a tree of nodes (Xerces class DOMNode). Consider Figure 1, an XML document of some airport information. DOM sees the entire document as a document node (DOMDocument), the only child of which is the root <airports> element node (DOMElement). Were there any document type declarations or comments at this level, they would also be child nodes of the document node. DOMNode DOMDocument <airports> DOMElement Figure 1. The DOM of an XML document The <airport> element is a child node of <airports>. Its only attribute, name, is an attribute node (DOMAttr). <airport> children include the <aliases>, <location>, and <comment> elements. <comment> has a child text node (DOMText), which contains the string "Terminal 1 has a very 1970's sci-fi decor." > You can create, change, or remove nodes on this object representation of your document, then write the whole thing--comments included--back to disk as well-formed XML. DOM requires that the parser load the entire document into memory at once, which can make handling large documents very memory intensive. For small to midsize XML documents, though, DOM offers portable read/modify/write capabilities to structured data when a full relational database (such as PostgreSQL or MySQL) is overkill. I prefer to explain this with source code. I will share some code excerpts inline, but as always, the complete source code for the examples is available for download. The program step1 represents a portion of a fictitious report viewer. The config file tracks the time of its most recent modification, the user's login and password to the report system, and the last reports the user ran. Here's a sample of the config file: step1 <config lastupdate="1114600280"> <login user="some name" password="cleartext" /> <reports> <report tab="1" name="Report One" /> <report tab="2" name="Report Two" /> <report tab="3" name="Third Report" /> <report tab="4" name="Fourth Report" /> <report tab="5" name="Fifth Report" /> </reports> </config> (Xerces also supports XML namespaces, though the sample code doesn't use them.) The first thing to notice about step1 is the number of #included headers. Xerces has several header files, roughly one per class or concept. Some such projects have one master header file that includes the others. You could write one yourself, but including just the headers you need may speed up your build process. () xercesc::XMLPlatformUtils::Initialize(); // ... regular program ... xercesc::XMLPlatformUtils::Terminate(); Your code must call Initialize() before using any Xerces classes. In turn, attempts to use Xerces classes after the call to Terminate() will yield a segmentation fault. Initialize() may throw an exception, so I've wrapped it in a try/catch block. Notice the call to XMLString::transcode() in the catch section:.
http://www.linuxdevcenter.com/pub/a/onlamp/2005/09/08/xerces_dom.html?page=4&x-order=date
CC-MAIN-2017-04
refinedweb
788
63.59
#include <Pt/PoolAllocator.h> Pool based allocator. More... Inherits Allocator, and NonCopyable. The PoolAllocator uses pools to allocate memory. Each pool consists of blocks of equally sized records, which can be used for allocations up to the size of a record. The record sizes increase from pool to pool. When memory is allocated, a record is used from the pool, which handles the requested size. When memory is deallocated, the record is returned to the corresponding pool. This method of allocation is effective, because larger blocks of memory are allocated and then reused in the form of many smaller records. An advantage of this kind of allocator, compared to free list based allocators, is that it is able to release completely unused blocks. When a PoolAllocator is constructed, the maximum size for records has to be specified. The reason for this is that this type of allocator is ineffective for large allocations. Therefore, memory which is larger than this limit will be allocated using the new operator, instead of a record from a memory pool. Optionally, the alignment and the maximum block size can be set. The record sizes of the pools will be multiples of the alignment. So if the alignment is 8, the first pool will have records of size 8, the second pool records of size 16 and so forth, until the maximum size is reached. The maximum block size controls the number of records per block. A new block of records is added, when a pool is depleted and has to be extended to allow more allocations.
http://pt-framework.net/htdocs/classPt_1_1PoolAllocator.html
CC-MAIN-2018-34
refinedweb
261
64.51
Original from: Description: The article is mainly about how convert Blu-ray movie to iOS devices like iPhone 5S, iPhone 5C, iPhone 5, iPad 4, iPad 3, iPad Mini, iPod 5, etc. 4, iPad 3, iPhone 5S, iPhone 5C, iPhone 5, iPad Mini, iPod 5, etc. Support format. In this way, you can watch your Blu-ray movies at any time without carrying the physical Blu-ray disc. Most of you may consider using excellent free Handbreak which is good at ripping standard Def DVDs excluding commercial DVDs. while the latest version of the software has added SOME support for reading Blu-ray folder structures, it cannot yet decrypt the Blu-ray discs the way it can with regular DVDs for ripping your Blu-ray. So you need to find a more professional one. import it to the software. When you launch this software, you should make sure that you have inserted Blu-ray to your Blu-ray drive. Then click the "Load file" button to import the Blu-ray you want for iOS devices. for iOS devices. Click "Convert" button to start convert Blu-ray to iPhone 5S/5C, iPad, iPod, etc. iOS Devices. Once completed, "Open" button will help you to get output files quickly. Tips: It has powerful edit functions such.
http://www.anddev.org/other-coding-problems-f5/30-off-mac-blu-ray-ripper-for-iphone-5s-5c-ipad-mini-ipad-t2183296.html
CC-MAIN-2017-22
refinedweb
214
72.56
Understanding the problem: The given question wants us to write an efficient C++ program that takes a string input from the user and displays the number of words, characters, alphabets, vowels, consonants and digits in that given string. Approaching the problem: A string is an array of characters, therefore, the number of characters in a string is equal to the string length. Further, we have library functions in C++ to check whether a character is an alphabet or a digit. An alphabet can either be a vowel or a consonant, so, if an alphabet isn’t a vowel then it is a consonant. To count the number of words we can check when we encounter a ‘space’ or ‘end of line (\0)’ character. As we encounter any of these we will increment our word count by one. Algorithm: - First, we will input a string from the user and store it in a string variable str. - Then we will access str character by character with the help of a for loop - First, we will check if the current character is alphabet with the help of “isalpha()” function. If yes we will further use a nested if condition to check if it’s a vowel by comparing it with the five vowels both upper case and lower case. - Next, we will check for digit with the help of “isdigit()” function. - At last, we will check for ‘space’ and ‘\0’ to count the number of words. Code: #include <iostream> #include <string> //for using string data type and its functions #include <cstdio> //for using getline function #include <ctype.h > //for using isalpha, isdigit function using namespace std; int main(){ string str; //inputting the string and setting all the int words = 0, ch = 0, dig = 0, alph = 0, vow = 0, cons = 0; // parameters as zero cout << "Enter a string\n"; getline(cin, str); ch = str.length(); //setting number of characters equal to the //string length for (int i = 0; i <= str.length(); ++i) //accessing the string character by character { if (isalpha(str[i])) //checking for alphabets { ++alph; if (str[i] == 'A' || str[i] == 'a' || str[i] == 'E' || str[i] == 'e' || str[i] == 'I' || str[i] == 'i' || str[i] == 'O' || str[i] == 'o' || str[i] == 'U' || str[i] == 'u') //checking for vowels ++vow; else ++cons; //if not vowel then it must be a consonant } else if (isdigit(str[i])) //checking for digits ++dig; if (str[i] == ' ' || str[i] == '\0') //counting the umber of words ++words; } cout << "Number of words=" << words << "\n"; cout << "Number of alphabets=" << alph << "\n"; cout << "Number of vowels=" << vow << "\n"; cout << "Number of consonants=" << cons << "\n"; cout << "Number of digits=" << dig << "\n"; cout << "Number of characters=" << ch << "\n"; return 0; } Report Error/ Suggestion
https://www.studymite.com/cpp/examples/program-to-count-the-number-of-words-characters-alphabets-vowels-consonants-and-digit-in-a-line-of-text/?utm_source=related_posts&utm_medium=related_posts
CC-MAIN-2020-16
refinedweb
450
59.47
Download presentation Presentation is loading. Please wait. Published byMervyn Blair Modified over 4 years ago 1 Prepared by Uzma Hashmi Instructor Information Uzma Hashmi Office: B# 7/ R#1-121 E-mail address: uzma_a2001@yahoo.com Group Email Addresses Post message: cs202-lab@yahoogroups.com Subscribe: cs202-lab-subscribe@yahoogroups.comcs202-lab-subscribe@yahoogroups.com I will use this group to communicate and all the Slides will be posted on the group after each lesson CPCS-202 LAB -1 2 Prepared by Uzma Hashmi Learning Outcomes of Lab-1 1.Installation of jdk1.7 and IDE ECLIPSE 2.Understanding the IDE 3.Writing,Running and Debugging the code 4.Studying the structure of a Java Program 5.Adding Comments 6.White Spaces 7.Identifiers 3 Prepared by Uzma Hashmi In this semester we will learn JAVA The IDE (integrated development environment)we will use will be Eclipse The compiler is jdk(java development kit) Now we will see how can we install both in Part A and Part B See next slides for the installation steps 4 Prepared by Uzma Hashmi First you need to install jdk for java language compilation To do so we'll access this web link /downloads/index.html 5 Prepared by Uzma Hashmi Step 1: index.html JRE (Java Runtime Environment) on your system to run Java applications and applets. To develop Java applications and applets, you need the JDK (Java Development Kit), which includes the JRE 6 Prepared by Uzma Hashmi Quick Tip Click Start, then click on Run or Start Search. Type msinfo32.exe and then press Enter key. In “System Information”, review the value for the System Type item: For 32-bit editions of Windows, the value of the System Type item is x86-based PC.System Type For 64-bit editions of Windows, the value of the System Type item is x64-based PC. 7 Prepared by Uzma Hashmi 7 9 10 Prepared by Uzma Hashmi Extract the files and there you get the files listed below 11 Prepared by Uzma Hashmi Using Eclipse The system will prompt you for a workspace. The workspace is the place there you store your Java projects (more on workspaces later). Select a suitable (empty) directory and press Ok. 12 Prepared by Uzma Hashmi Click here 13 Prepared by Uzma Hashmi Creating Java Project Select from the menu File -> New-> Java project. 14 Prepared by Uzma Hashmi 16 Creating Packages inside the work-space 17 Prepared by Uzma Hashmi 21 Another way of Running your code RunDebug 22 Prepared by Uzma Hashmi Use of Refactor Once you have created your file,you can change the name of the file using the refactor option 23 Prepared by Uzma Hashmi 26 Error description in Problems 27 Prepared by Uzma Hashmi 28 Structure of the program In the Java programming language: – A program is made up of one or more classes – A class contains one or more methods – A method contains program statements These terms will be explored in detail throughout the course A Java application always contains a method called main, A Java application name must be the same as the class name. 29 Prepared by Uzma Hashmi Java Program Structure public class MyProgram {}{} // comments about the class class header must be the same As the java program name MyProgram.java class body Comments can be placed almost anywhere 30 Prepared by Uzma Hashmi 1-30 Java Program Structure public class MyProgram {}{} // comments about the class public static void main (String[] args) {}{} // comments about the method method header method body 31 Prepared by Uzma Hashmi Program.java //******************************************************************** // Program.java Author: Lewis/Loftus // // Demonstrates the basic structure of a Java application. //******************************************************************** public class Program { //----------------------------------------------------------------- // Prints a presidential quote. //----------------------------------------------------------------- public static void main (String[] args) { System.out.println ( "A quote by Abraham Lincoln:” ); //System is a predefined class that provides access to the system. //out is the output stream that is connected to the console{e.g. Monitor}. //println() - Displays the String which is passed to it. System.out.println ( "Whatever you are, be a good one.” ); } 32 Prepared by Uzma Hashmi 1-32 */ 33 Prepared by Uzma Hashmi 1-33 – Combination(Compound Word)opt. - Camel Notation – E.g Class Name :MyProject 34 Prepared by Uzma Hashmi Identifiers cont. Often we use special identifiers called reserved words that already have a predefined meaning in the language ( such as void ) A reserved word cannot be used in any other way 35 Prepared by Uzma Hashmi 1-35 Reserved Words The Java reserved words: 36 Prepared by Uzma Hashmi 1-36 White Space Spaces, blank lines, and tabs are called white space White space is used to separate words and symbols in a program Extra white space is ignored A valid Java program can be formatted many ways Programs should be formatted to enhance readability, using consistent indentation 37 Prepared by Uzma Hashmi //************************************************** // Poem.java // // Prints a classic poem on four lines. //************************************************** public class Poem { public static void main(String[] args) { System.out.println("Roses are red"); System.out.println("Violets are blue"); System.out.println("Sugar is sweet"); System.out.println("And so are you!"); } 38 Prepared by Uzma Hashmi Example for white spaces //******************************************************************** // Lincoln3.java Author: Lewis/Loftus // Demonstrates another valid program that is poorly formatted. //******************************************************************** public class Lincoln3 { public static void main ( String [] args ) { System.out.println ( "A quote by Abraham Lincoln:" ) ; System.out.println ( "Whatever you are, be a good one." ) ; } 39 Prepared by Uzma Hashmi Lab Assignment Similar presentations © 2020 SlidePlayer.com Inc.
http://slideplayer.com/slide/6391060/
CC-MAIN-2020-29
refinedweb
917
54.12
Let’s do a simple exercise. You need to identify the subject and the sentiment in following sentences: - Google is the best resource for any kind of information. - I came across a fabulous knowledge portal – Analytics Vidhya - Messi played well but Argentina still lost the match - Opera is not the best browser - Yes, like UAE will win the Cricket World Cup. Was this exercise simple? Even if this looks like a simple exercise, now imagine creating an algorithm to do this? How does that sound? The first example is probably the easiest, where you know “Google” is the subject. Also we see a positive sentiment about the subject. Automating the two components namely subject mining and sentiment mining are both difficult tasks, given the complex structure of English language. Basic sentiment analysis is easy to implement because positive / negative word dictionary is abundantly available on internet. However, subject mining dictionaries are very niche and hence user needs to create his own dictionary to find the subject. In this article, we will talk about subject extraction and ways to automate it using Google API. Also See: Basics of creating a niche dictionary Why is subject extraction not a common analysis? The most common projects using text mining are those with sentiment analysis. We rarely hear about subject mining analysis. Why is it so? The answer is simple. Social media, the major source of unstructured data, does a good job with subject mapping. Users themselves make the subject of most comment very obvious by hash tagging. These hash tags help us search for a subject related comments on social media. And most of our analysis is based on a single subject which makes subject mining redundant. It would be awesome if we hash tag all our statements . In that case text mining would have become a cake walk because indirectly I am creating a structured text on social media. Only, if the world was ideal! Hash tags could also help to find sarcasm to some extent. Consider the following tweet : Indian batting line was just fine without Sachin. #Sarcasm #CWC2015 #Sachin #Indian-Cricket-Team Think of this sentence without hash-tags. It would be incomplete and would give a different meaning. Mining for hash tag(#sarcasm) will indicate that the sentence is most probably a negative sentence. Also, multiple subjects can be extracted from the hash tags and added to this sentence. Hopefully, you can now realize the importance of these hash tags in data management and data mining of social networks. They enable the social media companies to understand our emotions, preferences, behavior etc. Why do we even need subject extraction? However, social media do a good job with subject tagging, we still have a number of other sources of unstructured informations. For instance, consider the following example : You run a grocery stores chain. Recently you have launched a co-branded card which can help you understand the buying patterns of your customers. Additionally this card can be used at other retail chains. Given, that you now will have transaction information of your customers at other retail chains, you will be in a better position to increase the wallet share of the customer at your store. For instance, if a customer buys all vegetables at your store but fruits at other, you might consider giving the customer a combo of fruits & vegetables. In this scenario, you need to mine the name of retail store and some description around the type of purchase from the transaction description. No hash-tags or other clues, hence you need to do the hard work! What are the challenges in subject extraction? There are multiple other scenarios where you will need to do subject mining. Why do we call it a “hard work”? Here are the major challenges you might face while subject mining : - Rarely do we find a ready-made dictionary to mine subjects. - Creating a subject based dictionary is extremely manual task. You need to pull a representative sample then pull those keywords and find a mapping subject. - Standardization of subjects is another challenge. For instance, take following transaction description : - “Pizza Hut paid $50” - “Pizzahut order for $30” Now even if we build a system which can populate the first/first 2 words as the subject, we can’t find a common subject for the above two. Hence, we need to build a dictionary which can identify a common subject i.e. “Pizza Hut” for both these sentences. Possible Framework to build a Subject Extraction Dictionary There are two critical steps in building a subject mining dictionary: - Find the keywords occurring frequently in the text. This has been covered in detail in this article. - Create a mapping dictionary from these keywords to a standardized subject list. For second part, here are the sub-steps you need to follow : - Find the most frequent words in your text / tweets / comments. (assume a minimum threshold for a word to appear in the list). - Find the most associated word with these frequently occurring word. (You again need to assume a minimum association threshold) - Combine the frequently occurring words with associated words to find searchable pairs. Now all we need to do is to match subjects for each of these pairs. We search pairs and not single words because we need enough context to search for the phrase. For example “Express” might mean “American Express” or “Coffee Express”, two words can give enough context whereas more than two words will make the dictionary too big. Here are some examples of this process : “Wall Mart has the best offers” “Tesco stores are not good with discounts” “New Wall Mart stores are supposed to open this year” “Tesco Stores have the coolest loyalty programs and discounts” Most Frequent words: After removing stop-words : 1. Wall 2. Mart 3.Tesco 4. Stores Most Associated words: 1. Wall & Mart , 2. Mart & Wall , 3. Tesco & Stores , 4. Stores & Tesco Now we’ll use these words to search for the right subject. How to automate the process of Subject Extraction Dictionary Creation? Second step of subject mining is creating keyword to subject pairs. This step is generally done manually, but let’s take a shot at automating this process. Here is what we intend to do : - Pick up the keyword pairs found significant in the context (coming from last step). - Google Search on this pair - Pick the first 4 links which Google would give. - If two of the first 4 links are same, we return back to the URL. In case the search is not unanimous, we return “No Match Found”. Let’s first create a function which can retrieve the first four links from Google on a search and then find if we have a common link. Here is code to do the same : Now, let’s create a list of keywords which our code can search. (Notice that each of these keywords are quite different but Google will help us standardize them) Its now time to test our function : And Bingo! You see that our code was given different inputs but our code has done fairly well to spot the right set of subjects. Also notice that this dictionary is not limited by any scope of the subject. Two of its searches are Fast Food chains. Third one is an analytics website. Hence, we are creating a more generalized dictionary in this case. Now all we need to do is build rules using these keywords and map them to the matched links. Here is the entire code : import urllib import json import numpy as np from urlparse import urlparse from bs4 import BeautifulSoup def searchengine(examplesearch): encoded = urllib.quote(examplesearch) rawData = urllib.urlopen(''+encoded).read() jsonData = json.loads(rawData) searchResults = jsonData['responseData']['results'] links = np.empty([4, 1], dtype="S25") i = 0 for er in searchResults: link = er['url'] link1 = urlparse(link).netloc links[i,0]=link1 i = i + 1 target = "No Match found" if links[0,0] == links[1,0] or links[0,0] == links[2,0] or links[0,0] == links[3,0]: target = links[0,0] if links[1,0] == links[2,0] or links[1,0] == links[3,0]: target = links[1,0] if links[2,0] == links[3,0] : target = links[2,0] return [target] import numpy as np import pandas as pd import pylab as pl import os os.chdir(r"C:\Users\Tavish\Desktop") Transaction_details = pd.read_csv("Descriptions.csv") Transaction_details["match"] = "blank" Transaction_details for i in range(0,11): descr = Transaction_details['Descriptions'][i] Transaction_details["match"][i] = searchengine(descr) Transaction_details End Notes The approach mentioned in this article can be used to create a generalized dictionary which is not restricted to any subject. Frequently, we use the super powers of Google to auto correct the input keywords to get the most appropriate results. If this result is unanimous, it tells us Google has found a decent match from the entire web world. This approach minimizes the human effort of creating such tedious subject extraction dictionaries. Thinkpot: Can you think of more cases where Google API’s are used? Share with us useful links of related video or article to leverage Google API, your approach is excellent and innovative. I used think of doing my compulsory project work on the impact of search suggestions google shows to us while we type our search string. I have a feeling that such suggestions distract the user from his original intended search/purpose. but, I’m unable to copy all those suggestions without really selecting each and everyone. Plz let me know that if there a way to get those suggestions through Google API? ( Though I have already completed my MBA, I’m still interested to know.) Hi Sumalatha, It is possible to extract the entire page using Google API but the process is slightly more complex. It involves extracting text using BeautifulSoup. You can find multiple tutorials for the same. I will try to publish such article in future if we get enough requirement from people reading this article. Thanks, Tavish Hi Tavish, Thank you for the reply. In fact, what I have asked is not the entire results after we hit enter. All I want is the search suggestions google shows to us to relieve us from typing the complete query(while we are still typing). I used to get deviated by those suggestions and lost so much of time on selecting them and surfing in their results. For example, I used to want to type “how to lose stage fear” , but after finishing typing of “how to lose”, google used to show me suggestion strings that are related to weight reduction and I always had selected them, putting aside my real need(knowing about stage fear, in this situation). I feel that many common people who are not very much focused and with less or no professionalism , would tend to deviate like me. Starting parts of some search strings which may lead google to show us some suggestions that are related to topics very-very interesting/ attracting to human nature. If majority of users select them, their popularity goes up further and it leads to increase in the probability of their appearance in the search suggestions, like a spiral. And, If the clicks of so many mediocre people made a website to get a good position in a search result, intelligent people also gets a falls good impression on it. I used to want to find out these assumptions are true or false. if they are true, giving suggestions while typing in the search box may lead the users into a wrong direction and could affect the behaviour of internet users. Some areas of knowledge may prone to get hit with higher magnitude. Any way, after posting my earlier reply, I did some more work on it and find out the solution. the following url gives us the suggestion data for our given key words. eg: to lose However, my maturity and knowledge at the time of studying MBA was not enough to know that mere access to those suggestions data is not at all enough and A clear research plan with well defined objectives, methodology etc should be prepared first. Even now also I’m not sure weather It is ok to think like this or is it purely imaginative. What do you say?
https://www.analyticsvidhya.com/blog/2015/03/text-mining-subject-extraction-google-api/
CC-MAIN-2021-04
refinedweb
2,057
63.59
First let me explain why I called the article "3rd Way". I've already seen such articles on CodeGuru, explaining how to load and parse HTML file from memory. You may ask, so why I'm writing another guide? Well, below I'll show advantages and disadvantages that I found in those ways. First one, which is also shown in MSDN , is to load HTML code using IStream interface. You can read the article about it here. If all you want is to put a new code into your document, you should definately use this one. But if you'll try to get tags from your document after you load HTML, you will get nothing. Just because they are still in parsing and you have to create an OnDocumentComplete handler and only than start to look inside your document. IStream OnDocumentComplete When I realized this I went to look for another way that will give me document immediately after submitting a code. And yes, I found it! You can look at the great article by Asher Kobin at CodeGuru. It uses a new interface called IMarkupServices, introduced with MS Internet Explorer 5.0. I picked this code and made my own from it and started using it.... but suddenly I saw that when I'm saving my document to disk, the BODY tag has no attributes! I worked on this problem a whole day, trying to get it working, but... nothing. When you load your HTML code from memory into document, all attributes of BODY tag are gone. Still have no idea why it is happening and will be glad if someone will tell me. IMarkupServices Thus I came to MSDN again and found another, third way to load and parse HTML. I was so happy, so I decided to write my first article to CodeProject about it, which you are reading now For those, advanced programmers, that don't want to read a whole article, I will give a hint: loading HTML code is made by write() method of IHTMLDocument2 interface. write() IHTMLDocument2 Now I'll explain how to do this from beginning. I'll assume here, that you have a standard MFC application (such as Dialog , SDI or MDI applications). First of all you have to initialize COM, since we gonna use MSHTML COM interfaces. This can be done in InitInstance() function of your application. Remember also to uninitialize COM in your ExitInstance(): InitInstance() ExitInstance() BOOL CYourApp::InitInstance() { CoInitialize(NULL); ... } int CYourApp::ExitInstance() { ... CoUninitialize(); return CWinApp::ExitInstance(); } Now in the file you are going to use MSHTML interfaces, include mshtml.h, comdef.h (for smart pointers) and import mshtml.tlb: mshtml.h comdef.h mshtml.tlb #include <comdef.h> #include <mshtml.h> #pragma warning(disable : 4146) //see Q231931 for explaintation #import <mshtml.tlb> no_auto_exclude Now let's get a pointer to IHTMLDocument interface. How you will get it? Depends on what you already have If you are hosting a WebBrowser control or using CHtmlView in your application, u can call GetDocument() function in store the return value in your pointer, but I will explain how to get a 'free' document, which is not attached to any control or view. This can be done by simple call to CoCreateInstance() function: IHTMLDocument WebBrowser CHtmlView GetDocument() CoCreateInstance() MSHTML::IHTMLDocument2Ptr pDoc; HRESULT hr = CoCreateInstance(CLSID_HTMLDocument, NULL, CLSCTX_INPROC_SERVER, IID_IHTMLDocument2, (void**)&pDoc); Validate that you have a valid pointer (not NULL) and move on. NULL I'll assume that you have all HTML code you want to load in some variable called lpszHTMLCode. This can be CString or any other buffer, loaded for example from file on disk. We need to prepare it before passing to MSHTML. The problem is that MSHTML function we are going to use takes only SAFEARRAY as parameter. So let's convert our string to SAFEARRAY: lpszHTMLCode CString SAFEARRAY SAFEARRAY* psa = SafeArrayCreateVector(VT_VARIANT, 0, 1); VARIANT *param; bstr_t bsData = (LPCTSTR)lpszHTMLCode; hr = SafeArrayAccessData(psa, (LPVOID*)¶m); param->vt = VT_BSTR; param->bstrVal = (BSTR)bsData; Now we are ready to pass our SAFEARRAY to write() function. These 2 lines of code will do all dirty parsing work for you hr = pDoc->write(psa); //write your buffer hr = pDoc->close(); //and closes the document, "applying" your code //Don't forget to free the SAFEARRAY! SafeArrayDestroy(psa); Of course, remember to check every your step, so your program never crush, I skipped it to keep the code simple. Now, after all this work you have a pointer to IHTMLDocument2 interface, which gives you a lot of features, like getting particular tag, searching, inserting, replacing, deleting tag, just like you do it in JavaScript. And remember, if you are using smart pointers (like I do here) you don't need to call Release() function, the object will be freed automatically. Well, since we have no site "attached" to our document interface, all links (href, src) that are relative to document, will start with "about:blank" if you'll try to use IHTMLAnchorElement::href property. The way to get the exact link, as it is in HTML source, is to use IHTMLElement interface with nice function called getAttribute. Just remember that the second parameter of this function should be 2, it will tell to parser to return you text as is. IHTMLAnchorElement::href IHTMLElement getAttribute Of course same way you should work with IMG, LINK and other tags. The example project updated with this fix also. You can download it and see how I did it. Ahser Kobin's article about parsing with IMarkupServices (CodeGuru) Load HTML from Stream (MSDN) MSHTML Reference (MSDN) IHTMLDocument2 Reference (MSDN) This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) bstr_t bsData = (LPCTSTR)lpszHTMLCode; SysFreeString SafeArrayDestroy bsData.Detach(); General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/1788/Loading-and-parsing-HTML-using-MSHTML-3rd-way?fid=3219&df=90&mpp=10&sort=Position&spc=None&tid=1281480
CC-MAIN-2016-26
refinedweb
988
61.36
When we look at a new programming language, we start with the "Hello, World!" program. In Ring we can write this simple program with different styles that reflect the general scope of the language! The language documentation comes with three styles in the begining, In this article I will try to present all of the styles that we can use after learning the different aspects of the language! The Ring programming language is a dynamic programming language for Windows, Linux and macOS. Very similar to Python and Ruby, but brings a lot of features from BASIC too!, The new ideas in the language are related to domain-specific languages and using declarative and natural programming. It's free open source language. In Ring 1.0 (1) see "Hello, World!" (2) Using nl we can print new line! see "Hello, World" + nl (3) The language is not case-sensitive! SEE "Hello, World!" (4) Using nl we can print new line! SEE "Hello, World" + nl (5) See "Hello, World!" (6) Using nl we can print new line! See "Hello, World" + nl In Ring 1.1 (7) put "Hello, World!" (8) put "Hello, World!" + nl (9) PUT "Hello, World!" (10) PUT "Hello, World!" + nl (11) Put "Hello, World!" (12) Put "Hello, World!" + nl (13) Using the Standard Library "stdlib.ring" load "stdlib.ring" print("Hello, World!") (14) Using the Standard Library "stdlib.ring" and \n for new lines! load "stdlib.ring" print("Hello, World!\n") (15) LOAD "stdlib.ring" PRINT("Hello, World!") (16) LOAD "stdlib.ring" PRINT("Hello, World!\n") (17) Load "stdlib.ring" Print("Hello, World!") (18) Load "stdlib.ring" Print("Hello, World!\n") (19) In Ring 1.6 we can use ? as see <expr> + nl ? "Hello, World!" (20) We can write multiline strings ? " Hello, World! " We can use functions|procedures, the main function will be executed in the start of the program. (1) First Style func main see "Hello, World!" (2) Second Style def main put "Hello, World!" end (3) Third Style load "stdlib.ring" func main() { print("Hello, World!") } We can define a class, then create an object for this class! new MyApp { main() } class MyApp func main see "Hello, World!" new MyApp { main() } class MyApp def main put "Hello, World!" end end load "stdlib.ring" new MyApp { main() } class MyApp { func main() { print("Hello, World!") } } The web library can be used from weblib.ring We have the package System.web This packages contains many classes that we can use for web development. #!ring -cgi load "weblib.ring" import System.Web new page { text("Hello, World!") } Games are based on Allegro and LibSDL To use these libraries through the game engine, We have gameengine.ring load "gameengine.ring" func main oGame = new game { title = "Hello, World!" } Desktop and mobile development are based on the Qt framework To use this framework we will load the guilib.ring load "guilib.ring" new qApp { new qWidget() { setwindowtitle("Hello World") move(100,100) resize(400,400) show() } exec() } We can define new statments in the language, then use classes to implement our definition The next example define Hello World as a new command! new natural { Hello World } class natural hello world func getHello ? "Hello" func getWorld ? "World!" Pros: (1) Different styles that match programmers with different backgrounds (2) The language could be used in different domains Cons: (1) All of these styles could be mixed in the same project! (2) The language still new, A lot of libraries need more work! Ring is a new programming language (just released in 2016) The current version is Ring 1.6 (free open source - MIT License) This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/Tips/1222859/Different-styles-for-writing-Hello-World-program-i
CC-MAIN-2020-24
refinedweb
618
79.77
Rich Internet Applications (RIAs) provide a way to build dynamic user interfaces that simplify the process of accessing and displaying data to end users. Microsoft's Silverlight 2 product adds many new features that let .NET developers more easily build RIAs without facing the large learning curve normally associated with new technologies. The learning curve is still there, of course, but by leveraging your existing VB.NET or C# skills you can minimize the slope and begin building applications more quickly. In this article I introduce new features available in Silverlight 2 and discuss why they're important. Many advancements have been made since the release of Silverlight 1 that are key especially if your applications target a variety of browsers and end users. In future articles, I will drill-down into new features and provide additional details on how they can be used to build RIAs using Silverlight 2. Do I Still Have to Use JavaScript? One of the most important advancements available in Silverlight 2 is the inclusion of a mini Common Language Runtime (CLR) that can run on multiple operating systems including Windows and Macintosh (with Linux support coming through the Moonlight project. Version 2 lets you write VB.NET or C# code that will run in Internet Explorer, FireFox, and Safari which means you don't have to use JavaScript any longer as with Silverlight 1. As a result, you can leverage your existing programming skills to build Silverlight 2 applications without having to learn a new language or object model. Several of the namespaces and classes available in the full version of the .NET Framework are also available to use in Silverlight 2. Listing One presents an example of handling a Button control's Click event using C#. Looking through the code, you'll see that it looks a lot like standard ASP.NET event handler code aside from the RoutedEventArgs parameter that is passed. private void btnSearch_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Alert("Button Clicked!"); } Where are the Controls? Silverlight 1 provides a solid framework for building RIAs using XML Application Markup Language (XAML) and JavaScript. Using XAML elements available in version 1 along with some JavaScript code you can animate objects in creative ways, skew and resize objects and interact with data dynamically using AJAX technologies and Web Services. However, many developers coming from an ASP.NET or Windows Forms background wondered (aloud at times) where all of the controls were for capturing user input and displaying collections of items. Aside from the TextBlock control, the MediaElement control and shape controls such as Rectangle and Ellipse, no user input or data display controls were available in version 1 and laying out data on a page wasn't as straightforward as it should be. I've heard more than more developer ask, "Where are the controls?" The good news is that Silverlight 2 provides more than 25 controls that can be used to layout other controls as well as capture and display data. All of the controls can be defined in XAML or created dynamically through VB.NET or C# code. Each control exposes properties, methods and events typically found in ASP.NET and Windows Forms controls, and can even be customized through styles (see for an example). Figure 1 shows what the Silverlight 2 project Toolbox looks like in Visual Studio 2008. Silverlight 2 controls can generally be classified into several categories including layout controls, user input controls, media controls, shape controls, and a few miscellaneous controls. Let's take quick look at new layout, user input, and media controls.
http://www.drdobbs.com/windows/whats-new-in-silverlight-2/207200289
CC-MAIN-2018-39
refinedweb
598
54.22
Martin v. L=F6wis wrote: > Martijn Faassen <faassen@vet.uu.nl> writes: >=20 > > Of course, but it would also simplify matters in development and > > deployment.=20 >=20 > It wouldn't simplify maintenance, though: Files that are now shared > identically between Python and PyXML now must be "nearly" shared: > Every reference to the package should then read "_xmlplus" for the > copy of the file in PyXML, and should read "xml" for the copy of the > file in Python. >=20 > It would also be an ongoing source of user confusion: Should I use > xml.dom.minidom, or _xmlplus.dom.minidom? etc. Yes, those are two good points. I'm not sure about user confusion=20 issues; I still suspect the current scheme is far more confusing for users. People definitely construct the wrong mental model about the way things are set up now, and this leads to confusion and wasted time. The nearly shared maintenance issue is a more thorny one, though. If PyXML actually moved maintenance of modules that end up in the Python core out of PyXML then the problem would be decreased here. This does require updating those files only when a new major version of Python is released (unless you sneak them in like the email package upgrade). Would this be a large problem? As far as I'm aware eventually this is the plan for that code anyway -- what is the thinking on that? > > And I hope there can be some discussion of migration=20 > > strategies. You could for instance abandon the scheme in the next > > PyXML release, keep _xmlplus of the previous one installed if it's > > there, and point out to people that in order to use PyXML from now on > > they have to explicitly refer to the pyxml namespace. You could even > > provide a migration hack where people can explicitly enable pyxml to > > be picked up in the traditional way. >=20 > Please be aware that there is *a lot* of code out there that is still > in use but unmaintained. We still get questions from people using code > written for PyXML 0.5, with API that went away in PyXML 0.6. Somebody > has to port this code, but nobody can, as the original author lost > interest. Yes, but changing the way packages work wouldn't much alter this, right? Though you could say that breaking all the imports would cause a lot of code to become unusuable with a PyXML upgrade? I don't think that's strictly the case as one might keep an older version of PyXML installed in parallel. In fact that might become easier if you have a renaming (at least keeping before the renaming things installed in=20 parallel to the after the renaming stuff). Regards, Martijn
https://mail.python.org/pipermail/xml-sig/2003-March/009137.html
CC-MAIN-2017-09
refinedweb
459
69.82
Scope resolution operator Discussion in 'C++' started by Shan, Jun 2, 2008. Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum. - Similar Threads Scope Resolution Operatorexits funnel, Dec 12, 2003, in forum: C++ - Replies: - 5 - Views: - 593 - exits funnel - Dec 13, 2003 scope resolution operatorrichard pickworth, Jun 5, 2005, in forum: C++ - Replies: - 3 - Views: - 653 - richard pickworth - Aug 8, 2005 scope resolution operator???????sushant, Jan 7, 2005, in forum: C Programming - Replies: - 16 - Views: - 1,003 - Lawrence Kirby - Jan 10, 2005 Scope resolution operator questionJack, May 12, 2006, in forum: C++ - Replies: - 6 - Views: - 619 - Jack - May 12, 2006 namespace:: scope resolution operator does not list all classesdwaach, Jul 6, 2006, in forum: C++ - Replies: - 1 - Views: - 452 - Victor Bazarov - Jul 6, 2006
http://www.thecodingforums.com/threads/scope-resolution-operator.618043/
CC-MAIN-2015-35
refinedweb
159
53.75
We will compare the three proximity sensor technologies that make distance measurement possible. HC-SR04 by ultrasound, Sharp GP2Y0A02YK0F by infrared and VL53L0X by measurement of the flight time (ToF) of a laser beam. You will find a 3D printing layout and the Arduino code that will allow you to perform comparative tests at home. You will be able to more easily test these sensors before starting the development of your Arduino, ESP32, ESP8266 or Raspberry Pi projects. The HC-SR04 is a fairly accurate and very easy to use sensor (using one of the many Arduino libraries) but it can be too cumbersome in some projects. It can be replaced very advantageously by a VL53L0X, a sensor that measures the time of flight (Time Of Flight, ToF) of a laser beam.. All source codes and the ODT file are available on GitHub on this page. Ultrasonic distance measurement, HC-SR04 sensor There are at least 7 libraries available directly from the Arduino IDE Library Manager. For this article, I used the Bifrost library developed by Jeremy Lindsay. Characteristics of HC-SR04 - Range of distance measurement: 2cm to 450cm (4.5m) - Measurement accuracy: 0.3cm - Supply voltage: 5V - Digital output: PWM - Weight: 9g - Dimensions: 45mm x 20mm x 18mm Infrared Distance Measurement (IR) Sharp Sensor GP2Y0A02YK0F After having tested several libraries, the ZSharpIR library developed by zoubworldArduino (GitHub page) gives satisfaction with Chinese clones at less than 4 euros. Characteristics of Sharp A02YK0 - Measurement range: from 20cm to 150cm - Analog output (signal proportional to distance) - Case size: 29.5mm x 13mm x 21.6mm - Typical current consumption: 33mA - Power Range: 4.5V to 5.5V - Output voltage: 0.4V - Operating temperature range: -10 ° C to + 60 ° C Measurement of distance by laser flight time, VL53L0X sensor (Adafruit or equivalent) I used the library developed by the manufacturer Pololu. It is very compatible with compatible sensors. For this article, I bought a clone for less than €4 on AliExpress. Features of the VL53L0X - Range of distance measurement: up to 200cm (2m) - I2c bus: address 0x29 - Laser beam wavelength: 940nm - Card size (excluding connector): 25mm x 13mm (depends on the manufacturer) - Power Range: 2.8V to 5.5V The VL53L1X can reach 400cm. It is harder to get. The VL6180X is dedicated to precision measurement below 10cm. Its price is similar, about €4. Important note for operation on Arduino The use of the Pololu library requires manually starting the wire () function in the setup otherwise the VL53L0X can not be found on the I2C bus. #include <Wire.h> void setup() { Wire.begin(); Serial.begin(115200); vl53.init(); vl53.startContinuous(); } void loop() { Serial.println(vl53.readRangeContinuousMillimeters(); delay(1000); } Test setup by 3D printing To rigorously test the 3 sensors, I have prepared a small support on which we can fix these. The three sensors are positioned in such a way that they measure the same distance. The STL file is downloadable on Thingiverse here. With the 3 sensors installed. From left to right : Test program Here is a test program that performs a series of 10 measurements. The delay between each measurement is one second by default. It can be modified using the DELAY_REFRESH constant. Before you upload the program, you will need to install the following Arduino libraries: - HC-SR04 (Bifrost library) - VL53L0X from Pololu - ZSharpIR from Pierre Valleau - MD_MAX72XX from majicDesigns /* Distance measurement comparison with HC-SR04, Sharp GP2Y0A02YK0F and VL53L0X SMTe sensors * Recording of 10 measurement points. Time between each changeable point with the key DELAY_REFRESH * Reset between each distance to restart the program * Manual calibration procedure of the Sharp GP2Y0A02YK0F proximity sensor * * * Test support to print in 3D * * * Comparaison de mesure de distance avec les capteurs HC-SR04, Sharp GP2Y0A02YK0F et VL53L0X SMTe * Enregistrement de 10 points de mesure. Temps entre chaque point modifiable avec la clé DELAY_REFRESH * Faire un reset entre chaque distance pour relancer le programme * * Procédure d'étalonnage manuel du Sharp GP2Y0A02YK0F * * * Support de test à imprimer en 3D * */ #include <Wire.h> #include <hcsr04.h> #include <VL53L0X.h> #include <MD_MAX72xx.h> #include <ZSharpIR.h> #define ir A0 #define model 20150 #define OFFSET_HCSR04 -10 #define PIN_HCSR04_ECHO 4 #define PIN_HCSR04_TRIG 3 #define PIN_GP2Y0A02YK0F A0 #define DELAY_REFRESH 1000 #define POINT_FILTER 10 #define LONG_RANGE true int posFilter = 0; int filterVl53[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; int filterSr04[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; int filterIr[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; ZSharpIR SharpIR(ir, model); HCSR04 hcsr04(PIN_HCSR04_TRIG, PIN_HCSR04_ECHO, 20, 4000); VL53L0X vl53; void setup() { Wire.begin(); Serial.begin(115200); vl53.init(); vl53.setTimeout(DELAY_REFRESH - 100); #if defined LONG_RANGE // lower the return signal rate limit (default is 0.25 MCPS) vl53.setSignalRateLimit(0.1); // increase laser pulse periods (defaults are 14 and 10 PCLKs) vl53.setVcselPulsePeriod(VL53L0X::VcselPeriodPreRange, 18); vl53.setVcselPulsePeriod(VL53L0X::VcselPeriodFinalRange, 14); #endif vl53.startContinuous(DELAY_REFRESH - 50); } void print(String key, String val, boolean returnline ) { Serial.print(key); Serial.print(": "); if ( returnline ) { Serial.println(val); } else { Serial.print(val); } } bool measure = true; void loop() { if ( measure ) { float distSR04 = hcsr04.distanceInMillimeters() + OFFSET_HCSR04; int distSharp = SharpIR.distance(); float distVl53 = vl53.readRangeContinuousMillimeters(); if (vl53.timeoutOccurred()) { Serial.print(" TIMEOUT"); } print("HR-SR04 (mm)", String(distSR04), false); print(" | IR (mm)", String(distSharp), false); print(" | VL53L0X (mm)", String(distVl53), true); if ( distVl53 != 8190 ) { filterVl53[posFilter] = distVl53; } filterSr04[posFilter] = distSR04; filterIr[posFilter] = distSharp; posFilter++; if (posFilter >= 10 ) { Serial.println("# résultats des mesures"); measure = false; for ( int k = 0 ; k <= 10 ; k++ ) { //Serial.print(k); Serial.print(filterSr04[k]); Serial.print(";"); Serial.print(filterIr[k]); Serial.print(";"); Serial.println(filterVl53[k]); } } } delay(DELAY_REFRESH); } At the end of each measurement run, the measurement table is displayed. Just import it into a spreadsheet for your calculations. Here is the order of columns: - HC-SR04 - Sharp IR - VL53L0X # résultats des mesures 397;367;425 397;370;426 401;366;435 395;364;424 395;359;431 394;359;435 391;359;422 397;359;429 391;359;419 399;358;424 HC-SR04 test The first sensor that I propose to test is the ultrasound sensor HC-SR04. As can be seen in the graph, the measurements made with HC-SR04 are very stable. The dispersion of measurements increases progressively with distance. It remains very acceptable even at 1500mm. The measurement gap does not exceed 60mm to 1.5m. Synthesis of the results of measurements of HC-SR04 The HC-SR04 measurements are very accurate below 500mm. Beyond that, accuracy is sufficient to measure the fill level of a tank. Sharp GP2Y0A02YK0F test The first measurement series highlighted the need to manually calibrate the sensor. I tested a clone bought on AliExpress about €3.5. For this comparative test, I used the ZSharpIR library from zoubworldArduino (GitHub page). After performing a manual calibration and updating the library, I got much better results. However, I could not exceed 1300mm of measurement. The calibration procedure is detailed step by step in this tutorial. If you need precision in your project, I advise you to stay below 600mm. It is very difficult to correctly calibrate the sensor beyond 1200mm. On the one hand, the signal varies very weakly, on the other hand, the sensor returns a lot of noise. It is best to filter the measurements over a very large number of points to reduce the aberrations. VL53L0X test It is with the VL53L0X that we will obtain the best results up to 1200mm (1.2m). Beyond, it happens that the sensor can not measure. This results in the return of an aberrant measurement of 8190mm. Luckily, it’s always the same value that is returned by the library (at least in my case). It is therefore very easy to exclude all erroneous measurement points. Beyond 1200mm, about 30% of the measuring points are lost. It depends very much on the surface. The more reflective the surface, the better the measurement accuracy. Feel free to share your experiences in the comments If you have projects that require more precision, you can opt for his little brother, the VL6180X. Its price is similar. It allows measurements below 10cm. Summary of measurement results of VL53L0X The measurements of the VL53L0X are very accurate up to 800mm. From 800 to 1200 mm, the accuracy is good. Beyond that, the sensor approximately 30% of the measurements. It all depends on the surface of the object and the ambient lighting. Synthesis, which solution to choose to measure a distance with an Arduino or a Raspberry Pi Sensors HC-SR04 and Sharp GP2Y0A02YK0F require a supply voltage of 5V. It will be necessary to use a circuit of conversion of tension (approximately 0,80 €) for the projects Raspberry Pi, ESP8266 and ESP32. The VL53L0X accepts a supply voltage of between 3V and 5V. It can be used directly in all projects. (*) Unofficial data given for indicative purposes obtained from an average of 10 measurements. In terms of performance and measurement quality, the HC-SR04 and VL53L0X give the best results. The HC-SR04 is even better than the long-range laser. The power of the beam is certainly too weak to measure correctly beyond 1200m. Conclusion: which technology to choose? Now, you will definitely want to know which sensor to choose. There is no ready answer. Everything will depend on your project. Here are several criteria to consider - Environment: light intensity, obstacles (ultrasonic diffraction) - Surfaces: reflection index (US, IR, Laser), absorption rate, material, texture (paint …) - Geometry: some shapes (angles) will diffract more or less the incident beam and make it undetectable. This is the principle used for stealth aircraft 🙂 It is best to test with these different technologies and choose the one that gives the best result. Project: measure the level of a water tank I now propose a little fun project that you can reuse in an automatic garden watering project. For the demo, I used a measuring cup with a capacity of one liter. To visualize the fill rate of the tank, use an array of LEDs (4 blocks). The tank measuring less than 20cm, it will not be possible to use the Sharp A02YK0 sensor which is sensitive only in the range 20 to 150cm. Arduino Code The Arduino code is long enough to explain in detail. On the main lines : - Libraries used - To use a matrix of LEDs of 4 * 64 points, one can use the library MX_MAX72xx which makes it possible to manage the matrix like one and the same element. Here is the configuration - #define HARDWARE_TYPE MD_MAX72XX :: FC16_HW (Pololu type does not work on Chinese generic matrices) - #define MAX_DEVICES 4. Here we have 4 blocks of 8×8 pixels - MD_MAX72XX mx = MD_MAX72XX (HARDWARE_TYPE, DATA_PIN, CLK_PIN, CS_PIN, MAX_DEVICES). It is imperative - to initialize the controller by specifying all the pins, otherwise it does not work - OFFSET_HCSR04 adjusts the origin of the sensor relative to the VL53L0X #include <Wire.h> #include <hcsr04.h> #include <VL53L0X.h> #include <MD_MAX72xx.h> #define OFFSET_HCSR04 -10 #define PIN_HCSR04_ECHO 4 #define PIN_HCSR04_TRIG 3 #define PIN_GP2Y0A02YK0F A0 #define DELAY_REFRESH 2000 #define POINT_FILTER 10 // Define the number of devices we have in the chain and the hardware interface #define HARDWARE_TYPE MD_MAX72XX::FC16_HW #define MAX_DEVICES 4 /* pin 12 is connected to the DataIn pin 11 is connected to the CLK pin 10 is connected to LOAD */ const uint8_t CLK_PIN = 12; // or SCK const uint8_t DATA_PIN = 11; // or MOSI const uint8_t CS_PIN = 10; // or SS int posFilter = 0; int countFilter = 0; int filterVl53[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; int filterSr04[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; MD_MAX72XX mx = MD_MAX72XX(HARDWARE_TYPE, DATA_PIN, CLK_PIN, CS_PIN, MAX_DEVICES); HCSR04 hcsr04(PIN_HCSR04_TRIG, PIN_HCSR04_ECHO, 20, 4000); VL53L0X vl53; void setup() { Wire.begin(); Serial.begin(115200); // Initialise et efface la matrice de Led - Init dot matrix display and clear it Serial.println("Init and clear MX Panel"); mx.begin(); mx.clear(); // Initialise le capteur VL53L0X - Init VL53L0X laser sensor // Scan i2c bus Serial.println("Scan i2c"); Serial.println(I2CScanner()); //delay(500); Serial.println("Init ToF VL53L0X sensor"); vl53.init(); vl53.setTimeout(DELAY_REFRESH - 100); vl53.startContinuous(DELAY_REFRESH - 100); } void loop() { float distSR04 = hcsr04.distanceInMillimeters() + OFFSET_HCSR04; float distSharp = getDistanceSharp(); float distVl53 = vl53.readRangeContinuousMillimeters(); float levelSR04 = tankLevel(distSR04); float levelSharp = tankLevel(distSharp); float levelVl53 = tankLevel(distVl53); print("HR-SR04 (mm)", String(distSR04), false); print(" | IR (mm)", String(distSharp), false); print(" | VL53L0X (mm)", String(distVl53), true); updateDotMatrix(levelVl53, levelSR04, levelSharp); delay(DELAY_REFRESH); } void print(String key, String val, boolean returnline ) { Serial.print(key); Serial.print(": "); if ( returnline ) { Serial.println(val); } else { Serial.print(val); } } float getDistanceSharp() { int a0 = analogRead(PIN_GP2Y0A02YK0F); Serial.println(a0); float dist = 0.002421276045 * a0 * a0 - 3.024416502 * a0 + 114.7941861; //float dist = 0.0001936428309 * a0 * a0 + 0.06987226424 * a0 - 14.32575223; if ( dist < 20 ) { dist = 20; } return dist; } void updateDotMatrix(float vl53, float sr04, float ir) { filterVl53[posFilter] = vl53; filterSr04[posFilter] = sr04; posFilter++; if ( posFilter > 10 ) { posFilter = 0;} countFilter++; if (countFilter > 10 ) { vl53 = average(filterVl53,POINT_FILTER); sr04 = average(filterSr04,POINT_FILTER); print("SR04(%)", String(sr04), false); print(" | VL53L0X(%)", String(vl53), true); int dotVl53 = round(vl53 / 3.125); int dotSr04 = round(sr04 / 3.125); mx.clear(); mx.update(MD_MAX72XX::OFF); for ( int i = 32 ; i >= 32 - dotVl53 ; i--) { mx.setPoint(0, i, true); mx.setPoint(1, i, true); mx.setPoint(2, i, true); } for ( int i = 32 ; i >= 32 - dotSr04 ; i--) { mx.setPoint(5, i, true); mx.setPoint(6, i, true); mx.setPoint(7, i, true); } mx.update(MD_MAX72XX::ON); } } float average (int * array, int len) // assuming array is int. { long sum = 0L ; // sum will be larger than an item, long for safety. for (int i = 0 ; i < len ; i++) sum += array [i] ; return ((float) sum) / len ; // average will be fractional, so float may be appropriate. } float tankLevel(float dist) { float tankLev = board(dist, 150, 20, 0, 100); //Serial.print("Tank level (%)"); //Serial.println(tankLev); return tankLev; } String I2CScanner() { byte error, address; int nDevices; String s; s = "Scanning:\n"; nDevices = 0; for (address = 1; address < 127; address++ ) { Wire.beginTransmission(address); error = Wire.endTransmission(); if (error == 0) { //s+="\n"; s += "I2C device found at address 0x"; if (address < 16) s += "0"; s += String(address, HEX); s += "\n"; nDevices++; } else if (error == 4) { s += "Unknow error at address 0x"; if (address < 16) s += "0"; s += String(address, HEX); s += "\n"; } } if (nDevices == 0) s += "No I2C devices found\n"; else s += "done\n"; return s; } Demo video Nothing better than a small demonstration video That’s it, this comparative study is now complete, it’s up to you to choose the sensor that best suits your project. I was very pleasantly surprised by the performance of the two HC-SR04 and VL53L0X sensors. To monitor the fill level of a tank, the HC-SR04 is more efficient than the laser. Water is a medium that must absorb more laser radiation than ultrasound. If you have autonomous or RC car projects, I recommend the VL53L0X, much more compact and very simple to implement. On the price side, count about €3.50 for the VL53L0X and less than €1 for the HC-SR04. Remember to take into account the medium, the surface and the shape of the object. It is best to test with these different technologies and choose the one that gives the best result. -
https://diyprojects.io/hc-sr04-ultrasound-vs-sharp-gp2y0a02yk0f-ir-vl53l0x-laser-solutions-choose-distance-measurement-arduino-raspberrypi/?amp
CC-MAIN-2022-40
refinedweb
2,532
57.27
If you start exploring C# or decide to expand your knowledge, you should learn these useful language features, which will help you to simplify the code, avoid errors and save a lot of time. 1) async / await Use the async / await-pattern to allow unblocking of the UI / current thread when execution blocking operations. The async / await-pattern works by letting the code continue executing even if something is blocking the execution (like a web request). Read more about the async / await-pattern here: 2) Object / array / collection initializers Create instances of classes, arrays and collections easily by using the object, array and collection initializers: //Just some demo class public class Employee { public string Name {get; set;} public DateTime StartDate {get; set;} } //Create an employlee by using the initializer Employee emp = new Employee {Name="John Smith", StartDate=DateTime.Now()}; The above example can be really useful in unit testing but should be avoided in other contexts as instances of classes should be created using a constructor. Read more about initializers here: 3) Lambdas, predicates, delegates and closures These features are practically a necessity in many cases (e.g. when using Linq), make sure to actually learn when and how to use them. Read more about Lambdas, predicates, delegates and closure here: 4) ?? (Null coalescing operator) The ??-operator returns the left side as long as it’s not null, in that case the right side will be returned: //May be null var someValue = service.GetValue(); var defaultValue = 23 //result will be 23 if someValue is null var result = someValue ?? defaultValue; The ??-operator can be chained: string anybody = parm1 ?? localDefault ?? globalDefault; And it can be used to convert nullable types to non nullable: var totalPurchased = PurchaseQuantities.Sum(kvp => kvp.Value ?? 0); Read more about the ??-operator here: 5) $”{x}” (String Interpolation) – C# 6 A new feature of C# 6 that lets you assemble strings in an efficient and elegant way: //Old way var someString = String.Format("Some data: {0}, some more data: {1}", someVariable, someOtherVariable); //NewWay var someString = $"Some data: {someVariable}, some more data: {someOtherVariable}"; You can put C# expressions in between the braces, which makes this very powerful. 6) ?. (Null-conditional operator) – C# 6 The null-conditional operator works like this: //Null if customer or customer.profile or customer.profile.age is null var currentAge = customer?.profile?.age; No more NullReferenceExceptions! Read more about the ?.-operator here: 7) nameof Expression – C# 6 So the new nameof-expression might not seem important, but it really has it value. When using automatic re-factoring tools (like ReSharper) you sometime need to refer to a method argument by it’s name: public void PrintUserName(User currentUser) { //The refactoring tool might miss the textual reference to current user below if we're renaming it if(currentUser == null) _logger.Error("Argument currentUser is not provided"); //... } This is how you should use it… public void PrintUserName(User currentUser) { //The refactoring tool will not miss this... if(currentUser == null) _logger.Error($"Argument {nameof(currentUser)} is not provided"); //... } Read more about the nameof-expression here: 8) Property Initializers – C# 6 Property initializers lets you declare an initial value for a property: public class User { public Guid Id { get; } = Guid.NewGuid(); // ... } A benefit of using property initializers is that you can not declare a set:er, thus making the property immutable. Property initializers works great together with C# 6 primary constructor syntax. 9) as and is-operators The is-operator is used to control if an instance is of a specific type, e.g. if you want to see if a cast is possible: if (Person is Adult) { //do stuff } Use the as-operator to try to cast an instance to a class. It will return null if cast was not possible: SomeType y = x as SomeType; if (y != null) { //do stuff } 10) yield-keyword The yield-keyword lets you feed an an IEnumerable-interface with items. The following example will return each powers of 2 up to the exponent of 8 (e.g. 2, 4, 8, 16, 32, 64, 128 ,256): public static IEnumerable Power(int number, int exponent) { int result = 1; for (int i = 0; i < exponent; i++) { result = result * number; yield return result; } } yield return can be very powerful if it’s used in the correct way. It enables you to lazily generate a sequence of objects, ie. the system does not have to enumerate the whole collection – it can be done on demand. Source CodeAddiction.net 11 thoughts on “10 features in C# that you really should learn (and use!)” Pingback: 10 features in C# that you really should learn (and use!) - How to Code .NET Good article!! This is a good article. So many new features get added to C# that I read about them, think to myself, that is great, but then I go back to my day-to-day programming and never actually use it. Of course, this means that a week later I have completely forgot about them! 🙂 The only thing that I dislike about string interpolation is that you can’t pull the strings into resource files. Otherwise, it is one of my favorite features. Would it be good to mention that, if using the `is` casting, when it returns true you’d have to cast it again to use it? Could lead to performance problems in bigger projects. 🙂 You can use the C# 7.0 pattern matching: if (someObject is someType instance) { // here you can use the ‘instance’ variable } You can use the c# 7 pattern matching: if (someInstance is SomeType newInstance) { // here you can use the ”newInstance” instance } I’m guilty of a few of these! It’s interesting to read, but, any reason why as I want to learn! e.g. number 2 – I use this a heck of a lot – I don’t mind using constructors, but, I don’t get the benefit? Code clarity Pingback: C# 中 10 个你真的应该学习(和使用!)的功能 - C/C++ - 代码豆 Pingback: C# 中 10 个你真的应该学习(和使用!)的功能 - XiaoBingBy
https://hownot2code.com/2016/11/24/10-features-in-c-that-you-really-should-learn-and-use/?replytocom=138
CC-MAIN-2020-16
refinedweb
996
53.92
You can load triples into a MarkLogic database from two sources; from an XML document that contains embedded triples elements, or from a triples files containing serialized RDF data. This chapter includes the following sections: You can also use SPARQL Update to load triples. See SPARQL Update for more information. Load documents that contain embedded triples in XML documents with any of the ingestion tools described in Available Content Loading Interfaces in the Loading Content Into MarkLogic Server Guide. The embedded triples must be in the MarkLogic XML format defined in the schema for sem:triple semantics.xsd). (( Triples ingested into a MarkLogic database are indexed by the triples index and stored for access and query by SPARQL. See Storing RDF Triples in MarkLogic for details. There are a number of ways to load documents containing triples serialized in a supported RDF serialization into MarkLogic. Supported RDF Triple Formats describes these RDF formats. When you load one or more groups of triples, they are parsed into generated XML documents. A unique IRI is generated for every XML document. Each document can contain multiple triples. The setting for the number of triples stored in documents is defined by MarkLogic Server and is not a user configuration. Ingested triples are indexed with the triples index to provide access and the ability to query the triples with SPARQL, XQuery, or a combination of both. You can also use a REST endpoint to execute SPARQL queries and return RDF data. If you do not provide a graph for the triple, the triples will be stored in a default graph that uses a MarkLogic Server feature called a collection. MarkLogic Server tracks the default graph with the collection IRI. You can specify a different collection during the load process and load triples into a named graph. For more information about collections, see Collections in the Search Developer's Guide. If you insert triples into a database without specifying a graph name, the triples will be inserted into the default graph -. If you insert triples into a super database and run fn:count(fn:collection()) in the super database, you will get a DUPURI exception for duplicate URIs. The generated XML documents containing the triple data are loaded into a default directory named /triplestore. Some loading tools let you specify a different directory. For example, when you load triples using mlcp, you can specify the graph and the directory as part of the import options. For more information, see Loading Triples with mlcp. This section includes the following topics: MarkLogic Server supports loading these RDF data formats: This section includes examples for the following RDF formats: RDF/XML is the original standard for writing unique RDF syntax as XML. It is used to serialize an RDF graph as an XML document. This example defines three prefixes, 'rdf', 'xsd', and 'd'. <rdf:RDF xmlns: <rdf:Description rdf: <d:shipped rdf: 2013-05-14</d:shipped> <d:quantity rdf: 27</d:quantity> <d:invoiced rdf: true</d:invoiced> <d:costPerItem rdf: 10.50</d:costPerItem> </rdf:Description> </rdf:RDF> Terse RDF Triple Language (or Turtle) serialization expresses data in the RDF data model using a syntax similar to SPARQL. Turtle syntax expresses triples in the RDF data model in groups of three IRIs. <> <> "2013-05-14"^^<> . This triple states that item 22 was shipped on May 14th, 2013. Turtle syntax provides a way to abbreviate information for multiple statements using @prefix to factor out the common portions of IRIs. This makes it quicker to write RDF Turtle statements. The syntax resembles RDF/XML, however unlike RDF/XML, it does not rely on XML. Turtle syntax is also valid Notation3 (N3) since Turtle is a subset of N3. Turtle can only serialize valid RDF graphs. In this example, four triples describe a transaction. The 'shipped' object is assigned a 'date' datatype, making it a typed literal enclosed in quotes. There are three untyped literals for the 'quantity', 'invoiced', and 'costPerItem' objects. @prefix i: <> . @prefix dt: <> . @prefix xsd: <> . i:item22 dt:shipped "2013-05-14"^^xsd:date . i:item22 dt:quantity 100 . i:item22 dt:invoiced true . i:item22 dt:costPerItem 10.50 . RDF/JSON is a textual syntax for RDF that allows an RDF graph to be written in a form compatible with JavaScript Object Notation (JSON). { "": { "": [ { "value": "Michelle", "type": "literal", "datatype": "" } ] } } Notation3 (N3) is a non-XML syntax used to serialize RDF graphs in a more compact and readable form than XML RDF notation. N3 includes support for RDF-based rules. When you have several statements about the same subject in N3, you can use a semicolon (;) to introduce another property of the same subject. You can also use a comma to introduce another object with the same predicate and subject. @prefix rdf: <> . @prefix dc: <> . @prefix foaf: <> . @prefix foafcorp: <> . @prefix vcard: <> . @prefix sec: <> . @prefix id: <> . id:cik0001265081 sec:hasRelation [ dc:date "2008-06-05"; sec:corporation id:cik0001000045; rdf:type sec:OfficerRelation; sec:officerTitle "Senior Vice President, CFO"] . id:cik0001000180 sec:cik "0001000180"; foaf:name "SANDISK CORP"; sec:tradingSymbol "SNDK"; rdf:type foafcorp:Company. id:cik0001009165 sec:cik "0001009165"; rdf:type foaf:Person; foaf:name "HARARI ELIYAHOU ET AL"; vcard:ADR [ vcard:Street "601 MCCARTHY BLVD.; "; vcard:Locality "MILPITAS, CA"; vcard:Pcode "95035" ] . N-Triples is a plain text serialization for RDF graphs. It is a subset of Turtle, designed to be simpler to use than Turtle or N3. Each line in N-Triples syntax encodes one RDF triple statement and consists of the following: Typed literals may include language tags to indicate the language. In this N-Triples example, @en-US indicates that title of the resource is in US English. <> <> <> . <> <> "Example Doc"@en-US . <> <> _:jane . <> <> _:joe . _:jane <> <> . _:jane <> "Jane Doe". _:joe <> <> . _:joe <> "Joe Bloggs". Each line breaks after the end period. For clarity, additional line breaks have been added. N-Quads is a line-based, plain text serialization for encoding an RDF dataset. N-Quads syntax is a superset of N-Triples, extending N-Triples with an optional context value. The simplest statement is a sequence of terms (subject, predicate, object) forming an RDF triple, and an optional IRI labeling the graph in a dataset to which the triple belongs. All of these are separated by a whitespace and terminated by a period (.) at the end of each statement. This example uses the relationship vocabulary. The class or property in the vocabulary has a IRI constructed by appending a term name 'acquaintanceOf' to the vocabulary IRI. <> <> <> <> . TriG is a plain text serialization for serializing RDF graphs. TriG is similar to Turtle, but is extended with curly braces ({) and (}) to group triples into multiple graphs and precede named graphs with their names. An optional equals operator (=) can be used to assign graph names and an optional end period (.) is included for Notation3 compatibility. Characteristics of TriG serialization include: This example contains a default graph and two named graphs. @prefix rdf: <> . @prefix dc: <> . @prefix foaf: <> . # default graph is { <> dc:publisher "Joe" . <> dc:publisher "Jane" . } # first named graph <> { _:a foaf:name "Joe" . _:a foaf:mbox <mailto:joe@jbloggs.example.org> . } # second named graph <> { _:a foaf:name "Jane" . _:a foaf:mbox <mailto:jane@jdoe.example.org> . } MarkLogic Content Pump (mlcp) is a command line tool for importing into, exporting from, and copying content to MarkLogic from a local file system or Hadoop distributed file system (HDFS). Using mlcp, you can bulk load billions of triples and quads into a MarkLogic database and specify options for the import. For example, you can specify the directory into which the triples or quads are loaded. It is the recommended tool for bulk loading triples. For more detailed information about mlcp, see Loading Content Using MarkLogic Content Pump in the Loading Content Into MarkLogic Server Guide. This section discusses loading triples into MarkLogic Server with mlcp and includes the following topics: Use these procedures to load content with mlcp: Although the extracted mlcp binary files do not need to be on the same MarkLogic host machine, you must have access and permissions for the host machine into which you are loading the triples. The collection lexicon index is required for the Graph Store HTTP Protocol used by REST API instances and for use of the GRAPH '?g' construct in SPARQL queries. See Configuring the Database to Work with Triples for information on the collection lexicon. $ export PATH=${PATH}:/space/marklogic/directory-name/bin where directory-name is derived from the version of mlcp that you downloaded. The mlcp import command syntax required for loading triples and quads into MarkLogic is: mlcp_command import -host hostname -port port number \ -username username -password password \ -input_file_path filepath -input_file_type filetype Long command lines in this section are broken into multiple lines using the line continuation characters '\' or '^'. Remove the line continuation characters when you use the import command. The mlcp_command you use depends on your environment. Use the mlcp shell script mclp.sh for Unix systems and the batch script mlcp.bat for Windows systems. The -host and -port values specify the MarkLogic host machine into which you are loading the triples. Your user credentials, -username and -password are followed by the path to the content , the -input_file_path value. If you use your own database, be sure to add the -database parameter for your database. If no database parameter is specified, the content will be put into the default Documents database. The -input_file_path may point to a directory, file, or compressed file in .zip or .gzip format. The -input_file_type is the type of content to be loaded. For triples, the -input_file_type should be RDF. The file extension of the file found in the -input_file_path is used by mlcp to identify the type of content being loaded. The type of RDF serialization is determined by the file extension ( .rdf, .ttl, .nt, and so on). A document with a file extension of .nq or . trig is identified as quad data, all other file extensions are identified as triple data. For more information about file extensions, see Supported RDF Triple Formats. You must have sufficient MarkLogic privileges to import to the specified host. See Security Considerations in the mlcp User Guide. In addition to the required import options, you can specify several input and output options. See Import Options for more details about these options. For example, you can load triples and quads by specifying RDF as the -input_file_type option: $ mlcp.sh import -host localhost -port 8000 -username user \ -password passwd -input_file_path /space/tripledata/example.nt \ -mode local -input_file_type RDF This example uses the shell script to load triples from a single N-Triples file example.nt, from a local file system directory /space/tripledata into a MarkLogic host on port 8000. On a Windows environment, the command would look like this: > mlcp.bat import -host localhost -port 8000 ^ -username admin -password passwd ^ -input_file_path c:\space\tripledata\example.nt -mode local^ -input_file_type RDF For clarity, these long command lines are broken into multiple lines using the line continuation characters '\' or '^'. Remove the line continuation characters when you use the import command. When you specify RDF as -input_file_type the mlcp RDFReader parses the triples and generates XML documents with sem:triple as the root element of the document. These options can be used with the import command to load triples or quads. When you load triples using mlcp, the -output_permissions option is ignored - triples (and, under the covers, triples documents) inherit the permissions of the graph that you're loading into. If -output_collections and -output_override_graph are set at the same time, a graph document will be created for the graph specified by -output_override_graph, and triples documents will be loaded into collections specified by -output_collections and -output_override_graph. If -output_collections and -output_graph are set at the same time, a graph document will be created for the graph specified by -output_graph (where there is no explicit graph specified in the data). Quads with no explicit graph specified in the data will be loaded into collections specified by -output_collections and the graph specified by -output_graph, while those quads that contain explicit graph data will be loaded into the collections specified by -output_collections and the graph(s) specified. You can split large triples documents into smaller documents to parallelize loading with mlcp and load all the files in a directory that you specify with -input_file_path. For more information about import and output options for mlcp, see in the mlcp User Guide. # Windows users, see Modifying the Example Commands for Windows $ mlcp.sh import -host localhost -port 8000 -username user \ -password passwd -input_file_path /space/tripledata \ -mode local -input_file_type RDF To load triples into a named graph, specify a collection by using the -output_collections option. To create a new graph, you need to have the sparql-update-user role. For more information about roles, see Understanding Roles in the Security Guide. # Windows users, see Modifying the Example Commands for Windows $ mlcp.sh import -host localhost -port 8000 -username user \ -password passwd -input_file_path /space/tripledata \ -mode local -input_file_type RDF \ -output_collections /my/collection This command puts all the triples in the tripledata directory into a named graph and overwrites the graph IRI to /my/collection. Use -output_collections and not -filename_as_collection to overwrite the default graph IRI. For triples data, the documents go in the default collection () if you do not specify any collections. For quad data, if you do not specify any collections, the triples are parsed, serialized, and stored in documents with the fourth part of the quad as the collection. For example with this quad, the fourth part is a IRI that identifies the homepage of the subject. <> <> <> < absolute-line=26/> . When the quad is loaded into the database, the collection is generated as a named graph,. If the -output_collections import option specifies a named graph, the fourth element of the quad is ignored and the named graph is used. If you are using a variety of loading methods, consider putting all of the triples documents in a common directory. Since the sem:rdf-insert and sem:rdf-load functions put triples documents in the /triplestore directory, use -output_uri_prefix /triplestore to put mlcp-generated triples documents there as well. $ mlcp.sh import -host localhost -port 8000 -username user \ -password passwd -input_file_path /space/tripledata/example.zip \ -mode local -input_file_type RDF -input_compressed true -output_collections /my/collection -output_uri_prefix '/triplestore' When you load triples or quads into a specified named graph from a compressed . zip or . gzip file, mlcp extracts and serializes the content based on the serialization. For example, a compressed file containing Turtle documents ( .ttl) will be identified and parsed as triples. When the content is loaded into MarkLogic with mlcp, the triples are parsed as they are ingested as XML documents with a unique IRI. These unique IRIs are random numbers expressed in hexadecimal. This example shows triples loaded with mlcp from the persondata.ttl file, with the -output_uri_prefix specified as /triplestore: /triplestore/d2a0b25bda81bb58-0-10024.xml /triplestore/d2a0b25bda81bb58-0-12280.xml /triplestore/d2a0b25bda81bb58-0-13724.xml /triplestore/d2a0b25bda81bb58-0-14456.xml Carefully consider the method you choose for loading triples. The algorithm for generating the document IRIs with mlcp differs from other loading methods such as loading from a system file directory with sem:rdf-load. For example, loading the same persondata.ttl file with sem:rdf-load results in IRIs that appear to have no relation to each other: /triplestore/11b53cf4db02080a.xml /triplestore/19b3a986fcd71a5c.xml /triplestore/215710576ebe4328.xml /triplestore/25ec5ded9bfdb7c2.xml When you load triples with sem:rdf-load, the triples are bound to the prefix in the resulting documents. <?xml version="1.0" encoding="UTF-8"?> <sem:triples xmlns: <sem:triple> <sem:subject> </sem:subject> <sem:predicate> </sem:predicate> <sem:objectAmerican politician </sem:object> </sem:triple> <sem:triple> <sem:subject> </sem:subject> <sem:predicate> </sem:predicate> <sem:object 1953-02-05 </sem:object> </sem:triple> </sem:triples> You can leave out the sem:triples tag, but you cannot leave out the sem:triple tags. Triples are typically created outside MarkLogic Server and loaded via Query Console by using the following sem: functions: The sem:rdf-insert and sem:rdf-load functions are update functions. The sem:rdf-get function is a return function that loads triples in memory. These functions are included in the XQuery Semantics API that is implemented as an XQuery library module. To use sem: functions in XQuery, import the module with the following XQuery prolog statement in Query Console: import module namespace sem = "" at "/MarkLogic/semantics.xqy"; For more details about semantic functions in XQuery, see the Semantics ( sem:) documentation in the MarkLogic XQuery and XSLT Function Reference. The sem:rdf-insert function inserts triples into the database as triples documents. The triple is created in-memory by using the sem:triple and sem:iri constructors. The IRIs of the inserted documents are returned on execution. xquery version "1.0-ml"; import module namespace sem = "" at "/MarkLogic/semantics.xqy"; sem:rdf-insert( sem:triple( sem:iri(""), sem:iri(""), "Michael")) => (: Returns the document IRI :) /triplestore/70eb0b7139816fe3.xml By default, sem:rdf-insert puts the documents into the directory /triplestore/ and assigns the default graph. You can specify a named graph as a collection in the fourth parameter. xquery version "1.0-ml"; import module namespace sem = "" at "/MarkLogic/semantics.xqy"; sem:rdf-insert(sem:triple( sem:iri(""), sem:iri(""), "John-Paul"), (), (), "mygraph") When you run this example, the document is inserted into both the default graph and mygraph. If you insert quads or triples in TriG serialization, the graph name comes from the value in the 'fourth position' in the quads/trig file. The sem:rdf-load function turtle for Turtle files or rdfxml for RDF files. loads and parses triples from files in a specified location into the database and returns the IRIs of the triples documents. You can specify the serialization of the triples, such asloads and parses triples from files in a specified location into the database and returns the IRIs of the triples documents. You can specify the serialization of the triples, such as sem:rdf-load('C:\rdfdata\example.rdf', "rdfxml") => /triplestore/fbd28af1471b39e9.xml As with sem:rdf-insert, this function also puts the triples documents into the default graph and /triplestore/ directory unless a directory or named graph is specified in the options. This example specifies mynewgraph as a named graph in the parameters: xquery version "1.0-ml"; import module namespace sem = "" at "/MarkLogic/semantics.xqy"; sem:rdf-load("C:\turtledata\example.ttl", "turtle", (), (), "mynewgraph") The document is inserted: To use sem:rdf-load you need the xdmp:document-get privilege. The sem:ref-get function returns triples in triples files from a specified location. The following example retrieves triples serialized in Turtle serialization from the local filesystem: xquery version "1.0-ml"; import module namespace sem = "" at "/MarkLogic/semantics.xqy"; sem:rdf-get('C:\turtledata\people.ttl', "turtle") The triples are returned as triples in Turtle serialization with one triple per line. Each triple ends with a period. This Query Console display format allows for easy copying from the Result pane. A REST endpoint is an XQuery module on MarkLogic Server that routes and responds to an HTTP request. An HTTP client invokes endpoints to create, read, update, or delete content in MarkLogic. This section discusses using the REST API to load triples with a REST endpoint. It covers the following topics: If you are unfamiliar with the REST API and endpoints, see Introduction to the MarkLogic REST API in the REST Application Developer's Guide. Use the following procedures to make requests with REST endpoints: curlor an equivalent command line tool for issuing HTTP requests. The collection lexicon is required for the Graph Store HTTP Protocol of REST API instances. The graph endpoint is an implementation of the W3C Graph Store HTTP Protocol as specified in the SPARQL 1.1 Graph Store HTTP Protocol: The base URL for the graph store is: Where hostname is the MarkLogic Server host machine and port is the port on which the REST API instance is running, and version is the version number of the API. The Graph Store HTTP Protocol is a mapping from RESTful HTTP requests to the corresponding SPARQL 1.1 Update operations. See Summary of the /graphs Service in the REST Application Developer's Guide. The graph endpoint accepts an optional parameter for a particular named graph. For example: If omitted, the default graph must be specified as a default parameter with no value. When a GET request is issued with no parameters, the list of graphs will be given in list format. See GET /v1/graphs for more details. A REST client uses HTTP verbs such as GET and PUT to interact with MarkLogic Server. This table lists the supported verbs and the role required to use each: The role you use to make a MarkLogic REST API request must have appropriate privileges for the content accessed by the HTTP call; for example, permission to read or update documents in the target database. For more information about REST API roles and privileges, see Security Requirements in the REST Application Developer's Guide. This endpoint will only update documents with the element sem:triple as the root. For a list of supported media formats for the Content-type HTTP header, see Supported RDF Triple Formats. To insert triples, make a PUT or POST request to a URL of the form: When constructing the request: graphparameter to the default graph. graphparameter to the named graph. Content-typeHTTP header. See Supported RDF Triple Formats. The triples are loaded into the default directory, /triplestore. This is an example of a curl command for a Unix or Cygwin command line interpreter. The command sends a PUT HTTP request to insert the contents of the file example.nt into the database as XML documents in the default graph: # Windows users, see Modifying the Example Commands for Windows $ curl -s -X PUT --data-binary '@example.nt' \ -H "Content-type: application/n-triples" \ --digest --user "admin:password" \ "" When you load triples with the REST endpoint using PUT or POST, you must specify the default graph or a named graph. These curl command options are used in the example above: For more information about the REST API, see the Semantics documentation in the REST Client API. For more about REST and Semantics see Using Semantics with the REST Client API. This section covers the error reporting conventions followed by the MarkLogic REST API. If a request to a MarkLogic REST API Instance fails, an error response code is returned and additional information is detailed in the response body. These response errors may be returned: 400 Bad Requestreturns for PUTor POSTrequests that have no parameters at all. 400 Bad Requestreturns for PUTor POSTrequests for payloads that fails to parse. 404 Not Foundreturns for GETrequests to a graph that does not exist (the IRI is not present in the collection lexicon). 406 Not Acceptablereturns for GETrequests for triples in an unsupported serialization. 415 Unsupported Media Typereturns for POSTor PUTrequest in an unsupported format. The repair parameter for PUT requests can be set to true or false. By default this is false. If set to true, a payload that does not properly parse will still insert any triples that do parse. If set to false, any payload errors whatsoever will result in a 400 Bad Request response.
http://docs.marklogic.com/guide/semantics/loading
CC-MAIN-2018-26
refinedweb
3,890
55.54
( image source ) Introduction Apache Pig is a popular system for executing complex Hadoop map-reduce based data-flows. It adds a layer of abstraction on top of Hadoop’s map-reduce mechanisms in order to allow developers to take a high-level view of the data and operations on that data. Pig allows you to do things more explicitly. For example, you can join two or more data sources (much like an SQL join). Writing a join as a map and reduce function is a bit of a drag and it’s usually worth avoiding. So Pig is great because it simplifies complex tasks - it provides a high-level scripting language that allows users to take more of a big-picture view of their data flow. Pig is especially great because it is extensible. This tutorial will focus on its extensibility. By the end of this tutorial, you will be able to write PigLatin scripts that execute Python code as a part of a larger map-reduce workflow. Pig can be extended with other languages too, but for now we’ll stick to Python. Before we continue This tutorial relies on a bunch of knowledge. It’ll be very useful if you know a little Python and PigLatin. It’ll also be useful to know a bit about how map-reduce works in the context of Hadoop. User Defined Functions (UDFs) A Pig UDF is a function that is accessible to Pig, but written in a language that isn’t PigLatin. Pig allows you to register UDFs for use within a PigLatin script. A UDF needs to fit a specific prototype - you can’t just write your function however you want because then Pig won’t know how to call your function, it won’t know what kinds of arguments it needs, and it won’t know what kind of return value to expect. There are a couple of basic UDF types: Eval UDFs This is the most common type of UDF. It’s used in FOREACH type statements. Here’s an example of an eval function in action: users = LOAD 'user_data' AS (name: chararray); upper_users = FOREACH users GENERATE my_udfs.to_upper_case(name); This code is fairly simple - Pig doesn’t really do string processing so we introduce a UDF that does. There are some missing pieces that I’ll get to later, specifically how Pig knows what my_udfs means and suchlike. Aggregation UDFs These are just a special case of an eval UDF. An Aggregate function is usually applied to grouped data. For example: user_sales = LOAD 'user_sales' AS (name: chararray, price: float); grouped_sales = GROUP user_sales BY name; number_of_sales = FOREACH grouped_sales GENERATE group, COUNT(user_sales); In other words, an aggregate UDF is a udf that is used to combine multiple pieces of information. Here we are aggregating sales data to show how many purchases were made by each user. Filter UDFs A filter UDF returns a boolean value. If you have a data source that has a bunch of rows and only a portion of those rows are useful for the current analysis then a filter function of some kind would be useful. An example of a filter function is action follows: user_messages = LOAD 'user_twits' AS (name:chararray, message:chararray); rude_messages = FILTER user_messages by my_udfs.contains_naughty_words(message); Enough Talk, let’s code In this section we’ll be writing a couple of Python UDFs and making them accessible within PigLatin scripts. Here’s about the simplest Python UDF you can write: from pig_util import outputSchema @outputSchema('word:chararray') def hi_world(): return "hello world" The data output from a function has a specific form. Pig likes it if you specify the schema of the data because then it knows what it can do with that data. That’s what the output_schema decorator is for. There are a bunch of different ways to specify a schema, we’ll get to that in a little bit. Now if that were saved in a file called “my_udfs.py” you would be able to make use of it in a PigLatin script like so: -- first register it to make it available REGISTER 'myudf.py' using jython as my_special_udfs users = LOAD 'user_data' AS (name: chararray); hello_users = FOREACH users GENERATE name, my_special_udfs.hi_world(); Specifying the UDF output schema Now a UDF has input and output. This little section is all about the outputs. Here we’ll go over the different ways you can specify the output format of a Python UDF through use of the outputSchema decorator. We have a few options, here they are: # our original udf # it returns a single chararray (that's PigLatin for String) @outputSchema('word:chararray') def hi_world(): return "hello world" # this one returns a Python tuple. Pig recognises the first element # of the tuple as a chararray like before, and the next one as a # long (a kind of integer) @outputSchema("word:chararray,number:long") def hi_everyone(): return "hi there", 15 #we can use outputSchema to define nested schemas too, here is a bag of tuples @outputSchema('some_bag:bag{t:(field_1:chararray, field_2:int)}') def bag_udf(): return [ ('hi',1000), ('there',2000), ('bill',0) ] #and here is a map @outputSchema('something_nice:map[]') def my_map_maker(): return {"a":"b", "c":"d", "e","f"} So outputSchema can be used to imply that a function outputs one or a combination of basic types. Those types are: - chararray: like a string - bytearray: a bunch of bytes in a row. Like a string but not as human friendly - long: long integer - int: normal integer - double: floating point number - datetime - boolean If no schema is specified then Pig assumes that the UDF outputs a bytearray. UDF arguments Not only does a UDF have outputs but inputs as well! This sentence should be filed under ‘dah’. I reserved it for a separate section so as not to clutter the discussion on output schemas. This part is fairly straight-forward so I’m just going to breeze through it… First some UDFs: def deal_with_a_string(s1): return s1 + " for the win!" def deal_with_two_strings(s1,s2): return s1 + " " + s2 def square_a_number(i): return i*i def now_for_a_bag(lBag): lOut = [] for i,l in enumerate(lBag): lNew = [i,] + l lOut.append(lNew) return lOut And here we make use of those UDFs in a PigLatin script: REGISTER 'myudf.py' using jython as myudfs users = LOAD 'user_data' AS (firstname: chararray, lastname:chararray,some_integer:int); winning_users = FOREACH users GENERATE myudfs.deal_with_a_string(firstname); full_names = FOREACH users GENERATE myudfs.deal_with_two_strings(firstname,lastname); squared_integers = FOREACH users GENERATE myudfs.square_a_number(some_integer); users_by_number = GROUP users by some_integer; indexed_users_by_number = FOREACH users_by_number GENERATE group,myudfs.now_for_a_bag(users); Beyond Standard Python UDFs There are a couple of gotchas to using Python in the form of a UDF. Firstly, even though we are writing our UDFs in Python, Pig executes them in Jython. Jython is an implementation of Python that runs on the Java Virtual Machine (JVM). Most of the time this is not an issue as Jython strives to implement all of the same features of CPython but there are some libraries that it doesn’t allow. For example you can’t use numpy from Jython. Besides that, Pig doesn’t really allow for Python Filter UDFs. You can only do stuff like this: user_messages = LOAD 'user_twits' AS (name:chararray, message:chararray); --add a field that says whether it is naughty (1) or not (0) messages_with_rudeness = FOREACH user_messages GENERATE name,message,contains_naughty_words(message) as naughty; --then filter by the naughty field filtered_messages = FILTER messages_with_rudeness by (naughty==1); -- and finally strip away the naughty field rude_messages = FOREACH filtered_messages GENERATE name,message; Python Streaming UDFs Pig allows you to hook into the Hadoop Streaming API, this allows us to get around the Jython issue when we need to. If you haven’t heard of Hadoop Streaming before, here is the low down: Hadoop allows you to write mappers and reducers in any language that gives you access to stdin and stdout. So that’s pretty much any language you want. Like Python 3 or even Cow . Since this is a Python tutorial the examples that follow will all be in Python but you can plug in whatever you want. Here’s a simple Python streaming script, lets call it simple_stream.py : #! /usr/bin/env python import sys import string for line in sys.stdin: if len(line) == 0: continue l = line.split() #split the line by whitespace for i,s in enumerate(l): print "{key}\t{value}\n".format(key=i,value=s) # give out a key value pair for each word in the line The aim is to get Hadoop to run the script on each node. That means that the hash bang line ( #! ) needs to be valid on every node, all the import statements must be valid on every node (any packages imported must be installed on each node); and any other system level files or resources accessed within the Python script must be accessible in the same way on every node. Ok, onto the Pig stuff… To make the streaming UDF accessible to Pig we make use of the define statement. You can read all about it here Here is how we can use it with our simple_stream script: DEFINE stream_alias 'simple_stream.py' SHIP('simple_stream.py'); user_messages = LOAD 'user_twits' AS (name:chararray, message:chararray); just_messages = FOREACH user_messages generate message; streamed = STREAM just_messages THROUGH stream_alias; DUMP streamed; Lets look at that that DEFINE statement a little closer. The general format we are using is: DEFINE alias 'command' SHIP('files'); The alias is the name we use to access our streaming function from within our PigLatin script. The command is the system command Pig will call when it needs to use our streaming function. And finally SHIP tells Pig which files and dependencies Pig needs to distribute to the Hadoop nodes for the command to be able to work. Then once we have the resources we want to pass though the our streaming function we just use the STREAM command as above. And that’s it Well, sortof. PigLatin is quite a big thing, this tutorial just barely scraped the surface of its capabilities. If all the LOADing and FOREACHing and suchlike didn’t make sense to you the I would suggest checking out a more introductory PigLatin tutorial before coming back here. This tutorial should be enough to get you started in using Python from within Pig jobs. Python is also quite a big thing. Understanding the Python import system is really worthwhile if you want to use Python on a Hadoop cluster. It’s also worthwhile understanding some little details like how Python decorators work. There are also some more technical ways of calling Python from Pig, this tutorial aimed to be an introduction to UDFs, not a definitive guide. For more examples and more in-depth discussions of the different decorators and suchlike that Pig makes available to Jython based UDFs I would suggest taking a look at Pig’s official documentation. Another topic only touched on briefly was Hadoop Streaming, this in itself is a powerful technology but actually pretty easy to use once you get started. I’ve made use of the Streaming API many times without needing anything as complicated as PigLatin - it’s worthwhile being able to use that API as a standalone thing.
http://126kr.com/article/5ycbr0y51t1
CC-MAIN-2017-09
refinedweb
1,875
68.5
Dart’s built_value for Immutable Object Models Last week I wrote about built_collection. I finished by remarking that to really make use of immutable collections, you need immutable values. So here we are: built_value. This is the second major piece behind my talk at Dart Developer Summit (video). Value Types The built_value package is for defining your own value types. The term has a precise meaning, but we use it informally to mean types where equality is based only on value. For example, numbers: my 3 is equal to your 3. Not only that: my 3 will always equal your 3; it can’t change to be 4, or null, or a different type altogether. Value types are naturally immutable. This makes them simple to interact with and to reason about. This all sounds terribly abstract. What are value types good for? Well, it turns out: a lot. A whole lot. Arguably — and I do argue this, often — any class that’s used to model the real world should be a value type. Observe: var user1 = new User(name: "John Smith"); var user2 = new User(name: "John Smith"); print(user1 == user2); What should it print? Crucially, both instances are supposed to refer to someone in the real world. Because their values are identical they must refer to the same person. So they must be considered equal. What about immutability? Consider: user1.nickname = 'Joe'; What does updating a “User” nickname mean? It could imply any number of changes; perhaps the welcome text on my web page uses the nickname, and that should be updated. I probably have some storage somewhere, so that will need updating too. I now have two major problems: - I don’t know who has a reference to “user1”. The value has just changed under them; depending on how they’re using it, this could have any number of unpredictable effects. - Anyone holding “user2” or similar is now holding a value that’s out of date. Immutability can’t help with the second problem, but it does remove the first. It means there are no unpredictable updates, just explicit ones: var updatedUser = new User(name: "John Smith", nickname: "Joe"); saveToDatabase(updatedUser); // Database will notify frontend. Crucially, it means changes are local until explicitly published. This leads to simple code that’s easy to reason about — and to make both correct and fast. The Problem with Value Types So, the obvious question: if value types are so useful, why don’t we see them everywhere? Unfortunately they’re extremely laborious to implement. In Dart and in most other Object Oriented languages, a large amount of boilerplate code is needed. In my talk at the Dart Developer Summit I showed how a simple two-field class needs so much boilerplate it fills a whole slide (video). Introducing built_value We need either a language feature — which is exciting to discuss, but unlikely to arrive any time soon — or some form of metaprogramming. And what we find is that Dart already has a very nice way to do metaprogramming: source_gen. The goal is clear: make it so easy to define and use value types that we can use them wherever a value type makes sense. First we’ll need a quick detour to look at how this problem can be approached with source_gen. The source_gen tool creates generated source in new files next to your manually maintained source, so we need to leave room for a generated implementation. That means an abstract class: abstract class User { String get name; @nullable String get nickname; } That has enough information to generate an implementation. By convention generated code starts with “_$”, to mark it as private and generated. So the generated implementation will be called “_$User”. To allow it to extend “User” there will be a private constructor for this purpose called “_”: === user.dart === abstract class User { String get name; @nullable String get nickname; User._(); factory User() = UserImpl; } === user.g.dart is generated by source_gen === class _$User extends User { String name; String nickname; _$User() : super._(); } We need to use Dart’s “part” statement to pull in the generated code: === user.dart === library user; part 'user.g.dart'; abstract class User { String get name; @nullable String get nickname; User._(); factory User() = _$User; } === user.g.dart is generated by source_gen === part of user; class _$User extends User { String name; String nickname; _$User() : super._(); // Generated implementation goes here. } We’re getting somewhere! We have a way to generate code and plug it into the code we write by hand. Now back to the interesting part: what you actually have to write by hand and what built_value should generate. We’re missing a way to actually specify values for the fields. We could think about using named optional parameters: factory User({String name, String nickname}) = _$User; But this has a couple of drawbacks: it forces you to repeat all the field names in the constructor, and it only provides a way to set all the fields in one go; what if you want to build up a value piece by piece? Fortunately, the builder pattern comes to the rescue. We’ve already seen how well it works for collections in Dart — thanks to the cascade operator. Assuming we have a builder type, we can use that for the constructor — by asking for a function that takes a builder as a parameter: abstract class User { String get name; @nullable String get nickname; User._(); factory User([updates(UserBuilder b)]) = _$User; } That’s a bit surprising, but it leads to a very simple syntax for instantiation: var user1 = new User((b) => b ..name = 'John Smith' ..nickname = 'Joe'); What about creating new values based on old ones? The traditional builder pattern provides a “toBuilder” method to convert to a builder; you then apply your updates and call “build”. But a nicer pattern for most use cases is to have a “rebuild” method. Like the constructor, it takes a function that takes a builder, and provides for easy inline updates: var user2 = user.rebuild((b) => b ..nickname = 'Jojo'); We do still want “toBuilder”, though, for cases when you want to keep a builder around for a little while. So we want two methods for all our value types: abstract class Built<V, B> { // Creates a new instance: this one with [updates] applied. V rebuild(updates(B builder)); // Converts to a builder. B toBuilder(); } You don’t need to write the implementation for these, built_value will generate it for you. So you can just declare that you “implement Built”: library user; import 'package:built_value/built_value.dart'; part 'user.g.dart'; abstract class User implements Built<User, UserBuilder> { String get name; @nullable String get nickname; User._(); factory User([updates(UserBuilder b)]) = _$User; } And that’s it! A value type defined, an implementation generated and easy to use. Of course, the generated implementation isn’t just fields: it also provides “operator==”, “hashCode”, “toString” and null checks for required fields. I’ve skipped over one major detail, though: I said “assuming we have a builder type”. Of course, we’re generating code, so the answer is simple: we’ll generate it for you. The “UserBuilder’ referred to from “User” is created in “user.g.dart”. …unless you wanted to write some code in the builder, which is a perfectly reasonable thing to want to do. If that’s what you want, you follow the same pattern for the builder. It’s declared as abstract, with a private constructor and a factory that delegates to the generated implementation: abstract class UserBuilder extends Builder<V, B> { @virtual String name; @virtual String nickname; // Parses e.g. John "Joe" Smith into username+nickname. void parseUser(String user) { ... } UserBuilder._(); factory UserBuilder() => _$UserBuilder; } The “@virtual” annotations come from “package:meta”, and are needed to allow the generated implementation to override the fields. Now that you’ve added utility methods to your builder you can use them inline just like you could assign to fields: var user = new User((b) => b..parseUser('John "Joe" Smith')); The use cases for customizing a builder are relatively rare, but they can be very powerful. For example, you might want your builders to implement a common interface for setting shared fields, so they can be used interchangeably. Nested Builders There’s a major feature of built_value you haven’t seen yet: nested builders. When a built_value field holds a built_collection or another built_value, by default it’s available in the builder as a nested builder. This means you can update deeply nested fields more easily than if the whole structure was mutable: var structuredData = new Account((b) => b ..user.name = 'John Smith' ..user.nickname = 'Joe' ..credentials.email = 'john.smith@example.com' ..credentials.phone.country = Country.us ..credentials.phone.number = '555 01234 567'); var updatedStructuredData = structuredData.rebuild((b) => b ..credentials.phone.country = Country.switzerland ..credentials.phone.number = '555 01234 555'); Why “more easily” than if the structure was mutable? Firstly, the “update” method provided by all builders means you can enter a new scope whenever you like, “restarting” the cascade operator and making whatever updates you want both concisely and inline: var updatedStructuredData = structuredData.rebuild((b) => b ..user.update((b) => b ..name = 'Johnathan Smith') ..credentials.phone.update((b) => b ..country = Country.switzerland ..number = '555 01234 555')); Secondly, nested builders are automatically created as needed. For example, in built_value’s benchmark code we define a type called Node: abstract class Node implements Built<Node, NodeBuilder> { @nullable String get label; @nullable Node get left; @nullable Node get right; Node._(); factory Node([updates(NodeBuilder b)]) = _$Node; } And the auto creation of builders lets us create whatever tree structure we want inline: var node = new Node((b) => b ..left.left.left.right.left.right.label = 'I’m a leaf!' ..left.left.right.right.label = 'I’m also a leaf!'); var updatedNode = node.rebuild((b) => b ..left.left.right.right.label = 'I’m not a leaf any more!' ..left.left.right.right.right.label = 'I’m the leaf now!'); Did I mention a benchmark? When updating, built_value only copies the parts of the structure that need updating, reusing the rest. So it’s fast — and memory efficient. But you don’t just have to build trees. With built_value you have at your disposal fully typed immutable object models … that are as fast and powerful as efficient immutable trees. You can mix and match typed data, custom structures like the “Node” example, and collections from built_collection: var structuredData = new Account((b) => b ..user.update((b) => b ..name = 'John Smith') ..credentials.phone.update((b) => b ..country = Country.us ..number = '555 01234 567') ..node.left.left.left.account.update((b) => b ..user.name = 'John Smith II' ..user.nickname = 'Is lost in a tree') ..node.left.right.right.account.update((b) => b ..user.name = 'John Smith III')); These are the value types that I’m talking about when I argue most data should be value types! More on built_value I’ve covered why built_value is needed and what it looks like to use. There’s more to come: built_value also provides EnumClass, for classes that act like enums, and JSON serialization, for server/client communication and data storage. I’ll talk about those in future articles. After that I’ll dig into the chat example that uses built_value in and end to end system with server and client. Edit: next article.
https://medium.com/dartlang/darts-built-value-for-immutable-object-models-83e2497922d4
CC-MAIN-2019-18
refinedweb
1,902
65.32
A repository where I can work on whatever random RPM macros come to mind. Feel free to file issues if you have macros you'd like to see in Fedora, or send pull requests. Eventually I will add useful macros to the appropriate packages so that they can be used throughout Fedora. You should be able to do fedpkg prep or fedpkg mockbuild from a checkout of this repo to see the packager-facing portion of these macros in action. Macros are grouped into individual files by function. Many macros have no output or effect on the specfile; they instead add objects (generally functions) to the Lua namespace. Such functions live in the 'fedora' table, but there is generally a macro which imports them into the global namespace. Macros for general use in Fedora should never pollute the global namespace and thus should never use these import functions. Packagers can use them in their own specs, of course. Context is important for many macros and the functions they define. Some assume they will be evaluated or called while RPM is processing a scriptlet ( %prep, %build, %install, etc.) and they may simply output code intended to be interpreted by the shell. Outside of a scriptlet, this will be interpreted by RPM as regular specfile directives and will likely cause an error. Similarly, some functions or macros may output specfile directives which will cause errors if evaluated inside of a scriptlet. Note that macros themselves can begin scriptlets, and that once the first scriptlet tag (or the %description tag) is parsed by RPM, no more specfile directives will be parsed; everything that follows is either being run by the shell or interpreted as a file list. Note that RPM will in some cases re-parse macro output. You must be careful when using percent signs in strings passed to macros which output text into the parse stream, because you may end up with recursive expansions. Defines one macro, %omit_patch. It removes a patch from the set of patches which %autosetup will apply. This lets you use %autosetup even if you have a patch which doesn't always need to be applied. %omit_patch takes an integer, which is the patch number (i.e. N in PatchN:) to omit, or a string, which is the patch name (i.e. XXX in Patch7: XXX) to omit. Contains two macros: %fedora_utility_init fedoratable. %fedora_utlity_import The included functions: basename get_numbered_patch(N), get_numbered_source(N) Gets the full path to the patchfile or sourcefile given by N. fedora.get_numbered_source(7) will produce the full path to the source file defined earlier in the spec with Source7: Note that Patch: and Source: define the zeroth patch/source file, not the first. If the given number does not correspond to a currently defined patch/source file, nil is returned. echo(string) Called while RPM is parsing scriptlet, will cause that scriptlet to echo the provided string (by outputting an echo statement into the spec). Note that string will be interpreted by the shell as an argument to echo. Leading dashes or shell metacharacters will not print literally. Will almost certainly cause errors if called while RPM is not parsing a scriptlet. exit When called in a scriptlet, will cause that scriptlet to fail (by outputting exit 1). Will almost certainly cause errors if called while RPM is not parsing a scriptlet. rpmerror(string) %errormacro, which causes RPM to store the provided string to be used in the final error report. Doesn't cause any output or interrupt any current scriptlet. rpmerror_exit(string) echothe string, call rpmerrorand then call exit. getflag(flag) Not implemented. Returns true if the flag was passed to the macro in which the function is called, and false otherwise. getioption(option) If option was passed to the macro in which the function is called, returns the value of that option as a string. Returns nil otherwise.
https://pagure.io/misc-rpm-macros
CC-MAIN-2018-39
refinedweb
647
64
I was required to design a circuit to take in ADC value and then display it on the LCD. I have no problem in configuring ADC ports, taking ADC values, as well as setting up the LCD Screen. I want to show real voltage value (0to 5 v),for example voltage is 4.3 volt , I dont know what shoul I do . I read from adc , now i have adc value(0 to 255). My problem is, how do I convert the value taken from ADRESH and display it onto LCD screen? p/s: I am using MPLAB c18 compiler and PIC 18f4550. thank you very much. this code shown 0 to 255 (digital value): Code: Select all #include <p18f4550.h> #include <delays.h> #include "LCD.h" #include <stdio.h> #include"stdlib.h" #pragma config FOSC = INTOSCIO_EC //Internal oscillator, port function on RA6, EC used by USB #pragma config WDT = OFF //Disable watchdog timer unsigned char msg[16]; void main() { int adc_in; lcdinit(); while(1) { ADCON0bits.GO_DONE = 1;//Start A/D Conversion while(ADCON0bits.GO_DONE != 0);//Loop here until A/D conversion completes adc_in=ADRESH;//Set the delay to the 8 MSB locate(1,1); sprintf(msg,"%d",adc_in); lcdprint(msg); } }
https://forum.sparkfun.com/viewtopic.php?f=4&t=38646
CC-MAIN-2019-09
refinedweb
201
68.16
Talk:Biblical creation on aSK I started out to do a side-by-side on this but my other half stopped me - apparently throwing a computer mouse at the wall and jumping on the keyboard are not approved activities. Oh I did find this: The origin of the large variations in magnetization direction recorded in two different lava flows at Steens Mountain, Oregon, is discussed. The time constant for lava-flow cooling and the amplitude of the variations imply extremely large variation rates for the ambient field, four orders of magnitude above the present-day secular variation. Two different mechanisms in the liquid outer core (namely turbulence and inertial waves) that may produce such variations are investigated. Energetic considerations indicate that the observed impulsive variations are unlikely to be of internal (core) origin. On the other hand, it is shown that during reversals, when the dipolar field is reduced by a factor ten, magnetic storm perturbations at the Earth's surface are enhanced by a factor two with respect to their present amplitude. It follows that intense magnetic storms can significantly contribute to ambient field changes during reversals. The amplitude of the impulsive variations recorded at Steens Mountain is shown to be consistent with the effect of a strong magnetic storm occurring during cooling of a flow. The direction of the variations is also shown to be consistent with this mechanism if the dipolar field contributes significantly to the internal field during the impulsive change. here This message brought to you by: respondand honey 11:45, 31 July 2009 (UTC) - Is RationalWiki the appropriate namespace for this? I thought this was more for projected related material. - π 13:31, 31 July 2009 (UTC) Deletion[edit] The idea of a side-by-side could be interesting, but I'm not sure that aSK are saying anything particularly interesting. aSK is small potatoes, and their points are either easily refuted or utterly vague (such as saying that God can bypass the laws of nature at will). -- Ask me about your mother 16:06, 11 January 2010 (UTC) - Leaning towards agreement. Should we create a page for every two-bit cretard who can copy and paste? ASK offers nothing original, is not notable and does not deserve the recognition. However, as it is on mission with regards to analyzing and refuting, I can see the intention. Seeing as how PJR is just CMI rebranded, how about focusing attention there? — Sincerely, Neveruse / Talk / Block 16:13, 11 January 2010 (UTC) - Think it's worth my putting together a side-by-side and seeing if it works? I started writing one, but stopped when I realised how bad the original claims are. -- Ask me about your mother 16:20, 11 January 2010 (UTC) - I kind of thought that would happen. Most of them are so unscrupulous that they are easily dismissed. It does open the door for snark, though. — Sincerely, Neveruse / Talk / Block 16:29, 11 January 2010 (UTC) - It's an easy target. But no matter how simple it is to refute, we should still do it. There's no guidelines for notability as far as RW is concerned. pathetic 17:02, 11 January 2010 (UTC) - I'll get a draft together. I like Neveruse's idea of snarking it to death. -- Ask me about your mother 19:18, 11 January 2010 (UTC) - Yeah, I know about the lack of notability guidelines...I just like kicking dirt on ASK ;) — Sincerely, Neveruse / Talk / Block 19:22, 11 January 2010 (UTC) - How about copying it over into A Whorehouse of Liars? ħuman 21:09, 11 January 2010 (UTC) - I'm working on the expanded version. You can see it as User:Concernedresident/sandbox2. Please don't edit it, but feel free to suggest changes or additions. Personally I think it's looking fairly good at the moment. -- Ask me about your mother 21:11, 11 January 2010 (UTC) (undent) I've posted the rewrite. I'm confident with most of it, but some oversight on the final section would be useful. -- Ask me about your mother 19:22, 12 January 2010 (UTC) Fossil magnetism reveals rapid reversals of the earth’s magnetic field[edit] I just had a read of Snelling's article in which he states "Just as all the molecules in the compass needle align themselves along the earth’s magnetic field...". Er, no that is not how compass needles work at all. This guy is supposed to be a physicist? Lily Inspirate me. 07:41, 21 April 2010 (UTC) T-Rex Blood[edit] Should we refute the "Scientific evidence part? User:Mectrixctic 23:39, 16 August 2010 (UTC)
https://rationalwiki.org/wiki/Talk:Biblical_creation_on_aSK
CC-MAIN-2019-26
refinedweb
775
62.38
iSectorMeshCallback Struct ReferenceSet a callback which is called when a mesh is added or removed from this sector. More... [Crystal Space 3D Engine] #include <iengine/sector.h> Inheritance diagram for iSectorMeshCallback: Detailed DescriptionSet a callback which is called when a mesh is added or removed from this sector. This callback is used by: Definition at line 114 of file sector.h. Member Function Documentation New mesh. Note that this is also called if the mesh is added as child of another mesh that is in the sector. Remove mesh. The documentation for this struct was generated from the following file: Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/structiSectorMeshCallback.html
CC-MAIN-2014-42
refinedweb
112
58.18
In this tutorial, you will learn how to create a slider application in openFrameworks to control a servo motor connected to Arduino using the Wekinator machine learning software. The openFrameworks application will send the X and Y values to Wekinator at a specific port—6448—and it will use the OSC communication to send the data. Wekinator will become trained according to the X and Y values and will send an OCS message back to openFrameworks at port 12000. openFrameworks will then send this message to the Arduino through the serial communication where the servo motor will be controlled. Circuit Diagram Make the following connections between your servo motor and Arduino: - The yellow wire is the signal wire. Connect it to digital pin 2 on the Arduino. - The brown wire is the ground wire. Connect it to GND on the Arduino. - The red wire is the power wire. Connect it to the 5V pin on the Arduino. Make your connections between the servo motor and Arduino UNO according to the circuit diagram above. Programming the Arduino In the Arduino code, we have first included the servo library and initialized some variables for the servo. Then in the setup function, we attached the servo at pin 2 of Arduino and started the serial communication. In the loop function, we looked for the incoming data and if any data will be available, we will read it and according to this value the servo motor will move. // Code to control servo motor from openframeworks using wekinator #include <Servo.h> //including the servo library Servo sg90; //including a variable for servo named sg90 int servo_pin = 2; void setup() { sg90.attach(servo_pin); //Giving the command to arduino to control pin 2 for servo // Start the serial communication Serial.begin(9600); } void loop() { if (Serial.available()) { // If there is any data available int inByte = Serial.read(); // Get the incoming data sg90.write(inByte); } } Programming openFrameworks On the openFrameworks side, we have three files that will be used to send and receive the data from Wekinator and will also help us in sending the data to the Arduino. Main.cpp Main.cpp runs the app and opens up the output window. The following code is for Main.cpp: The OfApp.cpp code is where the serial communication exists between the Arduino and openFrameworks. It is also the OSC communication between openFrameworks and Wekinator. #include "ofApp.h" #include "ofxOsc.h" //-------------------------------------------------------------- void ofApp::setup(){ sender.setup(HOST, SENDPORT); receiver.setup(RECEIVEPORT); ofSetFrameRate(60); serial.listDevices(); vector <ofSerialDeviceInfo> deviceList = serial.getDeviceList(); // this should be set to whatever com port your serial device is connected to. // (ie, COM4 on a pc, /dev/tty.... on linux, /dev/tty... on a mac) // arduino users check in arduino app.... int baud = 9600; serial.setup(0, baud); //open the first device // windows example //serial.setup("COM10", baud); // mac osx example //serial.setup("/dev/tty.usbserial-A4001JEC", baud); //linux example //serial.setup("/dev/ttyUSB0", baud); } //-------------------------------------------------------------- void ofApp::update(){ // Sending data to the wekinator ofxOscMessage m; m.setAddress(string(SENDMESSAGE)); m.addFloatArg((float)mouseX); m.addFloatArg((float)mouseY); sender.sendMessage(m, false); // looking for incoming messages from wekinator while (receiver.hasWaitingMessages()) { ofxOscMessage msg; receiver.getNextMessage(&msg); // Get Message if (msg.getAddress() == RECEIVEMESSAGE) { outputData = msg.getArgAsFloat(0); // Stored it } } serial.writeByte(outputData); // sending the data to arduino } //--------------------------------------------------------------, we first setup up the sender and receiver and then we look for the serial port. Once one is found, it will automatically connect. In the update function, we first send the X and Y values of the slider to Wekinator. Then, we look for an incoming message from the receiver. When any data is available, it will store it and send it to the Arduino. In the draw function, we made a slider that will move when we drag it. OfApp.h The OfApp.h file is the header file. #pragma once #include "ofMain.h" #include "ofxOsc.h" //Defaults for OSC: #define HOST "127.0.0.1" #define SENDPORT 6448 #define RECEIVEPORT 12000 #define SENDMESSAGE "/wek/inputs" #define RECEIVEMESSAGE "/wek/outputs" class ofApp : public ofBaseApp{ public: void setup(); void update(); void draw(); void mouseDragged(int x, int y, int button); ofxOscSender sender; ofxOscReceiver receiver; ofSerial serial; int mouseX = 0; int mouseY = 0; int boxSize = 30; char outputData; }; How to Run the openFrameworks Sketch Create a new project using the project generator in openFrameworks. The ‘Ofxosc’ addon is required for this project. Copy the codes given in the previous section to the respective files and build the project. After building the project, you will see the output window like the one shown below. The output window in openFrameworks. Setting up the Project in Wekinator Once openFrameworks is set up, open Wekinator and adjust the settings to match the ones shown in the figure below. Set the inputs to the 2 and the outputs to 1. Select the output type to “custom” and click "configure". Set the input, output, and type fields in Wekinator to match the ones shown above. Next, set the minimum value to 0 and maximum value to 180 and click "done". Customize your output types to be a minimum of 0 and a maximum of 180. Click "next" and the New Project window will appear. After following the steps listed above and clicking next, you will be brought to the New Project window in Wekinator. Drag the green box in the processing window to the center of the left-hand side of the screen and click "randomize". Start the recording for a second and allow Wekinator to record some samples. Drag the green box to the left-hand side of the screen to record your first output. Now drag the green box in the processing window to the center of the window and click "randomize". Start the recording for half a second. Move the green box to the center to allow Wekinator to record more samples. Finally, drag the green box in the processing window to center right-hand side and click on randomize. Start the recording for half a second. Move the green box to the right-hand side of the window to allow Wekinator to make a third set of samples. Once you've made three sets of samples for Wekinator, click "train" and then click "run". Now, when you drag the slider in the openFrameworks window, it will control and move the servo connected to the Arduino.
https://maker.pro/wekinator/projects/how-to-control-a-servo-motor-with-machine-learning-and-openframeworks
CC-MAIN-2020-45
refinedweb
1,062
58.89
Okay I am working on arrays in class this week and since it is an online course I don't have anyone to turn to. YAY me! Basically this is what I have been asked to do.. For the TVShow.java class I have this: import javax.swing.*; public class TVShow { private int tvChannel = 0; private String showDay = ""; private String showName = ""; public TVShow(String name, String day, int channel) { showName = name; showDay = day; tvChannel = channel; } public void setTVChannel(int theChannel) { tvChannel = theChannel; } public int getTVChannel() { return tvChannel; } public void setDay(String weekDay) { showDay = weekDay; } public String getDay() { return showDay; } public void setShowName(String nameOfShow) { showName = nameOfShow; } public String getShowName() { return showName; } public void setDisplay() { JOptionPane.showMessageDialog(null, showName + " is on " + showDay + "'s, " + "on channel " + tvChannel); } } For the Main method in the useTVShow.java I have this: import javax.swing.*; import java.util.*; public class UseTVShow { public static void main(String[] args) { final int TV_MAX = 5; TVShow[] myShowArray = new TVShow[TV_MAX]; String tvInput = JOptionPane.showInputDialog(null, "Please enter a television show: "); for(int d = 0; d < TV_MAX; ++d) myShowArray[d].setShowName(tvInput); for(int d = 0; d < TV_MAX; ++d) myShowArray[d].setDay("Monday"); for(int d = 0; d < TV_MAX; ++d) myShowArray[d].setTVChannel(1 + d); for(int d = 0; d < TV_MAX; ++d) JOptionPane.showMessageDialog(null, "Show: " + d + " " + myShowArray[d].getShowName() + " airs on " + myShowArray[d].getDay() + "'s on channel " + myShowArray[d].getTVChannel()); } } I am having two issues here, first: I am unable to enter more than 1 television show. Second: I am unable to display what I have entered or even the other variables from the get and set methods. HELP!
https://www.daniweb.com/programming/software-development/threads/381676/help-with-arrays-specifically-pulling-from-1-class-to-another
CC-MAIN-2021-17
refinedweb
269
57.67
I spend a lot of my time building web applications. I want to get as many of my ideas out into the world as possible and nothing has made that simpler than Vercel. Their technologies — Next.js and their cloud platform — have helped me prototype and build full-fledged applications in days! But even with the wealth of tools they produce, auth was always a barrier for me. No matter how quickly I could build everything else – I always struggled with auth. Until someone showed me Magic. Magic is a new product that makes it simple to add email link login, like the ones used by Slack or Medium, to your application. It brings an amazing developer experience, minimal work to integrate and world class security. It may just be the product I have been looking for to solve my authentication issues. This article will dive into how to integrate Magic into your Next.js app by walking you through how I wrote Boost, a real world application that uses this approach. Boost is on a cutting edge Jamstack, written in Next.js, hosted on Vercel and using Magic for authentication. Check it out and see how it works. You can even launch your own Boost clone with one-click deploy on Vercel! Getting started We're going to cover how to integrate and allow Magic to work its... magic. We'll also look at how to use cookies, SWR and a little trick to ensure a fluid user experience for our app. Let's start with a fresh Next.js project by using npx create-next-app. You'll want to be fairly familiar with how Next.js works before going any further (if not, check out their tutorial). As we work through this high level tutorial, here is what we want our user flow to look like: - A user visits our app and clicks the login button. - They fill out the form with their email. - Magic authenticates them. - Issue some cookies and redirect them to the dashboard page. Creating the first pages First we'll replace the contents of the index route to add a link to login. Since Next.js uses file system routing to handle pages this is all the code we need to create a page. // pages/index.js import Link from 'next/link' export default function Home() { return <Link href="/login"><a>Login</a></Link> } Add the home for our users once they authenticate. We'll use the /dashboard route. // pages/dashboard.js export default function Dashboard() { return <h1>Dashboard</h1> } Now let's add a simple login page. Here is our pages/login.js file before we add any Magic code. // pages/login.js import { useRouter } from 'next/router' export default function Login() { const router = useRouter() const handleSubmit = async (event) => { event.preventDefault() const { elements } = event.target // Add the Magic code here //> ) } Now that all of our pages are set up, we can get into adding authentication. Set up Magic integration Our first step in adding authentication is to integrate Magic by creating an account, which you can do by heading over to magic.link. Getting the keys from Magic.link. Grab both the publishable and secret keys and place them in the environment ( .env.local) file like below. Also, we’ll add ENCRYPTION_SECRET, which will be used for encryption. You should create your own secret. # .env.local MAGIC_SECRET_KEY=sk_test_**************** # We’ll use the NEXT_PUBLIC_ prefix # to expose this variable to the browser. # See: NEXT_PUBLIC_MAGIC_PUB_KEY=pk_test_**************** ENCRYPTION_SECRET=you-should-create-your-own-secret-to-use-for-encryption .env.local. Once we have the keys, we'll need to use the Magic library to handle authenticating our users, so let's install that. # Use npm install if you’re not using yarn yarn add magic-sdk Authenticate with Magic Once we have the SDK installed we will need to import it and call Magic.loginWithMagicLink when the form is submitted. loginWithMagicLink will send an email to the authenticating user and they will follow a secure authentication process outside of our application. Once the user has successfully authenticated with Magic, they'll be instructed to return to our app. At that point Magic will return a decentralized identifier or DID which we can use as a token in our application. To get that process started, we'll call Magic inside handleSubmit. // pages/login.js import { useRouter } from 'next/router' import { Magic } from 'magic-sdk' export default function Login() { const router = useRouter() const handleSubmit = async (event) => { event.preventDefault() const { elements } = event.target // the Magic code const did = await new Magic(process.env.NEXT_PUBLIC_MAGIC_PUB_KEY) .auth .loginWithMagicLink({ email: elements.email.value }) //> ) } Magic is built using principles of distributed security. They run a secure in-house operation that delegates most of the security storage to Amazon's Hardware Security Module or HSM (you can learn more about HSMs and how Magic uses them in their security documentation). Not even Magic employees have access to the HSM. They've locked everyone out of ever getting access to the keys stored there. Since Magic runs in this distributed manner, their authentication returns a decentralized identifier. This identifier can be exchanged with Magic for information about the user. Once we have that DID we know that Magic has successfully authenticated that user, and our app can take over. Issue an authorization token Now that Magic has cleared the user and given us a DID to work with, we want to make use of it. With the DID we can create a user in our own database and issue an authorization token so that they have access to our restricted APIs. To do that, we are going to create an API route called login.js. (For the purposes of this article we aren't going to create any users but you can check out the code inside Boost or the API to see how that would work.) We're going to start with a skeleton node.js request handler in the pages/api/login.js route. // pages/api/login.js export default async (req, res) => { if (req.method !== 'POST') return res.status(405).end() // exchange the DID from Magic for some user data // TODO // Author a couple of cookies to persist a users session // TODO res.end() } To get our login form submission to talk to the API route we just created, we'll add a fetch request. That request will send the decentralized id in the Authorization header so we can access it on the server. // pages/login.js const handleSubmit = async (event) => { event.preventDefault() const { elements } = event.target; // the magic code const did = await new Magic(process.env.NEXT_PUBLIC_MAGIC_PUB_KEY) .auth .loginWithMagicLink({ email: elements.email.value }) // Once we have the did from magic, login with our own API const authRequest = await fetch('/api/login', { method: 'POST', headers: { Authorization: `Bearer ${did}` } }) if (authRequest.ok) { // We successfully logged in, our API // set authorization cookies and now we // can redirect to the dashboard! router.push('/dashboard') } else { /* handle errors */ } } Persisting authorization state Once our front and back end are talking to each we can go ahead and persist our login state with a cookie - or two. To get the user data we're going to exchange that DID token with Magic. Once we have the user we'll store that in our cookie - as if it was an object from our own database. To do so, we will have to add Magic's node package. yarn add @magic-sdk/admin Magic's admin API will give us the ability to trade that DID for user information. In pages/api/login.js we'll add a few lines to integrate the Magic admin package. // pages/api/login) // Author a couple of cookies to persist a users session // TODO res.end() } Take a minute to pause and appreciate that integrating world class authentication with Magic took three lines of code. One on the client side: await new Magic(process.env.NEXT_PUBLIC_MAGIC_PUB_KEY) .auth .loginWithMagicLink({ email: elements.email.value }) And two on the server side: magic.utils.parseAuthorizationHeader(req.headers.authorization) await magic.users.getMetadataByToken(did) With those two lines you are getting the best of modern security. Magic is the culmination of years spent building and running a security product called Fortmatic. Thanks to the creators of Fortmatic, and now Magic, you get it all for two lines of code. If you want to read more about the security of Magic you can take a look at their awesome documentation and FAQs. With authentication done, let's persist the user state in some cookies. Here is a prefab utility (or service) we can use to issue our cookies. In this service we will be creating two cookies: - One ( api_token) to our access token that will allow our authenticated users to make requests to protected resources in our backend. We will store this cookie as httpOnlyand securewhich will force this cookie to be sent in secure connection request only. - The other ( authed) to assist our client side navigation and fluid user experience. This cookie will need to be read with JavaScript, so although it will be marked as securefor HTTPS connections, we won't enable httpOnly. Once we have created those cookies, we'll attach them to the browser using the request header Set-Cookie. // lib/cookie.js import { serialize } from "cookie" const TOKEN_NAME = "api_token" const MAX_AGE = 60 * 60 * 8 function createCookie(name, data, options = {}) { return serialize(name, data, { maxAge: MAX_AGE, expires: new Date(Date.now() + MAX_AGE * 1000), secure: process.env.NODE_ENV === "production", path: "/", httpOnly: true, sameSite: "lax", ...options, }) } function setTokenCookie(res, token) { res.setHeader("Set-Cookie", [ createCookie(TOKEN_NAME, token), createCookie("authed", true, { httpOnly: false }), ]) } function getAuthToken(cookies) { return cookies[TOKEN_NAME] } export default { setTokenCookie, getAuthToken } To create the token we want to store in our cookie, we will use a library called Iron to encrypt and decrypt our user data. First, let’s install Iron: yarn add @hapi/iron Then make some modifications the the pages/api/login.js. // pages/api/login.js import {Magic} from '@magic-sdk/admin' import Iron from '@hapi/iron' import CookieService from '../../lib/cookie' export default async (req, res) => { if (req.method !== 'POST') return res.status(405).end() // exchange the did from Magic for some user data const did = req.headers.authorization.split('Bearer').pop().trim() const user = await new Magic(process.env.MAGIC_SECRET_KEY).users.getMetadataByToken(did) // Author a couple of cookies to persist a user's session const token = await Iron.seal(user, process.env.ENCRYPTION_SECRET, Iron.defaults) CookieService.setTokenCookie(res, token) res.end() } There we have it. The user is now authorized, and cookies are set to keep authorized state on the client. To make authenticated API calls, like getting information on the currently authorized user, we simply send a fetch request and our newly created cookie will be sent along with it. Using our hooks and cookies to get user data To use our cookie and get the user data we'll create a route called pages/api/user.js. In that route we can decrypt our token, and send back the data associated with that user. This isn't a complete implementation but it should give you an idea of what you need to do. // pages/api/user.js import Iron from '@hapi/iron' import CookieService from '../../lib/cookie' export default async (req, res) => { let user; try { user = await Iron.unseal(CookieService.getAuthToken(req.cookies), process.env.ENCRYPTION_SECRET, Iron.defaults) } catch (error) { res.status(401).end() } // now we have access to the data inside of user // and we could make database calls or just send back what we have // in the token. res.json(user) } On the frontend we will want the current authorized user to be easily accessible. For that we can create a React hook using another amazing Vercel package - SWR. First, let’s install swr: yarn add swr Here is what that hook might will look like: // hooks/useAuth.js import useSWR from "swr"; function fetcher(route) { /* our token cookie gets sent with this request */ return fetch(route) .then((r) => r.ok && r.json()) .then((user) => user || null); } export default function useAuth() { const { data: user, error, mutate } = useSWR("/api/user", fetcher); const loading = user === undefined; return { user, loading, error, }; } This useAuth hook will allow us to keep the latest info about the authorized user consistent across tabs and pages. If you want to know how that works you can read more about SWR here. Here is how we might use it to display the user email on our /dashboard.js page. // pages/dashboard.js import useAuth from "../hooks/useAuth"; export default function Dashboard() { const { user, loading } = useAuth(); return ( <> <h1>Dashboard</h1> {loading ? "Loading..." : user.email} </> ); } Handling unwanted page transitions The last piece to our auth puzzle is some smooth handling of routes for authenticated users. For example, the home for an authenticated user might be the /dashboard page. For anyone else it is simply the index. How do we handle redirecting users to the best page without flashing multiple interfaces? This is where that second cookie we set comes in. We'll inject a script tag with some redirect logic in the <head/> of our application. Handling redirections is now checking if that authed cookie is set and redirect to the appropriate page, something like so: if (document.cookie && document.cookie.includes('authed')) { window.location.href = "/dashboard" } Now we can add that logic to our / route and authenticated users will get automatically redirected to the dashboard. Here is what our pages/index.js might look like: // pages/index.js import Head from "next/head"; export default function Home() { return ( <> <Head> <script dangerouslySetInnerHTML={{ __html: ` if (document.cookie && document.cookie.includes('authed')) { window.location.<a>Login</a></Link> </> ); } Conclusion There we have it, a bunch of the best ingredients for creating the perfect auth recipe. Magic's simple and secure authentication and Vercel's beautiful development tools. Now whenever I build apps, Magic will be my go to for authentication with Next.js. Integrating authentication into Next.js applications has never been simpler. We're able to take advantage of years of security work and knowledge and plug it into our app with two lines of code. We are living in a time of simplicity in Jamstack development, and Magic is becoming a key component of that. Simple to integrate, simple for users and deployed on one of the simplest platforms ever built - Vercel.
https://vercel.com/blog/simple-auth-with-magic-link-and-nextjs
CC-MAIN-2021-17
refinedweb
2,394
57.27
LE BIG Sync Established event. More... #include <hci_api.h> LE BIG Sync Established event. Definition at line 795 of file hci_api.h. BIG handle. Definition at line 799 of file hci_api.h. Connection handles of the BIS's. Definition at line 808 of file hci_api.h. Number of new payloads in each BIS event. Definition at line 802 of file hci_api.h. Event header. Definition at line 797 of file hci_api.h. Number of times a payload is transmitted in a BIS event. Definition at line 804 of file hci_api.h. Time between two consecutive ISO anchor points. Definition at line 806 of file hci_api.h. Maximum size of the payload. Definition at line 805 of file hci_api.h. Number of Sub-Events in each BIS event in the BIG. Definition at line 801 of file hci_api.h. Number of BIS. Definition at line 807 of file hci_api.h. Offset used for pre-transmissions. Definition at line 803 of file hci_api.h. Status. Definition at line 798 of file hci_api.h. The maximum time, in microseconds, for transmission of SDUs of all BISes. Definition at line 800 of file hci.
https://os.mbed.com/docs/mbed-os/v6.14/mbed-os-api-doxy/struct_hci_le_big_sync_est_evt__t.html
CC-MAIN-2021-43
refinedweb
190
80.99
I was given this buffer and told to make a reverse input that get the last K lines in a file. From what I've been trying to do, every time I tried to run the code it says that used is not an attribute of Input. Can someone please tell me why this keeps happening? Thank you in advance. class Input: def __init___( self, file ): self.file = file # must open( <filename>, "rb" ) self.length = 0 self.used = 0 self.buffer = "" def read( self ): if self.used < self.length: # if something in buffer c = self.buffer[self.used] self.used += 1 return c else: self.buffer = self.file.read( 20 ) # or 2048 self.length = len( self.buffer ) if self.length == 0: return -1 else: c = self.buffer[0] self.used = 1 ` I'm going to go out on a limb here and try guessing that the problem is that you are using the wrong name for the __init__ magic method (as noticed by Hai Vu). Notice that there are three trailing underscores in your code instead of two. Since the __init__ method is the one called during the construction of the object to set its various attributes, the used attribute never gets set because the __init__ function never gets run. Afterwards, used is the first attribute accessed in Input.read, which makes Python complain about it being missing. If I'm right, remove the underscore and it will fix the problem (though there may be others).
https://codedump.io/share/lQRBrRijSf6M/1/using-a-tail-and-a-buffer-to-get-last-k-lines-in-a-file
CC-MAIN-2017-34
refinedweb
246
86.1
Catalyst::Dispatcher - The Catalyst Dispatcher This is the class that maps public urls to actions in your Catalyst application based on the attributes you set. Construct a new dispatcher. An arrayref of pre-loaded dispatchtype classes Entries are considered to be available as Catalyst::DispatchType::CLASS To use a custom class outside the regular Catalyst namespace, prefix it with a +, like so: +My::Dispatch::Type An arrayref of post-loaded dispatchtype classes Entries are considered to be available as Catalyst::DispatchType::CLASS To use a custom class outside the regular Catalyst namespace, prefix it with a +, like so: +My::Dispatch::Type Delegate the dispatch to the action that matched the url, or return a message about unknown resource Find an dispatch type that matches $c->req->path, and set args from it. returns a named action from a given namespace. Returns the named action by its full private path. Return all the action containers for a given namespace, inclusive. expand an action into a full representation of the dispatch. mostly useful for chained, other actions will just return a single action. Make sure all required dispatch types for this action are loaded, then pass the action to our dispatch types so they can register it if required. Also, set up the tree with the action containers. Loads all of the pre-load dispatch types, registers their actions and then loads all of the post-load dispatch types, and iterates over the tree of actions, displaying the debug information if appropriate. Get the DispatchType object of the relevant type, i.e. passing $type of Chained would return a Catalyst::DispatchType::Chained object (assuming of course it's being used.) Provided by Moose Catalyst Contributors, see Catalyst.pm This library is free software. You can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~jjnapiork/Catalyst-Runtime-5.90050/lib/Catalyst/Dispatcher.pm
CC-MAIN-2016-26
refinedweb
306
53.41
Yeah well, this is cool and everything, but where's my VR? I WANT MY VR! Hold on kid, we're getting there. In this article, we are going to set up a project that can be built and run with a virtual reality head-mounted display (HMD) and then talk more in depth about how the VR hardware technology really works. We will be discussing the following topics: - The spectrum of the VR device integration software - Installing and building a project for your VR device - The details and defining terms for how the VR technology really works (For more resources related to this topic, see here.) VR device integration software Before jumping in, let's understand the possible ways to integrate our Unity project with virtual reality devices. In general, your Unity project must include a camera object that can render stereographic views, one for each eye on the VR headset. Software for the integration of applications with the VR hardware spans a spectrum, from built-in support and device-specific interfaces to the device-independent and platform-independent ones. Unity's built-in VR support Since Unity 5.1, support for the VR headsets is built right into Unity. At the time of writing this article, there is direct support for Oculus Rift and Samsung Gear VR (which is driven by the Oculus software). Support for other devices has been announced, including Sony PlayStation Morpheus. You can use a standard camera component, like the one attached to Main Camera and the standard character asset prefabs. When your project is built with Virtual Reality Supported enabled in Player Settings, it renders stereographic camera views and runs on an HMD. The device-specific SDK If a device is not directly supported in Unity, the device manufacturer will probably publish the Unity plugin package. An advantage of using the device-specific interface is that it can directly take advantage of the features of the underlying hardware. For example, Steam Valve and Google have device-specific SDK and Unity packages for the Vive and Cardboard respectively. If you're using one of these devices, you'll probably want to use such SDK and Unity packages. (At the time of writing this article, these devices are not a part of Unity's built-in VR support.) Even Oculus, supported directly in Unity 5.1, provides SDK utilities to augment that interface (see,). Device-specific software locks each build into the specific device. If that's a problem, you'll either need to do some clever coding, or take one of the following approaches instead. The OSVR project In January 2015, Razer Inc. led a group of industry leaders to announce the Open Source Virtual Reality (OSVR) platform (for more information on this, visit) with plans to develop open source hardware and software, including an SDK that works with multiple devices from multiple vendors. The open source middleware project provides device-independent SDKs (and Unity packages) so that you can write your code to a single interface without having to know which devices your users are using. With OSVR, you can build your Unity game for a specific operating system (such as Windows, Mac, and Linux) and then let the user configure the app (after they download it) for whatever hardware they're going to use. At the time of writing this article, the project is still in its early stage, is rapidly evolving, and is not ready for this article. However, I encourage you to follow its development. WebVR WebVR (for more information, visit) is a JavaScript API that is being built directly into major web browsers. It's like WebGL (2D and 3D graphics API for the web) with VR rendering and hardware support. Now that Unity 5 has introduced the WebGL builds, I expect WebVR to surely follow, if not in Unity then from a third-party developer. As we know, browsers run on just about any platform. So, if you target your game to WebVR, you don't even need to know the user's operating system, let alone which VR hardware they're using! That's the idea anyway. New technologies, such as the upcoming WebAssembly, which is a new binary format for the Web, will help to squeeze the best performance out of your hardware and make web-based VR viable. For WebVR libraries, check out the following: - WebVR boilerplate: - GLAM: - glTF: - MozVR (the Mozilla Firefox Nightly builds with VR): - WebAssembly: 3D worlds There are a number of third-party 3D world platforms that provide multiuser social experiences in shared virtual spaces. You can chat with other players, move between rooms through portals, and even build complex interactions and games without having to be an expert. For examples of 3D virtual worlds, check out the following: - VRChat: - JanusVR: - AltspaceVR: - High Fidelity: For example, VRChat lets you develop 3D spaces and avatars in Unity, export them using their SDK, and load them into VRChat for you and others to share over the Internet in a real-time social VR experience. Creating the MeMyselfEye prefab To begin, we will create an object that will be a proxy for the user in the virtual environment. Let's create the object using the following steps: - Open Unity and the project from the last article. Then, open the Diorama scene by navigating to File | Open Scene (or double-click on the scene object in Project panel, under Assets). - From the main menu bar, navigate to GameObject | Create Empty. - Rename the object MeMyselfEye (hey, this is VR!). - Set its position up close into the scene, at Position (0, 1.4, -1.5). - In the Hierarchy panel, drag the Main Camera object into MeMyselfEye so that it's a child object. - With the Main Camera object selected, reset its transform values (in the Transform panel, in the upper right section, click on the gear icon and select Reset). The Game view should show that we're inside the scene. If you recall the Ethan experiment that we did earlier, I picked a Y-position of 1.4 so that we'll be at about the eye level with Ethan. Now, let's save this as a reusable prefabricated object, or prefab, in the Project panel, under Assets: - In Project panel, under Assets, select the top-level Assets folder, right-click and navigate to Create | Folder. Rename the folder Prefabs. - Drag the MeMyselfEye prefab into the Project panel, under Assets/Prefabs folder to create a prefab. Now, let's configure the project for your specific VR headset. Build for the Oculus Rift If you have a Rift, you've probably already downloaded Oculus Runtime, demo apps, and tons of awesome games. To develop for the Rift, you'll want to be sure that the Rift runs fine on the same machine on which you're using Unity. Unity has built-in support for the Oculus Rift. You just need to configure your Build Settings..., as follows: - From main menu bar, navigate to File | Build Settings.... - If the current scene is not listed under Scenes In Build, click on Add Current. - Choose PC, Mac, & Linux Standalone from the Platform list on the left and click on Switch Platform. - Choose your Target Platform OS from the Select list on the right (for example, Windows). - Then, click on Player Settings... and go to the Inspector panel. - Under Other Settings, check off the Virtual Reality Supported checkbox and click on Apply if the Changing editor vr device dialog box pops up. To test it out, make sure that the Rift is properly connected and turned on. Click on the game Play button at the top of the application in the center. Put on the headset, and IT SHOULD BE AWESOME! Within the Rift, you can look all around—left, right, up, down, and behind you. You can lean over and lean in. Using the keyboard, you can make Ethan walk, run, and jump just like we did earlier. Now, you can build your game as a separate executable app using the following steps. Most likely, you've done this before, at least for non-VR apps. It's pretty much the same: - From the main menu bar, navigate to File | Build Settings.... - Click on Build and set its name. - I like to keep my builds in a subdirectory named Builds; create one if you want to. - Click on Save. An executable will be created in your Builds folder. If you're on Windows, there may also be a rift_Data folder with built data. Run Diorama as you would do for any executable application—double-click on it. Choose the Windowed checkbox option so that when you're ready to quit, close the window with the standard Close icon in the upper right of your screen. Build for Google Cardboard Read this section if you are targeting Google Cardboard on Android and/or iOS. A good starting point is the Google Cardboard for Unity, Get Started guide (for more information, visit). The Android setup If you've never built for Android, you'll first need to download and install the Android SDK. Take a look at Unity manual for Android SDK Setup (). You'll need to install the Android Developer Studio (or at least, the smaller SDK Tools) and other related tools, such as Java (JVM) and the USB drivers. Android phone. The iOS setup A good starting point is Unity manual, Getting Started with iOS Development guide (). You can only perform iOS development from a Mac. You must have an Apple Developer Account approved (and paid for the standard annual membership fee) and set up. Also, you'll need to download and install a copy of the Xcode development tools (via the Apple Store). iPhone. Installing the Cardboard Unity package To set up our project to run on Google Cardboard, download the SDK from. Within your Unity project, import the CardboardSDKForUnity.unitypackage assets package, as follows: - From the Assets main menu bar, navigate to Import Package | Custom Package.... - Find and select the CardboardSDKForUnity.unitypackage file. - Ensure that all the assets are checked, and click on Import. Explore the imported assets. In the Project panel, the Assets/Cardboard folder includes a bunch of useful stuff, including the CardboardMain prefab (which, in turn, contains a copy of CardboardHead, which contains the camera). There is also a set of useful scripts in the Cardboard/Scripts/ folder. Go check them out. Adding the camera Now, we'll put the Cardboard camera into MeMyselfEye, as follows: - In the Project panel, find CardboardMain in the Assets/Cardboard/Prefabs folder. - Drag it onto the MeMyselfEye object in the Hierarchy panel so that it's a child object. - With CardboardMain selected in Hierarchy, look at the Inspector panel and ensure the Tap is Trigger checkbox is checked. - Select the Main Camera in the Hierarchy panel (inside MeMyselfEye) and disable it by unchecking the Enable checkbox on the upper left of its Inspector panel. Finally, apply theses changes back onto the prefab, as follows: - In the Hierarchy panel, select the MeMyselfEye object. Then, in its Inspector panel, next to Prefab, click on the Apply button. - Save the scene. We now have replaced the default Main Camera with the VR one. The build settings If you know how to build and install from Unity to your mobile phone, doing it for Cardboard is pretty much the same: - From the main menu bar, navigate to File | Build Settings.... - If the current scene is not listed under Scenes to Build, click on Add Current. - Choose Android or iOS from the Platform list on the left and click on Switch Platform. - Then, click on Player Settings… in the Inspector panel. - For Android, ensure that Other Settings | Virtual Reality Supported is unchecked, as that would be for GearVR (via the Oculus drivers), not Cardboard Android! - Navigate to Other Settings | PlayerSettings.bundleIdentifier and enter a valid string, such as com.YourName.VRisAwesome. - Under Resolution and Presentation | Default Orientation set Landscape Left. Play Mode To test it out, you do not need your phone connected. Just press the game's Play button at the top of the application in the center to enter Play Mode. You will see the split screen stereographic views in the Game view panel. While in Play Mode, you can simulate the head movement if you were viewing it with the Cardboard headset. Use Alt + mouse-move to pan and tilt forward or backwards. Use Ctrl + mouse-move to tilt your head from side to side. You can also simulate magnetic clicks (we'll talk more about user input in a later article) with mouse clicks. Note that since this emulates running on a phone, without a keyboard, the keyboard keys that we used to move Ethan do not work now. Building and running in Android To build your game as a separate executable app, perform the following steps: - From the main menu bar, navigate to File | Build & Run. - Set the name of the build. I like to keep my builds in a subdirectory named Build; you can create one if you want. - Click on Save. This will generate an Android executable .apk file, and then install the app onto your phone. The following screenshot shows the Diorama scene running on an Android phone with Cardboard (and Unity development monitor in the background). Building and running in iOS To build your game and run it on the iPhone, perform the following steps: - Plug your phone into the computer via a USB cable/port. - From the main menu bar, navigate to File | Build & Run. This allows you to create an Xcode project, launch Xcode, build your app inside Xcode, and then install the app onto your phone. Antique Stereograph (source) The device-independent clicker At the time of writing this article, VR input has not yet been settled across all platforms. Input devices may or may not fit under Unity's own Input Manager and APIs. In fact, input for VR is a huge topic and deserves its own book. So here, we will keep it simple. As a tribute to the late Steve Jobs and a throwback to the origins of Apple Macintosh, I am going to limit these projects to mostly one-click inputs! Let's write a script for it, which checks for any click on the keyboard, mouse, or other managed device: - In the Project panel, select the top-level Assets folder. - Right-click and navigate to Create | Folder. Name it Scripts. - With the Scripts folder selected, right-click and navigate to Create | C# Script. Name it Clicker. - Double-click on the Clicker.cs file in the Projects panel to open it in the MonoDevelop editor. - Now, edit the Script file, as follows: using UnityEngine; using System.Collections; public class Clicker { public bool clicked() { return Input.anyKeyDown; } } - Save the file. If you are developing for Google Cardboard, you can add a check for the Cardboard's integrated trigger when building for mobile devices, as follows: using UnityEngine; using System.Collections; public class Clicker { public bool clicked() { #if (UNITY_ANDROID || UNITY_IPHONE) return Cardboard.SDK.CardboardTriggered; #else return Input.anyKeyDown; #endif } } Any scripts that we write that require user clicks will use this Clicker file. The idea is that we've isolated the definition of a user click to a single script, and if we change or refine it, we only need to change this file. How virtual reality really works So, with your headset on, you experienced the diorama! It appeared 3D, it felt 3D, and maybe you even had a sense of actually being there inside the synthetic scene. I suspect that this isn't the first time you've experienced VR, but now that we've done it together, let's take a few minutes to talk about how it works. inputs that your body receives (visual, auditory, motor, and so on). This can be explained technically. Presence is the visceral feeling that you get being transported there—a deep emotional or intuitive feeling. You can say that immersion is the science of VR, and presence is moved, you'd see the virtual scene exactly as you should. You would have a nearly perfect visual VR experience. That's basically it. Ta-dah! Well, not so fast. Literally. Stereoscopic 3D viewing twin relatively small images rectangularly bound. When your eye is focused on the center of the view, the 3D effect is convincing, but you will see the boundaries of the view. Move your eyeballs around (even with the head still), and any remaining sense of immersion is totally lost. You're just an observer on the outside peering into a diorama. Now, consider what an Oculus Rift DK2. (Oculus Configuration Utility comes with a utility to measure and configure your IPD. Alternatively, you can ask your eye doctor for an accurate measurement.) It might be less obvious, but if you look closer at the VR screen, you see color separations, like's, but the pixels per inch (ppi) value may be more important. Other innovations in display technology such as pixel smearing and foveated rendering (showing a higher-resolution detail exactly where the eyeball is looking) will also help reduce the screen door effect. When experiencing a 3D scene in VR, you must also consider the frames per second (FPS). If FPS is too slow, the animation will look choppy. Things that affect FPS include the graphics processor (GPU) performance and complexity of the Unity scene , such as VisiSonics (licensed by Oculus), should block out light from the real environment around you. Head tracking So, we have a nice 3D picture that is viewable in a comfortable VR headset with a wide field of view. If this was it and you moved your head, it'd feel like you have a diorama box stuck to your face. Move your head and the box moves along with it, and this is much like holding the antique stereograph device or the childhood View Master. Fortunately, VR is so much better. The VR headset has a motion sensor (IMU) inside that detects spatial acceleration and rotation rate. Current motion sensors may be good if you wish to play mobile games on a phone, but for VR, it's not accurate enough. These inaccuracies (rounding errors) accumulate over time, as the sensor is sampled thousands of times per second, one may eventually lose track of where you are in the real world. This drift is a major shortfall of phone-based VR headsets such as Google Cardboard. It can sense your head motion, but it loses track of your head position. High-end HMDs account for drift with a separate positional tracking mechanism. The Oculus Rift does this with an outside-in positional tracking, where two or more dumb laser emitters are placed in the room (much like the lasers in a barcode reader at the grocery checkout), and an optical sensor on the headset reads the rays to determine your position. Either way, the primary purpose is to accurately find the position of your head (and other similarly equipped devices, such as handheld controllers). Together, the position, tilt, and the forward direction of your head—or the head pose then article by faster rendering of each frame (keeping the recommended FPS). This can be achieved by discouraging the moving of your head too quickly and using other techniques to make the user feel grounded and comfortable. Another thing that the Rift does to improve head tracking and realism is that it uses a skeletal representation of the neck so that all the rotations that it receives are mapped more accurately to the head rotation. An example of this is looking down at your lap makes. Summary In this article, we discussed the different levels of device integration software and then installed the software that is appropriate for your target VR device. We also discussed what happens inside the hardware and software SDK that makes virtual reality work and how it matters to us VR developers. For more information on VR development and Unity refer to the following Packt books: - Unity UI Cookbook, by Francesco Sapio: - Building a Game with Unity and Blender, by Lee Zhi Eng: - Augmented Reality with Kinect, by Rui Wang: Resources for Article: Further resources on this subject: - Virtually Everything for Everyone [article] - Unity Networking – The Pong Game [article] - Getting Started with Mudbox 2013 [article]
https://www.packtpub.com/books/content/vr-build-and-run
CC-MAIN-2017-13
refinedweb
3,397
62.78
Python. This article is not a Python advocacy piece nor a general Python tutorial. There are already several excellent resources on the net for both tasks. In fact, one of the main joys of using Python is the well organized and ever useful Python web site. Few languages can boast such a wealth of clear and concise documentation gathered in one location. For instance, you can go here for a summary of Python’s features and comparisons to other popular languages. For a much more thorough introduction to general purpose programming in Python, please see the tutorial. Finally, you can obtain Python for your system here. (That is a generic download page and there may be easy to install packages available for your specific distribution or OS. See the end of this article.){mospagebreak title=Why should my next CGI project be in Python?} First of all, it should be stated that Python is a general purpose programming language. Though it is well suited for almost every type of web application, if you need a few simple SSI (server-side include) pages a language like PHP might be more suitable. Perl is well known as the “King of CGI.” Perl is uniquely suited as a text scanning and processing language, and its CGI related modules are extensive and well implemented. However, even an old-time Perl hacker will usually tell you that Perl really shines in one-person ‘quickie’ jobs that will never have to be maintained by another human being. The combination of Perl’s line-noise-like syntax and “There’s More Than One Way To Do It” philosophy often results in an unmaintainable mess of a program. That’s not to say that interesting and powerful applications can’t be written in Perl. In addition, knowledge of the language itself is often necessary for cleaning up other people’s messes. However, if you are starting a brand new web application, you should consider Python and its “There Should Be One Obvious Way To Do It” philosophy. Python has a much cleaner syntax than most languages and object orientation is built right into the core. [Overt use of object features such as classes is entirely optional; Python can be used as a straightforward scripting language.] Python’s clean, readable syntax is a blessing to new developers learning the language and also to people who have to maintain other people’s code. People developing with the Apache web server (and according to Netcraft, that’s most of you) will be pleased to know about the mod_python module that embeds the Python interpreter directly into Apache, resulting in application execution speeds to rival Perl. Lastly, Python has a very extensive, well documented and portable module library that provides a large variety of useful functions. The Internet-related collection is particularly impressive, with modules (Python function/class libraries) to deal with everything from parsing and retrieving URL’s to retrieving mail from POP servers and everything in between. Other modules handle everything from GUI interfaces (using a variety of popular toolkits) to database access.{mospagebreak title=Your First CGI program in Python} In this section, I’m going to assume you have Python installed and have set up your web server to actually execute Python scripts (instead of, for example, sending the source code to the client.) There are so many variations of operating systems and web servers that I’m going to have to rely on you following the appropriate directions that came with your copy of Python. A tip for Unix users: often a CGI script has to have the executable attribute set (“chmod +x script.py”) and/or have a special extension or be in a special CGI directory. Consult your friendly local web administrator for details. Please note that if you’re using Apache, using mod_python is not absolutely necessary but is recommended for most applications. It doesn’t affect anything in your program other than the (perceived) execution speed because it eliminates the start-up time for the interpreter with each connection. Here is a (very) simple script to test your setup: #!/usr/bin/python # Tell the browser how to render the text print “Content-Type: text/plainnn” print “Hello, Python!” # print a test string Let’s go over the script line-by-line to familiarize ourselves with the basic components of a Python script. The first line is necessary for most Linux/Unix systems, but is harmlessly useless on all other systems. It is simply a signal to the shell that This Is A Python Program. You will need to change the path if Python is not installed in the usual place. (Sometimes it is in /usr/local/bin/python for example.) Other systems, such as Windows or Macintosh, will have to follow different procedures to inform the OS to execute this as a Python file. Since the first line begins with a # character, it is a comment and has no actual effect in the Python program itself. Next we have another comment describing what the first line of actual code does: simply print a string (to “standard output”) describing what kind of content follows. This is necessary for most browsers to know how to display the information. In this case it’s just plain text, but we might want ‘Content-Type: text/html’ in the future. The ‘n’ is a special character that says “print a new line.” Normally the print command will automatically add a newline for you, but in this case we need two (because that’s what browsers expect.) The signals a special character. Use to actually print a slash. Finally, we simply say hello to the world! Note that Python statements do not end with a semicolon or other punctuation, just a newline. If everything went according to plan, visiting the URL of the above script should print “Hello, Python!” to your browser window.{mospagebreak title=Getting some real work done} Our little example script above didn’t accomplish much real work, of course, but it was helpful (I hope) in making sure your web server is set up to run Python CGI scripts. Now we’re going to delve into the nitty-gritty of writing a real CGI program in Python. It might be helpful to have the Python tutorial open in another window to refer to from time to time on matters of syntax or keywords. The first thing that most web developers will want to know is how to get information out of HTML forms and do something with it. Python provides an excellent module to deal with this situation. Coincidentally enough, the module is named cgi. So let me show you a simple example program that will gather information out of a browser form. As you know, the data in HTML forms is referenced by the “name” attribute of each input element. For instance, this tag in a form: <INPUT TYPE=TEXT NAME=email SIZE=40> produces a 40 character wide text box with word “email” associated with it. This information is passed to a CGI script by either one of two methods: GET or POST. However, the cgi module hides this complexity from the programmer and presents all form information as a Python fundamental data type; the ever-useful dictionary. Perl users will know this one as a “hash” (which sounds like more fun than a dictionary, but the word is less descriptive.) A dictionary is a type of array that is indexed by a string instead of a number. For instance, if we had defined a dictionary called Bestiary, we could type something like this: >>> Bestiary[“Zebra”] and Python might respond with: “A black and white striped ungulate native to southern Africa.” [The little snippet above demonstrates an interesting and occasionally useful feature of Python: the ability to act as an interactive interpreter. “>>>” is the Python prompt. If you type an expression, as I did above, Python will print the return value. If you’re only writing CGI scripts, you probably won’t use the interactive interpreter much, but sometimes it does come in handy for debugging.] Python’s cgi module has a method (another name for a procedure or function) called FieldStorage that returns a dictionary containing all the form <INPUT>’s and their corresponding values. If you know the specific item you want, you can easily access it: import cgi The_Form = cgi.FieldStorage() print The_Form[“email”].value Note that just using The_Form[“email”] returns a tuple containing both the field name and the value. When dealing with the dictionaries returned by cgi.FieldStorage, we need to be explicit in asking for the values in the forms. Any generic method that works on dictionaries can be usefully applied. For instance, if you want to find out which keys are available on any given form, you might do something like this: >>> The_Form.keys() [‘name’, ’email’, ‘address’, ‘phone’] or this: >>> The_Form.values() [‘Preston Landers’, ‘preston@askpreston.com’, ‘1234 Main St.’, ‘555-1212′] (Please note that Python will not guarantee any particular order for the keys() or values() methods. If you want that, use sort()) If you try to access a field that doesn’t exist, this will generate an exception; an error status. If you don’t catch the exception with an except: block, this will stop your script and the user will see the dreaded “Internal Server Error.” Don’t worry too much about it at this point because we’ll be covering exceptions in great detail later on. We can put these pieces together to write a simple program that prints out the name and value of each <INPUT> element passed to it. In this case, it’s up to you to provide an actual web page that contains the form that points to this CGI script as its ACTION field. If you’re not sure what I mean, don’t worry, we’ll go over forms again. #!/usr/bin/python import cgi print “Content-Type: text/plainnn” The_Form = cgi.FieldStorage() for name in The_Form.keys(): print “Input: ” + name + ” value: ” + The_Form[name].value + “<BR>” print “Finished!” As you can see, we used a simple for loop to iterate over each key, printing the key name, its value, and an HTML break to separate each line. Notice that blocks of code are set off by tabs in Python, not { } curlies like in C or Perl. When the indentation changes, as in the line that prints “Finished!,” that signals the end of that code block. We’ve also demonstrated the + operation for strings; as you might expect, + will concatenate (stick together) two strings. If you’re not sure if the elements you’re concatenating will be strings at runtime, use the str() or repr() methods. For instance: . >>> NumberA = 55 >>> NumberB = 45 >>> print NumberA + NumberB 100 >>> print str(NumberA) + str(NumberB) 5545 We’ll need a routine to actually display the content that we’re going to assemble. This routine should take care of administrivia like the Content-Type line and basic HTML syntax like the <BODY></BODY> tags. Also, ideally, we want as little actual HTML in our Python program as possible. In fact, we want all this formatting stuff in an external file that can be easily modified and updated without delving into actual program code. Your site probably has a standard HTML “style sheet” (even if you’re not using actual CSS) that you’re expected to follow. It would be a real pain to embed this HTML directly into your code, only to have to change on a daily basis. There is also the danger of accidentally modifying important program code. So, we’ll go ahead and define a simple template file that your Python scripts’ Display() function will use. Your actual template will probably be much more complicated but this will get you started. template.html: <!DOCTYPE HTML PUBLIC “-//IETF//DTD HTML//EN”> <html> <head> <META NAME=”keywords” CONTENT=”blah blah — your ad here”> <title>Python is Fun!</title> </head> <body> <!– *** INSERT CONTENT HERE *** –> </body> </html> The thing that should (hopefully) strike you about this template is the HTML comment <!– *** INSERT CONTENT HERE *** –>. Your Python program will search for this comment and replace it with its goodies. To perform this feat of minor magic, we will introduce a couple of new methods including file operations and regular expressions. Old Perl Hackers will rejoice to learn that Python includes complete and powerful regular expression (RE) support. The actual RE syntax is almost identical to Perl’s, but it is not built directly into the language (it is implemented as a module.) Those of you who are not Old Perl Hackers are probably wondering at this point what exactly are regular expressions. A regular expression is like a little mini-language that specifies a precise set of text strings that match, or agree, with it. That set can be unlimited or finite in size depending on the regexp. When Python scans a string for a given regular expression, it searches the entire string for characters that exactly match the RE. Then, depending on the operation, it will either simply note the location or perform a substitution. RE’s are extraordinarily powerful in a wide range of applications, but we will only be using a few basic RE features here. Please see the Regular Expression HOWTO for an excellent guide to using regular expressions in Python. The basic tactic we’re going to use to implement template support in our CGI script is to read the entire template file in as one long string, then do a regular expression substitution, swapping our content for that “INSERT CONTENT HERE” marker. Here is the code for our Display(Content) function: import re # make the regular expression module available # specify the filename of the template file TemplateFile = “template.html” # define a new function called Display # it] Let’s review some of the important features of this code snippet. First, reading the contents of a file into a string is fairly unremarkable. Next we define a string called BadTemplateException that will be used in case the template file is corrupt. If the marker HTML comment cannot be found, this string will be ‘raised’ as a fatal exception and stop the execution of the script. This exception is fatal because it’s not caught by an ‘except’ block. Also, if the template file simply can’t be found by the open() method, Python itself will raise an (unhandled, fatal) exception. All the user will likely see is an “Internal Server Error” message. The traceback (what caused the exception) will be recorded in the web server’s logs. We’ll go into more detail later about exceptions and how to handle them intelligently. Next, we have the actual meat of this function. This line uses the subn() method of the re (regular expression) module to substitute our Content string for the marker defined in the first parameter to the method. Notice that we had to backslash the * characters, because these have special meaning in RE’s. (See the documentation for the re module for a complete list of special characters.) Our RE is relatively simple; it is simply an exact list of characters with no special attributes. The second parameter is our Content, the thing we want to replace the marker with. This could be another function as long as that function returns a string (otherwise, you’ll get an exception.) The final parameter is the string to operate on, in this case, the entire template file contained in TemplateInput. re.subn() returns a tuple with the resulting string and a number indicating how many substitutions were made. A tuple is another fundamental Python data type. It is almost exactly like a list except that it can’t be changed; it is what Python calls “immutable.” (From this we can deduce that lists can be modified “on the fly.”) It is an array that is accessed by number subscripts. SubResult[0] contains the modified result string (remember, programmers start counting from 0!) and SubResult[1] contains the number of substitutions actually made. This line also demonstrates the method of breaking up long lines. If you put a single character at the end of a line, the next line is considered a continuation of the same line. It’s useful for making code readable, or fitting Python code into narrow HTML browser windows. The next line of code says that if there weren’t any substitutions made, this is a problem, so complain loudly by raising an (unhandled, fatal) exception. In other words, if our special “INSERT CONTENT HERE” marker was not found in the template file, this is a big problem! Finally, since everything is okay at this point, go ahead and print out the special Content-Type line and then our complete string, with our Content inside our Template. You can use this function in your own CGI Python programs. Call it when you’ve assembled a complete string containing all the special output of your program and you’re ready to display it. Later, we will modify this function to handle special types of Content data including objects that contain sub-objects that contain sub-objects and so on.{mospagebreak title=Putting the pieces together} Let’s review what we’ve covered so far. We’ve learned how to get information from the user via a CGI form and we’ve learned how to print out information in a nicely formatted HTML page. Let’s put these two separate bits of knowledge together into one program that will prompt the user with a form, and then print out the information submitted by that form. We’re going to go ahead and get fancy and use two templates this time … one, the same generic site template as before, and the other template will contain our form that will be presented to the user. form.html <FORM METHOD=”POST” ACTION=”sample1.py”> <INPUT TYPE=HIDDEN NAME=”key” VALUE=”process”> Your name:<BR> <INPUT TYPE=TEXT NAME=”name” size=60> <BR> Email: (optional)<BR> <INPUT TYPE=TEXT NAME=”email” size=60> <BR> Favorite Color:<BR> <INPUT TYPE=RADIO NAME=”color” VALUE=”blue” CHECKED>Blue <INPUT TYPE=RADIO NAME=”color” VALUE=”yellow”>No, Yellow… <INPUT TYPE=RADIO NAME=”color” VALUE=”swallow”>What do you mean, an African or European swallow? <P> Comments:<BR> <TEXTAREA NAME=”comment” ROWS=8 COLS=60> </TEXTAREA> <P> <INPUT TYPE=”SUBMIT” VALUE=”Okay”> </FORM> We’ll use the same generic template file from before. Here is the program itself: sample1.py: #!/usr/bin/python import re import cgi # specify the filename of the template file TemplateFile = “template.html” # specify the filename of the form to show the user FormFile = “form.html” # Display] ### what follows are our two main ‘action’ functions, one to show the ### form, and another to process it # this is a really simple function def DisplayForm(): FormHandle = open(FormFile, “r”) FormInput = FormHandle.read() FormHandle.close() Display(FormInput) def ProcessForm(form): # extract the information from the form in easily digestible format try: name = form[“name”].value except: # name is required, so output an error if # not given and exit script Display(“You need to at least supply a name. Please go back.”) raise SystemExit try: email = form[“email”].value except: email = None try: color = form[“color”].value except: color = None try: comment = form[“comment”].value except: comment = None Output = “” # our output buffer, empty at first Output = Output + “Hello, ” if email != None: Output = Output + “<A HREF=”mailto:” + email + “”>” + name + “</A>.<P>” else: Output = Output + name + “.<P>” if color == “swallow”: Output = Output + “You must be a Monty Python fan.<P>” elif color != None: Output = Output + “Your favorite color was ” + color + “<P>” else: Output = Output + “You cheated! You didn’t specify a color!<P>” if comment != None: Output = Output + “In addition, you said:<BR>” + comment + “<P>” Display(Output) ### ### Begin actual script ### ### evaluate CGI request form = cgi.FieldStorage() ### “key” is a hidden form element with an ### action command such as “process” try: key = form[“key”].value except: key = None if key == “process”: ProcessForm(form) else: DisplayForm() Notice that we defined the function Display() again. This isn’t really necessary; in fact, it’s a bad idea to cut-n-paste code from one script to another. A far better solution (which I will show you later) is to put all your functions that are common to multiple scripts in separate file. That way, if you want to improve or debug the function later, you don’t have to hunt for each occurrence of it in all of your scripts. However, we can get away with it for this learning exercise. Note that since Python is a dynamically interpreted language, functions have to be defined before they can be used. That’s why the first thing this program does is define its functions. The DisplayForm() function is very simple and uses the same principle as the Display() function (which it, of course, calls upon to do most of the hard work.) Using these kinds of functions rather than embedding static HTML forms will save you an incredible amount of aggravation. Sometimes, of course, you’ll need to generate HTML on-the-fly; we’ll talk about that later. The ProcessForm() function has two main parts. The first extracts values from the <INPUT> fields of the form and stores them in regular variables. This is the section of your program logic where you validate the information in the form to make sure it is kosher by your application’s definition. It is a VERY bad idea to use form data provided by random people on the web without validating it; especially if you’re going to use that data to execute a system command or for acting on a database. Naively written CGI scripts, in any language, are a favorite target for malicious system crackers. At the minimum, make sure form information doesn’t exceed a certain appropriate length. Passing huge strings to external programs where they aren’t expected can cause buffer overflows which can lead to arbitrary code execution by a clever cracker. We will be discussing security in greater detail in the next article. Fortunately, Python makes it easy to write scripts securely. In our program, we enclose these validation statements in try: and except: blocks because if the value isn’t present, it will generate an exception, which could potentially stop your program. In this case, we catch the exception if a given field was left blank (or simply not in the original form.) That variable will be assigned the None value, which has special meaning in Python. (By definition, it is the state of not having a value, similar to NULL in other languages.) A special case is the “name” field; we’ve decided that that field is absolutely required. If “name” is not given, then the program will use its Display() method to print a helpful message to the user: “Please go back and try again.” It then raises the exception SystemExit, which in effect, stops the script then and there. Once we have validated all of our form input, we can act on it however we choose. In this case, we just want to print a simple message acknowledging receipt of the information. We set up an empty string to use as a buffer for our output, and we begin adding to it. We can conditionally include stuff in this buffer depending on whether or not a value is defined is None. Sharp observers will note that we cheated a bit and put a little actual HTML directly in our code. Again, this is not ideal, but we can get away with it in this learning exercise. The next article in this series will address using objects to avoid the problems of embedding HTML directly in output statements. The actual main part of the script follows the function definitions, and is very simple. We obtain the form, and conditionally execute one of our two action functions depending on the value of a hidden form element called “key”. We know that if this key is present, and it is set to the value “process,” we’ll want to process the form. Otherwise, we’ll just display the form. If you’re not sure what I mean by hidden key, go back and look at our form.html template carefully.{mospagebreak title=Simple Database Access} The last thing we’re going to learn about in this installment of our series on learning CGI programming in Python is how to make simple database queries. But instead of providing a complete demonstration program, I’m just going to give you a few code snippets for running SQL queries. You can combine these fragments with what you’ve learned about working with forms to make a useful web application in Python. Python has a standard API (Application Programming Interface) for working with databases. The interface is the same for every database that is supported, so your program can work unmodified (in theory) with any common database. Currently, the Python DBI (Database Interface) supports most popular SQL relational databases such as mSQL, MySQL, Oracle, PostgreSQL, and Informix. There is an excellent resource for people interested in interfacing Python program with databases: the Database SIG (Special Interest Group.) This article will only cover simple read-only queries. In the next installment we’ll cover more complex database operations. While the programming interface is standardized across databases, it is still necessary to have the module that provides access to your particular database available to the Python interpreter. Often these modules are a separate download and are not included with the standard Python distribution. See the Database SIG mentioned above for instructions on obtaining the module for your database. The code that initializes the database connection will be somewhat dependent on a specific database, while the rest of the code that actually uses the database should work across all supported types of database. Once you’ve installed the appropriate module on your system, make it available with an import statement. >>> import Mysqldb …loads the MySQL DBI module. Now we want to initialize the database module by connecting to it. The database module must be given a string in a specific format containing the name of the database to use, your username, and so on. The format is “Database_Name @ Machine_Name User Password”. If you don’t know some of this information, you’ll have to ask your friendly local database administrator. For instance: >>> import Mysqldb >>> SQLDatabase = “guestbook” >>> SQLHost = “localhost” >>> SQLUser = “gbookuser” >>> SQLPassword = “secret!” >>> connectstring = SQLDatabase + “@” + SQLHost + ” ” >>> connectstring = connectstring + SQLUser + ” ” + SQLPassword >>> connection = Mysqldb.mysqldb(connectstring) >>> cursor = connection.cursor() Notice that the last statement returned a thing called a cursor. Coincidentally, we’ve also named the object that holds that ‘cursor’, but just as easily it could have been named ‘bilbobaggins’. All database action is performed through a cursor, which functions as an active connection to the database. The cursor object that we obtained has a number of methods including execute() and fetchall(). execute() is used to actually execute SQL statements. Use fetchall() to get the results of the previous execute() as a list of tuples. Each tuple represents a specific record/row of the database. Once you have the cursor, you can perform any SQL statement your database will support with cursor.execute(statement). Here is a simple example that will fetch all rows of a guestbook and display them as an HTML list. [Again, it’s not a good idea to embed HTML directly into code like this; it’s just an example.] # get all entries from gbook table, ordered by time stamp myquery = “SELECT * FROM gbook ORDER BY stamp” handle.execute(myquery) Results = handle.fetchall() # fetch all rows into the Results array total = len(Results) # find out how many records were returned # we’ll want a blank list to hold all the guestbook entries entries = [] if total < 1: print “There weren’t any guestbook entries!” ### do something else here else: for record in range(total): entry = {} # a blank dictionary to hold each record entry[“gid”] = Results[record][0] # field 0 = guestbook ID entry[“stamp”] = Results[record][1] # field 1 = timestamp entry[“name”] = Results[record][2] # and so on… entry[“email”] = Results[record][3] entry[“link”] = Results[record][4] entry[“comment”] = Results[record][5] entries.append(entry) # add this entry to the master list # we’ll pretend we set up an HTML table here… ### parse variables into table for entry in entries: print “<LI>” + entry[“name”] + “@” + entry[“email”] + ” said: ” + entry[“comment”] Notice that we copied the information out of the Results list into a list of dictionaries. It’s not absolutely necessary to do this; in fact it will slow down your program a tiny bit. The benefit to doing that is that you can access each column of a record by name, rather than the number. If you use the record more than once, it becomes much easier to keep the mnemonic names straight than arbitrary numbers. {mospagebreak title=Other Resources and Links} There are several good books in print on Python, including: Internet Programming with Python (Watters, van Rossum, Ahlstrom; MIS Press): this one I personally own. Though it is slightly outdated, covering Python 1.4, most of the information is still relevant and useful. Provides a good tutorial on Python and lots of sample code for Internet applications. One of the co-authors is Guido van Rossum, the inventor of Python. Recommended, but due for a new edition. Programming Python (Lutz; Oreilly & Associates): also slightly outdated, but highly recommended by people who have it. More of a reference than a tutorial. Python Documentation: the starting point for all official Python documentation including the Library Reference. The Python Tutorial: a gentle but effective introduction to Python. The Python FAQ: many questions and answers about specific features and problems. Come here when you can’t find your question answered in any other documentation. Language Comparisons: also known as Python vs. Perl vs. Tcl vs. Java vs. the world. Most articles here are reasonably objective and balanced comparisons. Always pick the right tool for the job. Often, Python is the right tool. Downloading Python for your platform. Both source code and binaries for a wide variety of platforms can be found there. However, if you are running a Linux distribution such as Debian or Red Hat it will be easier for you to obtain the appropriate package for your system from your distributor. Regular Expression HOWTO: mastering those tricky but useful regexp’s in Python. Database SIG: a gathering place for people interested in and information pertaining to using tabular databases with Python.
http://www.devshed.com/c/a/python/writing-cgi-programs-in-python/6/
CC-MAIN-2015-22
refinedweb
5,107
62.68
Cloud Cache - Introducing the Microsoft Azure Caching Service By Karandeep Anand, Wade Wegner | April 2011 In a world where speed and scale are the key success metrics of any solution, they can’t be afterthoughts or retrofitted to an application architecture—speed and scale need to be made the core design principles when you’re still on the whiteboard architecting your application. The Azure. The Caching service was released as a Community Technology Preview (CTP) at the Microsoft Professional Developers Conference in 2010, and was refreshed in February 2011. The Caching service is an important piece of the Azure, and builds upon the Platform as a Service (PaaS) offerings that already make up the platform. The Caching service is based on the same code-base as Windows Server AppFabric Caching, and consequently it has a symmetric developer experience to the on-premises cache. The Caching service Azure and Windows Server AppFabric - Secured access and authorization provided by the Access Control service While you can set up other caching technologies (such as memcached) on your own as instances in the cloud, you’d end up installing, configuring and managing the cache clusters and instances yourself. This defaults one of the main goals of the cloud, and PaaS in particular: to get away from managing these details. The Windows Server AppFabric Caching service removes this burden from you, while also accelerating the performance of ASP.NET Web applications running in Azure with little or no code or configuration changes. We’ll show you how this is done throughout the remainder of the article. Under the Hood Now that you understand how caching can be a strategic design choice for your application, let’s dig a little deeper into what the Caching service really looks like. Let’s take an example of a typical shopping Web site, say an e-commerce site that sells Xbox and PC games, and use that to understand the various pieces of the Caching service. At a high level, there are three key pieces to the puzzle: - Cache client - Caching service - Connection between the client and the service Cache client is the proxy that lives within your application—the shopping Web site in this case. It’s the piece of code that knows how to talk to the Caching service, in a language that both understand well. You need to include this client assembly in your application and deploy it with your Web application with the appropriate configuration to be able to discover and talk to the Caching service. (We’ll cover this process later in the article.) Because there are some common application patterns, such as ASP.NET session state, where caching can be used out of the box, there are two ways to use the client. For explicit programming against the cache APIs, include the cache client assembly in your application from the SDK and you can start making GET/PUT calls to store and retrieve data from the cache. This is a good way to store your games catalog or other reference data for your gaming Web site.. One other thing to know about the cache client is the ability to cache a subset of the data that resides in the distributed cache servers, directly on the client—the Web server running the gaming Web site in our example. This feature is popularly referred to as the local cache, and it’s enabled with a simple configuration setting that allows you to specify the number of objects you wish to store and the timeout settings to invalidate the cache. The second key part of the puzzle is the Caching service itself. key here is usable memory. If you ask for 1GB cache memory, you get 1GB of usable memory to store your objects, unlike the amount of memory available on a Azure instance you buy.. In that sense, think of the Caching service as a virtual pool of partitioned and shared memory that you can consume flexibly. The other under-the-hood design point to understand is that the Caching service automatically partitions your cache so that you don’t lose data in the event of a machine going down, even if you haven’t bought the more expensive high availability (HA) option for your cache. (The HA option is not currently available in Azure Caching, and only available on Windows Server AppFabric Caching today.) This partitioning scheme increases the performance and reduces the data loss probability automatically for you without ever having to learn about the back-end service. The third and last part of the puzzle is the connection between your cache client and Caching service. The communication between the cache client and service is using WCF, and the WCF programming model is abstracted from the developer as the cache client translates GET/PUT calls to a WCF protocol that the service understands. The thing you need to know about this communication channel is that it’s secure, which is extremely critical, especially in the cloud world. Your cache is secured using an Access Control service token (which you get when you create the named cache). The Caching service uses this token to restrict access to your cached data by your client. Architectural Guidance We’ve spent a lot of time with customers who have varying degrees of application complexity . Usually, before we start whiteboarding the design choices and architecture, we’ll draw the simple diagram shown in Figure 1. For most cases this diagram captures the trade-offs between the three most basic elements involved in storing and manipulating data. (We’ve had some storage gurus argue with us that they can saturate network before they can max the disk throughput, hence we qualify the statement about “most” cases.) Figure 1 Memory, Network and Disk Usage in Data Manipulation Scenarios The basic principle is that your memory on the local machine provides the fastest data access with lowest latency, but is limited to the amount of usable memory on the machine. As soon as you need more memory than your local machine can give you or you need to externalize the data from your compute tier (either for shared state or for more durability), the minimum price you pay is for the network hop. The slowest in this spectrum is storing data on a disk (solid-state drives help a little, but are still quite expensive). Disk is the cheapest and largest store while memory is the most expensive and hence the most constrained in terms of capacity. The Caching service balances the various elements by providing the service as a network service to access a large chunk of distributed and shared memory across multiple compute tiers. At the same time, it provides an optimization with the local cache feature to enable a subset of the data to additionally reside on the local machine while removing the complexity of maintaining consistency between the local cache and the cache tier in the service. With this background, let’s look at some top-level architectural considerations for using cache in your application. What data should you put in the cache? The answer varies significantly with the overall design of your application. When we talk about data for caching scenarios, usually we break it into the data types and access patterns shown in Figure 2 (see msdn.microsoft.com/library/ee790832 for a deeper explanation of these data access patterns). Figure 2 Data in Caching Scenarios Using this model to think about your data, you can plan for capacity and access patterns (for managing consistency, eviction, refreshes and so on) and restrict the data to the most frequently used or time-sensitive data used or generated by your application. As an example, reference data should be readily partitioned into frequently accessed versus infrequently accessed to split between cache and storage. Resource data is a classic example where you want to fit as much as possible in cache to get the maximum scale and performance benefits. In addition to the cache tier, even the use of local cache on the client goes hand in hand with the data type. Reference data is a great candidate for keeping in the local cache or co-located with the client; while in most cases resource data can become quite chatty for the local cache due to frequent updates and hence best fits on the cache tier. As the second-most-expensive-resource and usually the bottleneck for most inefficient implementations, network traffic patterns for accessing cache data is something to which you should pay particular attention. If you have a large number of small objects, and you don’t optimize for how frequently and how many objects you fetch, you can easily get your app to be network-bound. Using tags to fetch like data or using local cache to keep a large number of frequently accessed small objects is a great trade-off. While the option to enable HA for a named cache is not yet available in the Caching service, it’s another factor to consider in your application design. Some developers and architects choose to use cache only as a transient cache. However, others have taken the leap of faith to move exclusively to storing a subset of their data (usually activity data) only in the cache by enabling the HA feature. HA does have a cost overhead, but it provides a design model that treats the cache as the only store of data, thereby eliminating the need to manage multiple data stores. However, the cache is not a database! We cannot stress this point too strongly. The topic of HA usually makes it sound like the cache can replace your data tier. This is far from the truth—a SQL database is optimized for a different set of patterns than the cache tier is designed for. In most cases, both are needed and can be paired to provide the best performance and access patterns while keeping the costs low. How about using the cache for data aggregation? This is a powerful yet usually overlooked scenario. In the cloud, apps often deal with data from various sources and the data not only needs to be aggregated but also normalized. The cache offers an efficient, high-performance alternative for storing and managing this aggregated data with high throughput normalization (in-memory as opposed to reading from and writing to disk), and the normalized data structure of the cache using key-value pairs is a great way to think about how to store and serve this aggregated data.). The trade-off to consider here is that if you need ultra-low latency access to the shared state (session state in an online gaming Web site that tracks scores in real time, for example), then an external cache tier might not be your best option. For most other scenarios, using the cache tier is a quick yet powerful way to deal with this design pattern, which automatically eliminates the staleness from your data.). Remember that you can start with a simple configuration switch in your web.config file to start using caching without writing a single line of code, and you can spend years in designing a solution that uses caching in the most efficient way. Setting Up Azure Caching To get started using the Azure Caching service, head to portal.appfabriclabs.com. This is the CTP portal that you can use to get familiar with the Caching service. It doesn’t cost anything, but there are no service level agreements for the service. On the portal, select the Cache option and then create a new cache by clicking New Namespace. The dialog box to configure the Cache service namespace is shown in Figure 3. Figure 3 Configuring a New Cache Service Namespace The only two options you need to specify are a unique service namespace and the cache size (of which you can choose between 128MB and 256MB in the CTP). When you choose OK the service will provision the cache for you in the background. This typically takes 10 to 15 seconds. When complete, you have a fully functional, distributed cache available to your applications. Now that it’s created, you can take a look at the properties of your cache, shown in Figure 4. (Note that we’ve obfuscated some account-specific information here.) Figure 4 Cache Service Properties You can see that we’ve created a cache using the namespace of CachingDemo. There are a few pieces of important information that you’ll want to grab, as we’ll use them later in our code: the Service URL and the Authentication Token. The Service URL is the TCP endpoint that your application will connect to when interacting with the Caching service. The Authentication Token is an encrypted token that you’ll pass along to Access Control to authenticate your service. Caching in Your App Now, before you start coding, download the Azure SDK. You can find it at go.microsoft.com/fwlink/?LinkID=184288 or click the link on the portal. Make sure you don’t have the Windows Server AppFabric Cache already installed on your machine. While the API is symmetric, the current assemblies are not compatible. The Windows Server AppFabric Cache registers its Caching assemblies in the Global Assembly Cache (GAC), so your application will load the wrong assemblies. This will be resolved by the time the service goes into production, but for now it causes a bit of friction. To begin, let’s create a simple console application using C#. Once created, be sure to update the project so that it targets the full Microsoft .NET Framework instead of the Client Profile. You’ll also need to add the Caching assemblies, which can typically be found under C:\Program Files\Microsoft Azure SDK\V2.0\Assemblies\Cache. For now, add the following two assemblies: - Microsoft.ApplicationServer.Caching.Client - Microsoft.ApplicationServer.Caching.Core One thing that changed between the October CTP and the February refresh is that you now must use the System.Security.SecureString for your authentication token. The purpose of SecureString is to keep you from putting your password or token into memory within your application, thereby keeping it more secure. However, to make this work in a simple console application, you’ll have to create the following method: While this defeats the purpose of SecureString by forcing you to load the token into memory, it’s only used for this simple scenario. Now, the next thing to do is write the code that will set us up to interact with the Caching service. The API uses a factory pattern, so we’ll have to define the factory, set some configuration settings and then load our default cache as shown in Figure 5. private static DataCache configureDataCache( SecureString authorizationToken, string serviceUrl) { // Declare an array for the cache host List<DataCacheServerEndpoint> server = new List<DataCacheServerEndpoint>(); server.Add(new DataCacheServerEndpoint(serviceUrl, 22233)); // Set up the DataCacheFactory configuration DataCacheFactoryConfiguration conf = new DataCacheFactoryConfiguration(); conf.SecurityProperties = new DataCacheSecurity(authorizationToken); conf.Servers = server; // Create the DataCacheFactory based on config settings DataCacheFactory dataCacheFactory = new DataCacheFactory(conf); // Get the default cache client DataCache dataCache = dataCacheFactory.GetDefaultCache(); // Return the default cache return dataCache; } You can see that we’ve defined a new DataCacheServerEndpoint based on a service URL (that we’ll provide) that points to port 22233. We then create the DataCacheFactoryConfiguration, pass in our authentication token (which is a SecureString) and set it to the security properties—this will enable us to authenticate to the service. At this point it’s simply a matter of constructing the DataCacheFactory, getting the DataCache based on the default cache and returning the default cache. While it’s not required to encapsulate this logic in its own method, it makes it much more convenient later on. At this point, it’s pretty simple to pull this together in our application’s Main method (see Figure 6). static void Main(string[] args) { // Hardcode your token and service url SecureString authorizationToken = createSecureString("YOURTOKEN"); string serviceUrl = "YOURCACHE.cache.appfabriclabs.com"; // Create and return the data cache DataCache dataCache = configureDataCache(authorizationToken, serviceUrl); // Enter a value to store in the cache Console.Write("Enter a value: "); string value = Console.ReadLine(); // Put your value in the cache dataCache.Put("key", value); // Get your value out of the cache string response = (string)dataCache.Get("key"); // Write the value Console.WriteLine("Your value: " + response); } We provide the authentication token and service URL to our application, then pass them into the configureDataCache method, which sets the DataCache variable to the default cache. From here we can grab some input from the console, put it into the cache and then call Get on the key, which returns the value. This is a simple, but valid, test of the cache. Including the tokens and service URL in the code is not ideal. Fortunately, the portal provides you with XML that you can insert within your app.config (or web.config) file, and the APIs will manage everything for you. In the portal, select your cache, then click the View Client Configuration button. This opens a dialog that provides the configuration XML. Copy the XML snippet and paste it into your configuration file. The end result will look like Figure 7. <?xml version="1.0"?> <configuration> <configSections> <section name="dataCacheClient" type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere"/> </configSections> <dataCacheClient deployment="Simple"> <hosts> <host name="YOURCACHE.cache.appfabriclabs.com" cachePort="22233" /> </hosts> <securityProperties mode="Message"> <messageSecurity authorizationInfo="YOURTOKEN"> </messageSecurity> </securityProperties> </dataCacheClient> </configuration> Now we can heavily refactor our code, get rid of the createSecureString and configureDataCache methods, and be left with this: static void Main(string[] args) { DataCacheFactory dataCacheFactory = new DataCacheFactory(); DataCache dataCache = dataCacheFactory.GetDefaultCache(); Console.Write("Enter a value: "); string value = Console.ReadLine(); dataCache.Put("key", value); string response = (string)dataCache.Get("key"); Console.WriteLine("Your value: " + response); } You can see that all we have to do is create a new instance of the DataCacheFactory, and all the configuration settings in the app.config file are read in by default. As you’ve seen, you can use the APIs directly or manage DataCacheFactory in your configuration. While we’ve only performed PUT and GET operations on simple data, we could easily store data retrieved from SQL Azure, Azure or another provider of data. For a more complex look at using the cache for reference data stored in SQL Azure, refer to the Caching Service Hands-On Lab in the Microsoft Azure Training Course (windowsazure.com/en-us/develop/training-kit/HOL-buildingappswithcaching/). Storing Session Data Next, let’s take a look at how to use the Caching service to store the session data for our ASP.NET Web application. This is a powerful technique, as it allows us to separate the session state from the in-process memory of each of our Web clients, thus making it easy to scale our applications beyond one instance in Azure. This is important for servicesthat don’t support sticky sessions, such as Azure. There’s no way to guarantee that a user will hit the same instance with each additional request—in fact, the Azure load balancer explicitly uses a round-robin approach to load balancing, so it’s likely that your user will hit a new instance. By using the Caching service for session state, it doesn’t matter which instance your user hits because all the instances are backed by the same session state provider. To begin, create a new Azure Project and add an ASP.NET Web Role. In the Web Role, add all of the assemblies provided by the Azure SDK, including: - Microsoft.ApplicationService.Caching.Client - Microsoft.ApplicationService.Caching.Core - Microsoft.Web.DistributedCache - Microsoft.WindowsFabric.Common - Microsoft.WindowsFabric.Data.Common Next, update your web.config file so that the very first things to follow the <configuration> element are the <configSections> and <dataCacheClient> elements (you’ll receive errors if they aren’t the first elements). Now, the key to using the Caching service for session state is the Microsoft.Web.DistributedCache assembly. It contains the custom session state provider that uses the Caching service. Return to the LABS portal where you grabbed the XML for your configuration files, and find the <sessionState> element—you can place this directly in the <system.web> element of your web.config file and it will immediately tell your application to start leveraging the Caching service for session state: <system.web> " /> </providers> </sessionState> ... </system.web> To validate that this is working, open the Global.asax.cs file and add the following code to the Session_Start method: This will add 10 random items into your session context. Next, open up your Default.aspx.cs page and update the Page_Load method: This will write out all the values that were added into the session context. Finally, open the ServiceConfiguration.cscfg file and increase the instance count from 1 to 2: Now, when you hit F5, you’ll get two instances of your application running in the Compute Emulator. Notice that no matter how many times you refresh the page, you’ll always have the same 10 values in your session state—this is because it’s a shared session, and session start only runs once. Conversely, if you don’t use the Caching service as your session state provides, but opt to keep the default in-process choice, you’ll have different values on each of the instances. What’s Next? Azure Caching service is slated to go into production as a commercial service in the first half of 2011. In the first commercial release, some of the features that are available in Windows Server AppFabric will not yet be available. Some of these exclusions are purposeful, as they may not apply in the cloud world. However, features like notifications are relevant in Azure as well and are critical in completing the local-cache scenario, and hence are part of the short-term roadmap for Microsoft. Similarly, the option of turning on HA for a given named cache as a premium capability is also high on the priority list. The growing popularity of Windows Server AppFabric Caching has resulted in a number of new feature requests that open up the applicability of caching to an even broader set of scenarios. Some of the capabilities that are being discussed include the ability to perform rich queries on the cache and enabling an easier way to retrieve bulk data from a named cache. In addition, the success of the Caching session state provider scenarios with ASP.NET has resulted in requests for the ability to associate write-behind and read-through queries with the cache so that the cache can become the primary way to manipulate data, while letting the associated queries update the data tier in the back end. We’ll be evaluating these and other features for possible inclusion in future releases of Azure Caching. In the meantime, we encourage you to experiment with the current Caching service implementation and let us know how it works for you. Karandeep Anand is a principal group program Manager with the Azure product group at Microsoft. His team is responsible for building the next generation application platform and services for Azure and Windows Server. You can reach him at Karandeep.Anand@microsoft.com. Wade Wegner is a technical evangelist at Microsoft, responsible for influencing and driving the Microsoft technical strategy for the Azure. You can reach him through his blog at wadewegner.com or on Twitter at twitter.com/WadeWegner. Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
https://msdn.microsoft.com/en-us/magazine/gg983488.aspx
CC-MAIN-2019-18
refinedweb
3,943
51.07
Maths › Approximation › Interpolation › Linear Linearly interpolates a given set of points.Controller: CodeCogs Interface C++ HTML Class LinearLinear interpolation is a process employed in mathematics, and numerous applications thereof including computer graphics. It is a very simple form of interpolation. In numerical analysis a linear interpolation of certain points that are in reality values of some function f is typically used to approximate the function f. Linear interpolation can be regarded as a trivial example of polynomial interpolation. The error of this approximation is defined as References:Wikipedia, Example 1 - The following example displays 20 interpolated/approximation/interpolation/linear.h> #include <cmath> #include <iostream> #include <iomanip> using namespace std; #define PI 3.1415 linear interpolation routine with known data points Maths::Interpolation::Linear A(N, x, y); // Interrogate linear fitting curve to find interpolated values int N_out = 20; xx = PI, step = (3 * PI) / (N_out - 1); for (int i = 0; i < N_out; ++i, xx += step) { cout << "x = " << setw(7) << xx << " y = "; cout << setw(13) << A.getValue(xx) << endl; } return 0; }Output: x = 3.1415 y = -5.89868e-005 x = 3.63753 y = 0.0765858 x = 4.13355 y = 0.153231 x = 4.62958 y = 0.0678533 x = 5.12561 y = -0.0879685 x = 5.62163 y = -0.137135 x = 6.11766 y = -0.022215 x = 6.61368 y = 0.0804548 x = 7.10971 y = 0.060627 x = 7.60574 y = 0.0407992 x = 8.10176 y = -0.0110834 x = 8.59779 y = -0.0715961 x = 9.09382 y = -0.0619804 x = 9.58984 y = 0.0221467 x = 10.0859 y = 0.081803 x = 10.5819 y = 0.0313408 x = 11.0779 y = -0.0191214 x = 11.5739 y = -0.0324255 x = 12.07 y = -0.0406044 x = 12.566 y = -0.0146181 See AlsoAlso consider the regression methods: Regression/Discrete, Regression/Forsythe, Regression/Orthogonal, Regression/Stiefel Authors - Lucian Bentea (August 2005) Source Code Source code is available when you agree to a GP Licence or buy a Commercial Licence. Not a member, then Register with CodeCogs. Already a Member, then Login. Members of Linear LinearInitializes the necessary data for following evaluations of the fitting lines. GetValueReturns the approximated ordinate at the given abscissa. Note - This function is not designed to provide extrapolation points, thus you need to keep the value of x in the interval from X[0] to X[N - 1]. Linear Once This function implements the Linear class for one off calculations, thereby avoid the need to instantiate the Linear class yourself. Example 2 - The following graph is constructed from interpolating.
http://www.codecogs.com/library/maths/approximation/interpolation/linear.php
CC-MAIN-2017-47
refinedweb
422
56.52
Joerg Schilling schrieb am 2006-02-03: > "Jim Crilly" <jim@why.dont.jablowme.net> wrote: > > > On 02/03/06 07:31:58PM +0100, Joerg Schilling wrote: > > > Matthias Andree <matthias.andree@gmx.de> wrote: > > > > > > > So patches to the rescue -- please review the patch below (for 2.01.01a05). > > > > Note that GPL 2a and 2c apply, so you cannot merge a modified version of > > > > my patch without adding a tag that you goofed my fixes. > > > > > > OK, I did not look at it and I never will! > > > > > > Jörg > > > > This is an excellent example to verify how bad cdrecord developent > > is done..... > > Well, > > cdrecord is done as good as possible. Untrue. Proof: My patch makes it operate more smoothly on Linux. > Note that if peope send a patch together with personal infringements or > untrue claims, the best I can do is to ignore alltogether. Look who's talking, and what. Personal infringements? If you're sensitive, my apologies, I didn't mean to insult you. > I did spend a lot of time with a fruitful discussion with Matthias. > Then Matthias started this thread.... It now seems like Matthias > does not like to be serious anymore. I am absolutely serious about the patch and about my recent findings after looking at libscg. I just don't want my name tainted with accidents that happen during integration because you don't have a recent Linux installation. The RLIMIT_MEMLOCK was enough of an effort, my first patch would've worked, too, hence the GPL. > I am of course interested to make cdrecord better, but for the price > of spending an ridiculously amount of time ob LKML. Well, if you'd listened and attempted to understand our scanning concerns, you'd probably have had libscg use a unified ATA:/SCSI: namespace in Linux for 1½ years. OK, spilled milk. -- Matthias Andree
https://lists.debian.org/cdwrite/2006/02/msg00018.html
CC-MAIN-2017-22
refinedweb
303
74.19
I would like to list all the tags of a DICOM file in C#. I would like something like what is shown in the link below I'm assuming this is an older version as when I paste it into Visual Studio it is not recognised. Any help would be appreciated. Assuming you could use Evil Dicom: public class DicomManager { public List<Tag> ReadAllTags(string dicomFile) { var dcm = DICOMObject.Read(dicomFile); return dcm.AllElements.Select(e => e.Tag).ToList(); } } UPDATE: As per your comment, let's say you need to show the elements in a component on the UI. I'll give you an example of how you could show all the elements in a console app, but the traversing algorithm is the same in any other presentation technology. Take a look at how is defined IDICOMElement interface in Evil Dicom: public interface IDICOMElement { Tag Tag { get; set; } Type DatType { get; } object DData { get; set; } ICollection DData_ { get; set; } } It means that an element has all the info you would need to work with it. Iterate the elements and show the tag name - element value. var dcm = DICOMObject.Read(dicomFile); dcm.AllElements .ForEach(element => Console.WriteLine("{0} - {1}", element.Tag, element.DData)); As you can see, if all you want is to show the value - its string representation - the previous snippet should be enough, but in the element you have more info about the real type of the object inside as well as the collection of values - in case of multi-valued elements. However you need to be careful because some VRs inside a DICOMObject can be very large, so make sure you do the processing using async methods or worker threads in order to keep you UI responsive and don't get a value out unless you specifically need to. Hope this helps!
https://codedump.io/share/jhb0o3nxgZ7f/1/evil-dicom-list-all-elements
CC-MAIN-2018-13
refinedweb
304
51.07
02, 2008 09:10 AM Unlike Microsoft's FXCop, which was largely celebrated as an important step towards improving the consistency and quality of .NET code, StyleCop has been viewed with much suspicion. The main difference between the two is that FXCop focuses on compiled code from any .NET language while StyleCop works solely against C# source code. The biggest complain about StyleCop is that its recommendations are mainly based in opinion. While some of the guidelines in FXCop are subjective, most are grounded in sound logic based on a deep knowledge of how the CLR works. StyleCop, on the other hand, is mostly about hotly debated issues like how many spaces to use for indention. Some are even downright contrary to standard practices like placing "using" statements inside namespaces. With the release of the StyleCop SDK, developers can develop their own rules to supplement or outright replace the default ones. While in the long run developers are going to want to be able to simple configure the rules to match their company standards, this is at least a good temporary solution. In addition to creating new rules, developers will find information in the SDK on how to integrate StyleCop in MSBuild tasks. NEW: ANTS Memory Profiler 5 now out! Give-away eBook – Confessions of an IT Manager Effective Management of Static Analysis Vulnerabilities and Defects Ensuring Code Quality in Multi-threaded Applications NEW: ANTS Memory Profiler 5 just released! Some are even downright contrary to standard practices like placing "using" statements inside namespaces. Of all the examples that could be taken... I wouldn't be surprised to someday find this particular one as part of FxCop... While many of StyleCops rules are subjective (read: make people sad because their "flawless" naming conventions are not all that flawless, or are totally obsolete as they're simply ported from older environments with different restrictions and requirements), that particular one has to do with how the compiler works, and is meant to catch as many errors as possible at compile time and remove ambiguity. The standard Visual Studio templates have never followed -any- convention, and contredict each other...thats not
http://www.infoq.com/news/2008/09/StyleCop-SDK
crawl-002
refinedweb
358
51.68
Red Hat Bugzilla – Bug 66832 kyocera fs680 prints PCL commands on some pages, not others Last modified: 2008-05-01 11:38:02 EDT From Bugzilla Helper: User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.9) Gecko/20020313 Description of problem: My Kyocera printer broke on the upgrade from rh7.0 to rh7.3. The break is subtle -test pages print ok, linux applications print ok. Most MS apps print ok over samba (samba works beautifully in all other respects). MS Access 97 reports sometimes work and sometimes dont. When they fail, they produce output similar to the following: !R!SEM6; [untypable Y umlaut char]%-12345X@PJL SET ECOPRINT=OFF @PJL COMMENT KYOCERA GPC 3.174 @PJL SET PAGEPROTECT+OFF @PJL SET IMAGEADAPT=ON @PJL SET RESOLUTION=600 @PJL ENTER LANGUAGE = PCL followed by a whole slew of unprintable characters Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1.open access 2.print report 3. curse and restart lpd; powercycle printer; hawk, spit and invoke the name of redhat Actual Results: as above Expected Results: should have printed a letter with fields from an access db. Works fine when the kyocera printer is locally attached (lpt1) Additional info: Please attach /etc/alchemist/namespace/printconf/local.adl. Thanks. Created attachment 61326 [details] /etc/alchemist/namespace/printconf/local.adl as requested by twaugh@redhat.com Is this printer connected to the parallel port? If so, what does cat /proc/sys/dev/parport/*/autoprobe* say? I suspect that this printer doesn't have PJL capability at all when the foomatic database thinks that it does. It's a FS-680? Could you please try editing /usr/share/foomatic/db/source/printer/311113.xml and removing the line that says: <pjl/> Then try removing and re-adding the print queue in printconf-gui. Thanks. cat /proc/sys/dev/parport/*/autoprobe* says: CLASS:PRINTER; MODEL:FS-680; MANUFACTURER:Kyocera; DESCRIPTION:Kyocera FS-680; COMMAND SET:PCL5E,PJL; removing the reference to <pjl/> makes no difference, nor does taking out the PJL from <commandset>PCL5E,PJL</commandset>. Print queue removed and re-added ok. No difference at all? You still get PJL stuff even when the <pjl/> line has been entirely removed? On the client side, are you printing to a PostScript printer? (If not, do.) There are small changes, (1st line 1st page reads %!PS-Adobe2.0 %%DocumentFonts: Courier Times-Bold 0 768 moveto (@PJL COMMENT KYOCERA GPC 3.174) show but yes, I still am getting PJL stuff. When I take your second suggestion and choose a postscript printer in win 98, (HP laserjet 4M postscript - there is no generic postscript and that is the closest I can find) I get a whole lot of postscript commands, but no proper printing. Gaaah. ! This: "0 768 moveto (@PJL COMMENT KYOCERA GPC 3.174) show" tells me that the PJL command is being generated on the other machine, not on the Red Hat Linux machine. It looks for all the world like the client is saying 'print this as text', and the server is rendering it as PostScript. Client-side bug of some sort. Please attach your /etc/smb.conf; also, which share are you printing to? Created attachment 61584 [details] smb.conf file I am printing to the lp printer. The wacky thing is that this used to work well for these types of reports. I am not trying to get something new to work, something has broken, around the time of the upgrade from rh7.0->7.3 upgrade. Weird huh !. and why do some Access reports print and others fail to. Bizarre. What happens if you downgrade to the version of LPRng that came with Red Hat Linux 7.0? 'rpm -Uvh --oldpackage LPRng-...' rpm -Uvh --nodeps --oldpackage /mnt/cdrom/RedHat/RPMS/LPRng-3.6.24-2.i386.rpm Very Strange. That doesn't work either for this Access report. I get a page that is different to the others - it starts %!PS-Adobe-2.0 - this is from the win98 to the remote linux printer. I restarted lpd after making the changes, and put back the original xml definitions (after trying them with our mods). This is so frustrating ... Is there any way you can set up a (for example) Red Hat Linux 7.2 system to test with, so that we can narrow down the problem a bit further? The printing system changed quite a lot between 7.0 and 7.3. Not really. I really am stuck with 7.3. (BTW) I find that some word docs also produce the same result. We are looking at migrating the print services to one of the windows boxes, as this is just hurting us too much.I have spent my 3 days playing with it and it is now time to move on. If it can't be done with 7.3, it is not going to get done I am afraid, as this box is our internal development box, and pulling it down is too expensive..
https://bugzilla.redhat.com/show_bug.cgi?id=66832
CC-MAIN-2016-50
refinedweb
847
67.76
This forum is no longer active. Please post your questions to our new community site 12,162 post(s) found Pages: « Previous 1 2 3 4 5 6 7 8 9 10 … 405 406 Next » Topic: Trac / Cannot install Bitnami Trac Stack. "There has been an error...nl.dat: Permission denied" I keep getting this message: There has been an error. Error running C:\Program Files\BitNami Trac Stack/pythonPackages/installBabel.bat :error: c:\progra~1\bitnam~2\python\lib\site-packages\Babel-0.9.5-py2.6.egg\babel\localedata\nl.dat: Permission denied The application will exit now I am trying to install it in a networked environment, in which I have been given admin rights on my local machine to install software. Any ideas as to why this may be happening? Thanks in Advance! Topic: Cloud (Amazon AWS & BitNami Cloud Hosting) / Unable to SSH newly created Drupal Instance Dear Support, I’m not able to connect to an EC2 instance that I created. I CAN access the web interface. My details are below: AMI Image: bitnami-drupal-6.22-0-linux-ubuntu-10.04-ebs (ami-da8a73b3) Zone: us-east-1d Type: m1.small I’m using windows. I tried with putty and sshclient.exe. For putty I used the steps described in….3f. In Addition, I also tried the command as below: putty.exe i "PATH\myKey.ppk " -l bitnami xxxxx.compute-1.amazonaws.com For sshclient I tried as below: “C:\Program Files\SSH Communications Security\SSH Secure Shell\SshClient.exe” -i C:\PATH\myKey.pem bitnami@XXXXXX.compute-1.amazonaws.com I’ve been trying for hours now. Please help. Thanks Ashraf Topic: Cloud (Amazon AWS & BitNami Cloud Hosting) / Default Username and Password Issues Beltran, That is what fixed it, to note for others, you can’t change the instance type, you have to actually terminate and reinstall again. Thanks Topic: Redmine / Need Stack for Redmine 1.0.0 Hello, I’m trying to simulate the environment of a client for a few tests, and would like to install the precise environment he has. Would it be possible to get a link to the Redmine Stack 1.0.0? Thanks in advance. EDIT: I forgot to mention the client is running it on a Windows XP machine (go figure…) as a native deployment Topic: RubyStack & JRubyStack / re installer fails Sorry, not yet. We will add this issue in our todo list. Topic: Redmine / batch file to backup DB under windows So I just append mysql backup line to “use bitnami redmine stack” batch file? (So I can automated it?) Topic: WordPress / can't get back to site Hi Judith, Could you try to go to your Wordpress installation directory and start the servers form the manager-osx tool? Could you post if you can see any error in the log tag?. Tried the mac app included in the stack and also via the terminal, following the steps on the forum. It seems the server is running but database won’t and can’t get access. What to do? Also installing new Bitnami install and then replace the htpdocs but obviously the database info is not correct and that didn’t work… Will try this again with only replacing the wp-content folder and then just leave my computer on? Any simpler ideas? Thanks, Judith Topic: DjangoStack / 2 DjangoStack admin problems with fixes Just a quick note to let you know that these issues are resolved in DjangoStack 1.3-1. Thanks garyrob for reporting them and for posting a detailed solution. Hi, Did you launch a micro instance? Alfresco have problems with micro instances, it requires a small or large instance to work properly. I checked that launching a large instance is working with “user”/“bitnami” by default. Topic: Redmine / Upgrade Redmine 1.1.2.stable >> 1.2.1 - NoMethodError: undefined method It seems that all the steps are correct. Do you have any plugin installled? Topic: General / Installing subversion on a bitnami LAMP stack image I’m affraid that the Subversion Stack also ships their own Apache server. You can use RubyStack wich ships PHP, Apache, Subversion and Ruby. Topic: Trac / Trac Stack with PostgreSQL please I think the problem is in how you are creating the project. It seems that the psycopg2 module installed correctly. Can you post the details of how are you creating it? Thanks. Topic: Redmine / redmine access other users… Topic: Moodle / Updating Moodle 2.03 to 2.1 I suggest to download and install the new BitNami Moodle and try to migrate your database to the new installation. You can find how to create a database backup and how to restore it in a different database at Topic: Trac / Testlink on bitnami trac stack You can uncompress it in your_installation_directory/apache2/htdocs folder but this will overwrite the welcome page. You can also create the same schema that other BitNami apps: installdir/apps/your_app/htdocs/ <- application files installdir/apps/your_app/conf/your_app.conf <- apache configuration file apache2/conf/httpd.conf <- add “Include installdir/apps/your_app/conf/your_app.conf” at the end of the file Topic: Joomla! / Installing over Xampp Hi Rene, We are only support installing BitNami Joomla! Module on top of BitNami LAMP. In this case you can migrate your installation to XAMPP importing the database and copying the Joomla! files. You can find how to create a database backup and how to restore it in the Wiki You can run the following command or write it in a batch script: > /installdir/mysql/bin/mysqldump -u root -p database_name > backup.sql > /installdir/mysql/bin/mysqldump -u root -p database_name > backup.sql You can find more info at Topic: Virtual Appliances / cannot create SCM in project setting I downloaded the VM version of bitnami redmine, it works well except the repositories. when i create a new project, and set the repository of project, i can select SCM, but the button “Create” is disabled, why? is there any thing wrong, i didn’t modify any file in VM, can someone help me solve this problem, thanks Hello, I have taken over a server recently running Redmine 1.1.2 BitNami Stack and I would like to upgrade it to 1.2.1 but running into a few snags along the way. I’ve read through the following resources ===……… === These are the steps I have done so far === Backup === cd /var/www/bitnami/ ./ctlscript.sh status ./ctlscript.sh stop redmine mysqldump —add-drop-database —add-drop-table —comments —dump-date redmine > ~/backup/redmine1.1.2-backup_2011-07-31_09-30_mysqldump.sql tar -czvf ~/backup/redmine1.1.2-backup_2011-07-31_09-30.tar.gz apps/redmine ===== cd ~/Source svn co svn://rubyforge.org/var/svn/redmine/trunk redmine cd /var/www/bitnami/ ./use_redmine mv apps/redmine apps/redmine-old mv ~/Source/redmine apps/redmine cd apps cp -r redmine-old/script redmine-old/conf redmine-old/files redmine/ cp redmine-old/config/email.yml redmine/config/configuration.yml cp redmine-old/config/database.yml redmine/config/ cp redmine-old/config/mongrel_cluster.yml redmine/config/ mkdir redmine/tmp/pids sed -i ‘s/RAILS_GEM_VERSION/#RAILS_GEM_VERSION/g’ redmine/config/environment.rb cp redmine-old/config/initializers/session_store.rb redmine/config/initializers cd redmine ======= vim config/initializers/mongrel_cluster_with_rails_23_fix.rb #Add the below to file module ActionController class AbstractRequest < ActionController::Request def self.relative_url_root=(path) ActionController::Base.relative_url_root=(path) end def self.relative_url_root ActionController::Base.relative_url_root end end end === rake db:migrate RAILS_ENV=“production” rake db:migrate:upgrade_plugin_migrations RAILS_ENV=“production” rake db:migrate_plugins RAILS_ENV=“production” rake tmp:cache:clear rake tmp:sessions:clear exit ./ctlscript.sh start redmine ==== Receiving Redmine 500 error & the following in the logs [apps/redmine/log/mongrel.3001.log] Sun Jul 31 12:02:20 -0700 2011: Error calling Dispatcher.dispatch #<NoMethodError: undefined method `[]’ for nil:NilClass> [apps/redmine/log/production.log] ActionView::TemplateError (undefined local variable or method `csrf_meta_tag’ for #) on line #8 of app/views/layouts/base.rhtml: Thank you for any assistance Is it somehow possible to add a bitnami subversion server easily? Hello. I have an image with apache2, mysql, php and ubuntu 10.04. I would like to install subversion on this system such that it uses the already installed apache2 facilities etc. How do i go about doing that? Thank you I installed the Bitnami image, AMI: bitnami-alfresco-3.4.d-1-linux-x64-ubuntu-10.04-ebs (ami-1334f07a) The default username and password are not correct on the image page so i followed the instructions and edited the /opt/bitnami/apache-tomcat/shared/classes/alfresco-global.properties file. I edited the alfresco_user_store.adminpassword and username and saved under sudo and restarted teh services but i still cannot login. I have tried every combination i have seen on all your image pages. How do i connect remotely to MySQL as well, i can’t find those details in my.cnf. Also, what is the ROOT password? This is like loading an instance and i can’t control anything or find any documentation of your passwords anywhere. Do I have to login as bitnami and then run sudo su root to get there? i try to edit the mysql information and always run into this error too…. /opt/bitnami/mysql/bin/mysqladmin.bin: connect to server at ‘localhost’ failed error: ’Can’t connect to local MySQL server through socket ‘/opt/bitnami/mysql/tmp/mysql.sock’ (2)’ Check that mysqld is running and that the socket: ‘/opt/bitnami/mysql/tmp/mysql.sock’ exists! Another error i find in teh alfresco.log is 02:03:15,407 ERROR [org.alfresco.repo.content.transform.RuntimeExecutableContentTransformerWorker] Failed to start a runtime executable content transformer:. >> Are you using the official installer of PostgreSQL? Yes, Im using official installer, version 9.0.4. I have reinstalled PostgreSQL, it did not help. I have removed and added a project using ‘initenv’, it did not help me, too. Console output was: … Project environment for ‘ScriptRecorder’ created. standalone web server `tracd`: Then point your browser to…. ….. I tried tracd, browser does not open project page, error is “404 not found”. Also, result of ‘psycopg2’ check in Python CLI is following: Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type “help”, “copyright”, “credits” or “license” for more information. >>> psycopg2 Traceback (most recent call last): File “”, line 1, in NameError: name ‘psycopg2’ is not defined >>> import psycopg2 >>> psycopg2 <module ‘psycopg2’ from ‘C:\Program Files\BitNami Trac Stack\python\lib\site-pac kages\psycopg2-2.4.1-py2.5-win32.egg\psycopg2\init.pyc’> >>> You can see that ‘psycopg2’ command does not work without ‘import psycopg2’. It means that ‘psycopg2’ module is not imported by default. How can I make apache import psycopg2? Topic: Virtual Appliances / Database Problem with Drupal 7.4-0 VMware (Ubuntu 10.10) Thanks for your quick response Beltrán! :) I there an easy way to upgrade our running bitnami Moodle stack from version 2.03 to 2.1 ? I need some features of the new Moodle 2.1 and I do not want to lose any of my existing Moodle 2.03 settings. Hello once again, I’m trying to install testlink to integrate-it with trac, installing it with wamp i just unzip testlink folder to www page inside wamp folder. But how about in bitnami stack? i’m afraid to do something with and screw up my actual installation. thanks :) Hi out there, I already installed Joomla 1.7 after installing Xampp. How do I use Bitnami without installing my Joomla site again? Xampp has installed Apache, PHP and MySql. same error and u have eny ideas yet ? Next Page
https://bitnami.com/forums/posts?page=6
CC-MAIN-2015-11
refinedweb
1,950
50.84
STRCHR(3) BSD Programmer's Manual STRCHR(3) strchr, index - locate first occurrence of a character in a string #include <string.h> char * strchr(const char *s, int c); char * index(const char *s, int c); The strchr() function locates the first occurrence of the character c in the string s. The terminating NUL character is considered part of the string. If c is '\0', strchr() locates the terminating '\0'. The index() function is an old synonym for strchr(). The strchr() function returns a pointer to the located character or NULL if the character does not appear in the string. After the following call to strchr(), p will point to the string "oobar": char *p; char *s = "foobar"; p = strchr(s, 'o'); memchr(3), strcspn(3), strpbrk(3), strrchr(3), strsep(3), strspn(3), strstr(3), strtok(3) The strchr() function conforms to ANSI X3.159-1989 ("ANSI C"). The index() function is deprecated and shouldn't be used in new code..
https://www.mirbsd.org/htman/i386/man3/strchr.htm
CC-MAIN-2014-10
refinedweb
162
63.7
/* pwd.c - Try to approximate UN*X's getuser...() functions under MS-DOS. Copyright (C) 1990 by Thorsten Ohl, td12@ddagsi3.bit. */ /* This 'implementation' is conjectured from the use of this functions in the RCS and BASH distributions. Of course these functions don't do too much useful things under MS-DOS, but using them avoids many "#ifdef MSDOS" in ported UN*X code ... */ /* Stripped out stuff - MDLadwig <mike@twinpeaks.prc.com> --- Nov 1995 */ #include "mac_config.h" #include <pwd.h> #include <stdio.h> #include <stdlib.h> #include <string.h> static char *home_dir = "."; /* we feel (no|every)where at home */ static struct passwd pw; /* should we return a malloc()'d structure */ static struct group gr; /* instead of pointers to static structures? */ pid_t getpid( void ) { return 0; } /* getpid */ pid_t waitpid(pid_t, int *, int) { return 0; } /* waitpid */ mode_t umask(mode_t) { return 0; } /* Umask */ /* return something like a username in a (butchered!) passwd structure. */ struct passwd * getpwuid (int uid) { pw.pw_name = NULL; /* getlogin (); */ pw.pw_dir = home_dir; pw.pw_shell = NULL; pw.pw_uid = 0; return &pw; } /* Misc uid stuff */ struct passwd * getpwnam (char *name) { return (struct passwd *) 0; } int getuid () { return 0; } int geteuid () { return 0; } int getegid () { return 0; }
http://opensource.apple.com/source/cvs/cvs-30/cvs/macintosh/pwd.c
CC-MAIN-2016-30
refinedweb
194
62.54
Finding The Median September 11, 2015 Since there are only 256 possible values in the array, the easiest way to solve this exercise is to create an array of 256 counters, all initially zero, index through the input array incrementing the value that corresponds to the current element; when the input is exhausted two pointers approach from opposite ends of the counts array, accumulating until the counts equal or exceed half the length of the array, when the median is calculated: (define (median xs) (let* ((counts (make-vector 256 0)) (len (do ((xs xs (cdr xs)) (len 0 (+ len 1))) ((null? xs) len) (vector-set! counts (car xs) (+ (vector-ref counts (car xs)) 1))))) (let loop ((lo 0) (lo-count 0) (hi 255) (hi-count 0)) (cond ((< lo-count (/ len 2)) (loop (+ lo 1) (+ lo-count (vector-ref counts lo)) hi hi-count)) ((< hi-count (/ len 2)) (loop lo lo-count (- hi 1) (+ hi-count (vector-ref counts hi)))) (else (/ (+ lo hi) 2)))))) That’s clearly linear time, as it touches each item in the input once, and constant space, since the size of the counts array is independent of the input. Here are some examples: > (median '(2 4 5 7 3 6 1)) 4 > (median '(5 2 1 6 3 4)) 7/2 If you don’t mind scrambling the input, another algorithm that works in linear time and constant space uses a “fat pivot” quicksort, recurring only on the partition that contains the median index. You can run the program at. Similar solution in Perl: Clever! My knee-jerk reaction was to ignore the 8-bit integer part and just go for the slower generic solution. I’ve used this as an opportunity to learn some Rust. I like that it made me think about the empty array case. Python solution. @graham, I think you have an “off by one error”. In the loop ‘lo’ is incremented before use, so the value of count[0] is never added to lo_count. Similarly, ‘hi’ is pre-decremented, so count[255] is never added to hi_count. @Mike: good eye! Sorry for the errors, all. […] Programming Praxis problem. The problem is nicely constrained by limiting it to 8-bit […] It looks like my solution (along with the issue @Mike caught) doesn’t correctly handle the case of a single element array. Apologies! #!/usr/bin/perl -w use strict; my @array = qw/1 2 3 4 9 10 11/; print median(@array); sub median { my @vals = sort {$a $b} @_; my $len = @vals; if($len%2) #odd? { return $vals[int($len/2)]; } else #even { return ($vals[int($len/2)-1] + $vals[int($len/2)])/2; } } #include int main() { int a[10],i,j,n; printf(“Enter the no of events\n”); scanf(“%d”,&n); printf(“Input\n”); for(i=0;i<n;i++) scanf("%d",a+i); for(i=1;i<n;i++) for(j=0;ja[i]) a[j] = a[j]*a[i]/(a[i]=a[j]); printf(“Output\n”); for(i=0;i %d”,a[n/2]); else printf(“Median is -> %d”,(a[n/2]+a[n/2 -1])/2); }
https://programmingpraxis.com/2015/09/11/finding-the-median/2/
CC-MAIN-2019-04
refinedweb
519
55.88
I have been fortunate enough that my programming tasks were simple. All my projects were for the Internet, which meant that my classes only had to handle one or two aspects of my programs. I always had a main function to dump global instructions and procedures. Object-oriented programming (OOP) with a little web scripting did everything I needed without much code rewriting. However, I know not every program is as easy to develop. Some require runtime tasks that must be done at every method call. While you can create special classes to handle these tasks, you still have to write the code that calls these classes in each and every class and method you write. Fortunately, we have aspect-oriented programming. By adding some aspect-oriented programming in your C# code, you can kis your programming nightmares goodbye. Learn how to use OOP strategies in C# at Udemy What is Aspect Oriented Programming? Reusability is not a new concept. If you have been programming for some time, your programs already contain several pieces of reusable code that you created over the years. For most of us, OOP gave us all we need. We established each class as a single purpose object and set our main function to call these objects as needed. Life was good until we needed to do something more complicated like logging the execution of a program. For an application execution trace, we need to log the names of each and every method as they are called as well as such things as how much time it took to execute the method. So, we write a simple logger class to handle these tasks. Our logger class will have methods that will do such things as BeginLog, EndLog. These methods will record the class being logged into a file or database. Now, we let each method of our program execute these logger methods, once at the start and once just before they return control back to our main function. We also implement reflections to track the calling methods. We now have something that looks like this: class BusinessModule{ … Core data methods BeginLog(); Public static void method1() { … BeginLog(); … Perform core operation … EndLog(); } public static void method2() { … BeginLog(); … Perform core operation … EndLog(); } } As you can see, even with reusable code, we have to do lots of coding to get the logger to work as advertised. We can remove some of this repetitive code by using some aspect oriented programming C# techniques. We create attributes and aspect such as “LogEnable” to take care of the logging code for us. Developed in the 1990s, aspect-oriented programming (AOP) provides us attribute/ aspect components we can use in our classes at every level. These aspects exist as metadata for our classes which we can implement at run time to provide the required services on an as needed basis. Learn about other ways you can extend the .Net Framework at Udemy Aspect Oriented Programming in C# AOP handles aspects as component frameworks. It uses something called an aspect wrapper to insert our aspect methods into our classes at compile time. However, C# does come with an aspect wrapper. C# is a hybrid procedural and object-oriented programming language. Therefore, we have to mimic AOP using a combination of special classes, namespaces, and interfaces. First, we must inherit our AOP-enabled classes from the ContextBoundClass. The ContextBoundClass class lets us build AOP components by letting us extend the .Net Framework providing the metadata for our components. public class ContextClass: ContextBoundObject { } We add our metadata to this class where we need to implement our aspect methods. In C#, we create our aspect modules as attribute classes. As you see below, our attribute classes will have their own attributes. In our example, the AttributeTarget attribute controls the application of the attribute to class/method/property and so on. [AttributeUsage(AttributeTargets.Class)] public class ContextAtrbAttribute :Attribute { private string _fileName; public ContextAtrbAttribute(string fileName) { this._fileName = fileName; } } Understand how to implement C# interfaces at Udemy Contexts in C# C# implements AOP through contexts. Contexts are logical groupings of the objects with the same aspect values. They establish the places where our aspect interceptors can intercept their calls from our objects. Objects with different contexts can still communicate with each other but only through the .Net context architecture. .Net transmits data between contests through chains of message sinks which we can custom build as we please. These sinks pass our message data from sink to sink until it reaches the end of the chain, and it is here where we insert our aspect interceptors. The C# run-time sets up this chain of message sinks the moment we set our classes to inherit ContextBoundObject. It also sets up proxy objects to help facilitate communication between our contests. The transparent proxy takes the message from the runtime and serializes it into the stack before sending it on to the real proxy. The real proxy represents our class in the message sink channel. It passes the messages to the sinks in the channel for processing. Each sink can preprocess the message before sending it onto the next sink in the chain. At the end, the stack builder sink de-serializes the message into the stack and calls the program class’ method. Run time then reverses the process and sends the parameter and return values back through the chain for post processing. At any step in the chain, we can implement our aspect oriented programming attribute methods. We do this by first setting up some prerequisites and then adding a few event triggers to our program classes. All of our prerequisites fall under the following System.Runtime.Remoting namespaces. So, you want to include them in all your AOP classes. using System.Runtime.Remoting.Contexts; using System.Runtime.Remoting.Activation; using System.Runtime.Remoting.Messaging; We then have to make sure our main classes inherit ContextBoundObject so they understand and can call our AOP procedures. We then have to create our aspect classes making sure they all implement the IContextAttribute interface and inherit from the Attribute class. We then add in our context properties by implementing the IContextProperty and IContributeObjectSink interfaces. Finally, we develop our message sinks using the IMessageSink interface to complete the aspect. From that point on, all we need to do is write our code. If we implemented these inheritances and interfaces correctly we will have aspect oriented programming C# classes on our hands.
https://blog.udemy.com/aspect-oriented-programming-in-c-sharp/
CC-MAIN-2017-17
refinedweb
1,075
55.54
Ok, let me rephrase this a bit. When you install a new EPiServer 7.5 site, regardless if it's based on MVC or Web Forms, you get the following configuration added for the site to be able to run. The EPiServer User Interface uses some MVC for it's views, thus you need MVC even if your templates are built using Web Forms. I would suggest that you add the line in bold from the previous post to see if this solves your problem. Thanks Valdis - Is app pool running on correct .Net version? - Yes - Is EPiServer.Framework.dll in bin/ folder - Yes - Is System.Web.Mvc.dll present in bin/ dir? - No we didn't find it in Episerver folder neither in published site folder in UAT server but same scenario with Test server and test site is up and running fine. Can you give us excact path to find? - Is it 7.0 or 7.5 site? - IT is7.0 site Hm, strange. Which .Net version are you targeting? If you can't find System.Web.Mvc.dll in your bin/ then most probably you are using Mvc installed "globally" on the machine (it's better to reference particular NuGet package). Try to find it in GAC or in C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET MVC 4\Assemblies. Don't have 7.0 installed anymore to play around - so just a wild guesses.. Hi, EPiserver 7 site(migrated from 5.2) is working fine on test environment but after deploying it on uat server it is giving below mentioned error.All the required softwares are installed properly on UAT server. code at line 1529 in we.config is:- Please look this issue on priority. Exception information: Exception type: HttpCompileException Exception message: c:\Teller\Web.config(1529): error CS0234: The type or namespace name 'Framework' does not exist in the namespace 'EPiServer' (are you missing an assembly reference?).Compilation.BuildManager.CallAppInitializeMethod())
https://world.optimizely.com/forum/legacy-forums/Episerver-7-CMS/Thread-Container/2014/9/The-type-or-namespace-name-Framework-does-not-exist-in-the-namespace-EPiServer/
CC-MAIN-2022-40
refinedweb
326
69.38
cc [ and the hardware sampling and sampling. For the CPC request, this initialization condition can be detected using cpc_set_sample(3CPC) and looking at the counter value for any requests with CPC_OVF_NOTIFY_EMT set. The value of any such counters will be UINT64_MAX. For the SMPL request, no special value returned by cpc_set_sample(3CPC) is prepared to tell the initialization condition of the freshly created LWP. purpose of the flags argument is to modify the behavior of cpc_bind_cpu() to adapt to different calling strategies. Values for the flags argument are defined in libcpc.h as follows: #define CPC_FLAGS_DEFAULT 0 #define CPC_FLAGS_NORELE 0x01 #define CPC_FLAGS_NOPBIND 0x02 When flags is set to CPC_FLAGS_DEFAULT, the library binds the calling LWP to the measured CPU with processor_bind(2) . The application must not change its processor binding until after it has unbound the set with cpc_unbind(). The remaining flags may be used individually or bitwise-OR'ed together. When only CPC_FLAGS_NORELE is asserted, the library binds the set to the measured CPU using processor_bind(). When the set is unbound using cpc_unbind(), the library will unbind the set but will not unbind the calling thread from the measured CPU. When only CPC_FLAGS_NOPBIND is asserted, the library does not bind the calling thread the measured CPU when binding the counter set, with the expectation that the calling thread is already bound to the measured CPU. If the thread is not bound to the CPU, the function will fail. When the set is unbound using cpc_unbind(), the library will unbind the set and the calling thread from the measured CPU. If both flags are asserted (CPC_FLAGS_NOPBIND| CPC_FLAGS_NORELE), the set is bound and unbound from the measured CPU but the calling thread's CPU binding is never altered. The intended use of CPC_FLAGS_NOPBIND and CPC_FLAGS_NORELE is to allow a thread to cycle through a collection of counter sets without incurring overhead from altering the calling thread's CPU binding unnecessarily.. When a hardware sampling for a SMPL request with the CPC_OVF_NOTIFY_EMT flag set collected the requested number of SMPL records, the LWP to which the set is bound receives a SIGEMT signal, but the hardware sampling would not be frozen unlike the CPC request. In the signal handler for the SIGEMT, if the application wants to temporarily stop the hardware sampling, cpc_disable(3CPC) can be called to stop the hardware sampling. And, cpc_enable(3CPC) can be called to restart the hardware sampling. The cpc_unbind() function unbinds the set from the resource to which it is bound. All hardware resources associated with the bound set are freed. If the set was bound to a CPU, the calling LWP is unbound from the corresponding CPU according to the policy requested when the set was bound using cpc_bind_cpu().: For cpc_bind_curlwp(), the system has Pentium 4 processors with HyperThreading and at least one physical processor has more than one hardware thread online. See NOTES. For cpc_bind_cpu(), the process does not have the cpc_cpu privilege to access the CPU's counters. For cpc_bind_curlwp(), cpc_bind_cpc(), and cpc_bind_pctx(), access to the requested hypervisor event was denied. For cpc_bind_curlwp() and cpc_bind_pctx (), the performance counters are not available for use by the application. For cpc_bind_cpu(), another process has already bound to this CPU. Only one process is allowed to bind to a CPU at a time and only one set can be bound to a CPU at a time. The set does not contain any requests or cpc_set_add_request () was not called.. For cpc_bind_cpu(), the specified processor is not online. The cpc_bind_curlwp() function was called with the CPC_OVF_NOTIFY_EMT flag, but the underlying processor is not capable of detecting counter overflow. For cpc_bind_pctx(), the specified LWP in the target process does not exist. */Example 3 Use Hardware Performance Counters and Hardware Sampling to Measure Events in a Process The following example demonstrates how a standalone application can be instrumented with the libcpc(3LIB) functions to use hardware performance counters and hardware sampling to measure events in a process on an Intel platform supporting the Precise Event Based Sampling (PEBS). The sample code binds two monitoring events for the hardware performance counters and two monitoring events for the hardware sampling to the current thread. If any monitoring request caused an overflow, the signal handler invoked by a SIGEMT signal retrieves the monitoring results. When the sample code finishes the task that would be coded in the section commented as Do something here, the sample code retrieves the monitoring results and closes the session. #include <stdio.h> #include <libcpc.h> #include <unistd.h> #include <stdlib.h> #include <errno.h> #define NEVENTS 4 #define EVENT0 "mem_uops_retired.all_loads" #define EVENT1 "mem_uops_retired.all_stores" #define EVENT2 "uops_retired.all" #define EVENT3 "mem_trans_retired.load_latency" #define RATIO0 0x100000ULL #define RATIO1 0x100000ULL #define RATIO2 0x100000ULL #define RATIO3 0x100000ULL #define PRESET_VALUE0 (UINT64_MAX - RATIO0) #define PRESET_VALUE1 (UINT64_MAX - RATIO1) #define PRESET_VALUE2 (UINT64_MAX - RATIO2) #define PRESET_VALUE3 (UINT64_MAX - RATIO3) typedef struct _rec_names { const char *name; int index; struct _rec_names *next; } rec_names_t; typedef struct _rec_items { uint_t max_idx; rec_names_t *rec_names; } rec_items_t; typedef struct { char *event; uint64_t preset; uint_t flag; cpc_attr_t *attr; int nattr; int *recitems; uint_t rec_count; int idx; int nrecs; rec_items_t *ri; } events_t; static cpc_attr_t attr2[] = {{ "smpl_nrecs", 50 }}; static cpc_attr_t attr3[] = {{ "smpl_nrecs", 10 }, { "ld_lat_threshold", 100 }}; static events_t events[NEVENTS] = { { EVENT0, PRESET_VALUE0, CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT, NULL, 0, NULL, 0, 0, 0 }, { EVENT1, PRESET_VALUE1, CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT, NULL, 0, NULL, 0, 0, 0 }, { EVENT2, PRESET_VALUE2, CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT | CPC_HW_SMPL, attr2, 1, NULL, 0, 0, 0 }, { EVENT3, PRESET_VALUE3, CPC_COUNT_USER | CPC_OVF_NOTIFY_EMT | CPC_HW_SMPL, attr3, 2, NULL, 0, 0, 0 } }; static int err; static cpc_t *cpc; static cpc_set_t *cpc_set; static cpc_buf_t *cpc_buf_sig; /* ARGSUSED */ static void mk_rec_items(void *arg, cpc_set_t *set, int request_index, const char *name, int rec_idx) { events_t *ev = (events_t *)arg; rec_names_t *p, *q, *nn; if ((nn = malloc(sizeof (rec_names_t))) == NULL) return; nn->name = name; nn->index = rec_idx; p = NULL; q = ev->ri->rec_names; while (q != NULL) { if (rec_idx < q->index) break; p = q; q = q->next; } nn->next = q; if (p == NULL) ev->ri->rec_names = nn; else p->next = nn; if (ev->ri->max_idx < rec_idx) ev->ri->max_idx = rec_idx; } static rec_names_t * find_recitem(events_t *ev, int index) { rec_names_t *p = ev->ri->rec_names; while (p != NULL) { if (p->index == index) return (p); else if (p->index > index) return (NULL); else p = p->next; } return (NULL); } static int setup_recitems(events_t *ev) { if ((ev->ri = calloc(1, sizeof (rec_items_t))) == NULL) return (-1); errno = 0; cpc_walk_smpl_recitems_req(cpc, cpc_set, ev->idx, ev, mk_rec_items); if (errno != 0) return (-1); return (0); } static void show_record(uint64_t *rec, events_t *ev) { rec_names_t *item; int i; (void) printf("----------------------------------\en"); for (i = 0; i <= ev->ri->max_idx; i++) { if ((item = find_recitem(ev, i)) == NULL) { continue; } (void) printf("%02d: \"%s\": 0x%" PRIx64 "\en", i, item->name, rec[i]); } (void) printf("----------------------------------\en"); } static void show_buf_header(cpc_buf_t *buf) { hrtime_t ht; uint64_t tick; (void) printf("***************** results *****************\en"); ht = cpc_buf_hrtime(cpc, buf); (void) printf("hrtime: %" PRId64 \en", ht); tick = cpc_buf_tick(cpc, buf); (void) printf("tick: %" PRIu64 \en", tick); } static void show_cpc_buf(cpc_buf_t *buf, events_t *ev) { uint64_t val; (void) printf("Req#%d:"\en", ev->idx); if (cpc_buf_get(cpc, buf, ev->idx, &val) != 0) { err = 1; return; } (void) printf(" counter val: 0x%" PRIx64, val); if (val < ev->preset) (void) printf(" : overflowed\en"); else (void) printf("\en"); } static void show_smpl_buf(cpc_buf_t *buf, events_t *ev) { uint64_t *recb; int i; (void) printf("Req#%d:\en", ev->idx); (void) printf(" retrieved count: %u", ev->rec_count); if (ev->rec_count == ev->nrecs) (void) printf(" : overflowed\en"); else (void) printf("\en"); for (i = 0; i < ev->rec_count; i++) { recb = cpc_buf_smpl_get_record(cpc, buf, ev->idx, i); if (recb == NULL) { err = 1; return; } show_record(recb, ev); } } static int retrieve_results(cpc_buf_t *buf) { int i; int repeat = 0; if (cpc_set_sample(cpc, cpc_set, buf) != 0) { return (-1); } show_buf_header(buf); /* Show CPC results */ for (i = 0; i < NEVENTS; i++) { if (!(events[i].flag & CPC_HW_SMPL)) { /* CPC request */ show_cpc_buf(buf, &events[i]); continue; } /* SMPL request */ if (cpc_buf_smpl_rec_count(cpc, buf, events[i].idx, &events[i].rec_count) != 0) { return (-1); } if (events[i].rec_count > 0) show_smpl_buf(buf, &events[i]); if (events[i].rec_count == events[i].nrecs) repeat++; } /* Show remaining SMPL results */ while (repeat > 0) { if (cpc_set_sample(cpc, cpc_set, buf) != 0) return (-1); repeat = 0; for (i = 0; i < NEVENTS; i++) { if (!(events[i].flag & CPC_HW_SMPL)) { /* CPC request */ continue; } if (cpc_buf_smpl_rec_count(cpc, buf, events[i].idx, &events[i].rec_count) != 0) { return (-1); } if (events[i].rec_count > 0) { (void) printf("For req#%d, more than 1 " "retrieval of the sampling results " "were required. Consider to adjust " "the preset value and smpl_nrecs " "value.\en", i); show_smpl_buf(buf, &events[i]); } if (events[i].rec_count == events[i].nrecs) repeat++; } } /* flushed all SMPL results */ return (0); } /* ARGSUSED */ static void sig_handler(int sig, siginfo_t *sip, void *arg) { (void) fprintf(stdout, "signal handler called\en"); if (sig != SIGEMT || sip == NULL || sip->si_code != EMT_CPCOVF) { err = 1; return; } /* Disable all requests */ if (cpc_disable(cpc) != 0) { err = 1; return; } if (retrieve_results(cpc_buf_sig) != 0) { err = 1; return; } /* Enable all requests */ if (cpc_enable(cpc) != 0) { err = 1; return; } /* Restart and reset requests */ if (cpc_set_restart(cpc, cpc_set) != 0) { err = 1; return; } } int main(void) { struct sigaction sa; events_t *ev; cpc_buf_t *cpc_buf; int i; int result = 0; if ((cpc = cpc_open(CPC_VER_CURRENT)) == NULL) { (void) fprintf(stderr, "cpc_open() failed\en"); exit(1); } if ((cpc_caps(cpc) & CPC_CAP_OVERFLOW_SMPL) == 0) { (void) fprintf(stderr, "OVERFLOW CAP is missing\en"); result = -2; goto cleanup_close; } if ((cpc_caps(cpc) & CPC_CAP_SMPL) == 0) { (void) fprintf(stderr, "HW SMPL CAP is missing\en"); result = -2; goto cleanup_close; } if ((cpc_set = cpc_set_create(cpc)) == NULL) { (void) fprintf(stderr, "cpc_set_create() failed\en"); result = -2; goto cleanup_close; } for (i = 0; i < NEVENTS; i++) { ev = &events[i]; if (ev->flag & CPC_HW_SMPL) { ev->nrecs = ev->attr[0].ca_val; } ev->idx = cpc_set_add_request(cpc, cpc_set, ev->event, ev->preset, ev->flag, ev->nattr, ev->attr); if (ev->idx < 0) { (void) fprintf(stderr, "cpc_set_add_request() failed\en"); result = -2; goto cleanup_set; } if (ev->flag & CPC_HW_SMPL) { if (setup_recitems(ev) != 0) { (void) fprintf(stderr, "setup_recitems() failed\en"); result = -2; goto cleanup_set; } } } if ((cpc_buf = cpc_buf_create(cpc, cpc_set)) == NULL) { (void) fprintf(stderr, "cpc_buf_create() failed\en"); result = -2; goto cleanup_set; } if ((cpc_buf_sig = cpc_buf_create(cpc, cpc_set)) == NULL) { (void) fprintf(stderr, "cpc_buf_create() failed\en"); result = -2; goto cleanup_set; } sa.sa_sigaction = sig_handler; sa.sa_flags = SA_RESTART | SA_SIGINFO; (void) sigemptyset(&sa.sa_mask); if (sigaction(SIGEMT, &sa, NULL) != 0) { (void) fprintf(stderr, "sigaction() failed\en"); result = -2; goto cleanup_set; } if (cpc_bind_curlwp(cpc, cpc_set, 0) != 0) { (void) fprintf(stderr, "cpc_bind_curlwp() failed\en"); result = -2; goto cleanup_set; } /* * ================== * Do something here. * ================== */ if (err) { (void) fprintf(stderr, "Error happened\en"); result = -2; goto cleanup_bind; } (void) cpc_disable(cpc); if (retrieve_results(cpc_buf) != 0) { (void) fprintf(stderr, "retrieve_results() failed\en"); result = -2; goto cleanup_bind; } cleanup_bind: (void) cpc_unbind(cpc, cpc_set); cleanup_set: (void) cpc_set_destroy(cpc, cpc_set); cleanup_close: (void) cpc_close(cpc); return (result); } CPC. When a SMPL request is added to a set with the CPC_OVF_NOTIFY_EMT flag set, then as before, the control registers and counter for the sampling are preset from the 64-bit preset value given. When the flag is set, however, the kernel arranges to send the calling process a SIGEMT signal when the hardware collected the requested number of SMPL records for the SMPL request. The si_code member of the corresponding siginfo structure is set to EMT_CPCOVF and the si_addr member takes the program counter value at the time the overflow interrupt for the sampling hardware was delivered. Sampling is kept enabled..
http://docs.oracle.com/cd/E36784_01/html/E36876/cpc-bind-curlwp-3cpc.html
CC-MAIN-2017-43
refinedweb
1,845
50.57
MS Dynamics CRM 3.0 You get the current time and date with functions declared in <time.h>. In particular, you can get the current calendar time with (surprise!) the function time(), which returns a variable of type time_t. If all you want is a string representing the current time and date, you can use ctime() with the value from time() which returns a pointer to a string. Here is an example: #include <stdio.h> #include <time.h> int main(void) { time_t t; struct tm tstruct; t = time(0); tstruct = *localtime(&t); printf("The time from ctime is %s", ctime(&t)); printf("The time from asctime is %s", asctime(&tstruct)); tstruct.tm_min += 6; t = mktime(&tstruct); printf("In 6 minutes,\n"); printf("The time from ctime will be %s", ctime(&t)); printf("The time from asctime will be %s", asctime(&tstruct)); return 0; If you insist on using implementation-specific (and particular hardware-specific) things like timer interrupts, CMOS, gettime(), and getdate(), then you are not writing code that can be used in any other environment, In fact, many of us will not be able to even compile that code. Such code is _off-topic_ in <news:comp.lang.c>. You need to check your documentation and, if you have problems, check a newsgroup or mailing list for your 20-year-old compiler and its non-standard functionality, if you can find it. Short answer: Easy on DOS, more difficult on a modern Windows box. Check the IBM-PC spec on CMOS, depending on which OS you use, the "C code" will vary, except beeing off-topic here. The best place to ask, might be in an assembler programming group (they can do C snippets as well). -- Tor > We don't do specific compilers here. >> I have reqritten the timer interrupt in PC as per my requirement.When > We don't do timer interrupts here. >> I use gettime() and getdate() >> in my program leter I am not getting current time and date. > We don't support non-standard C functions. >> How to get the Time and Date directly from the CMOS. > We don't access CMOS here. > Check the IBM-PC spec on CMOS, depending on which OS you use, the "C > code" will vary, except beeing off-topic here. > The best place to ask, might be in an assembler programming group > (they can do C snippets as well).
http://www.megasolutions.net/c/Time-and-date-77004.aspx
CC-MAIN-2014-52
refinedweb
399
72.46