30 November 2010 09:35 [Source: ICIS news] HO CHI MINH (ICIS)--State-owned PetroVietnam is conducting a feasibility study on expanding the capacity of its 150,000 tonne/year polypropylene (PP) plant in Quang Ngai province to meet the growing domestic demand for the polymer, a company source said on Wednesday. "The study is expected to be completed by March 2011," the source said. The PP plant, located at Dung Quat industrial park, began commercial production in late August this year. Its capacity may be increased by around 35% if the study recommends an expansion, the source said. Meanwhile, the study would also cover a possible capacity increase at PetroVietnam's refinery and fluid catalytic cracking (FCC) unit at the site, he said. The FCC unit currently produces around 150,000 tonnes/year of propylene, which is supplied to the PP plant, the source said.
http://www.icis.com/Articles/2010/11/30/9414965/petrovietnam-studies-expansion-at-quang-ngai-pp-plant.html
As the creator of Foo, a platform for website quality monitoring, I recently undertook a migration to Kubernetes and EKS (an AWS service). Kubernetes provides a robust level of DNS support. Luckily for us, within a cluster, we can reference pods by host name as defined in a spec. But what if we want to expose an app to the outside world as a website under a static domain? I thought this would be a common, well-documented case, but boy was I wrong.

Assume a Service named foo in the Kubernetes namespace bar. A Pod running in namespace bar can look up this service by simply doing a DNS query for foo. A Pod running in namespace quux can look up this service by doing a DNS query for foo.bar. ~ "DNS for Services and Pods" - Kubernetes

Yes, that's great ❤️ But this still leads to many unsolved mysteries. Let's take this one step at a time, shall we?! This post will address the following items.

- How to define services.
- How to expose multiple services under one NGINX server. No fancy-schmancy "Ingress" needed.
- How to create an external DNS record and connect it to a domain you've acquired through any qualified registrar like GoDaddy or Google Domains, for example. We'll use Route 53 and ExternalDNS to do the heavy lifting.

This post assumes a setup with EKS and eksctl as documented in "Getting started with eksctl", but many of the concepts and examples in this post could be applicable in a variety of configurations.

Step 1: Define Services

"Connecting Applications with Services" explains how to expose an NGINX application by defining a Deployment and Service. Let's go ahead and create 3 applications in the same manner: a user-facing web app, an API, and a reverse proxy NGINX server to expose the two apps under one host.
web-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # etc, etc

web-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
  selector:
    app: web

api-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          # etc, etc

api-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
  selector:
    app: api

Fair enough, let's move on!

Step 2: Expose Multiple Services Under One NGINX Server

NGINX is a reverse proxy in that it proxies a request by sending it to a specified origin, fetches the response, and sends it back to the client. Going back to the bit about service names being accessible to other pods in a cluster, we can set up an NGINX configuration to look something like this.

sites-enabled/

upstream api {
  server api:3000;
}

upstream web {
  server web:3000;
}

server {
  listen 80;
  server_name;

  location / {
    proxy_pass http://web;
  }

  location /api {
    proxy_pass http://api;
  }
}

Note how we can reference origin hosts like web:3000 and api:3000. Niiiice!

nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # this part will make more sense later
    external-dns.alpha.kubernetes.io/hostname:
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    app: nginx

...and, we're done! Right? In my experience, initially I thought so. The LoadBalancer provides an externally-accessible IP.
You can confirm by running kubectl get svc, and sure enough you'll find a host name listed in the EXTERNAL-IP column. Assuming you've acquired a domain from a provider that offers an interface to manage DNS settings, you could simply add this URL as a CNAME and you're good, right? Well, kinda... but not so much. Kubernetes Pods are considered to be relatively ephemeral (rather than durable) entities. Find more on this in "Pod Lifecycle - Kubernetes". With that said, anytime a significant change is made in the lifecycle of a service - in our case, the NGINX app - we will get a different address, which in turn causes significant downtime in our app and defeats a main purpose of Kubernetes: to help establish a "highly available" application. Okay, don't panic - we'll get through this.

Step 3: Create an External DNS Service to Dynamically Point to NGINX

In the previous step, with our LoadBalancer spec coupled with EKS, we actually created an Elastic Load Balancer (for better or worse). In this section we'll create a DNS service that points to our load balancer via an "ALIAS record". This ALIAS record is essentially dynamic in that a new one is created whenever our service changes; the stability is established in the name server records. The tl;dr for the remaining portion is: simply follow the documentation for using ExternalDNS with Route 53. Route 53 is a "cloud Domain Name System (DNS) web service". Below are things I had to do that weren't obvious from the documentation. Hold on to your horses, this gets a little scrappy.

- Run eksctl utils associate-iam-oidc-provider --cluster=your-cluster-name per the eksctl service accounts documentation.
- When creating the IAM policy document per the ExternalDNS documentation, I actually had to do it via CLI vs online in my account. I kept getting this error: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403.
When I created the policy via CLI the issue went away. Below is the full command, which you should be able to literally copy and execute if you have the AWS CLI installed.

aws iam create-policy \
  --policy-name AllowExternalDNSUpdates \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["route53:ChangeResourceRecordSets"],"Resource":["arn:aws:route53:::hostedzone/*"]},{"Effect":"Allow","Action":["route53:ListHostedZones","route53:ListResourceRecordSets"],"Resource":["*"]}]}'

- Use the policy ARN output above to create an IAM role bound to the ExternalDNS service account, with a command that will look something like eksctl create iamserviceaccount --cluster=your-cluster-name --name=external-dns --namespace=default --attach-policy-arn=arn:aws:iam::123456789:policy/AllowExternalDNSUpdates.
- We should now have a new role from the above that we can see in the IAM console, with a name like eksctl-foo-addon-iamserviceaccount-Role1-abcdefg. Click on the role in the list and, at the top of the next screen, make note of the "Role ARN", which looks like arn:aws:iam::123456789:role/eksctl-foo-addon-iamserviceaccount-Role1-abcdefg.
- Follow these steps to create a "hosted zone" in Route 53.
- You can confirm things in the Route 53 console.
- If your domain provider allows you to manage DNS settings, add the 4 name server records from the output of the command you ran to create the "hosted zone".
- Deploy ExternalDNS by following the instructions. Afterwards, you can tail the logs with kubectl logs -f name-of-external-dns-pod. You should see a line like this at the end: time="2020-05-05T02:57:31Z" level=info msg="All records are already up to date"

Easy, right?! Okay, maybe not... but at least you didn't have to figure all of that out alone. There could be some gaps above, but hopefully it helps guide you through your process.
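For orientation, the heart of the ExternalDNS Deployment that the Route 53 tutorial has you apply looks roughly like the sketch below. This is abridged and based on my reading of the ExternalDNS docs, not a drop-in manifest: the image tag, owner ID, and service account name are placeholders you would replace with your own values, and the full tutorial manifest also includes RBAC resources.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      # the service account created by eksctl create iamserviceaccount above
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0  # placeholder version
          args:
            - --source=service        # watch Services for the hostname annotation
            - --provider=aws
            - --policy=upsert-only    # never delete records ExternalDNS didn't create
            - --aws-zone-type=public
            - --registry=txt
            - --txt-owner-id=my-hostedzone-identifier  # placeholder

The --source=service flag is what makes ExternalDNS notice the external-dns.alpha.kubernetes.io/hostname annotation we put on the nginx Service earlier.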
Conclusion

Although this post may have some grey areas, if it helps you establish dynamic DNS resolution as part of a highly available application, you've got something really special. Please add comments if I can help clear up anything or correct my terminology!
https://www.freecodecamp.org/news/how-to-setup-dns-for-a-website-using-kubernetes-eks-and-nginx/
Ezt is a collection of #defines and functions() helpful to the Tcl or Jim extension writer. Features: - Boilerplate code for extension initialization that can do namespaces and ensembles. - Junkpile storage facility. - Conversion to/from timeval. - A set of #defines that smooth out the differences between Tcl and Jim, allowing extensions to be written that support both. - The same set of #defines can also be used to write smaller code; some #defines fill in commonly-used values. - A "report" function that makes it easier to set an interp's result, and macros to make that easier. - A "wrong # args" function that returns TCL_ERROR. - Functions and include files for misc stuff, yet to be documented or created, even. - Public domain. Release tarballs:
https://chiselapp.com/user/stwo/repository/ezt/home
A pretty progressbar library

Project description

PRETTY PROGRESSBAR LIBRARY

The progress bar library that started with the idea of looking pretty.

Installation

Only works with python3.

pip3 install ppl

How to use

Simple usage:

import time
from ppl import pb

for i in pb(range(100)):
    time.sleep(0.1)

Show task name along with the progress bar:

import time
import random
from ppl import pb

total = 120
tasks = [
    'Make paintball',
    'Find dragons',
    'Code in python',
    'Take out the trash',
    'Fill up water bottles for trip'
]

for task in tasks:
    for i in pb(range(total), task=task):
        sleep_time = [.05, .04, .03, .02, .01][random.randint(0, 4)]
        time.sleep(sleep_time)  # emulating a long-running task

Custom bar length. By default the bar length is the full width of the terminal window:

import time
from ppl import pb

for i in pb(range(100), bar_len=20):
    time.sleep(0.1)

Source Distribution: ppl-0.4.5.tar.gz (3.3 kB)
https://pypi.org/project/ppl/
I am new to C++ and am writing a simple program to test out working with strings. I have defined a separate function that returns a string and want to call it within the main method. The same code works within the main method, but I need to define it as a separate function. My code is below:

 7 #include <cstdlib>
 8 #include <iostream>
 9 #include <string>
10
11 using namespace std;
12
13 // function declaration
14 string hi();
15
16 int main(int argc, char** argv) {
17     // call method
18     string hi;
19     hi = hi();
20     cout << hi << endl;
21 }
22
23 string hi() {
24     //simple string function
25     return "Hello World";
26 }

test.cpp: In function ‘int main(int, char**)’:
test.cpp:19:13: error: no match for call to ‘(std::__cxx11::string {aka std::__cxx11::basic_string<char>}) ()’
     hi = hi();
         ^

You named both the variable and the function hi, so the compiler gets confused about which one you mean. As @Cheersandhth said, do this:

hi = ::hi();

to get the hi in the global namespace, which is the function. Alternatively, rename the variable:

int main(int argc, char** argv) {
    string l_hi;
    l_hi = hi();
    cout << l_hi;
}

Or rename the function:

int main(int argc, char** argv) {
    string hi;
    hi = hello();
    cout << hi;
}

string hello() {
    return "Hello, World!";
}

Both of these solutions would work; which one you use is a matter of preference. Also, take this answer as advice not to shadow names: shadowing can either lead the compiler to silently pick one of the two names (provided it fits the context) or, as here, cause an error.
https://codedump.io/share/negfMWnmcaZp/1/calling-a-function-returning-a-string-within-the-main-method
Item Description: (see title)

Review Synopsis: A Road Map to Processing XML with Perl

Perl & XML is a hard book to categorize - it is not a beginner's book and it is not a cookbook. I instead found it to be a nice road map to the many XML-processing CPAN modules available to Perl programmers. (And i also found it to be a nice departure from the many XML books available that are only for Java.) This is YAGOB (Yet Another Good O'Reilly Book): nice typesetting, thorough explanations, and a quirky animal cover. The index is decent, but i was a bit disheartened to find that XML::Twig was not included in the index. It is, however, covered in chapter 8. The first chapter is the obligatory introduction; it introduces XML::Simple and discusses 'XML Gotchas'. Chapter two provides a very nice overview of XML in general. It provides the necessary base for XML newbies while serving as a decent reference to refer to while working through the rest of the book. It also gives an example of an XSLT transformation - converting an XML document to an XHTML document without the help of Perl. The fun starts with chapter three, where actual XML processing is discussed and demonstrated. The CPAN modules XML::Parser, XML::LibXML, XML::XPath, and XML::Writer are given brief introductions with sample code. Also included is a demonstration of the wrong way to write an XML parser (a well-formedness checker by hand) and the right way (by using XML::Parser). Document validation and DTDs are introduced with XML::LibXML code as a demonstration, and finally, Unicode encodings are compared and contrasted. Chapters four and six cover event-based and tree-based parsing respectively. Chapter four goes into more detail with XML::Parser and discusses 'repackaging' XML as PYX via XML::PYX. Chapter six discusses XML::Parser yet again (along with XML::Simple) and introduces XML::SimpleObject, XML::TreeBuilder, and XML::Grove. Each module covered is given a good overview and sample code to help demonstrate.
Chapters five and seven cover the SAX and DOM modules respectively. (I recommend reading chapters four and six before covering five and seven.) An example of converting Excel spreadsheets to XML via XML::SAXDriver::Excel is covered in chapter five, as well as SAX2 and installing your own XML::SAX parsers via the h2xs utility. The majority of chapter seven is a DOM class interface reference. There are two examples in this chapter: one that processes an XHTML document with XML::DOM and one that works with DOM2 and namespaces via XML::LibXML. Chapter eight discusses how to make tree-based parsing faster and more efficient via a hand-rolled DOM iterator module (named XML::DOMIterator) that is used in conjunction with XML::DOM, and also revisits the 'node hunter' module XML::XPath. Also included is mirod's XML::Twig, which is used in three examples, one of which shows how tree-based parsing can be optimized by only parsing the smallest part of the tree that needs to be parsed. XSLT is also given a more thorough discussion than the overview in chapter one, including how it can be used in conjunction with Perl via XML::LibXSLT. Chapters nine and ten round the book off with application examples. Chapter nine covers RSS with XML::RSS and briefly discusses XML::Generator::DBI (but makes no mention of DBIx::XML_RDB - see mirod's comment below). It also briefly discusses the controversial SOAP::Lite. Chapter ten provides an application that subclasses an XML parser to provide an API via CGI for manipulating an XML document. Also included is a mod_perl application for converting DocBook files into HTML on the fly, as well as a discussion, solution, and work-around involving the pitfalls of using the Expat library in mod_perl. The only cons i found were a few typos dealing with the ampersand character. Sometimes you will find &amp; where the authors meant & and vice versa. Any seasoned Perl programmer will immediately spot these typos, but some beginners might not.
Another con is that the authors discuss XML::Writer but fail to use it in many examples that write XML. They instead do so by hand, which contradicts the point of using CPAN modules in the first place. Again, a seasoned Perl programmer will know better. The last con is a minor nit-pick: a lot of the code seemed somewhat Java-like to me. However, these cons weigh considerably less than the pros. Again, i recommend this book to any seasoned Perl programmer that has not yet entered the realm of XML processing. Overall i feel this is an excellent book for intermediate to advanced Perl programmers with little or no knowledge of XML processing. Tired of only knowing how to use XML::Simple? This book will show you the alternatives!

Just a precision here: XML::Generator::DBI is meant as a replacement for DBIx::XML_RDB (see the README), so it should not be mentioned any more.

I was gifted this book and I am reading it. I am about halfway through and I'd like to note down my first impressions here. It's surely a good book, even with the typos and contradictions that jeffa pointed out. But I found one more contradiction that I would like to stress. There are points where the authors get a bit pedantic, trying to clarify things that are already clear. Let's see an example: in chapter 5, "SAX", section "External Entity Resolution", the authors give an example of a book written in XML that was split into four files; these files were pulled into an XML file via four external entities. Since a filter that was shown before resolved this kind of entity, filtering this file would result in the whole book as output. I feel that the phrase "Your file separation...multiple files" adds nothing to the concept expressed; worse, it makes the book more boring and hard to read. And I found a lot of these (I am at page 99 now...) Again, it's a good book, but I feel that it would benefit from a revision in the language to make it more direct.
That could make the book slimmer, but personally I don't care about the thickness more than I care about the content.
http://www.perlmonks.org/index.pl/jacques?node_id=174980
Noise: Creating a Synthesizer for Retro Sound Effects - Core Engine

This is the second tutorial in this series; if you have not already read the first, you should work through it before continuing.

Engine Demo

By the end of this tutorial all of the core code required for the audio engine will have been completed. The following is a simple demonstration of the audio engine in action. Only one sound is being played in that demonstration, but the frequency of the sound is being randomised along with its release time. The sound also has a modulator attached to it to produce the vibrato effect (modulating the sound's amplitude), and the frequency of the modulator is also being randomised.

AudioWaveform Class

The first class that we will create will simply hold constant values for the waveforms that the audio engine will use to generate the audible sounds. Start by creating a new package called noise, and then add the following class to that package:

package noise {
    public final class AudioWaveform {
        static public const PULSE:int = 0;
        static public const SAWTOOTH:int = 1;
        static public const SINE:int = 2;
        static public const TRIANGLE:int = 3;
    }
}

We will also add a static public method to the class that can be used to validate a waveform value; the method will return true or false to indicate whether or not the waveform value is valid.

static public function isValid( waveform:int ):Boolean {
    if( waveform == PULSE ) return true;
    if( waveform == SAWTOOTH ) return true;
    if( waveform == SINE ) return true;
    if( waveform == TRIANGLE ) return true;
    return false;
}

Finally, we should prevent the class from being instantiated, because there is no reason for anyone to create instances of this class. We can do this within the class constructor:

public function AudioWaveform() {
    throw new Error( "AudioWaveform class cannot be instantiated" );
}

This class is now complete.
Preventing enum-style classes, all-static classes, and singleton classes from being directly instantiated is a good thing to do, because these types of class should not be instantiated; there is no reason to instantiate them. Programming languages such as Java do this automatically for most of these class types, but currently in ActionScript 3.0 we need to enforce this behaviour manually within the class constructor.

Audio Class

Next on the list is the Audio class. This class is similar in nature to the native ActionScript 3.0 Sound class: every audio engine sound will be represented by an Audio class instance. Add the following barebones class to the noise package:

package noise {
    public class Audio {
        public function Audio() {}
    }
}

The first things that need to be added to the class are properties that will tell the audio engine how to generate the sound wave whenever the sound is played. These properties include the type of waveform used by the sound, the frequency and amplitude of the waveform, the duration of the sound, and its release time (how quickly it fades out). All of these properties will be private and accessed via getters/setters:

private var m_waveform:int = AudioWaveform.PULSE;
private var m_frequency:Number = 100.0;
private var m_amplitude:Number = 0.5;
private var m_duration:Number = 0.2;
private var m_release:Number = 0.2;

As you can see, we have set a sensible default value for each property. The amplitude is a value in the range 0.0 to 1.0, the frequency is in hertz, and the duration and release times are in seconds. We also need to add two more private properties for the modulators that can be attached to the sound; again, these properties will be accessed via getters/setters:

private var m_frequencyModulator:AudioModulator = null;
private var m_amplitudeModulator:AudioModulator = null;

Finally, the Audio class will contain a few internal properties that will only be accessed by the AudioEngine class (we will create that class shortly).
These properties do not need to be hidden behind getters/setters:

internal var position:Number = 0.0;
internal var playing:Boolean = false;
internal var releasing:Boolean = false;
internal var samples:Vector.<Number> = null;

The position is in seconds and it allows the AudioEngine class to keep track of the sound's position while the sound is playing; this is needed to calculate the waveform samples for the sound. The playing and releasing properties tell the AudioEngine what state the sound is in, and the samples property is a reference to the cached waveform samples that the sound is using. The use of these properties will become clear when we create the AudioEngine class.

To finish the Audio class we need to add the getters/setters:

Audio.waveform

public final function get waveform():int {
    return m_waveform;
}
public final function set waveform( value:int ):void {
    if( AudioWaveform.isValid( value ) == false ) {
        return;
    }
    switch( value ) {
        case AudioWaveform.PULSE:    samples = AudioEngine.PULSE;    break;
        case AudioWaveform.SAWTOOTH: samples = AudioEngine.SAWTOOTH; break;
        case AudioWaveform.SINE:     samples = AudioEngine.SINE;     break;
        case AudioWaveform.TRIANGLE: samples = AudioEngine.TRIANGLE; break;
    }
    m_waveform = value;
}

Audio.frequency

[Inline]
public final function get frequency():Number {
    return m_frequency;
}
public final function set frequency( value:Number ):void {
    // clamp the frequency to the range 1.0 - 14080.0
    m_frequency = value < 1.0 ? 1.0 : value > 14080.0 ? 14080.0 : value;
}

Audio.amplitude

[Inline]
public final function get amplitude():Number {
    return m_amplitude;
}
public final function set amplitude( value:Number ):void {
    // clamp the amplitude to the range 0.0 - 1.0
    m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}

Audio.duration

[Inline]
public final function get duration():Number {
    return m_duration;
}
public final function set duration( value:Number ):void {
    // clamp the duration to the range 0.0 - 60.0
    m_duration = value < 0.0 ? 0.0 : value > 60.0 ? 60.0 : value;
}

Audio.release

[Inline]
public final function get release():Number {
    return m_release;
}
public function set release( value:Number ):void {
    // clamp the release time to the range 0.0 - 10.0
    m_release = value < 0.0 ? 0.0 : value > 10.0 ? 10.0 : value;
}

Audio.frequencyModulator

[Inline]
public final function get frequencyModulator():AudioModulator {
    return m_frequencyModulator;
}
public final function set frequencyModulator( value:AudioModulator ):void {
    m_frequencyModulator = value;
}

Audio.amplitudeModulator

[Inline]
public final function get amplitudeModulator():AudioModulator {
    return m_amplitudeModulator;
}
public final function set amplitudeModulator( value:AudioModulator ):void {
    m_amplitudeModulator = value;
}

You no doubt noticed the [Inline] metadata tag bound to a few of the getter functions. That metadata tag is a shiny new feature of Adobe's latest ActionScript 3.0 compiler, and it does what it says on the tin: it inlines (expands) the contents of a function. This is extremely useful for optimisation when used sensibly, and generating dynamic audio at runtime is certainly something that requires optimisation.

AudioModulator Class

The purpose of the AudioModulator is to allow the amplitude and frequency of Audio instances to be modulated to create useful and crazy sound effects. Modulators are actually similar to Audio instances: they have a waveform, an amplitude, and a frequency, but they don't produce any audible sound themselves; they only modify audible sounds.
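To make the idea concrete before we write the class, here is a short sketch of how a modulator will eventually be attached to a sound to produce vibrato, using the Audio, AudioModulator, and AudioEngine classes built in this tutorial; the specific frequency and amplitude values are purely illustrative:

```actionscript
var audio:Audio = new Audio();
audio.waveform = AudioWaveform.SINE;
audio.frequency = 440.0; // concert A

// a slow sine wave attached to the sound's frequency
var vibrato:AudioModulator = new AudioModulator();
vibrato.waveform = AudioWaveform.SINE;
vibrato.frequency = 6.0; // the pitch wobbles six times per second
vibrato.amplitude = 8.0; // the pitch deviates by roughly +/- 8 Hz

audio.frequencyModulator = vibrato;
AudioEngine.play( audio );
```

Attaching the same modulator to amplitudeModulator instead would produce a tremolo (volume wobble) rather than a vibrato.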
First things first, create the following barebones class in the noise package:

package noise {
    public class AudioModulator {
        public function AudioModulator() {}
    }
}

Now let's add the private properties:

private var m_waveform:int = AudioWaveform.SINE;
private var m_frequency:Number = 4.0;
private var m_amplitude:Number = 1.0;
private var m_shift:Number = 0.0;
private var m_samples:Vector.<Number> = null;

If you are thinking this looks very similar to the Audio class, then you are correct: everything except for the shift property is the same. To understand what the shift property does, think of one of the basic waveforms that the audio engine is using (pulse, sawtooth, sine, or triangle) and then imagine a vertical line running straight through the waveform at any position you like. The horizontal position of that vertical line would be the shift value; it's a value in the range 0.0 to 1.0 that tells the modulator where to begin reading its waveform from, and in turn it can have a profound effect on the modifications the modulator makes to a sound's amplitude or frequency. As an example, if the modulator was using a sine waveform to modulate the frequency of a sound and the shift was set at 0.0, the sound's frequency would first rise and then fall, following the curvature of the sine wave. However, if the shift was set at 0.5, the sound's frequency would first fall and then rise. Anyway, back to the code.
The AudioModulator contains one internal method that is only used by the AudioEngine; the method is as follows:

[Inline]
internal final function process( time:Number ):Number {
    var p:int = 0;
    var s:Number = 0.0;
    if( m_shift != 0.0 ) {
        time += ( 1.0 / m_frequency ) * m_shift;
    }
    p = ( 44100 * m_frequency * time ) % 44100;
    s = m_samples[p];
    return s * m_amplitude;
}

That function is inlined because it is used a lot, and when I say "a lot" I mean 44,100 times a second for each playing sound that has a modulator attached to it (this is where inlining becomes incredibly valuable). The function simply grabs a sound sample from the waveform the modulator is using, adjusts that sample's amplitude, and then returns the result.

To finish the AudioModulator class we need to add the getters/setters:

AudioModulator.waveform

public function get waveform():int {
    return m_waveform;
}
public function set waveform( value:int ):void {
    if( AudioWaveform.isValid( value ) == false ) {
        return;
    }
    switch( value ) {
        case AudioWaveform.PULSE:    m_samples = AudioEngine.PULSE;    break;
        case AudioWaveform.SAWTOOTH: m_samples = AudioEngine.SAWTOOTH; break;
        case AudioWaveform.SINE:     m_samples = AudioEngine.SINE;     break;
        case AudioWaveform.TRIANGLE: m_samples = AudioEngine.TRIANGLE; break;
    }
    m_waveform = value;
}

AudioModulator.frequency

public function get frequency():Number {
    return m_frequency;
}
public function set frequency( value:Number ):void {
    // clamp the frequency to the range 0.01 - 100.0
    m_frequency = value < 0.01 ? 0.01 : value > 100.0 ? 100.0 : value;
}

AudioModulator.amplitude

public function get amplitude():Number {
    return m_amplitude;
}
public function set amplitude( value:Number ):void {
    // clamp the amplitude to the range 0.0 - 8000.0
    m_amplitude = value < 0.0 ? 0.0 : value > 8000.0 ? 8000.0 : value;
}

AudioModulator.shift

public function get shift():Number {
    return m_shift;
}
public function set shift( value:Number ):void {
    // clamp the shift to the range 0.0 - 1.0
    m_shift = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}

And that wraps up the AudioModulator class.

AudioEngine Class

Now for the big one: the AudioEngine class. This is an all-static class that manages pretty much everything related to Audio instances and sound generation. Let's start with a barebones class in the noise package as usual:

package noise {
    import flash.events.SampleDataEvent;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.utils.ByteArray;
    //
    public final class AudioEngine {
        public function AudioEngine() {
            throw new Error( "AudioEngine class cannot be instantiated" );
        }
    }
}

As mentioned before, all-static classes should not be instantiated, hence the exception that is thrown in the class constructor if someone does try to instantiate the class. The class is also final because there's no reason to extend an all-static class.

The first things that will be added to this class are internal constants. These constants will be used to cache the samples for each of the four waveforms that the audio engine is using. Each cache contains 44,100 samples, which equates to a single cycle of a one-hertz waveform at a 44.1 kHz sample rate. This allows the audio engine to produce really clean low-frequency sound waves.
The constants are as follows:

static internal const PULSE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SAWTOOTH:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SINE:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const TRIANGLE:Vector.<Number> = new Vector.<Number>( 44100 );

There are also two private constants used by the class:

static private const BUFFER_SIZE:int = 2048;
static private const SAMPLE_TIME:Number = 1.0 / 44100.0;

The BUFFER_SIZE is the number of sound samples that will be passed to the ActionScript 3.0 sound API whenever a request for sound samples is made. This is the smallest number of samples allowed, and it results in the lowest possible sound latency. The number of samples could be increased to reduce CPU usage, but that would increase the sound latency. The SAMPLE_TIME is the duration of a single sound sample, in seconds.

And now for the private variables:

static private var m_position:Number = 0.0;
static private var m_amplitude:Number = 0.5;
static private var m_soundStream:Sound = null;
static private var m_soundChannel:SoundChannel = null;
static private var m_audioList:Vector.<Audio> = new Vector.<Audio>();
static private var m_sampleList:Vector.<Number> = new Vector.<Number>( BUFFER_SIZE );

- m_position is used to keep track of the sound stream time, in seconds.
- m_amplitude is a global secondary amplitude for all of the Audio instances that are playing.
- m_soundStream and m_soundChannel shouldn't need any explanation.
- m_audioList contains references to any Audio instances that are playing.
- m_sampleList is a temporary buffer used to store sound samples when they are requested by the ActionScript 3.0 sound API.

Now we need to initialize the class.
There are numerous ways of doing this, but I prefer something nice and simple: a static class constructor:

static private function $AudioEngine():void {
    var i:int = 0;
    var n:int = 44100;
    var p:Number = 0.0;
    //
    while( i < n ) {
        p = i / n;
        SINE[i] = Math.sin( Math.PI * 2.0 * p );
        PULSE[i] = p < 0.5 ? 1.0 : -1.0;
        SAWTOOTH[i] = p < 0.5 ? p * 2.0 : p * 2.0 - 2.0;
        TRIANGLE[i] = p < 0.25 ? p * 4.0 : p < 0.75 ? 2.0 - p * 4.0 : p * 4.0 - 4.0;
        i++;
    }
    //
    m_soundStream = new Sound();
    m_soundStream.addEventListener( SampleDataEvent.SAMPLE_DATA, onSampleData );
    m_soundChannel = m_soundStream.play();
}

$AudioEngine();

If you have read the previous tutorial in this series then you will probably see what's happening in that code: the samples for each of the four waveforms are being generated and cached, and this only happens once. The sound stream is also being instantiated and started, and will run continuously until the app is terminated.

The AudioEngine class has three public methods that are used to play and stop Audio instances:

AudioEngine.play()

static public function play( audio:Audio ):void {
    if( audio.playing == false ) {
        m_audioList.push( audio );
    }
    // this allows us to know exactly when the sound was started
    audio.position = m_position - ( m_soundChannel.position * 0.001 );
    audio.playing = true;
    audio.releasing = false;
}

AudioEngine.stop()

static public function stop( audio:Audio, allowRelease:Boolean = true ):void {
    if( audio.playing == false ) {
        // the sound isn't playing
        return;
    }
    if( allowRelease ) {
        // skip to the end of the sound and flag it as releasing
        audio.position = audio.duration;
        audio.releasing = true;
        return;
    }
    audio.playing = false;
    audio.releasing = false;
}

AudioEngine.stopAll()

static public function stopAll( allowRelease:Boolean = true ):void {
    var i:int = 0;
    var n:int = m_audioList.length;
    var o:Audio = null;
    //
    if( allowRelease ) {
        while( i < n ) {
            o = m_audioList[i];
            o.position = o.duration;
            o.releasing = true;
            i++;
        }
        return;
    }
    while( i < n ) {
        o = m_audioList[i];
        o.playing = false;
        o.releasing = false;
        i++;
    }
}

And here come the main audio processing methods, both of which are private:

AudioEngine.onSampleData()

static private function onSampleData( event:SampleDataEvent ):void {
    var i:int = 0;
    var n:int = BUFFER_SIZE;
    var s:Number = 0.0;
    var b:ByteArray = event.data;
    //
    if( m_soundChannel == null ) {
        while( i < n ) {
            b.writeFloat( 0.0 );
            b.writeFloat( 0.0 );
            i++;
        }
        return;
    }
    //
    generateSamples();
    //
    while( i < n ) {
        s = m_sampleList[i] * m_amplitude;
        b.writeFloat( s );
        b.writeFloat( s );
        m_sampleList[i] = 0.0;
        i++;
    }
    //
    m_position = m_soundChannel.position * 0.001;
}

So, in the first if statement we are checking whether m_soundChannel is still null, and we need to do that because the SAMPLE_DATA event is dispatched as soon as the m_soundStream.play() method is invoked, and before that method gets a chance to return a SoundChannel instance.

The while loop rolls through the sound samples that have been requested by m_soundStream and writes them to the provided ByteArray instance.
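As a side exercise, the four waveform formulas that $AudioEngine() caches can be translated almost line for line into Python. This is my own sketch for illustration, not part of the engine:

```python
import math

N = 44100  # one second of samples at 44.1 kHz, i.e. a one-hertz waveform

SINE = [0.0] * N
PULSE = [0.0] * N
SAWTOOTH = [0.0] * N
TRIANGLE = [0.0] * N

for i in range(N):
    p = i / N  # phase, 0.0 to 1.0
    SINE[i] = math.sin(math.pi * 2.0 * p)
    PULSE[i] = 1.0 if p < 0.5 else -1.0
    SAWTOOTH[i] = p * 2.0 if p < 0.5 else p * 2.0 - 2.0
    TRIANGLE[i] = (p * 4.0 if p < 0.25
                   else 2.0 - p * 4.0 if p < 0.75
                   else p * 4.0 - 4.0)

# spot-check a few phases
print(PULSE[0], PULSE[N // 2])  # → 1.0 -1.0
print(TRIANGLE[N // 4])         # → 1.0
```

The spot checks confirm the shapes: the pulse flips sign at the half-way phase, and the triangle peaks at the quarter-way phase, exactly as the AS3 loop intends.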
The sound samples are generated by the following method:

AudioEngine.generateSamples()

static private function generateSamples():void {
    var i:int = 0;
    var n:int = m_audioList.length;
    var j:int = 0;
    var k:int = BUFFER_SIZE;
    var p:int = 0;
    var f:Number = 0.0;
    var a:Number = 0.0;
    var s:Number = 0.0;
    var o:Audio = null;
    // roll through the audio instances
    while( i < n ) {
        o = m_audioList[i];
        //
        if( o.playing == false ) {
            // the audio instance has stopped completely
            m_audioList.splice( i, 1 );
            n--;
            continue;
        }
        //
        j = 0;
        // generate and buffer the sound samples
        while( j < k ) {
            if( o.position < 0.0 ) {
                // the audio instance hasn't started playing yet
                o.position += SAMPLE_TIME;
                j++;
                continue;
            }
            if( o.position >= o.duration ) {
                if( o.position >= o.duration + o.release ) {
                    // the audio instance has stopped
                    o.playing = false;
                    j++;
                    continue;
                }
                // the audio instance is releasing
                o.releasing = true;
            }
            // grab the audio instance's frequency and amplitude
            f = o.frequency;
            a = o.amplitude;
            //
            if( o.frequencyModulator != null ) {
                // modulate the frequency
                f += o.frequencyModulator.process( o.position );
            }
            //
            if( o.amplitudeModulator != null ) {
                // modulate the amplitude
                a += o.amplitudeModulator.process( o.position );
            }
            // calculate the position within the waveform cache
            p = ( 44100 * f * o.position ) % 44100;
            // grab the waveform sample
            s = o.samples[p];
            //
            if( o.releasing ) {
                // calculate the fade-out amplitude for the sample
                s *= 1.0 - ( ( o.position - o.duration ) / o.release );
            }
            // add the sample to the buffer
            m_sampleList[j] += s * a;
            // update the audio instance's position
            o.position += SAMPLE_TIME;
            j++;
        }
        i++;
    }
}

Finally, to finish things off, we need to add the getter/setter for the private m_amplitude variable:

static public function get amplitude():Number {
    return m_amplitude;
}

static public function set amplitude( value:Number ):void {
    // clamp the amplitude to the range 0.0 - 1.0
    m_amplitude = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}

And now I need a break!
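The heart of generateSamples() is just two formulas: the waveform-cache lookup and the linear release fade. A stripped-down Python sketch of that inner math (my own translation for illustration; names mirror the AS3 code, and the "waveform cache" here is a constant signal so the fade is easy to see):

```python
def mix_sample(samples, frequency, amplitude, position, duration, release):
    """Return one output sample for an audio instance at `position` seconds."""
    SAMPLE_RATE = 44100
    # index into the cached one-hertz waveform, as in the AS3 code:
    # p = ( 44100 * f * o.position ) % 44100
    p = int(SAMPLE_RATE * frequency * position) % SAMPLE_RATE
    s = samples[p]
    if position >= duration:
        # releasing: fade the sample out linearly over `release` seconds
        s *= 1.0 - ((position - duration) / release)
    return s * amplitude

# a toy "waveform cache": a constant signal makes the fade visible
flat = [1.0] * 44100
print(mix_sample(flat, 440.0, 0.5, 0.0, 1.0, 0.5))   # before release → 0.5
print(mix_sample(flat, 440.0, 0.5, 1.25, 1.0, 0.5))  # halfway through release → 0.25
```

Halfway through a 0.5-second release the fade factor is 1.0 - (0.25 / 0.5) = 0.5, so the 0.5-amplitude sample comes out at 0.25, which is exactly what the AS3 fade line computes.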
Coming Up...

In the third and final tutorial in the series we will be adding audio processors to the audio engine. These will allow us to push all of the generated sound samples through processing units such as hard limiters and delays. We will also be taking a look at all of the code to see if anything can be optimised.

All of the source code for this tutorial series will be made available with the next tutorial.

Follow us on Twitter, Facebook, or Google+ to keep up to date with the latest posts. Tuts+ tutorials are translated into other languages by our community members—you can be involved too!
http://gamedevelopment.tutsplus.com/tutorials/noise-creating-a-synthesizer-for-retro-sound-effects-core-engine--gamedev-1536
CC-MAIN-2015-32
refinedweb
3,027
50.84
Practical .NET

You can turn on logging for your Entity Framework code with a single line. Configuring it to write to a file takes only a little bit more effort.

Assuming that you're using Entity Framework 6, you already have a logging tool that can give you some insights into the SQL your queries are generating and the time they take to run. The Database property of your DbContext object has a Log property that will give you that information.

To turn on logging, you just have to set the Log property to a method that will write to a log. This example sets up logging to write Entity Framework messages to the Debug window:

Dim db As SalesOrderEntities
db = New SalesOrderEntities()
db.Database.Log = AddressOf Debug.WriteLine

In C#, you won't need the AddressOf keyword.

You don't have to touch your code to make this change, and you can also write your output to a file by making some changes in your config file. To turn on logging to a file from your config file, just add a reference to the DatabaseLogger in the System.Data.Entity.Infrastructure.Interception namespace and point it to a file with XML, like this:

<interceptors>
  <interceptor type="System.Data.Entity.Infrastructure.Interception.DatabaseLogger, EntityFramework">
    <parameters>
      <parameter value="C:\Logs\MyApplication.txt"/>
    </parameters>
  </interceptor>
</interceptors>

Here's what the output looks like (slightly edited):

Opened connection at 08-Nov-16 12:15:51 PM -05:00
SELECT TOP (1)
    [c].[Id] AS [Id],
    [c].[FirstName] AS [FirstName],
    [c].[LastName] AS [LastName],
    [c].[CustCreditStatus] AS [CustCreditStatus],
    [c].[CreditLimit] AS [CreditLimit],
    [c].[RenewalDate] AS [RenewalDate],
    [c].[Valid] AS [Valid]
    FROM [dbo].[Customers] AS [c]
-- Executing at 08-Nov-16 12:15:51 PM -05:00
-- Completed in 3 ms with result: SqlDataReader
Closed connection at 08-Nov-16 12:15:51 PM -05:00

when I used the First method on one of my
https://visualstudiomagazine.com/articles/2017/01/02/entity-framework-query-visual-studio.aspx
CC-MAIN-2020-40
refinedweb
321
58.08
Journal: I Suppose I Should Update This Thing

A more recent skydive earlier this year. Much better form than my last one. I just hit 181 today, and have my wingsuit on order.

Eeh, not the best exit, but in my defense it was the first one where I've been "on my own" at the exit. All previous times, someone's been holding on to me. The second video is my 10th jump. I'm running out of levels to fail, though! I'd much rather take it slow and be sure I have it right than rush through the program.

Maybe not, but if you "set > workspace/job.properties" and pass your Job Path in to jmeter as a property, you can read the damn things back out with a beanshell sampler later on. Handy for grabbing variables from your Hudson parameterized build. Just don't forget to define JobPath as ${__P(JobPath,)} in your user defined variables.

import java.io.BufferedReader;

String response = "";
try {
    BufferedReader in = new BufferedReader(new FileReader("${JobPath}/workspace/job.properties"));
    String nextLine = in.readLine();
    while(null != nextLine) {
        String[] keyval = nextLine.split("=");
        if (2 == keyval.length) {
            vars.put(keyval[0], ${__eval(keyval[1])});
        }
        response = response + nextLine + "\n";
        nextLine = in.readLine();
    }
} catch (java.io.FileNotFoundException e) {
    response = "Unable to locate properties file for environment; using defaults.";
}
SampleResult.setResponseData(response);
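The key=value parsing in that Beanshell sampler is easier to see in plain Python. A rough equivalent of the same idea (the file path and demo values here are made up):

```python
import os
import tempfile

def load_properties(path):
    """Parse simple key=value lines, skipping anything that doesn't split cleanly."""
    props = {}
    with open(path) as f:
        for line in f:
            keyval = line.strip().split("=")
            if len(keyval) == 2:  # mirrors the `2 == keyval.length` check
                props[keyval[0]] = keyval[1]
    return props

# quick demo with a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".properties", delete=False) as f:
    f.write("JobPath=/var/hudson/jobs/demo\nbroken line\ncount=3\n")
    name = f.name

props = load_properties(name)
os.remove(name)
print(props)  # → {'JobPath': '/var/hudson/jobs/demo', 'count': '3'}
```

Just like the Beanshell version, any line that doesn't split into exactly two pieces around "=" is silently ignored.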
Do you Slashdot employees like your jobs? Do you want them to continue to exist? If so, perhaps you should start treating this like a business and not like a hobby. Quit breaking things.

They made me do it! It was the peer pressure! I... I'm on... Facebook now.

I just had an interesting revelation regarding freedom. My mom came down with pancreatic cancer about a year ago, and I felt my personal sense of freedom curtailed. Sure, it was only curtailed by my own sense of morality and obligation, but it was limited nonetheless. And I noticed there is only so much freedom I am willing to give up. I was suddenly much more aware of, and resistant to, all the other limitations on my freedom, like my marriage and my job and living in a society where I have to wear pants. Then my mom died, and I inherited a house and quite a bit of money. Now that my freedom is far less constrained by finances, or by dying single mother, only child dynamics, the minor impositions of job and marriage and pants-obsessed society don't even register. I've read that the sense of certainty is simply an emotion, a specific analog circuit that engages and drives our logical mind to come up with explanations. Now, through experience, I believe our sense of freedom is another emotional circuit. While in a strictly deterministic world individual freedom does not exist as such, the sense of personal freedom is a very real part of the chain of cause and effect. (And thus, a personal conundrum is resolved, cognitive dissonance is decreased, and pants are worn.)

After nearly a decade with the same sig, I've decided to get rid of the Python quote and replace it with something even more combative. I saw it in an Empire: Total War loading screen, heh heh. "None can love freedom heartily, but good men; the rest love not freedom, but license." --John Milton. Liberty is a social contract, it requires active participation to achieve it. License is "I get to do what I want."
Reference the NULL within NULL, it is the gateway to all wizardry.
http://slashdot.org/~o1d5ch001/journal/friends
CC-MAIN-2014-42
refinedweb
702
66.74
Scala downright rocks at certain things. Java is great in others. How do you mix Scala into an existing Java Spring Maven program without causing trouble? With a few simple tricks that really don't require much. Scala uses the JVM and has matching Java collections compatibility. Using these, it is possible to update nearly any Java Spring Maven program with Scala classes.

Dependencies for Maven

There is one simple dependency for this to work: the Scala programming language. Add the language to your POM as follows.

<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.11.3</version>
</dependency>

This will cause the Scala language to be included with Maven at runtime.

Wiring your Class with XML

In discovering this, I am using XML due to the need to easily manipulate a class many times. Spring is basically acting as GUI, enforcer, and code reducer. Class variables are considered as part of the constructor and must include the scala.beans dependency. All variables need the @BeanProperty annotation to work. Furthermore, importing scala.collection.JavaConversions._ will give Java collections Scala-like properties, such as the ability to use .foreach().

import scala.beans._
import scala.collection.JavaConversions._

class MyClass{
    @BeanProperty var source:String = null
    @BeanProperty var fromImports:String = null
}

This @BeanProperty annotation will create a getter and setter for Spring when the program builds. It is required even when using a constructor argument. The following XML can be used to wire the bean.

<property name="source" value="mySource"/>

If the variables are placed in a constructor, wire as such.

<bean id="Notify" class="com.hygenics.parser.MyClass" parent="imports">
    <constructor-arg
</bean>
https://dadruid5.com/2015/09/04/mixing-scala-and-java-in-java-spring/
CC-MAIN-2019-30
refinedweb
269
52.36
A lot of this write-up is based on my own (rather limited) understanding of the concept. I am, in no way, an expert and have barely begun to learn programming. I couldn't find any good articles explaining Signals as a concept and hence wrote this article hoping to help others who might be struggling similarly. There may (and will) be glaring mistakes and downright incorrect assumptions & statements in the post, and I implore you to correct me wherever necessary so that I may change the post as and where required for others to read and learn later on.

"F**king signals, how do they work?!"

When Django receives a request for content at a specific URL, the request-routing mechanism picks up the appropriate view, as defined by your urls.py, to generate content. For a typical GET request, this involves extracting relevant information from the database and passing it to the template, which constructs the HTML to be displayed and sends it to the requesting user. Quite trivial, really.

In case of a POST request, however, the server receives data from the user. There may be a case where you need/want to modify this data to suit your requirements before committing/storing it to your database. Consider, for example, the situation where you need to generate a profile for every new user who signs up to your site. By default, Django provides a basic User model via the django.contrib.auth module. Any extra information can be added either by way of customising your User model or creating a separate UserProfile model in a separate app. Okay, I choose the LATTER option.
Alright, let's say our UserProfile model looks like this:

GENDER_CHOICES = (
    ('M', 'Male'),
    ('F', 'Female'),
    ('P', 'Prefer not to answer'),
)

class UserProfile(models.Model):
    user = models.OneToOneField(User, related_name='profile')
    nickname = models.TextField(max_length=64, null=True, blank=True)
    dob = models.DateField(null=True, blank=True)
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES, default='M')
    bio = models.TextField(max_length=1024, null=True, blank=True)
    [...]

Now, we need to ensure that a corresponding UserProfile instance is created (or already exists) each time a User instance is saved to the database. Hey, we could override the save method on the User model and achieve this, right? Maybe the code might look something like this:

def save(self, *args, **kwargs):
    u = super(User, self).save(*args, **kwargs)
    UserProfile.objects.get_or_create(user_id=u.id)
    return u  # save needs to return a `User` object, remember!

BUT wait! The User model comes from django.contrib.auth, which is a part of the Django installation itself! Do you really want to override that? I certainly wouldn't recommend that! So, what now?

Okay, if there was a way we could somehow 'listen' to Django's internal mechanisms and figure out a point in the process where we can 'hook in' our little code snippet, it would make our lives so much easier. In fact, wouldn't it be great if Django 'announced' whenever such a point in the process was reached? Wouldn't it be great if Django could 'announce' when it had finished creating a new user? That way, you could simply wait for such an 'announcement' to happen and write your code to act upon it only when such an 'announcement' happens.

Well, we're in luck because Django does exactly that. These announcements are called 'Signals'. Django 'emits' specific signals to indicate that it has reached a particular step in its code-execution.
These signals provide an 'entry point' into its code-execution mechanism, allowing you to execute your own code at the point where the signal is emitted. Django also provides you with the identity of the sender (and other relevant, extra data) so that we can fine-tune our 'hook-in' code to the best possible extent!

Excellent! That still doesn't explain HOW, though...

For our scenario, we have a very convenient signal called post_save that is emitted whenever a model instance gets saved to the database - even the User model! Therefore, all we need to do is 'receive' this signal and hook in our own code to be executed at that point in the process, i.e. the point where a new user instance has just been saved to the database!

from django.dispatch import receiver
from django.db.models.signals import post_save
from django.contrib.auth.models import User

@receiver(post_save, sender=User)
def ensure_profile_exists(sender, **kwargs):
    if kwargs.get('created', False):
        UserProfile.objects.get_or_create(user=kwargs.get('instance'))

Confused? Don't worry. Let's try to read the code line-by-line to understand what it says.

> @receiver(post_save, sender=User)

(NB: I'm hoping the three import lines are kinda obvious.)

The first line of the snippet is a decorator called @receiver. This decorator is simply a shortcut to a wrapper that invokes a connection to the post_save signal. The decorator can also take extra arguments which are passed on to the signal. For instance, we are specifying the sender=User argument in this case to ensure that the receiver is invoked only when the signal is sent by the User model. What the decorator essentially states is this: "Keep an eye out for any instance of a User model being saved to the database and tell Django that we have a receiver here which is waiting to execute code when such an event happens."

> def ensure_profile_exists(sender, **kwargs):

We are now defining a function that will be called when the post_save signal is intercepted.
For purposes of easy understanding and code-readability, we have named this function ensure_profile_exists and we are passing the Signal sender as an argument to this function. We are also passing extra keyword arguments to the function, and we'll shortly see how beneficial these will be.

> if kwargs.get('created', False): UserProfile.objects.get_or_create(user=kwargs.get('instance'))

To understand this, we first need to understand what kind of extra data gets sent along with the post_save signal.

So, let's anthropomorphize the situation for a bit. Imagine instructing a person at a printing press (where a certain book is being printed) by telling them, "Tell me when each published copy of a book comes out of the press." By doing this, you have ensured that this person will approach you and inform you whenever a book copy appears at the end of the printing press production line. Now, imagine that this person is extra-efficient and also brings with them a copy of the published book that came out of the press. You now have two pieces of information - the first being that a book has just finished publishing, and the second being the actual physical copy of the book that was brought to you to do with as you please.

The kwargs variable is an example of the post_save signal being extra-efficient - it sends out highly-relevant information pertaining to the Signal-sender. Typically, the post_save signal sends out a copy of the saved instance and a boolean variable called created that indicates whether a new instance was created or an older instance was saved/updated. There are a few other things that are bundled into the kwargs dictionary, but it is these two variables that we shall use to carry out our ultimate task - that of creating a UserProfile whenever a new User is created.
This part of our code-snippet, therefore, simply checks for the (boolean) value of the created key in the kwargs dictionary (assuming it to be False, by default) and creates a UserProfile whenever it finds that a new user has been created...

... which is precisely what we have wanted all along!

Hmm... But where should all this code live? In which file?

As per the official Django documentation on Signals, you can put signal handling and registration code anywhere you like; note that models.py is the recommended location. That certainly doesn't mean it is the location to dump all your signal registrations and handlers. If you want to be adventurous, you could write your code in a separate file (say signals.py or something similar) and import it at the exact point where your signals need to be registered/handled. Whatever you do, wherever you choose to put your code, make sure that your signal handler has been properly registered and is being invoked correctly and at the right time, whenever & wherever required. Good luck!

POINTS TO NOTE

Notice that we didn't add any extra information to the UserProfile - we merely created an empty (but associated) instance and left it to be modified later. Since most of the attributes have been defined to accept null values, our code will still work. However, if you so wanted, you could use other APIs (internal, as well as external) to acquire the relevant data for the attributes (i.e. the nickname, bio, dob, gender, etc.) and pre-fill them while creating the UserProfile - for example, by extracting the relevant data from a social network profile.

Always check the arguments provided by a signal whenever it is sent. Not all signals send the same arguments, and some third-party apps may send arguments that will be useful to you in more ways than one. Try and make good use of these arguments in your receiver functions as best as you can.
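Stripped of Django's machinery, a signal is essentially the observer pattern: receivers register interest, and senders notify them. Here's a minimal, framework-free Python sketch of that idea (a toy model of my own, not Django's actual implementation):

```python
class Signal:
    """A toy signal: receivers register themselves and get called on send()."""
    def __init__(self):
        self._receivers = []

    def connect(self, receiver, sender=None):
        # sender=None means "call me for every sender"
        self._receivers.append((receiver, sender))

    def send(self, sender, **kwargs):
        for receiver, wanted in self._receivers:
            if wanted is None or wanted == sender:
                receiver(sender, **kwargs)

post_save = Signal()
log = []

def on_save(sender, **kwargs):
    log.append((sender, kwargs.get("created")))

post_save.connect(on_save, sender="User")
post_save.send(sender="User", created=True)
post_save.send(sender="Order", created=True)  # ignored: wrong sender
print(log)  # → [('User', True)]
```

The sender filter in connect() plays the same role as the sender=User argument to @receiver: the callback only fires for the sender it registered for.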
https://coderwall.com/p/ktdb3g/django-signals-an-extremely-simplified-explanation-for-beginners
CC-MAIN-2016-22
refinedweb
1,550
63.29
Parsing HTML in Python

I've been having (mis)adventures learning about Python's various options for parsing HTML.

Up until now, I've avoided doing any HTML parsing in my RSS reader FeedMe. I use regular expressions to find the places where content starts and ends, and to screen out content like advertising, and to rewrite links. Using regexps on HTML is generally considered to be a no-no, but it didn't seem worth parsing the whole document just for those modest goals.

But I've long wanted to add support for downloading images, so you could view the downloaded pages with their embedded images if you so chose. That means not only identifying img tags and extracting their src attributes, but also rewriting the img tag afterward to point to the locally stored image. It was time to learn how to parse HTML.

Since I'm forever seeing people flamed on the #python IRC channel for using regexps on HTML, I figured real HTML parsing must be straightforward. A quick web search led me to Python's built-in HTMLParser class. It comes with a nice example for how to use it: define a class that inherits from HTMLParser, then define some functions it can call for things like handle_starttag and handle_endtag; then call self.feed(). Something like this:

from HTMLParser import HTMLParser

class MyFancyHTMLParser(HTMLParser):
    def fetch_url(self, url) :
        request = urllib2.Request(url)
        response = urllib2.urlopen(request)
        link = response.geturl()
        html = response.read()
        response.close()
        # feed() starts the HTMLParser parsing
        self.feed(html)

    def handle_starttag(self, tag, attrs):
        if tag == 'img' :
            # attrs is a list of tuples, (attribute, value)
            srcindex = self.has_attr('src', attrs)
            if srcindex < 0 :
                return   # img with no src tag? skip it
            src = attrs[srcindex][1]
            # Make relative URLs absolute
            src = self.make_absolute(src)
            attrs[srcindex] = (attrs[srcindex][0], src)
        print '<' + tag
        for attr in attrs :
            print ' ' + attr[0]
            if len(attr) > 1 and type(attr[1]) == 'str' :
                # make sure attr[1] doesn't have any embedded double-quotes
                val = attr[1].replace('"', '\\"')
                print '="' + val + '"'
        print '>'

    def handle_endtag(self, tag):
        self.outfile.write('</' + tag.encode(self.encoding) + '>\n')

Easy, right? Of course there are a lot more details, but the basics are simple.

I coded it up and it didn't take long to get it downloading images and changing img tags to point to them. Woohoo! Whee!

The bad news about HTMLParser

Except ... after using it a few days, I was hitting some weird errors. In particular, this one:

HTMLParser.HTMLParseError: bad end tag: ''

It comes from sites that have illegal content. For instance, stories on Slate.com include Javascript lines like this one inside <script></script> tags:

document.write("<script type='text/javascript' src='whatever'></scr" + "ipt>");

This is technically illegal html -- but lots of sites do it, so protesting that it's technically illegal doesn't help if you're trying to read a real-world site.

Some discussions said setting self.CDATA_CONTENT_ELEMENTS = () would help, but it didn't.

HTMLParser's code is in Python, not C. So I took a look at where the errors are generated, thinking maybe I could override them. It was easy enough to redefine parse_endtag() to make it not throw an error (I had to duplicate some internal strings too). But then I hit another error, so I redefined unknown_decl() and _scan_name(). And then I hit another error. I'm sure you see where this was going. Pretty soon I had over 100 lines of duplicated code, and I was still getting errors and needed to redefine even more functions. This clearly wasn't the way to go.
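(An aside from a later vantage point: in Python 3 this same class lives in html.parser, and it no longer raises HTMLParseError at all, which removes the whole problem above. A rough sketch of the same img-rewriting idea there — my code, not the article's, and the "local/" prefix is just a stand-in for wherever you store downloaded images:)

```python
from html.parser import HTMLParser  # Python 3 home of the same class

class ImgRewriter(HTMLParser):
    """Re-emit tags while rewriting img src attributes to local copies."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            attrs = [('src', 'local/' + v) if k == 'src' else (k, v)
                     for k, v in attrs]
            self.images += [v for k, v in attrs if k == 'src']
        attr_text = ''.join(' %s="%s"' % (k, v) for k, v in attrs)
        self.out.append('<%s%s>' % (tag, attr_text))

    def handle_endtag(self, tag):
        self.out.append('</%s>' % tag)

    def handle_data(self, data):
        self.out.append(data)

p = ImgRewriter()
p.feed('<p>Hi <img src="pic.png"></p>')
print(''.join(p.out))  # → <p>Hi <img src="local/pic.png"></p>
print(p.images)        # → ['local/pic.png']
```

Unlike the Python 2 version, malformed pages just produce best-effort events here rather than exceptions, so there's nothing to override.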
Using lxml.html

I'd been trying to avoid adding dependencies to additional Python packages, but if you want to parse real-world HTML, you have to. There are two main options: Beautiful Soup and lxml.html. Beautiful Soup is popular for large projects, but the consensus seems to be that lxml.html is more error-tolerant and lighter weight.

Indeed, lxml.html is much more forgiving. You can't handle start and end tags as they pass through, like you can with HTMLParser. Instead you parse the HTML into an in-memory tree, like this:

tree = lxml.html.fromstring(html)

How do you iterate over the tree? lxml.html is a good parser, but it has rather poor documentation, so it took some struggling to figure out what was inside the tree and how to iterate over it. You can visit every element in the tree with

for e in tree.iter() :
    print e.tag

But that's not terribly useful if you need to know which tags are inside which other tags. Instead, define a function that iterates over the top level elements and calls itself recursively on each child.

The top of the tree itself is an element -- typically the <html></html> -- and each element has .tag and .attrib. If it contains text inside it (like a <p> tag), it also has .text. So to make something that works similarly to HTMLParser:

def crawl_tree(tree) :
    handle_starttag(tree.tag, tree.attrib)
    if tree.text :
        handle_data(tree.text)
    for node in tree :
        crawl_tree(node)
    handle_endtag(tree.tag)

But wait -- we're not quite all there. You need to handle two undocumented cases.

First, comment tags are special: their tag attribute, instead of being a string, is <built-in function Comment>, so you have to handle that specially and not assume that tag is text that you can print or test against.

Second, what about cases like <p>Here is some <i>italicised</i> text.</p> ? In this case, you have the p tag, and its text is "Here is some ". Then the p has a child, the i tag, with text of "italicised". But what about the rest of the string, " text."?
That's called a tail -- and it's the tail of the adjacent i tag it follows, not the parent p tag that contains it. Confusing!

So our function becomes:

def crawl_tree(tree) :
    if type(tree.tag) is str :
        handle_starttag(tree.tag, tree.attrib)
        if tree.text :
            handle_data(tree.text)
        for node in tree :
            crawl_tree(node)
        handle_endtag(tree.tag)
    if tree.tail :
        handle_data(tree.tail)

See how it works? If it's a comment (tree.tag isn't a string), we'll skip everything -- except the tail. Even a comment might have a tail:

<p>Here is some <!-- this is a comment --> text we want to show.</p>

so even if we're skipping the comment we need its tail.

I'm sure I'll find other gotchas I've missed, so I'm not releasing this version of feedme until it's had a lot more testing. But it looks like lxml.html is a reliable way to parse real-world pages. It even has a lot of convenience functions like link rewriting that you can use without iterating the tree at all. Definitely worth a look!

[ 14:04 Jan 08, 2012 More programming | permalink to this entry | comments ]
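Incidentally, the text/tail model isn't unique to lxml: the standard library's xml.etree.ElementTree uses the same one, so you can see the "tail" behavior without installing anything (this is my own demonstration, using well-formed XML since ElementTree isn't an HTML parser):

```python
import xml.etree.ElementTree as ET

p = ET.fromstring('<p>Here is some <i>italicised</i> text.</p>')
print(repr(p.text))  # → 'Here is some '
i = p[0]             # the <i> child
print(repr(i.text))  # → 'italicised'
print(repr(i.tail))  # → ' text.'  -- the tail belongs to <i>, not <p>

def crawl(node, out):
    # same traversal shape as the blog's crawl_tree()
    if node.text:
        out.append(node.text)
    for child in node:
        crawl(child, out)
    if node.tail:
        out.append(node.tail)

out = []
crawl(p, out)
print(''.join(out))  # → 'Here is some italicised text.'
```

The recursive walk stitches text and tails back into the original string, which is exactly what handle_data ends up seeing in the lxml version.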
http://shallowsky.com/blog/programming/parsing-html-python.html
CC-MAIN-2014-42
refinedweb
1,166
67.15
I want to add a label (or other element) only on one platform. How do I achieve that using XAML? I tried:

<OnPlatform x:
    <OnPlatform.WinPhone>
        <Label Text="Only on WinPhone"></Label>
    </OnPlatform.WinPhone>
</OnPlatform>

but it throws an exception. I guess I could use visibility and set false on every platform but one, but maybe there is a more elegant way.

I've seen this before, and I finally spent some time trying to figure out how all this works and why this particular case doesn't work. For a hacky workaround, skip to the end. Here's the long, boring explanation of why it doesn't work as-is:

When the XAML loader adds children to an element it looks for a method named "Add" (using reflection) on the content property of the parent. A Layout<T> has a Children property, which is an IList<T>, which has an Add(T) method. So when the parser creates the child element and wants to add it to the parent, it goes to the content property, looks for the Add method, and then calls it using a reflection Invoke method, something like this:

The problem is that Add takes a View in this case, and OnPlatform is not a View. Instead, OnPlatform<T> has an implicit operator to convert to T (so in this case OnPlatform<View> can implicitly convert to a View). It turns out that the Invoke method does not do any implicit conversions, so it ends up actually trying to insert the OnPlatform object into the list of children instead of the View that would result from the conversion.

I think I found a way that Xamarin could have written their code to get the conversion, which would require using a TypeConverter and a custom Binder implementation. For instance, this code works:

And the invocation:

What's going on here is that I have two unrelated classes A and B and a method Foo that takes an A. I then dynamically call that method and give it a B (an unrelated type), but when I do that I give it a custom Binder implementation that knows how to handle type conversions.
Then I added a custom TypeConverter to B that tells it how to convert to an A. In this example A is like View and B is like OnPlatform<View>. Since OnPlatform is generic, the converter gets a bit trickier. I made it work doing something like this:

There might be a more direct way to get the implicit operator through reflection instead of introducing a new method. Of course there could be a much more elegant way to handle this, but that's what I know so far. It seems plausible.

Here's the hacky workaround: Create a custom View class like this:

Then just wrap your OnPlatform element in your new custom view:

Now the view that's added to the parent is a Grid (subclass), and the custom view handles the conversion.

I just realized that the OnPlatformView implementation doesn't really need to be specific to OnPlatform. All you need is some kind of view that has a single child rather than a collection. Therefore you don't really need the custom class at all. Just wrap the OnPlatform in a ContentView:

@adamkemp Thanks for the comprehensive answer, wrapping with ContentView works.

A control can be added just to one platform using XAML by setting its IsVisible property using OnPlatform. For example, if we want a StackLayout just in iOS, we lay that out using XAML as below:

<StackLayout>
    <StackLayout.IsVisible>
        <OnPlatform x:TypeArguments="x:Boolean">
            <OnPlatform.iOS>
                true
            </OnPlatform.iOS>
            <OnPlatform.Android>
                false
            </OnPlatform.Android>
        </OnPlatform>
    </StackLayout.IsVisible>
</StackLayout>

Wouldn't it be detrimental to the performance when displaying the page? It would have to calculate the layout even if IsVisible is false, or maybe I'm mistaken?

It works like this. But I have a question about this code. First I forgot the 'x:' before the Boolean, I just wrote:

OnPlatform x:TypeArguments="Boolean"

After adding the x: to the TypeArguments, it worked:

OnPlatform x:TypeArguments="x:Boolean"

What is the x: doing and why is it necessary?

Thanks & Cheers J.
@Clapoti - my timing tests suggest that using IsVisible=False creates (constructs) the invisible content, but does not do a layout call. So in a release build, using XAML compilation and AOT+LLVM [VS Enterprise edition], the time is much less than when the content is visible. So it is a useful technique in many cases. However, for the case of platform-specific content, using OnPlatform is superior, as then the content is completely skipped.

Yeah, using visibility isn't the best approach in this case.

NOTE: The current revised syntax for OnPlatform (so new platforms can be added more easily, as the names are strings rather than values of an enum) means changing @adamkemp's markup to use the new On elements. Note the space between "On" and "Platform" in the sub-nodes.

@jamell x: is the mapping to the xaml namespace. Refer to the link for a detailed understanding.
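Putting the thread's answers together, the workaround in the current string-based syntax looks roughly like this (a sketch only; the Label content is illustrative, not from the thread):

```xml
<!-- ContentView accepts a single child, so the OnPlatform-to-View
     conversion happens here instead of inside a Children collection. -->
<ContentView>
  <OnPlatform x:
    <On Platform="WinPhone">
      <Label Text="Only on WinPhone" />
    </On>
  </OnPlatform>
</ContentView>
```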
https://forums.xamarin.com/discussion/comment/351197/
Embedded C Programming with the PIC18F14K50 - 4. Using a Button

Hi there! In the previous tutorial, we figured out how to use the MPLAB Code Configurator to create a simple Blink program. This time we'll move forward and add another part to the schematics - a button. The task for today is quite simple - to toggle the LED state every time the button is pressed. Here is the schematic diagram to implement the current task (Fig. 1). As you can see, it is very similar to the previous one; the only additional part is the tactile switch (button) S1, one end of which is connected to the RB6 pin of the MCU and the other end to ground. This pin was not selected randomly, and to explain this choice, I need to refer to Table 1 (which I already put in this tutorial: Programming PIC in C - 2. Our First Program). As you know, for a button to work properly, you need to use a pull-up resistor. And if you don't know why, you can refer to this tutorial where I explained its application: Button Inputs - Part 9 Microcontroller Basics (PIC10F200). Like the PIC10F200, the PIC18F14K50 MCU has internal pull-up resistors on several pins. The pins which have these resistors are listed in Table 1, column "Pull-up". You may notice that there are pull-ups only on the RA3-RA5 and RB4-RB7 pins. All pins of Port A have alternative functions (column "Basic" in Table 1), so it's better not to use them unnecessarily. Pins RB4 and RB5 have joint Analog functions (column "Analog" in Table 1), so they are configured as ADC inputs by default. I don't want to overload you with a lot of extra information, so for now let's just not use these pins as well. Finally, we only have the RB6 and RB7 pins, which can be used without any problems or additional configuration. That's why I connected switch S1 to the RB6 pin (Fig. 1), but you can also use the RB7 pin if you want. Let's now create a new project (like I showed you in this tutorial: Programming PIC in C - 2.
Our First Program) and write the code which will implement the given task.

// PIC18F14K50 Configuration Bit Settings
// 'C' source line config statements

// CONFIG1L
#pragma config CPUDIV = NOCLKDIV // CPU System Clock Selection bits (No CPU System Clock divide)
#pragma config USBDIV = OFF // USB Clock Selection bit (USB clock comes directly from the OSC1/OSC2 oscillator block; no divide)

// CONFIG1H
#pragma config FOSC = IRC // Oscillator Selection bits (Internal RC oscillator)
#pragma config PLLEN = OFF // 4 X PLL Enable bit (PLL is under software control)
#pragma config PCLKEN = ON // Primary Clock Enable bit (Primary clock enabled)
#pragma config FCMEN = OFF // Fail-Safe Clock Monitor Enable (Fail-Safe Clock Monitor disabled)
#pragma config IESO = OFF // Internal/External Oscillator Switchover bit (Oscillator Switchover mode disabled)

// CONFIG2L
#pragma config PWRTEN = OFF // Power-up Timer Enable bit (PWRT disabled)
#pragma config BOREN = SBORDIS // Brown-out Reset Enable bits (Brown-out Reset enabled in hardware only (SBOREN is disabled))
#pragma config BORV = 19 // Brown-out Reset Voltage bits (VBOR set to 1.9 V nominal)

// CONFIG2H
#pragma config WDTEN = OFF // Watchdog Timer Enable bit (WDT is controlled by SWDTEN bit of the WDTCON register)
#pragma config WDTPS = 32768 // Watchdog Timer Postscale Select bits (1:32768)

// CONFIG3H
#pragma config HFOFST = ON // HFINTOSC Fast Start-up bit (HFINTOSC starts clocking the CPU without waiting for the oscillator to stablize.)
#pragma config MCLRE = ON // MCLR Pin Enable bit (MCLR pin enabled; RA3 input pin disabled)

// CONFIG4L
#pragma config STVREN = ON // Stack Full/Underflow Reset Enable bit (Stack full/underflow will cause Reset)
#pragma config LVP = OFF // Single-Supply ICSP Enable bit (Single-Supply ICSP disabled)
#pragma config BBSIZ = OFF // Boot Block Size Select bit (1kW boot block size)
#pragma config XINST = OFF // Extended Instruction Set Enable bit (Instruction set extension and Indexed Addressing mode disabled (Legacy mode))

// CONFIG5L
#pragma config CP0 = OFF // Code Protection bit (Block 0 not code-protected)
#pragma config CP1 = OFF // Code Protection bit (Block 1 not code-protected)

// CONFIG5H
#pragma config CPB = OFF // Boot Block Code Protection bit (Boot block not code-protected)
#pragma config CPD = OFF // Data EEPROM Code Protection bit (Data EEPROM not code-protected)

// CONFIG6L
#pragma config WRT0 = OFF // Table Write Protection bit (Block 0 not write-protected)
#pragma config WRT1 = OFF // Table Write Protection bit (Block 1 not write-protected)

// CONFIG6H
#pragma config WRTC = OFF // Configuration Register Write Protection bit (Configuration registers not write-protected)
#pragma config WRTB = OFF // Boot Block Write Protection bit (Boot block not write-protected)
#pragma config WRTD = OFF // Data EEPROM Write Protection bit (Data EEPROM not write-protected)

// CONFIG7L
#pragma config EBTR0 = OFF // Table Read Protection bit (Block 0 not protected from table reads executed in other blocks)
#pragma config EBTR1 = OFF // Table Read Protection bit (Block 1 not protected from table reads executed in other blocks)

// CONFIG7H
#pragma config EBTRB = OFF // Boot Block Table Read Protection bit (Boot block not protected from table reads executed in other blocks)

// #pragma config statements should precede project file includes.
// Use project enums instead of #define for ON and OFF.
#define _XTAL_FREQ 1000000 //CPU clock frequency

#include <xc.h> //Include general header file

void main(void) //Main function of the program
{
    TRISCbits.TRISC0 = 0; //Configure RC0 pin as output
    TRISBbits.TRISB6 = 1; //Configure RB6 pin as input
    WPUBbits.WPUB6 = 1; //Enable pull-up resistor at RB6 pin
    INTCON2bits.nRABPU = 0; //Allow pull-up resistors on ports A and B
    while (1) //Main loop of the program
    {
        if (PORTBbits.RB6 == 0) //If button is pressed (RB6 is low)
        {
            __delay_ms(20); //Then perform the debounce delay
            if (PORTBbits.RB6 == 0) //If after the delay RB6 is still low
            {
                while (PORTBbits.RB6 == 0); //Then wait while button is pressed
                __delay_ms(20); //After button has been released, perform another delay
                if (PORTBbits.RB6 == 1) //If the button is released after the delay (RB6 is high)
                {
                    LATCbits.LATC0 ^= 0x01; //Then perform the required action (toggle RC0 pin)
                }
            }
        }
    }
}

Lines 1 - 59 were generated automatically by the "Configuration bits" section. I explained how to use it in this tutorial: Programming PIC in C - 2. Our First Program. They will be the same or very similar in the following tutorials as well, so if there are no changes I'll just skip the explanation of this part; otherwise I'll point out the changes and explain why they are required. In the current tutorial they are the same as in the previous one, so let's move to line 60. It is also familiar to you: it's the CPU clock definition required by the delay function. Line 62 is also the same as in the previous tutorial, so I won't stop on it. The main function of the program (lines 64-86) starts with configuring the RC0 pin as an output by resetting the corresponding TRISC bit to 0 (line 66). Then we configure the RB6 pin as an input by setting bit 6 of the TRISB register to 1 (line 67). In line 68 there is a new register called WPUB. This register enables the pull-up resistor on a specific pin of Port B.
As I mentioned before, only ports A and B have the ability to enable the pull-up resistors, so there are registers WPUA and WPUB, but not WPUC. In line 68, we use the familiar construction WPUBbits, which allows us to operate with each bit. In our case we need to enable the pull-up resistor on pin RB6, thus we write WPUBbits.WPUB6 = 1; In line 69 we meet another new register - INTCON2. As follows from its name, there are several registers with the same name but different numbers. Actually there are three of them: INTCON, INTCON2, and INTCON3. These registers are used for interrupt control. I don't know why, but besides the interrupt control, there is a bit called RABPU (this is bit #7, just FYI) in the register INTCON2 which allows using the pull-up resistors in general. To me, this bit isn't related to the interrupt system, but who am I to argue with the Microchip developers? So, in line 69 we see the text INTCON2bits.nRABPU = 0; The "bits" suffix should already be very familiar to you, so I'll not dwell on it. The bit RABPU (ports A and B Pull-Up) has the prefix "n". It means that the bit has an inverted value. Thus when the RABPU bit is reset to 0, the pull-up resistors are allowed, and when it's set to 1, they are disabled. The default value of this bit is 1, so we need to reset it to use the pull-up resistors, which we do here, in line 69. I will talk about the other bits of the INTCONx registers when I talk about interrupts, but for now we'll skip them. This is the last line of the configuration part. The next lines (70 to 85) are the main loop of the program. The algorithm of the button processing was described in this tutorial: Button Inputs - Part 9 Microcontroller Basics (PIC10F200), but I'll repeat it briefly in the C language. In line 72, we check if bit RB6 of the register PORTB is 0. This register represents the actual input state of the pin, so we need to use it to check the button state.
If the button is not pressed, then the pin state will be 1 because of the pull-up resistor. When we press the button, the other end of which is connected to ground, the state of the pin will become 0. So, in line 72 we actually check if the button is pressed. After registering the first moment when the button was pressed, we perform the debounce delay of 20 ms (line 74). If the button isn't of very good quality, then the debounce delay time may be increased. After this time we check one more time if the button is still pressed (line 75). If it is (which means that it was not a false signal), then we wait all the time while the button is pressed and just do nothing. This is implemented by means of the while loop (line 77). When the loop condition becomes false, which means that the button has been released, we perform another debounce delay (line 78). After that we check if the pin RB6 is high (line 79). If it is, that means the button has truly been released. And only after that do we perform the required action. In our case, it's toggling the LED (line 81), but this can be any action you need. As you see, in the C language this looks almost the same as in Assembly language, even though in C we use high-level functions and operations, while in Assembly we use the MCU instructions. The code size of the C-based program is 120 bytes while the Assembly-based program is just 43 bytes, even taking into account that the Assembly program processes two buttons. And again, this is the price of the simplicity of using the high-level language. Now you can assemble the circuit according to Fig. 1, compile the code, and download it into the MCU. If everything is done correctly, the LED will change its state every time you press the button. If the operation is not consistent, try to increase the debounce delay. If nothing works at all, double-check the connections. That's it for now.
Today we learned how to operate with a button in C language, and got acquainted with two new registers: WPUx and INTCON2. As homework, change the program to blink the LED with the frequency of 4 Hz while the button is pressed. After releasing the button, turn off the LED. Hope to see you soon despite these hard and unpredictable times.
https://www.circuitbread.com/tutorials/embedded-c-programming-with-the-pic18f14k50-4-using-a-button
SYNOPSIS
    #include <stdio.h>
    #include <wchar.h>

    wint_t btowc(int c);

DESCRIPTION
    The btowc() function determines whether c constitutes a valid (one-byte) character in the initial shift state.

    The behavior of this function is affected by the LC_CTYPE category of the current locale. See environ(5).

RETURN VALUES
    The btowc() function returns WEOF if c has the value EOF or if (unsigned char)c does not constitute a valid (one-byte) character in the initial shift state. Otherwise, it returns the wide-character representation of that character.

ERRORS
    No errors are defined.

ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

SEE ALSO
    setlocale(3C), wctob(3C), attributes(5), environ(5), standards(5)

NOTES
    The btowc() function can be used safely in multithreaded applications, as long as setlocale(3C) is not being called to change the locale.
http://docs.oracle.com/cd/E36784_01/html/E36874/btowc-3c.html
Learn Data Scraping Using Python and Selenium. This article was published as a part of the Data Science Blogathon Introduction: There is a vast amount of data available on the internet, most of it is in the unstructured format in the form of image data, audio files, text data, etc. Such data cannot be used directly to build a Machine Learning model. Industries tend to be benefitted greatly if they can find a way to extract meaningful information from unstructured data. This is where web scraping comes into the picture. According to Wikipedia, “Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis.” So, web scraping basically crawls through a bunch of websites and pulls out the information required. Selenium and BeautifulSoup are the commonly used libraries for web scraping using python. However, we will be using Selenium in this project. One should also be careful because not all websites allow scraping and one should not violate their terms and conditions. For the same reason, some of the sites like Twitter provide their own APIs using which one can download whatever data needed. Selenium Overview: Selenium is a powerful automation tool that is basically used to test web applications. It is compatible with different programming languages such as C#, Java, Perl, Python, etc. and supports a wide variety of browsers such as Chrome, Firefox, Safari, etc. WebDriver is the main component of Selenium which is used to interact with the web browser. It can perform operations on web elements such as clicking a button, filling a form, resizing the browser windows, handling browser popups, adding or deleting the browser cookies, etc. Further information on selenium can be accessed from here. 
Installing Selenium: Selenium can be installed in Python using pip or conda. We will also need Chrome Driver or Gecko Driver depending on the browser; it can be downloaded from here.

pip install selenium
conda install selenium

Objective: Our main objective is to scrape through the job portal and fetch Machine Learning related jobs, or any other jobs for that matter. We will also be fetching information such as Job Title, Location, Company Name, Experience needed, Salary offered, Company ratings and number of reviews of the company, etc. We will do the following,

- Open a new browser using selenium.
- Open the job portal, search for the relevant job in the search field, parse through the required number of jobs and get the relevant details.
- All this is done automatically, and all you need to pass is the search term, the number of jobs to be fetched and the path to your drivers.

Note: It is assumed that you have some basic knowledge about selenium. However, all the code used in this project will be thoroughly explained.

Implementation:

1. Importing Libraries. We will be using the selenium and pandas libraries, and they can be imported using the code below,

from selenium.common.exceptions import NoSuchElementException
from selenium import webdriver
import time
import pandas as pd

2. Initializing the WebDriver and Getting the URL. The first step before opening any URL using selenium is to initialize the WebDriver; the WebDriver basically acts as a mediator between the browser and your selenium code. The code below is used to initialize the driver and open the URL.

# initializing the chromedriver
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(executable_path=path+"/chromedriver.exe", options=options)

# opening the URL
driver.get("")

3. Closing the Unwanted Popups. For some reason, when you open the Naukri.com website using selenium, some unwanted ads in the form of popups open up as you can see below, and they need to be closed.
For this, we will initially note down the window handle of the main page, then loop through all the open window handles and close the ones that are not necessary. Follow the code below to do just that,

# looping through all the open windows and closing ones that are not needed
for winId in popup_windows:
    if winId != Main_Window:
        driver.switch_to.window(winId)
        driver.close()

# switching to the main window
driver.switch_to.window(Main_Window)

4. Entering the Required Keyword to Search. We can find the search field using the class_name and send in the keys to search. We will also get the URL once the search results are obtained (this URL will be modified later as we will see below). Now the URL obtained is in a specific format, and we need to split it based on "?" and get the two parts of the URL.

# Searching using the keyword
driver.find_element_by_class_name("sugInp").send_keys(search_keyword)
driver.find_element_by_class_name("search-btn").click()

# getting the current url which has a specific format which will be used later
get_url = driver.current_url

# getting the two parts of the url by splitting with "?"
first_part = get_url.split("?")[0]
second_part = get_url.split("?")[-1]

5. Initializing Empty Lists. We will initialize some empty lists to store the information necessary. We will use these lists at the end to build our final data frame.

# defining empty lists to store the parsed values
Title = []
Company = []
Experience = []
Salary = []
Location = []
Tags = []
Reviews = []
Ratings = []
Job_Type = []
Posted = []

6. Parsing. The search results that are obtained are divided into pages with 20 results displayed on each page. To find the number of pages to scrape through we will use the number of jobs required to be fetched and loop through all the pages.
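Before wiring this into Selenium, the page-count and URL-building logic just described can be sketched in plain Python (the helper name and sample URL are illustrative, not from the tutorial):

```python
# Hypothetical helper mirroring the tutorial's URL logic: split the search
# URL on "?" and rebuild one URL per result page (20 listings per page).
def build_page_urls(search_url, num_of_jobs):
    first_part = search_url.split("?")[0]
    second_part = search_url.split("?")[-1]
    return [first_part + "-" + str(i) + "?" + second_part
            for i in range(1, int(num_of_jobs / 20) + 1)]

urls = build_page_urls(
    "https://www.naukri.com/machine-learning-jobs?k=machine+learning", 60)
print(len(urls))   # 3 pages cover 60 jobs
print(urls[1])     # ...machine-learning-jobs-2?k=machine+learning
```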
The number of pages to be scraped through is found by using the range function in Python as below,

range(1, int(num_of_jobs/20) + 1)

To loop through all the search result pages, we will use the two parts of the URL that we obtained earlier and form a new URL as below,

# forming the new url with the help of two parts we defined earlier
url = first_part+"-"+str(i)+"?"+second_part

The way it works is: once the first page of the search result is displayed, all the data needed is gathered and appended to the respective empty lists; then the new URL that we formed above is opened, the information from the second page is appended, and this cycle repeats. So instead of clicking on the Page Numbers at the end of each page, a new URL is formed and opened.

7. Finding Individual Elements. There are plenty of ways to find elements in selenium; some of them are using CSS Selectors, Class Names, ID value, XPATH, Tag Name, Partial Link Text, etc. To find the individual elements such as Experience, Salary, etc., we first obtain the list of all jobs on the page, and then from that list we fetch the individual elements. It can sometimes be difficult to find the elements if they do not have a unique selector or id value.

# getting job listing details
job_list = driver.find_elements_by_class_name("jobTuple.bgWhite.br4.mb-8")

We will also be using try and except statements to find if an element is missing and, if so, append an appropriate value to the list. You can see a sample code to find the elements below,

# getting the number of days before which the job was posted
try:
    days = element.find_element_by_css_selector('div.type.br2.fleft.grey').text
    Posted.append(days)
except NoSuchElementException:
    try:
        days = element.find_element_by_css_selector('div.type.br2.fleft.green').text
        Posted.append(days)
    except NoSuchElementException:
        Posted.append(None)

8. Final Data Frame.
Once the parsing of all results is completed and all the values are appended to the respective empty lists, we initialize an empty data frame and then create the columns of the data frame using the lists; the data frame can then be exported to any format needed.

# initializing empty dataframe
df = pd.DataFrame()

# assigning values to dataframe columns
df['Title'] = Title
df['Company'] = Company
df['Experience'] = Experience
df['Location'] = Location
df['Tags'] = Tags
df['Ratings'] = Ratings
df['Reviews'] = Reviews
df['Salary'] = Salary
df['Job_Type'] = Job_Type
df['Posted'] = Posted
df.to_csv("Raw_Data.csv", index=None)

Conclusion: Although it is easy to parse a few hundred job listings, it can be time-consuming if you need to scrape through tens of thousands of jobs. This was also challenging because it was difficult to find some elements due to differences in class names (e.g., the Posted element had two different CSS Selectors, as you can see in the code above). You also need to give ample time so that all the required elements are loaded, else you are bound to face a "No Such Element Exception". So, this was just one way of scraping through data; one can also use the BeautifulSoup library to do the same task, but it has its own set of advantages and disadvantages.

Note: The entire code for this project can be found on the following GitHub page: Code_File. This is also part of my Data Analysis from scratch project (In Progress), which will be updated in the same GitHub repo. This is my first blog, there is always room for improvement, and any tips and suggestions are always welcome 😊.
https://www.analyticsvidhya.com/blog/2021/07/learn-data-scraping-using-python-and-selenium/
#include <db.h>

int
DB->set_re_source(DB *db, char *re_source);

Set the underlying source file for the Recno access method. The purpose of the re_source value is to provide fast access and modification to databases that are normally stored as flat text files. If the re_source field is set, it specifies an underlying flat text file from which the database is initially loaded. Reading and writing the backing source file specified by re_source cannot be transaction-protected because it involves filesystem operations that are not part of the Db transaction methodology. For this reason, if a temporary database is used to hold the records, it is possible to lose the contents of the re_source file, for example, if the system crashes at the right instant. If a file is used to hold the database, normal database recovery on that file can be used to prevent information loss, although it is still possible that the contents of re_source will be lost if the system crashes.

The re_source file must already exist (but may be zero-length) when DB->open is called. It is not an error to specify a read-only re_source file when creating a database, nor is it an error to modify the resulting database. However, any attempt to write the changes to the backing source file using either the DB->sync or DB->close functions will fail, of course. Specify the DB_NOSYNC flag to the DB->close function to stop it from attempting to write the changes to the backing file; instead, they will be silently discarded.

For all of the previous reasons, the re_source field is generally used to specify databases that are read-only for Berkeley DB applications, and that are either generated on the fly by software tools or modified using a different mechanism -- for example, a text editor.

The DB->set_re_source interface may be used only to configure Berkeley DB before the DB->open interface is called.

The DB->set_re_source function returns a non-zero error value on failure and 0 on success.

The DB->set_re_source function may fail and return a non-zero error for the following conditions: Called after DB->open was called.
The DB->set_re_source function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB->set_re_source function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
http://pybsddb.sourceforge.net/api_c/db_set_re_source.html
How to send email using VB.NET code
By: Issac

Surprisingly, we can use the SMTP protocol and .NET source code for sending an email. SMTP stands for Simple Mail Transfer Protocol; in VB.NET we can use the System.Net.Mail namespace to send mail. We instantiate the SmtpClient class and assign the Host and Port. The default port for SMTP is 25, but it may vary between mail servers. The following example shows how to send emails from a Gmail address.

Dim SmtpServer As New SmtpClient()
Dim mail As New MailMessage()
SmtpServer.Credentials = New Net.NetworkCredential("[email protected]", "password")
SmtpServer.Port = 587
SmtpServer.Host = "smtp.gmail.com"
mail = New MailMessage()
mail.From = New MailAddress("[email protected]")
mail.To.Add("[email protected]")
mail.Subject = "Test Mail"
mail.Body = "This is for testing SMTP mail from GMAIL"
SmtpServer.Send(mail)

Comments:

1. nicely working please check it with following URL< (By: darshana at 2010-09-28 11:11:35)
2. It won't work for me ;S (By: jordi at 2011-01-21 08:22:45)
3. I get an error returned saying the SMPT server req (By: Christian Peut at 2011-04-04 04:35:51)
4. Imports System.Web Imports System.Net.Mail< (By: Naresh at 2011-05-24 07:27:12)
5. "I get an error returned saying the SMPT serv (By: Dennis at 2011-06-24 10:18:10)
6. Thanks a lot (By: praveen at 2011-10-01 09:38:02)
7. how about if you attach microsoft word documents?p (By: gian at 2012-01-06 03:11:44)
8. mail sent but not shown in sent items what (By: Hafiz Hussaim at 2012-04-14 06:55:03)
9. i cannot add attachment its saying & (By: Rintu Das at 2012-05-04 18:07:48)
10. set the property of SMTP.EnableSsl = True (By: Kowsalya at 2012-06-03 10:47:58)
11. Very nice tutorial. Hi Rintu for attachment u can (By: ras at 2013-03-14 17:19:47)
https://www.java-samples.com/showtutorial.php?tutorialid=1056
Centralizing Session Access

Storing and retrieving information from the session state is a common use case in most web applications. For example, user information might be retrieved from the database and then stored in the session when a user logs in. Following this approach, database calls that retrieve information about a user are reduced (ideally to one call only).

The Simple Way of Doing It

In small or simple applications, retrieving from and assigning values to the session are done by directly accessing the Session object, typing the key:

// Set session variable
Session["MyVar"] = "Hello, world!";

// Retrieve session variable
string myVar = null;
object sessionVar = Session["MyVar"];
if (sessionVar != null)
{
    myVar = sessionVar.ToString();
}

While accessing the session value this way works, it has the following disadvantages:

- Prone to error. Even though session keys are case-insensitive, you could still mess up the spelling of the key by accidentally adding, deleting, or modifying the characters.
- Difficult to change the name of a session variable. If instead of "MyVar" we replaced the key name with something else, we would have to look for all the places it was declared and replace it there.
- Weakly typed. Session values have the type object. Therefore, to enable operations on the session values that are meaningful to the problem domain, they have to be converted to the correct type. Similar to the previous two points, this is error prone and hard to maintain.

A Better Way

We can get rid of these disadvantages by making Session access the exclusive responsibility of a class. Below is an example of such a class.
public sealed class SessionHelper
{
    private SessionHelper() { }

    // Fully qualified namespace used here for clarity purposes
    private static System.Web.SessionState.HttpSessionState Session
    {
        get { return System.Web.HttpContext.Current.Session; }
    }

    // Session property: username (string)
    public static string Username
    {
        get
        {
            if (Session["Username"] == null)
            {
                return String.Empty;
            }
            return Session["Username"].ToString();
        }
        set { Session["Username"] = value; }
    }
}

Now the session variable is wrapped inside a property, which has an enforceable type. In addition, implementation details (such as the name of the session key or how to handle null values) are now handled by this class and not on the client, resulting in a better separation of concerns. In this post you have seen how to wrap session access in a class. I would recommend handling session variables in this manner (as opposed to direct access) in all applications regardless of size.
http://www.ojdevelops.com/2013/06/centralizing-session-access.html
\begin{code}
{-# OPTIONS_GHC -fno-implicit-prelude #-}

-- | Extract the first element of a list, which must be non-empty.
head :: [a] -> a
head (x:_) = x
head [] = badHead

badHead :: a
badHead = errorEmptyList "head"

-- This rule is useful in cases like
-- head [y | (x,y) <- ps, x==t]
{-# RULES
"head/build" forall (g::forall b.(a->b->b)->b->b) .
                head (build g) = g (\x _ -> x) badHead
"head/augment" forall xs (g::forall b. (a->b->b) -> b -> b) .
                head (augment g xs) = g (\x _ -> x) (head xs)
 #-}

-- | Extract the elements after the head of a list, which must be non-empty.
tail :: [a] -> [a]
tail (_:xs) = xs
tail [] = errorEmptyList "tail"

-- | Extract the last element of a list, which must be finite and non-empty.
last :: [a] -> a
#ifdef USE_REPORT_PRELUDE
last [x] = x
last (_:xs) = last xs
last [] = errorEmptyList "last"
#else
-- eliminate repeated cases
last [] = errorEmptyList "last"
last (x:xs) = last' x xs
  where last' y [] = y
        last' _ (y:ys) = last' y ys
#endif

-- | Return all the elements of a list except the last one.
-- The list must be finite and non-empty.
init :: [a] -> [a]
#ifdef USE_REPORT_PRELUDE
init [x] = []
init (x:xs) = x : init xs
init [] = errorEmptyList "init"
#else
-- eliminate repeated cases
init [] = errorEmptyList "init"
init (x:xs) = init' x xs
  where init' _ [] = []
        init' y (z:zs) = y : init' z zs
#endif

-- | Test whether a list is empty.
null :: [a] -> Bool
null [] = True
null (_:_) = False

-- | 'filter', applied to a predicate and a list, returns the list of
-- those elements that satisfy the predicate.
filter :: (a -> Bool) -> [a] -> [a]
filter _pred [] = []
filter pred (x:xs)
  | pred x = x : filter pred xs
  | otherwise = filter pred xs

{-# NOINLINE [0] filterFB #-}
filterFB :: (a -> b -> b) -> (a -> Bool) -> a -> b -> b
filterFB c p x r | p x = x `c` r
                 | otherwise = r

{-# RULES
"filter" [~1] forall p xs. filter p xs = build (\c n -> foldr (filterFB c p) n xs)
"filterList" [1] forall p. foldr (filterFB (:) p) [] = filter p
"filterFB" forall c p q. filterFB (filterFB c p) q = filterFB c (\x -> q x && p x)
 #-}

-- Note the filterFB rule, which has p and q the "wrong way round" in the RHS.
-- filterFB (filterFB c p) q a b
-- = if q a then filterFB c p a b else b
-- = if q a then (if p a then c a b else b) else b
-- = if q a && p a then c a b else b
-- = filterFB c (\x -> q x && p x) a b
-- I originally wrote (\x -> p x && q x), which is wrong, and actually
-- gave rise to a live bug report. SLPJ.

-- | 'scanl' is similar to 'foldl', but returns a list of successive
-- reduced values from the left:
--
-- > scanl f z [x1, x2, ...] == [z, z `f` x1, (z `f` x1) `f` x2, ...]
scanl :: (a -> b -> a) -> a -> [b] -> [a]
scanl f q ls = q : (case ls of
                      [] -> []
                      x:xs -> scanl f (f q x) xs)

-- | 'scanl1' is a variant of 'scanl' that has no starting value argument:
--
-- > scanl1 f [x1, x2, ...] == [x1, x1 `f` x2, ...]
scanl1 :: (a -> a -> a) -> [a] -> [a]
scanl1 f (x:xs) = scanl f x xs
scanl1 _ [] = []

-- foldr, foldr1, scanr, and scanr1 are the right-to-left duals of the
-- above functions.

-- | 'foldr1' is a variant of 'foldr' that has no starting value argument,
-- and thus must be applied to non-empty lists.
foldr1 :: (a -> a -> a) -> [a] -> a
foldr1 f [x] = x
foldr1 f (x:xs) = f x (foldr1 f xs)
foldr1 _ [] = errorEmptyList "foldr1"

-- | 'scanr' is the right-to-left dual of 'scanl'.
scanr :: (a -> b -> b) -> b -> [a] -> [b]
scanr _ q0 [] = [q0]
scanr f q0 (x:xs) = f x q : qs
                    where qs@(q:_) = scanr f q0 xs

-- | 'scanr1' is a variant of 'scanr' that has no starting value argument.
scanr1 :: (a -> a -> a) -> [a] -> [a]
scanr1 _ [] = []
scanr1 _ [x] = [x]
scanr1 f (x:xs) = f x q : qs
                  where qs@(q:_) = scanr1 f xs

-- | 'repeat' @x@ is an infinite list, with @x@ the value of every element.
repeat :: a -> [a]
{-# INLINE [0] repeat #-}
repeat x = xs where xs = x : xs

{-# INLINE [0] repeatFB #-}
repeatFB :: (a -> b -> b) -> a -> b
repeatFB c x = xs where xs = x `c` xs

{-# RULES
"repeat" [~1] forall x. repeat x = build (\c _n -> repeatFB c x)
"repeatFB" [1] repeatFB (:) = repeat
 #-}

-- | 'replicate' @n x@ is a list of length @n@ with @x@ the value of
-- every element.
-- It is an instance of the more general 'Data.List.genericReplicate',
-- in which @n@ may be of any integral type.
{-# INLINE replicate #-}
replicate :: Int -> a -> [a]
replicate n x = take n (repeat x)

-- | 'cycle' ties a finite list into a circular one, or equivalently,
-- the infinite repetition of the original list. It is the identity
-- on infinite lists.
cycle :: [a] -> [a]
cycle [] = error "Prelude.cycle: empty list"
cycle xs = xs' where xs' = xs ++ xs'

-- | 'takeWhile', applied to a predicate @p@ and a list @xs@, returns the
-- longest prefix (possibly empty) of @xs@ of elements that satisfy @p@:
--
-- > takeWhile (< 3) [1,2,3,4,1,2,3,4] == [1,2]
-- > takeWhile (< 9) [1,2,3] == [1,2,3]
-- > takeWhile (< 0) [1,2,3] == []
takeWhile :: (a -> Bool) -> [a] -> [a]
takeWhile _ [] = []
takeWhile p (x:xs)
  | p x = x : takeWhile p xs
  | otherwise = []

-- | 'dropWhile' @p xs@ returns the suffix remaining after 'takeWhile' @p xs@:
--
-- > dropWhile (< 3) [1,2,3,4,5,1,2,3] == [3,4,5,1,2,3]
-- > dropWhile (< 9) [1,2,3] == []
-- > dropWhile (< 0) [1,2,3] == [1,2,3]
dropWhile :: (a -> Bool) -> [a] -> [a]
dropWhile _ [] = []
dropWhile p xs@(x:xs')
  | p x = dropWhile p xs'
  | otherwise = xs

-- | 'take' @n@, applied to a list @xs@, returns the prefix of @xs@
-- of length @n@, or @xs@ itself if @n > 'length' xs@.
-- It is an instance of the more general 'Data.List.genericTake',
-- in which @n@ may be of any integral type.
take :: Int -> [a] -> [a]

-- | 'drop' @n xs@ returns the suffix of @xs@ after the first @n@ elements,
-- or @[]@ if @n > 'length' xs@.
-- It is an instance of the more general 'Data.List.genericDrop',
-- in which @n@ may be of any integral type.
drop :: Int -> [a] -> [a]

-- | 'splitAt' @n xs@ returns a tuple where the first element is the prefix
-- of @xs@ of length @n@ and the second element is the remainder of the list.
-- It is an instance of the more general 'Data.List.genericSplitAt',
-- in which @n@ may be of any integral type.
splitAt :: Int -> [a] -> ([a],[a])
#ifdef USE_REPORT_PRELUDE
splitAt n xs = (take n xs, drop n xs)
#else
splitAt                 :: Int -> [a] -> ([a],[a])
#ifdef USE_REPORT_PRELUDE
splitAt n xs            =  (take n xs, drop n xs)
#else
splitAt (I# n#) ls
  | n# <# 0#    = ([], ls)
  | otherwise   = splitAt# n# ls
    where
        splitAt# :: Int# -> [a] -> ([a], [a])
        splitAt# 0# xs     = ([], xs)
        splitAt# _  xs@[]  = (xs, xs)
        splitAt# m# (x:xs) = (x:xs', xs'')
          where
            (xs', xs'') = splitAt# (m# -# 1#) xs
#endif /* USE_REPORT_PRELUDE */

-- | 'span', applied to a predicate @p@ and a list @xs@, returns a tuple where
-- the first element is the longest prefix (possibly empty) of @xs@ of elements
-- that satisfy @p@ and the second element is the remainder of the list.
-- 'span' @p xs@ is equivalent to @('takeWhile' p xs, 'dropWhile' p xs)@
span                    :: (a -> Bool) -> [a] -> ([a],[a])
span _ xs@[]            =  (xs, xs)
span p xs@(x:xs')
         | p x          =  let (ys,zs) = span p xs' in (x:ys,zs)
         | otherwise    =  ([],xs)

-- | 'break', applied to a predicate @p@ and a list @xs@, returns a tuple where
-- the first element is the longest prefix (possibly empty) of @xs@ of elements
-- that do not satisfy @p@ and the second element is the remainder of the list.
-- 'break' @p@ is equivalent to @'span' ('not' . p)@.
break                   :: (a -> Bool) -> [a] -> ([a],[a])
#ifdef USE_REPORT_PRELUDE
break p                 =  span (not . p)
#else
-- HBC version (stolen)
break _ xs@[]           =  (xs, xs)
break p xs@(x:xs')
           | p x        =  ([],xs)
           | otherwise  =  let (ys,zs) = break p xs' in (x:ys,zs)
#endif

-- | 'reverse' @xs@ returns the elements of @xs@ in reverse order.
-- @xs@ must be finite.
reverse                 :: [a] -> [a]
#ifdef USE_REPORT_PRELUDE
reverse                 =  foldl (flip (:)) []
#else
reverse l =  rev l []
  where
    rev []     a = a
    rev (x:xs) a = rev xs (x:a)
#endif

-- | 'and' returns the conjunction of a Boolean list.  For the result to be
-- 'True', the list must be finite; 'False', however, results from a 'False'
-- value at a finite index of a finite or infinite list.
and                     :: [Bool] -> Bool

-- | 'or' returns the disjunction of a Boolean list.  For the result to be
-- 'False', the list must be finite; 'True', however, results from a 'True'
-- value at a finite index of a finite or infinite list.
or                      :: [Bool] -> Bool
#ifdef USE_REPORT_PRELUDE
and                     =  foldr (&&) True
or                      =  foldr (||) False
#endif

-- | Applied to a predicate and a list, 'any' determines if any element
-- of the list satisfies the predicate.
any                     :: (a -> Bool) -> [a] -> Bool

-- | Applied to a predicate and a list, 'all' determines if all elements
-- of the list satisfy the predicate.
all                     :: (a -> Bool) -> [a] -> Bool
#ifdef USE_REPORT_PRELUDE
any p                   =  or . map p
all p                   =  and . map p
#endif

-- | 'elem' is the list membership predicate, usually written in infix form,
-- e.g., @x \`elem\` xs@.
elem                    :: (Eq a) => a -> [a] -> Bool

-- | 'notElem' is the negation of 'elem'.
notElem                 :: (Eq a) => a -> [a] -> Bool

-- | 'lookup' @key assocs@ looks up a key in an association list.
lookup                  :: (Eq a) => a -> [(a,b)] -> Maybe b
lookup _key []          =  Nothing
lookup  key ((x,y):xys)
    | key == x          =  Just y
    | otherwise         =  lookup key xys

-- | Map a function over a list and concatenate the results.
concatMap               :: (a -> [b]) -> [a] -> [b]
concatMap f             =  foldr ((++) . f) []

-- | 'zipWith' generalises 'zip' by zipping with the function given
-- as the first argument, instead of a tupling function.
zipWith                 :: (a->b->c) -> [a]->[b]->[c]
zipWith f (a:as) (b:bs) =  f a b : zipWith f as bs
zipWith _ _      _      =  []

{-# INLINE [0] zipWithFB #-}
zipWithFB :: (a -> b -> c) -> (d -> e -> a) -> d -> e -> b -> c
zipWithFB c f x y r = (x `f` y) `c` r

{-# RULES
"zipWith"     [~1] forall f xs ys. zipWith f xs ys = build (\c n -> foldr2 (zipWithFB c f) n xs ys)
"zipWithList" [1]  forall f.       foldr2 (zipWithFB (:) f) [] = zipWith f
 #-}
\end{code}

\begin{code}
-- | The 'zipWith3' function takes a function which combines three
-- elements, as well as three lists and returns a list of their point-wise
-- combination, analogous to 'zipWith'.
zipWith3                :: (a->b->c->d) -> [a]->[b]->[c]->[d]
zipWith3 z (a:as) (b:bs) (c:cs)
                        =  z a b c : zipWith3 z as bs cs
zipWith3 _ _ _ _        =  []

-- | 'unzip' transforms a list of pairs into a list of first components
-- and a list of second components.
unzip                   :: [(a,b)] -> ([a],[b])
{-# INLINE unzip #-}
unzip                   =  foldr (\(a,b) ~(as,bs) -> (a:as,b:bs)) ([],[])

-- | The 'unzip3' function takes a list of triples and returns three
-- lists, analogous to 'unzip'.
unzip3                  :: [(a,b,c)] -> ([a],[b],[c])
{-# INLINE unzip3 #-}
unzip3                  =  foldr (\(a,b,c) ~(as,bs,cs) -> (a:as,b:bs,c:cs))
                                 ([],[],[])
\end{code}
http://hackage.haskell.org/packages/archive/base/4.0.0.0/doc/html/src/GHC-List.html#filter
crawl-003
refinedweb
1,195
74.63
Timeline Mar 28, 2008: - 9:37 PM Ticket #2816 (trac.ini setting for formula text size) created by - I'm not fluent in latex (yet), but the \Large keyword doesn't appear … - 4:16 PM TracBlogPlugin edited by - added into about not trunk not working currently (diff) - 4:12 PM Ticket #2814 ("Blog" link in main navigation gives "Error: Not Found") closed by - invalid: This is entirely expected behavior as the port to 0.11 isn't finished. … -. … - 1:47 PM Ticket #2814 ("Blog" link in main navigation gives "Error: Not Found") created by - * Running Trac 0.11b2 and TracBlog-0.3dev_r3371-py2.4 * Trac is … - 1:13 PM Ticket #2758 (SPF fails on trac-hacks.org) closed by - worksforme: Works for me: […] - 1:12 PM Ticket #2099 (trac-hacks.org authentication fails when using https) closed by - fixed: This should be working now. - 1:10 PM Ticket #1481 (no code found, neither in the zip file nor in the subversion repository) closed by - invalid - 1:10 PM Ticket #1243 (Smarter logo and background image) closed by - wontfix - 1:09 PM Ticket #1168 (Error while uploading image) closed by - fixed: Assuming fixed? - 12:41 PM Ticket #2813 (WikiRename 1.2 does not work correctly with TracTags 0.4) created by - If a page which has tags is renamed using WikiRename it does not carry … - … - 10:45 AM Changeset [3423] by - ticketboxmacro/0.10/TicketBox.py - ticketboxmacro/0.11/TicketBox.py - ticketboxmacro/0.8 - ticketboxmacro/0.9/TicketBox.py Fix to use ReportModule.sql_sub_vars() for the issue #2810. Old code is not good for query like "... LIKE '%$USER%'" on 0.10.x or 0.11 on replacing variables. Close #2810. And also drop supporting for trac 0.8 or before. 
- 10:29 AM scratcher created by - New user scratcher registered - 7:21 AM Ticket #2812 (test summary) created by - test description - 3:54 AM Ticket #2811 (error while loading large file) created by - Downloader seems to be working fantastically on smaller files (<20mb) … - 12:56 AM Ticket #2810 (fails when used with report containing LIKE operator) created by - Reports which contain something like the following work in the reports … - 12:07 AM Ticket #2809 (Port to 0.11) created by - This isn't pacakged up all nice or anything, but if you replace api.py … Mar 27, 2008: - 10:34 PM Ticket #2808 (Could this be extended for general document control/reviewing?) created by - Hi, I am new to Trac, looking at different system to migrate slowly … - 1:09 PM Ticket #2807 (resync breaks if first commit is empty) created by - If the first commit is empty, e.g. by importing using git-svn a … - 9:24 AM Changeset [3422] by - gitplugin/0.11/setup.py declare namespace_packages meta-data (addresses #2806) - 8:46 AM Ticket #2806 (Need to declare namespace packages) created by - See … - 7:04 AM Ticket #2803 (nikoniko plugin error-Parent module 'nikoniko' not loaded) reopened by - i selected the error Component. - 12:11 AM gregmac created by - New user gregmac registered: […] - 8:33 AM AuthOpenIdPlugin edited by - (diff) -:22 AM AuthOpenIdPlugin edited by - …
https://trac-hacks.org/timeline?from=2008-03-28T13%3A33%3A02%2B01%3A00&precision=second
Working with Unit Tests in a Project or Solution

Discover unit tests in a solution

dotCover adds the Unit Test Explorer window to Visual Studio (Control+Alt+T). Using this window, you can explore unit tests and run, debug or cover them.

For each test framework, you can prefer either speed or accuracy for discovering unit tests after the build by choosing one of the following options on the corresponding page of dotCover options (Alt+R, O):

- Metadata (default)
  In this mode, dotCover analyzes the build artifact without launching the test runner. As tests are defined using attributes, dotCover can quickly scan the metadata of managed artifacts to find most tests in the project. However, it may fail to find tests that require running some special hooks of the framework to define their parameters. This is the fastest way of discovering tests.

- Test runner
  In this mode, dotCover launches the framework runner in the discovery mode on the build artifact, and then uses the results from the runner. Using the framework runner can take considerably longer to analyze the project, but the list of discovered tests will be complete in most cases.

In the unit test explorer, you can:

- Explore tests in the solution: browse all unit tests in a tree view, search tests and filter by a substring, and regroup unit tests by project, namespace, category, and so on.
- Navigate to the source code of any test or test class by double-clicking it in the view.
- Run, debug or cover selected tests.
- Create unit test sessions from selected tests and test classes, and/or add selected items to the current test session.

Run, debug or cover unit tests in a project or solution

You can run, debug or cover unit tests using the Run Unit Tests (Control+T R), Debug Unit Tests (Control+T D) or Cover Unit Tests (Control+T H) commands on the toolbar.

To run, debug or cover tests from Solution Explorer or Class View, select one or more items (classes, files, folders, projects) that contain tests, and use the Run Unit Tests (Control+T R), Debug Unit Tests (Control+T D) or Cover Unit Tests (Control+T H) commands, which are also available in the main menu and in the context menu.

To run, debug or cover all tests in the solution, use the corresponding command in the main menu or press Control+T L / Control+T K.

Whichever way you choose to run, debug or cover tests, you can repeat execution or coverage analysis of the tests that you executed last by pressing Control+T T or choosing the corresponding command in the menu.
https://www.jetbrains.com/help/dotcover/Unit_Testing_in_Solution.html
VSTS comes with many integrated tools for developers and testers, as well as for architects and managers. The unit testing tool that ships with Visual Studio 2005 enables developers to generate test classes and methods. Team Test does not provide tools only for developers; it provides tools for testers as well. Another feature of Team Test is the ability to load data from a database to test methods. Here we will see an example of creating unit tests, including tests for exceptions. Let me walk through how to create unit tests.

Let's take the example of creating an assembly with an Employees class and a constructor for creating an employee with a first name, last name and date of birth. Create a new class library project named "VSTSEmp". You can see the checkbox option "Create directory for solution". This checkbox is checked by default, which enables us to create the test project in a separate directory. If the checkbox is deselected, the test project will be created in the VSTSEmp project directory itself. Let's leave the checkbox selected. The new project contains a class file with the default name "Class1.cs". Let's rename the class file to "Employees.cs", which will also rename the class to Employees. Include a constructor for the Employees class with three parameters: firstName as string, lastName as string and dateOfBirth as DateTime.

Now we will see how to create the test class and test method for the Employees class and its constructor. Right-click anywhere on the Employees class and choose the option "Create Unit Tests...". Selecting this menu option will display a dialog for generating the unit tests as a separate project. Let's give the test project the name "VSTSEmp.Test". Enter this value in the dialog box. Make sure the "Create a new Visual C# test project..." option is selected in the output project options, as we are using C# for our tests.
The test project is created with 2 default files, "AuthoringTests.txt" and "EmployeesTest.cs". The generated test project contains references to the "VSTSEmp" project against which the test project is created, and to Microsoft.VisualStudio.QualityTools.UnitTestFramework, which is the testing framework assembly for the test engine to execute the test project. Following is the generated test class and the test method for the constructor with the default implementation.

Listing 1

// The following code was generated by Microsoft Visual Studio 2005.
// The test owner should check each test for validity.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Text;
using System.Collections.Generic;
using VSTSEmp;

namespace VSTSEmp.Test
{
    /// <summary>
    ///This is a test class for VSTSEmp.Employees and is intended
    ///to contain all VSTSEmp.Employees Unit Tests
    ///</summary>
    [TestClass()]
    public class EmployeesTest
    {
        private TestContext testContextInstance;

        /// <summary>
        ///Gets or sets the test context which provides
        ///information about and functionality for the current test run.
        ///</summary>
        public TestContext TestContext
        {
            get { return testContextInstance; }
            set { testContextInstance = value; }
        }

        ///A test for Employees (string, string, DateTime)
        [TestMethod()]
        public void ConstructorTest()
        {
            string firstName = null; // TODO: Initialize to an appropriate value
            string lastName = null; // TODO: Initialize to an appropriate value
            DateTime dateOfBirth = new DateTime(); // TODO: Initialize to an appropriate value
            Employees target = new Employees(firstName, lastName, dateOfBirth);
            // TODO: Implement code to verify target
            Assert.Inconclusive("TODO: Implement code to verify target");
        }
    }
}

Asserts

Here the checks are done using the Assert.AreEqual<T>() method. The Assert class also supports an AreEqual() method without generics. The generic version is always preferred because it will verify at compile time that the types match.

Let's run the test with whatever implementation we have. The implementation is not yet complete, but let's proceed to testing.
We need to run the test project to run all the tests. Just right-click the test project and make it the startup project. Then begin running the test project using the Debug -> Start (F5) menu item.

After running the project, the Test Results window appears. The result shows one failed test and nothing else, because we have only one method for testing. Initially the result will be shown as "Pending", then "In progress", and then it will display "Failed". You can add/remove the default columns in the test result window using the menu options.

To see additional details on the ConstructorTest result, just double-click on the error message. The ConstructorTest [Results] window opens up with more information on the error.

Now let's test for exceptions using the ExpectedExceptionAttribute. The code below shows two test methods which check for a null or empty string value for the employee first name. Passing a null or empty value to the constructor throws an exception of type "ArgumentException".

[TestMethod]
[ExpectedException(typeof(ArgumentException), "Null value for Employee First Name is not allowed")]
public void EmployeeFirstNameNullInConstructor()
{
    Employees Employee = new Employees(null, "Kumar", System.DateTime.Today);
}

[TestMethod]
[ExpectedException(typeof(ArgumentException), "Empty value for Employee First Name is not allowed")]
public void EmployeeFirstNameEmptyInConstructor()
{
    Employees Employee = new Employees("", "Kumar", System.DateTime.Today);
}

private string _firstName;

public string FirstName
{
    get { return _firstName; }
    set
    {
        if (value == null || value.Trim() == string.Empty)
        {
            throw new ArgumentException("Employee First Name cannot be null or Empty");
        }
        _firstName = value;
    }
}

Now run the test and make sure the implementation is correct.

©2016 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/article/unit-tests-in-visual-studio-2005/
Today I released BFC 1.0!

BFC: Brainf**k Compilers

bfc.rb is a compiler written in Ruby, which can compile BF code to Ruby, C, Haskell, Scheme and LLVM.

USAGE OF BFC

$ ./bfc.rb --help
$ ./bfc.rb [-v|--version]
$ ./bfc.rb [-r|--ruby] helloworld.bf > helloworld.rb
$ ./bfc.rb [-c|--c] helloworld.bf > helloworld.c
$ ./bfc.rb [-h|--haskell] helloworld.bf > helloworld.hs
$ ./bfc.rb [-l|--llvm] helloworld.bf > helloworld.ll
$ ./bfc.rb [-s|--scheme] helloworld.bf > helloworld.scm
$ cat helloworld.bf | ./bfc.rb --ruby
$ ./bfc.rb [-r|--ruby|-c|--c|-h|--haskell|-l|--llvm] helloworld.bf --run
$ ./bfc.rb [-c|--c] helloworld.bf --without-while > helloworld.c
$ spec ./bfc.rb

THE BRAINF**K LANGUAGE

According to Wikipedia, the programming language Brainf**k has the following 8 tokens, each of which has its own semantics. Here is the equivalent transformation from Brainf**k to C. bfc.rb converts BF code to each language mostly based on this table.

C Translation Table in bfc.rb:

',' => '*h=getchar();',
'.' => 'putchar(*h);',
'-' => '--*h;',
'+' => '++*h;',
'<' => '--h;',
'>' => '++h;',
'[' => 'while(*h){',
']' => '}'

Ruby Translation Table in bfc.rb:

',' => 'a[i]=STDIN.getc.ord',
'.' => 'STDOUT.putc(a[i])',
'-' => 'a[i]-=1',
'+' => 'a[i]+=1',
'<' => 'i-=1',
'>' => 'i+=1',
'[' => 'while a[i]!=0',
']' => 'end'

They are straightforward enough not to need detailed explanation. In the same way, we can write translation tables for most programming languages, except for special languages including Haskell and assembly languages.

TRANSLATING TO HASKELL

Translating BF to Haskell needs two tricks. Haskell was difficult for handling BF because:

- Variables in Haskell are not allowed to be re-assigned: ++h is impossible
- There's no feature like a while statement

So I used the IO Monad with rebinding same-name variables, and defined a while function.

Haskell Translation Table in bfc.rb:

',' => 'tmp <- getChar; h <- return $ update (\_ -> ord tmp) i h;',
'.' => 'putChar $ chr $ h !! i;',
'-' => 'h <- return $ update (subtract 1) i h;',
'+' => 'h <- return $ update (+ 1) i h;',
'<' => 'i <- return $ i - 1;',
'>' => 'i <- return $ i + 1;',
'[' => '(h, i) <- while (\(h, i) -> (h !! i) /= 0) (\(h, i) -> do {',
']' => 'return (h, i);}) (h, i);'

And the definition of while is:

while cond action x
    | cond x    = action x >>= while cond action
    | otherwise = return x

This is short, but it can handle loops that change values in a larger scope, like C's while.

TRANSLATING TO C WITHOUT WHILE STATEMENTS

Unlike the effort on Haskell, it is impossible to write a simple translation table for C when I can use only goto for control flow instead of while statements. So I made the compiler keep label counters to generate labels for goto.

Excerpt from bfc.c:

when ','; '*h=getchar();'
when '.'; 'putchar(*h);'
when '-'; '--*h;'
when '+'; '++*h;'
when '<'; '--h;'
when '>'; '++h;'
when '['; "do#{counter += 1}:"
when ']'
  "if (*h != 0) goto do#{counter}; else goto end#{counter};" <<
  "end#{counter}:"
end

TRANSLATING TO LLVM

LLVM Assembly Language is similar to Haskell to the extent of prohibiting re-assignments, and not similar to Haskell to the extent of having no do syntax for monads. So I decided to use pointers to store values. Also, LLVM needs many temporary variables which cannot be re-assigned, so I used counters again to name temporary constants. The translation table with counters is too big to paste here, so I'll just show the definition of '+', which means '++*h' in C.

when '+'
  a = tc += 1; b = tc += 1; c = tc += 1; d = tc += 1
  "%tmp#{a} = load i32* %i, align 4\n" <<
  "%tmp#{b} = getelementptr [1024 x i8]* %h, i32 0, i32 %tmp#{a}\n" <<
  "%tmp#{c} = load i8* %tmp#{b}, align 1\n" <<
  "%tmp#{d} = add i8 1, %tmp#{c}\n" <<
  "store i8 %tmp#{d}, i8* %tmp#{b}, align 1\n"

(where tc is an abbreviation of "tmp counter".)
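For illustration, the whole table-driven pipeline can be condensed into a few lines of Ruby. This is a hypothetical, stripped-down sketch rather than bfc.rb's actual code: the names TABLE and bf_to_c, and the fixed 30000-cell tape, are inventions of this example.

```ruby
# A made-up, minimal table-driven BF -> C translator in the spirit of bfc.rb.
TABLE = {
  ',' => '*h=getchar();',
  '.' => 'putchar(*h);',
  '-' => '--*h;',
  '+' => '++*h;',
  '<' => '--h;',
  '>' => '++h;',
  '[' => 'while(*h){',
  ']' => '}'
}.freeze

def bf_to_c(src)
  # Characters outside the 8 BF tokens are comments, so TABLE[ch] is nil
  # for them and compact drops them.
  body = src.chars.map { |ch| TABLE[ch] }.compact.join("\n")
  "#include <stdio.h>\nint main(void){\nchar a[30000]={0};char*h=a;\n#{body}\nreturn 0;}\n"
end

puts bf_to_c('++[->+<].')
```

Every token maps to exactly one C statement, so the translator is just a lookup plus a join; all the real work is done by the C compiler afterwards.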
$ ./bfc.rb --llvm ./helloworld.bf | wc -l 2842 But once you optimize the assembly by opt command of LLVM, the line of code will become shorter and more succinct. SUMMARY BFC supports compiling BF to the following language. - Ruby - C - Haskell - LLVM Assembly Language - Scheme In some languages it was easy to write the translator, but Haskell and LLVM was tough for me. If I have a plenty of time, I'd like to try these challenges: - Compiling to Erlang - Compiling to IA-32 Assembly Language - Compiling to LLVM Bitcode - More Spec! - Benchmark Suite Anyway, I recommend you to take a look at the BFC. Enjoy! s/Excerpt from bfc.c:/Excerpt from bfc.rb:/ (in TRANSLATING TO C WITHOUT WHILE STATEMENTS) thanks!!
http://ujihisa.blogspot.com/2009/12/bfc-brainfk-compilers.html
Hi,

I'm using POI HSSF version poi-bin-2.0-RC1-20031102, Excel 97/XP.

I'm trying to use a nested IF formula in Excel which is used for writing a string based on a cell value. This cell value is also a formula of division ("A2/A3"). When I'm using a simple IF it works, but when I use a nested IF then a #VALUE! comes up. When I open the sheet in Excel, go to the cell and just press Enter on the formula line, then I get the proper value. What am I doing wrong?

I have seen that sometimes it is not necessarily connected to the fact that the division cell is also a formula, but to the fact that the cell is a float and not an integer; sometimes when I use an integer in the IF formula it works. Moreover, when using % in a formula the parser fails.

Source code is:

import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFCellStyle;
import org.apache.poi.hssf.usermodel.HSSFDataFormat;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import java.io.FileOutputStream;

/**
 * A Writer which writes to XLS file with the #VALUE! problem
 */
public class TestXLSWriter {

    public static final int COLUMN_A = 0;
    public static final int COLUMN_B = 1;
    public static final int COLUMN_C = 2;
    public static final int COLUMN_D = 3;

    /**
     * Creates a new demo.
     */
    public TestXLSWriter() {
    }

    public void write() throws Exception {
        HSSFWorkbook wb = createTestWorkbook();
        FileOutputStream out = new FileOutputStream("test.xls");
        wb.write(out);
        out.close();
    }

    private HSSFWorkbook createTestWorkbook() throws Exception {
        HSSFWorkbook wb = new HSSFWorkbook();
        HSSFSheet sheet = wb.createSheet("Test Sheet");
        HSSFRow row;
        HSSFCell cell;

        // Create a row and put some cells in it. Rows are 0 based.
        row = sheet.createRow((short)0);

        // Create a cell
        cell = row.createCell((short)COLUMN_A);
        cell.setCellValue(50);

        cell = row.createCell((short)COLUMN_B);
        cell.setCellValue(100);

        cell = row.createCell((short)COLUMN_C);
        cell.setCellFormula("A1/B1");

        // Although problem occurs with or without representing
        // fraction using percent style I use it in my
        // program and that's why I put it in the test.
        HSSFCellStyle style = wb.createCellStyle();
        style.setDataFormat(HSSFDataFormat.getBuiltinFormat("0%"));
        cell.setCellStyle(style);

        cell = row.createCell((short)COLUMN_D);

        // Here is the problem :
        // basically I want 0 if C1 < 0.3, 2 if C1 > 0.8 and 1 if in between.
        // In real life I will turn 0,1,2 to "Failed", "OK", "GOOD"
        // why does this line produce #VALUE! ???
        // However when I enter excel 97/XP and click inside this value
        // it works.
        // Moreover, I know that for sure the nested IF is the problem.
        cell.setCellFormula("IF(C1<0.3, 0, IF(C1>0.8, 2, 1))");

        // Other setCellFormulas that work are :
        // without nested IF it works
        //cell.setCellFormula("IF(C1<0.3, 0, 1)");
        // if I try with 30% the parser fails.
        // cell.setCellFormula("IF(C1<30%, 0, IF(C1>80%, 2, 1))");

        return wb;
    }

    public static void main (String[] args) throws Exception {
        System.out.println("DEBUG: hello");
        TestXLSWriter w = new TestXLSWriter();
        w.write();
    }
}

Thanks.

Created attachment 9248 [details]
the test file in java

Created attachment 9249 [details]
Resulted Excel File (Using Excel 97)

the result xls with #VALUE!

Created attachment 10961 [details]
Proposed fix for nested if statements #VALUE

Created attachment 10962 [details]
The REAL proposed fix for nested if #VALUE problem

The problem is that cell references in the nested ifs were not getting their Token classes set properly, because the recursion would bail out early when it reached nodes that were not AbstractFunctionPtgs. In fact, we want to recurse into the entire tree to push out this token munging stuff.
NOTE: The second patch file i submitted is the one i am proposing. The other one is just cruft. Sorry. hmm sounds and looks good enough :). gonna apply and check out unit tests. Patch applied
https://bz.apache.org/bugzilla/show_bug.cgi?id=24925
A Unity ID allows you to buy and/or subscribe to Unity products and services, shop in the Asset Store and participate in the Unity community.

Rays from the sun under water on the ground. Google "caustics". I believe you don't have it either, after looking at the screenshots. Thanks for sharing. One question: do you have caustics?

I had the same problem (black screen) when using Gaia, Aquas and Enviro. I removed something that Gaia adds and it's working, don't remember what...

Hi sir, I have a small problem with your water, after importing I get this warning: 1)...

exorakhilas Thank you so much !!!!

I think buygamescode . com is a bot and someone from the Unity staff should look into it.

Nice idea :) Your game looks really good, try to get some sites to review it.

Great looking asset. However not on the Unity Asset Store :(

Love the art work :)

Nice graphics, didn't download it, too many flappy clones out there. Anyway good luck for a first game imho.

Great job! Love the graphics, later today I am going to download it and give it a try.

Great trailer!

imphy thank you for your reply. I made it work :) needed some tweaking but it's working 95% :)

It's on sale, nice :). Just before I buy, I have a question: will this work with an orthographic main camera? And can I rotate the background sphere (I...

Thanks for the eCPM :) I will give Gameads a try when my game is finished.

lol :) didn't see that coming, thanks :) works like a charm :)

Hi Jerots, how to call this function via PlayMaker (have a problem with passing Transform trans)? Also what script needs to be attached to the prefab?...

The problem was velocity, using UnityEngine; using System.Collections; public class test : MonoBehaviour { void OnSpawned() {...

Thanks for the answer. I have 3.0 now and I am afraid to go higher :) Also Jerotas, I have a problem with pooling. When I use triggered spawner on...

Hi Jerotas, I was wondering, is there a way to despawn an object via code? I am using 3.0 (didn't upgrade cause I don't want to lose the waves configuration...
https://forum.unity.com/search/111940547/
Earlier today, i.e. a few minutes ago, I was working on my latest guinea pig, TiThess. It's an events guide for my city, and the latest project I'm trying everything on. I like having a project I can try new things on, as it helps keep my skills sharp.

As I was testing page load times with the excellent Web Page Test, trying to get them down to the absolute minimum, I was getting an F in time-to-first-byte. This is very odd, because the whole site is supposed to be cached, so I was wondering whether the cache is doing something wrong and slowing page generation down. To make sure, I needed a simple way to show how long page generation took, like the old "page generated in X seconds" footer that was all the rage with PHP sites way back when. Here's how I did it:

Displaying the page generation time

As I didn't want to be cheesy, I wanted the output to be available, but discreet. A header is the best place for this, so all I needed was something to calculate how long the response took to put together. No better way to do this than Django middleware. Here's the class that does the entire thing (put it in a file called something like stats_middleware.py):

import time


class StatsMiddleware(object):
    def process_request(self, request):
        "Store the start time when the request comes in."
        request.start_time = time.time()

    def process_response(self, request, response):
        "Calculate and output the page generation duration."
        # Get the start time from the request and calculate how long
        # the response took.
        duration = time.time() - request.start_time

        # Add the header.
        response["X-Page-Generation-Duration-ms"] = int(duration * 1000)
        return response

That's all there is to it. Just store the time when the request comes in, and retrieve it later. To install the middleware above, just add it to your settings.py:

MIDDLEWARE_CLASSES = (
    'project.stats_middleware.StatsMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    ...
)

Thanks to the magic of the Django middleware processing cycle, adding the class above first in our MIDDLEWARE_CLASSES guarantees that it will be the first thing that runs when the request comes in and the last thing that runs when the response goes out. This allows us to measure the full generation time, including any other middleware. Conversely, if you want to measure only the time it took to run your specific view, and not any middleware, just move the above entry to the end of the MIDDLEWARE_CLASSES list.

Let me know in the comments below or on Twitter if you found this useful. Kisses!
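P.S. As a quick sanity check, the timing logic above can be exercised without a Django project at all, by feeding the middleware stand-in objects. FakeRequest and the dict-based FakeResponse below are inventions of this sketch, not Django classes:

```python
import time


class FakeRequest:
    """Bare object standing in for Django's HttpRequest."""
    pass


class FakeResponse(dict):
    """A dict is enough to stand in for a response's header mapping."""
    pass


class StatsMiddleware(object):
    def process_request(self, request):
        request.start_time = time.time()

    def process_response(self, request, response):
        duration = time.time() - request.start_time
        response["X-Page-Generation-Duration-ms"] = int(duration * 1000)
        return response


mw = StatsMiddleware()
req = FakeRequest()
mw.process_request(req)
time.sleep(0.01)  # stand-in for the time the view would take
resp = mw.process_response(req, FakeResponse())
print(resp["X-Page-Generation-Duration-ms"])
```

The printed number is the simulated generation time in milliseconds, exactly what the header would carry in a real request cycle.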
https://www.stavros.io/posts/django-page-load-time/
The empirical approach is to program the competing algorithms and try them on different instances with the help of a computer. The theoretical approach is to determine mathematically the quantity of resources needed by each algorithm as a function of the size of the instances considered. The resources are computing time and storage space, computing time being the more critical of the two. The size of an instance is any integer that in some way measures the number of components in an instance. We usually consider the worst case, i.e. for a given instance size we consider those instances which require the most time. The average behavior of an algorithm is much harder to analyze.

We analyze an algorithm in terms of the number of elementary operations that are involved. An elementary operation is one whose execution time is bounded by a constant for a particular machine and programming language. Thus, within a multiplicative constant, it is the number of elementary operations executed that matters in the analysis and not the exact time. Since the exact time for an elementary operation is unimportant, we say that an elementary operation can be executed at unit cost.

We use the "Big O" notation to describe the execution time of algorithms. The "Big O" notation gives the asymptotic execution time of an algorithm. Algorithms can be classified using the "Big O" notation.

sum = 0
for item in a:
    sum = sum + item

The number of additions depends on the length of the array. Hence the run time is O(N).
http://www.cs.utexas.edu/~mitra/csSpring2013/cs313/lectures/algo.html
CC-MAIN-2015-35
refinedweb
307
54.73
Table Of Contents Kivy Namespaces¶ New in version 1.9.1. Warning This code is still experimental, and its API is subject to change in a future version. The KNSpaceBehavior mixin class provides namespace functionality for Kivy objects. It allows kivy objects to be named and then accessed using namespaces. KNSpace instances are the namespaces that store the named objects in Kivy ObjectProperty instances. In addition, when inheriting from KNSpaceBehavior, if the derived object is named, the name will automatically be added to the associated namespace and will point to a proxy_ref of the derived object. Basic examples¶ By default, there’s only a single namespace: the knspace namespace. The simplest example is adding a widget to the namespace: from kivy.uix.behaviors.knspace import knspace widget = Widget() knspace.my_widget = widget This adds a kivy ObjectProperty with rebind=True and allownone=True to the knspace namespace with a property name my_widget. And the property now also points to this widget. This can be done automatically with: class MyWidget(KNSpaceBehavior, Widget): pass widget = MyWidget(knsname='my_widget') Or in kv: <MyWidget@KNSpaceBehavior+Widget> MyWidget: knsname: 'my_widget' Now, knspace.my_widget will point to that widget. When one creates a second widget with the same name, the namespace will also change to point to the new widget. 
E.g.: widget = MyWidget(knsname='my_widget') # knspace.my_widget now points to widget widget2 = MyWidget(knsname='my_widget') # knspace.my_widget now points to widget2 Setting the namespace¶ One can also create ones own namespace rather than using the default knspace by directly setting KNSpaceBehavior.knspace: class MyWidget(KNSpaceBehavior, Widget): pass widget = MyWidget(knsname='my_widget') my_new_namespace = KNSpace() widget.knspace = my_new_namespace Initially, my_widget is added to the default namespace, but when the widget’s namespace is changed to my_new_namespace, the reference to my_widget is moved to that namespace. We could have also of course first set the namespace to my_new_namespace and then have named the widget my_widget, thereby avoiding the initial assignment to the default namespace. Similarly, in kv: <MyWidget@KNSpaceBehavior+Widget> MyWidget: knspace: KNSpace() knsname: 'my_widget' Inheriting the namespace¶ In the previous example, we directly set the namespace we wished to use. In the following example, we inherit it from the parent, so we only have to set it once: <MyWidget@KNSpaceBehavior+Widget> <MyLabel@KNSpaceBehavior+Label> <MyComplexWidget@MyWidget>: knsname: 'my_complex' MyLabel: knsname: 'label1' MyLabel: knsname: 'label2' Then, we do: widget = MyComplexWidget() new_knspace = KNSpace() widget.knspace = new_knspace The rule is that if no knspace has been assigned to a widget, it looks for a namespace in its parent and parent’s parent and so on until it find one to use. If none are found, it uses the default knspace. When MyComplexWidget is created, it still used the default namespace. However, when we assigned the root widget its new namespace, all its children switched to using that new namespace as well. So new_knspace now contains label1 and label2 as well as my_complex. 
If we had first done:

widget = MyComplexWidget()
new_knspace = KNSpace()
knspace.label1.knspace = knspace
widget.knspace = new_knspace

Then label1 would remain stored in the default knspace since it was directly set, but label2 and my_complex would still be added to the new namespace.

One can customize the attribute used to search the parent tree by changing KNSpaceBehavior.knspace_key. If the desired knspace is not reachable through a widget's parent tree, e.g. in a popup that is not a widget's child, KNSpaceBehavior.knspace_key can be used to establish a different search order.

Accessing the namespace

As seen in the previous example, if not directly assigned, the namespace is found by searching the parent tree. Consequently, if a namespace was assigned further up the parent tree, all its children and below could access that namespace through their KNSpaceBehavior.knspace property.

This allows the creation of multiple widgets with identically given names if each root widget instance is assigned a new namespace. For example:

<MyComplexWidget@KNSpaceBehavior+Widget>:
    Label:
        text: root.knspace.pretty.text if root.knspace.pretty else ''

<MyPrettyWidget@KNSpaceBehavior+TextInput>:
    knsname: 'pretty'
    text: 'Hello'

<MyCompositeWidget@KNSpaceBehavior+BoxLayout>:
    MyComplexWidget
    MyPrettyWidget

Now, when we do:

knspace1, knspace2 = KNSpace(), KNSpace()

composite1 = MyCompositeWidget()
composite1.knspace = knspace1

composite2 = MyCompositeWidget()
composite2.knspace = knspace2

knspace1.pretty = "Here's the ladder, now fix the roof!"
knspace2.pretty = "Get that raccoon off me!"

Because each of the MyCompositeWidget instances has a different namespace, their children also use different namespaces. Consequently, the pretty and complex widgets of each instance will have different text.
Further, because both the namespace ObjectProperty references and KNSpaceBehavior.knspace have rebind=True, the text of the MyComplexWidget label is rebound to match the text of MyPrettyWidget when either the root's namespace changes or when the root.knspace.pretty property changes, as expected.

Forking a namespace

Forking a namespace provides the opportunity to create a new namespace from a parent namespace so that the forked namespace will contain everything in the origin namespace, but the origin namespace will not have access to anything added to the forked namespace. For example:

child = knspace.fork()
grandchild = child.fork()

child.label = Label()
grandchild.button = Button()

Now label is accessible by both child and grandchild, but not by knspace. And button is only accessible by the grandchild, but not by the child or by knspace. Finally, doing grandchild.label = Label() will leave grandchild.label and child.label pointing to different labels.

A motivating example is the example from above:

<MyComplexWidget@KNSpaceBehavior+Widget>:
    Label:
        text: root.knspace.pretty.text if root.knspace.pretty else ''

<MyPrettyWidget@KNSpaceBehavior+TextInput>:
    knsname: 'pretty'
    text: 'Hello'

<MyCompositeWidget@KNSpaceBehavior+BoxLayout>:
    knspace: 'fork'
    MyComplexWidget
    MyPrettyWidget

Notice the addition of knspace: 'fork'. This is identical to doing knspace: self.knspace.fork(). However, doing that would lead to infinite recursion, as that kv rule would be executed recursively because self.knspace would keep on changing. Allowing knspace: 'fork' circumvents that. See KNSpaceBehavior.knspace.

Now, having forked, we just need to do:

composite1 = MyCompositeWidget()
composite2 = MyCompositeWidget()

composite1.knspace.pretty = "Here's the ladder, now fix the roof!"
composite2.knspace.pretty = "Get that raccoon off me!"

This works because, by forking, we automatically created a unique namespace for each MyCompositeWidget instance.
- class kivy.uix.behaviors.knspace.KNSpace(parent=None, keep_ref=False, **kwargs)

  Bases: kivy.event.EventDispatcher

  Each KNSpace instance is a namespace that stores the named Kivy objects associated with this namespace. Each named object is stored as the value of a Kivy ObjectProperty of this instance whose property name is the object's given name. Both rebind and allownone are set to True for the property.

  See KNSpaceBehavior.knspace for details on how a namespace is associated with a named object.

  When storing an object in the namespace, the object's proxy_ref is stored if the object has such an attribute.

- fork()

  Returns a new KNSpace instance which will have access to all the named objects in the current namespace but will also have a namespace of its own that is unique to it. For example:

  forked_knspace1 = knspace.fork()
  forked_knspace2 = knspace.fork()

  Now, any names added to knspace will be accessible by the forked_knspace1 and forked_knspace2 namespaces by the normal means. However, any names added to forked_knspace1 will not be accessible from knspace or forked_knspace2. Similarly for forked_knspace2.

- keep_ref = False

  Whether a direct reference should be kept to the stored objects. If True, we use the direct object, otherwise we use proxy_ref when present. Defaults to False.

- property(self, name, quiet=False)

  Get a property instance from the property name. If quiet is True, None is returned instead of raising an exception when name is not a property. Defaults to False.

  New in version 1.0.9.

  Changed in version 1.9.0: quiet was added.

- class kivy.uix.behaviors.knspace.KNSpaceBehavior(knspace=None, **kwargs)

  Bases: builtins.object

  Inheriting from this class allows naming of the inherited objects, which are then added to the associated namespace knspace and accessible through it.

  Please see the knspace behaviors module documentation for more information.

- knsname

  The name given to this instance.
  If named, the name will be added to the associated knspace namespace, which will then point to the proxy_ref of this instance.

  When named, one can access this object by e.g. self.knspace.name, where name is the given name of this instance. See knspace and the module description for more details.

- knspace

  The namespace instance, KNSpace, associated with this widget. The knspace namespace stores this widget when naming this widget with knsname.

  If the namespace has been set with a KNSpace instance, e.g. with self.knspace = KNSpace(), then that instance is returned (setting with None doesn't count). Otherwise, if knspace_key is not None, we look for a namespace to use in the object that is stored in the property named knspace_key of this instance. I.e. object = getattr(self, self.knspace_key). If that object has a knspace property, then we return its value. Otherwise, we go further up, e.g. with getattr(object, self.knspace_key), and look for its knspace property.

  Finally, if we reach a value of None, or knspace_key was None, the default knspace namespace is returned.

  If knspace is set to the string 'fork', the current namespace in knspace will be forked with KNSpace.fork() and the resulting namespace will be assigned to this instance's knspace. See the module examples for a motivating example.

  Both rebind and allownone are True.

- kivy.uix.behaviors.knspace.knspace = <kivy.uix.behaviors.knspace.KNSpace object>

  The default KNSpace namespace. See KNSpaceBehavior.knspace for more details.
https://kivy.org/doc/master/api-kivy.uix.behaviors.knspace.html
In C programming, a string is a sequence of characters terminated with a null character \0. For example:

char c[] = "c string";

When the compiler encounters a sequence of characters enclosed in double quotation marks, it appends a null character \0 at the end by default.

How to declare a string?

Here's how you can declare strings:

char s[5];

Here, we have declared a string of 5 characters.

How to initialize strings?

You can initialize strings in a number of ways.

char c[] = "abcd";
char c[50] = "abcd";
char c[] = {'a', 'b', 'c', 'd', '\0'};
char c[5] = {'a', 'b', 'c', 'd', '\0'};

Let's take another example:

char c[5] = "abcde";

Here, we are trying to assign 6 characters (the last character is '\0') to a char array having 5 characters. This is bad and you should never do this.

Assigning Values to Strings

Arrays and strings are second-class citizens in C; they do not support the assignment operator once they are declared. For example,

char c[100];
c = "C programming";  // Error! array type is not assignable.

Note: Use the strcpy() function to copy the string instead.

Read String from the user

You can use the scanf() function to read a string. The scanf() function reads a sequence of characters until it encounters whitespace (space, newline, tab, etc.).

Example 1: scanf() to read a string

#include <stdio.h>
int main()
{
    char name[20];
    printf("Enter name: ");
    scanf("%s", name);
    printf("Your name is %s.", name);
    return 0;
}

Output

Enter name: Dennis Ritchie
Your name is Dennis.

Even though Dennis Ritchie was entered in the above program, only "Dennis" was stored in the name string. It's because there was a space after Dennis.

How to read a line of text?

You can use the fgets() function to read a line of string. And, you can use puts() to display the string.
Example 2: fgets() and puts()

#include <stdio.h>
int main()
{
    char name[30];
    printf("Enter name: ");
    fgets(name, sizeof(name), stdin);  // read string
    printf("Name: ");
    puts(name);  // display string
    return 0;
}

Output

Enter name: Tom Hanks
Name: Tom Hanks

Here, we have used the fgets() function to read a string from the user.

fgets(name, sizeof(name), stdin);  // read string

The sizeof(name) expression evaluates to 30, so fgets() reads at most 29 characters of input into the name string (one slot is reserved for the terminating null character).

To print the string, we have used puts(name);.

Note: The gets() function can also be used to take input from the user. However, it has been removed from the C standard, because gets() allows you to input any number of characters and can therefore cause a buffer overflow.

Passing Strings to Functions

Strings can be passed to a function in a similar way as arrays. Learn more about passing arrays to a function.

Example 3: Passing string to a Function

#include <stdio.h>
void displayString(char str[]);

int main()
{
    char str[50];
    printf("Enter string: ");
    fgets(str, sizeof(str), stdin);
    displayString(str);  // Passing string to a function.
    return 0;
}

void displayString(char str[])
{
    printf("String Output: ");
    puts(str);
}

Strings and Pointers

Like arrays, string names "decay" to pointers. Hence, you can use pointers to manipulate the elements of a string. We recommend that you check C Arrays and Pointers before going through this example.

Example 4: Strings and Pointers

#include <stdio.h>

int main(void)
{
    char name[] = "Harry Potter";

    printf("%c", *name);      // Output: H
    printf("%c", *(name+1));  // Output: a
    printf("%c", *(name+7));  // Output: o

    char *namePtr;
    namePtr = name;
    printf("%c", *namePtr);      // Output: H
    printf("%c", *(namePtr+1));  // Output: a
    printf("%c", *(namePtr+7));  // Output: o

    return 0;
}
https://cdn.programiz.com/c-programming/c-strings
The tarfile module offers:

- read/write support for the POSIX.1-1988 (ustar) format;
- read/write support for the GNU tar format, including the longname and longlink extensions, and read-only support for the sparse extension;
- read/write support for the POSIX.1-2001 (pax) format (new in version 2.6).

It handles directories, regular files, hardlinks, symbolic links, fifos, character devices and block devices, and is able to acquire and restore file information like timestamp, access permissions and owner.

The encoding and errors arguments control the way strings are converted to unicode objects and vice versa. The default settings will work for most users. See section Unicode issues for in-depth information. New in version 2.6.

The pax_headers argument is an optional dictionary of unicode strings which will be added as a pax global header if format is PAX_FORMAT. New in version 2.6.

How to read a gzip compressed tar archive and display some member information:

import tarfile
tar = tarfile.open("sample.tar.gz", "r:gz")
for tarinfo in tar:
    print tarinfo.name, "is", tarinfo.size, "bytes in size and is",
    if tarinfo.isreg():
        print "a regular file."
    elif tarinfo.isdir():
        print "a directory."
    else:
        print "something else."
tar.close()

The ustar format limits file sizes to 8 gigabytes. This is an old and limited but widely supported format. The GNU tar format (GNU_FORMAT) supports long filenames and linknames, and files bigger than 8 gigabytes.
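To complement the reading example above, here is a small round-trip sketch that creates a gzip-compressed archive and reads it back. It is written in modern Python 3 syntax rather than the 2.6 print-statement style of these docs, and the file names are invented for the example:

```python
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create a file to archive.
    member_path = os.path.join(tmp, "hello.txt")
    with open(member_path, "w") as f:
        f.write("hello tarfile")

    # Write a gzip-compressed archive ("w:gz" mode).
    archive_path = os.path.join(tmp, "sample.tar.gz")
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(member_path, arcname="hello.txt")

    # Read it back ("r:gz" mode) and inspect the members.
    with tarfile.open(archive_path, "r:gz") as tar:
        names = tar.getnames()
        info = tar.getmember("hello.txt")
        data = tar.extractfile("hello.txt").read()

print(names)         # ['hello.txt']
print(info.isreg())  # True
print(data)          # b'hello tarfile'
```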
http://docs.python.org/release/2.6.4/library/tarfile.html
ADDITIONS:

- Performance improvements - making the .NET Framework XSLT processor the best performing processor.
- Functional improvements - improving the usability and feature set of the existing .NET Framework processor.

For the love of God! Why do you practically duplicate the Stream "interface" (and when is Stream going to be an interface and not an abstract class?) with some oddly named methods? Wouldn't it have been a much better design to have one method Stream ValueChunk() which returns null in case it cannot return the chunk, and otherwise a stream which performs the reading? Oh well, it's just one MORE wrapper to write…

Hi Dare, great work as usual… however, do you know what the status of KB 317611 (support for chameleon schemas) is?…

The .NET XML classes are well-grained and powerful..but..err..a bit complex when compared to MSXML. Check out for a good example of what I mean.

The XQuery part is a major ARGGGGGHHH! And, no, supporting XQuery in SQL Server is no stopgap for that; I'm interested in applying XQuery to local XML files, I'm not interested in SQL Server. Is there any hope that the existing XQuery support will be released somehow as a stand-alone technology ('unsupported' if deemed necessary)? Or do we have to use ILDasm/ILAsm on the Whidbey beta 1 files to 'extract' an assembly with those classes ourselves? I perfectly well understand your reason for not supporting a 'standard' that is not going to be a standard for some time, but retracting the existing XQuery part from Whidbey beta 2 means that there isn't even an experimental XQuery implementation for the .NET Framework around, anywhere. Because after Microsoft announced that they would support XQuery in Whidbey, no one in their right mind would start designing their own XQuery implementation for .NET. So there are no third-party implementations.
Luc, if you feel you need an implementation of XQuery for the .NET Framework right away, you can take a look at the Saxon.NET project hosted at anonymouse,

The problems with chameleon schemas are fixed in Whidbey. I seem to remember us considering whether to fix this in Everett SP1, but I'm not sure if we ever did.

Still no easy way to choose encodings other than UTF-16 for XmlDocument?

Does the lack of an XQuery implementation affect your decision on XSLT 2? (see)

Dare Obasanjo shares an important update on the Beta 2 version of System.Xml 2.0.

System.Xml changes in .NET Framework 2.0 beta 2

I second Luc Cluitmans's comments. XQuery is key to our strategy and we need a non-SqlServer approach, even if unsupported and experimental.

While I understand the desire for XQuery on the client, are you sure you'd rather have partial support and, potentially, a client that doesn't align well to the standard deployed on every client you want to touch? That makes me jumpy. Don and Luc, I'd like to hear more about your use cases, too. Follow the link for more contact info.

I have to third the comments by Don and Luc. Since when has Microsoft let a little thing like standards get in its way? But seriously folks, what do different clients have to do with anything? For example, we have an n-tier app. We have our own Windows.Forms thin client talking to a server using Web services. All processing is done on the server. This is especially pertinent to XQuery, as that's what would be ideal to push sparse data to the client. Please, please, please. Microsoft has always been very accessible in early beta stages of products. Don't rip out XQuery now.

Consistency? You have refactored the ReadValueAsXXX() methods to the much more appropriate ReadContentAsXXX(). Appropriate because the name of the method reflects the domain knowledge: XML has content, not value! But then you've left other methods called CanReadValueChunk(), which makes no sense at all. And why are you duplicating functionality?
Whilst it may seem attractive to have all the features in one place, adding methods like ReadElementContentAsBase64() just pollutes the simplicity and dilutes the consistency of the API. You're duplicating functionality in an unnecessary and inconsistent way. Surely it would be better to have one or two methods like ReadElementContentAsStream() and ReadElementContentAsByteArray() and allow people to use System.Convert or whatever other standard feature exists for this task. Making a half effort to support a couple of encodings is pointless when there are guaranteed to be more, so always implement the base functionality and no more. If you're going to design an API, or an object model which you expect other people to use, you may as well use it yourself; otherwise it's just hypocrisy… not to mention bad design.

Steven, I feel your pain! XQuery in the middle does make sense and would have a smaller change-management footprint. Just a thought: suppose you installed an instance of SQL Server 2005 Express on the local app server. It wouldn't do any normal database work, other than have procs that take your XML and XQuery parameters and then return results to you. Would that work?

As much as I love XQuery, I'd rather not have it in Whidbey for a number of reasons. See

Over the weekend, I tried performing stylesheet transformation using the XslTransform class, only to find that VBScript is not supported. Any plans to change this in the upcoming version?

The XslTransform class only supports the primary Common Language Runtime languages, which are VB.NET, C# and JScript.NET.

I may be misunderstanding the purpose of XmlWriter.WriteValue, so I am sorry if this is way off. In the docs for XmlWriter.WriteValue( DateTime ), it says "If no parent type is available, the value is typed using the default mapping; thus, a DateTime value is mapped to the xsd:dateTime type."
According to, the format should be CCYY-MM-DDThh:mm:ss, but it looks like you are using CCYY-MM-DDThh:mm:ss.sssssss-HH:MM. I think that your representation may be valid as far as ISO 8601 goes, but as it is, there are two major problems: 1) a time zone is being pulled from outside of the DateTime, which invalidates it as a representation of the original DateTime, and 2) that format is not supported at all in the W3C recommendation.

It is possible that I am misunderstanding the W3C recommendation (or maybe it has changed), but on the time zone, I think that WriteValue is wrong. The data going into the XML should represent the object, not the instance with a system context appended to it. An example is below.

using System;
using System.Xml;

public class Test
{
    public static void Main()
    {
        XmlWriter writer = new XmlTextWriter( Console.Out );
        writer.WriteStartElement( "SomeDate" );
        writer.WriteValue( DateTime.Now );
        writer.WriteEndElement();
        writer.Close();
    }
}

…produces…

<SomeDate>2004-10-26T15:45:24.5706128-06:00</SomeDate>

I think that the output should be…

<SomeDate>2004-10-26T15:45:24</SomeDate>

Thank you for your time. – jeremiah
https://blogs.msdn.microsoft.com/dareobasanjo/2004/10/13/upcoming-changes-to-system-xml-in-net-framework-2-0-beta-2/
Sticky View Mode In previous Episerver versions, a default view for the current content type was loaded when you navigated the content tree. By using a UIDescriptor, it was possible to change the default view for a specific ContentType. For example, in Alloy demo source code, ContainerPage has a default view set to AllPropertiesView. [UIDescriptorRegistration] public class ContainerPageUIDescriptor : UIDescriptor<ContainerPage> { public ContainerPageUIDescriptor(): base(ContentTypeCssClassNames.Container) { DefaultView = CmsViewNames.AllPropertiesView; } } As a part of Episerver 10.11.0, we are introducing a new feature called “Sticky View mode”. “Sticky” means that, when navigating the content tree, the previously used view is loaded instead of the default view. This functionality is enabled for every content type by default. Whenever you change the view using the toggle button, the currently selected view is saved in the local storage and is used when you change the context. If a saved view is not available for the current content type, then the default view for that content is loaded. Disabling Sticky View mode Sticky View mode can be disabled per content type. In UIDescriptor, there is new property EnableStickyView. If you want to turn the feature off, this property should be set to false. For example, if you would like to force the All properties view to be the default view for StartPage, then EnableStickyView should be false and DefaultView should be set to AllPropertiesView. [UIDescriptorRegistration] public class StartPageUIDescriptor : UIDescriptor<StartPage> { public StartPageUIDescriptor(): base(ContentTypeCssClassNames.Container) { DefaultView = CmsViewNames.AllPropertiesView; EnableStickyView = false; } } This is welcomed! thx It's the small things, like this, that makes you more productive, and love the CMS. Please also make MenuPin a part of the main product + something like this :-) Really good update, this has bugged me for a while! 
Agree with the above comment as well about MenuPin :) Best feature since sliced bread!
https://world.optimizely.com/blogs/grzegorz-wiechec/dates/2017/9/sticky-view-mode/
I think it depends on the situation. You should never use it inside a .h file in my opinion. Code readability goes down a bit by not using it, but it is more clear what classes belong to what namespaces, so maybe that compensates for it again. If your own engine uses only one core lib for example it is perfectly fine to do "using namespace CoreLib" inside your .cpp files in my eyes. If however you later plan to add support for another library that happens to have the same class name, you have to explicitly name that one. I personally prefer not to use "using namespace", as then you are always safe (unless there is a lib with the same namespace name lol). I think generally it should be avoided, but in some cases it is fine I think.
http://www.gamedev.net/user/103127-buckshag/?tab=posts
Determines if an entry contains the specified attribute. If the entry contains the attribute, the function returns a pointer to the attribute. #include "slapi-plugin.h" int slapi_entry_attr_find( const Slapi_Entry *e, const char *type, Slapi_Attr **attr ); This function takes the following parameters: Entry that you want to check. Name of the attribute that you want to check. Pointer to the attribute, if the attribute is in the entry. This function returns 0 if the entry contains the specified attribute; otherwise it returns -1. Do not free the returned attr. It is a pointer to the internal entry data structure. It is usually wise to make a copy of the returned attr, using slapi_attr_dup(), to avoid dangling pointers if the entry is freed while the pointer to attr is still being used.
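As an illustrative sketch (not compilable without the Directory Server plug-in SDK), a plug-in might combine this function with the slapi_attr_dup() advice above; the attribute name "mail" and the surrounding helper function are invented for the example:

```c
#include "slapi-plugin.h"

/* Hypothetical helper: 'e' would come from the plug-in operation context. */
static Slapi_Attr *copy_mail_attribute(const Slapi_Entry *e)
{
    Slapi_Attr *attr = NULL;

    if (slapi_entry_attr_find(e, "mail", &attr) == 0) {
        /* Entry contains "mail". Do not free attr; copy it instead,
         * in case the entry is freed while the copy is still in use. */
        return slapi_attr_dup(attr);
    }
    return NULL; /* returned -1: attribute not present */
}
```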
http://docs.oracle.com/cd/E19528-01/820-2492/aaify/index.html
Hello, I want to do something very simple. I would like to create directories within my output or input folders that currently do not exist. Normally this would easily be achievable through the Python command os.mkdir. Here is my code:

import dataiku
import pandas as pd, numpy as np
from dataiku import pandasutils as pdu
import os

# Read recipe inputs
test = dataiku.Folder("SZxomfdY")
test_info = test.get_info()

# Write recipe outputs
outputfolder = dataiku.Folder("DBryoOH2")
outputfolder_info = outputfolder.get_info()

#for file in test.list_paths_in_partition():
    #print(file)

outputdir = outputfolder_info['accessInfo']['root']
newoutput = os.path.join(outputdir,'newdir')
print(newoutput)
os.mkdir(newoutput)

However, I get the error:

FileNotFoundError Traceback (most recent call last)
<ipython-input-29-3b604d987ec9> in <module>()
     19 newoutput = os.path.join(outputdir,'newdir')
     20 print(newoutput)
---> 21 os.mkdir(newoutput)

I don't understand, though: it says it can't locate the folder, but it shouldn't be locating the folder, it should be making the folder. Does anyone have any idea what's going on?

The FileNotFoundError also happens when you want to create a new folder within a folder that doesn't exist in the first place. However, your Python code should work (I did test it just in case, and it was successful). Could you share the output from print(newoutput)?

If that doesn't give some idea of what might be happening, the next step would be to check what kind of managed folder you are using. Is it a local filesystem managed folder?

To create directories in DSS, I'd recommend using the Dataiku API instead of mkdir; it is more convenient for managing permissions and handling the user isolation framework. Could this thread solve your issue?
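One common cause of this FileNotFoundError (independent of Dataiku) is that an intermediate component of the path does not exist yet: os.mkdir() only creates the final directory. A minimal sketch of the more forgiving pattern, using a temporary directory so it can run anywhere (the folder names are made up):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # os.mkdir(nested) would raise FileNotFoundError here, because the
    # intermediate directories "a" and "b" do not exist yet.
    nested = os.path.join(root, "a", "b", "newdir")

    # os.makedirs creates every missing parent directory;
    # exist_ok=True makes it a no-op if the path already exists.
    os.makedirs(nested, exist_ok=True)
    created = os.path.isdir(nested)

print(created)  # True
```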
https://community.dataiku.com/t5/Using-Dataiku/Creating-directories-within-python-script-os-mkdir-causing/td-p/15526
Eclipse Community Forums - Performance degradation

Hi, I have an application that parses a file and creates numerous objects (<100,000 but >20,000) from it. The entire read operation must either succeed or fail, so this demarcates a single transaction. To begin with, the application runs with about 60-70% of the CPU in Java, and 30-40% in postgres. However, over time this degrades to under 1% postgres and 99% Java. With persistence, the application can take up to 2 hours to run. When I replace the DAO with one that manages the objects in-memory, it runs in perhaps 2 seconds.

I am using the latest (non-milestone) release of EclipseLink and Spring in this project. I've not noticed the performance issue being resolved when upgrading from EclipseLink 1.* to 2.*, or from Spring 2.5 to 3.*. I have not so far tried an alternative JPA provider to see if this is specific to EclipseLink. The code is part of a larger infrastructure, so it would be a pain to pull out into a test case.

All ideas welcome.

Matthew

Matthew Pocock, 2010-01-19T18:00:42-00:00

Re: Performance degradation

That is very odd. What exactly are you doing (code) and how have you configured things (persistence.xml, config, etc.)? "Over time this degrades": what do you mean by this? Are you performing this read/insert once and it gets slower as it processes the file, or are you performing the read/insert over and over again and the server eventually gets slower?

If it degrades over the single operation, then you probably need to split up the batch of objects into smaller sets. You can still do this in a single transaction using a flush() then a clear() after, say, 1000 objects. If it degrades over time, then you probably have a memory leak somewhere. Check your cache settings, and ensure your application is releasing its handle on the objects. You may wish to use a memory profiler.
In general with any performance issue it is best to first figure out what the issue is using a performance profiler, such as JProfiler. There is some performance information on EclipseLink here,

James Sutherland, 2010-01-21T15:29:18-00:00

Re: Performance degradation

> What exactly are you doing (code) and how have you configured things (persistence.xml, config, etc.)?

My persistence.xml is almost empty. In Spring, I turn SQL logging and DDL generation on, and that's about it. It uses RESOURCE_LOCAL transactions. It's about as stripped-down as it can possibly be.

The problem classes are:

@Entity
@Table(uniqueConstraints = @UniqueConstraint(columnNames = { "orfName" }))
@NamedQueries(@NamedQuery(name = "orfByName", query = "select o from Orf o where o.orfName = :orfName"))
public class Orf implements Serializable, Comparable<Orf> {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Integer id;

    @Basic(fetch = FetchType.EAGER)
    private String orfName;
    ...
}

@Entity
@Table(uniqueConstraints = {@UniqueConstraint(columnNames = {"orf1_id", "orf2_id"})})
@NamedQueries( {
    @NamedQuery(name = "allPairs", query = "select p from Pair p"),
    @NamedQuery(name = "pairsByOrf", query = "select p from Pair p where p.orf1 = :orf or p.orf2 = :orf"),
    @NamedQuery(name = "pairByOrfs", query = "select p from Pair p where (p.orf1 = :orf1 and p.orf2 = :orf2) or (p.orf2 = :orf1 and p.orf1 = :orf2)")
})
public class Pair implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Integer pairID;

    @ManyToOne(fetch = FetchType.EAGER)
    private Orf orf1;

    @ManyToOne(fetch = FetchType.EAGER)
    private Orf orf2;
    ...
}

In the DAO, pairs are written by this code:

public Pair fetchOrMakePair(Orf orf1, Orf orf2) {
    List<Pair> res = entityManager.createNamedQuery("pairByOrfs")
        .setParameter("orf1", orf1)
        .setParameter("orf2", orf2).getResultList();
    if(res.isEmpty()) {
        Pair p = new Pair(orf1, orf2);
        p.normalize();
        entityManager.persist(p);
        return p;
    } else {
        return res.get(0);
    }
}

I make sure that Orf instances are pre-loaded. Then in the same transaction I add about 50k pairs.

> "over time this degrades": what do you mean by this? Are you performing this read/insert once and it gets slower as it processes the file, or are you performing the read/insert over and over again and the server eventually gets slower?

All the read/inserts are done in a single transaction and then bulk-committed. As more pairs are added to the transaction, the performance degrades. The same behaviour can be seen with Hibernate; it's not specific to EclipseLink. I don't think it is a memory leak of some kind, as once the transaction commits, the performance goes back to normal and all memory is recovered.

Matthew Pocock, 2010-01-21T22:22:43-00:00

Re: Performance degradation

Since it degrades over the single operation, you probably need to split up the batch of objects into smaller sets. You can still do this in a single transaction using a flush() then a clear() after, say, 1000 objects.

James Sutherland, 2010-01-25T16:18:05-00:00

Re: Performance degradation

I've identified the issue and have a workaround. However, I can't help feeling that it's the sort of thing that should be handled for me. The performance was being killed by looking up entities by things other than their primary key, within the loop that persisted them. So, in the case of Orf, the orfName has a unique constraint, but the Orf has a numeric ID. I was doing lots of queries to fetch orfs by their name.
All the Java CPU was going into this lookup, presumably doing a linear scan, despite this having a unique constraint. When I re-wrote my DAO to keep a local cache map of orfName->Orf, this overhead went away. I did a similar thing for Pair, making a cache map from (int,int) to Pair.

The time taken to run the app without these caches is about 1030 minutes. The time with these caches is about 2 minutes. In both cases, I was looking for objects by unique constraints within the same transaction as they were created. The same behaviour is seen in both Hibernate and EclipseLink, so I guess this is a performance issue in how in-transaction objects are searched by something fairly low-down in JPA.

Matthew (who now feels dirty for maintaining caches of objects in-memory)

Matthew Pocock, 2010-01-26T17:02:09-00:00

Re: Performance degradation

Objects are only cached by their Id (primary key) in the persistence context. Any JPA Query is required by default to first "flush" to the database any change made to any of the objects in the persistence context, and then query the database for the object. JPA defines a flushMode which can be set on the EntityManager or a Query that will avoid the flush until COMMIT. EclipseLink also supports a persistence unit property and query hint to configure the flush mode.

Setting the flush mode to COMMIT would avoid the cost of flushing, which is probably your issue, but then the object would not be written to the database, and your query would not find it (unless you manually called flush after persisting the object). Another option is to use conforming in EclipseLink, which allows a query to find an object in memory. You would also need to set the query type to make it a ReadObject query to avoid accessing the database, though.

In general, your solution of caching the objects by their alternate Id is probably best. Otherwise you could change your queries to use the real Id, or change your mapping Id to what you are using in queries.
]]> James Sutherland 2010-01-28T14:59:29-00:00 Re: Performance degredation <![CDATA[=Objects are only cache by their Id (primary key) in the persistence context. It's a pity that implementers do not additionally cache by unique constraints. I have never looked into the guts of any JPA implementations, so have no idea how feasible this would be. However, given my experience, I find it likely that a lot of applications would suddenly start going a lot faster if this where done.]]> Matthew Pocock 2010-01-28T15:27:28-00:00
http://www.eclipse.org/forums/feed.php?mode=m&th=160969&basic=1
Digit Separators in C++

In this article we are going to discuss digit separators in C++. This feature was introduced in C++14. Using digit separators in code makes it easier for human readers to parse large numeric values.

Problem Detection

Reading a number with many digits is surprisingly uncomfortable. Reading a number like 1000 is easy, but a number like 1000000 is already harder to take in at a glance, and it only gets worse as more zeros are added. Consider a few problems:

- Pronounce 345682384908
- Add 123456789 and 987654321
- Compare 34576807908 and 34576867908

We can solve these easily with programs, but can we easily read the numbers on our own? The answer is no.

Solution

To solve this problem in everyday writing we use commas (,) to separate digits. The number 1000000 then looks like 10,00,000 and can easily be read as "ten lakhs". In a programming language we cannot use commas this way, so the concept of digit separators was introduced: a single quotation mark (') is used to separate the digits. Now it becomes much easier for us to read the number.
Implementation

A program to show that single quote marks are ignored while determining the value:

#include <iostream>
using namespace std;

int main() {
    long long int a = 10'00'000;
    cout << a;
    return 0;
}

Output:

1000000

A program to show that single quote marks are purely for the reader's convenience, and that the value of a number is not affected by their position:

#include <iostream>
using namespace std;

int main() {
    long long int a = 23'4'589;
    long long int b = 23'45'89;
    long long int c = 2345'89;
    cout << a << " " << b << " " << c;
    return 0;
}

Output:

234589 234589 234589

From the above program it is clear that the use of digit separators does not affect the value of the original number. The numbers 23'456'76, 2345'676 and 2'3456'7'6 all represent the same value, which would be written manually as 23,45,676 (twenty-three lakh forty-five thousand six hundred and seventy-six).

We can also use digit separators in hexadecimal, octal and binary literals in C++:

long long hexadecimal_no = 0xFFFA'FBFF;
long long octal_no = 00'43'02;
long long binary_no = 0b1000'1101;

Why not something else?

One might wonder why the single quotation mark was chosen as the separator, and why not a space character or an underscore:

- The underscore (_) character was already taken in C++11 by user-defined literals, so using it as a digit separator as well would be confusing.
- A comma (,) would be ambiguous. If we wrote a number like 1,23,456, the compiler could not decide whether we meant a single integer written with separators or three separate integers, so this is a bad idea too.
- A space character cannot work either, because whitespace already separates tokens in C++ (and the language treats all whitespace alike), so 10 000 would be read as two separate tokens.

For these and other reasons, the single quotation mark (') was selected as the digit separator to smooth our workflow.
https://iq.opengenus.org/digit-separators-in-cpp/
. Trufax. Link to backup in the meantime. Mirror: Mega.co.nz

MOD IS OUT.

I'll drink to that!! (I actually just opened a nice Merlot, funny timing...)

awesome!! so will the dl be faster on here or depositfiles? ... I just started a regular dl and it will take 4 hours+ according to the info and I have BB... doesn't matter bc I will just let it work, I was just curious.

If it finishes uploading to Moddb then it will take less time to download, but for me it's taking at least twice as long to upload as the other link. I wish I could get a mod here to check out the link, download it, and add it to their servers. We might be able to set up a Torrent, eventually...

with that size it would be nice to be able to resume the download if anything happens (ie. power shortage), plus DF is behaving badly with me, for some reason... torrent would work

counting down to 36 minutes remaining

The mirror is excellent. Got the mod in 30/40 minutes. Cheers.

This just in: if you get a crash related to "can't open section m16a1" or somesuch, open gamedata\config\weapons\weapons.ltx and add #include "m16a1.ltx" to the list of included weapons. I didn't get this until the 11th hour of testing, and forgot to add the new weapons.ltx into it. Never mind that, first hotfix since release: Pastebin.com

I bet that was the fastest hotfix release ever seen on moddb.com! Even before the download goes live! heheh

Oh my jeebux. Outstanding. DL'ing now. Thanks over 9000 times.

Just finished DLing from DepositFiles, clocked in at 4 1/2 hours. So now I'm trying to make a torrent... can you do that in Tribler?

Sure, I'd love to have a torrent upload link.

Crashing on client sync without bug report; extracted without any issues, all files intact. Anyone any ideas what the problem might be?

mine works fine, are you using a clean 1.004?

Fatal, could you share the r2 part of your user.ltx? (Actually, any configs you might think are important...) Just wanna be sure I ain't missing anything! On a side note, is it possible in the near future to add other features from SWTC? Their shader-related stuff is looking really really good (albeit still a WIP). Part of it has been released, and I was wondering if at least the sun shafts could be merged, as I believe even Meltac has checked them out. Thank you for the hard work!!

Here's my r2 section. Pastebin.com

thank you very much! Will have to do a reinstall. Did not realise it required 1.0004; was working with 1.0005. well, I believe there is a way of rolling back to 1.0004, but I never actually attempted it... always kept copies of the whole folder...

complete reinstall, no joy, same crash.

Did you disable prefetching in your shortcut? And what game version are you using, WW, US, RU?

Did not disable prefetch, will try that. All I need to do is remember how to disable it.

I need to roll back also... is there a way to do it without me reinstalling... I actually did get it to start, but after Sid gave me back my gear and I tried to exit the bunker I CTDd. but to make it no-prefetch & some other stuff it is ---

"E:\THQ\S.T.A.L.K.E.R. - Shadow of Chernobyl\bin\XR_3DA.exe" -nointro -noprefetch -nodistort -nolog

Used disable mod, running well so far, thanks..

you're welcome.
http://www.moddb.com/mods/zone-of-alienation-mod/images/waiting-for-107-to-finish-uploading-to-moddb
There are many scenarios in which you might want part of the code in an Adobe Flex application to be loaded only when required; that is, you might not want a piece of code to be compiled into the main application SWF, but instead loaded on demand. Scenarios include screens which users access very rarely, or a set of libraries which you want to load only when required. This can be achieved using Modules in Flex.

There might be many screens in an Adobe Flex application which users access very rarely. Examples include screens where users give suggestions, complaints, etc. If all such screens are included in your application, the application SWF might be huge, hurting the initial loading experience. Similarly, you might have library code which is used rarely; an example is a library for creating reports, which is required only when the user wants reports to be generated. Any code which you think will be used very rarely can be put into modules and loaded dynamically when required.

About modules

Modules are SWF files which can be loaded dynamically by an application. They cannot run independently; they have to be invoked by an application. Any number of applications can share the same module. Modules allow you to interact with the parent application and also to interact with other modules. Modules can be loaded using either ModuleLoader or ModuleManager.

Sample application

This application shows how to load a module dynamically. The dynamically loaded module displays a window in which we can capture a user's suggestions. The module accepts input from the parent application and also invokes properties of the parent application. The application includes three files:

Modules.mxml – this will invoke the module
SuggestionModule.mxml – this is the module
IUserInput.as – this interface is implemented by the module, for module-application communication

Modules.mxml

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:
    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;
            import mx.managers.PopUpManager;
            import mx.events.ModuleEvent;
            import mx.modules.ModuleManager;
            import mx.modules.IModuleInfo;

            private var moduleInfo:IModuleInfo;
            private var suggestion:SuggestionModule;

            private function loadSuggestionModule():void {
                moduleInfo = ModuleManager.getModule("SuggestionModule.swf");
                moduleInfo.addEventListener(ModuleEvent.READY, renderSuggestionModule);
                moduleInfo.addEventListener(ModuleEvent.ERROR, handleError);
                moduleInfo.addEventListener(ModuleEvent.PROGRESS, showProgress);
                moduleInfo.load();
                suggestionProgress.visible = true;
            }

            private function renderSuggestionModule(event:ModuleEvent):void {
                suggestion = moduleInfo.factory.create() as SuggestionModule;
                suggestion.setHeaderMessage("Suggestion - Home Page");
                PopUpManager.addPopUp(suggestion, this, true);
                PopUpManager.centerPopUp(suggestion);
                suggestionProgress.visible = false;
            }

            public function closeSuggestion():void {
                PopUpManager.removePopUp(suggestion);
                moduleInfo.unload();
            }

            private function showProgress(event:ModuleEvent):void {
                suggestionProgress.text = "Loaded " + Math.round((event.bytesLoaded / event.bytesTotal) * 100) + "%";
            }

            private function handleError(event:ModuleEvent):void {
                Alert.show(event.errorText, "Error");
                suggestionProgress.visible = false;
            }
        ]]>
    </mx:Script>
    <mx:Button id="chartModuleBtn" label="Suggest" click="loadSuggestionModule();"/>
    <mx:Label id="suggestionProgress" text="Loaded 0%" visible="false"/>
</mx:Application>

SuggestionModule.mxml

<?xml version="1.0" encoding="utf-8"?>
<mx:Module xmlns:
    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;

            [Bindable]
            private var heading:String = "Your suggestion please";

            private function submitSuggestion():void {
                parentApplication.suggestionProgress.text = "We will consider your suggestion";
                parentApplication.suggestionProgress.visible = true;
                parentApplication.closeSuggestion();
            }

            public function setHeaderMessage(message:String):void {
                this.heading = message;
            }
        ]]>
    </mx:Script>
    <mx:Form width="100%" height="100%">
        <mx:FormHeading
        <mx:FormItem
            <mx:TextInput id="userName" width="233"/>
        </mx:FormItem>
        <mx:FormItem
            <mx:TextArea id="description" width="234"/>
        </mx:FormItem>
        <mx:ControlBar
            <mx:Button label="Submit" click="submitSuggestion();"/>
            <mx:Button label="Cancel" click="parentApplication.closeSuggestion();"/>
        </mx:ControlBar>
    </mx:Form>
</mx:Module>

IUserInput.as

package com.adobe {
    public interface IUserInput {
        function setHeaderMessage(message:String):void;
    }
}

Thank you so much for this post, the module idea in Flex was giving me problems getting my head around but this post really makes it clear, and also explains why it's so useful! Regards, Peter Witham

Hi Peter, Thanks for the comments. Modules are a great feature of Flex. There is another one called RSL in Flex, which can also be very useful. I will blog about RSL soon 🙂
Let me know if this helps 🙂 First of all apologies if this leads to an endless discussion, or maybe it’s just a silly question, but I’ll go ahead anyway .. I’m building a crm kind-of application. For now there are 2 parts: Customers and Products .. Would it be wise to make the Customers.mxml and the Products.mxml (which each have sub-pages and are now components) modules which only get loaded when a user clicks one of the 2 items on the main navigationbar? I know there are discussions about this all around, but since I really enjoy this blog and visit it daily I thought I’d stick to it 😉 Thanks in advance //jurlan Hi Jurlan, Great to know that people are reading my blogs regularly 🙂 Thanks 🙂 You can obviously modify them to modules and load then only when required. If you have them as modules, there is a time lag when the user clicks because the module has to be loaded from the server. If they are components then they are loaded with the intial SWF and so if the components are heavy then there is a time lag in the initial load. Now you can decide what you want to do 🙂 Hope this helps. Hi! Thanks for the reply 😉 I guess I’ll make my user wait at startup and present them with a nice preloader, because if they have to wait during clickevents they might think I did bad coding 😉 Great 🙂 When I load modules, I use the Application.application.enabled property and CursorManager.addBusyCursor function to make the application look like it’s doing something important. Then, when I get the READY or ERROR event, I clear those. In the meantime, I use a timer to pop up a custom window with a progress bar if the loading takes more than 7 seconds. The “I’m adding that functionality to the application…” caption almost makes a user happy to wait. 🙂 I have a question– i have a main Application that has some icons. on clicking an Icon, i want to send a HTTP POST and the response of that would contain a Module name. I want to parse that and load the Module. 
And each Module can have some components and buttons, and clicking on the button could send another HTTP request and that can return another Module Name..etc.. Also can the Module interact with the parent application — eg – to send the http request.. i want to use common code for that. I was wondering whether i can do this with FLEX and Action Script? Thanks in advance prasad Hi Prasad, You can definitely do this. This information is great. I’m building a social application with various views/pages. I see that modules would be the best way to setup the interaction. hi sujit, hw r u???u might not remember but i met you in hyderabad when you at satyam(Flex boot camp)…Since then i m trying my hands on flex and m njoying it a lot.. I have a query i want to display my data in pages..for example a scrapbook has many messages now if i want to display 10 messages at a time and give a next or previous button to see the next 10 messages..how can i do it???? and yes i forgot to say..your share market app looks great..can you share the code with me on my email id:escortnotice@gmail.com…… Hi Shubham, Thanks 🙂 I will share that code soon on my blog. If you want to load all the data on the client and display in multiple views you just have to store the data in a ArrayCollection or Array and display only required. If you want fetch the data from the server as when user navigates to different pages, you will have to modify your back end code to support this and invoke the service when required to get the data for a page. Hope this helps. Awesome blog with helpful contents. Just a thought, can i use a module between different application hosted in different locations. ie. I have a module called “A”, and have two applications “B”, and “C”, and both the applications use the module “A”.. Is there anything else i can use here? Hi sujit.Thanks for the suggestion.I am able to display it in a data grid. 
I have another query..its not flex..but i hope you can help me.I am trying to build a search facility for a website(JAVA).which can index my website and answer to the users query.I dont want to use the like query is there any other option.I have heard abt hibernate search and lucene but not finding enough material on net to start with.If its possible for u then please do guide me with this… Hi Priyank, You can load a module from a different domain. Make sure the cross domain policy file is present. Hope this helps. Hi Sujit I have a Flex mxml application that gets a user defined XML from a HttpService. For example a snippet of the XML is In the MXML when I am rednering the and using a Repeater on the Widgets list, how can I do a nested if/else (or switch-case ) like statement to render different UI FormItems? Thanks sorry Sujit, when I entered the actual XML characters in this “comment form”, the XML didn’t show up in first time. I am trying to add the XML now with encoding the characters <Widgets> <Widget type=”text” name=”first name” width=”30″ /> <Widget type=”date” name=”date of birth /> .. </Widgets> Hi Babji, Try any of the List/Tile based components. Hope this helps. Folks, I’m new to Flex so forgive my lack of knowledge. I’m setting up a data entry application that has many different screens. I was looking at setting up the handling of each screen in a module. Since one screen can cause another screen to be displayed, I’ve in-effect created a screen hierarchy or a nesting. In reading the manual I got the impression that modules can’t load other modules. Is that true? If so, any suggestions on how to structure something like this? Hi,Sujit.. i must say nice blog n comments,i got nice knowledge regarding the modules.. 
m developing chatting application..i’m having 3modules in my application,loginpage and registrationpage..n 3rd 1 is the application which does chatting..all is set,the only problem is when i click the SignIn button on my login page,i want to nevigate to the user’s home page,(i.e. 3rd module)..how can i get it?? inshort,i’ve 4 mxml files..main app file n 3modules..i wanna nevigate from my 1st module to 3rd module wen i press a button in my 1st module..will custom events help?? Sujit, I’m having an interesting issue. I have a button that triggers the loading of a module and adds the module as a child into a TabNavigator. I have to click the button twice the first time to load the module, and one after the first time…. public function loadModule(event:Event):void { var _module:IModuleInfo = ModuleManager.getModule("view/HomeScreen(ApplicationDomain.currentDomain, SecurityDomain.currentDomain); } private function loadModule_startHandler(event:ModuleEvent):void { trace("setup!!!"); } private function loadModule_readyHandler(event:ModuleEvent):void { trace("Module Loaded"); var _module:IModuleInfo = event.target as IModuleInfo; var _displayComp:DisplayObject = _module.factory.create() as DisplayObject; vs.addChild( _displayComp ); } private function loadModule_errorHandler(event:ModuleEvent):void { var _module:IModuleInfo = event.target as IModuleInfo; trace("Error Occured!"); } private function loadModule_progressHandler(event:ModuleEvent):void { trace("Progress: " + (event.bytesLoaded / event.bytesTotal ) * 100 ); } Henry, Can can look into using Universal Mind Extensions with Cairngorm. Hi, Can someone please help me out here? My org is using using flex for front end. we have 2 different front ends which looks exactly the same but serve different purposes. so using view states for that. now i need to develop a 3rd front end which is 70% the same, but need to add a couple of extra columns here and there and a couple of extra popups. 
i was wondering if its possible using modules, to use the common part of the application, and add the required extra columns and popups only in my module? using modules, can we get as granular as that? okie! hope i get a solution soon! thanks in advance! Hi Henry, You can load a module from another module. If you think the modules will anyways load and the user has to wait till the modules are loaded, don’t go for a module, keep the screens in the main application itself. Please visit the URL below for details. Hope this helps. Hi Manan, Please find details on how to load modules at the URL below. Hope this helps. Hi Hem, Please try making changes show below. private var _module:IModuleInfo; public function loadModule(event:Event):void { _module = ModuleManager.getModule(“Module1(); } Hope this helps. Hi Keerthika, You can do this using modules, but looks like creating a custom component and having it in the main application seems to be a better solution in your case than loading a module. Modularize your component if you think that view will be used very rarely or want to make the initial load of the application faster by loading the component later. Hope this helps. Getting this below error when i run mxmlc module.mxml in the command prompt Error: Type was not found or was not a compile-time constant:TideFaultEvent How could that possibly due to? Hi, I should say this is nice blog, as i ref I dont understand how to Authorize my CustomerService Application. Actually This Application is divided into several parts and these are interacting in between them.These parts are not modules as of now, they are components only in my application. Depending on the login credentials,some components should be activated, even some fields in that components should not be in active state for that user. Can someone help me out ?Hoping the reply soon Thanks. 
lanc, you need to add in this to your script tag import mx.modules.*; Hi Lanc, Looks like you have TideFaultEvent referenced in your module.mxml file, please make sure it is found by the compiler as Jason mentioned. Hope this helps. Hi Vijay, Since you don’t have the views as modules, they are already loaded in the SWF file. I would have stored list of allowed views for a particular role on the server and load the list when the user logs in. Depending on the list, you can show/hide views for a user. Hope this helps. Hi sujit, i have one issue that, i have 2 module and i want to dispatch a custom event from one module and catch in 2nd module. but because of some reason i am not able to catch the event in 2nd module. can you please help me and pointed out the problem? Thanks Somu If you directly reference your module class (SuggestionModule,mxml) within the application that is loading it (Modules.mxml), that class gets compiled into your main application SWF. This defeats the purpose of using modules to incrementally deliver content. In doing this your SuggestionModule class and the classes it references are compiled into both the main application SWF and the SuggestionModule.swf. Sujit and other experts, I’m doing a large application and doing some thinking on splitting code/functionality between modules. Looks like this is the forum where I can look forward for an answer to my question. My UI will change a bit (80% common) depending on user Role ad will provide different functionality on mouse and keyboard events to different roles. How can I have this code split into different modules incrementally. For library functions its easy to split, but for UI it’s tricky. I do not want to have 5 modules for 5 different roles which have 80% common stuff. Will appreciate your thoughts on this. great example friend, 🙂 Hi,Sujit.. 
in addition to your code examples As I see, you don’t use module interface at all, and the main application invokes setHeaderMessage() module method with using factory property and casting to SimpleModule, right? i.e. the module instance is used here to call module’s methods. Think, IUserInput interface and ModuleLoader using might be more useful in that case to avoid so-called hard dependencies between the module and the application. something like that: var isuggestion:* = mod.child as IUserInput; isuggestion.setHeaderMessage(“Suggestion – Home Page”); where mod is an instance of ModuleLoader Then we can implement interfaces for the application as well so that to invoke methods of the application from the module (if it’s really needed) what do you thinks about this approach? Hi Sujit, I have an existing application. This each view shares data with in the app. Say, the Greetings tab canvas shares data in Main Tab canvas. Can u please tell me how to convert it into modules. Like Guest Module, User Module, Shopping Cart Module etc., so that these data connections/ relations wont be affected. Hi Sujit, As this blog is active, I thought of posting my question here regarding module loading/unloading memoryleak issue. The detailed question is given in this link. I dont think there is any concrete answer given by Adobe on this. Do you have any workaround? Hi Sujit, I am using modules, but i get a error , when multiple modules are loaded quickly. or some time when I Try to load the first module. TypeError: Error #1009: Cannot access a property or method of a null object reference. at ModuleInfo/completeHandler()[C:\autobuild\3.3.0\frameworks\projects\framework\src\mx\modules\ModuleManager.as:717] I found a bug was registered but I am not clear about the fix. please help Thanks Hi, I am currently working on a project using flex 3 and weblogic server 10.3. I came across ur article on building flex modules, and thought that my project could use it. 
However i am not able to figure out, how i could deploy the individual .swf files onto my weblogic server. I have only one experience in building a flex application. and for that, i was using blazeds and java, and the single swf file was being wrapped into a html page and compiled into a .ear file before deploying onto my server. Hence upon coming across the concept of modules, i couldnt figure out how i could actually deploy theses modules onto my web server, and loading them in the shell application Pls help Thanks in advance Hi , I need some urgent help. We load a module from the parent application. When we deply the application on local server and run the module, the UI is perfect. BUT, when we deploy the application on our distant unix dev server, the module loads fine and fetches all date, BUT — the UI is distorted.By distorted I mean that if I scroll up /down/left/write the the page display breaks as if the page is not refreshed. The page looks scrambled. Then I noticed that when I hit the dev URL in Internet explorer, the module UI is distorted. But , when I hit the dev URL in firefox, the UI is perfect. Note : there are lots of repeaters and other such components used in the module. Please help!! Hey Porter, did you get any answers on how to proceed? I have a similar requirement (Hope you see this comment) can any one help us to create a login page using flex and LCDS(how to insert and retrieve data from mysql database) ? Hi pms_11, you can use HTTPService componens for flex frontend and some web service (PHP or jsp page for example) as web backend that would retrieve data from mysql database. Here the example how it can be done It is a nice sample. One of the question is how to drive this via a configuration file. Meaning if the config is changed swf is loaded otherwise not. That would be a super example for building configurable applications Hi, I have a problem using modules in FLEX builder 3. 
We have a AIR application project which creates an swf file and we have another project which creates a swc file which is what is used by the AIR application. When I create a module in the the project which is creating the swc file I see it being added to the project. I also can see it catalog.xml file part of the swc file. But when I try to load it from one of the controls in the swc file I get the following error. Error #2035: URL Not Found. URL: app:SuggestionModule.swf The question I have is first of all I do not see any SuggestionModule.swf file any where. I am assuming it is created on demand. If that is so then why is it not being found. It is also not clear whether the FLEX builder 3 as it encounters the module directive in a module file creates the swf file if it is true where does it get written. I can create the swf file using the mxmlc command but I do not want to do it as I need to understand why the IDE is not doing what should be a simple one step process. My question is can we create a new module using the file new (module) wizard. Then add code to it. Then invoke it anywhere using the loadmodule. Is there any path or root path we have to take care. Any help or suggestion on how one trouble shoot the path problem would be useful for the community. who have some problem with profiling module with text input inside..????? like me?? who have some problem with profiling module with text input inside..????? like me?? Hi Sujit, I am big fan of your blog. Nice useful articles and examples to support them. I have one problem with the modules example. My module loads fine without any issues. One issue which I am facing is the Error #1069: Property closeSuggestion not found on Modules and there is no default value, occurs when I click the submit button on the module component. Any idea what wrong I am doing? Do I need to write a closeSuggestion() function? Thanks in Advance Hi Sujit, We have a AIR desktop client which talks to a WAMP server. 
We want to convert it to Flex app that can be used from the browser. How do i approach changing it? Thanks anil thanks for sourcecode,It’s useful Hi Anil, You can create a web application and import the existing project files into the new one. You might have to change the root application WindowedApplication. If you have any code that uses AIR specific APIs, you need to change the same. Hope this helps.
https://sujitreddyg.wordpress.com/2008/02/05/splitting-flex-application-into-modules/
fma, fmaf, fmal

If the macros FP_FAST_FMAF, FP_FAST_FMA, or FP_FAST_FMAL are defined, the corresponding function fmaf, fma, or fmal evaluates faster (in addition to being more precise) than the expression x*y+z for float, double, and long double arguments, respectively. If defined, these macros evaluate to the integer 1.

Type-generic macro: if any argument has type long double, fmal is called. Otherwise, if any argument has integer type or has type double, fma is called. Otherwise, fmaf is called.

[edit] Parameters

[edit] Return value

If successful, returns the value of (x*y) + z as if calculated to infinite precision and rounded once to fit the result type (or, alternatively, calculated as a single ternary floating-point operation).

If the implementation supports IEEE floating-point arithmetic (IEC 60559):

- If x is zero and y is infinite, or if x is infinite and y is zero, and z is not a NaN, then NaN is returned and FE_INVALID is raised
- If x is zero and y is infinite, or if x is infinite and y is zero, and z is a NaN, then NaN is returned and FE_INVALID may be raised
- If x*y is an exact infinity and z is an infinity with the opposite sign, NaN is returned and FE_INVALID is raised
- If x or y are NaN, NaN is returned
- If z is NaN, and x*y is not 0*Inf or Inf*0, then NaN is returned (without FE_INVALID)

[edit] Notes

This operation is commonly implemented in hardware as a fused multiply-add CPU instruction. If supported by hardware, the appropriate FP_FAST_FMA* macros are expected to be defined, but many implementations make use of the CPU instruction even when the macros are not defined.

POSIX specifies that the situation where the value of x*y is invalid and z is a NaN is a domain error.

Due to its infinite intermediate precision, fma is a common building block of other correctly-rounded mathematical operations, such as sqrt or even division (where not provided by the CPU, e.g. Itanium).

As with all floating-point expressions, the expression (x*y) + z may be compiled as a fused multiply-add unless the #pragma STDC FP_CONTRACT is off.
Example

#include <stdio.h>
#include <math.h>
#include <float.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    // demo the difference between fma and built-in operators
    double in = 0.1;
    printf("0.1 double is %.23f (%a)\n", in, in);
    printf("0.1*10 is 1.0000000000000000555112 (0x8.0000000000002p-3),"
           " or 1.0 if rounded to double\n");

    double expr_result = 0.1 * 10 - 1;
    printf("0.1 * 10 - 1 = %g : 1 subtracted after "
           "intermediate rounding to 1.0\n", expr_result);

    double fma_result = fma(0.1, 10, -1);
    printf("fma(0.1, 10, -1) = %g (%a)\n", fma_result, fma_result);

    // fma use in double-double arithmetic
    printf("\nin double-double arithmetic, 0.1 * 10 is representable as ");
    double high = 0.1 * 10;
    double low = fma(0.1, 10, -high);
    printf("%g + %g\n\n", high, low);

    // error handling
    feclearexcept(FE_ALL_EXCEPT);
    printf("fma(+Inf, 10, -Inf) = %f\n", fma(INFINITY, 10, -INFINITY));
    if(fetestexcept(FE_INVALID)) puts("    FE_INVALID raised");
}

Possible output:

0.1 double is 0.10000000000000000555112 (0x1.999999999999ap-4)
0.1*10 is 1.0000000000000000555112 (0x8.0000000000002p-3), or 1.0 if rounded to double
0.1 * 10 - 1 = 0 : 1 subtracted after intermediate rounding to 1.0
fma(0.1, 10, -1) = 5.55112e-17 (0x1p-54)

in double-double arithmetic, 0.1 * 10 is representable as 1 + 5.55112e-17

fma(+Inf, 10, -Inf) = -nan
    FE_INVALID raised

References

- C11 standard (ISO/IEC 9899:2011):
  - 7.12.13.1 The fma functions (p: 258)
  - 7.25 Type-generic math <tgmath.h> (p: 373-375)
  - F.10.10.1 The fma functions (p: 530)
- C99 standard (ISO/IEC 9899:1999):
  - 7.12.13.1 The fma functions (p: 239)
  - 7.22 Type-generic math <tgmath.h> (p: 335-337)
  - F.9.10.1 The fma functions (p: 466)
http://en.cppreference.com/w/c/numeric/math/fma
Show Me the Code

Here's a Sunspot for Ruby ditty to play with. To use this code, you must first install and launch the development Solr server provided with Sunspot.

$ sudo gem install optiflag -v 0.6.5
$ sudo gem install solr-ruby
$ sudo gem install outoftime-sunspot --source=
$ sunspot-solr start

By default, the Sunspot Solr server listens to port 8983 and stores its index in /tmp. 8983 is usually used by production Solr servers and may conflict if you already use Solr. However, you can change the port with the -p portnumber option. You can also change the index's directory with -d /path/to/storage.

The sample application searches for books in inventory. In a nutshell, the application uses RAM to persist objects (for the purpose of demonstration) and uses Sunspot to make complex and very fast queries with Solr. The first listing is the application; the second is an adapter named Memory that serves as a bridge between Solr and the application. More specifically, it adapts a vanilla Ruby object to provide methods needed to map from the application to Solr and vice versa.

require 'memory'

class Book
  # A book includes instance variables for
  # the author, a title, a publisher, an edition, a 10- and 13-digit
  # ISBN number, a blurb, a publication date, and a price.
  attr_accessor :author, :blurb, :edition, :isbn10, :isbn13,
    :price, :published_at, :publisher, :title

  def id
    self.object_id
  end

  def initialize( attrs = {} )
    attrs.each_pair { |attribute, value| self.send "#{attribute}=", value }
  end
end

Sunspot::Adapters::InstanceAdapter.register( Memory::InstanceAdapter, Book )
Sunspot::Adapters::DataAccessor.register( Memory::DataAccessor, Book )

Sunspot.setup(Book) do
  text :author
  text :blurb
  integer :edition
  string :isbn10, :isbn13
  float :price
  time :published_at
  text :publisher
  string :sort_title do
    title.downcase.sub(/^(an?|the)\W+/, '') if title = self.title
  end
  text :title
end

Sunspot.index( king = Book.new( {
  :author => 'Stephen King',
  :blurb => 'Things get really weird out West',
  :edition => 1,
  :isbn10 => '1234567890',
  :isbn13 => 'abcdef0123456',
  :price => 12.99,
  :published_at => Time.now,
  :publisher => 'Random Number House',
  :title => 'The Dark Tower'
} ) )

Sunspot.index( reaper = Book.new( {
  :author => 'Josh Bazell',
  :blurb => 'A hitman becomes a doctor',
  :edition => 1,
  :isbn10 => '9876543210',
  :isbn13 => 'abcdef1111111',
  :price => 25.99,
  :published_at => Time.now,
  :publisher => 'Knopf',
  :title => 'Beat the Reaper'
} ) )

Sunspot.commit

Sunspot.search( Book ) { keywords 'King' }.results.each { |x| puts x.title }

search2 = Sunspot.search( Book ) do
  with( :price ).less_than( 30 )
end
search2.results.each { |s| puts s.title }

Sunspot.remove_all!( Book )

require 'rubygems'
require 'sunspot'

module Memory
  class InstanceAdapter < Sunspot::Adapters::InstanceAdapter
    def id
      @instance.object_id
    end
  end

  class DataAccessor < Sunspot::Adapters::DataAccessor
    def load( id )
      ObjectSpace._id2ref( id.to_i )
    end

    def load_all( ids )
      ids.map { |id| ObjectSpace._id2ref( id.to_i ) }
    end
  end
end

The first search returns The Dark Tower. The second search returns both books.

Some comments about the code: Again, memory is a (very) atypical persistent store, used in this demo for brevity.
Typically, your data is stored in a database; the ID stored in the search engine is the row ID of the record; and the adapter to pull the data is a fetch. Indeed, this is shown in the next section, which uses ActiveRecord as the persistent store.

Sunspot on Rails

Recently, Brown released Sunspot on Rails to provide seamless integration between Sunspot and ActiveRecord. All of the machinations shown earlier—persistence, mapping, and lookups—are performed automatically. You must call Sunspot.setup( ClassName ) to define field types for ClassName, but the rest is easy.

Sunspot on Rails requires an additional gem and a few lines of configuration.

$ sudo gem install outoftime-sunspot outoftime-sunspot_rails \
    --source=

Since the names of the Sunspot and Sunspot on Rails gems differ from the name of the library each provides, add the following two lines to your gem dependencies.

config.gem 'outoftime-sunspot', :lib => 'sunspot'
config.gem 'outoftime-sunspot_rails', :lib => 'sunspot/rails'

You must also create a config/sunspot.yml file.

common: &common
  solr:
    hostname: localhost
    port: 8983

production:
  <<: *common

development:
  <<: *common
  solr:
    port: 8982

test:
  <<: *common
  solr:
    port: 8981

With those amendments in place, you can start the Sunspot on Rails server with rake.

$ rake sunspot:solr:start

To make an ActiveRecord model searchable, simply use searchable.

class Book < ActiveRecord::Base
  searchable do
    text :author
    text :blurb
    integer :edition
    string :isbn10, :isbn13
    float :price
    time :published_at
    text :publisher
    string :sort_title do
      title.downcase.sub(/^(an?|the)\W+/, '') if title = self.title
    end
    text :title
  end
end

By default, a model is indexed whenever it's saved, and is removed from the index whenever it is destroyed. Options can alter these defaults. Once a model is made searchable, search is an analog to find.
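In isolation, the sort_title normalization used in these setup blocks behaves like this (plain Ruby, no Sunspot required; the method name sort_key is mine, and the titles are the article's own):

```ruby
# Strip a leading English article ("a", "an", "the") to build a sort key,
# mirroring the sort_title block passed to Sunspot above.
def sort_key(title)
  title.downcase.sub(/^(an?|the)\W+/, '') if title
end

puts sort_key('The Dark Tower')    # => dark tower
puts sort_key('Beat the Reaper')   # => beat the reaper (no leading article)
```

This is why a shelf sorted on sort_title files "The Dark Tower" under D rather than T.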
results = Book.search do
  keywords 'King'
end

In a scenario where you prefer not to load the data for all matching objects, Sunspot on Rails provides search_ids, which returns only the IDs of objects that match your criteria and not the entire objects.

Don't Just Find. Search.

Sunspot makes complex searches as easy as database queries. Installing and configuring Solr is an additional burden, but it's not onerous and the community of Solr developers is very cordial.

Mat Brown has also made it simple to index all your data. The class method reindex empties the existing index (if any) and reindexes all records. For example, to reindex all the books in the bookstore application, run…

Book.reindex

…from within your code, your Rails application console, or a deployment task.

If you search for "Rails Solr" on Google, you will also find the acts_as_solr plug-in. I use acts_as_solr now, but plan to switch to Sunspot as soon as possible.
http://www.linux-mag.com/id/7341/2/
NAME
    Term::Table::HashBase - Build hash based classes.

SYNOPSIS
    A class:

        package My::Class;
        use strict;
        use warnings;

        # Generate 3 accessors
        use Term::Table:

        Term::Table:';

DESCRIPTION
    inheritance is also supported.

THIS IS A BUNDLED COPY OF HASHBASE
    This is a bundled copy of Object::HashBase. This file was generated using
    the "/home/exodist/perl5/perlbrew/perls/main/bin/hashbase_inc.pl" script.

METHODS
  PROVIDED BY HASH BASE
    $it = $class->new(%PAIRS)
    $it = $class->new(\%PAIRS)
    $it = $class->new(\@ORDERED_VALUES)
        Create a new instance. HashBase will not export "new()" if there is
        already a "new()" method in your packages inheritance chain. If you
        do not want this method you can define your own, you just have to
        declare it before loading Term::Table::HashBase.

            package My::Package;

            # predeclare new() so that HashBase does not give us one.
            sub new;

            use Term::Table:.

HOOKS
    $self->init()
        This gives you the chance to set some default values to your fields.
        The only argument is $self with its indexes already set from the
        constructor. Note: Term::Table:.

ACCESSORS
  READ/WRITE
    To generate accessors you list them when using the module:

        use Term::Table::HashBase qw/foo/;

    This will generate the following subs in your namespace:

    foo()
        Getter, used to get the value of the "foo" field.

    set_foo()
        Setter, used to set the value of the "foo" field.

    FOO()
        Constant, returns.

  READ ONLY
        use Term::Table::HashBase qw/-foo/;

  DEPRECATED SETTER
        use Term::Table::HashBase qw/^foo/;

  NO SETTER
        use Term::Table::HashBase qw/<foo/;

    Only gives you a reader, no "set_foo" method is defined at all.

  NO READER
        use Term::Table::HashBase qw/>foo/;

    Only gives you a write ("set_foo"), no "foo" method is defined at all.

  CONSTANT ONLY
        use Term::Table::HashBase qw/+foo/;

    This does not create any methods for you, it just adds the "FOO"
    constant.

SUBCLASSING
    You can subclass an existing HashBase class.

        use base 'Another::HashBase::Class';
        use Term::Table::HashBase qw/foo bar baz/;

    The base class is added to @ISA for you, and all constants from base
    classes are added to subclasses automatically.

GETTING A LIST OF ATTRIBUTES FOR A CLASS
    Term::Table::HashBase provides a function for retrieving a list of
    attributes for a Term::Table::HashBase class.

    @list = Term::Table::HashBase::attr_list($class)
    @list = $class->Term::Table::HashBase::attr.

SOURCE
    The source code repository for HashBase can be found at.

MAINTAINERS

AUTHORS
    This program is free software; you can redistribute it and/or modify it
    under the same terms as Perl itself. See
https://manpages.debian.org/bullseye/libterm-table-perl/Term::Table::HashBase.3pm.en.html
Eclipse Community Forums

ClassNotFoundException (Jonathan Camilleri, 2012-06-06T12:34:22-00:00)

    package coff;

    public class Mammal {
        static {
            int a = 1;
            assert a == 1;
            System.out.println("mammal initialized");
        }
    }

    package coff2;

    import coff.Mammal;

    public class Dog extends Mammal {
        @SuppressWarnings("unused")
        public static void main(String[] args) {
            assert (1 != 0) : "this should never appear...";
            System.out.println("Dog extends Mammal");
        }
    }

    Caused by: java.lang.ClassNotFoundException: coff2.Dog

Re: ClassNotFoundException (2012-06-06T14:24:25-00:00)

    Does this have anything to do with Eclipse? This looks like a general
    problem running a java jar. This forum is for Eclipse JDT (compilation)
    problems/questions. For general java questions you need to go somewhere
    else.

    Rich

Re: ClassNotFoundException (Stephan Herrmann, 2012-06-06T14:34:52-00:00)

    Well, the attached rar contains a proper Eclipse / JDT project. If the
    exception occurs when running the application inside the IDE, then
    something's fishy in JDT/Debug. Otherwise, (if exception occurs outside
    Eclipse) I concur that the JDT forum is not a likely place to get help
    for this issue.

    Stephan

Re: ClassNotFoundException (Nitin Dahyabhai, 2012-06-06T15:38:46-00:00)

    I wouldn't say it's a "proper" project--you have two JREs on the Java
    Build Path, and unless you've actually set up a 1.7 JRE in the
    preferences, your build path will be considered broken. An erroneous
    build path will, by default, abort the build and leave you with no class
    files, hence the Error at runtime.
http://www.eclipse.org/forums/feed.php?mode=m&th=359304&basic=1
Sean Mullan wrote: > > Yes, that should be fixed. It is not the same problem as the xalan > workaround which copies all the namespaces to every element in the doc. > I think the problem is in XMLUtils.createElementInSignatureSpace. It > really should only set the namespace attribute if it is the Signature > element. > > Can you file a bug? > Ok, bug filed. Incidentally, within the XML Security project in Bugzilla, there does not currently exist any version option for entering new issues for any Java 1.4.x versions, the highest version listed is "Java 1.3". > Incidentally, this problem does not occur if you are using the JSR 105 > API to create signatures which has its own marshalling code. > I'll look at that. I doubt we can easily switch (in the short term) our OpenSAML 2 library code to use that new API (we started writing before it was available), but it is something we may investigate doing in the future. Thanks, Brent
http://mail-archives.apache.org/mod_mbox/santuario-dev/200706.mbox/%3C466604D0.4020003@georgetown.edu%3E
On the context menu for either the Object Pane (left) or Member Pane (top-right), you can sort by four different options: Alphabetically, Object Type, Object Access, or Group by Object Type.

Alphabetically is self-explanatory.

Object Type will have the following effect: notice how Classes appear first in the list, then Structures, then Enums.

Next is sort by Object Access. Notice in the example below how the first three classes of the foobar namespace are public, but the fourth is private.

And lastly is Group by Object Type. As shown in the Microsoft.VisualBasic.dll, all objects are grouped by what type of creature they are.

And of course you can rinse and repeat for the Members pane.
http://blogs.msdn.com/b/saraford/archive/2008/05/22/did-you-know-you-can-sort-objects-and-members-in-the-object-browser-221.aspx
These are chat archives for jdubray/sam

nap() to loop thru all sprites after each render to see if any sprites had no height set (since i was letting the browser do the heights previously, and only setting widths) -- if there was no height, that would mean it was a freshly added sprite and i needed to wait for its image to load and then read its height and set it on the sprite. so waiting for assets to load and then parsing/reading/whatever from them seems like a good "automatic action"

caculateSpriteHeight action in the state function, but also using jquery to wait for the sprite's image to load and in the callback kicking off another doneCalculatingHeight action. but then i realized probably i only needed the former action in there, and all that other jquery stuff could be moved out into the calculateSpriteHeight action itself. wasn't sure if that was a bad idea -- presenting to the model, but also kicking off another action after some async occurred

mousedown into appropriate actions -- so now in mousedown i figure out which sprite's grip i'm clicking on, inspect the DOM to find the sprite's image's current height, and then fire both a setSpriteHeight action with that height, as well as the beginSizingSprite action. a sort of JIT setting of the height i guess, heh

I would not say "centralized", it's more like "deliberate", the step during which you mutate state must be delineated. You can't arbitrarily/liberally assign property values. That is (IMHO) what's wrong with the way we write code.

@chuckrector why is it that the model has to know about the sprite until it's loaded? Can't you write an action where the proposal will include the new sprite, including its action?

addSprite( url ) {
  request(url, function(data) {
    let height = h(data);
    let width = w(data);
    present( { newSprite: { data, height, width } } ) ;
  });
}

I really don't see the need for nap() to be involved.

That's kind of the big value add of SAM, you need to inform the application state of what's happening (unless it logically requires that you do so)

model[property] = "newValue"; anywhere instead of only in the model.

addSprite sort of action

height for each sprite would make my current initial model data invalid

app.ts via imports and wire up various events with it and it's great. but if i wish to use jquery as part of my onClick events that are generated in the view, i must also include jquery in my index.html via script tag

any everywhere. probably also due to not designing a lot up front

webpack is appealing to learn

never thought I would see a sentence containing these words

V = f(S(M))

"on paper ng2 would be everything I dislike" then I understand, how perhaps "sometimes you just can't explain it" and you were able to make the best of it with ng2 and avoid the bad parts.

@jdubray Computables in MobX would be a good paradigm for the State function

That was the first thing I thought when looking at sam

class State {
  @observable _counter = COUNTER_MAX;
  @observable _aborted = false;
  @observable _started = false;
  @observable _launched = false;

  public actions: Actions

  constructor( a: Actions) { this.actions = a ; }

  getActions() { return actions ; }

  representation (model) {
    console.log(model) ;
    this._counter = model.counter;
    this._started = model.started;
    this._aborted = model.aborted;
    this._launched = model.launched;
  }

  // Derive the current state of the system
  @computed get ready() {
    return ((this._counter === COUNTER_MAX) &&
      !this._started && !this._launched && !this._aborted);
  }

  @computed get counting() {
    return ((this._counter <= COUNTER_MAX) && (this._counter >= 0) &&
      this._started && !this._launched && !this._aborted) ;
  }

  @computed get launched() {
    return ((this._counter == 0) &&
      this._started && this._launched && !this._aborted) ;
  }

  @computed get aborted() {
    return ((this._counter <= COUNTER_MAX) && (this._counter >= 0) &&
      this._started && !this._launched && this._aborted) ;
  }

  // Next action predicate, derives whether
  // the system is in a (control) state where
  // an action needs to be invoked
  nextAction() {
    if (this.counting) {
      if (this._counter > 0) {
        actions.decrement({counter: this._counter}) ;
      }
      if (this._counter === 0) {
        actions.launch({}) ;
      }
    }
  }

  render(model) {
    this.representation(model)
    this.nextAction() ;
  }
}

TOP_LEVEL_RANDOM_GIF_UPDATED and FIRST_RANDOM_GIF_UPDATED? Why can't RandomGif just be a single, reusable component and you can just put as many instances on the page as you want?

"In fractal architectures, the whole can be naively packaged as a component to be used in some larger application. In non-fractal architectures, the non-repeatable parts are said to be orchestrators over the parts that have hierarchical composition."

await, which is in a future spec of JS). when you pass a callback, that's usually when async stuff can occur.

state and nap function

state function imo

V = render(S(present(A(M))))

mySAMAction(event) {
  let proposal = my_functional_action(event) ;
  present(proposal)
}
https://gitter.im/jdubray/sam/archives/2016/11/07?at=5820e8c478ec59ab0545113b
How to convert hash160 to bitcoin address in python
https://www.edureka.co/community/51745/how-to-convert-hash160-to-bitcoin-address-in-python
CC-MAIN-2021-49
refinedweb
235
61.02
pulseio – Support for individual pulse based protocols

The pulseio module contains classes to provide access to basic pulse IO. Individual pulses are commonly used in infrared remotes and in DHT temperature sensors.

All classes change hardware state and should be deinitialized when they are no longer needed if the program continues after use. To do so, either call deinit() or use a context manager. See Lifetime and ContextManagers for more info.

class pulseio.PulseIn(pin: microcontroller.Pin, maxlen: int = 2, *, idle_state: bool = False)

    Measure a series of active and idle pulses. This is commonly used in
    infrared receivers and low cost temperature sensors (DHT). The pulsed
    signal consists of timed active and idle periods. Unlike PWM, there is
    no set duration for active and idle pairs.

    maxlen: int
        The maximum length of the PulseIn. When len() is equal to maxlen,
        it is unclear which pulses are active and which are idle.

    paused: bool
        True when pulse capture is paused as a result of pause() or an
        error during capture such as a signal that is too fast.

    __exit__(self)
        Automatically deinitializes the hardware when exiting a context.
        See Lifetime and ContextManagers for more info.

    resume(self, trigger_duration: int =.

    __len__(self)
        Returns the current pulse length. This allows you to:

            pulses = pulseio.PulseIn(pin)
            print(len(pulses))

class pulseio.PulseOut(carrier: pwmio.PWMOut)

    Pulse PWM "carrier" output on and off. This is commonly used in
    infrared remotes. The pulsed signal consists of timed on and off
    periods. Unlike PWM, there is no set duration for on and off pairs.

    Create a PulseOut object associated with the given PWMOut object.

    Send a short series of pulses:

        import array
        import pulseio
        import pwmio
        import board

        # 50% duty cycle at 38kHz.
        pwm = pwmio.PWMOut(board.D13, frequency=38000, duty_cycle=32768)
        pulse = pulseio.PulseOut(pwm)

        # on off on off on
        pulses = array.array('H', [65000, 1000, 65000, 65000, 1000])
        pulse.send(pulses)

        # Modify the array of pulses.
        pulses[0] = 200
        pulse.send(pulses)

    __exit__(self)
        Automatically deinitializes the hardware when exiting a context.
        See Lifetime and ContextManagers for more info.

    send(self, pulses: ReadableBuffer.
CC-MAIN-2020-50
refinedweb
349
52.56
A previous version of this article confused the process of "hashing" with the process of "encryption". It's worth noting that, while similar, hashing and encryption are two different processes.

Hashing involves a many-to-one transformation, where a given input is mapped to a (usually fixed-size, usually shorter) output, which is not unique to that particular input. In other words, collisions are likely when running a hashing algorithm over a huge range of input values (where different inputs map to the same output). Hashing is an irreversible process because of the nature of hashing algorithms. This SO answer gives a good overview of the differences between hashing and encryption and provides a nice example of why a hashing algorithm might be practically irreversible.

As a mathematical example, consider the modulo (aka. modulus aka. mod) function (expressed here as defined by Donald Knuth):

a % n = mod(a, n) = a - n * floor(a/n)

The simple interpretation of the mod function is that it's the remainder of integer division:

7 % 2 = 1
4 % 3 = 1
33 % 16 = 1

Note how all three examples above give the same output even though they have different inputs. The modulo function is non-invertible because we lose information when we apply it. Even given one of the inputs along with the output, there's no way to calculate the other input value. You just have to guess until you get it right.

Hashing algorithms take advantage of non-invertible functions like this (as well as bit shifts, etc.), and are often repeated many times, astronomically increasing the number of required guesses. The goal of hashing is to make it extremely expensive, computationally, to decode the original information. Oftentimes, it's easier to brute force a hashing algorithm (by trying many possible inputs, as quickly as possible) than to try to "reverse" the hashing algorithm to decode the hashed information.
A common method of deterring even these brute-force attacks is to add a second random piece of information as "salt". This prevents hackers from performing dictionary attacks, where common passwords are mapped to their hashed outputs -- if the hacker knows the hashing algorithm used and can gain access to the database where the hashes are stored, they can use their "dictionary" to map back to the original password and gain access to those accounts.

Salting the data means that not only do the hackers have to run a particular password through the hashing algorithm and verify that it matches the hashed output, but they have to re-run that process for every possible value of the salt (usually a string of tens to hundreds of bytes). A random 100-byte salt can take any of 256^100 (roughly 10^240) possible values, so precomputing a dictionary that covers every salt is out of the question; each account's password-salt combination has to be attacked individually.

Another method of deterring brute-force attacks is to simply require your algorithm to take some long amount of time to run. If your algorithm takes just 2 seconds per attempt, even a tiny 18-bit salt space (262,144 possible values) would take nearly 6 days to exhaust for just a single candidate password. This is partly why hashing algorithms are often iterated thousands of times. To produce a secure hash, make sure you re-introduce the salt each time you iterate, otherwise you're setting yourself up for more hash collisions than necessary (see that SO link, above).
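To see why a per-user salt defeats a shared dictionary, note that the same password hashed under two different salts produces unrelated digests. A minimal sketch using the JDK's MessageDigest with plain SHA-256 (for illustration only; the iterated PBKDF2 construction shown later in this article is what you'd actually use, and these salt strings are made up):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class SaltDemo {
    // Digest salt || password with SHA-256 and return it base-64 encoded.
    public static String digest(String password, String salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt.getBytes(StandardCharsets.UTF_8));
            byte[] hash = md.digest(password.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(hash);
        } catch (NoSuchAlgorithmException ex) {
            throw new IllegalStateException(ex); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        // Same password, two different salts: two unrelated hashes, so a
        // single precomputed dictionary cannot cover both accounts.
        System.out.println(digest("hunter2", "qL8Qn0xV"));
        System.out.println(digest("hunter2", "e3TgPa9d"));
    }
}
```

A single unsalted digest, by contrast, would be identical for every user who picked "hunter2".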
Encryption should be used when the data stored needs to be extracted at some point. For instance, messaging apps might encrypt data before transporting it, but that data needs to be decrypted back into plaintext once it's received, so that the recipient can read it. Note that this is not the case with passwords: if your password and the stored salt generate the hashcode stored in the database, then it's very likely that the password you input is the correct one. It's safer to hash the password, then, and just re-calculate the hash code than it is to store an encrypted password which could possibly be decrypted. But if encrypted information can be decrypted, how can it be secure? Well, there are two standard methods of encryption and decryption, symmetric key encryption and asymmetric key encryption. Symmetric key encryption means that the same cryptographic key is used to encrypt and decrypt the data. A common analogy used to explain encryption is sending locked packages through the post. If you put a padlock on a box and send it to me, and I have a copy of the key which opens the padlock, then I can easily unlock it and read your message inside. I can send you back a message and you can use the same key to open the box on the other side. While the box is in transport, no one can open the box and read the message unless they have the same symmetric key that we have. The idea, then, is to keep the key a secret and not share it with anyone other than the intended recipient of the message. Since there are two symmetric private keys, though, if anyone manages to steal (or create) a copy of either private key, all future messages between you and me will be compromised. In other words, we trust each other to keep our keys secure. 
Asymmetric key encryption is slightly different, in that you and I both have padlocks and keys, but the first package we send to each other in the post should be our unlocked padlocks: Then, you can write a message, and lock the message in a box with my padlock. From that point on, the only person who can unlock the box is me, with my private key. When I receive your message, I lock it in a box with your padlock and send it back to you. From that point on, the only person who can unlock that box is you, with your private key. This method is also known as public key encryption because, oftentimes, the "padlock" in this case is made widely available. Anyone who wants to encrypt a message intended for a particular recipient can do so. Public key encryption is recommended over plaintext passwords because it effectively makes the password much longer and more difficult to guess, reduces the chances of someone "looking over your shoulder" and stealing your password, and is just generally much easier than retyping a password over and over again. Obviously this barely scratches the surface of hashing and encryption, but I hope it gives you a better understanding of the differences between the two. Now, back to our regularly scheduled programming... Original article (updated for clarity):. We just need to check that the password the user enters recreates the hash that we've saved in a database.) Password-based encryption generates a cryptographic key using a user password as a starting point. We irreversibly convert the password to a fixed-length hash code using a one-way hash function, adding a second random string as "salt", to prevent hackers from performing dictionary attacks, where a list of common passwords are mapped to their hashed outputs -- if the hacker knows the hashing algorithm used and can gain access to the database where the hashcodes are stored, they can use their "dictionary" to map back to the original password and gain access to those accounts. 
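The padlock exchange above can be sketched with the JDK's built-in RSA support (the key size and message are arbitrary, and real systems wrap this in higher-level protocols rather than encrypting raw RSA blocks directly):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class AsymmetricDemo {
    // Encrypt with the public "padlock", decrypt with the private key.
    public static String roundTrip(String message) {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair pair = gen.generateKeyPair();

            Cipher cipher = Cipher.getInstance("RSA");
            cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());   // anyone can lock the box
            byte[] locked = cipher.doFinal(message.getBytes(StandardCharsets.UTF_8));

            cipher.init(Cipher.DECRYPT_MODE, pair.getPrivate());  // only the key holder can open it
            return new String(cipher.doFinal(locked), StandardCharsets.UTF_8);
        } catch (Exception ex) {
            throw new IllegalStateException(ex);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("meet at noon")); // prints the original message
    }
}
```

Anyone holding only the public key (and the ciphertext) is stuck guessing; only the private key unlocks the box.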
We can generate salt simply by using Java's SecureRandom class: import java.security.SecureRandom; import java.util.Base64; import java.util.Optional; private static final SecureRandom RAND = new SecureRandom(); public static Optional<String> generateSalt (final int length) { if (length < 1) { System.err.println("error in generateSalt: length must be > 0"); return Optional.empty(); } byte[] salt = new byte[length]; RAND.nextBytes(salt); return Optional.of(Base64.getEncoder().encodeToString(salt)); } (Note: you could also get a SecureRandom instance with SecureRandom.getInstanceStrong(), though this throws a NoSuchAlgorithmException and so needs to be wrapped in a try{} catch(){} block.) Next, we need the hashing code itself: import java.security.NoSuchAlgorithmException; import java.security.spec.InvalidKeySpecException; import java.util.Arrays; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.PBEKeySpec; private static final int ITERATIONS = 65536; private static final int KEY_LENGTH = 512; private static final String ALGORITHM = "PBKDF2WithHmacSHA512"; public static Optional<String> hashPassword (String password, String salt) { char[] chars = password.toCharArray(); byte[] bytes = salt.getBytes(); PBEKeySpec spec = new PBEKeySpec(chars, bytes, ITERATIONS, KEY_LENGTH); Arrays.fill(chars, Character.MIN_VALUE);(); } } ...there's a lot going on here, so let me explain step-by-step: public static Optional<String> hashPassword (String password, String salt) { char[] chars = password.toCharArray(); byte[] bytes = salt.getBytes(); First, we ultimately need the password as a char[], but we have the user pass it in as a String -- (how else would we get a password from the user?) -- so we must convert it to a char[] at the outset. The salt is also passed in as a String and must be converted to a byte[]. 
The assumption here is that the hashed password and salt will be written to a database as character strings, so we want to generate the salt outside this algorithm as a String and pass it in as a String, as well.

Keeping the user's password in a String is dangerous, because Java Strings are immutable -- once one's made, it can't be overwritten to hide the user's password. So it's best to gather the password, do what we need to do with it, and immediately toss the reference to the original password String so it can be garbage collected. (You can suggest that the JVM garbage collect a dead reference with System.gc(), but garbage collection occurs at unpredictable intervals and cannot be forced to occur.) Similarly, if we convert the password String to a char[], we should clear out the array when we're finished with it (more on that later).

  PBEKeySpec spec = new PBEKeySpec(chars, bytes, ITERATIONS, KEY_LENGTH);

Here, we're specifying how we're going to generate the hashed password. chars is the plaintext password as a char[], bytes is the salt String converted to a byte[], ITERATIONS is how many times we should perform the hashing algorithm, and KEY_LENGTH is the desired length of the resulting cryptographic key, in bits.

When we perform the hashing algorithm, we take the plaintext password and the salt and generate some pseudorandom output string. Iterated hashing algorithms will then repeat that process some number of times, using the output from the first hash as the input to the second hash. This is also known as key stretching and it greatly increases the time required to perform brute-force attacks (having a large salt string also makes these kinds of attacks more difficult). Note that while increasing the number of iterations increases the time required to hash the password and makes your database less vulnerable to brute-force attacks, it can also make you more vulnerable to DoS attacks because of the extra processing time required.
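The key-stretching idea can be sketched on its own. This toy loop is my own illustration, not the article's code (real implementations use PBKDF2, as the article does): each digest is fed back in as the next input, so the attacker must pay the full iteration cost per guess.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class Stretch {
    // Feed each digest back in as the next input: more iterations means
    // proportionally more work for a brute-forcing attacker (and for us).
    static byte[] stretch(byte[] input, int iterations) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] out = input;
            for (int i = 0; i < iterations; i++) {
                out = sha.digest(out);
            }
            return out;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] fast = stretch("hunter2".getBytes(), 1);
        byte[] slow = stretch("hunter2".getBytes(), 65536);
        System.out.println(Base64.getEncoder().encodeToString(fast));
        System.out.println(Base64.getEncoder().encodeToString(slow));
    }
}
```

The result is deterministic for a given (input, iteration count) pair, which is exactly what lets us verify passwords later.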
The length of the final hashed password is limited by the algorithm used. For example, "PBKDF2WithHmacSHA1" allows hashes of up to 160 bits, while "PBKDF2WithHmacSHA512" hashes can be as long as 512 bits. Specifying a KEY_LENGTH longer than the maximum key length of the algorithm you've chosen will not lengthen the key beyond its specified maximum, and can actually slow down the algorithm.

Finally, note that spec holds all of the information about the algorithm, the original plaintext password, and so on. We really want to delete all of that when we're finished.

  Arrays.fill(chars, Character.MIN_VALUE);

We're done with the chars array now, so we can clear it out. Here, we set all elements of the array to \000 (the null character).

  try {
    SecretKeyFactory fac = SecretKeyFactory.getInstance(ALGORITHM);
    byte[] securePassword = fac.generateSecret(spec).getEncoded();
    return Optional.of(Base64.getEncoder().encodeToString(securePassword));
  } catch (NoSuchAlgorithmException | InvalidKeySpecException ex) {
    System.err.println("Exception encountered in hashPassword()");
    return Optional.empty();
  } finally {
    spec.clearPassword();
  }

Our hashPassword() method ends on this try{} catch(){} block. In it, we first get the algorithm that we defined earlier ("PBKDF2WithHmacSHA512") and we use that algorithm to hash the plaintext password, according to the specifications laid out in spec. generateSecret() returns a SecretKey object, which is "an opaque representation of the cryptographic key", meaning that it contains only the hashed password and no other identifying information. We use getEncoded() to get the hashed password as a byte[] and save it as securePassword. If this all goes off without a hitch, we encode that byte[] in base-64 (so it's composed only of printable ASCII characters) and return it as a String. We do this so that the hashed password can be saved in a database as a character string without any encoding issues. If there are any Exceptions during the encryption process, we return an empty Optional. Otherwise, we finish the method by clearing the password from spec. Now, there are no references left to the original, plaintext password in this method.
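As a quick sanity check on that last point (this snippet is mine, not the article's): PBEKeySpec really does forget the password once clearPassword() is called, and asking for it again throws an IllegalStateException.

```java
import javax.crypto.spec.PBEKeySpec;

public class ClearDemo {
    public static void main(String[] args) {
        PBEKeySpec spec = new PBEKeySpec(
                "hunter2".toCharArray(), "salt".getBytes(), 65536, 512);
        spec.clearPassword();  // wipe the internal copy of the password
        try {
            spec.getPassword();  // documented to fail after clearPassword()
            System.out.println("still readable");
        } catch (IllegalStateException e) {
            System.out.println("password cleared");
        }
    }
}
```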
(Note that the finally block is executed whether or not there is an Exception, and before anything is returned from any preceding try or catch blocks, so it's safe to have it at the end like this -- the password will be cleared from spec.)

The last thing to do is write a small method which determines whether or not a given plaintext password generates the hashed password, when the same salt is used. In other words, we want a function that tells us if the entered password is correct or not:

  public static boolean verifyPassword (String password, String key, String salt) {
    Optional<String> optEncrypted = hashPassword(password, salt);
    if (!optEncrypted.isPresent()) return false;
    return optEncrypted.get().equals(key);
  }

The above uses the original salt string and the plaintext password in question to generate a hashed password, which is compared against the previously-generated hash key. This method returns true only if the password is correct and there were no errors re-hashing the plaintext password.

Okay, now that we have all of this, let's test it! (Note: I have these methods defined in a utility class called PasswordUtils, in a package called watson.)

  jshell> import watson.*

  jshell> String salt = PasswordUtils.generateSalt(512).get()
  salt ==> "DARMFcJcJDeNMmNMLkZN4rSnHV2OQPDd27yi5fYQ77r2vKTa ... Wt9QZog0wtkx8DQYEAOOwQVs="

  jshell> String password = "Of Salesmen!"
  password ==> "Of Salesmen!"

  jshell> String key = PasswordUtils.hashPassword(password, salt).get()
  key ==> "djaaKTM/+X14XZ6rxjN68l3Zx4+5WGkJo3nAs7KzjISiT6aa ... sN5DcmOeMfhqMGCNxq6TIhg=="

  jshell> PasswordUtils.verifyPassword("Of Salesmen!", key, salt)
  $5 ==> true

  jshell> PasswordUtils.verifyPassword("By-Tor! And the Snow Dog!", key, salt)
  $6 ==> false

Works like a charm! Go forth and hash!

Posted by: Andrew (he/him)
Discussion

Just in case someone ever asks me why I refuse to use Java ;) Very nice article though!

You can use BCrypt in Java as well: Hashing passwords in Java with BCrypt.

While this is a well written guide, it should be pointed out that hashing != encryption. Hashing is 1-way, encryption is 2-way. In other words, you can't decrypt a hash, you can only check that rehashing the same value gives the same results.

Thanks for the heads-up! I'm working on an amended version of the article that discusses this issue. I'll post it tonight or tomorrow.

Updated. Let me know what you think!

Looking good!

I've seen the same implementation in PHP done in 3 lines

Yes, most of us agree that dynamically typed languages are easier/faster to code in and generally involve fewer lines of code. But your comment isn't very encouraging, and doesn't add value to this post. Maybe I'm misreading, but it sounds fairly hostile. Do reconsider next time.

If anyone's interested, here are different implementations of this general procedure in languages like PHP, Ruby, JavaScript, and so on. The PHP implementation is indeed just 3 lines:

Yes, I remember because I had to port a similar algorithm in Java and it was like 200 lines of code vs this one! Could you post it?

A couple comments: Aside from making sure you're not retaining references to it forever, worrying about trying to overwrite the String containing the user's password is basically futile. There are likely lots of copies made of that string along the way. If someone has access to your program's memory, it's not protected anyway. At this point in time, please use argon2 or scrypt as your password hashing algorithm. They force much larger use of memory, which makes brute force attack schemes less feasible on GPUs and more expensive on ASICs and FPGAs.
Using a more expensive password hashing scheme should never be a vector for a DoS attack. The correct solution is to implement exponential backoff on repeated failed login attempts: the first failure lets you try again in 100ms, the second failure in 200ms, the third in 400ms, etc. The exception is in environments that specify otherwise, such as health care in the USA, where HIPAA specifies three tries then lockout.

Fair points. Thanks for taking the time to read and comment. These are definitely things I'll have to change if I implement this commercially. (This example was for a term project for a class I took.)

I am trying to check the hashed password with the .equals() method, but it reports the stored password and user-entered password as not matching even though both are the same. Please suggest a code fix. Saving the hashed password works fine while registering.

  // Java
  public boolean userAuthentication(Userdetails userdetails) {
    Optional salt = passwordUtils.generateSalt(CableTVConstants.SALT_LENGTH);
    Optional userinfo = userDetailService.findById(userdetails.getUsername());
    if (userinfo.isPresent()) {
      return passwordUtils.verifypassword(userdetails.getPassword(), userinfo.get().getPassword(), salt.get());
    }
    logger.log(Level.ALL, "Invalid user credentials");
    return false;
  }

  public boolean verifypassword(String password, String key, String salt) {
    Optional password_check = generateHashPassword(password, salt);
    if (!password_check.isPresent()) {
      return false;
    }
    return password_check.get().equals(key);
  }

It's a great idea to toss the password string to the garbage collector ASAP; I've seen a lot of implementations which don't consider this fact.

Yeah, unfortunately, there's no way to force garbage collection in Java. You can only suggest it by calling System.gc().

Correct. Good article btw. I would probably use the Kotlin/Java implementation of the NaCl library, but in this case the ones you used are strong enough.
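The exponential-backoff suggestion from the comments above can be sketched like this. It's my own illustration: the 100 ms base and the doubling come from the comment, while the one-minute cap is an assumption I've added so the delay doesn't grow without bound.

```java
public class Backoff {
    static final long BASE_MS = 100;
    static final long CAP_MS = 60_000;

    // Delay before the next attempt after `failures` consecutive failures:
    // 100ms, 200ms, 400ms, ... capped at one minute.
    static long delayMillis(int failures) {
        if (failures <= 0) return 0;
        long delay = BASE_MS << Math.min(failures - 1, 20);  // avoid overflow
        return Math.min(delay, CAP_MS);
    }

    public static void main(String[] args) {
        for (int f = 1; f <= 5; f++) {
            System.out.println(f + " failure(s) -> " + delayMillis(f) + " ms");
        }
    }
}
```

The counter would be tracked per account (and reset on a successful login), so an attacker hammering one account quickly hits the cap while legitimate users are barely affected.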
I have a requirement to take a password from the user and then, when he needs to see it, decrypt it and show it on screen. Can this method be used for decryption?
https://dev.to/awwsmm/how-to-encrypt-a-password-in-java-42dh
Expert: Hi, I will try and help.

Customer: Hello again.

Expert: Hi. To what extent has the tenant moved out? Have you had a look through the window, and is the place completely abandoned?

Customer: Yes, we looked through the windows last night and it is empty. They have moved to another property in the same town, according to a relative of theirs.

Expert: OK, thanks. In these circumstances I always advise caution. Technically they are allowed back into that property until the date the notice expires (and in fact until you get a court order); however, where a tenant has voluntarily given up possession, the landlord is allowed to re-enter the property to secure it. The reason I say exercise caution is in case the tenant has not "fully" vacated the property and later argues that, because they left their toaster there, for example, they were still in occupation. What I suggest you do is send or hand-deliver them a letter saying that you believe they have vacated and you are concerned over the security of your property, and that if you have not heard from them within, say, 48 hours, you will break in and change the locks to ensure that your property is secure.

Customer: I have sent them a text reading: "We know you have left and moved out. You need to give us notice that you have left and return the keys for us to take it back. If this is not done by the 1st of December we will seek a possession order from the court and you will be liable for December's and November's rent. We can come through today or tomorrow; if we do not hear from you today, legal proceedings will start tomorrow. Then it is out of our hands, which I'm sure neither of us wants." We have had no response from them, and they have only paid one week's rent this month, which was on the 10/10.
Expert: I am assuming that it is more important to get the property back and rented out again, as you are unlikely to recover any arrears from the errant tenants?

Customer: We are not renting again; we are selling. But I do not want to enter the property if we are going to break the law. They have definitely left, but have left a key in the front door to stop us gaining access, which would mean us breaking in.

Expert: Is there a back door that you or the tenants can enter?

Customer: We only had a key for the front door, but yes, there is a back door which can be used. We also do not have a forwarding address for them.

Expert: There are two lawful ways in which a landlord can get a property back: either he serves notice and gets a court order, or the tenant hands the property back to him. If a tenant leaves, and there is no intention for him to remain or come back (and all of the evidence here points to that), then the landlord can lawfully take back possession. If you have to break in to do it because they haven't arranged to give you the keys back, then that is what you will have to do. However, this is why I was suggesting the written notice through the door (I don't like text messages, as you can't easily evidence them in court). This is a belt-and-braces approach. If the worst happens, and the tenant returns and says you have unlawfully evicted them, you can legitimately say to the court that you had texted them and written to them, the property was completely empty, and as far as you were concerned the property had been abandoned. You were worried about security, and after the expiry of the notice you had no choice but to break in and change the locks.
Expert: I know it sounds odd, as you suspect they don't live there, but what you are doing is laying the groundwork to say that, if they were in occupation of (or intending to return to) the property, you had done all you could to protect their interest. Let someone else deliver it, so that they can give evidence that they had done so, and I would suggest leaving it longer than tomorrow to break in, so that you have given them a good 48 to 72 hours' notice of your intention. It is all unlikely to be necessary if they have moved out and never intend to return; I am just giving you the best advice in case of the slim chance that they do actually return (in my experience they very rarely do). Not sure if it was yourself who spoke to me last time, but we can have a chat over the phone for an additional fee if you wish to talk over any other concerns.

Customer: OK, that is great, thank you for your advice again. Hopefully this will be the last time. Just one last thing: is there anything specific I need to put in the notice?

Expert: As I said above, something along the lines that you are led to believe that they have vacated the property, and that from the lack of furniture and lack of reply to text messages you believe they do not intend to return. If this is not the case, ask them to contact you immediately on receipt of the note. If not, then, as you are concerned about the security of the property, you intend to enter the property on the ....... date at such-and-such a time, to ensure that the property is secure and free from trespassers. Is that OK?

Customer: Fantastic, thank you.

Expert: Also, just FYI, in my day job my firm does conveyancing, if you require my services on that when you come to sell.

Customer: Thanks very much. Bye.

Expert: Excellent. If you click on the smiley face I can be paid for my time here. All the best.
Customer: Hi, your advice was great, thank you. We received the keys back for the house on Tuesday, after pointing out they would still be liable for December's rent and that we would have to take them to court in order to get the house back. So thank you very much for your advice: it WORKED!
http://www.justanswer.co.uk/law/8sobl-hi-spoke-ago-issuing-section-21.html
Steven Pemberton, CWI/W3C, Amsterdam

These slides are in XHTML. They use the CSS media 'projection' to allow them to be displayed full-screen.

About the speaker:
- Senior researcher at CWI, the Dutch National Research Centre for Mathematics and Computer Science
- Involved in the Web from the beginning: organised two workshops at the first Web Conference in 1994
- Chair of the HTML and Forms working groups at W3C
- Co-author of CSS, HTML4, XHTML1, XForms, XML Events, XHTML2, etc.

By layering semantics on top of XHTML in this way, a lot of special-purpose formats are rendered unnecessary. This talk discusses the XHTML2 approach to metadata.

In the NITF Tutorial, it states: "Web authors use HTML to describe the display of their pages. NITF, on the other hand, is designed to describe the substance of news articles."

This is actually not true: HTML was designed as a structure-defining language. The browser manufacturers, in classic Marking Behaviour, not understanding the structure-defining design of HTML, went and added presentation features.

XHTML2 is the next iteration in the HTML family. XHTML1 addressed the problems of turning HTML into an XML application. XHTML2 addresses the remaining identified problems in HTML4/XHTML1:
- More structure, less presentation: use stylesheets for defining presentation.
- More usability: within the constraints of XML, try to make the language easy to write, and make the resulting documents easy to use.
- More accessibility: 'designing for our future selves' – the design should be as inclusive as possible.
- Better internationalization.
- More semantics: integrate XHTML into the Semantic Web.

Goals:
- Keep old communities happy
- Keep new communities happy
- Integration with RDF/Semantic Web
- Readable and writable by the HTML community
- Flexible, extensible

News distribution is all about content and metadata. NewsML, for instance, is essentially a big metadata wrapper round XHTML. The question is: where should the metadata go, and how should it be expressed?
What we have done is craftily mutated <meta> and <link> so that they look more or less the same to the HTML author, but now have a clear relationship to RDF. Then we generalised. This was originally proposed in a white paper, RDF/A (warning: details have changed since this was published), and after much work in a joint Semantic Web/HTML WG task force, was adopted into XHTML2 (that work is still not quite finished, since a detail (bnodes) still has to be finalised).

Extend the meta element:
- the meta element's name attribute is now called property, and can hold a namespaced value (a QName)
- a new about attribute, that defaults to the current document

Example: <meta property="dc:creator">Steven Pemberton</meta>

This is also still allowed: <meta property="dc:creator" content="Steven Pemberton"/>

Extend the link element slightly:
- the rel and rev attributes hold namespaced values
- an about attribute

Example: <link rel="dc:rights" href=""/>

Add a role attribute, applicable to any element, that specifies a semantic role for that element. Example:

<p role="nitf:byline">By Joseph P. Reporter</p>

Having done that, we then allow all the attributes of <link> and <meta> on any element. This was already allowed:

This work is licensed under the <a rel="dc:rights" href="">Creative Commons Attribution License</a>.

but you can also say things like this:

<body>
  <h property="title">My Life and Times</h>
  ...

which makes the top-level heading and the title of the document the same thing, so they never get out of step.

The about attribute allows you to describe other documents, but also parts of the current document. Example: <meta about="#p123" ...

One usage of many is to allow richer metadata than the title attribute allows. Now we can just say that <p id="p123" title="whatever"> is equivalent to:

<p id="p123">
<meta about="#p123" property="title">whatever</meta>

<meta property="newsml:Identification">
  <meta property="newsML:ProviderId">Reuters.com</meta>
  <meta property="newsML:DateId">20050524</meta>
  ...
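Putting those extensions together, a hypothetical fragment of my own construction (it reuses the dc: and nitf: prefixes and the empty href from the slides' own examples; each attribute contributes one RDF statement about #story1):

```xml
<div about="#story1">
  <h property="dc:title">Mayor Opens New Bridge</h>
  <p role="nitf:byline" property="dc:creator">By Joseph P. Reporter</p>
  <link rel="dc:rights" href=""/>
</div>
```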
<p><span content="2005-05-23">Yesterday</span>,
<span rel="references" href="..."
      property="foaf:fullName" content="Tony Blair"
>the prime minister</span> travelled to ...
</p>

Because of the layered semantics, some formats are now strictly speaking unnecessary. For instance, the RSS format is used to describe something else. You have to dual author, or ensure that both forms are mutually up-to-date. However, RSS is just a simple hypertext language. You could get the same effect by just marking up the very document you are describing:

<h role="rss:title">...
<p role="rss:description">...

Finally, I should say something about media, in particular images. In XHTML2, the src attribute (and its related attributes) may be applied to any element, not just <img>, with the implication that they should be considered equivalent:

<p src="map.png">Turn left out of the station, walk straight on to the High Street, and turn right</p>

<img type="image/jpeg" src="gates.jpg">
Bill Gates makes speech.
</img>

We can now say that <meta> and <link> define RDF triples: the URL for the predicate is obtained by concatenating the namespace URL from the prefix to the other part of the value. A parallel development, GRDDL, can be used to extract the RDF triples.

You can layer new semantics on top of XHTML2 without having to define a new document type.

XHTML2 is going to last call Really Soon. More details: (and add /Group to the end if you are a member company).

This talk:
http://www.w3.org/2005/Talks/05-steven-Metadata-in-XHTML2/
I continue my efforts to convert the ICE Code Editor from JavaScript to Dart. The two big unknowns before I started this were calling JavaScript libraries (e.g. ACE) from Dart and reading gzip data. It turns out that working with JavaScript in Dart is super easy, thanks to js-interop. Working with gzip compressed data in Dart is also easy. But I have trouble reading the data gzip'd with js-deflate.

Jos Hirth pointed out that the Dart version was most likely doing what the gzip command-line version was doing: adding a standard gzip header and footer to the body of the deflated data. If that is the case, then I may have a decent migration strategy: add a few bytes before and after the old data and I ought to be good to go.

To test this theory, I start in JavaScript. I have the code that I want to deflate stored in code:

  "</script>\n" +
  "<script src=\"\"></script>\n" +
  "<script>\n" +
  "  // Your code goes here...\n" +
  "</script>";

Next I use js-deflate to deflate this code into str_d:

  str_d = RawDeflate.deflate(code)

This deflated string, str_d, should serve as the body of the gzip data. Now I need the header and the footer. Per this onicos document, I should be able to make the header with:

  header = [0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03].
    map(function(b) {return String.fromCharCode(b)}).
    join("")

Those are ten bytes that comprise the header, mapped into a string just as the deflated bytes were mapped into str_d. The first two bytes are always those values, per the documentation. The next, 0x08, signifies that the body contains deflate data. The remaining are supposed to hold Unix timestamp data, but I guess that this is not necessary. There are also one or two bytes that are supposed to hold optional data, but again, I leave them empty. The last byte could probably also be left empty, but I set it to 0x03 to signify Unix data.

As for the footer, it is supposed to hold 4 bytes of crc32 and 4 bytes describing the uncompressed size.
For the time being, I leave them completely empty:

  footer = [0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00].
    map(function(b) {return String.fromCharCode(b)}).
    join("")

Hopefully this will result in a warning, but not an error. That is, hopefully, I can still gunzip this data even if a warning is given. With that, I concatenate header, body, and footer into a single string and then convert from bytes to base64:

  btoa(header + str_d + footer)

That looks promising: much closer to the Dart output from the other day, which was: H4sIAAAAAAAAA7JJAAAP//

To test the JavaScript result, I run it through the Linux base64 and gzip utilities:

  ➜ ice-code-editor git:(master) ✗
  </script>
  <script src=""></script>
  <script>
    // Your code goes here...
  </script>

  gzip: stdin: invalid compressed data--crc error
  gzip: stdin: invalid compressed data--length error

Success! As feared, I see crc32 and length errors, but nonetheless I am finally able to gunzip the JavaScript data.
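The same experiment can be cross-checked entirely in the Java standard library (this is my own sketch, not the post's code): Deflater with nowrap=true produces the same kind of raw deflate stream as js-deflate, and prepending the 10-byte header yields a valid gzip stream. One difference from the command-line test above: GZIPInputStream treats a bad trailer as a fatal error rather than a warning, so this sketch fills in the real CRC-32 and length fields instead of leaving them empty.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.Deflater;
import java.util.zip.GZIPInputStream;

public class GzipByHand {
    public static byte[] gzipFromRawDeflate(byte[] original) throws Exception {
        // Raw deflate, no zlib wrapper -- the same shape as js-deflate's output.
        Deflater def = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        def.setInput(original);
        def.finish();
        byte[] buf = new byte[4096];
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        while (!def.finished()) {
            body.write(buf, 0, def.deflate(buf));
        }

        ByteArrayOutputStream gz = new ByteArrayOutputStream();
        // 10-byte header: magic, CM=8 (deflate), no flags, no mtime, OS=3 (Unix)
        gz.write(new byte[] {0x1f, (byte) 0x8b, 0x08, 0, 0, 0, 0, 0, 0, 0x03});
        body.writeTo(gz);

        // Trailer: CRC-32 then uncompressed length, both little-endian.
        CRC32 crc = new CRC32();
        crc.update(original);
        writeIntLE(gz, (int) crc.getValue());
        writeIntLE(gz, original.length);
        return gz.toByteArray();
    }

    private static void writeIntLE(ByteArrayOutputStream out, int v) {
        out.write(v & 0xff);
        out.write((v >> 8) & 0xff);
        out.write((v >> 16) & 0xff);
        out.write((v >> 24) & 0xff);
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "<script>// Your code goes here...</script>"
                .getBytes(StandardCharsets.UTF_8);
        byte[] gzipped = gzipFromRawDeflate(original);
        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped));
        System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
    }
}
```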
So I take this base64/deflated data, add it to a stream, pass the stream through an instance of ZLibInflater, and then finally fold and print the result:

  import 'dart:async';
  import 'dart:io';
  import 'dart:crypto';

  main() {
    var data = ==";

    var controller = new StreamController();
    controller.stream
      .transform(new ZLibInflater())
      .fold([], (buffer, data) {
        buffer.addAll(data);
        return buffer;
      })
      .then((inflated) {
        print(new String.fromCharCodes(inflated));
      });

    controller.add(CryptoUtils.base64StringToBytes(data));
    controller.close();
  }

This fails when I run it because the ZLibInflater expects a header and a footer:

  ➜ ice-code-editor git:(master) ✗ dart test.dart
  Uncaught Error: InternalError: 'Filter error, bad data'
  Unhandled exception: InternalError: 'Filter error, bad data'

So, add the 10-byte header to the stream before adding the body:

  var controller = new StreamController();
  controller.stream
    .transform(new ZLibInflater())
    .fold([], (buffer, data) {
      buffer.addAll(data);
      return buffer;
    })
    .then((inflated) {
      print(new String.fromCharCodes(inflated));
    });

  controller.add([0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03]);
  controller.add(CryptoUtils.base64StringToBytes(data));
  controller.close();
  }

Which results in:

  ➜ ice-code-editor git:(master) ✗ dart test.dart
  <body></body>
  <script src=""></script>
  <script src=""></script>
  <script>
    // Your code goes here...
  </script>

Huzzah! I finally have it. Given js-deflated & base64 encoded data that is stored in the ICE Editor's localStorage, I can read it back in Dart. Interestingly, I do not even need the footer. In fact, if I add the footer to the stream, I again get bad filter data -- no doubt due to the bogus crc32 and length information that I feed it. No matter, the footer is not necessary to get what I need.
The former is available to browsers, which I where I need it if I am to read (and ultimately write) localStorage. The latter is only available server-side which is of no use to me. Thankfully, I can go on merrily using Dart's js-interop to use js-deflate for the time being. And hopefully the time being won't be too long. Day #735
https://japhr.blogspot.com/2013/04/the-strange-tale-of-dart-javascript-and.html
This is not specific to HoloLens, but I am approaching the problem from that perspective as that is of most interest to me right now. All of the code can be found in a Github repo here. The code and concepts will apply across the board and could be useful in any situation that involves sourcing data around real buildings and representing them in 3D – more specifically in Unity. I imagine this could help with multiple scenarios:

- Creating a proof-of-concept which enables real-time data to be overlaid onto or incorporated into 3D map data (this is the scenario that prompted me to investigate)
- Providing a starting point for a scene set in a real location
- Facilitating a more dynamic fly-through which could update in real-time as you navigate around streets or the globe

This won't provide an expert's view on mapping data, as I am not an expert and this seems like a complex area – my previous experience here has been placing some pins on a map for a 2D mobile app! What I can help with, though, is sharing what I have learned, and hopefully helping you with a starting point if, like a past me, you want to get up and running. I am also not trying to replace or undermine any of the commercial offerings here; instead I used this exploration to aid my own understanding and would reach for those in any commercial setting. By the way, if you make one of these or can recommend a good one for use in HoloLens apps, please send a note in the comments and I will be happy to review it in another post. I have tried HoloMaps (which is excellent), but I couldn't find a public API to the 3D Bing data it uses to try out.

From my initial research I quickly found that there is a JSON internet standard (GeoJSON) used to share geometry data which seemed to be fairly easy to understand and use. I decided to run with that and first build a component in Unity to generate a 3D mesh from GeoJSON; the thinking was that I would later be able to find an API or service to retrieve GeoJSON data and plug that in.
Since we'll be using an open standard, the hope is that the data source can be switched out for an alternative depending on app requirements.

Rendering GeoJSON in Unity

I started out by getting a GeoJSON sample. It is possible to use the Open Street Map site to get data back in the .osm format by specifying a bounding box formed from latitude and longitude coordinates. I wanted GeoJSON, though, and after some further digging I found that you can access OpenStreetMap via the Overpass API, which has a tool to facilitate this and which uses its own query language for requests. I used the tool to get myself some test data which I used to work out how to render the 3D buildings.

Unity Custom Editor

My initial goal was to create a tool that I could use in the Unity editor to generate the building geometry, as opposed to a more dynamic control to let the end user explore a map, but I may look into that next. Unity supports custom editors, and the approach I took was to create a MonoBehaviour script to be attached to a game object, with an associated custom editor script. These two scripts work together to extend the game object and provide an editor user interface to control how the game object gets extended. First the MonoBehaviour-derived script, then the editor itself (derived from the Editor base class). It's important to keep in mind that the Editor-derived script is designed to only be run in the Unity editor, but the MonoBehaviour is designed to be run in-game and is associated with a Game Object (we don't want any dependency on the UnityEditor namespace there). I think that this approach makes sense, as the plan is to create something that runs in-game ultimately. Given that the input at this stage is a JSON file, we need to find a way to get this data into memory, a job I usually reserve for JSON.NET, but there seem to be a few challenges getting this to work with Unity.
Instead, I searched around and found fullserializer and decided to give that a try. On the whole that decision worked out very well, as this seems like a robust and flexible JSON serializer – I did run into one issue and needed to make some changes to the source code, but I was bought in enough to warrant the extra effort.

One inconvenience with using a custom editor is that Unity Coroutines don't run in this environment, as they need an update loop to keep running. It is straightforward to make use of the EditorApplication.update event in order to provide that update loop, but it does need the code to be written. Here's an example of the type of code needed for this. In order to keep a clear separation between the MonoBehaviour and the custom editor I used interfaces:

- IProgress – allowed calls to update a progress dialog
- IUpdateHandler – facilitated hooking a callback to run coroutines from the editor
- IDialog – allowed calls to show a dialog box

To use it, you can add an empty GameObject into your scene and then add the ThreeDMapScript as a new component to that GameObject. The custom editor for this component will provide some inputs to allow you to define a bounding box in terms of latitude and longitude. Also, you can specify the height of the levels used for the buildings; this could be sourced from other data sets too, and so could be a more accurate representation of the building heights. Once set, the Generate Map button will cause the script to call the REST API to retrieve the GeoJSON and the satellite image, generate the meshes, and apply the required material. Each building is currently represented by a separate mesh, as can be seen in the scene hierarchy window, and is named from data in the GeoJSON. There is a whole load more metadata around the buildings in the data which could be surfaced in an app.

Mesh Creation

Once we have the GeoJSON in main memory, we need to take the geometry data and convert it to a polygonal mesh.
The data has a list of ‘features’ in which can be found the buildings, each with its own geometry defined. A quick scan of the data reveals different types of geometry, which is specified as a collection of coordinates given in lat/long. I concentrated my efforts on the ‘polygon’ geometry type and used a polygon triangulator from here to convert the data to a mesh. Running this resulted in 2D polygons which could be extruded to the height of the associated building to give the final form of each building. The algorithm used here doesn’t support polygons with holes – this could be a future improvement.

Here’s some pseudo-code for the creation of the 3D buildings:

FOREACH Building
    FOREACH Geometry
        Convert Lat/Long to metres
        Convert from X/Y plane to X/Z plane
        Move centre of polygon to the origin
        Triangulate & extrude
        Calculate UV coordinates
        Translate back out to original location
        Apply material
    END
END
Create a plane representing the ground tile

Notice that some work is done here to ‘create’ the mesh centred on the origin and then use its transform to translate it back into position. Also notice that there are steps to generate UV coordinates and apply a material. This is to enable a satellite image to be textured onto the buildings (more on that later). I found some code here which I used to enable the conversion from Lat/Long coordinates to metres.

GeoJSON API

I had made an earlier assumption that GeoJSON data would be easy to get via an API, but at this point I’m not so sure as I couldn’t find a free API which provided it. As a result I decided to roll my own API as well. The API I created makes a call to the Overpass API to retrieve data in OSM form and then I used an open source project, OSMToGeoJSON.NET, to convert the OSM to GeoJSON and return it. I created the API using ASP.NET Core and just ran it locally whilst developing out the rest of the project.
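The Lat/Long-to-metres conversion mentioned above (the post links to the actual code used) is typically a spherical Web Mercator projection. As an illustrative sketch only – shown in Python for brevity rather than the project’s C#, and not necessarily identical to the linked code – it looks like this, together with the re-centring step from the pseudo-code:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius, used by Web Mercator

def lat_lon_to_metres(lat_deg, lon_deg):
    """Project a lat/long pair to planar metres (spherical Web Mercator)."""
    x = EARTH_RADIUS_M * math.radians(lon_deg)
    y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
    return x, y

def recentre(points_m):
    """Shift a projected footprint so its centroid sits at the origin,
    returning the shifted points and the centre used (so the mesh can be
    translated back out afterwards, matching the pseudo-code above)."""
    cx = sum(p[0] for p in points_m) / len(points_m)
    cy = sum(p[1] for p in points_m) / len(points_m)
    return [(x - cx, y - cy) for x, y in points_m], (cx, cy)
```

Working in metres near the origin keeps the vertex coordinates small, which avoids floating-point precision problems in the mesh that raw Mercator coordinates (tens of millions of metres from the origin) would cause.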
Here’s a snippet of code from the API which retrieves the GeoJSON from Overpass:

Texture

I decided to explore using some satellite imagery as a texture for the buildings and ground tile plane. I used the Bing Maps static map API to retrieve an aerial satellite image using the same lat/long bounding box as was used in the request for the geometry data. I proxied these calls via my API also, and it might make sense to combine them into one call for a bounding box as they naturally need to be called together. The UV texture coordinate calculations turned out to be a little bit more complicated as, in order to work out which sub-rectangle of the returned image corresponded to the bounding box, it was necessary to make another API call to get the associated metadata and then use that in some simple calculations to work out the offsets correctly. Also, I gave no consideration to the texturing of the vertical walls of the buildings and as a result this doesn’t look too good; I wondered if using something like tri-planar texture mapping would help, but ultimately the image data for the sides of the buildings is missing from the image.

Optimisation

All of the above is non-HoloLens specific, but I wanted this to run well on a HoloLens so we need to go a bit further, as the HoloLens is essentially a mobile device with a mobile CPU/GPU, so we can’t just assume that everything is going to run at 60fps out of the box. In order to track the frame rate at which the device is running we can either use the HoloLens device portal or the FPSDisplay prefab from the HoloToolkit, which provides a UI element which stays in your field of view to show the current FPS. I use one of these in the sample in the repo for this project on Github. To begin profiling on the HoloLens you need to navigate to the Build Settings in the Unity editor and make sure that the Development Build and Autoconnect Profiler options are both checked.
Autoconnect Profiler will ensure that frames from the start of the run get captured. Also, ensure that you have the InternetClient and InternetClientServer options checked in the Player settings. Now if you build and deploy your app to the HoloLens and then open the Unity Profiler window you should get an entry for your device in the active profiler dropdown at the top of the window. Now if you record you can collect some detailed profile information for each frame.

As you can see from the video below, the app is running at a low frame rate – I haven’t given much thought to optimisation at this stage, but in a subsequent post I will take a closer look at the data and try to get the app closer to 60 fps.

12 thoughts on “HoloLens & 3D Mapping”

Getting a few errors when trying to import the most up to date Holotoolkit from Github (as of today 06/06 master branch):

“Assets/ThreeDMapping/Source/Reflection/fsTypeCache.cs(118,42): error CS1061: Type `System.Type’ does not contain a definition for `AsType’ and no extension method `AsType’ of type `System.Type’ could be found. Are you missing `HoloToolkit’ using directive?”

Looks like when you add ‘using HoloToolkit’ it breaks a few of the fs scripts in the ThreeDMapping\Source\Internal\ folder.

You need to include the scripts here – I think I must have added them under the HoloToolkit folder by mistake!

Pete, if the Lytro Illum could provide video, could you see a path to creating 3D maps viewed by HoloLens?

Pete, I would like to connect with you for my project – could you connect using pdaukin at Microsoft dot com?

Pete this is awesome. I recently got my hands on a HoloLens and I’ve been playing around with it quite a bit. Anyways, I am trying to create a new map using your editor and such. When I hit the Generate Map button within Unity it faults out with “cannot connect to destination host”. I am running the threeDMapDataAPI via command prompt and it just sits idle with “Now listening on:. Application Started”.
I’m just wondering if you have any suggestions on what I am missing?

If you look at this code line here it looks like the base url with port number is string urlBase = “”; so if you are running on port 5000 you need to edit this line (or run on port 8165). This should ideally be a public field and configurable in the editor but I guess I didn’t quite get to that. I would have liked to have extended this to create an example which loaded up tiles of the map as you pan around but unfortunately I ran out of time to work on it.

Pete, my compliments! I found it very useful, even if I still didn’t try it. From your reply: “I would have liked to have extended this to create an example which loaded up tiles of the map as you pan around”. If I would like to do that, I should write a trigger that on “position changes” should use the API with a different rectangle and then call the method myTarget.Load() of the Editor. Correct?

Hi Patrizio. Did you solve the localhost connection problem? I’m running into the same with no solution so far.

I found it very useful as well. However, I couldn’t generate the new map. Seems like there should be a group of developers working on making it a real working project. What do you think about mapbox? Seems like a very good solution for displaying maps in the AR environment. Also, I updated HoloToolKit and other packages to run your demo. Should I create a Pull Request?

If any devs are interested I’d be happy to help make it into a real project if there is a need. I looked at mapbox some time ago and I don’t think the HoloLens support was quite there but it looks like I need to revisit. Please submit a PR and I will update the repo. Many thanks.

Yeah, it is not yet working properly. But they are very close I believe. P.S. I will try to submit the PR sometime this week.
Our first example will create a basic email message to "John Doe" and send it through your local mail server.

import org.apache.commons.mail.SimpleEmail;
...
SimpleEmail email = new SimpleEmail();
email.setHostName("mail.myserver.com");
email.addTo("jdoe@somewhere.org", "John Doe");
email.setFrom("me@apache.org", "Me");
email.setSubject("Test message");
email.setMsg("This is a simple test of commons-email");
email.send();

The JavaMail API supports a debugging option that can be very useful if you run into problems. You can activate debugging on any of the mail classes by calling setDebug(true). The debugging output will be written to System.out.

Setting the bounce address is the only way to control the handling of bounced email. Specifically, the "Errors-to:" SMTP header is deprecated and cannot be trusted to control how a bounced message will be handled. Also note that it is considered bad practice to send email with an untrusted "from" address unless you also set the bounce address. If your application allows users to enter an address which is used as the "from" address on an email, you should be sure to set the bounce address to a known good address.
Editor commands are given in the form:

[line_range]C[argument]

where: The line range specifies which lines the command should operate on. It can consist of zero, one or two line addresses. If no range is specified then it will usually default to the current line. The current line is the line your cursor is on in the text area and will typically be updated by each command to reflect the last line it operated on. Check the section on each command for the exact behavior. The following line range forms are allowed:

The elements of a line range are line addresses and are composed of:

Each line address above may be combined with other line addresses using the + and - characters to form expressions. For example:

If you specify a line address which is outside the buffer you will get an error and the command will not be executed. The special character “|” can be used to limit the preceding line addresses to lie within the buffer (between one and $). This is very useful in defining macros. The | operator sets the condition register FALSE if the line address falls outside the buffer and needs to be limited. For example: &;.+23| is a safer form of the example shown above.

Each major command consists of a single character which has been chosen to reflect its nature. For example, the character d was chosen for the delete command and the character w was chosen for the write command. This character must be in lower case. If a command line is entered which contains a line_range but no command (for example, just 44) then the current line is set to the last line address specified. In this case the cursor will move to line 44. Some commands require extra information to specify their operation. For example, the Move (m) command requires the specification of the destination line address:

{line1},{line2}m{line3}

Other commands like the Zap (z) command represent a class of commands which are specified by sub-command characters. For example: zcd is a zap cursor delete command.
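To illustrate how line addresses combine with + and -, here is a hypothetical sketch (in Python, purely for illustration – this is not the editor’s implementation, and the real address grammar is richer) that resolves a simple address such as ., $, .+23 or $-5, and applies a clamp in the spirit of the | limit operator:

```python
def resolve_address(expr, current_line, last_line):
    """Resolve a simple line address expression.

    '.'  -> the current line        '$'  -> the last line
    '44' -> an absolute line number '.+23', '$-5' -> base plus an offset
    Hypothetical sketch only; the real editor grammar is richer.
    """
    base = {'.': current_line, '$': last_line}.get(expr[0])
    if base is None:
        return int(expr)            # a bare line number
    rest = expr[1:]
    offset = int(rest) if rest else 0   # '+23' and '-5' parse directly as ints
    return base + offset

def limit(addr, last_line):
    """Clamp an address into the buffer (between one and $), like '|'."""
    return max(1, min(addr, last_line))
```

In these terms, the documentation’s point about &;.+23| is that the trailing limit keeps .+23 from addressing past the end of the buffer when fewer than 23 lines remain.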
The cd is a subcommand of the zap command and will not be interpreted as the major commands c and d. This form of subcommand is used by several editor major commands.

The editor allows you to place more than one command on a line. Each command on the line is executed sequentially from left to right. For example: ob+ot+ will turn on option blank followed by option tab. The command: 1,4d$d will delete lines one through four and then delete the last line. Note that the line range only applies to the command that it immediately precedes. Should an error occur on any command then execution will be halted and any following commands will not be executed. Some editor commands consume all characters until the end of the line collecting their right argument. They must therefore be the last command on any line on which they occur. For example, e filename consumes all characters until the end of the line collecting the filename.

The Full Screen Editor treats a small number of characters as special in certain situations. These characters are described in the following subsections. The only character you cannot save in your text is an ASCII NUL (hex 00).

The line separator character in QNX4 is a linefeed (hex 0A). Source files separate lines by a single linefeed character, not a carriage return. On input, whenever you enter a carriage return (hex 0D) it is mapped into a linefeed character. When the editor reads a file it collects characters up until a linefeed, replaces the newline with a null (hex 00) and saves the collected characters as a line in your buffer. The point to note is that the linefeed is not saved. It is stripped on a read and added to the end of each line when the file is written. In the definition of complex macros containing several lines, the lines may be separated by either a carriage return or a linefeed. The supplied macro file has adopted the convention of using the linefeed separator.
The NUL character (hex 00) is used internally by the editor to delimit strings. It is therefore not possible to save this character in your buffer. Should you attempt to enter this character, the line (text or command) will be truncated at that point.

When option meta-characters is on (m+), then these characters have a very special meaning when used within patterns (they are special only within patterns). The period (.) for example will match any character, not just a period. The meaning of these characters is explained in the section on pattern matching. The escape character on the command line is the backslash. When it precedes a meta character in a pattern it causes that character to be taken literally. That character loses any special significance it might normally have had. Following a backslash by two hexadecimal characters in a pattern or translate string results in a single character with the hexadecimal value specified. For example, \0A is the single character whose hexadecimal value is 0A (a linefeed character).

When displayed on your screen this character will be expanded into the necessary number of spaces to move to the next tab stop. Tab stops are fixed at every four columns with the first stop set on column five. You can display tabs by turning on option tabs display (ot+). Tabs are not treated with any special significance internally. They only affect your display and your cursor movement on the display. You cannot position your cursor on the expanded spaces following the tab, only the tab character itself or the real character following it.

On input, the character with hexadecimal value FF will cause all characters up until the next record separator (newline) to be collected (no echo) in a hidden buffer, then executed as a command. Any current text on the command line is not affected. This character is used heavily by the translate command when defining macros for the various cursor and function command keys.
This character is only special on input. You can place a hexadecimal FF character in your text by using the substitute command and a \ff escape, e.g. to replace “C” with the hexadecimal character FF: s/C/\ff/

On input, the character with hexadecimal value FE will recall to the command line the last command typed. This character is used by the F9 and F10 function keys. You can place this character in your text by using the substitute command as above.

When this character is encountered in a macro, the editor will accept a character from the keyboard. If several characters occur in a row, a maximum of that number of characters will be accepted. Entering a carriage return will always terminate input (skipping any remaining FD's) and the carriage return will be discarded.

On input, the character with hexadecimal value A3 will prevent the next character from being expanded should a translate be in effect for it. For example, the Home key has a hexadecimal value of A0, but is translated on input into the three character string:

<command char>1<newline>

If you would like to prevent this expansion (to enter the key's value) then you should precede it with the - key on the numeric keypad, which generates the code for the Macro Disable character. You can of course enter the Macro Disable character itself by typing the - key twice.

The editor maintains a special register called the condition register which is set to TRUE or FALSE by some of the editor commands. This register can be tested by the Branch (b) command and the Until (u) command to perform conditional execution of editor commands. These commands are commonly used in macros.

The Editor maintains a buffer for deleted characters and another buffer for deleted lines. The character delete buffer is arranged as a stack 256 characters long. Adding a character to a full buffer will cause the oldest character to be lost. In this manner the most recent 256 characters are kept.
The editor maintains primitive commands for: These primitives are provided by subcommands of the Zap (z) command. The saving of the last deleted character via the Del key is performed by a macro which saves the character under the cursor in the character delete buffer before deleting it. Likewise, the restoration of a deleted character via the Ctrl-Ins key combination is based upon a macro which inserts the last character placed in the delete buffer before the current cursor position.

The editor maintains another buffer in parallel with your text buffer called the line delete buffer. This buffer has the same structure as your text buffer; however, it cannot be displayed or directly operated on by the editor's many commands. Each time you delete a line via the Delete (d) command it is moved from your text buffer into your line delete buffer. You can restore deleted lines using the special forms of the Append (a) and Insert (i) commands which can move lines from the delete buffer back to your text buffer.

The moving of lines between the two buffers is slightly more complicated than is indicated above and is best explained by an example. If you were to delete 5 lines, one at a time (say via the F3 key), it would be nice if you could undelete them one at a time so that the last line deleted was the first line restored. This is particularly nice when you delete one line too many and just want to restore the last one, not all of them. Conversely, if you were to delete a group of 100 lines as a block (say via a tagged delete), you do not want to have to restore them one at a time but want them restored as a block as well.

The above two scenarios describe, from a user's point of view, the editor's implementation of the line delete buffer. Associated with the buffer is a flag which indicates whether the buffer contains a series of single line deletes or one block delete.
To avoid confusion, the editor will purge the line delete buffer before adding in the following circumstances. Put simply, if you delete lines one at a time, they are undeleted one at a time, and if you delete a block of lines they are undeleted as a block. Mixing blocks or types is prevented by purging before adding, if necessary.

When working with a very large buffer it is possible for the editor to run out of memory. When this happens it will purge the delete buffer in an attempt to free up some space. You will be warned of this by a message on the command line which you must clear (like an error) by typing a carriage return. Deleting all lines in a file, then attempting to edit another large file will often generate this message. In this case you have all of the original file in memory in the line delete buffer and are trying to read another large file into the text buffer. They may not both fit!

The editor will terminate any operation gracefully at the earliest possible moment after the Break key is typed. As a result of the break, any operation may be incomplete on the range of lines specified for a command; however, no line will be left in a partially modified state. Should you break out of an Edit (e), Read (r), or Write (w) command you may only move a subset of the lines into or out of your buffer to the specified file. After servicing the Break the editor will leave you in command state.

The editor has a very powerful pattern matching facility which will match the class of patterns known as regular expressions. Patterns are used for line searches and by the Global (g) and Substitute (s) commands. It is the editor's pattern matching facility that gives it flexibility in writing powerful macros. For example, the Ctrl left and right arrow keys are implemented by a pattern which searches for the next or previous word in your text. We will attempt to describe the patterns accepted by the editor in a very rigorous manner.
It is assumed that option meta characters is on (m+) during the definition of your pattern. If it is off (m-) then the editor will only recognize the class of patterns represented by (1) and (2) below.

If number is zero or the character . (dot) then this pattern will match the null string before the current cursor position in the text area. If number is the character t then this pattern will match the null string before the next tab stop. This can be used to turn runs of spaces into tabs. See the Substitute (s) command.

Match the string “hello” anywhere on a line: /hello/
Match the string “hello” at the start of a line: /^hello/
Match the string “hello” at the end of a line: /hello$/
Match a line containing only the string “hello”: /^hello$/
Match all trailing blanks (including zero) on a line: / *$/
Match all characters (including zero) on a line: /^.*$/
Match a number like “3”, “862”, etc: /[0-9][0-9]*/
Match a C language identifier: /[a-z_][a-z_0-9]*/
Match a digit in column ten: /@(10)[0-9]/
Match an empty line: /^^$/
Match a line containing only blanks: /^^*$/
Match a line starting with a period followed by a name (such as a command in the QNX2 DOC markup language): /^\.[a-z][a-z]*/

The pages following detail all the available editor commands, in alphabetical order.
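Most of these patterns carry over directly to modern regular expression engines. As a rough cross-check (using Python's re module purely for illustration – the editor's own syntax differs in details such as the @(column) and ^^ forms, which have no direct re equivalent):

```python
import re

# Editor patterns restated in Python's re syntax.
number     = re.compile(r"[0-9][0-9]*")       # a number like "3", "862"
identifier = re.compile(r"[a-z_][a-z_0-9]*")  # a C language identifier
at_start   = re.compile(r"^hello")            # "hello" at the start of a line
at_end     = re.compile(r"hello$")            # "hello" at the end of a line
trailing   = re.compile(r" *$")               # trailing blanks (including zero)
dot_cmd    = re.compile(r"^\.[a-z][a-z]*")    # a period followed by a name
```

The [0-9][0-9]* idiom (one required match followed by zero or more) predates the + operator of later regex dialects but expresses the same "one or more" repetition.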
In this program, we will append elements to a Pandas series. We will use the append() function for this task. Please note that we can only append a series or a list/tuple of series to the existing series.

Step 1: Define a Pandas series, s1.
Step 2: Define another series, s2.
Step 3: Append s2 to s1.
Step 4: Print the final appended series.

import pandas as pd

s1 = pd.Series([10,20,30,40,50])
s2 = pd.Series([11,22,33,44,55])

print("S1:\n",s1)
print("\nS2:\n",s2)

appended_series = s1.append(s2)
print("\nFinal Series after appending:\n",appended_series)

S1:
 0    10
1    20
2    30
3    40
4    50
dtype: int64

S2:
 0    11
1    22
2    33
3    44
4    55
dtype: int64

Final Series after appending:
 0    10
1    20
2    30
3    40
4    50
0    11
1    22
2    33
3    44
4    55
dtype: int64
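One caveat worth noting: append() keeps the original index labels, which is why the labels 0–4 appear twice in the output above. Also, Series.append() was deprecated and removed in pandas 2.0, so pd.concat() is the portable spelling. A small sketch of both points:

```python
import pandas as pd

s1 = pd.Series([10, 20, 30, 40, 50])
s2 = pd.Series([11, 22, 33, 44, 55])

# pd.concat is the modern equivalent of Series.append (removed in pandas 2.0);
# ignore_index=True renumbers the result 0..9 instead of repeating 0..4 twice.
combined = pd.concat([s1, s2], ignore_index=True)
print(combined)
```

With ignore_index=True the result has a clean 0–9 index, so positional lookups like combined[7] are unambiguous; without it, combined[2] would match two rows.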
Hello everybody,

This is it. jEdit 4.2final is now available from <>.

List of changes since 4.1final: <>.
List of changes since 4.2pre15: <>.

If you're upgrading from 4.1, remove all your plugins and download a fresh set from within 4.2.

Have fun.

Slava

This is the first batch where I do the plugin central releases, so bear with me if there are any kinks to be sorted out. Many thanks to Mike Dillon for doing all the previous releases so well. Ollie Rutherfurd kindly did all the packaging for this batch.

NEW PLUGINS:
------------

Factor 0.61: Initial release.

LilyPondTool 0.2.3: Initial release.

UPDATED PLUGINS:
----------------

DBTerminal 0.1.1: Fixes silly problems with PostgreSQL.

HeadlinePlugin 1.2: Added preliminary support for Atom feeds. Added preliminary support for posting to blogs. Supports the BloggerAPI and the MetaWeblogAPI. Fixed bug where the option pane would stop working if you deleted all feeds.

JakartaCommons 0.4.3: Upgraded commons-httpclient from 2.0-rc2 to 2.0.1, added commons-lang-2.0.jar.

JDiffPlugin 1.4.2: Updated for jEdit 4.2.

JPyDebug 0.8: A large number of bug fixes.

JSwatPlugin 1.5.4: New: Added action "Hotswap Class" to hotswap the class surrounding the current cursor position. Fixed bug: changing the dock position of the JSwat panel required a restart of jEdit. Fixed bug: jEdit would sometimes throw IndexOutOfBoundsException if the buffer was shorter than the current debug position. Fixed bug: invoking a JSwatPlugin action before the JSwat panel was docked for the first time threw a NullPointerException. If the user opens a second jEdit view, display a message in the dock panel saying that only one instance of JSwatPlugin is allowed. BeanShell scripts can now invoke JSwat commands by calling com.bluemarsh.jswat.plugins.jedit.JSwatPlugin.invokeJSwatCommand(View view, String command).
Renamed action "Dock JSwat Plugin" to "Show JSwat Plugin" (associated shortcuts remain the same).

Lazy8Ledger 2.18: Changed Lazy 8 Ledger to be compatible with jEdit 4.2. Lazy 8 Ledger can no longer be run with a jEdit version 4.1 or less. Note that lazy8ledger does not use the new jEdit 4.2 plugin API. Some window properties could not be saved and were lost when turning off the program. Fixed. Added a nice link to a bookkeeping tutorial. Customer/project reports added in the last version were completely redone in this version. Added a new report called Trial Balances. Added the ability to quickly produce a list of records after doing a search using the filtering dialog. Lots of very minor changes everywhere, both in the documentation and in the program.

LazyImporter 1.06: New feature: now allows manual configuration of import grouping/layout. Allows you to group and order import statements and configure the spacing between them.

MoinMoin 0.3: User may supply a change comment when saving a page.

PHPParser 1.0.2: A lot of bugfixes in the parsing.

ProjectViewer 2.0.4: Bug fix release. Fix compatibility with jEdit 4.1. Fix for a serious bug that could cause project data loss, and other fixes for minor issues.

SourceControl 0.6: New build to bring together a number of bug fixes.

SuperScript 0.4: Updated for jEdit 4.2.

TaskList 0.4.4: Allow tasks to contain non-comment token types, provided they're surrounded with comment tokens. This allows one to define task patterns for things like "@TODO: ...", where "@TODO" is a LABEL in PHPDoc.
Join devRant

Search - "hello devrant"

- Me learning JavaScript. Step 1: console.log("hello world"). Step 2: change the devRant avatar t-shirt to js.

- I've been on devRant for around a year or so but only just signed up. Hello everyone. Today was my last day at work. It's time for me to start my own business rather than constantly work for one. I have an idea for an app so we'll see how that goes. Wish me luck!

- I mean, the laziest thing recently as a dev is lurk on this site for over a year and not make an account. Allo.

- Hello Monday:
0. Arrive late due to traffic. (Apparently a car hit a cow crossing the road)
1. Try upgrading php5 to php7, break stuff in the process and waste 2 hours fixing things. (Poor connection so ssh sessions hung occasionally)
2. PHP fixed, open Gmail and get over 100 emails from clients about the server being down (because of (0)). Ignore all. Find a snaglist of over 20 TODOs.
3. Open Android Studio, update to 2.3 and everything becomes broken. Each time I open it, it crashes and I have to "Report to Google".
4. Spend the next 1 hour reinstalling AS. It finally works.
5. Open project and the libraries are broken. Spend another hour upgrading build tools.
6. Leave SDK to update and decide to check my Google Cloud console. $50 bill pending. Shit.
7. Try Xcode. Remember the project is still in Swift 2 and I have to upgrade it (would take eternity). Immediately close Xcode.
8. Give up on life and decide to log into devRant.

- Hello world! I'm new here. Discovered devRant randomly and I'm so happy I did 😊 Problem is, I can't seem to get any work done now that I'm reading rants all day long 🙃

- Hello, my name is Iván, I'm new at devRant and I don't even know what this is exactly. But since I've seen many photos of people's workspaces, I will do the same as a greeting. I hope to be welcome. Please visit for more info about me.

- After a few dark and sad years of using Windows Phone, finally got an Android today! Hello devRant!
🤗

- print("Hello DevRant!") I'm new to the community but I really dig everyone's posts and attitude. This app is great!

- Hello, I'm new to devRant. My name is Floydian, I just wanted to say: console.log('Hello devRant');

- Wow?!? Even removing this crap takes fucking centuries! I'm already sitting here for 45 minutes. How the fuck is this possible?!

- public class HelloDevRant {
      public static void main(String[] args) {
          //Commenting for reasons! ;)
          System.out.print("Hello dear devRant community!\nI am new here!\nNice to meet you.");
      }
  }

- Hello World! I started reading things here on devRant some time ago but finally decided to create an account now. TL;DR: Hi. I'm new here.

- I have been reading devRant for a few months and decided it was time to say hello. You guys seem like a cool community and I am happy to be a part of it now.

- The following is written in Latin, German, and English, and is written in a custom script called "VuetendScriptor" aka "MadScript". Translation as follows: Hello devRant! I am very happy to announce my new script "MadScript". I am so happy right now! I have wanted to do this for a long time! Thank you everyone for your help! I could not have done this without you!

- Spent all day on this. Debugging hardware is fucked, man... Me proud. Wanted to make a devRant logo like always (it's my Hello World to test my CNCs now). And... when I finally had the machine calibrated... the pen stopped working lol. I'll just continue tomorrow; it's finished... Probably gonna use it a few times and make a bigger one.

- Hello devRant from Croatia! Nothing like a day of breezy beach with sunshine and, to finish off the day, learn some Spring-Boot!
Enjoy your holidays!

- Hello @dfox, I have read the Spanish description of the devRant app on the Play Store and it is a little weird. As a native Spanish speaker, I'll give you some suggestions to (maybe) help you. "Una comunidad de la diversión para los desarrolladores conectarse a través de código, tecnología y la vida como programador" would be easier to understand as: "Una comunidad de diversión para desarrolladores, para conectarse a través del código, tecnología y la vida como programador." Another thing, the text: "devRant is a fun community for developers to share and bond over their successes and frustrations with code, tech, and life as a programmer!" could be translated as: "devRant es una divertida comunidad de desarrolladores, para compartir y vincularse por medio de sus éxitos y frustraciones con el código, tecnología y ¡La vida como programador!" I don't know if you set these texts in Spanish or if they are auto-translated, but I hope to help a little bit. It will never be an exact translation; the key is to share the idea in similar words, so the Spanish text I'd suggest to you may vary between countries.

- Hello devRant! My name is Carter Schaap, a maker from SW Ohio. I've been voiding warranties for most of my life, and am currently at Great Oaks Tech School learning HTML, CSS and soon-to-learn JavaScript. I dabble in C++ (because Arduino is a thing) and I'm learning Python because I'm getting into Raspberry Pi. I can't wait to get involved in this community and do a lot of ranting. Have a great holiday season!

- #!?

- Hello devRant, a question for you. I'm looking to redesign/set up my server 'infrastructure'. It'll consist of:
7 VPSs (6+ GB RAM / 500 GB+ / 100 Mbps up/down per VPS)
2 dedicated servers running as virtualization servers.
(16gb/4tb/1gbit up/down and another one but let's leave that one out for now because it's gonna take a shit ton of time to solve that clusterfuck) One server will function as an entry point for all websites I run, multiple database servers and multiple backup ones. Any advices/tips/ideas? Just a very serious hobby thing :)28 - !rant Hello world, i've just found this awesome community a few days ago and I cant stop reading. Anyway, I have a burning question: Do devRant devs rant on devRant?4 - - Just found this in my photos, sent to me by my sister. And hello devRant. Oh well, have a nice morning/afternoon/evening.1 - I had plugged in my Android phone to the PC, browsed files from internal storage, Ctrl + X'ed some files from there, Ctrl + V'ed them to the desktop. Nothing special. Bang. Files travelled to another dimension, absolutely gone from the original location, with no trace or them or any notification. Who thought it would be a good idea to delete stuff before making sure it's been successfully transfered first? Fuck you Windows. Also, hello, its my first rant but I've been lurking devRant for a while now. Loving it here.7 - - hello devRant! i've been lurking for about a week but i finally decided to activate my account and introduce myself :> this seems like an awesome community and i'm glad to be a part of it - - - Hi, I'm new to devRant. That's perfect because I love to rant about everything. So, let this be my first rant: Apache Velocity + last minute rush + heatwave - climatisation = Hello Migraine my old friend...16 - - - - - - *18 - - Hello, I'm new to devRant. This is amazing and I really want to see what more this site has to offer.10 - -. - hi! I'm your friendly neighborhood sysadmin/operations bastard. I also write mostly okay python, ruby, and c. This is called devrant because it's where you go to complain about devs, right? /s anyway, hello!7 - Hello everyone! I'd like to introduce myself. i'M SoRry if this is a bit long. 
I'm a hObbyist developer who occasionally does freelance worK for existing clients (no new clients though. They can get fucked). I make games in my spare time when I'm not feeling anxious, and I'm terrible with girls. They scare me more than moths and cockroaches. Anyways, I guess you could call me a full-stack guy. I've tried to stick with just the front or the bAck-end, but that never works out vEry well. When I want to make something, I want full control over it.21 - - - Hello! I'm new to devRant and so far I like it a lot, but I have a question. Can anyone tell me what does that "rubber duck" thing mean? Thanks.10 - - - Hello devRant, this is going to be my first time posting on the site. I work for a gaming community on the side, and today one of the managers asked me to implement a blacklist system into the chat and reactivate the previously existing one temporarily. This shouldn't have had any issues and should've been implemented within minutes. Once it was done and tested, I pushed it to the main server. This is the moment I found out the previous developer apparently decided it would be the best idea to use the internal function that verifies that the sender isn't blacklisted or using any blacklisted words as a logger for the server/panel, even though there is another internal function that does all the logging plus it's more detailed than the verification one he used. But the panel he designed to access and log all of this, always expects the response to be true, so if it returns false it would break the addon used to send details to the panel which would break the server. The only way to get around it is by removing the entire panel, but then they lose access to the details not logged to the server. 
May not have explained this the best, but the way it is designed is just completely screwed up and just really needs a full redo, but the managers don't want to redo do it since apparently, this is the best way it can be done.7 - That first days when you're on DevRant and you know nobody is gonna take you for serious as long as you have no avatar. #butfinalymadeit :) #hello6 - - I usually read rants while using public transport. What is the probability that the person about whom I'm reading is sitting right beside me!4 - Int main() { cout << "Hello Devrant!"; return 0; } My computer professor has told us our future grades depend on who ever can beat him at soul caliber 6 when it comes out. Are all programmers this goofy?2 - - - - - I've always felt like a world class programmer after printing a hello world message on screen until now. People are continuously ranting some technical shit that's completely above my head on devrant. Fuck you for downing my morale PS: I'm still in love with it!^_^4 - Well is a long story, but to make it short, a friend started to teach me HTML after that at school they taught me c++, and after my first "Hello World" I was hooked, since then I became an addict to programming and now to devrant4 - !rant Hello fellow devRanters, this weekend I've been working on devRant CLI client I want to share with you: I'm using it as a fortune when logging into terminal and since it stores rants locally it is fast. I spent only couple of hours developing it so there is some space for improvement :). Enjoy it and feel free to comment/do codereview.4 - Hello devRant. I finally joined after lurking for a few days on the app. This community already feels awesome 🙌4 - - Yay ...hello all I'm new to Devrant. And I am loving it, community here is vibrant,funny and geeky :).I was at stack exchange, considering switching to devrant : - - - - Hello all! First post here. I've been a lurker for a long time and have enjoyed all the content. 
I start a new gig working remote on Monday (second remote gig) I'm hoping it's better than the last one. How is everyone?8 -.33 - - Last night I posted a rant (link at bottom of this post) and what an amazing response from the entire community. Wow!! It was fun reading the comments. Many even used programming language to convey their message. I forgot to mention that please mention your language language and programming language. Haha.. @dfox @trogus I am sure you both must be proud after seeing people from across the globe unite at devRant. Users from various geographies, belonging to different cultures, following various traditions, speaking so many languages and coding in many more are supporting devRant with all their will. Truly incredible. :D - I'm back. I'm the old itsnameless. I left programming because high school stuff. It was just overwhelming. I, slowly, left programming until the point I forgot everything. A lot of evaluations, high school stuff just for remembering NOTHING. The only thing I did this year on high school was losing a really big part of my time remembering weird stuff. After all this stuff, I would love to spend my 2 months vacations mostly in the place that I've met lovely, awesome people. That place is called devRant. So, yeah, hello there. P.S: Of course after the vacations I'll still be here. lol12 - : - Hello Peoples of DevRant. Anyone know of any good resources on how to make animations or simple animated movies?7 - - Hello, ranters! in my last post (...) @dfox agreed to go live with the community for interview. I have created a Google form for all the input their questions which I will put forward to David and Tim. Google form:... The form does not require sign-in of any kind. But, if you are concerned and/or do not use Google, ping me/comment on this post and I will update the questions on your behalf. Let's also fix a time zone for this. I am in +05:30 UTC3 - - I just noticed I didn't say "Hello world!" 
when I first got on devRant... so... MessageBox.Show("Hello world! 💙");4 - - So, maybe 6 hours ago or so, I was randomly browsing Github and stuff, came across in the Trending page. "Hell yeah, I'd go for some swag". Started looking through them all, eventually found myself here on devRant. Instantly fell in love, it's the perfect mix of jokes, puns, and rants. Now it's 5 AM and I've got work at 10... Worth it, for sure! Anyways, hello guys, glad to have found this place, really loving the feel of it all - Hi guys, I'm back! I spent some time not using DevRant, I got a new phone and thought it would be better not to install it for some days. I had to do a lot of stuff for school these days and didn't want to geht distracted by amazing devs posting cool rants. So here I am, happy to see you all again. (Or at least your avatars)12 - Say hello to the ducks family! 😁 (I know, couldn't get the devrant ones, shipping fees are too much 😷) - After lot of efforts (connectivity and memory issues :p) finally got Devrant installed. Feeling excited. <"Hello Devs"/ - /** Called when the rant is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.dev_rant_first); Log.d("Introduction", "Hello, devRant community!"); } - - - !rant Last month I called my employer from last summer (the same place I had my internship and still have untold stories from) and asked if they could use a summer employee. "Certainly" was the reply, and whatever date I give, I can start then. On my way to a college tour Friday, I stopped by to say hello to my (again) co-workers. There were some new faces (including my replacement) but all were happy to have me back. They treated me as if I never left. I start tomorrow on a different project. I am excited. And now that I have devRant in my life, I'm sure I'll have some stories to tell. Maybe this summer I won't have to send code I find to The Daily WTF. 
😁 - Hello fellow developers! I know this is devRant, but I don't know of a better community with such diversity of developers like you guys and I need your input. I decided to go on a language journey. I come from a background of php/javascript and feel the need to expand my horizons. I'm going to write the same app in each language to get the feel of it and become familiar with the syntax and language concepts. Since I'm a web developer I'll focus mainly on languages used on the web like: Java, Python, Ruby, etc.. But I want to cover others as well, like Objective-C/Swift, C++/C#. I'm having trouble figuring out what kind of an app would cover most of the ground. I know the basic guideline for this is a TODO app for web frameworks, but I don't feel like writing a TODO in Swift or C# really cover what the languages are intended for. I don't know enough about the environments yet to come up with a good idea. I want something, that can be language independent but would utilize the power of each language in one part or another and is still simple enough not to require weeks of development. Does anyone have a brilliant idea what that could be - - Hello folks, first rant (yeeah :D) with a big question of myself (made an account for this ^^): How does DevRant finance itself? is it through the merchandise? Dont see any adds so I am astonished :)10 - Isn't Perl a beutiful language? Just check the beutiful screenshot of a function I just written... Also this is so beutiful. Did you know that you can actually print directly from a perl script?: $qp = new PostScript::Simple(papersize => "A4",direction => "RightDown",coordorigin => "LeftTop", colour => 0, eps => 0, units => "pt"); $qp->newpage; $qp->setfont("Courier", 20); $qp->text(20,20,"Hello Devrant"); $psock = IO::Socket::INET->new(PeerAddr => "192.168.1.40", PeerPort => "9100", Proto => "tcp"); if ($psock) { $psock->autoflush(1); print $psock $qp->get(); close($psock); }6 - Hello devRant, CS student here. 
Thinking of applying for a student job to get some dev experience. How much knowledge is usually required? I know some languages, but never had a bigger project in uni.2 - :)13 - console.log("Starting devRant...") var devRant = new devRant devRant.start().then(function () { devRant.postRant("Hello world !") }).catch(function () { console.error("Oh craps, that's an error. Please don't call an administrator I want to live!") })8 - System.out.println("Hello Programming and DevRant"); One year since I started learning programming and one day since I have DevRant. Maybe I'm just gonna get crazy.2 - - So, this is my first actual rant since I joined devRant and I am not saying I am perfect either. Here goes nothing... 1. I honestly hate it when people use spacebar instead of tabs 2. People who have a bad indentation or no indentation at all (even though almost all IDEs have auto-indentation). The bad thing is when a person asks me to have a look at their code I always end up wasting time fixing the indentation rather the actual problem. I love a properly indented code and that's one of the major reason I usually recommend Python to most people. 3. Lastly, people who leave lots of unnecessary empty lines. E.g., public class HelloWorld{ public static void main(String[]args){ System.out.println("Hello world!"); } }13 - Hello. I am new to devRant. Like it so far, but how do you get one of those profile pics everyone has of a cartoon person - Guys.... Were getting famous? Oh shit that's Facebook.... * Posts the screenshot and runs away :) * Taylor Swift...... You betrayed?3 - - - PSA: microG implements enough of GSF to allow devRant to run. PS: Hello from my degoogled Android! :P5 - !rant Hello devRant, finally got around to create a profile after thoroughly enjoying it for some time here.1 - Been here for quite a while now and eventually made an Account. It cannot be said often enough: DevRant is awesome! 
following: mandatory "Hello World" import System.Drawing import System.Windows.Forms f = Form() f.Scale(.6,.45) f.Controls.Add(Label(Text: "Hai wurld!", Location: Point(60,30))) f.Controls.Add(Button(Text: "KTHXBYE", Location: Point(50,55), Click: {Applikation.Exit()})) Applikation.Run(f)2 - - - Not a rant more like a question Hello devRant, I am currently planning to purchase a small home server + media client (with Kodi). A small Linux Distro running the Hometheateroftware Kodi will run on the media clients (Odroid C2). The control is then over an app over the local network. The database of Kodi should be on the server in the form of a MySQL database. The movies, pictures, music are also streamed by the server (max. 2 simultaneously) via SMB (simplest variant). In addition, the server is to be accessible to the outside via a web interface to act as a cloud (maybe nextcloud). The whole should be optimized for stability and longevity. In addition, a small GitLab CE instance will probably run on the server. Do you have any comments or objections? The fact that I only take 2x ne 2 TB hard drive has the simple reason that I currently have no need for more space. Sometimes it happens to me that I forget completely obvious things :D - - When lector/teacher gives F to majority in computer graphics class because they don't have enough "Freshture" in their drawings... First post BTW, hello devrant : - Hello devRant, my old friend.... It's been a while since i've last checked devRant and I am sure a lot of stuff happened since then. Anyway I am back and I might vent some anger on my job soon (yes, I know I originally said that everything is perfect but it seems as if I just was naive enough to think it was)3 - - - Hello Geeks... Just joined the Devrant... And enjoying using it.. Got to see such awesome stuffs and rants... 
Will appreciate any kinda help though...8 -️ - Hello devrant, What are you guys thinking about applying -example- React.js job , you are already full stack and never wrote react before but you know good javascript . What CEOs,CTOs , Team Leaders and developers think about those kind a job applications Top Tags
In my previous column, I explained that I'll use form processing as a platform for introducing a few new JavaScript concepts. So in that spirit, this week I'm going to provide you with the nuts and bolts of the Forms Extension Framework, which I'll be referring to as ForX. This is not a regular column in the sense that I'll focus on how to perform a specific task using JavaScript -- those columns are to follow in the future. This is, instead, a reference document designed to help you understand the templates I've included and, ultimately, to help you write your own ForX scripts. Along those lines, I have a few suggestions. First, if you haven't read my two previous articles, Yajc -- Yet Another JavaScript Column and Working With Forms: An Introduction, do so now. Then you might want to look at my first library example, Forms Extension Framework (ForX). After that, breeze through this document once to become familiar with its content. Finally, save this article and refer to it as you are playing with one of the ForX examples. If you follow my suggested path, I think you'll find yourself coming up to speed quickly with this exciting method of JavaScript form processing. Now, on to the ForX elements themselves. ForX is a set of attributes designed to extend existing form functionality with desirable features such as validation, content types, and groups. ForX is based on two principles: required elements and groups. With ForX, you can specify the elements that must be completed before a form can be submitted. To define these requirements, you use forx:required and forx:ctype, which can, in addition to existing attributes, be applied to the following elements (list items are linked to the appropriate section of the HTML 4.01 reference): A "group" makes it easier to handle elements that are not associated by default, such as radio buttons. With a conditional requirement you can have a group that's dependent on another element's value or on a literal value.
A form implementing ForX attributes must have a unique forx:id and the forx:enabled attribute set to true to be handled by the engine. To start the engine, you also have to attach the document's onload handler to the forxInit() function of the library -- after the library files have been included, of course. To make sure a user completed the form correctly, you can use an option to enable a warning after the user tried to send a valid form (which doesn't necessarily mean that the data entered is correct). This element is shown only once (if the form remains valid after reviewing). This behavior is managed by forx:review, which must be set to true to switch the warning on. The warning element is an empty element (DIV) that's identified by its ID, forx:review, and class, forx. Required elements and non-valid elements are marked with a special element, which is defined in the same way as the warning element above. You are generally free to fill the template elements with any content you want; you only need to have the correct ID and class. The template file contains everything you need to get started. (Use right-click to save, or view source -- the unaltered template file will just view as a blank page!) After that, you should examine the example page for some working demos. ForX uses its own namespace to avoid conflicts with other elements. The prefix for the namespace is defined as forx, but it can't be altered due to missing DOM Level 2 support in IE. See implementation details on that. This means that every ForX attribute looks like this: forx:required="true" Every element that is referenced by another ForX element needs to have its own unique forx:id. To mark any element as being "required" for the form to process, the forx:required attribute has to be set to true. In addition to an unconditional requirement, you can define a simple if-then condition, which will be evaluated before the element is marked as being required. 
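Pulling the setup described above together, a minimal page skeleton might look like the following. The script filename, form fields, and field names are my own placeholders; only the forx: attributes, forxInit(), and the marker/review template conventions come from this article:

```html
<html>
<head>
  <!-- include the ForX library first (filename is a placeholder) -->
  <script type="text/javascript" src="forx.js"></script>
</head>
<!-- attach the document's onload handler to the library's forxInit() -->
<body onload="forxInit()">
  <!-- the form needs a unique forx:id and forx:enabled="true";
       forx:review="true" switches the one-time review warning on -->
  <form action="/order" method="post"
        forx:
    <!-- a required free-text field with a content type -->
    Name: <input type="text" name="name"
                 forx:
    <!-- marker template: ID "forx:marker", class "forx" -->
    <div id="forx:marker" style="display:inline" class="forx"><b style="color:red">*</b></div>
    <!-- review-warning template: ID "forx:review", class "forx" -->
    <div id="forx:review" class="forx">Please double-check your entries, then send again.</div>
    <input type="submit" value="Send">
  </form>
</body>
</html>
```

As the article notes, you are free to fill the marker and review templates with any content you like, as long as the IDs and the forx class are kept.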
A condition of this type should have the following format:

if element operator value|element

where element is some element of the same form. All elements referenced need a forx:id attribute value. The operator can be != (not equal) or == (equal). You can use a literal value, or the name of another element on the same form, on the right side of the condition. A literal value is indicated by single or double quotation marks. Checkboxes and radio buttons can use the keyword CHECKED instead of using the element's value on the right side. To make an element required only when some other element is checked, you write:

forx:required="if pay_card=='CHECKED'"

An element is required only when the condition evaluates true. If a required element also has a content type or a regular expression defined, the element is checked against that content type or regular expression. If the value doesn't match, the element is marked as invalid. All required elements need to be specifically marked as defined in the template. A template needs to have class forx and ID forx:marker to be recognized by the engine. This is how a template should look:

<div id="forx:marker" style="display:inline" class="forx"><b style="color:red">*</b></div>

As long as you meet these requirements, you are free to use whatever you want as your marker. The engine copies this template to all elements and group containers that are required, but not valid. Elements that allow the user to enter any type of text should be checked to contain a special class of content. For this purpose, ForX defines a set of predefined content types. The content type for an element is defined within the forx:ctype attribute. Predefined content types:

date - Format: mm/dd/yyyy
integer - Integer value
float - Floating point value
number - Any number
phone - Format: +CountryCode (CityCode)
text - Any text
any - Any data
regexp - Indicates a regular expression. This type is only used internally.

If no content type is specified for a required element, it's assumed to have the any content type. In those cases where the predefined set of content types is not good enough, a regular expression is the only solution. To use a regular expression, put it into the forx:ctype attribute, where it differs from the predefined content types by a leading and trailing slash. The syntax is the same as in JavaScript. Groups are probably one of the most desirable features. Just imagine the following situation: You need a form for an online store that allows customers to pay via invoice or credit card. For the latter, you need additional data such as the number and expiration date for the card. Previously, "in the old days," you'd write a custom function that forked after it detected the customer's selected payment method. With ForX, you can easily group required elements and make them dependent on other elements. To create such a group you need two things: a container element and a set of member elements. The container element is needed to place the marker element into, which reminds the user to fill out required fields. If a group is not valid, the marker template is copied and shown in the group container element. The container element is identified by the forx:group attribute value -- the name of the group is stored here. The container element also holds the forx:required attribute; the behavior is the same as with standard elements. To highlight all members of a group, you can set the group container's forx:highlight attribute to true. A group implementing this feature shows all members with a red background for a few seconds and then switches back. To define the number of elements that must be valid to make the entire group valid, you use the forx:grouprequires attribute of the group container. You have three options: all elements, any element, or an exact number of elements must be valid.
For example:

forx:grouprequires="all"
forx:grouprequires="any"
forx:grouprequires="3"

To add elements to a group, you use the forx:member attribute. This value must correspond to a group defined by a container element. After adding an element to a group, the requirements are determined using the group container's attributes. You can define content types for all group members, however. It's not possible to retrieve the value of a group or to nest groups. See the example pages for working demos.

Claus Augusti is O'Reilly Network's JavaScript editor. Read more Essential JavaScript columns. Return to the JavaScript and CSS DevCenter.
Dave is a professor in the Department of Mathematics and Computer Science at Northern Michigan University. He can be reached at dpowers@nmu.edu.

Parallel Virtual Machine (PVM) is freely available network clustering software () that provides a scalable network for parallel processing. Developed at the Oak Ridge National Lab and similar in purpose to the Beowulf cluster, PVM supports applications written in Fortran and C/C++. In this article, I explain how to set up parallel processing clusters and present C++ applications that demonstrate multiple tasks executing in parallel. Setting up PVM-based parallel processing clusters is straightforward and can be done with existing workstations that are also used for other purposes. There is no need to dedicate computers to the cluster; the only requirements are that the workstations must be on a network and use UNIX/Linux. PVM creates a single logical host from multiple workstations and uses message passing for task communication and synchronization (Figure 1). My motivation for setting up a parallel processing cluster was to provide a system that students could use for coursework and research projects in parallel processing. My specific goals were to set up a working cluster and to demonstrate, with test software, that multiple tasks could execute in parallel using the cluster.

Why Use PVM?

Granted, there is other software, most notably Beowulf, for clustering workstations together for parallel processing. So why PVM? The main reasons I decided to use PVM were that it is freely available, requires no special hardware, is portable, and supports many UNIX/Linux platforms. The fact that I could use Linux workstations that were already available in our computer lab, without dedicating those machines to PVM, was a major advantage. Other important PVM features include:

- A PVM cluster can be heterogeneous, combining workstations of different architectures.
For example, Intel-based computers, Sun SPARC workstations, and Cray supercomputers could all be in the same cluster. Also, workstations from different types of networks could be combined into one cluster.
- PVM is scalable. The cluster can become more robust and powerful by just adding additional workstations to the cluster.
- PVM can be configured dynamically by using the PVM console utility or under program control using the PVM API. For example, workstations can be added or deleted while the cluster is operational.
- PVM supports both the SPMD and MPMD parallel processing models. SPMD is single program/multiple data. With PVM, multiple copies of the same task can be spawned to execute on different sets of data. MPMD is multiple program/multiple data. With PVM, different tasks can be spawned to execute with their own sets of data.

How PVM Works

A PVM background task is installed on each workstation in the cluster. The pvm daemon (pvmd) is used for interhost communication. Each pvmd communicates with the other pvm daemons via User Datagram Protocol (UDP). PVM tasks written using the PVM API communicate with pvmd via Transmission Control Protocol (TCP). Parallel-executing PVM tasks can also communicate with each other using TCP. Communication between tasks using UDP or TCP is commonly referred to as "communication using sockets" (Figure 2). The pvmd task also acts as a task scheduler for user-written PVM tasks using available workstations (hosts) in the cluster. In addition, each pvmd manages the list of tasks that are running on its host computer. When a parent task spawns a child task, the parent can specify which host computer the child task runs on, or the parent can defer to the PVM task scheduler which host computer is used. A PVM console utility gives users access to the PVM cluster. Users can spawn new tasks, check the cluster configuration, and change the cluster using the PVM console utility.
For example, a typical cluster change would be to add/delete a workstation to/from the cluster. Other console commands list all the current tasks that are running on the cluster. The halt command kills all pvm daemons running on the cluster; in short, halt essentially shuts the cluster down. The PVM console utility can be started from any workstation in the cluster. For example, if workstations in the cluster are separated by some physical distance, access to the cluster may be from different locations. However, when the cluster is shut down, the first use of the console utility restarts the PVM software on the cluster. The machine on which the first use of the console utility occurs is the "master host." The console utility starts the pvmd running on the master host, then starts pvmd running on all the other workstations in the cluster. The original pvmd (running on the master host) can stop or start the pvm daemon on the other machines in the cluster. All console output from PVM tasks is directed to the master host. Any machine in the cluster can be a master host, but once the cluster is started up, only one machine in the cluster is considered the master host.

Installing PVM

Installing and running PVM is straightforward if you do it on a single machine; you can then use the PVM API to simulate parallel processing. To demonstrate true parallel processing (that is, multiple tasks executing at the same time), however, more workstations need to be added to the cluster, and installing and configuring PVM in a multiworkstation cluster can initially be painful. PVM requires this hardware/software environment to function:

- A heterogeneous cluster of workstations networked together.
- A machine architecture that uses a supported version of UNIX/Linux.
- rsh (remote shell) command network support for PVM.

The configuration I selected consisted of five 850-MHz Pentium workstations (with network connections) running Red Hat 9.0 Linux.
The reason I used this hardware and operating system was that it was already available in our campus computer lab. The installation and configuration of PVM may vary depending on the version of UNIX/Linux used. If you already have a workstation with a version of UNIX/Linux installed, there are three steps to installing PVM on your workstation:

- Install and enable the rsh server.
- Set environment variables for PVM.
- Download and install the PVM software.

The first two steps are the most challenging; step 3 is relatively straightforward. In setting up PVM, I wanted nonroot users to be able to use the PVM cluster even though some of the installation steps may require root privileges. PVM will not run a cluster of machines unless rsh is installed and enabled on all workstations in the cluster. The rsh setup is somewhat problematic and the man pages for rsh confusing. Additional information is available on the Web by searching for the phrase "Redhat rsh" on Google. There are five steps to installing and enabling the rsh server:

- Install the rsh server. To install the rsh server on Red Hat 9, click on the red hat (lower-left of screen), select System Settings, and click on Add/Remove Applications. You must wait while the system checks to see which software packages are already installed. You are then presented with a screen from which you can add/delete applications by checking/unchecking the appropriate box. Under the Servers category and Network Servers subcategory, check the box for rsh-server, then click the Update button. You will be asked to insert a distribution CD for Red Hat. The rsh-server is copied from the CD and installed on your system.
- Enable the rsh server. To enable the rsh server on Red Hat 9, click on the red hat, then select System Settings, Server Settings, and Services. You are then presented with a screen from which you can add/delete services by checking/unchecking the appropriate box. Check the rsh box, select the xinetd item, and press the restart icon. Quit the Services screen and save changes.
- Create a file of users who can execute commands on this workstation. Create a file, /etc/hosts.equiv or $HOME/.rhosts, which lets users on other workstations execute commands on the workstation where this file exists. The /etc/hosts.equiv file is used system wide but will not provide root or superuser access for rsh (Figure 3). The $HOME/.rhosts file is created for a specific user, where $HOME refers to the user's home directory, such as /home/dsmith or ~dsmith. This file can be created for any user, including root (Figure 4). This file lets the same user from a different workstation execute commands on this workstation.
- Set file permissions for the file in Step 3. The file permissions for /etc/hosts.equiv or $HOME/.rhosts must be set to 600 (rw access for the owner only) by using the chmod command. The owner of the file must issue the chmod command: chmod 600 /etc/hosts.equiv or chmod 600 $HOME/.rhosts.
- Test rsh as a root user (if the .rhosts file is used) and as a nonroot user.
Check the rsh box, select the xinetd item, and press the restart icon. Quit the Services and save changes. - Create a file of users who can execute commands on this workstation. Create a file, /etc/hosts.equiv or $HOME/.rhosts, which lets users on other workstations execute commands on the workstation where this file exists. The /etc/hosts.equiv file is used system wide but will not provide root or superuser access for rsh (Figure 3). The $HOME/.rhosts file is created for a specific user, where $HOME refers to the user's home directory, such as /home/dsmith or ~dsmith. This file can be created for any user, including root (Figure 4). This file lets the same user from a different workstation execute commands on this workstation. - Set file permissions for the file in Step 3. The file permissions for /etc/hosts.equiv or $HOME/.rhosts must be set to 600 (rw access for the owner only) by using the chmod command. The owner of the file must issue the chmod command: chmod 600 /etc/hosts.equiv or chmod 600 $HOME/.rhosts. - Test rsh as a root user (if .rhosts file used) and nonroot user. To see if you can rsh to yourself, try: rsh your_host_name 'ls -l'. To see if you can rsh to another host, try: rsh another_host_name 'ls -l'. You will get the error "Permission denied" if the user account does not exist on the remote host or if the host/user is not in the remote host /etc/hosts.equiv or $HOME/.rhosts file. Set the environment variables for PVM in the /etc/profile or $HOME/.bash_profile file before downloading and installing PVM (Figure 5). Restart the computer so that the environment variables can take effect. Download and install the PVM software. Select the file pvm3.4.4.tgz and download PVM to the $HOME directory on the workstation. Uncompress the tgz file: tar xvfz pvm3.4.4.tgz. This creates a directory, pvm3, under the $HOME directory, which contains all of the PVM files. Build and install the PVM software using the command make from the $HOME/pvm3 directory.
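Condensed into a shell session, the download-and-build steps above look roughly like this (the download location is omitted, as in the text; paths assume the $HOME install described above):

```
$ cd $HOME
$ tar xvfz pvm3.4.4.tgz     # unpacks into $HOME/pvm3
$ cd pvm3
$ make                      # builds the pvm daemon, console and libraries
```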
For example, assume that a cluster of four Linux workstations have the network hostnames alpha, beta, delta, and gamma. Also assume that PVM will be run by the nonroot user, myaccount, on all the workstations. When logged in as the user myaccount, $HOME is equal to /home/myaccount. On each host or workstation:

- As a root user, create the file /etc/hosts.equiv with the contents:

    alpha myaccount
    beta myaccount
    delta myaccount
    gamma myaccount

- Set the file permissions: chmod 600 /etc/hosts.equiv.
- As a root user, edit the file /etc/profile and add:

    # PVM environment variables
    PVM_ROOT=/home/myaccount/pvm3
    PVM_ARCH=LINUX
    export PVM_ROOT PVM_ARCH
    # location of the pvm daemon, pvmd
    PVM_DPATH=/home/myaccount/pvm3/lib/pvmd
    export PVM_DPATH

- Restart the workstation so that the environment variables can take effect.
- Download and install the PVM software in the /home/myaccount folder.
- Create and compile programs. Store the binaries in the /home/myaccount/pvm3/bin/LINUX folder.
- On the master host, login as user "myaccount" and create a hostfile called "pvm_hosts" in /home/myaccount with the contents alpha, beta, delta, and gamma.
- Run the pvm console utility using the command /home/myaccount/pvm3/console/LINUX/pvm pvm_hosts. This command starts the pvm console utility and the pvmd running on the master host. Also, slave pvmds are started on all the other hosts in the cluster, which are listed in pvm_hosts.
- Using the conf (configuration) command at the pvm prompt lists all the host workstations in the PVM cluster.
- Use the console command to start the first PVM task: pvm> spawn -> p1.

Using PVM

To create and compile programs that use the PVM API, you must include the header file pvm3.h and link with libpvm3.a.
To compile and link a program (say, p1.cpp) to use the PVM API, enter the command:

    g++ -o $HOME/pvm3/bin/$PVM_ARCH/p1 -I $PVM_ROOT/include p1.cpp -L $PVM_ROOT/lib/$PVM_ARCH -lpvm3

The default folder for the executable program files (PVM binaries) is $HOME/pvm3/bin/$PVM_ARCH. This is where the pvmd task looks for tasks to execute (spawn). If multiple architectures are used in the cluster, programs need to be compiled and linked for each architecture because any program could execute on any available workstation in the cluster. To execute tasks in the PVM environment, start the pvm console utility using: $HOME/pvm3/console/$PVM_ARCH/pvm hostfile. The pvm console utility starts the pvmd task(s) running if the daemon(s) are not running. Again, the workstation that starts up the pvmds is the master host. Additional hosts may be added from a list of hosts in the hostfile, by using the "add hostname" command at the pvm> console prompt or by adding hostnames from executing PVM tasks. The file hostfile, stored in the $HOME directory, can use any filename and contains a list of computers (DNS names or hostnames) to be added to the cluster. User programs can be started in one of these three ways:

- From the pvm console utility by issuing: pvm> spawn -> p1. This requires a space after spawn and before the task name.
- From the system prompt if pvmd is already running on the host: <system_prompt>./p1.
- By spawning the task from a currently executing task.

The PVM execution environment requires the location of the program binaries on each host and the location of the pvmd on each host. The execution environment is set by editing /etc/profile.

The PVM API

The PVM API contains functions that can be grouped into several categories, including:

- Process Management and Control, which contains functions that spawn child tasks, kill specific tasks, halt all pvm tasks and daemons, add hosts to the cluster, and delete hosts from the cluster.
- Message Sending and Receiving, which contains functions for sending and receiving messages from one task to another. There is also a multicast function that lets one task send a message to multiple tasks. Messages are routed by using a task identification (TID). Each task running in the PVM cluster has a unique task ID. Communication is synchronized by using blocking receives. This means that the task's execution is suspended until the requested information is received.
- Message Buffer Management and Packing/Unpacking, which handles message buffering and data conversion. PVM handles the data conversion involved when data is sent/received over different architectures. All data is packed before sending and unpacked after receiving.

Test Programs and Results

Getting multiple programs to run at the same time on multiple hosts is more complicated than merely starting several programs and assigning them to different hosts. Normally, there is a main program, which is started from the console utility via the spawn command. The main program then spawns child tasks. Example 1 is pseudocode for a main program that loops n times, each time spawning a new child task and sending information to the child task that it needs to continue processing. Then the parent task receives the results from the child when the child is done processing. The problem with this algorithm is that the child tasks are not executing at the same time. Receive operations are blocking and the first child task must finish and send a message to the parent before the next child is spawned. So even though multiple tasks are being executed on multiple hosts, the tasks are not executing in parallel; see the results in Figure 6. Example 2 is pseudocode for a revised main program. This main program contains two loops. The first loop spawns all of the child tasks and sends the information necessary for each child to begin processing.
All the child tasks start execution about the same time and execute in parallel because there are no blocking operations in the first loop. The second loop waits for each child task to complete processing and send its results back to the parent task; see the results in Figure 7. The actual listings for the parent and child programs are in Listing One. Data can be passed from a parent task to a child task in one of two ways: using a message (Listing Two; available electronically, see "Resource Center" page 4) or by using argument values when the child is spawned (Listing Three; also available electronically).

Conclusion

Using the techniques presented here, it is fairly easy to create a cluster of networked workstations that can be used for parallel processing. The cluster works as one virtual machine to reduce the elapsed execution time of multiple tasks. The test programs demonstrate that multiple tasks execute at the same time on multiple hosts. Future work with PVM would include efficient ways to load software changes over the cluster. Every time a program is updated and rebuilt, the program binaries must be updated on all machines in the cluster. A primary use of PVM in the future would be to implement and test algorithms decomposed for parallel operation. Some possible algorithms would include matrix multiplication, sorting, and puzzle solutions using trees.

DDJ

Listing One

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <pvm3.h>

    // David J. Powers. This program spawns another 10 tasks, sends a text message
    // to each child task, then waits to receive a message from each child task.
    void itoa(int n, char w[]);

    int main(int argc, char *argv[])
    {
        struct tm *p1, *p2;
        time_t t1, t2;
        int r;           // spawn return code
        int m = 1;       // message id
        int PTid;        // parent task id
        int Tid[10];     // array of child task id's
        char mess[100];  // send buffer
        long res;        // result value

        PTid = pvm_mytid();
        printf("P5: Parent id = %d\n", PTid);
        t1 = time(NULL);
        p1 = localtime(&t1);
        printf("P5: Start time: %d:%d:%d \n", p1->tm_hour, p1->tm_min, p1->tm_sec);
        for (int k = 0; k < 10; k++) {
            // spawn child task, p6
            // no argument values, let pvm select host
            // spawn 1 task and save child task id in Tid[k]
            pvm_spawn("p6", NULL, PvmTaskDefault, "", 1, &Tid[k]);
            // send message
            itoa(k + 27, mess);
            printf("P5: message:%s: \n", mess);
            pvm_initsend(PvmDataDefault);
            pvm_pkstr(mess);
            pvm_send(Tid[k], m);
            // receive message: blocking
            pvm_recv(Tid[k], m);
            pvm_upklong(&res, 1, 1);
            // display result
            printf("P5: k = %d, Fibonacci number = %ld \n", (k + 27), res);
        }
        t2 = time(NULL);
        p2 = localtime(&t2);
        printf("P5: Stop time: %d:%d:%d \n", p2->tm_hour, p2->tm_min, p2->tm_sec);
        printf("P5: Elapsed time = %d seconds. \n", (t2 - t1));
        pvm_exit();
        return 0;
    }

    void itoa(int n, char w[])
    {
        int i = 0;
        int j, c;
        do {
            w[i++] = n % 10 + '0';
        } while ((n /= 10) > 0);
        w[i] = '\0';
        // now reverse the chars
        for (i = 0, j = strlen(w) - 1; i < j; i++, j--) {
            c = w[i];
            w[i] = w[j];
            w[j] = c;
        }
    } // end of itoa(b)

    #include <stdio.h>
    #include <stdlib.h>
    #include <pvm3.h>

    // David J. Powers. This program receives a text message. The ASCII string is
    // converted to an integer value which is used as n in a Fibonacci function.
    // The Fibonacci number is sent back to the calling task.
    long fibr(long k);

    int main(int argc, char *argv[])
    {
        int m = 1;       // message id
        int PTid;        // parent task id
        char mess[100];  // receive buffer
        int num;         // integer value received
        long res;        // Fibonacci result

        PTid = pvm_parent();  // get parent task id
        printf("P6: Parent id = %d\n", PTid);
        // receive value: blocking
        pvm_recv(PTid, m);
        pvm_upkstr(mess);
        // calculate Fibonacci number
        num = atoi(mess);
        res = fibr(num);
        // send results
        pvm_initsend(PvmDataDefault);
        pvm_pklong(&res, 1, 1);
        pvm_send(PTid, m);
        pvm_exit();
        return 0;
    }

    // recursive Fibonacci function
    long fibr(long k)
    {
        if (k == 0 || k == 1)
            return k;
        return fibr(k - 1) + fibr(k - 2);
    }
http://www.drdobbs.com/mobile/parallel-processing-clusters-pvm/184406313
There are a number of ways to do exercise 7. My approach uses the switch statement suggestion to keep a tally of the vowels. It also analyses one word at a time, except that you have to press Enter after each word for it to be tallied appropriately. As mentioned, this could be completed multiple ways. Take a look at my solution below:

7. Write a program that reads input a word at a time until a lone q is entered. The program should then report the number of words that began with vowels, the number that began with consonants, and the number that fit neither of those categories. One approach is to use isalpha() to discriminate between words beginning with letters and those that don't and then use an if or switch statement to further identify those passing the isalpha() test that begin with vowels. A sample run might look like this:

Enter words (q to quit):
The 12 awesome oxen ambled quietly across 15 meters of lawn.
q
5 words beginning with vowels
4 words beginning with consonants
2 others

#include <iostream>
#include <cstring>
#include <cctype>

using namespace std;

const int size = 50;

int main()
{
    char word[size];
    int vowels = 0;
    int consonants = 0;
    int others = 0;

    cout << "Enter words (q to quit): ";
    // stop on a lone q (or Q); checking only word[0] would also
    // stop on words like "quietly", and an || test is always true
    while (cin.get(word, size)
           && !(strlen(word) == 1 && tolower(word[0]) == 'q'))
    {
        if (isalpha(word[0]))
        {
            switch (tolower(word[0]))   // fold upper case into lower
            {
                case 'a':
                case 'e':
                case 'i':
                case 'o':
                case 'u':
                    vowels++;
                    break;
                default:
                    consonants++;
                    break;
            }
        }
        else
            others++;
        cin.get();  // consume the newline so the next word can be read
    }
    cout << vowels << " words beginning with vowels\n"
         << consonants << " words beginning with consonants\n"
         << others << " others";
    return 0;
}
https://rundata.wordpress.com/2012/12/16/c-primer-chapter-6-exercise-7/
SP_event()

    #include <sicstus/sicstus.h>
    int SP_event(int (*func)(void*), void *arg)

Schedules a function for execution in the main thread in contexts where queries cannot be issued. Returns nonzero on success, and 0 otherwise.

If you wish to call Prolog back from a signal handler that has been installed with SP_signal(), you cannot issue queries from the handler itself. SP_event() serves this purpose: it installs func, which will be called with arg as its argument from the main thread in a context where queries can be issued. Note that SP_event() is one of the very few functions in the SICStus API that can safely be called from another thread than the main thread. Depending on the value returned from func, the interrupted Prolog execution will just continue (SP_SUCCESS), backtrack (SP_FAILURE), or raise an exception (SP_ERROR). An exception raised by func, using SP_raise_exception(), will be processed in the interrupted Prolog execution. If func calls SP_fail() or SP_raise_exception() the return value from func is ignored and handled as if func returned SP_FAILURE or SP_ERROR, respectively. In case of failure or exception, the event queue is flushed.

It is generally not robust to let func raise an exception or fail. The reason is that not all Prolog code is written such that it gracefully handles being interrupted. If you want to interrupt some long-running Prolog code, it is better to let your code test a flag in some part of your code that is executed repeatedly.

The following example shows how to install the predicate user:prolog_handler/1 as the handler for SIGUSR1 and SIGUSR2 signals. The function signal_init() installs the function signal_handler() as the primary signal handler for the signals SIGUSR1 and SIGUSR2. That function schedules signal_event() (not shown here), which invokes the predicate:

    static void signal_handler(int signal_no)
    {
      SP_event(signal_event, (void *)signal_no);
    }

    void signal_init(void)
    {
      event_pred = SP_predicate("prolog_handler", 1, "user");
      SP_signal(SIGUSR1, signal_handler);
      SP_signal(SIGUSR2, signal_handler);
    }

See also: Calling Prolog Asynchronously, SP_signal(), SP_fail(), SP_raise_exception().
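The body of signal_event(), the function actually scheduled by SP_event(), does not survive in this excerpt. Under the assumption that it simply calls the registered predicate with the signal number as its argument, a sketch might look like this (not the manual's verbatim code, though SP_query(), SP_new_term_ref() and SP_put_integer() are real SICStus API calls):

```c
/* Sketch only, not from the manual. Runs in the main thread, where
   queries are allowed; event_pred is the SP_pred_ref set up by
   signal_init() above. */
static int signal_event(void *arg)
{
  SP_integer signal_no = (SP_integer)arg;  /* recover the signal number */
  SP_term_ref x = SP_new_term_ref();

  SP_put_integer(x, signal_no);
  if (SP_query(event_pred, x) != SP_SUCCESS)
    return SP_FAILURE;      /* interrupted goal backtracks */
  return SP_SUCCESS;        /* interrupted goal continues */
}
```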
https://sicstus.sics.se/sicstus/docs/4.2.3/html/sicstus.html/cpg_002dref_002dSP_005fevent.html
Today, we are releasing the initial version of Dash Deck, a library for building 3D interactive maps and graphical visualizations directly in Dash. Built on top of deck.gl, it offers seamless support for maps built with pydeck, and also supports deck.gl's JSON API (which you can try out here). If you want to directly dive into the demos and the code, check out this Dash Deck Explorer, which will look like this:

You can also check out the source code in the Github repository. It also contains an extensive Readme to help you get started, as well as tips & tricks for more advanced Dash Deck use cases.

If you haven't used pydeck before, you can check out the Getting Started section. In essence, it ports deck.gl into Jupyter, allowing you to create maps like the ones in the explorer shown above. First, ensure that the MAPBOX_ACCESS_TOKEN environment variable is set to a valid token (more details in the Readme). Then, you can install pydeck like this:

```
pip install pydeck
```

Inside a notebook, you can build a 3D interactive map with pydeck using the following code (borrowed from the official docs):

```python
import os
import pydeck as pdk

mapbox_api_token = os.getenv("MAPBOX_ACCESS_TOKEN")

# 2014 locations of car accidents in the UK
UK_ACCIDENTS_DATA = (''
                     'deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv')

# Define a layer to display on a map
layer = pdk.Layer(
    'HexagonLayer',
    UK_ACCIDENTS_DATA,
    get_position=['lng', 'lat'],
    auto_highlight=True,
    elevation_scale=50,
    pickable=True,
    elevation_range=[0, 3000],
    extruded=True,
    coverage=1)

# Set the viewport location
view_state = pdk.ViewState(
    longitude=-1.415,
    latitude=52.2323,
    zoom=6,
    min_zoom=5,
    max_zoom=15,
    pitch=40.5,
    bearing=-27.36)

# Render
r = pdk.Deck(layers=[layer], initial_view_state=view_state,
             mapbox_key=mapbox_api_token)
r.to_html('demo.html')
```

and you will get a map similar to what is shown in this demo.
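Under the hood, the whole scene can be serialized to the deck.gl JSON API mentioned above, which is just nested JSON: a view state plus a list of layers. As a rough illustration, here is a hand-written spec with the same overall shape; the field values are made up for this example, and the `@@type`/`@@=` conventions come from deck.gl's JSON API (`@@type` names the layer class, `@@=` marks accessor expressions):

```python
import json

# Hypothetical deck.gl JSON spec, written by hand for illustration.
spec = {
    "initialViewState": {
        "longitude": -1.415,
        "latitude": 52.2323,
        "zoom": 6,
        "pitch": 40.5,
        "bearing": -27.36,
    },
    "layers": [
        {
            "@@type": "HexagonLayer",        # layer class to instantiate
            "data": "heatmap-data.csv",      # placeholder data source
            "getPosition": "@@=[lng, lat]",  # accessor expression
            "elevationScale": 50,
            "extruded": True,
        }
    ],
}

# A DeckGL component consumes this as a JSON string:
payload = json.dumps(spec)
print(json.loads(payload)["layers"][0]["@@type"])  # → HexagonLayer
```

This is why the same component can render both pydeck objects and raw JSON: either way, what crosses into the browser is a spec of this shape.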
If you want to use it in Dash, you need to first install the library:

```
pip install dash-deck
```

Then, create the component:

```python
import dash_deck

deck_component = dash_deck.DeckGL(
    r.to_json(), id="deck-gl", tooltip=True, mapboxKey=r.mapbox_key
)
```

Now, you can directly use this deck_component inside your Dash app, and interact with it using callbacks!

This initial version (v0.0.1) already contains 28 demos, mostly ported from Pydeck. They can handle many complex scenarios, including beta-features like lighting and globe views. Nonetheless, the API is still at an early stage and it has not been extensively integrated in the Dash ecosystem. If you find any bugs, would like to add new features, or contribute a new demo, feel free to open an issue! From there, we would be happy to work with you to fix them and provide a better experience for the Dash community.

For any feedback, suggestions, or ideas, feel free to share them in this thread. We are happy to know what you are planning to build with the library, and don't forget to share what you've made with Dash Deck in the #show-and-tell thread!
https://community.plotly.com/t/initial-release-of-dash-deck-a-library-for-rendering-webgl-3d-maps-with-pydeck-and-deck-gl-in-dash/44528
DRIVE IT YOURSELF: USB CAR

Ever wondered how device drivers are reverse engineered? We'll show you with a simple yet complete example.

Fun to play and also simple: this is the device for which we will write a driver.

Ever been enticed into a Windows versus Linux flame war? If not, you are lucky. Otherwise, you probably know that Windows fanboys often talk as though support for peripherals in Linux is non-existent. While this argument loses ground every year (the situation is incomparably better than it was in around 2005), you can still occasionally come across a device that is not recognised by your favourite distribution. Most of the time, this will be some sort of a USB peripheral. The beauty of free software is that you can fix this situation yourself. The effort required is obviously dependent on how sophisticated the peripheral is, and with a shiny new 3D web camera you may be out of luck. However, some USB devices are quite simple, and with Linux, you don't even need to delve into the kernel and C to write a working driver program for it. In this tutorial, we'll show you how it's done step by step, using a high-level interpreted language (Python, you guessed it) and a toy USB radio controlled car we happen to have lying around. What we are going to do is a basic variant of a process generally known as reverse engineering. You start examining the device with common tools (USB is quite descriptive itself). Then you capture the data that the device exchanges with its existing (Windows) driver, and try to guess what it means. This is the toughest part, and you'll need some experience and a bit of luck to reverse engineer a non-trivial protocol. This is legal under most jurisdictions, but as usual, contact a lawyer if in doubt.

Get to know USB

Before you start reversing, you'll need to know what exactly USB is. First, USB is a host-controlled bus. This means that the host (your PC) decides which device sends data over the wire, and when it happens.
Even an asynchronous event (like a user pressing a button on a USB keyboard) is not sent to the host immediately. Given that each bus may have up to 127 USB devices connected (and even more if hubs are involved), this design simplifies the management. USB is also a layered set of protocols somewhat like the internet. Its lowest layer (an Ethernet counterpart) is usually implemented in silicon, and you don't have to think about it. The USB transport layer (occupied by TCP and UDP in the internet – see page 64 for Dr Brown's exploration of the UDP protocol) is represented by 'pipes'. There are stream pipes that convey arbitrary data, and message pipes for well-defined messages used to control USB devices. Each device supports at least one message pipe.

Inside a wire

A USB device can be seen as a set of endpoints; or, simply put, input/output buffers. Each endpoint has an associated direction (in or out) and a transfer type. The USB specification defines several transfer types: interrupt, isochronous, bulk, and control, which differ in characteristics and purpose. Interrupt transfers are for short periodic real-time data exchanges. Remember that a host, not the USB device, decides when to send data, so if (say) a user presses the button, the device must wait until the host asks: "Were there any buttons pressed?". You certainly don't want the host to keep silent for too long (to preserve an illusion that the device has notified the host as soon as you pressed a button), and you don't want these events to be lost. Isochronous transfers are somewhat similar but less strict; they allow for larger data blocks and are used by web cameras and similar devices, where delays or even losses of a single frame are not crucial. Bulk transfers are for large amounts of data.
Since they can easily hog the bus, they are not allocated the bandwidth, but rather given what’s left after other transfers. Finally, the control transfer type is the only one that has a standardised request (and response) format, and is used to manage devices, as we’ll see in a second. A set of endpoints with associated metadata is also known as an interface. Any USB device has at least one endpoint (number zero) that is the end for the default pipe and is used for control transfers. But how does the host know how many other endpoints the device has, and which type they are? It uses various descriptors available on specific requests sent over the default pipe. They can be standard (and available for all USB devices), class-specific (available only for HID, Mass Storage or other devices), or vendor-specific (read “proprietary”). Descriptors form a hierarchy that you can view with tools like lsusb. On top of it is a Device descriptor, which contains information like device Vendor ID (VID) and Product ID (PID). This pair of numbers uniquely identifies the device, so a system can find and load the appropriate driver for it. USB devices are often rebranded, but a VID:PID pair quickly reveals their origin. A USB device may have many configurations (a typical example is a printer, scanner or both for a multifunction device), each with several interfaces. However, a single configuration with a single interface is usually defined. These are represented by Configuration and Interface descriptors. Each endpoint also has an Endpoint descriptor that contains its address (a number), direction (in or out), and a transfer type, among other things. Finally, USB class specifications define their own descriptor types. For example, the USB HID (human interface device) specification, which is implemented by keyboards, mice and similar devices, expects all data to be exchanged in the form of ‘reports’ that are sent/received to and from the control or interrupt endpoint. 
Class-level HID descriptors define the report format (such as "1 field 8 bits long") and the intended usage ("offset in the X direction"). This way, a HID device is self-descriptive, and can be supported by a generic driver (usbhid on Linux). Without this, we would need a custom driver for each individual USB mouse we buy. It's not too easy to summarise several hundred pages of specifications in a few passages of the tutorial text, but I hope you didn't get bored. For a more complete overview of how USB operates, I highly recommend O'Reilly's USB in a Nutshell, available freely at. And now, let's do some real work.

Under the hood

For starters, let's take a look at how the car looks as a USB device. lsusb is a common Linux tool to enumerate USB devices, and (optionally) decode and print their descriptors. It usually comes as part of the usbutils package.

    Bus 002 Device 036: ID 0a81:0702 Chesen Electronics Corp.
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    ...

The car is the Device 036 here (unplug it and run lsusb again to be sure). The ID field is a VID:PID pair. To read the descriptors, run lsusb -v for the device in question:

    Bus 002 Device 036: ID 0a81:0702 Chesen Electronics Corp.
    Device Descriptor:
      idVendor           0x0a81 Chesen Electronics Corp.
      idProduct          0x0702
      ...
      bNumConfigurations      1
      Configuration Descriptor:
        ...
        Interface Descriptor:
          ...
          bInterfaceClass         3 Human Interface Device
          ...
          iInterface              0
          HID Device Descriptor:
            ...
            Report Descriptors:
              ** UNAVAILABLE **
          Endpoint Descriptor:
            ...
            bEndpointAddress     0x81  EP 1 IN
            bmAttributes            3
              Transfer Type            Interrupt
            ...

Here you can see a standard descriptors hierarchy; as with the majority of USB devices, the car has only one configuration and interface. You can also spot a single interrupt-in endpoint (besides the default endpoint zero that is always present and thus not shown). The bInterfaceClass field suggests that the car is a HID device. This is a good sign, since the HID communication protocol is open.
You might think that we just need to read the Report descriptor to understand report format and usage, and we are done. However, this is marked ** UNAVAILABLE **. What's the matter? Since the car is a HID device, the usbhid driver has claimed ownership over it (although it doesn't know how to handle it). We need to 'unbind' the driver to control the device ourselves. First, we need to find a bus address for the device. Unplug the car and plug it again, run dmesg | grep usb, and look for the last line that starts with usb X-Y.Z:. X, Y and Z are integers that uniquely identify USB ports on a host. Then run (as root):

    echo -n X-Y.Z:1.0 > /sys/bus/usb/drivers/usbhid/unbind

1.0 is the configuration and the interface that we want the usbhid driver to release. To bind the driver again, simply write the same into /sys/bus/usb/drivers/usbhid/bind. Now, Report descriptor becomes readable:

    Report Descriptor: (length is 52)
      Item(Global): Usage Page, data= [ 0xa0 0xff ] 65440
                      (null)
      Item(Local ): Usage, data= [ 0x01 ] 1
                      (null)
      ...
      Item(Global): Report Size, data= [ 0x08 ] 8
      Item(Global): Report Count, data= [ 0x01 ] 1
      Item(Main  ): Input, data= [ 0x02 ] 2
      ...
      Item(Global): Report Size, data= [ 0x08 ] 8
      Item(Global): Report Count, data= [ 0x01 ] 1
      Item(Main  ): Output, data= [ 0x02 ] 2
      ...

Here, two reports are defined; one that is read from the device (Input), and the other that can be written back to it (Output). Both are one byte long. However, their intended usage is unclear (Usage Page is in the vendor-specific region), and it is probably why the usbhid driver can't do anything useful with the device.
For comparison, this is how a USB mouse Report descriptor looks (with some lines removed):

    Report Descriptor: (length is 75)
      Item(Global): Usage Page, data= [ 0x01 ] 1
                      Generic Desktop Controls
      Item(Local ): Usage, data= [ 0x02 ] 2
                      Mouse
      Item(Local ): Usage, data= [ 0x01 ] 1
                      Pointer
      Item(Global): Usage Page, data= [ 0x09 ] 9
                      Buttons
      Item(Local ): Usage Minimum, data= [ 0x01 ] 1
                      Button 1 (Primary)
      Item(Local ): Usage Maximum, data= [ 0x05 ] 5
                      Button 5
      Item(Global): Report Count, data= [ 0x05 ] 5
      Item(Global): Report Size, data= [ 0x01 ] 1
      Item(Main  ): Input, data= [ 0x02 ] 2

This is crystal clear both for us and for the OS. With the car, it's not the case, and we need to deduce the meaning of the bits in the reports ourselves by looking at raw USB traffic.

Detective work

If you were to analyse network traffic, you'd use a sniffer. Given that USB is somewhat similar, it comes as no surprise that you can use a sniffer to monitor USB traffic as well. There are dedicated commercial USB monitors that may come in handy if you are doing reverse engineering professionally, but for our purposes, the venerable Wireshark will do just fine. Here's how to set up USB capture with Wireshark (you can find more instructions at). First, we'll need to enable USB monitoring in the kernel. The usbmon module is responsible for that, so load it now:

    modprobe usbmon

Then, mount the special debugfs filesystem, if it's not already mounted:

    mount -t debugfs none /sys/kernel/debug

This will create a /sys/kernel/debug/usb/usbmon directory that you can already use to capture USB traffic with nothing more than standard shell tools:

    ls /sys/kernel/debug/usb/usbmon
    0s  0u  1s  1t  1u  2s  2t  2u

There are some files here with cryptic names. An integer is the bus number (the first part of the USB bus address); 0 means all buses on the host. s stands for 'statistics', t is for 'transfers' (ie what's going over the wire) and u means URBs (USB Request Blocks, logical entities that represent a USB transaction).
So, to capture all transfers on Bus 2, just run:

    cat /sys/kernel/debug/usb/usbmon/2u
    ffff88007d57cb40 296194404 S Ii:036:01 -115 1 <
    ffff88007d57cb40 296195649 C Ii:036:01 0 1 = 05
    ffff8800446d4840 298081925 S Co:036:00 s 21 09 0200 0000 0001 1 = 01
    ffff8800446d4840 298082240 C Co:036:00 0 1 >
    ffff880114fd1780 298214432 S Co:036:00 s 21 09 0200 0000 0001 1 = 00

Unless you have a trained eye, this feedback is unreadable. Luckily, Wireshark will decode many protocol fields for us. Now, we'll need a Windows instance that runs the original driver for our device. The recommended way is to install everything in VirtualBox (the Oracle Extension Pack is required, since we need USB support). Make sure VirtualBox can use the device, and run the Windows program (KeUsbCar) that controls the car. Now, start Wireshark to see what commands the driver sends over the wire. At the initial screen, select the 'usbmonX' interface, where X is the bus that the car is attached to. If you plan to run Wireshark as a non-root user (which is recommended), make sure that the /dev/usbmon* device nodes have the appropriate permissions. Suppose we pressed a "Forward" button in KeUsbCar. Wireshark will catch several output control transfers, as shown on the screenshot above. The one we are interested in is highlighted. The parameters indicate it is a SET_REPORT HID class-specific request (bmRequestType = 0x21, bRequest = 0x09) conventionally used to set a device status such as keyboard LEDs. According to the Report Descriptor we saw earlier, the data length is 1 byte, and the data (which is the report itself) is 0x01 (also highlighted). Pressing another button (say, "Right") results in a similar request; however, the report will be 0x02 this time. One can easily deduce that the report value encodes a movement direction. Pressing the remaining buttons in turn, we discover that 0x04 is reverse right, 0x08 is reverse, and so on.
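The pattern behind these captured values (0x01, 0x02, 0x04, 0x08, "and so on") is just one bit per button, which a few lines of plain Python can verify. This standalone sketch is ours, not part of the article's driver code, and extrapolates the remaining two buttons:

```python
# Direction codes as observed in the captured SET_REPORT payloads:
# one bit per button, in the order the values were deduced.
BUTTONS = ["forward", "right", "reverse_right",
           "reverse", "reverse_left", "left"]

def direction_code(button):
    """Report byte for a given button name: a 1 shifted by its position."""
    return 1 << BUTTONS.index(button)

print(direction_code("forward"))        # → 1  (0x01, as captured)
print(direction_code("right"))          # → 2  (0x02)
print(direction_code("reverse"))        # → 8  (0x08)
print(direction_code("left"))           # → 32 (extrapolated)
```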
The rule is simple: the direction code is a binary 1 shifted left by the button position in the KeUsbCar interface (if you count them clockwise). We can also spot periodic interrupt input requests for Endpoint 1 (0x81: 0x80 means it's an input endpoint; 0x01 is its address). What are they for? Besides buttons, KeUsbCar has a battery level indicator, so these requests are probably charge level reads. However, their values remain the same (0x05) unless the car is out of the garage. In this case, there are no interrupt requests, but they resume if we put the car back. We can suppose that 0x05 means "charging" (the toy is simple, and no charge level is really returned, only a flag). If we give the car enough time, the battery will fully charge, and interrupt reads will start to return 0x85 (0x05 with bit 7 set). It looks like the bit 7 is a "charged" flag; however, the exact meaning of other two flags (bit 0 and bit 2 that form 0x05) remains unclear. Nevertheless, what we have figured out so far is already enough to recreate a functional driver.

Wireshark captures Windows driver-originated commands.

Get to code

The program we are going to create is quite similar to its Windows counterpart, as you can easily see from the screenshot above. It has six arrow buttons and a charge level indicator that bounces when the car is in the garage (charging). You can download the code from GitHub (); the steering wheel image comes from. The main question is, how do we work with USB in Linux? It is possible to do it from userspace (subject to permission checks, of course; see the boxout below), and the libusb library facilitates this process. This library is written for use with the C language and requires the user to have a solid knowledge of USB. A simpler alternative is PyUSB: it strives to "guess" sensible defaults to hide the details from you, and it is pure Python, not C.
Internally, PyUSB can use libusb or some other backend, but you generally don’t need to think about it. You could argue that libusb is more capable and flexible, but PyUSB is a good fit for cases like ours, when you need a working prototype with minimum effort. We also use PyGame for the user interface, but won’t discuss this code here – though we’ll briefly visit it at the end of this section. Download the PyUSB sources from, unpack them and install with python setup.py install (possibly in a virtualenv). You will also need the libusb library, which should be available in your package manager. Now, let’s wrap the functionality we need to control a car in a class imaginatively named USBCar. import usb.core import usb.util class USBCar(object): VID = 0x0a81 PID = 0x0702 FORWARD = 1 RIGHT = 2 REVERSE_RIGHT = 4 REVERSE = 8 REVERSE_LEFT = 16 LEFT = 32 STOP = 0 We import two main PyUSB modules and define the direction values we’ve deduced from the USB traffic. VID and PID are the car ID taken from the output of lsusb. def __init__(self): self._had_driver = False self._dev = usb.core.find(idVendor=USBCar.VID, idProduct=USBCar.PID) if self._dev is None: raise ValueError("Device not found") In the constructor, we use the usb.core.find() function to look up the device by ID. If it is not found, we raise an error. The usb.core.find() function is very powerful and can locate or enumerate USB devices by other properties as well; consult for the full details. if self._dev.is_kernel_driver_active(0): self._dev.detach_kernel_driver(0) self._had_driver = True Next, we detach (unbind) the kernel driver, as we did previously for lsusb. Zero is the interface number. We should re-attach the driver on program exit (see the release() method below) if it was active, so we remember the initial state in self._had_driver. self._dev.set_configuration() Finally, we activate the configuration. This call is one of a few nifty shortcuts PyUSB has for us. 
The code above is equivalent to the following, however it doesn’t require you to know the interface number and the configuration value: self._dev.set_configuration(1) usb.util.claim_interface(0) def release(self): usb.util.release_interface(self._dev, 0) if self._had_driver: self._dev.attach_kernel_driver(0) This method should be called before the program exits. Here, we release the interface we claimed and attach the kernel driver back. Moving the car is also simple: def move(self, direction): ret = self._dev.ctrl_transfer(0x21, 0x09, 0x0200, 0, [direction]) return ret == 1 The direction is supposed to be one of the values defined at the beginning of the class. The ctrl_transfer() method does control transfer, and you can easily recognise bmRequestType (0x21, a class-specific out request targeted at the endpoint), bRequest (0x09, Set_Report() as defined in the USB HID specification), report type (0x0200, Output) and the interface (0) we saw in Wireshark. The data to be sent is passed to ctrl_transfer() as a string or a list; the method returns the number of bytes written. Since we expect it to write one byte, we return True in this case and False otherwise. The method that determines battery status spans a few more lines: def battery_status(self): try: ret = self._dev.read(0x81, 1, timeout=self.READ_TIMEOUT) if ret: res = ret.tolist() if res[0] == 0x05: return 'charging' elif res[0] == 0x85: return 'charged' return 'unknown' except usb.core.USBError: return 'out of the garage' At its core is the read() method, which accepts an endpoint address and the number of bytes to read. A transfer type is determined by the endpoint and is stored in its descriptor. We also use a non-default (smaller) timeout value to make the application more responsive (you won’t do it in a real program: a non-blocking call or a separate thread should be used instead). Device.read() returns an array (see the ‘array’ module) which we convert to list with the tolist() method. 
Then we check its first (and only) byte to determine the charge status. Remember that this is not reported when the car is out of the garage. In that case, the read() call will run out of time and throw a usb.core.USBError exception, as most PyUSB methods do. We (fondly) assume that the timeout is the only possible reason for the exception here. In all other cases we report the status as 'unknown'. Another class, creatively named UI, encapsulates the user interface – let’s do a short overview of the most important bits. The main loop is encapsulated in the UI.main_loop() method. Here, we set up a background (a steering wheel image taken from OpenClipart.org), display the battery level indicator if the car is in the garage, and draw arrow buttons (UI.generate_arrows() is responsible for calculating their vertices’ coordinates). Then we wait for an event, and if it is a mouse click, move the car in the specified direction with the USBCar.move() method described earlier. One tricky part is how we associate directions with arrow buttons. There is more than one way to do it, but in this program we draw two sets of arrows with identical shapes. The first one, with the red buttons you see on the screenshot, is shown to the user, while the second one is kept off-screen. Each arrow in that hidden set has a different colour, whose R component is set to a direction value. Outside the arrows, the background is filled with 0 (the USBCar.STOP command). When a user clicks somewhere in the window, we just check the R component of the pixel underneath the cursor in that hidden canvas, and act accordingly. The complete program with a GUI takes little more than 200 lines. Not bad for a device we didn’t even have the documentation for! That’s all folks! This concludes our short journey into the world of reverse engineering and USB protocols. The device for which we’ve developed a driver (or more accurately, a support program) was intentionally simple.
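The charge-status logic deduced from the interrupt reads boils down to a tiny standalone helper (flag meanings as inferred above; bits 0 and 2 remain unidentified):

```python
CHARGED_FLAG = 0x80  # bit 7: set once the battery is fully charged

def decode_status(report_byte):
    """Interpret the one-byte interrupt report from endpoint 0x81.

    None stands in for a timed-out read: no reports arrive at all
    while the car is out of the garage.
    """
    if report_byte is None:
        return 'out of the garage'
    if report_byte == 0x05:
        return 'charging'
    if report_byte == 0x05 | CHARGED_FLAG:   # 0x85
        return 'charged'
    return 'unknown'

print(decode_status(0x05))  # charging
print(decode_status(0x85))  # charged
print(decode_status(None))  # out of the garage
```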
However, there are many devices similar to this USB car out there, and many of them use a protocol that is close to the one we’ve reversed (USB missile launchers are a good example). Reversing a sophisticated device isn’t easy, but now you can already add Linux support for something like a desktop mail notifier. While it may not seem immediately useful, it’s a lot of fun.
https://www.linuxvoice.com/drive-it-yourself-usb-car-6/
What is the time complexity of LinkedList.getLast() in Java? Solution 1 It is O(1) and you should not have to cache it. The getLast method simply returns header.previous.element, so there is no computation and no traversal of the list. A linked list slows down when you need to find elements in the middle it since it starts one end and moves one element at a time. Solution 2 From the Java 6 source code: * @author Josh Bloch * @version 1.67, 04/21/06 * @see List * @see ArrayList * @see Vector * @since 1.2 * @param <E> the type of elements held in this collection */ public class LinkedList<E> extends AbstractSequentialList<E> implements List<E>, Deque<E>, Cloneable, java.io.Serializable { private transient Entry<E> header = new Entry<E>(null, null, null); private transient int size = 0; /** * Constructs an empty list. */ public LinkedList() { header.next = header.previous = header; } ... /** * Returns the first element in this list. * * @return the first element in this list * @throws NoSuchElementException if this list is empty */ public E getFirst() { if (size==0) throw new NoSuchElementException(); return header.next.element; } /** * Returns the last element in this list. * * @return the last element in this list * @throws NoSuchElementException if this list is empty */ public E getLast() { if (size==0) throw new NoSuchElementException(); return header.previous.element; } ... } so that's O(1) for both getFirst() & getLast() Solution 3 From the LinkedList docs: All of the operations perform as could be expected for a doubly-linked list. Operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index. It should be O(1) since a doubly-linked list will have a reference to its own tail. (Even if it doesn't explicitly keep a reference to its tail, it will be O(1) to find its tail.) Mark McDonald Developer Relations at Google. 
Home brewer. Updated on June 13, 2022. Original question (Mark McDonald): I have a private LinkedList in a Java class and will frequently need to retrieve the last element in the list. The lists need to scale, so I'm trying to decide whether I need to keep a reference to the last element when I make changes (to achieve O(1)) or if the LinkedList class does that already with the getLast() call. What is the big-O cost of LinkedList.getLast(), and is it documented? (i.e. can I rely on this answer, or should I make no assumptions and cache it even if it's O(1)?) Comment (Akh, over 11 years ago): You might have to look into the Java source code for implementation details, as Yaneeve did. You can attach the Java core library source code to your IDE.
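A quick way to convince yourself that both accessors are constant-time is to call them on a large list — neither call gets slower as the list grows, because each one just dereferences a neighbour of the internal header node (the list size below is arbitrary):

```java
import java.util.LinkedList;

public class GetLastDemo {
    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>();
        for (int i = 0; i < 1_000_000; i++) {
            list.add(i);
        }
        // Both calls read the tail/head reference directly -- no traversal.
        System.out.println(list.getFirst()); // 0
        System.out.println(list.getLast());  // 999999
    }
}
```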
https://9to5answer.com/what-is-the-time-complexity-of-linkedlist-getlast-in-java
The TinyShield Matrix LED Boards is a 6 x 9 grid of LEDs (54 total LEDs) that are individually addressable, which lets you scroll text, draw images and all sorts of other things. This TinyShield is compatible with the popular Arduino LoL Shield Library (Lots of LEDs Shield) created by Jimmie Rodgers. The LEDs are arranged using Charlieplexing, which is a technique to allow for control of multiple LEDs using fewer I/O signals. Available with Red, Green and Amber LED colour options. See our Tiny Circuits Range Here. 54 Top Facing LEDs in a 6×9 Matrix Ultra compact size and weight (smaller than a US Quarter!) Square Version: 20mm x 20mm (.787 inches x .787 inches) Max Height (from lower bottom TinyShield Connector to upper top of LEDs): 4.6mm (.181 inches) Weight: TBD grams (TBD ounces) Arduino pins 2, 3, 4, 5, 6, 7, 8, and 9 are used by this shield Code: Select all /* TEXT SAMPLE CODE for LOL Shield for Arduino Copyright 2009/2010 Benjamin Sonntag <benjamin@sonntag.fr> History: 2009-12-31 - V1.0 FONT Drawing, at Berlin after 26C "Charliplexing.h" #include "Font.h" #if defined(ARDUINO) && ARDUINO >= 100 #include "Arduino.h" #else #include "WProgram.h" #endif /* ----------------------------------------------------------------- */ /** MAIN program Setup */ void setup() // run once, when the sketch starts { LedSign::Init(); } /* ----------------------------------------------------------------- */ /** MAIN program Loop */ static const char test[]="HELLO WORLD! "; void loop() // run over and over again { for (int8_t x=DISPLAY_COLS, i=0;; x--) { LedSign::Clear(); for (int8_t x2=x, i2=i; x2<DISPLAY_COLS;) { int8_t w = Font::Draw(test[i2], x2, 0); x2 += w, i2 = (i2+1)%strlen(test); if (x2 <= 0) // off the display completely? x = x2, i = i2; } delay(80); } } The TinyShield Matrix LED shield can use the LOLShield library from Jimmie Rodgers: two modifications. The number of rows is different and the LED mapping is different. 
You can download a premodified version of this library here:
http://forum.hobbycomponents.com/viewtopic.php?f=50&t=1911&sid=aea50841389e5a7665b47f30a7ead3ac
Spring 5 and Servlet 4 – The PushBuilder Last modified: December 23, 2020 1. Introduction The Server Push technology — part of HTTP/2 (RFC 7540) — allows us to send resources to the client proactively from the server-side. This is a major change from HTTP/1.X pull-based approach. One of the new features that Spring 5 brings – is the server push support that comes with Jakarta EE 8 Servlet 4.0 API. In this article, we’ll explore how to use server push and integrate it with Spring MVC controllers. 2. Maven Dependency Let’s start by defining dependencies we're going to use: <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> <version>5.2.8.RELEASE</version> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <version>4.0.0</version> <scope>provided</scope> </dependency> The most recent versions of spring-mvc and servlet-api can be found on Maven Central. 3. HTTP/2 Requirements To use server push, we'll need to run our application in a container that supports HTTP/2 and the Servlet 4.0 API. Configuration requirements of various containers can be found here, in the Spring wiki. Additionally, we'll need HTTP/2 support on the client-side; of course, most current browsers do have this support. 4. PushBuilder Features The PushBuilder interface is responsible for implementing server push. In Spring MVC, we can inject a PushBuilder as an argument of the methods annotated with @RequestMapping. At this point, it's important to consider that – if the client doesn't have HTTP/2 support – the reference will be sent as null. Here is the core API offered by the PushBuilder interface: - path (String path) – indicates the resource that we're going to send - push() – sends the resource to the client - addHeader (String name, String value) – indicates the header that we'll use for the pushed resource 5. 
Quick Example To demonstrate the integration, we'll create the demo.jsp page with one resource — logo.png: <%@ page <title>PushBuilder demo</title> </head> <body> <span>PushBuilder demo</span> <br> <img src="<c:url" alt="Logo" height="126" width="411"> <br> <!--Content--> </body> </html> We'll also expose two endpoints with the PushController controller — one that uses server push and another that doesn't: @Controller public class PushController { @GetMapping(path = "/demoWithPush") public String demoWithPush(PushBuilder pushBuilder) { if (null != pushBuilder) { pushBuilder.path("resources/logo.png").push(); } return "demo"; } @GetMapping(path = "/demoWithoutPush") public String demoWithoutPush() { return "demo"; } } Using the Chrome Development tools, we can see the differences by calling both endpoints. When we call the demoWithoutPush method, the view and the resource is published and consumed by the client using the pull technology: When we call the demoWithPush method, we can see the use of the push server and how the resource is delivered in advance by the server, resulting in a lower load time: The server push technology can improve the load time of the pages of our applications in many scenarios. That being said, we do need to consider that, although we decrease the latency, we can increase the bandwidth – depending on the number of resources we serve. It's also a good idea to combine this technology with other strategies such as Caching, Resource Minification, and CDN, and to run performance tests on our application to determine the ideal endpoints for using server push. 6. Conclusion In this quick tutorial, we saw an example of how to use the server push technology with Spring MVC using the PushBuilder interface, and we compared the load times when using it versus the standard pull technology. As always, the source code is available over on GitHub.
https://www.baeldung.com/spring-5-push
In the coming weeks we will be changing the default slippage model used to simulate the price at which orders are filled in the live market. Slippage is the difference between the price you see in the market when you place an order and the fill price you actually get. The new model is more accurate and more consistent with the tools we use internally to evaluate algorithms for capital allocations. This model will be the new default in all parts of the Quantopian platform, including backtesting, paper trading, and the upcoming new contest format. Previously, the default model (the VolumeShareSlippage model) applied slippage to your orders as a function of two input parameters, a price impact constant and a participation volume limit. This model is conceptually sound, but research on our own live trading data has shown it to systematically underestimate slippage in many important cases. This makes some high-turnover strategies look better in simulation than we see in actual trading. For this reason we will be replacing our default model with a new FixedBasisPointsSlippage model. This model takes two input parameters: the impact to apply as a percent of the dollar amount of an order placed and the volume limit which determines how many shares of an order can be filled each minute. Our default will be 5 basis points (0.05%) charged on the order amount in dollars. In this new model, orders will fill at a price that is 0.05% worse than the close price of the minute following the order. Buys will fill at a price that is 0.05% higher than the close of the next minute, while sells will fill at a price that is 0.05% lower. The default volume limit will be 10% of the volume in each minute. An order can fill over multiple minute bars, but will be capped at 10% of the volume each minute. Attached is an implementation of this slippage model that you can use in your algorithms before we change the default. 
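The fill-price arithmetic described above is simple enough to sketch outside of the backtester (the function names here are illustrative, not part of the Quantopian API):

```python
def fixed_bps_fill_price(next_close, side, basis_points=5):
    """Fill price under the fixed-basis-points slippage model:
    buys fill 5 bps above the next minute's close, sells 5 bps below."""
    impact = basis_points / 10000.0
    if side == "buy":
        return next_close * (1 + impact)
    return next_close * (1 - impact)

def max_shares_this_minute(minute_volume, volume_limit=0.10):
    """At most 10% of a minute bar's volume can fill in that minute;
    the remainder of the order carries over to later bars."""
    return int(minute_volume * volume_limit)

print(f"{fixed_bps_fill_price(100.00, 'buy'):.2f}")   # 100.05
print(f"{fixed_bps_fill_price(100.00, 'sell'):.2f}")  # 99.95
print(max_shares_this_minute(50_000))                 # 5000
```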
This model will be more accurate than the old one, particularly for stocks in the QTradableStocksUS. Your backtests will be more in line with our evaluation process, and more predictive of their real-world behavior. This 5bps fixed slippage model implicitly makes a number of simplifying assumptions, including the following:

- Slippage doesn't vary with time of day. (In reality, slippage is typically worse when spreads are widest, in the first 30 minutes of the day.)
- Slippage is the same in all stocks. (In reality, slippage is typically more severe in illiquid, thinly traded stocks.)
- Slippage doesn't vary with participation rate. (In reality, slippage is more severe with high participation rates. Quantopian uses execution tactics and order types that are designed to spread your order out over time, lowering the participation rate and decreasing slippage costs.)

We are actively working on addressing each of these aspects in a next generation market impact model that is still in the research and validation phase.

def initialize(context):
    # Initialize slippage settings given the parameters of our model
    set_slippage(slippage.FixedBasisPointsSlippage())
https://www.quantopian.com/posts/changes-coming-to-the-default-slippage-model
Better storage. Better transfers. Better internet. When you need file storage for your project, website, or application, Web3.Storage is here for you. All it takes to get started storing on the decentralized web is a free API token — no need to wrestle with complicated details. In the past, storing data on the decentralized web wasn't always easy — but that's where Web3.Storage comes in. Most data on the internet today is hosted by massive storage providers like Amazon, Google, and Microsoft. This makes it simpler for developers to store application data, but big corporate platforms like these create silos by locking developers and users alike into walled gardens of services. What's more, as the market continues to consolidate, a small oligopoly of providers are essentially becoming the storage backbone of the internet. One solution to this problem is using decentralized storage instead of big, corporate platforms to store data for apps and services. However, decentralized storage can be difficult to manage and add extra time and effort to an already crowded developer workflow — for example, most decentralized storage services need you to compile your data into a specific format, find a storage provider to host your data, buy some cryptocurrency to pay the storage provider, and then send your data across the internet. This is where Web3.Storage comes in. With Web3.Storage, you get all the benefits of decentralized storage technologies with the frictionless experience you expect in a modern dev workflow. All you need to use Web3.Storage is an API token and your data. Under the hood, Web3.Storage is backed by the provable storage of Filecoin and makes data accessible to your users over the public IPFS network — but when it comes down to building your next application, service, or website, all you need to know is that Web3.Storage makes building on decentralized technologies simple. And best of all, Web3.Storage is free. 
Quickstart Ready to get started using Web3.Storage right now? Get up and running in minutes by following this quickstart guide. In this guide, we'll walk through the following steps:

- Creating a Web3.Storage account.
- Getting a free API token.
- Creating and running a simple script to upload a file.
- Getting your uploaded file using your browser or curl.

This guide uses Node.js since it's the fastest way to get started using the Web3.Storage JavaScript client programmatically, but don't worry if Node isn't your favorite runtime environment — or if you'd rather not do any coding at all. You can also use Web3.Storage in the following ways:

- Work with the API methods in the JavaScript client library using the JS runtime of your choice.
- Upload and retrieve files directly from your Account page on the Web3.Storage website.

PREREQUISITES You'll need Node version 14 or higher and NPM version 7 or higher to complete this guide. Check your local versions like this:

node --version && npm --version
> v16.4.2
> 7.18.1

You need a Web3.Storage account to get your API token and manage your stored data. You can sign up for free using your email address or GitHub.

Email:
- Go to web3.storage/login to get started.
- Enter your email address.
- Check your inbox for a verification email from Web3.Storage, and click the Log in button in the email.
- You're all set!

GitHub:
- Go to web3.storage/login to get started.
- Click GitHub on the Login screen.
- Authorize Web3.Storage when asked by GitHub.
- You're all set!

Now that you're signed up and logged in, it's time to get your API token. ↓ Get an API token It only takes a few moments to get a free API token from Web3.Storage. This token enables you to interact with the Web3.Storage service without using the main website, enabling you to incorporate files stored using Web3.Storage directly into your applications and services.
- Hover over Account and click Create an API Token in the dropdown menu to go to your Web3.Storage API Tokens page. - Enter a descriptive name for your API token and click Create. - Make a note of the Token field somewhere secure where you know you won't lose it. You can click Copy to copy your new API token to your clipboard. Keep your API token private Do not share your API token with anyone else. This key is specific to your account. Now that you have your new API token, it's time to use a simple script to upload a file to Web3.Storage. ↓ Create the upload script You can use the Web3.Storage site to upload files, but it's also quick and easy to create and run a simple upload script — making it especially convenient to add large numbers of files. This script contains logic to upload a file to Web3.Storage and get a content identifier (CID) back in return. CAUTION All data uploaded to Web3.Storage is available to anyone who requests it using the correct CID. Do not store any private or sensitive information in an unencrypted form using Web3.Storage. Create a folder for this quickstart project, and move into that folder: mkdir web3-storage-quickstart cd web3-storage-quickstart Create a file called put-files.jsand paste in the following code: import process from 'process' import minimist from 'minimist' import { Web3Storage, getFilesFromPath } from 'web3.storage' async function main () { const args = minimist(process.argv.slice(2)) const token = args.token if (!token) { return console.error('A token is needed. 
You can create one on') }
  if (args._.length < 1) {
    return console.error('Please supply the path to a file or directory')
  }
  const storage = new Web3Storage({ token })
  const files = []
  for (const path of args._) {
    const pathFiles = await getFilesFromPath(path)
    files.push(...pathFiles)
  }
  console.log(`Uploading ${files.length} files`)
  const cid = await storage.put(files)
  console.log('Content added with CID:', cid)
}

main()

Create another file called package.json and paste in the following code:

{
  "name": "web3-storage-quickstart",
  "version": "0.0.0",
  "private": true,
  "description": "Get started using web3.storage in Node.js",
  "type": "module",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "minimist": "^1.2.5",
    "web3.storage": "^3.1.0"
  },
  "author": "YOUR NAME",
  "license": "(Apache-2.0 AND MIT)"
}

Save both files, and then run npm install from your project folder:

npm install

This step may take a few moments. Once it's done, the command should output something like this:

added 224 packages, and audited 225 packages in 14s
40 packages are looking for funding
  run `npm fund` for details
found 0 vulnerabilities

Your script is good to go! Next, we'll run the script to upload a file. ↓ Run the script Now that you've got your script ready to go, you just need to run it in your terminal window using node. Run the script by calling node put-files.js, using --token to supply your API token and specifying the path and name of the file you want to upload. If you'd like to upload more than one file at a time, simply specify their paths/names one after the other in a single command.
Here's how that looks in template form: node put-files.js --token=<YOUR_TOKEN> ~/filename1 ~/filename2 ~/filenameN Once you've filled in your details, your command should look something like this: node put-files.js --token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJkaWQ6ZXRocjoweGZFYTRhODlFNUVhRjY5YWI4QUZlZUU3MUE5OTgwQjFGQ2REZGQzNzIiLCJpc3MiOiJ3ZWIzLXN0b3JhZ2UiLCJpYXQiOjE2MjY5Njk3OTY1NTQsIm5hbWUiOiJib25maXJlIn0.0S9Ua2FWEAZSwaemy92N7bW8ancRUtu4XtLS3Gy1ouA ~/hello.txt Multiple files You can upload a whole directory full of files at once by giving the script the path to a local directory. You can also upload multiple files by passing in more than one file path when you run the script. The command will output a CID: Content added with CID: bafybeig7sgl52pc6ihypxhk2yy7gcllu4flxgfwygp7klb5xdjdrm7onse Make a note of the CID, which looks like bafyb.... You'll need it in order to get your file. Get the status of your upload It's possible to get a variety of details about your upload, including its CID, upload date, size on the network, and info on IPFS pinning and Filecoin storage deals, by using the status() method within the JavaScript client library. Check out the Query how-to guide for more information. Next up, we'll go over two methods for you to retrieve your data from Web3.Storage ↓ Get your file You've already done the most difficult work in this guide — getting your files from Web3.Storage is simple. - Go to, replacing YOUR_CIDwith the CID you noted in the last step. - You should see a link to your file. If you uploaded multiple files at once, you'll see a list of all the files you uploaded. - Click on a file's link to view it in your browser! Finding your files again If you ever need to find your files again, and you've forgotten the CID, head over to the Files table in Web3.Storage:
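Incidentally, the CID the script prints is a CIDv1 string: the leading "b" is the multibase prefix for lowercase base32 encoding. A rough shape check is easy to write (illustrative only — this is not a real CID parser; use the multiformats package for proper validation):

```javascript
// Loose format check for a base32 CIDv1 like the one put() returns:
// multibase prefix "b" followed by lowercase base32 characters (a-z, 2-7).
function looksLikeCidV1(s) {
  return /^b[a-z2-7]{20,}$/.test(s)
}

console.log(looksLikeCidV1('bafybeig7sgl52pc6ihypxhk2yy7gcllu4flxgfwygp7klb5xdjdrm7onse')) // true
console.log(looksLikeCidV1('not-a-cid')) // false
```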
https://web3.storage/docs/intro/
GUYS!! Cheap file storage online?? I think AWS S3 immediately jumps to mind for most people :P I’m going to be using it to store my training images for GAN. I’ve never set up one before, so I was pleasantly surprised to find that it wasn’t too painful at all. For people who want to start your own, read on! So I began here, with the in-house AWS tutorial - it’s pretty good! Once your bucket is up and running, how do you upload your files via code (not the console)? In comes boto3! Use the quickstart guide here: Do your pip install awscli, AND install AWS CLI (AWS command line). I wanted to see if there were other options without using AWS CLI, but… nope. so just install it haha. Type in aws configure, input your access key id and access key, and for people who selected Singapore as their region, input ap-southeast-1 (you can find the relevant names for other regions here if you need). There was a next field (I can’t remember!!) but just press enter to skip it if you don’t need it (that’s what I did) and you’re done with the configuration!! Get yourself into jupyter notebook, and here we go: import boto3 s3 = boto3.resource('s3') # tells boto which aws resource you are using for bucket in s3.buckets.all(): # prints out all the bucket names in the s3 print(bucket.name) s3.Bucket('picture_bucket').upload_file('/mypics/picture.jpg','picture.jpg') # uploads the file into the bucket from the given file path, and saves it as the name specified. DONE DONE DONEEEEE now to get that spider going…
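One convenience once objects are uploaded: if you make them public (or generate presigned URLs), each object is reachable over plain HTTPS at a predictable virtual-hosted-style address. The helper below just builds that string — the bucket name is hypothetical (real bucket names must be DNS-compatible, so no underscores):

```python
def s3_object_url(bucket, key, region="ap-southeast-1"):
    """Virtual-hosted-style URL for an object in an S3 bucket."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

print(s3_object_url("my-gan-training-data", "picture.jpg"))
# https://my-gan-training-data.s3.ap-southeast-1.amazonaws.com/picture.jpg
```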
https://www.yinglinglow.com/blog/2017/12/29/AWS-S3
Tutorial: Arrow Magic: Metadata Dependent Page Generation - Supporting a “published: false” attribute on pages - Don’t generate unpublished pages at all - Timed releases Supporting a “published: false” attribute on pages Many content management systems or blog platforms support some kind of workflow that display articles differently or not at all depending on which state the article is in, for example whether it has a “published” attribute or not. Hakyll has no built-in support for anything like this, but since its compilers are just arrows, it is easy to implement arbitrary metadata dependent behaviour for rendering pages. Let’s start by adding support for a “published” attributes to the simpleblog example. We want to consider a blog post published if it has a published metadata element that does not have the value false. A function to test for this is simple: isPublished :: Page a -> Bool isPublished p = let published = getField "published" p in published /= "" && published /= "false" The next step is to write a function that tags a page with its published status, which can be either unpublished or published, using the standard Either datatype and then transform this function into a Compiler. The latter can be done with the standard arr function from Control.Arrow, which lifts a function into an arrow: isPagePublished :: Compiler (Page a) (Either (Page a) (Page a)) isPagePublished = arr (\p -> if isPublished p then Right p else Left p) For the next processing steps we now need a compiler that takes an Either (Page a) (Page a) instead of the usual Page a as an input. But the former can be built up from the latter using some standard combinators from the Control.Arrow library. The simplest one is |||, which takes two compilers (arrows) with the same output type and returns a new compiler that takes an Either of the input types of the Compilers as an input. 
Maybe we just want to render our unpublished posts with a big warning that they are provisional, so we just want to render the unpublished Left pages with a different template than the published Right pages:

match "posts/*" $ do
    route $ setExtension ".html"
    compile $ pageCompiler
        >>> isPagePublished
        >>> (applyTemplateCompiler "templates/embargo-post.html" |||
             applyTemplateCompiler "templates/post.html")
        >>> applyTemplateCompiler "templates/default.html"
        >>> relativizeUrlsCompiler

With the conditional rendering in place, the next step is to hide the unpublished posts from the homepage and the list of posts. Both lists are generated from the results of a requireAllA call. The last argument of requireAllA is a Compiler, and requireAllA passes it a pair consisting of the currently rendered page and a list of all the required pages. All we have to do to suppress the pages is to write a Compiler that takes such a pair as input, leaves the first element of the pair unchanged, filters out all the unpublished pages from the list in the second element of the pair, and then pass the output from this compiler to the existing compilers handling the list generation for the index and posts pages. Again, we can use a function from Control.Arrow to build this compiler from simpler ones; in this case it is ***, which combines two arrows into one arrow from pairs to pairs. For our purposes, we combine the identity arrow, which leaves its input unchanged, and an ordinary filter on a list lifted into the compiler arrow:

filterPublished :: Compiler (Page a, [Page b]) (Page a, [Page b])
filterPublished = id *** arr (filter isPublished)

All that remains to do is to chain this compiler in front of the existing compilers passed to requireAllA in the code for posts.html

>>> requireAllA "posts/*" (filterPublished >>> addPostList)

and for index.html:

>>> requireAllA "posts/*" (filterPublished >>> (id *** arr (take 3 . reverse . sortByBaseName)) >>> addPostList)

You may have noticed that the code for the index page uses the same id *** something construct to extract some elements from the list of all posts.

Don't generate unpublished pages at all

The above code will treat unpublished posts differently and hide them from all lists of posts, but they will still be generated, and someone who knows their URLs will still be able to access them. That may be what you need, but sometimes you might want to suppress them completely. The simplest way to do so is to leave the rendering pipeline for "posts/*" unchanged and just add the isPagePublished compiler at the end. This will not compile, since hakyll knows how to write a Page String, but not how to write an Either (Page String) (Page String). But that can be amended by a simple type class declaration:

instance Writable b => Writable (Either a b) where
    write p (Right b) = write p b
    write _ _         = return ()

Now hakyll will happily generate published pages and ignore unpublished ones. This solution is of course slightly wasteful, as it will apply all the templates to an unpublished page before finally discarding it. You can avoid this by using the +++ function, which does for the sum datatype Either what *** does for the product type pair:

match "posts/*" $ do
    route $ setExtension ".html"
    compile $ pageCompiler
        >>> isPagePublished
        >>> (id +++ (applyTemplateCompiler "templates/post.html"
                     >>> applyTemplateCompiler "templates/default.html"
                     >>> relativizeUrlsCompiler))

The other problem with this solution is more severe: hakyll will no longer generate the index and posts pages due to a rare problem in haskell land: a runtime type error. Hakyll tries to be smart and reuse the parsed pages from the match "posts/*" when processing the requireAllA "posts/*" calls by caching them. But the compilers there still expect a list of pages instead of a list of eithers, so we have to replace filterPublished with something that works on the latter.
Luckily (or, probably, by design), Data.Either provides just the function we need, so the new filtering compiler is actually shorter than the original, even though it has a more intimidating type:

filterPublishedE :: Compiler (Page a, [Either (Page b) (Page b)]) (Page a, [Page b])
filterPublishedE = id *** arr rights

Timed releases

Exploiting the fact that compilers are arrows, we can do more mixing and matching of compilers to further refine how hakyll deals with page attributes like published. Maybe you want cron to update your blog while you are on vacation, so you want posts to be considered published if the published field is either true or a time in the past. If you happen to live in the UK in winter or enjoy doing time zone calculations in your head, your new function to test if a page is published, and the compiler derived from it, might then look like this:

isPublishedYet :: Page a -> UTCTime -> Bool
isPublishedYet page time =
    let published = getField "published" page
    in  published == "true" || after published
  where
    after published =
        let publishAt = parseTime defaultTimeLocale "%Y-%m-%d %H:%M" published
        in  fromMaybe False (fmap (\embargo -> embargo < time) publishAt)

isPagePublishedYet :: Compiler (Page a, UTCTime) (Either (Page a) (Page a))
isPagePublishedYet = arr (\(p,t) -> if isPublishedYet p t then Right p else Left p)

This compiler has a pair of a page and a time as its input, and we can use yet another function from Control.Arrow to construct a compiler that generates the input for it, the function &&&. It takes two compilers (arrows) with the same input type and constructs a compiler from that type to a pair of the output types of the two compilers. For the first argument we take the pageCompiler which we already call at the beginning of the page compilation. The second argument should be a compiler with the same input type as pageCompiler that returns the current time.
But the current time lives in the IO monad and does not at all depend on the resource the current page is generated from, so we have to cheat a little bit by calling unsafeCompiler with a function that ignores its argument and returns an IO UTCTime, which unsafeCompiler will unwrap for us:

match "posts/*" $ do
    route $ setExtension ".html"
    compile $ (pageCompiler &&& (unsafeCompiler (\_ -> getCurrentTime)))
        >>> isPagePublishedYet
        >>> (id +++ ( ... as above ...))

This is all we have to change if we don't generate unpublished pages at all. If we just hide them from the lists, the call to ||| discards the information that a page is not (yet) published that was encoded in the Either. In that case we could use the setField function from Hakyll.Web.Page.Metadata to rewrite the published field of the left and right pages in isPagePublished(Yet) to canonical values that the original isPublished function called from filterPublished understands:

isPagePublishedYet = arr (\(p,t) -> if isPublishedYet p t then pub p else unpub p)
  where
    pub p   = Right $ setField "published" "true" p
    unpub p = Left  $ setField "published" "false" p

The final version of this code can be found in the timedblog example, together with the required import statements.

Other tutorials

The other tutorials can be found here.

Helping out

Hakyll is an open source project, and one of the hardest parts is writing correct, up-to-date, and understandable documentation. Therefore, the authors would really appreciate it if you would give feedback about the tutorials, and especially report errors or difficulties you encountered. Thanks! If you run into any problems, all questions are welcome in the above Google group, or you could try the IRC channel, #hakyll on freenode.
http://jaspervdj.be/hakyll/tutorials/03-arrows.html
Shift all indices in numpy array

Answer excerpts:
- Hi, it is pretty simple, to be …
- If you have matplotlib, you can do: import …
- Good question, glad you brought this up. I …
- You can use np.zeros(4,3) This will create a 4 …
- just index it as you normally would. …
- Use the .shape to print the dimensions …
- Slicing is basically extracting particular set of …
- numpy.set_printoptions(threshold='nan')
- You have a 0-dimensional array of object …
- my_list = [1,2,3,4,5,6,7] len(my_list) # 7 The same works for …
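The answers above are all truncated, so as an illustration of what "shifting all indices" typically means, here is a small standard-library sketch. The function name shift and the zero fill value are our own assumptions; NumPy's np.roll is the usual one-liner for the circular variant, which wraps elements around instead of dropping them:

```python
def shift(seq, k, fill=0):
    """Shift a list right by k positions (left if k is negative).

    Vacated slots are filled with `fill`; elements shifted past the
    end are dropped (unlike np.roll, which wraps them around).
    """
    n = len(seq)
    if n == 0 or k == 0:
        return list(seq)
    if abs(k) >= n:
        return [fill] * n
    if k > 0:
        return [fill] * k + list(seq[:-k])
    return list(seq[-k:]) + [fill] * (-k)


print(shift([1, 2, 3, 4, 5], 2))   # → [0, 0, 1, 2, 3]
print(shift([1, 2, 3, 4, 5], -1))  # → [2, 3, 4, 5, 0]
```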
https://www.edureka.co/community/39277/shift-all-indices-in-numpy-array
Hi! So I am trying to write some code that tells a servo to spin for x amount of time every 24 hours. I am not sure how to do this, do you have any ideas? I am completely new to coding, so sorry if this is a dumb question.

Hi! @johanwik Installation and Troubleshooting is for problems with the Arduino itself, NOT your project. It says so in the description of the section. Therefore I have moved your post here. You may want to read this before you proceed: how to get the best out of this forum.

Generally servos do not spin; they only turn over a limited range, normally under 240 degrees, unless you have a so-called "continuous servo", which is not a servo at all but a normal motor. You need to be a lot more specific about what your project is about before anything meaningful can be discussed.

A few things to say before going into details:

Typical servos don't "spin", they wiggle from side to side. So if you really do mean spin, like a wheel, make sure you get what's called a "continuous rotation" servo. (edit... as mentioned by previous poster)

Do you need this to happen at a particular time of day, like 5 past 3 every afternoon? If so you will need to get either an RTC (Real Time Clock) or be online to use internet time. If you are just thinking of 24 hours since last time, you could use the internal timer of an Arduino, but you will need to look at the accuracy. You'll also need to be aware that the time starts over if the Arduino loses power...

Can you tell us the application? What do you want your project to accomplish? Thanks... Tom..

To answer as unspecifically as you asked: write a routine that creates the servo control signal according to the specs in the datasheet of the servo. Regularly read out the time from a real-time clock (RTC) and compare the actual time with the time to spin the servo. Does this description help? No, not at all. It is the same situation for us with your vague description. You have to describe your project much more specifically:
- what servo?
  (RC servo, "real" servo motor?)
- is there a requirement for how fast the servo should spin?
- how long, in time, or in rotations in the case of an RC servo? what angle? at what speed?
- which exact type of microcontroller are you using?

Additional information that helps to adapt the answers to your knowledge:
- how much do you know about electronics? voltage? current? resistance?

best regards Stefan

Hi @TomGeorge! I am building an automatic pet feeder, so I will need my continuous servo to spin an auger-like thing inside of a piece of PVC at 3 o'clock daily. Thanks!

Hi, sorry for the unspecified variables. I have a continuous servo and there are no requirements about how fast it should spin. I would like it to spin for 10 seconds. Right now, I am only using a servo and an Arduino Uno, and I know a bit about electronics, but not much about coding. I am using 3.3 volts, but I do not know how to measure resistance or current on an Arduino.

How fast, for how long?

Full speed for 10 seconds.

Here is a crude way to do it. Turn it on at 3 PM and it will feed around 3 PM each day. The clock will probably drift a few minutes a day, but it should be OK for a while. You can compensate for most of the drift. For example, if the feedings are averaging 3 minutes and 18 seconds earlier each day, add 3 minutes and 18 seconds to the timer interval.
if (currentMillis - lastMillis >= 24 * HOURS + (3 * MINUTES + 18 * SECONDS))

#include <Servo.h>

const unsigned long SECONDS = 1000;
const unsigned long MINUTES = 60 * SECONDS;
const unsigned long HOURS = 60 * MINUTES;

Servo AugerServo;
const byte AugerServoPin = 4;
const int StopAngle = 90;

unsigned long lastMillis = 0;

void Feed()
{
  AugerServo.write(StopAngle);
  AugerServo.attach(AugerServoPin);
  AugerServo.write(180);  // Full forward (use 0 for the other direction)
  delay(10 * SECONDS);
  AugerServo.write(StopAngle);
  AugerServo.detach();
}

void setup()
{
  Feed();  // Feed once on power up
}

void loop()
{
  unsigned long currentMillis = millis();
  // Feed every 24 hours after power up
  if (currentMillis - lastMillis >= 24 * HOURS)
  {
    lastMillis += 24 * HOURS;
    Feed();
  }
}

@johnwasser Thanks so much, I just got it working!

If the servo is bigger than 4 grams (9 g, 12 g, 20 g, etc.), it needs its own power supply. An Arduino can not deliver enough current for a bigger servo under load.

If you can afford to buy a Wemos D1 Mini (ESP8266) or a nodeMCU board (ESP8266): these little boards can be programmed with the Arduino IDE just the same way as an Arduino Uno after installing some additional files. This means these boards can do the job of the complete Arduino. These boards have WLAN onboard and you can connect them to your home WLAN. This always gives you the actual time. If you can afford a little more money, an ESP32 board: the ESP32 has WLAN and Bluetooth.

best regards Stefan
https://forum.arduino.cc/t/servo-motor-code-problem/927611
Tutorial: Debugging from the Python Shell In addition to launching code to debug from Wing's menu bar and Debug menu, it is also possible to debug code that is entered into the Python Shell and Debug Console. Enable this now by clicking on the bug icon in the top right of the Python Shell. Once this is done, the status message at the top of the Python Shell should change to include Commands will be debugged and an extra margin is shown in which you can set breakpoints. Wing will reach those breakpoints, as well as any breakpoints in editors for code that is invoked. Any exceptions will be reported in the debugger. Let's try this out. First stop any running debug process with the Stop icon in the toolbar. Then paste the following into the Python Shell and press Enter so that you are returned to the >>> prompt: def test_function(): x = 10 print(x) x += 5 y = 20 print(x+y) Next place a breakpoint on the line that reads print(x) by clicking in the breakpoint margin to the left of the prompt on that line. Then type this into the Python Shell and press Enter: test_function() Wing should reach the breakpoint on print(x). You can now work with the debugger in the same way that you would if you had launched code from the toolbar or Debug menu. Try stepping and viewing the values of x and y as they change, either in the Stack Data tool or by hovering the mouse over the variable names. Take a look at the stack in the Call Stack or Stack Data tool to see how stack frames that occur within the Python Shell are listed. You can move up and down the stack just as you would if your stack frames were in an editor. Notice that if you step off the end of the call, you will return to the shell prompt. If you press the Stop item in the toolbar or select Stop Debugging from the Debug menu, Wing will complete execution of the code without debug and return you to the >>> prompt. 
Note that the code is still executed to completion in this case because there is no way to simply abandon a number of stack frames in the Python interpreter.

Recursive Debugging

By default Wing will not return you to the >>> prompt until your code has finished executing. In Wing Pro, it is possible to enable recursive debugging. This is disabled by default because it can be confusing for users that don't understand it. To try this out, check the Enable Recursive Debug item in the Options menu in the Python Shell. Then type test_function() again in the Python Shell, or use the up arrow to retrieve it from command history. You will see that the shell returns immediately to the >>> prompt even though you are now at the breakpoint you set earlier on print(x). The message area in the Python Shell indicates that you are debugging recursively and gives you the level to which you have recursed. For example Debugging recursively (R=2) indicates two levels of recursive debugging. Now enter test_function() again and then press Enter. This is essentially the same thing as invoking test_function() from the line at which the debugger is currently paused, in this case within test_function itself. Try doing this several times. Each time, another level of recursive debugging will be entered. Look at the Call Stack tool and go up and down the stack to better understand what is happening. Now if you press Continue in the toolbar or use Start / Continue in the Debug menu you will exit one level of recursion. Similarly, Stop exits one level of recursion without debugging the remainder of that recursive invocation. See Debugging Code in the Python Shell for details.
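If you want more stack frames to explore than test_function provides, a self-recursive function works well for experimenting in the Python Shell; this snippet is our own illustration, not part of the Wing documentation:

```python
def countdown(n):
    """Recursive function: each call adds a stack frame to inspect."""
    if n <= 0:
        print("liftoff")
        return 0
    print(n)
    # Set a breakpoint on the next line, then walk the frames in the
    # Call Stack tool; the return value counts the recursive calls.
    return countdown(n - 1) + 1


frames = countdown(3)  # prints 3, 2, 1, liftoff
print(frames)          # → 3
```

Paste the function into the shell, set a breakpoint on the recursive call, and invoke countdown(3) to get several nested frames to move up and down through.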
https://wingware.com/doc/intro/tutorial-debugging-shell
Hi Neeraja,

If these are available in your package, when you do F4 in your SEGW project explorer, do you see no entries? If yes, this can be due to authorization in the target system. Check SU53 for your user ID; it should have S_SERVICE and S_DEVELOP. Also, after every transport, remember to clean the front-end cache with the report below, executed in the front-end system. Otherwise, please refer to Andre's blog on proper transports.

Regards, Tejas

Check out my blog that covers how to transport SAP Gateway services.

In the screenshot we can see that only 500 projects are shown, and these are probably limited to the SAP namespaces. Have you used the F4 help "ZZ*" when trying to open your projects in SEGW?
https://answers.sap.com/questions/706068/gw-services-are-not-available.html
Bug Description Since Kernel 3.0 I can not watch TV with my MyGica S870 USB Tuner. I use VLC. Usually I can not see any channel. Rarely I can see a channel but then stops working. This problem does not happen with 2.6.38 Kernel. All this I said I tested with Kubuntu 64bits: 10.04 Lucid, 11.04 Natty and 11.10 Oneiric. *Additional information: $ lsmod | grep -i dvb dvb_usb_dib0700 114669 0 dib7000p 39109 1 dvb_usb_dib0700 dib0090 33392 2 dvb_usb_dib0700 dib7000m 23415 1 dvb_usb_dib0700 dib0070 18434 1 dvb_usb_dib0700 dvb_usb 24444 1 dvb_usb_dib0700 dib8000 43019 2 dvb_usb_dib0700 dvb_core 110616 3 dib7000p, dib3000mc 23392 1 dvb_usb_dib0700 dibx000_common 14574 5 dvb_usb_ rc_core 26963 11 rc_dib0700_ $uname -r 3.0.0-7-generic The driver is in the package "linux- Possible firmware used: dvb-usb- dvb-usb- *Similar problems related: http:// https:/ Well, I do not understand very well the previous automated message. I do not think apport may add useful information to this bug, and so you can see on the bugzilla.kernel.org link, is a confirmed bug. This should be the same issue stated at http:// Patches are available ( http:// I just bought a Hauppauge Nova-TD, which doesn't seem to work and uses the same module, so I'll test the patches. Hmm, mine seems to work fine after all. I have installed Ubuntu 11.10 64bit and I have problem with this bug. Last kernel 3.0.0.12 and I have Winfast DTV Dongle (dib0700). In dmesg I have this: dib0700: tx buffer length is larger than 4. Not supported Could someone patch the kernel and build the packages for us to see if it works? Otherwise, Ubuntu Oneiric will be released with a kernel where this popular chip does not work. Thanks. Would it be possible for you to test the latest upstream kernel? It will allow additional upstream developers to examine the issue. Refer to https:/ Thanks in advance. the latest Oneiric kernel (3.0.0-12.20) and the mainline kernel (3.1.0- "dmesg" shows: dib0700: tx buffer length is larger than 4. 
Not supported. The link below mentions that the patch fixes the problem: http:// In the link I had written earlier and is now impossible to enter (https:/ 3.0.0-12.20, this issue still exists. And after applying the patches on top of 3.0.0-12.20, this issue is gone. @Jesse Can you attach the patches to this bug report? @Joseph, Patches can be pulled in by git pull http:// Three commits: http:// http:// http:// Hello, might my problem with the installation of a Cinergy T Stick Black also be a kernel related issue ? I am using 11.10 (32 bit) and get errors which don't seem to occur with 64 bit versions. Based on the installation description under : http:// I get the messages : . make -C /usr/src/ make[1]: Betrete Verzeichnis '/usr/src/ Building modules, stage 2. MODPOST 1 modules WARNING: "__udivdi3" [/home/ WARNING: "__umoddi3" [/home/ WARNING: "__divdi3" [/home/ make[1]: Verlasse Verzeichnis '/usr/src/ ubuntu@ cp dvb-usb-rtl2832u.ko /lib/modules/`uname -r`/kernel/ depmod -a . which later results via : dmesg | tail -n 30 . . [ 47.069721] EXT4-fs (sda3): re-mounted. Opts: errors= [22954.109240] usb 1-4: USB disconnect, device number 3 [23156.272071] usb 1-4: new high speed USB device number 5 using ehci_hcd [23157.163377] dvb_usb_rtl2832u: Unknown symbol __divdi3 (err 0) [23157.163443] dvb_usb_rtl2832u: Unknown symbol __umoddi3 (err 0) [23157.163474] dvb_usb_rtl2832u: Unknown symbol __udivdi3 (err 0) [23157.171572] dvb_usb_rtl2832u: Unknown symbol __divdi3 (err 0) [23157.171638] dvb_usb_rtl2832u: Unknown symbol __umoddi3 (err 0) [23157.171669] dvb_usb_rtl2832u: Unknown symbol __udivdi3 (err 0) Other forums suggest, that this may well be a kernel issue. Please advise on how to proceed now. Well, update to kernel 3.0.4 didn't help either. I still get the same messages. Any ideas anybody ? Ok, I expected a quick solution to this problem. 
Meanwhile, people with this problem can install new versions of the drivers with these instructions: http:// That is, install "git" and the basic build dependencies, then: git clone git://linuxtv. cd media_build ./build sudo make install Nope, didn't work for me : after : sudo make install ( T Sick not connected yet ) dmesg . [ 269.732128] usb 1-3.4: new high speed USB device number 6 using ehci_hcd [ 269.923917] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_init [ 269.923931] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_init (err -22) [ 269.923946] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_exit [ 269.923954] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_exit (err -22) [ 269.949394] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_init [ 269.949409] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_init (err -22) [ 269.949424] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_exit [ 269.949432] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_exit (err -22) [ 905.727724] usb 1-3.4: USB disconnect, device number 6 . . @ezilg, It seems that your device has a different chip that is mentioned in this bug. Are you sure your problem is related to this report? Install the drivers from git solves the problem for "dvb_usb_dib0700" with the error message: dib0700: tx buffer length is larger than 4. Not supported. For your chip I have only found this: http:// Well, first of all, I translated it into German and if requested,I can supply a English version as well. The sad part about this is : I got error messages again make[3]: *** [/home/ make[2]: *** [_module_ make[2]: Leaving directory `/usr/src/ make[1]: *** [default] Fehler 2 make[1]: Verlasse Verzeichnis '/home/ make: *** [all] Fehler 2 ubuntu@ ubuntu@ /home/ubuntu/ /home/ubuntu/ /lib/modules/ /lib/modules/ The problem in "dvb_usb_dib0700" has been fixed in 3.2 Kernel. 
$ uname -r 3.2.0-030200rc1 http:// @ezilg: I've had the same problems ("Unknown symbol __divdi3" etc.) with my DVB-Dongle "Trekstor Terres". This dongle uses the RTL2832U along with the Tuner FC0012. I use Ubuntu 10.10 with Kernel "3.0.0-12-generic". I am using the RTL2832-driver from "https:/ Some of the Source-Files for the RTL2832U-Driver (e.g. "rtl2832u_fe.c" ) are using 64Bit-divisions with the normal operators "/" and "%" (remainder). These operators are converted by gcc to libgcc-functions (e.g. "__divdi3" for signed 64Bit-operators). But Kernel-Modules can not use / don't have access to the libgcc-functions (see "http:// - Copy the attached files "div64_wrap.h" and "div64_wrap.c" into the folder "...linux/ - Add "div64_wrap.o" to "Makefile" in the same directory ("dvb-usb- - Add "EXTRA_LDFLAGS += --wrap __udivdi3 --wrap __divdi3 --wrap __moddi3 --wrap __umoddi3" to "Makefile" in the "v4l"-directory: ... # CFLAGS configuration ifeq ($(CONFIG_ EXTRA_CFLAGS += -I$(srctree) endif EXTRA_CFLAGS += -g EXTRA_LDFLAGS += --wrap __udivdi3 --wrap __divdi3 --wrap __moddi3 --wrap __umoddi3 ... After this my USB-dongle worked (I still have performance problems but that's another isue). "div64_wrap.h": /****** File: div64_wrap.h Description: Wrapper-functions for some 64Bit-division functions of libgcc (e.g. "__divdi3"). That means that the functions declared here are used instead of the originally ones. LD-options "--wrap=__divdi3 --wrap __udivdi3 --wrap __moddi3 --wrap __umoddi3" have to be used! Reason: Building modules for e.g. the Ubuntu-Kernel "3.0.0-12-generic" using integer 64Bit divisions with the "/" or "%" operator fails at linker-stage with e.g. 'WARNING: "__divdi3" [dvb-usb- rtl2832u.ko] undefined!' Problem: gcc is using the "divdi"-functions defined in libgcc. But Kernel-Modules can not use the libgcc- functions! 
Author: Stefan Bosch
Date: 2011-11-11
*******

#include <linux/math64.h>
unsigned long long __wrap_
long long __wrap_
unsigned long long __wrap_
long long __wrap_
// ******* EOF

"div64_wrap.c": see attachment

Addition to my former Post #22: The header file "div64_wrap.h" is not necessary, therefore delete the include in the "div64_wrap.c".

Unfortunately, the result with 'make' is still the same, despite the changes. It all starts with the following errors:
. .
/home/ubuntu/ (a long run of truncated error paths)
. . etc. .
/home/ubuntu/ (more truncated error paths)

See my two attachments.

Sorry, the previous Makefile is the wrong one.

@ezilg I had these compiler errors with the "wrong" RTL2832U-

Hi, This is a patched amd64 kernel: http:// http:// http:// Please test it if you have time and let me know if it works or not. Thanks

Yafu, The latest kernel update (3.0.0-14.23) brings two patches:
[media] DiBcom: protect the I2C bufer access
[media] dib0700: protect the dib0700 buffer access
Could you see if that helps?

@julianw works for me now! (dib0700)

Works for me now too.

In my case the new kernel has helped me to tune 2 transponders, but I'm still unable to tune all the existing ones (they worked before, of course). I get timeouts when executing the tool "scan". For example this is the output for one transponder:
>>> tune to:
>>> tune to: 482000000:
>>> tuning status == 0x0e
>>> tuning status == 0x1e
WARNING: filter timeout pid 0x0011
WARNING: filter timeout pid 0x0000
WARNING: filter timeout pid 0x0010

Is fixed for me with the latest "3.0.0-14" Kernel. I can scan and watch TV. I do not get the error with "dmesg" anymore. Regards.
Tested with Pixelview SBTVD dongle.:/ This bug is missing log files that will aid in diagnosing the problem. From a terminal window please run: apport-collect 838130.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/838130
Thanks.
Theresa

From: "Thomas B. Passin" <tpassin@xxxxxxxxxxxx>

["t l"]
> I'm trying to define an element that is able to take HTML tags in it.
> I tried to define an entity to represent the external DTD. I checked it...
> it is well-formed and valid, but when it's viewed in a browser I get an error:
> "Use of default namespace declaration attribute in DTD not supported."

Does this mean that you get a browser error when you try to view the XHTML DTD, or do you get the browser error when you try to view an xml document that you made that includes that dtd? And what browser/processor did you use to try to view it?

Tom P

XSL-List info and archive:
https://www.oxygenxml.com/archives/xsl-list/200107/msg01722.html
Until the early 1980s, large software development projects had a continual problem with the inclusion of headers. One group might have produced a graphics.h, for example, which started by including io.h. Another group might have produced keyboard.h, which also included io.h. If io.h could not safely be included several times, arguments would break out about which header should include it. Sometimes an agreement was reached that each header should include no other headers, and as a result, some application programs started with dozens of #include lines, and sometimes they got the ordering wrong or forgot a required header. Compliant Solution All these complications disappeared with the discovery of a simple technique: each header should #define a symbol that means "I have already been included." The entire header is then enclosed in an include guard: #ifndef HEADER_H #define HEADER_H /* ... Contents of <header.h> ... */ #endif /* HEADER_H */ Consequently, the first time header.h is #include'd, all of its contents are included. If the header file is subsequently #include'd again, its contents are bypassed. Because solutions such as this one make it possible to create a header file that can be included more than once, the C Standard guarantees that the standard headers are safe for multiple inclusion. Note that it is a common mistake to choose a reserved name for the name of the macro used in the include guard. See DCL37-C. Do not declare or define a reserved identifier. Risk Assessment Failure to include header files in an include guard can result in unexpected behavior. Automated Detection Related Vulnerabilities Search for vulnerabilities resulting from the violation of this rule on the CERT website. 2 Comments Stewart Brodie I think you have an exclusion here for the cases when you do want to include headers more than once. Some headers do need to be included more than once (e.g. the standard header assert.h). 
Sometimes, generated code can result in headers that need to be included multiply too. kowsik This is a double-edged sword. Including the same file multiple times has it's benefits when you know what you are doing. I realize that you are trying to set up a common set of rules to make people compliant and safe, but in some cases there are are much broader applicability for including the header file multiple times like here. One of the common problems with C/C++ code is the repeated switch statements where a given "enum" type is being used in multiple contexts. Multiple-header-inclusion solves this to be lazy and auto generated. YMMV.
https://wiki.sei.cmu.edu/confluence/display/c/PRE06-C.+Enclose+header+files+in+an+include+guard
CC-MAIN-2019-39
refinedweb
434
55.84
Streaming Excel to the Browser in Node.JS and JavaScript No ads, no tracking, and no data collection. Enjoy this article? Buy us a ☕. In the past, we've dabbled in zip archives, as well as Word document creation. This time around, let's take a look at generating Excel documents that we want to stream back to the browser. Most people know how to process CSV files, but when you open them in Excel, you get that annoying pop-up, so let's use an actual Excel library to get the correct format. We start by installing the exceljs package: npm install exceljs --save …and if you're working with TypeScript, let's install the typings: npm install @types/exceljs --save-dev We'll be using TypeScript in this blog post. Once you do, you can import the package at the top of your file: import * as excel from 'exceljs'; Now let's create a utility function for creating the Excel file and returning the stream as a buffer: export async function createExcel(headers: Partial<excel.Column>[], rows: any[]): Promise<Buffer> { const workbook: excel.stream.xlsx.WorkbookWriter = new excel.stream.xlsx.WorkbookWriter({}); const sheet: excel.Worksheet = workbook.addWorksheet('My Worksheet'); sheet.columns = headers; for(let i = 0; i < rows.length; i++) { sheet.addRow(rows[i]); } sheet.commit(); return new Promise((resolve, reject): void => { workbook.commit().then(() => { const stream: any = (workbook as any).stream; const result: Buffer = stream.read(); resolve(result); }).catch((e) => { reject(e); }); }); } This function takes an array of the headers you want at the top of the Excel file, and any array of objects that represent the rows in the Excel spreadsheet. We then create a new WorkbookWriter and then create a Worksheet that we name "My Worksheet" (because we're creative). Next we set the columns property of the worksheet object to the headers that we passed in. As you can see by the TypeScript typings, this array is a Partial of the Excel package's Column type. 
We'll look at this a little more in a bit. The next step is to add rows to the worksheet. In the code above, we loop over the rows and use the addRow() method of the worksheet to attach them. Once finished, we commit the data to the worksheet, and then return a Promise. We're returning a Promise here because--at this time--TypeScript doesn't see a return value from committing the workbook when using async/await. This is a typings file issue. The Promise calls commit() on the workbook, and the "thennable" return function allows us to get the stream of the workbook, and read that into a buffer. Once this is complete, you can call this function from an Express route. For example: app.get('/course/:name/:id/download'), async (req, res) => { const { name, id } = req.params; const data = getCourse(id); const stream: Buffer = await createExcel([ { header: '', key: 'number' }, { header: 'Session', key: 'session' }, { header: 'Course/System', key: 'course' }, { header: 'Learning Objective', key: 'lo' } ], data); res.setHeader('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'); res.setHeader('Content-Disposition', `attachment; filename=${ name }.xlsx`); res.setHeader('Content-Length', stream.length); res.send(stream); }); In this code, after grabbing the request parameters, we await the createExcel() method, passing it an array of objects representing the headers, and the data we receive from a getCourse() method. The headers are a partial of Columnlike we mentioned before. Here we're just passing in an array of objects with headerand keyproperties. The header is the column text, while the key is the property key in the datavariable. The datavariable is simply an array of objects that have the same properties as the values of each of the keys: number, session, course, lo. We can then take this stream and send it back to the browser as an attachment for the user.
https://codepunk.io/streaming-excel-to-the-browser-in-node-js-and-javascript/
CC-MAIN-2022-21
refinedweb
642
57.06
line.c File Reference

Line drawing graphics routines. More...

#include "gfx.h"
#include "gfx_p.h"
#include "cfg/cfg_gfx.h"
#include <cfg/debug.h>
#include <cfg/macros.h>

Go to the source code of this file.

Detailed Description

Line drawing graphics routines.

Definition in file line.c.

Function Documentation

gfx_line()
    Definition at line 184 of file line.c.

gfx_lineTo()
    Draw a line from the current pen position to the new coordinates.
    Note: This function moves the current pen position to the new coordinates.
    See also: gfx_line()
    Definition at line 264 of file line.c.

gfx_lineUnclipped()
    Draw a sloped line without performing clipping. Parameters are the same as gfx_line(). This routine is based on the Bresenham Line-Drawing Algorithm.
    Note: Passing coordinates outside the bitmap boundaries will result in memory trashing.
    See also: gfx_line()
    Definition at line 71 of file line.c.

gfx_moveTo()
    Move the current pen position to the specified coordinates. The pen position is used for drawing operations such as gfx_lineTo(), which can be used to draw polygons.
    Definition at line 250 of file line.c.
http://doc.bertos.org/2.2/line_8c.html
crawl-003
refinedweb
168
63.66
Product Getting Started With the AYLIEN SDK for .Net (C#) January 27, 2015 - Product This is the fourth edition, in our series of blogs on getting started with AYLIEN’s various SDKs. There are SDKs available for Node.js, Python, Ruby, PHP, GO, Java and .Net (C#). For this week’s instalment we’re going to focus on C#. If you are new to AYLIEN and Text Analysis and you do not have an account yet, you can take a look at our blog on how to get started with the API or alternatively you can go directly to our Getting Started page which will take you through the signup process. We provide a free plan to get started which allows users to make up to 1,000 calls per day to the API for free. Downloading and Installing the C# SDK All of our SDK repositories are hosted on Github. You can find the C# repository here. The simplest way to install the repository is with “nuget package manager”. Simply type the following from a command line tool. nuget install Aylien.TextApi Alternatively, from Visual Studio under the “Project” Menu choose “Manage Nuget Packages” and search for the AYLIEN package under online packages. Once you have installed the SDK you’re ready to start coding. The Sandbox area of the website has a number of sample applications in node.js which would help to demonstrate what the APIs can do. In the remainder of this blog we will walk you through making calls using the C# SDK and show the output you should receive in each case. Configuring the SDK with your AYLIEN credentials Once you have received your AYLIEN APP_ID and APP_KEY from the signup process and you have downloaded the SDK, you can start making calls by adding the AYLIEN namespace to your C# code. using Aylien.TextApi; using System; And initialising a client with your AYLIEN credentials Client client = new Client( "YOUR_APP_ID", "YOUR_APP_KEY"); When calling the various API you can specify whether you want to analyze a piece of text directly or a URL linking to the text or article you wish to analyze. 
Language Detection

First let's take a look at the language detection endpoint by analyzing the following sentence: 'What language is this sentence written in?' You can call this endpoint using the following piece of code.

    Language language = client.Language(text: "What language is this sentence written in?");
    Console.WriteLine("Text: {0}", language.Text);
    Console.WriteLine("Language: {0}", language.Lang);
    Console.WriteLine("Confidence: {0}", language.Confidence);

You should receive an output very similar to the one shown below, which shows the language detected as English and a confidence score. The confidence score is very close to 1, so you can be pretty sure it's correct.

Language Detection Results

    Text: What language is this sentence written in?
    Language: en
    Confidence: 0.9999982

Sentiment Analysis

Next, we'll look at analyzing the sentence "John is a very good football player" to determine its sentiment i.e. whether it's positive, neutral or negative. The endpoint will also determine if the text is subjective or objective. You can call the endpoint with the following piece of code.

    Sentiment sentiment = client.Sentiment(text: "John is a very good football player!");
    Console.WriteLine("Text: {0}", sentiment.Text);
    Console.WriteLine("Sentiment Polarity : {0}", sentiment.Polarity);
    Console.WriteLine("Polarity Confidence : {0}", sentiment.PolarityConfidence);
    Console.WriteLine("Subjectivity : {0}", sentiment.Subjectivity);
    Console.WriteLine("Subjectivity Confidence : {0}", sentiment.SubjectivityConfidence);

You should receive an output similar to the one shown below. This indicates that the sentence is objective and is positive, both with a high degree of confidence.

Sentiment Analysis Results

    Text: John is a very good football player!
    Sentiment Polarity : positive
    Polarity Confidence : 0.999998827276487
    Subjectivity : objective
    Subjectivity Confidence : 0.989682159413825

Article Classification

We're now going to take a look at the Classification endpoint.
The Classification endpoint automatically assigns an article or piece of text to one or more categories, making it easier to manage and sort. Our classification is based on IPTC International Subject News Codes and can identify up to 500 categories. The code below analyzes a BBC news article about scientists who have managed to slow down the speed of light.

    Classify classify = client.Classify(url: "");
    Console.Write("\nClassification: \n");
    foreach (var item in classify.Categories)
    {
        Console.WriteLine("Label : {0} ", item.Label.ToString());
        Console.WriteLine("IPTC code : {0} ", item.Code.ToString());
        Console.WriteLine("Confidence : {0} ", item.Confidence.ToString());
    }

When you run this code you should receive an output similar to that shown below, which assigns the article an IPTC label of "applied science - particle physics" with an IPTC code of 13001004.

Article Classification Results

    Classification:
    Label : applied science - particle physics
    IPTC code : 13001004
    Confidence : 0.9877892

Hashtag Analysis

Next, we'll analyze the same BBC article and extract hashtag suggestions for sharing the article on social media.

    Hashtags hashtags = client.Hashtags(url: "");
    Console.Write("\nHashtags: \n");
    foreach (var item in hashtags.HashtagsMember)
    {
        Console.WriteLine(item.ToString());
    }

You should receive the output shown below.

Hashtag Suggestion Results

    Hashtags:
    #Glasgow
    #HeriotWattUniversity
    #Scotland
    #Moon
    #QuantumRealm
    #LiquidCrystal
    #Tie
    #Bicycle
    #Wave-particleDuality
    #Earth
    #Physics

Check out our SDKs for node.js, Go, PHP, Python, Java and Ruby if C# isn't your preferred language. For more information regarding the APIs go to the documentation section of our website.
http://blog.aylien.com/getting-started-with-the-aylien-sdk-for-net-c/
CC-MAIN-2017-39
refinedweb
888
50.02
Startup without peripheral libraries series

The dsPIC is one of the easier MCUs in this series to get running. Much like the PIC32MK, the default compiler startup provides some built-in functions that are required to switch clocks. These functions are documented in the compiler reference. This example uses the internal 8MHz fast oscillator and PLL to run the system clock at 180MHz. I then set up Timer 1 and interrupts. The steps are as follows:

- Set up the proper configuration bits for the clock and features being used – MPLAB X contains a GUI to help. The PLL cannot be changed while being actively used for the clock, so I start the system up just using the FRC oscillator, then configure the PLL, then switch the clock to the PLL.
- Next, I configure Timer1 and enable interrupts.
- The sysInit function is called before the main loop.

    #include <xc.h>

    #pragma config FNOSC = FRC
    #pragma config IESO = OFF
    #pragma config POSCMD = NONE
    #pragma config OSCIOFNC = ON
    #pragma config FCKSM = CSECMD
    #pragma config PLLKEN = ON
    #pragma config WINDIS = OFF
    #pragma config FWDTEN = ON_SW
    #pragma config ICS = PGD1
    #pragma config JTAGEN = OFF
    #pragma config DMTDIS = OFF

    void sysInit (void)
    {
        //-----------------------------------------------------
        // Configure clock - 360 MHZ -> 90 MHz fclk
        //-----------------------------------------------------
        CLKDIVbits.FRCDIV = 0;
        CLKDIVbits.PLLPRE = 1;
        PLLFBDbits.PLLFBDIV = 190;
        PLLDIVbits.POST1DIV = 4;
        PLLDIVbits.POST2DIV = 1;

        // Start clock switch to PLL
        __builtin_write_OSCCONH(0x01);
        __builtin_write_OSCCONL(OSCCON | 0x01);

        // wait for clock switch
        while (OSCCONbits.OSWEN != 0);

        // Wait for PLL to lock
        while (OSCCONbits.LOCK != 1);

        //-----------------------------------------------------
        // Timer 1 - use FRC @ 8MHz -> 1ms tick
        //-----------------------------------------------------
        T1CONbits.TON = 0;
        T1CONbits.TECS = 3;   // use FRC
        T1CONbits.TCS = 1;    // use TECS clock selection
        T1CONbits.TCKPS = 1;  // /8
        TMR1 = 0;
        PR1 = 1000;
        IPC0bits.T1IP = 2;    // priority 2
        IFS0bits.T1IF = 0;    // clear timer1 int flag
        IEC0bits.T1IE = 1;    // enable timer1 int
        T1CONbits.TON = 1;    // turn on timer1

        INTCON2bits.GIE = 1;  // global interrupt enable
    }
http://www.brianchavens.com/2018/10/18/startup-without-peripheral-libraries-dspic33ck/
CC-MAIN-2020-29
refinedweb
305
68.77
Type: Posts; User: DreamGod420

1. After a bit of toying with it and referring back to my Java textbook. I was able to make it work. Thank you for help though.

2. So for my CIS class, I need to write a program that converts inputed meters into either kilometers, inches, or feet based off of a menu provided to the user and their input. the meters entered cannot...

3. ok, i copy and pasted the code in a new project and it ran fine. Thanks again!

4. I have removed the operand and replaced it with a boolean flag to make it easier and i'm still getting the same error. Here is my updated code: import javax.swing.JOptionPane; public class...

5. So I'm trying to make an infinite maze for my beginner Java class, where the while loop will keep the user stuck until the 'secret word' is entered. The only issue I am having is that it is...
http://www.javaprogrammingforums.com/search.php?s=2a38541279126e032cf83f151942d9b7&searchid=1461034
CC-MAIN-2015-14
refinedweb
184
83.76
Attachment 4049 [details]: MT9002

[Condition] Trial/higher license is activated

Steps:
1. Open the attached iOS sample 'MT9002' in VS and build the sample.
2. Observe that it gives a build error.

Build Error: "Error 1: The type or namespace name 'Bindings' could not be found (are you missing a using directive or an assembly reference?) C:\Users\monup\Desktop\New folder\MT9002\MT9002\AppDelegate.cs"

When we try to add the missing assembly in the References folder, the Bindings assembly is not listed in the Add References window. However, on Mac in XS we are able to add the 'bindings-test.dll' file, and after that the build error disappears in XS. But in VS we do not see the 'bindings-test.dll' file in the Add References window.

Environment info:
VS 2010/2012
MTVS 1.2.103
MT 6.2.6.6

Regression Status: Not a regression. This issue also exists with MTVS 1.1.200 (Stable).

Comment 1: The .csproj file references ..\..\bindings\bindings-test.dll which does not exist. Not sure what's the original sample, but if it generates bindings from a native library then this is not supported in VS.

Comment 2: Hi PJ, today we have checked this issue with the above attached iOS sample 'MT9002'. As per comment 1, the .csproj file reference ..\..\bindings\bindings-test.dll does not exist. We have added the missing bindings-test.dll file from QualityAssurance-master/Automation/PricingTests/BuildErrorProjects/iOS/bindings/bindings-test.dll. Now the sample builds successfully with Trial, Business and Priority licenses on VS. As per our understanding, this issue can now be closed as it works fine after adding the missing .dll file. PJ, please confirm and close this issue if our understanding is correct, or let us know your opinion.

Comment 3: Yes indeed, invalid until we support bindings in VS.
https://bugzilla.xamarin.com/12/12491/bug.html
CC-MAIN-2021-39
refinedweb
294
61.12
pathtype: Type-safe replacement for System.FilePath etc

This package provides type-safe access to filepath manipulations. System.Path is designed to be used instead of System.FilePath. (It is intended to provide versions of functions from that module which have equivalent functionality but are more typesafe). System.Path.Directory is a companion module providing a type-safe alternative to System.Directory.

The heart of this module is the abstract type Path ar fd, which represents file and directory paths. The idea is that there are two phantom type parameters - the first should be Abs or Rel, and the second File or Dir. A number of type synonyms are provided for common types:

    type AbsFile = Path Abs File
    type RelFile = Path Rel File
    type AbsDir = Path Abs Dir
    type RelDir = Path Rel Dir
    type AbsPath fd = Path Abs fd
    type RelPath fd = Path Rel fd
    type FilePath ar = Path ar File
    type DirPath ar = Path ar Dir

The type of the combine (aka </>) function gives the idea:

    (</>) :: DirPath ar -> RelPath fd -> Path ar fd

Together this enables us to give more meaningful types to a lot of the functions, and (hopefully) catch a bunch more errors at compile time. Overloaded string literals are supported, so with the OverloadedStrings extension enabled, you can:

    f :: FilePath ar
    f = "tmp" </> "someFile" <.> "ext"

If you don't want to use OverloadedStrings, you can use the construction fns:

    f :: FilePath ar
    f = asDirPath "tmp" </> asFilePath "someFile" <.> "ext"

or...

    f :: FilePath ar
    f = asPath "tmp" </> asPath "someFile" <.> "ext"

or just...
    f :: FilePath ar
    f = asPath "tmp/someFile.ext"

One point to note is that whether one of these is interpreted as an absolute or a relative path depends on the type at which it is used:

    *System.Path> f :: AbsFile
    /tmp/someFile.ext
    *System.Path> f :: RelFile
    tmp/someFile.ext

You will typically want to import as follows:

    import Prelude hiding (FilePath)
    import System.Path
    import System.Path.Directory
    import System.Path.IO

The basic API (and properties satisfied) are heavily influenced by Neil Mitchell's System.FilePath module.
http://hackage.haskell.org/package/pathtype-0.5.2
CC-MAIN-2018-34
refinedweb
371
64.2
Contents: "Will my software work in a Zone?" Having worked with many partners that are interested in integrating some of the new features of the Solaris 10 Operating System into their software, this is the single-most common question I am asked about supporting the Solaris Zones feature. While most software supported on the Solaris 10 OS will run properly in a non-global zone, this question is not an easy one to answer because there are very real issues that stand in the way of non-global zone support. Without a process to help guide vendors through non-global zone support, application vendors might be uncomfortable or unable to bring zones support to their products. The aim of this writing is to help you through the process of supporting your application within a zone. This process includes software installation and configuration as well as zone limitations with workarounds where possible. Also included is a discussion on best practices for application vendors desiring to support zones. Before we continue, a few words on the term "zone" are in order. Zones are the software partitioning technology used to virtualize operating system services and provide an isolated and secure environment for running applications. The Solaris 10 OS supports the notion of the global zone and the non-global zone. For all intents and purposes, a global zone is the global view of the Solaris operating environment. There is always one global zone per Solaris instance appropriately named global. Each software partition that is created within the Solaris instance is known as a non-global zone. Many non-global zones can exist within an instance of the Solaris OS, each with a unique name. For the remainder of this document, the term zone will refer to the latter. When a distinction is required between a non-global zone and the global zone, the fully qualified zone type will be used. 
If you want to add zone support to your software, you would benefit from knowing the fastest way to get started. What problems will you encounter? How can you estimate the amount of effort it would take? Zones provide the standard Solaris interfaces and application environment, and do not impose a new ABI or API. Applications do not need to be recompiled in the vast majority of cases in order to work within a zone. That is big news and that fact alone probably qualifies your application. There are a small number of limitations imposed on processes that run in a zone which guarantee that software in a zone does not cause harm to other zones or the global zone. Each of these will be explored in detail later, but the common trait among all of these limitations is that they require privileges only available to the superuser (root) in prior releases of Solaris. If your software runs as an unprivileged user, the simple answer to the question "will my software run in a zone?" is yes. This is the easiest way to cut the zones support question down to size. For example, if your software ran under Solaris 8 or Solaris 9 as a non-root user, you know that you are not using any system or library calls that require the sort of permission that would be limited in a zone under the Solaris 10 OS. If this is the case, it is fantastic news for you, as you will not have to perform a full qualification cycle to verify that your software runs properly in a zone. That is a pretty simple answer, and covers most software that runs on Solaris, but there is a catch even in this simple case. The problem is that during installation, configuration, and administration of your application a privileged user is probably required to perform certain actions. Also, if your software includes executables with the SUID (set user ID) permission bit set, you must look deeper to find out if those parts of the software need more qualification. These are the areas that will require your attention. 
Be sure to actually install and perform sanity tests at the very least before claiming support for a local zone configuration. It is easy to get surprised by a dependency on a device or file system that is not exposed into a zone by default. Even if your software is guaranteed to run in a zone, it makes sense to understand the zone configuration required to support your software correctly.

SUID

If a privileged user is required to run your software, your software will likely work in a zone, but to be certain, you have to do more investigation to determine whether or not your application will run correctly in a non-global zone. The problem is that you must determine whether you are using any of the APIs that are restricted in ways that will not function as expected in a zone for security reasons. The next section, titled Zone Limitations, lists all of the known system call limitations of zones. If you determine that you are using only APIs that are unrestricted in zones, your software will run correctly and completely in a zone. Rest assured that this is the case for most software.

If you are using restricted APIs, your software will have limitations when running in a zone. If this is the case, it is still possible to support execution in a non-global zone. Perhaps the software will have known limitations, or the application could be modified to be "zone aware" and behave in a slightly different way when executed in a zone. It would make sense to have an application turn off functionality or features when running in a zone to avoid running into problems.

Because zones do not define a new ABI or API, most software that runs on the Solaris 10 OS will work correctly in a zone. This section is dedicated to those system calls and associated library calls that serve as exceptions to this rule.
There are several ways to approach the problem of finding such limitations in your software. Automated searching through the source code will probably prove to be the most complete type of search, but it is also possible to catch issues solely through testing or at runtime with tools such as privilege debugging, apptrace(1), truss(1), and dtrace(1M).

This section is dedicated to describing as fully as practical what each of the limitations entails. In a subsequent section, the use of various system utilities to find these issues will be explored. Before we address the particulars of the various system call behaviors in non-global zones, discussions are in order around zone security, the new Process Rights Management framework introduced in the Solaris 10 OS, and zone resources and services virtualization.

Each non-global zone has a security boundary around it. The security boundary is maintained by:

- privileges(5)
- /proc
- /dev

Process rights management enables processes to be restricted at the command, user, role, or system level. The Solaris OS implements process rights management through privileges. Privileges decrease the security risk that is associated with one user or one process (UID=0) having full superuser capabilities on a system.

The privileges(5) man page provides descriptions of every privilege. The command ppriv -lv prints a description of every privilege to standard out.

For most privileges, absence of the privilege simply results in a failure (EPERM error). In some instances, the absence of a privilege can cause system calls to behave differently. In other instances, the removal of a privilege can force a set-uid application to seriously malfunction.

All processes running in a zone are privilege aware. That means all processes in a zone are constrained by the privilege sets that are assigned to them when the process is created.
When the system creates a non-global zone, an init(1M) process is created for it and that process is the root process of the zone. In general, all processes in a non-global zone are descendants of this init(1M) process. The inheritable privilege set of init determines the effective privilege set of processes in the zone.

It was previously stated that the "basic" privileges used to be always available to unprivileged processes, and by default, processes still have the basic privileges. Unprivileged processes executing in a non-global zone share the same "basic" privilege set as unprivileged processes running in the global zone. This is the reason why, from a privilege standpoint, your unprivileged software is guaranteed to run in a zone (provided the zone was configured properly). Of the privileges listed below, the privileges file_link_any, proc_info, proc_session, proc_fork and proc_exec make up the "basic" privilege set.

Table 1 lists the privileges available in the Solaris 10 OS and whether they are available in a non-global zone. The set of privileges in a non-global zone is a subset of the privileges available in the global zone. The functionality that these missing privileges provide (with the exception of the DTrace privileges, which are new to the Solaris 10 OS) is only available to the superuser in prior releases of Solaris.
contract_event, contract_observer, cpc_cpu, dtrace_kernel, dtrace_proc, dtrace_user, file_chown, file_chown_self, file_dac_execute, file_dac_read, file_dac_search, file_owner, file_setid, ipc_dac_read, ipc_dac_write, ipc_owner, net_icmpaccess, net_privaddr, net_rawaccess, proc_audit, proc_chroot, proc_clock_highres, proc_lock_memory, proc_owner, proc_priocntl, proc_setid, proc_taskid, proc_zone, sys_acct, sys_admin, sys_audit, sys_config, sys_devices, sys_ipc_config, sys_linkdir, sys_mount, sys_net_config, sys_nfs, sys_res_config, sys_resource, sys_suser_compat

Because of the restricted privileges of a process in a non-global zone, certain system calls, when called with certain parameters, may return errors. In most cases, EPERM will be returned for a process that does not possess the privilege. All the failing cases required superuser privilege in prior versions of Solaris.

adjtime(2) - correct the time to allow synchronization of the system clock
stime(2) - set system time and date
ntp_adjtime(2) - adjust local clock parameters

Limitation: Cannot set the system's notion of time in a non-global zone.
Required Privilege: sys_time
Impact: Software that needs to adjust the system's idea of the current time (for example, to synchronize with another machine).
Workaround: N/A
Associated Command(s): date(1), ntpdate(1M), xntpd(1M)

creat(2) - create a new file or rewrite an existing one
chmod(2) - change the permissions mode of a file
open(2) - open a file

Limitation: Creating or changing a regular file with the S_ISVTX mode (sticky bit) set.
Required Privilege: sys_config
Impact: The sticky bit set on a regular file (that is, not a directory) that does not have the executable mode set indicates that the file is a swap file. Therefore, the system's page cache will not be used to hold the contents of a file with the sticky bit set. It is fair to assume the impact of this limitation is minimal, as not many applications create files with the sticky bit set. The impact is felt more by a system administrator who would use this mode directly - or perhaps indirectly through the use of mkfile(1M). Note that backup and restoration utilities that preserve such modes for later recovery could read and preserve the sticky bit for files, but would not be able to recreate the file with the mode upon restoration.
Workaround: The sticky bit can only be applied to files within the file system from the global zone. No workaround for executing within a zone is known at this time. Operations that attempt to set the sticky bit on a regular file in a local zone will fail with no error or warning.
Associated Command(s): mkfile(1M), chmod(1), tar(1)

ioctl(2) - device control

Limitation: Cannot pop a STREAMS module if an anchor is in place.
Required Privilege: sys_net_config
Impact: An anchor (I_ANCHOR) is a lock that prevents the removal of a STREAMS module with an I_POP ioctl call. You place an anchor in a stream on a module you want to lock. All modules at or below the anchor are locked, and can only be popped by a sufficiently privileged process. In a zone, this privilege is not available.
Associated Command: autopush(1M)

link(2), unlink(2) - link and unlink files and directories

Limitation: Cannot create a link to or unlink a directory in a zone.
Required Privilege: sys_linkdir
Impact: This could have an impact during the installation/configuration of software that creates links to directories. This also has an impact on software that may create temporary directories that are later removed with calls to unlink(2).
Workaround: Symbolic links (symlink(2)) to directories are allowed in a zone. The unlink(2) directory functionality can be replaced by the rmdir(2) system call.
Associated Command(s): link(1M), unlink(1M)

memcntl(2) - memory management control

Limitation: MC_LOCK, MC_LOCKAS, MC_UNLOCK and MC_UNLOCKAS are not supported, therefore a process cannot lock and unlock memory.
Required Privilege: proc_lock_memory
Impact: This can impact software that needs to lock memory. For instance, a database program may want to lock memory to keep data table buffers in non-pageable memory for performance reasons.
Workaround: If you are locking a shared memory segment, refer to the workaround section for shmctl(2).

mknod(2) - make a special file

Limitation: Cannot create a block (S_IFBLK) or character (S_IFCHR) special file.
Required Privilege: sys_devices
Impact: Software that needs to create device nodes on the fly (for example, Sun Ray Server Software) is impacted by this. Backup and restoration utilities (for example, tar(1)) could read and preserve special files, but would not be able to recreate the special files upon restoration.
Workaround: The special file creation could be omitted from the software. Instead, the zone's configuration as specified by zonecfg(1M) can include a "device" resource which will specify that the device file in question should be created when the zone is booted. Restoration of special files must be performed from the global zone.
Associated Command(s): cpio(1), disks(1M), mknod(1M), tapes(1M), tar(1)

msgctl(2) - message control operations

Limitation: IPC_SET cannot be used to increase the message queue bytes (msg_qbytes).
Required Privilege: sys_ipc_config
Impact: Software that dynamically sizes the message queue is affected by this.
Workaround: The system-defined limit used to initialize msg_qbytes is the minimum enforced value of the calling process' process.max-msg-qbytes resource control. So it's possible to initialize msg_qbytes to the upper limits that your application requires when the message queue is initialized.

nice(2) - change priority of a process

Limitation: This call will fail if the increment argument is negative or greater than 40.
Required Privilege: proc_priocntl
Impact: Depending upon the nature of your application requirements, your software may need to set the scheduling priority. Calling the nice function has no effect on the priority of processes or threads with the scheduling policy SCHED_FIFO or SCHED_RR.
Workaround: If your software really wants to adjust (raise) its priority using nice(2), then some other process in the global zone will need to perform that on behalf of the client in the non-global zone. Or, binding the non-global zone that the application runs in to a pool can also achieve the same effect (unless the process is competing for CPU with other processes in the same zone, in which case the Fair Share Scheduler can be used to specify which projects should get more of the CPU).
Associated Command: nice(1)

p_online(2) - return or change processor operational status

Limitation: P_ONLINE, P_OFFLINE, P_NOINTR, P_FAULTED, P_SPARE, and P_FORCED flags are not supported.
Required Privilege: sys_res_config
Impact: This will impact software that needs to disable/enable CPUs.
Workaround: N/A
Associated Command: psradm(1M)

priocntl(2) - process scheduler control

Limitation: Changing the scheduling parameters of an LWP (using PC_SETPARMS or PC_SETXPARMS) is not supported.
PC_SETPARMS PC_SETXPARMS Impact: Depending upon the nature of your application requirements, your software may need to set the kernel-level scheduling priority of a LWP. Associated Command: priocntl(1) priocntl(1) pset_create pset_destroy pset_assign pset_bind pset_setattr processor_bind pset_create(2), pset_destroy(2), pset_assign(2) - manage set of processors pset_bind(2) - bind LWPs to a set of processors pset_setattr(2) - set processor set attributes pset_create(2) pset_destroy(2) pset_assign(2) pset_bind(2) pset_setattr(2) Limitation: These functions control the creation and management of sets of processors. Since processors are systemwide resources, manipulation of them from within a zone is not allowed. Impact: Software that takes advantage of SMP systems to bind LWPs to a specific set of processors for performance, concurrency or resource control reasons. Your software may limit itself to the number of processors it can run on for licensing reasons. Workaround: You can set up a resource pool using poolcfg(1M) and pooladm(1M) and then bind the zone that the application will run in to the resource pool using zonecfg(1M) and the "pool" property. You can use processor_bind(2) to bind LWPs to a single processor. poolcfg(1M) pooladm(1M) processor_bind(2) Associated Command: psrset(1M) psrset(1M) shmctl shmctl(2) - shared memory control operations Limitation: SHM_LOCK and SHM_UNLOCK are not supported, therefore a process cannot lock and unlock memory. SHM_LOCK SHM_UNLOCK Impact: This can have an impact on software that needs to lock memory. For instance, a database program may want to lock memory to keep data table buffers in non-pageable memory for performance reasons. Workaround: If the reason you are locking memory is for performance, you may want to investigate the use of the Intimate Shared Memory (ISM) feature of Solaris (shmat(2) SHM_SHARE_MMU). 
There are numerous benefits of using ISM, one of which is that ISM pages are locked, significantly improving performance by reducing the kernel code path as well as preventing pages from being swapped out. It should be noted that the use of ISM can cause certain Dynamic Reconfiguration events (for example, those invoked using the cfgadm(1M) command) to fail.

socket

socket(2) - create an endpoint for communication

Limitation: Attempts to create a raw socket with protocol set to IPPROTO_RAW or IPPROTO_IGMP will return an EPROTONOSUPPORT error.

Required Privilege: net_rawaccess

Impact: This will impact software that is using the raw socket interface to implement network protocols or software that needs to create/inspect TCP/IP headers.

Associated Command: N/A

swapctl

swapctl(2) - manage swap space

Limitation: Cannot add (SC_ADD) or remove (SC_REMOVE) swapping resources.

Required Privilege: sys_config

Impact: Any software that needs to add or remove swap resources will be affected. This will most likely affect your installation and configuration.

Workaround: Swap space is a systemwide resource, therefore it has to be configured from the global zone.

Associated Command: swap(1M)

uadmin

uadmin(2) - administrative control

Limitation: The A_REMOUNT, A_FREEZE, and A_DUMP commands are not supported (ENOTSUP). The AD_IBOOT function of the A_SHUTDOWN command is not supported (ENOTSUP).

Impact: This could impact software that may want to force a crash dump under certain conditions.

Associated Command: uadmin(1M)

Not unlike system calls, because of the restricted privileges of a process in a zone, certain library calls may return errors. In most cases, EPERM will be returned for a process that does not possess the appropriate privilege.
The failing cases required superuser privilege in prior versions of Solaris.

clock_settime

clock_settime(3RT) - high-resolution clock operations

Limitation: Cannot set the CLOCK_REALTIME and CLOCK_HIGHRES clocks, since they are systemwide clocks.

Impact: Realtime software is most likely affected by the inability to set the clock.

cpc_bind_cpu

cpc_bind_cpu(3CPC) - bind request sets to hardware counters

Limitation: This function binds the set to the specified CPU and measures events occurring on that CPU regardless of which LWP is running. This is not allowed in a zone because you could monitor the CPU events of processes not in your zone. The call fails because the function tries to open a special file in the /devices directory which represents the CPU, and the /devices directory is not part of the name space of a zone. Because there is no /devices, the open(2) system call issued by cpc_bind_cpu(3CPC) will generate an ENOENT return code.

Required Privilege: cpc_cpu

Impact: This could impact your development environment. For instance, you could be making calls to cpc_bind_cpu(3CPC) to determine the cache hit ratio of your code.

Workaround: The cpc_bind_curlwp(3CPC) call is allowed in a zone, so you are able to monitor CPU counters for the LWP the call was issued from.

mlock, munlock, mlockall, munlockall, plock

mlock(3C), munlock(3C) - lock or unlock pages in memory
mlockall(3C), munlockall(3C) - lock or unlock address space
plock(3C) - lock or unlock into memory process, text, or data

Limitation: Cannot use these library functions to lock and unlock memory. This is the same issue as for memcntl(2).

Impact: This can have an impact on software that needs to lock memory. For instance, a database program may want to lock memory to keep data table buffers in non-pageable memory for performance reasons.
It should be noted that locking memory can cause certain Dynamic Reconfiguration events (for example, those invoked using the cfgadm(1M) command) to fail.

Workaround: If you are locking a shared memory segment, the workaround described under shmctl should be considered.

pthread_setschedparam

pthread_setschedparam(3C) - access dynamic thread scheduling parameters

Limitation: Cannot change the underlying scheduling policy and parameters for a thread. This is the same issue as for priocntl.

Impact: Depending upon the nature of your application requirements, your software may need to set the kernel-level scheduling priority of a thread and the underlying LWP.

timer_create

timer_create(3RT) - create a timer

Limitation: Cannot create a timer using the high-resolution system clock (CLOCK_HIGHRES).

Required Privilege: proc_clock_highres

Impact: Software that requires high-resolution timers.

t_open

t_open(3NSL) - establish a transport endpoint

Limitation: The STREAMS driver /dev/rawip is the TLI transport provider that provides raw access to IP. This device node is not available in a zone, so this call will return the ENOENT error when used for this driver.

Required Privilege: net_rawaccess

Impact: This will also impact software that is using the /dev/rawip device to implement network protocols, software that needs to create/inspect TCP/IP headers, and so on.

The APIs provided by the following list of libraries are not supported in a zone. The shared objects are present in the zone's /usr/lib directory, so no link-time errors will occur if your code includes references to these libraries. You can inspect your make files to determine if your application has explicit bindings to any of these libraries, and use pmap(1) while the application is executing to verify that none of these libraries are dynamically loaded.
libdevinfo(3LIB)
libcfgadm(3LIB)
libpool(3LIB)
libtnfctl(3LIB)
libsysevent(3LIB)

Zones have a restricted set of devices, consisting primarily of pseudo devices that form part of the Solaris programming API. These include /dev/null, /dev/zero, /dev/poll, /dev/random, /dev/tcp, and so on. Physical devices are not directly accessible from within a zone unless configured by an administrator. Since devices, in general, are shared resources in a system, making devices available in a zone requires some restrictions so system security will not be compromised.

The following list of devices is not visible in the namespace of a non-global zone. Except for cpuid, fcip, and ksyms, the interfaces to these devices are not public (Interface Stability: Private), so this should have no effect on your well-behaved software.

dtrace(7D), kmem(7D), ksyms(7D), kmdb(7D), trapstat(1M), lockstat(7D), hme(7D), ce(7D), ge(7D), eri(7D), bge(7D), dmfe(7D), dnet(7D), e1000g(7D), elxl(7D), iprb(7D), pcelx(7D), pcn(7D), qfe(7D), rtls(7D), sk98sol(7D), skfp(7D), spwr(7D), mem(7D), allkmem(7D), fcip(7D)

Not all software works properly in a local zone. This section examines how to detect and diagnose the source of the execution problem and how to make the software work, perhaps by disabling features, when it is running in a local zone. You can enable privilege debugging with the -D option of ppriv(1), or by setting the variable priv_debug = 1 in the global zone's /etc/system file.
global# zlogin redzone
redzone# ls -l /tmp
total 8
drwxr-xr-x   2 root     root          69 Apr 19 22:11 testdir
redzone# ppriv -D -e unlink /tmp/testdir
unlink[1245]: missing privilege "sys_linkdir" (euid = 0, syscall = 10) needed at tmp_remove+0x6e
unlink: Not owner
redzone# ppriv -D -e rmdir /tmp/testdir
redzone# ls -l /tmp
total 0

The Solaris 10 OS offers a number of tools that you can use to identify and inspect at runtime the system/library calls that your application issues. We will explore three such tools: apptrace(1), dtrace(1M), and truss(1). Although the dtrace(1M) command is not supported in a non-global zone, you can use DTrace to monitor a process from the global zone that is executing in a non-global zone, because the global zone has visibility to all processes on the system.

Once you have identified a system or library call that may not work in a zone, you can inspect the argument list by using apptrace(1), dtrace(1M), or truss(1). The following example will illustrate that msgctltst.c is code that will not work in a zone because of its use of IPC_SET to increase the message queue size. In a zone, you can decrease the size of a queue, but cannot increase the size of a queue.
redzone# cat msgctltst.c

#include <stdio.h>
#include <errno.h>
#include <sys/msg.h>

int main(int argc, char *argv[])
{
    struct msqid_ds msgc;
    int rc, msgid;

    if ((msgid = msgget(IPC_PRIVATE, IPC_CREAT)) < 0) {
        fprintf(stderr, "msgget(IPC_PRIVATE), errno = %d\n", errno);
    }

    if ((rc = msgctl(msgid, IPC_STAT, &msgc)) < 0) {
        fprintf(stderr, "msgctl(IPC_STAT), errno = %d\n", errno);
    }

    msgc.msg_qbytes--;
    if ((rc = msgctl(msgid, IPC_SET, &msgc)) < 0) {
        fprintf(stderr, "msgctl(IPC_SET), errno = %d\n", errno);
    }

    msgc.msg_qbytes++;
    if ((rc = msgctl(msgid, IPC_SET, &msgc)) < 0) {
        fprintf(stderr, "msgctl(IPC_SET) growing queue, errno = %d\n", errno);
    }

    return (0);
}

The following example illustrates the use of truss(1) to inspect the system calls issued by msgctltst. The program fails when it tries to increase the number of bytes in the message queue.

redzone# truss ./msgctltst
execve("msgctltst", 0x08047EA8, 0x08047EB0)  argc = 1
...
msgget(IPC_PRIVATE, IPC_CREAT)                  = 3
msgctl(3, IPC_STAT, 0x08047E00)                 = 0
msgctl(3, IPC_SET, 0x08047E00)                  = 0
msgctl(3, IPC_SET, 0x08047E00)                  Err#1 EPERM [sys_ipc_config]
...

The apptrace(1) utility runs the specified executable program and traces all function calls that the executable makes to the Solaris shared libraries. For each function call that is traceable, apptrace(1) reports the name of the library interface called, the values of the arguments passed, and the return value. Again, the example below illustrates that the application is making calls to msgctl(2) with the second argument set to IPC_SET (0xb).
redzone# apptrace ./msgctltst
-> msgctltst -> libc.so.1:atexit(0x80505a8, 0xd27e6fd0, 0x0) ** NR
-> msgctltst -> libc.so.1:atexit(0xd27e6fd0, 0x0, 0x0) ** NR
-> msgctltst -> libc.so.1:atexit(0x80508d9, 0xd27e6fd0, 0x0) ** NR
-> msgctltst -> libc.so.1:void __fpstart(void)
<- msgctltst -> libc.so.1:__fpstart() = 0xd254cc3c
-> msgctltst -> libc.so.1:int msgget(key_t = 0x0, int = 0x200)
<- msgctltst -> libc.so.1:msgget() = 0x1
-> msgctltst -> libc.so.1:int msgctl(int = 0x1, int = 0xc, ...)
<- msgctltst -> libc.so.1:msgctl() = 0xffffffff
...

The same program can be traced using DTrace, executing from the global zone. The DTrace probe syscall::msgsys:entry will fire every time the msgctl(2) function is called with the second argument set to IPC_SET. If the system call returns with an error, the syscall::msgsys:return probe will fire. The msgctltst program is executing in the non-global zone redzone. As you can see, DTrace is more powerful than truss(1) and apptrace(1) because we can actually inspect data structures, conditionally execute probe actions, and display a call stack trace.

global# cat msgctl.d
#!/usr/sbin/dtrace -Cqs

#include <sys/msg.h>
#include <sys/msg_impl.h>

syscall::msgsys:entry
/ arg0 == MSGCTL && arg2 == IPC_SET /
{
    self->ptr = (struct msqid_ds *)copyin(arg3, sizeof (struct msqid_ds));
    printf("\n (%s) msgid=%d msg_qbytes=%d\n", execname, arg0, self->ptr->msg_qbytes);
}

syscall::msgsys:return
/ self->ptr && errno != 0 /
{
    printf("\n msgctl failed (%d)\n", errno);
    ustack();
}

syscall::msgsys:return
/ self->ptr /
{
    self->ptr = 0;
}

global# dtrace -ZCqs msgctl.d &
global# zlogin redzone
[Connected to zone 'redzone' pts/9]
Last login: Mon May 9 15:38:17 on pts/7
Sun Microsystems Inc.
SunOS 5.10 s10_72 December 2004
redzone# cd zonetest
redzone# pwd
/zonetest
redzone# ./msgctltst
msgctl(IPC_SET) growing queue, errno = 1
redzone#

 (msgctltst) msgid=2 msg_qbytes=65535

 (msgctltst) msgid=2 msg_qbytes=65536

 msgctl failed (1)
              libc.so.1`_syscall6+0x1b
              msgctltst`main+0x160
              40094c

If a non-global zone is not available for testing, you could test your software in the global zone with the privileges not available in a non-global zone removed from the privilege set of the program, using the ppriv(1) command. If your software fails in the global zone with the privileges removed, it will fail in a non-global zone. This method will not catch access to device interfaces and libraries not available in a non-global zone, so it's advisable to test your software in a non-global zone. The runtime tools described above will only find errors in the code paths that are exercised; in no way do they replace a code review.

Some software cannot work in a non-global zone completely as it does in the global zone. An example is the tar(1) command. When running in a zone, tar(1) is able to create archives that preserve the sticky bit on individual files but is not able to write files with the sticky bit set back to the file system. The tar(1) command fails silently in this case, because the chmod(2) system call does not report a failure when this occurs. This is an example of software that should work differently depending on whether it is running in a global zone or a non-global zone. It would be helpful if the tar(1) command reported this condition as a warning, so that you would at least know that something was wrong. This could be done by having tar(1) detect that it was running in a non-global zone and log a warning whenever a regular file with the sticky bit set was written to the file system. Similarly, it is easy to imagine that applications that use other zone-restricted system calls could disable features when executed in a non-global zone.
This allows the software to run properly to the extent that is possible while not pushing the burden of diagnosing the failure onto the user of that software.

Once the decision has been made that your software requires zone awareness, perhaps for the reasons mentioned above, an API is provided for zone identity:

getzoneid(3C)
getzoneidbyname(3C)
getzonenamebyid(3C)

A definition of the global zone ID, GLOBAL_ZONEID, is in /usr/include/sys/zone.h.

global# cat myzone.c

#include <stdio.h>
#include <zone.h>

int main(int argc, char **argv)
{
    char zonename[ZONENAME_MAX + 1];
    zoneid_t id;

    if ((id = getzoneid()) == GLOBAL_ZONEID)
        printf("Global Zone!\n");
    if (getzonenamebyid(id, zonename, sizeof (zonename)) > 0)
        printf("%s\n", zonename);
}

Executing the code from the global zone:

global# ls -l zonename
-rwxr-xr-x   1 root     root        5704 Apr 19 23:06 zonename
global# ./myzone
Global Zone!
global

Executing the code from a non-global zone redzone:

global# zlogin redzone
[Connected to zone 'redzone' pts/5]
Last login: Tue Apr 19 22:01:08 from 192.168.2.2
Sun Microsystems Inc.  SunOS 5.10  s10_71  December 2004
redzone# cd zonetest
redzone# zoneadm list
redzone
redzone# ./myzone
redzone

There are two issues that could cause the installation of your software to fail. When a zone is created, two options are available to create the root file system of the zone: the Sparse Root and Whole Root models. The Whole Root model provides the maximum configurability by installing all of the required and any selected optional Solaris software packages into the private file systems of the zone. The advantages of this model include the ability for zone administrators to customize their zone's file-system layout (for example, creating a /usr/local) and add arbitrary unbundled or third-party packages.
The disadvantages of this model include the loss of sharing of text segments from executables and shared libraries by the virtual memory system, and a much heavier disk footprint -- approximately an additional 2 Gbyte -- for each non-global zone configured as such.

The Sparse Root model optimizes the sharing of objects by installing only a subset of the root packages (those with the pkginfo(4) parameter SUNW_PKGTYPE set to root) and using read-only loopback file systems to gain access to other files. This is similar to the way a diskless client is configured, where /usr and other file systems are mounted over the network with NFS. By default with this model, the directories /lib, /platform, /sbin, and /usr are mounted as loopback, read-only file systems. The advantages of this model are greater performance due to the efficient sharing of executables and shared libraries, and a much smaller disk footprint for the zone itself. The sparse-root model only requires approximately 100 Mbyte of file system space for the zone itself.

Any installation software that needs to install components in /usr (or any of the other read-only loopback file systems) will fail in a Sparse Root model zone.

The second issue deals with the CD-ROM device. There are a couple of ways to gain access to the CD-ROM. One popular method is to loopback mount the /cdrom directory from the global zone to the non-global zone:

# zonecfg -z myzone
add fs
set dir=/cdrom
set special=/cdrom
set type=lofs
set options=[nodevices]
end

If you use this method and your installation requires multiple CD volumes, you will need to eject CDs from the global zone. Any explicit ejects of the CD-ROM device (eject(1)) in the installation scripts will fail. The alternative method used to gain access to the CD-ROM device in a non-global zone, exporting the physical device(s) from the global zone to the non-global zone, is discouraged.
If you choose to use this method, it should be noted that the Volume Management daemon (vold(1M)) does not function in a non-global zone.

Each zone maintains its own package and patch database. A package or a patch can be installed individually into a non-global zone, or to all zones from the global zone. The behavior of packaging in a zone environment varies according to the following factors: whether the -G option is passed to pkgadd(1M); the settings of the SUNW_PKG_ALLZONES, SUNW_PKG_HOLLOW, and SUNW_PKG_THISZONE pkginfo parameters; and the type of zone the command is run in.

Table 2 shows the behavior of packaging in a zone environment, with variances according to these factors (columns: pkgadd from the global zone / pkgadd -G from the global zone / pkgadd from a non-global zone):

SUNW_PKG_ALLZONES false, SUNW_PKG_HOLLOW false, SUNW_PKG_THISZONE false:
Add to gz, current lz and future lz / Add to gz only, not to current or future lz / Add to this lz only

SUNW_PKG_ALLZONES true, SUNW_PKG_HOLLOW false, SUNW_PKG_THISZONE false:
Operation not allowed with -G or from a non-global zone

SUNW_PKG_ALLZONES true, SUNW_PKG_HOLLOW true, SUNW_PKG_THISZONE false:
Add to gz; add to pkginfo db in current and future lz

SUNW_PKG_THISZONE true with SUNW_PKG_ALLZONES true:
Invalid option combination

Legend: gz = global zone, lz = non-global zone

An "invalid option combination" means the package attribute settings do not make sense -- not all possible combinations of settings for these three attributes are legal. They should be caught by pkgmk(1M), and the package should not be created.

An "operation not allowed" means the pkgadd command will output an error message and fail to add the packages, based on the combination of command-line options, package attribute settings, and the type of zone pkgadd is being run in.

Getting your software to work in a zone is only part of the challenge. System administrators must understand how to configure their zone appropriately for the software they intend to run. There are many possible configurations of zones, and this section does not go through all of the various possibilities.
Instead, this section focuses on strategies for targeting useful configurations and communication of the required zone configuration to your software administration audience. Zones can be configured in many different ways. As mentioned before, a Sparse Root Zone takes advantage of files that are shared between the global zone and the non-global zone (such as /lib and /usr), while a Whole Root Zone maintains its own copy of all of the files. A Sparse Root zone is a more restrictive environment than the Whole Root Zone, because the shared directories are exposed into the non-global zone as a read-only file system. The additional flexibility provided by a less restrictive zone configuration comes at the cost of additional resources - specifically hard drive space and memory. In order to achieve maximum flexibility in the zone configurations that your software can be deployed into, it is important to target the most restrictive but reasonable zone configuration possible. The default zone configuration is a Sparse Root Zone with very few devices provisioned into the zone. The directories that are shared with the global zone (with a read-only loopback mount) are /lib, /platform, /sbin, and /usr. This provides a fairly restricted zone in terms of deploying unbundled software, because during installation and execution of your software it is not possible to modify or write files into those directories. This default zone configuration is a good starting point to consider for your software. It offers the maximum permission set available for zones and the directory restrictions mentioned above seem to be a good tradeoff in terms of disk space requirements. Starting from this default zone configuration, it is important to discover and document the configuration necessary to deploy your software successfully in a zone. If your installation includes writing to the read-only directories, a software modification should be considered. 
If the software cannot be modified to work around the problem, configure the zone accordingly. Remember that you are identifying the most restrictive environment in which your software can be installed and executed. From there, more liberal configurations will present no trouble. Be sure to keep track of all elements of the required zone configuration. This should include removing inherited directories, device configuration, and network requirements. This information should be included in your install documentation or any relevant configuration guides. The configuration details that fall out of using this strategy will aid system administrators who are faced with the task of configuring a single non-global zone for multiple software libraries and applications. By stating the minimum requirement for each piece of software, the minimum zone configuration for a set of software would simply be all of the minimum zone configuration requirements for each respective software package put together. This scheme simplifies the task of planning required resources for deployment of zones. Once you have identified the most restrictive zone configuration for the deployment of your software, you should verify that your software works correctly in that configuration. A non-global zone is a more restricted environment than the global zone. This is true in terms of permission to call various system calls as well as the ability to use specific devices or to modify the contents of specific directories. You can use this fact to your advantage as you move to support the Solaris 10 OS with global and non-global zones. Rather than potentially double your QA test matrix to add local zone support, you can do your QA only in the non-global zone. Rest assured that if the software works completely and correctly in the non-global zone, it will in the global zone as well. 
After the QA within the non-global zone, simply verify that the deployment works correctly in the global zone and you are all set. Paul Lovvik, who has been with Sun for seven years, is lead engineer in a group in the Market Development Engineering organization focused on partner adoption of the Solaris OS for x86 Platforms. Paul and his engineering team have helped many partners add Solaris on x86 support to their products over the past year. Joseph Balenzano has been with Sun for seven years. His current role is engineer in a group in the Market Development Engineering organization focused on partner adoption of the Solaris OS for x86 Platforms. He has over 20 years of software development experience working for ISVs.
http://developers.sun.com/solaris/articles/application_in_zone.html
Unexpected auto-completer behavior when working with aiida

Hi, I use Wing to test the aiida-core package with the code given here: ....

The code snippets are the following:

from aiida import load_profile
load_profile()

from aiida.orm import Code, Computer, Data, Node, Float, Str
from aiida.plugins import CalculationFactory, DataFactory
from aiida.engine import calcfunction

In the above code, calcfunction only exists in aiida.engine, so it should be imported like this:

from aiida.engine import calcfunction

But when I type the following:

from aiida.orm import c<tab>

then the auto-completer of Wing will still suggest calcfunction as one of the candidates. If I select it and leave it there, the import is wrong and the code will fail to run. Any hints for this problem?

Regards

What happens if you complete the completion and do goto-definition on calcfunction on the 'from aiida.orm import' line? This might be a result of orm importing calcfunction internally in some way, or the symbol being listed in a *.pyi file if there are any, but then having other import behavior at runtime. Looking at where Wing ends up after goto-definition is the first step to figuring this out.

Both imports jump to the same location here: ... Another difference I noted is that when importing via the 'from aiida.orm import' line, the completion result becomes 'calcfunction(function)', while the completion result for the 'from aiida.engine import' line is 'calcfunction'. Still cannot figure out the reason.

Regards

What is the code error when you try to run it? I'm guessing maybe a circular import problem. See the following:
https://ask.wingware.com/question/1777/unexpected-auto-completer-behavior-when-working-with-aiida/
Name | Synopsis | Interface Level | Parameters | Description | Context | See Also

Synopsis

#include <sys/scsi/scsi.h>
#include <sys/cmn_err.h>

void scsi_log(dev_info_t *dip, char *drv_name, uint_t level, const char *fmt, ...);

Interface Level

Solaris DDI specific (Solaris DDI).

Parameters

dip - Pointer to the dev_info structure.
drv_name - String naming the device.
level - Error level.
fmt - Display format.

Description

The scsi_log() function is a utility function that displays a message via the cmn_err(9F) routine. The error levels that can be passed in to this function are CE_PANIC, CE_WARN, CE_NOTE, CE_CONT, and SCSI_DEBUG. The last level is used to assist in displaying debug messages to the console only. drv_name is the short name by which this device is known; example disk driver names are sd and cmdk. If the dev_info_t pointer is NULL, then the drv_name will be used with no unit or long name.

If the first character in format is:

An exclamation mark (!), the message goes only to the system buffer.

A caret (^), the message goes only to the console.

A question mark (?) and level is CE_CONT, the message is always sent to the system buffer, but is written to the console only when the system has been booted in verbose mode. See kernel(1M). If neither condition is met, the ? character has no effect and is simply ignored.

All formatting conversions in use by cmn_err() also work with scsi_log().

Context

The scsi_log() function may be called from user, interrupt, or kernel context.

See Also

kernel(1M), sd(7D), cmn_err(9F), scsi_errmsg(9F)
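As an illustration of the prefix characters described above, a driver might wrap scsi_log() as follows. This is only a sketch -- the driver name "xd" and the helper function are hypothetical, and the fragment builds only in a Solaris driver context:

```c
#include <sys/scsi/scsi.h>
#include <sys/cmn_err.h>

/* Hypothetical helper for a driver named "xd": the leading '!'
 * sends the retry notice only to the system buffer, while the
 * unprefixed CE_WARN message also reaches the console. */
static void
xd_report_retry(dev_info_t *dip, int target, int retries_left)
{
        scsi_log(dip, "xd", CE_NOTE,
            "!target %d: command timed out, retrying (%d attempts left)\n",
            target, retries_left);

        if (retries_left == 0)
                scsi_log(dip, "xd", CE_WARN,
                    "target %d: giving up after retries\n", target);
}
```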
http://docs.oracle.com/cd/E19253-01/816-5180/6mbbf02o4/index.html
This thread was marked as Locked by Lord_Ralex.

package net.minecraft.src;

import java.util.Random;

public class mod_NAME extends BaseMod
{
    public static final Item NAMEhere = new ItemNAME(2085).setItemName("NAME");

    public void load()
    {
        NAMEhere.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
        ModLoader.addName(NAMEhere, "Ingame -Name");
    }

    public String getVersion()
    {
        return "3.14159265";
    }
}

package net.minecraft.src;

import java.util.Random;

public class ItemNAME extends Item
{
    public ItemNAME(int i)
    {
        super(i);
        maxStackSize = 64;
    }

    public String Version()
    {
        return "3.14159265";
    }
}

public static final Item NAMESword = new ItemSword(3077, EnumToolMaterial.GOLD).setItemName("AnythingHere");

NAMESword.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/sword.png");
ModLoader.addName(NAMESword, "NAME Sword");
ModLoader.addRecipe(new ItemStack(NAMESword, 1), new Object[] {
    " * ", " * ", " X ", 'X', Item.stick, '*', Block.dirt
});

public static final Item NAMEPick = new ItemPickaxe(2102, EnumToolMaterial.GOLD).setItemName("Whatever");

NAMEPick.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMEPick, "NAME Pickaxe");
ModLoader.addRecipe(new ItemStack(NAMEPick, 1), new Object[] {
    "***", " X ", " X ", 'X', Item.stick, '*', Block.dirt
});

public static final Item NAMEAxe = new ItemAxe(2096, EnumToolMaterial.GOLD).setItemName("whatever");

NAMEAxe.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMEAxe, "NAME Axe");
ModLoader.addRecipe(new ItemStack(NAMEAxe, 1), new Object[] {
    "** ", "*X ", " X ", 'X', Item.stick, '*', Block.dirt
});

public static final Item NAMEHoe = new ItemHoe(2107, EnumToolMaterial.GOLD).setItemName("Whatever");

NAMEHoe.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMEHoe, "NAME Hoe");
ModLoader.addRecipe(new ItemStack(NAMEHoe, 1), new Object[] {
    "** ", " X ", " X ", 'X', Item.stick, '*', Block.dirt
});

public static final Item NAMESpade = new ItemSpade(2099, EnumToolMaterial.GOLD).setItemName("Whatever");

NAMESpade.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMESpade, "NAME Shovel");
ModLoader.addRecipe(new ItemStack(NAMESpade, 1), new Object[] {
    " * ", " X ", " X ", 'X', Item.stick, '*', Block.dirt
});

public static final Block NAMEBlock = new BlockNAME(151, 0).setHardness(6F).setResistance(7.0F).setBlockName("whatever");

package net.minecraft.src;

import java.util.Random;

public class BlockNAME extends Block
{
    protected BlockNAME(int i, int j)
    {
        super(i, j, Material.iron);
    }

    public int idDropped(int par1, Random par2Random, int par3)
    {
        return mod_NAME.ITEM.shiftedIndex;
    }

    public int quantityDropped(Random random)
    {
        return 1;
    }

    public String Version()
    {
        return "3.14159265";
    }
}

ModLoader.registerBlock(NAMEBlock);
NAMEBlock.blockIndexInTexture = ModLoader.addOverride("/terrain.png", "/items/pic.png");
ModLoader.addName(NAMEBlock, "In Game Name Ore");

public void generateSurface(World world, Random random, int chunkX, int chunkZ)
{
    Random randomGenerator = random;

    for (int i = 0; i < 10; i++)
    {
        int randPosX = chunkX + randomGenerator.nextInt(20);
        int randPosY = random.nextInt(40);
        int randPosZ = chunkZ + randomGenerator.nextInt(20);
        (new WorldGenMinable(NAMEBlock.blockID, 4)).generate(world, random, randPosX, randPosY, randPosZ);
    }
}

package net.minecraft.src;

import net.minecraft.client.Minecraft;

public class mod_NAME extends BaseMod
{
    public void load()
    {
    }

    public String Version()
    {
        return "1.4.2";
    }

    public String getVersion()
    {
        return "3.14159265";
    }
}

public static final Item NAMEBody = (new ItemArmor(2200, EnumArmorMaterial.GOLD, 5, 1)).setItemName("Whatever");
public static final Item NAMEHelmet = (new ItemArmor(2201, EnumArmorMaterial.GOLD, 5, 0)).setItemName("Whatever");
public static final Item NAMEPants = (new ItemArmor(2202, EnumArmorMaterial.GOLD, 5, 2)).setItemName("Whatever");
public static final Item NAMEBoots = (new ItemArmor(2203, EnumArmorMaterial.GOLD, 5, 3)).setItemName("Whatever");

// Body Armor
NAMEBody.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMEBody, "In-Game-NAME Chestplate");
ModLoader.addRecipe(new ItemStack(NAMEBody, 1), new Object[] {
    "* *", "***", "***", Character.valueOf('*'), Block.dirt
});

// Helmet Armor
NAMEHelmet.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMEHelmet, "In-Game-Name Helmet");
ModLoader.addRecipe(new ItemStack(NAMEHelmet, 1), new Object[] {
    "***", "* *", Character.valueOf('*'), Block.dirt
});

// Pants Armor
NAMEPants.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMEPants, "In-game-Name Leggings");
ModLoader.addRecipe(new ItemStack(NAMEPants, 1), new Object[] {
    "***", "* *", "* *", Character.valueOf('*'), Block.dirt
});

// Boots Armor
NAMEBoots.iconIndex = ModLoader.addOverride("/gui/items.png", "/items/pic.png");
ModLoader.addName(NAMEBoots, "In-Game-Name Boots");
ModLoader.addRecipe(new ItemStack(NAMEBoots, 1), new Object[] {
    "* *", "* *", Character.valueOf('*'), Block.dirt
});

ModLoader.addArmor("Name-Of-Armor");

ModLoader.addSmelting(BLOCK.blockID, new ItemStack(ITEM, 1));

public static final Block NAMEBlock = new BlockNAME(160, 0).setBlockName("whatever").setHardness(5F).setResistance(6F).setStepSound(Sound);
ModLoader.addName(foodNameHere, "My Food");
ModLoader.addRecipe(new ItemStack(foodNameHere, 5), new Object[] {
        "***", "* *", "***", '*', Item.sugar });

// If smelting item to item
ModLoader.addSmelting(yourItem.shiftedIndex, new ItemStack(yourSmeltedItem, 1), 1.0F);
// If smelting blocks to blocks
ModLoader.addSmelting(yourBlock.blockID, new ItemStack(yourSmeltedBlock, 1), 1.0F);

// For an item:
YourItem.setTabToDisplayOn(CreativeTabs.tabMaterials);
// For a block:
this.setCreativeTab(CreativeTabs.tabBlock);

// mod_NAME.java for a custom mob
package net.minecraft.src;

import java.awt.Color;
import java.util.Map;

public class mod_NAME extends BaseMod
{
    public String getVersion()
    {
        return "1.4.2";
    }

    public void load()
    {
        ModLoader.registerEntityID(EntityNAME.class, "NAME", 30); // registers the mob's name and id
        ModLoader.addSpawn("NAME", 15, -5, 1, EnumCreatureType.monster); // makes the mob spawn in game
        ModLoader.addLocalization("entity.NAME.name", "NAME"); // adds the mob's name on the spawn egg
        EntityList.entityEggs.put(Integer.valueOf(30),
                new EntityEggInfo(30, 894731, (new Color(21, 15, 6)).getRGB())); // creates the spawn egg and sets its colors
    }

    public void addRenderer(Map var1)
    {
        var1.put(EntityNAME.class, new RenderLiving(new ModelNAME(), 0.5F));
    }
}

// EntityNAME.java
package net.minecraft.src;

public class EntityNAME extends EntityMob // extend EntityMob to make the mob hostile
{
    public EntityNAME(World par1World)
    {
        super(par1World);
        this.texture = "/mob/NAME.png"; // mob texture
        this.moveSpeed = 0.4F; // how fast this mob moves
        isImmuneToFire = false;
        // The AI tasks below specify how the mob behaves; experiment with them
        this.tasks.addTask(0, new EntityAISwimming(this));
        this.tasks.addTask(1, new EntityAIAttackOnCollide(this, EntityPlayer.class, this.moveSpeed, false));
        this.tasks.addTask(2, new EntityAIMoveTowardsRestriction(this, this.moveSpeed));
        this.tasks.addTask(3, new EntityAIWander(this, this.moveSpeed));
        this.tasks.addTask(4, new EntityAILookIdle(this));
        this.targetTasks.addTask(0, new EntityAIHurtByTarget(this, false));
        this.targetTasks.addTask(1, new EntityAINearestAttackableTarget(this, EntityPlayer.class, 25.0F, 0, true));
    }

    public int func_82193_c(Entity par1Entity) // the amount of damage dealt
    {
        return 4;
    }

    protected void fall(float par1) {} // no fall damage

    public int getMaxHealth() // mob health
    {
        return 10;
    }

    protected String getLivingSound()
    {
        return "mob.pig.say";
    }

    protected String getHurtSound()
    {
        return "mob.pig.say";
    }

    protected String getDeathSound()
    {
        return "mob.pig.death";
    }

    protected int getDropItemId()
    {
        return Item.stick.shiftedIndex;
    }

    protected boolean canDespawn()
    {
        return true;
    }

    protected boolean isAIEnabled() // allow the AI tasks to run
    {
        return true;
    }
}

// ModelGreenMonster.java (exported from Techne)
package net.minecraft.src;

public class ModelGreenMonster extends ModelBase
{
    // fields
    ModelRenderer Leg2;
    ModelRenderer Leg1;
    ModelRenderer Body;
    ModelRenderer Arm1;
    ModelRenderer Arm2;
    ModelRenderer Head;

    public ModelGreenMonster()
    {
        textureWidth = 64;
        textureHeight = 128;

        Leg2 = new ModelRenderer(this, 48, 49);
        Leg2.addBox(-2F, 0F, -2F, 4, 11, 4);
        Leg2.setRotationPoint(-4F, 13F, 0F);
        Leg2.setTextureSize(64, 128);
        Leg2.mirror = true;
        setRotation(Leg2, 0F, 0F, 0F);

        Leg1 = new ModelRenderer(this, 31, 49);
        Leg1.addBox(-2F, 0F, -2F, 4, 11, 4);
        Leg1.setRotationPoint(2F, 13F, 0F);
        Leg1.setTextureSize(64, 128);
        Leg1.mirror = true;
        setRotation(Leg1, 0F, 0F, 0F);

        Body = new ModelRenderer(this, 0, 68);
        Body.addBox(0F, 0F, 0F, 12, 7, 7);
        Body.setRotationPoint(-7F, 7F, -3F);
        Body.setTextureSize(64, 128);
        Body.mirror = true;
        setRotation(Body, 0F, 0F, 0F);

        Arm1 = new ModelRenderer(this, 0, 34);
        Arm1.addBox(0F, -1F, -9F, 4, 3, 12);
        Arm1.setRotationPoint(5F, 8F, -3F);
        Arm1.setTextureSize(64, 128);
        Arm1.mirror = true;
        setRotation(Arm1, 0F, 0F, 0F);

        Arm2 = new ModelRenderer(this, 0, 19);
        Arm2.addBox(-4F, -1F, -10F, 4, 3, 12);
        Arm2.setRotationPoint(-7F, 8F, -3F);
        Arm2.setTextureSize(64, 128);
        Arm2.mirror = true;
        setRotation(Arm2, 0F, 0F, 0F);

        Head = new ModelRenderer(this, 0, 0);
        Head.addBox(-3F, -6F, -2F, 8, 6, 5);
        Head.setRotationPoint(-2F, 7F, 0F);
        Head.setTextureSize(64, 128);
        Head.mirror = true;
        setRotation(Head, 0F, 0F, 0F);
    }

    public void render(Entity entity, float f, float f1, float f2, float f3, float f4, float f5)
    {
        super.render(entity, f, f1, f2, f3, f4, f5);
        setRotationAngles(f, f1, f2, f3, f4, f5);
        Leg2.render(f5);
        Leg1.render(f5);
        Body.render(f5);
        Arm1.render(f5);
        Arm2.render(f5);
        Head.render(f5);
    }

    private void setRotation(ModelRenderer model, float x, float y, float z)
    {
        model.rotateAngleX = x;
        model.rotateAngleY = y;
        model.rotateAngleZ = z;
    }

    // Walking animation: legs and arms swing in antiphase
    public void setRotationAngles(float f, float f1, float f2, float f3, float f4, float f5)
    {
        super.setRotationAngles(f, f1, f2, f3, f4, f5, null);
        Leg1.rotateAngleX = MathHelper.cos(f * 0.6662F) * 1.0F * f1;
        Leg2.rotateAngleX = MathHelper.cos(f * 0.6662F + (float)Math.PI) * 1.0F * f1;
        Arm1.rotateAngleZ = MathHelper.cos(f * 0.6662F) * 1.0F * f1;
        Arm2.rotateAngleZ = MathHelper.cos(f * 0.6662F + (float)Math.PI) * 1.0F * f1;
    }
}

// Enchantment glint effect
public boolean hasEffect(ItemStack par1ItemStack)
{
    return true;
}

// Colored item name (\u00a7 + a color code character)
public String getItemDisplayName(ItemStack par1ItemStack)
{
    String var2 = ("\u00a7E" + StringTranslate.getInstance().translateNamedKey(this.getLocalItemName(par1ItemStack))).trim();
    return var2;
}

// Set attacked entities on fire
public boolean hitEntity(ItemStack itemstack, EntityLiving entityliving, EntityLiving entityliving1)
{
    entityliving.setFire(5);
    return true;
}

// Furnace fuel: 200 ticks smelts one item; coal is 1600 ticks
public int addFuel(int par1, int par2)
{
    if (par1 == ITEM.shiftedIndex)
    {
        return 200;
    }
    if (par1 == BLOCK.blockID)
    {
        return 200;
    }
    return 0;
}

// Achievements
public static final Achievement achievementFood = new Achievement(5400, "achievementFood", 10, 9, Item.sugar, null).setSpecial().setIndependent().registerAchievement();
public static final Achievement achievementGem = new Achievement(5401, "achievementGem", 10, 12, Item.bone, achievementFood).registerAchievement();

ModLoader.addAchievementDesc(achievementFood, "Yummy!", "Craft a Donut!!");
ModLoader.addAchievementDesc(achievementGem, "Green GEM!", "Pick Up a Green Gem!");

public void takenFromCrafting(EntityPlayer entityplayer, ItemStack itemstack, IInventory iinventory)
{
    if (itemstack.itemID == mod_minecraft.Food01.shiftedIndex)
    {
        entityplayer.addStat(achievementFood, 1);
    }
}

public void onItemPickup(EntityPlayer entityplayer, ItemStack itemstack)
{
    if (itemstack.itemID == mod_minecraft.greengem.shiftedIndex)
    {
        entityplayer.addStat(achievementGem, 2);
    }
}

// Nether ore generation
public void generateNether(World var1, Random var2, int var3, int var4)
{
    int veinsize = 8; // size of the vein
    int rarity = 12;  // attempts per chunk
    for (int var5 = 0; var5 < rarity; ++var5)
    {
        int var6 = var3 + var2.nextInt(16);
        int var7 = var2.nextInt(128);
        int var8 = var4 + var2.nextInt(16);
        new WorldGenNether(Block.dirt.blockID, veinsize).generate(var1, var2, var6, var7, var8); // replace dirt with your custom ore
    }
}

// Unbreakable, glowing blocks (chain these onto the block declaration)
.setBlockUnbreakable()
.setLightValue(5)

// ---- Forge (1.5.x) code ----

// TutorialMod.java
package SCMowns.Tutorial; // package directory

// Basic imports
import net.minecraft.block.Block;
import net.minecraft.item.EnumToolMaterial;
import net.minecraft.item.Item;
import net.minecraft.item.ItemFood;
import net.minecraft.item.ItemStack;
import net.minecraftforge.common.EnumHelper;
import cpw.mods.fml.common.Mod;
import cpw.mods.fml.common.Mod.Init;
import cpw.mods.fml.common.event.FMLInitializationEvent;
import cpw.mods.fml.common.network.NetworkMod;
import cpw.mods.fml.common.registry.GameRegistry;
import cpw.mods.fml.common.registry.LanguageRegistry;

// Basic Forge boilerplate
@Mod(modid = "TutorialMod", name = "Tutorial Mod", version = "v1")
@NetworkMod(clientSideRequired = true, serverSideRequired = false)
public class TutorialMod
{
    // Telling Forge that we are creating these
    public static Item topaz;

    // Declaring init
    @Init
    public void load(FMLInitializationEvent event)
    {
        // define items/blocks
        topaz = new GemItems(2013).setUnlocalizedName("topaz");

        // adding names
        LanguageRegistry.addName(topaz, "Topaz Gem");

        // crafting
    }
}

// GemItems.java
package SCMowns.Tutorial;

import net.minecraft.item.Item;
import net.minecraft.creativetab.CreativeTabs;

public class GemItems extends Item
{
    public GemItems(int par1)
    {
        super(par1); // par1 is the item ID
        setCreativeTab(CreativeTabs.tabMaterials); // which creative mode tab it goes in
    }
}

// Block declaration and registration (main class)
import net.minecraft.block.Block;
import cpw.mods.fml.common.registry.GameRegistry;
import cpw.mods.fml.common.registry.LanguageRegistry;

public static Block BlockName;

BlockName = new NewBlockClass(3608, "BlockName_Whatever").setUnlocalizedName("Texture_For_Block").setHardness(2.0F).setStepSound(Block.soundMetalFootstep).setResistance(10.0F);
GameRegistry.registerBlock(BlockName, "BlockName");
LanguageRegistry.addName(BlockName, "Block Name");

// NewBlockClass.java
package Your.Package;

import java.util.Random;
import net.minecraft.block.Block;
import net.minecraft.block.material.Material;
import net.minecraft.creativetab.CreativeTabs;

public class NewBlockClass extends Block
{
    public NewBlockClass(int par1, String texture) // constructor must match the class name
    {
        super(par1, Material.rock);
        setCreativeTab(CreativeTabs.tabBlock); // place in creative tabs
    }

    // drops when broken with a pickaxe
    public int idDropped(int par1, Random par2Random, int par3)
    {
        return MainClass.ITEM.itemID;
    }

    public int quantityDropped(Random random)
    {
        return 3;
    }

    // texture file for the block
    public String getTextureFile()
    {
        return "/textures/blocks/TEXTURE_NAME.png";
    }
}

// Shaped recipe: nine topaz make one topaz block
GameRegistry.addRecipe(new ItemStack(topazblock, 1), new Object[] {
        "TTT", "TTT", "TTT", 'T', topaz });

// The three row strings map onto the crafting grid slots:
// "TTT","TTT","TTT"
// "123","456","789"

// Shapeless recipe: placement order in the grid does not matter
GameRegistry.addShapelessRecipe(new ItemStack(TrioItem, 1), new Object[] {
        RubyItem, SapphireItem, Item.emerald });

// World generator registration (main class)
GameRegistry.registerWorldGenerator(new WorldGeneratorSCMowns());

// WorldGeneratorSCMowns.java
package Your.Package;

import java.util.Random;
import net.minecraft.world.World;
import net.minecraft.world.chunk.IChunkProvider;
import net.minecraft.world.gen.feature.WorldGenMinable;
import cpw.mods.fml.common.IWorldGenerator;

public class WorldGeneratorSCMowns implements IWorldGenerator
{
    @Override
    public void generate(Random random, int chunkX, int chunkZ, World world, IChunkProvider chunkGenerator, IChunkProvider chunkProvider)
    {
        switch (world.provider.dimensionId)
        {
            // case -1: generateNether(world, random, chunkX * 16, chunkZ * 16);
            case 0: generateSurface(world, random, chunkX * 16, chunkZ * 16);
        }
    }

    // Loop body reconstructed from the explanation below (i < 10 attempts,
    // 16x16x16 random offsets, vein size 4)
    private void generateSurface(World world, Random random, int blockX, int blockZ)
    {
        for (int i = 0; i < 10; i++) // a higher bound means more veins per chunk
        {
            int posX = blockX + random.nextInt(16);
            int posY = random.nextInt(16); // raise this to spawn ore higher up
            int posZ = blockZ + random.nextInt(16);
            new WorldGenMinable(YourClass.YourBlock.blockID, 4).generate(world, random, posX, posY, posZ);
        }
    }
}

// New tool material: harvest level 2, 500 uses, speed 6.0, +6 damage, enchantability 15
static EnumToolMaterial EnumToolMaterialTopaz = EnumHelper.addToolMaterial("LowPower", 2, 500, 6.0F, 6, 15);

public static Item TopazAxe;
public static Item TopazShovel;
public static Item TopazSword;
public static Item TopazPickaxe;
public static Item TopazHoe;

TopazAxe = new SCMownsAxe(9014, EnumToolMaterialTopaz).setUnlocalizedName("Texture_file_name");
TopazShovel = new SCMownsShovel(9015, EnumToolMaterialTopaz).setUnlocalizedName("Texture_file_name");
TopazPickaxe = new SCMownsPickaxe(9016, EnumToolMaterialTopaz).setUnlocalizedName("Texture_file_name");
TopazHoe = new SCMownsHoe(9017, EnumToolMaterialTopaz).setUnlocalizedName("Texture_file_name");
TopazSword = new SCMownsSword(9018, EnumToolMaterialTopaz).setUnlocalizedName("Texture_file_name");

// SCMownsPickaxe.java
package Your.Package;

import net.minecraft.item.EnumToolMaterial;
import net.minecraft.item.ItemPickaxe;

// *REMEMBER* Change "ItemPickaxe" to ItemAxe, ItemHoe, ItemSword, etc. if you are making those tools!
public class SCMownsPickaxe extends ItemPickaxe
{
    public SCMownsPickaxe(int itemID, EnumToolMaterial material)
    {
        super(itemID, material);
    }
}

LanguageRegistry.addName(TopazAxe, "Topaz Axe");
LanguageRegistry.addName(TopazShovel, "Topaz Spade");
LanguageRegistry.addName(TopazPickaxe, "Topaz Pickaxe");
LanguageRegistry.addName(TopazSword, "Topaz Sword");

// Custom creative tab (main class)
public static CreativeTabs tabYourTab = new TabSCMownsTutorialMod(CreativeTabs.getNextID(), "SCMowns Tutorial Mod");

import net.minecraft.creativetab.CreativeTabs;

// TabSCMownsTutorialMod.java
package SCMowns.Tutorial;

import cpw.mods.fml.relauncher.Side;
import cpw.mods.fml.relauncher.SideOnly;
import net.minecraft.creativetab.CreativeTabs;

public final class TabSCMownsTutorialMod extends CreativeTabs
{
    public TabSCMownsTutorialMod(int par1, String par2Str)
    {
        super(par1, par2Str);
    }

    // sets the icon for the creative tab (use .blockID for a block instead)
    @SideOnly(Side.CLIENT)
    public int getTabIconItemIndex()
    {
        return TutorialMod.topaz.itemID;
    }

    // sets the title/name for the creative tab
    public String getTranslatedTabLabel()
    {
        return "SCMowns Tutorial Mod";
    }
}

setCreativeTab(TutorialMod.tabYourTab);

Thanks SCMowns, this is really helpful for the community. Your tutorials are very detailed and easy to follow along. +1 rep for you!

Quote from Akif_the_minecrafter» Oh! I know you! I watched you on youtube and subscribed to you! You do mods! Love those videos!

Quote from Trigonia» Thanks SCMowns, this is really helpful for the community. Your tutorials are very detailed and easy to follow along. +1 rep for you!
Yeah, his videos are fantastic!

Quote from Phelixz» SCMowns, I was wondering if you knew how to customize your block so that it could only be mined by certain pickaxes?

Quote from ballisticjon» When is Part 2 coming out?

Quote from the115» Hey SCMowns, when I make an Item, it doesn't show up in the game. Can you help me out?
Quote from TCTNGamiz» Clicked on the green arrow <3 Even though I'm not into coding, great job on making this topic, it probably took you like 10h lawl

Quote from MiniMan100001» When I try to install Techne it doesn't have the application save file. How do I fix that?

1. Setting up MCP (Minecraft Coder Pack) and ModLoader (1.2.5 / 1.3.2 / 1.4.x / 1.5.1 / 1.5.2)

In the video I show you how to set up MCP. To get the downloads, click on the links below (pick your version):
Minecraft 1.2.5
Minecraft 1.3.2
Minecraft 1.4.2
Minecraft 1.4.4
Minecraft 1.4.5
Minecraft 1.4.6
Minecraft 1.5.1
Minecraft 1.5.2

You need software that can make transparent pictures, like Paint.net.

Mod_NAME: this code is quite simple to understand. public static final Item tells Minecraft that you are making a new item, the (2085) is the item ID, and the .setItemName("NAME"); is required, but doesn't need a fancy name! In the public void load() section you tell ModLoader where your item's texture is located, what is needed in order to craft it, and what its name is. My tutorial covers all of this.

ItemNAME: here is another separate class you will need to make. It defines how much the item can stack and registers it as an official Minecraft item.

My picture that I used in this tutorial can be downloaded here! (16x16)

Sword: in my tutorial I might have mentioned editing a base class; don't do it! In this line of code you make a new sword and later define it in your public void load(). Replace NAME with the name of your sword, then paste the two lines in your public void load(). The first line tells ModLoader to override Minecraft's item picture with your new sword sprite; the second line adds a name for your sword. In "NAME Sword", call your sword whatever you like. The tutorial goes over everything. The picture of the sword that I used can be downloaded here! (16x16)

Pickaxe: in your static section paste this, replacing NAME with a name of your choice. EnumToolMaterial.GOLD is the tool material, so it's set as gold (default). Then paste this line of code in your public void load(). The green pickaxe picture that was used can be downloaded here! (16x16)

Axe: in your static section paste this, then paste these lines of code in your public void load(). Watch the video for information! The green axe picture that was used can be downloaded here! (16x16)

Hoe: in your static section paste this, then paste these lines of code in your public void load(). Watch the video for information! The green hoe picture that was used can be downloaded here! (16x16)

Shovel: in your static section paste this, then paste these lines of code in your public void load(). Watch the video for information on these lines of code! The green shovel picture that was used can be downloaded here! (16x16)

Block / Ore: paste this line of code in your public static section, then create a new class for your BlockNAME and copy and paste the code in; follow the video tutorial if you get stuck or confused. Now paste the registration lines in your public void load(). Once you have pasted them, be sure to change any (NAME) to your block name. Then, below your static final section (not inside public void load()), paste the generateSurface method: it handles your actual ore generation in chunks. If you would like to generate more ore everywhere, change this line: for (int i = 0; i < 10; i++), e.g. change 10 to 30 and see the difference. If you know Minecraft's XYZ coordinates you can easily change where ore will spawn underground. I cover a lot of lines in my video, so check it out! The green ore picture that was used can be downloaded here! (16x16)

Armor: make a new class and paste the armor code in your mod_NAMEARMOR. This is the basic ModLoader coding format; you will use it like a normal mod_ file you have made. In your static final body, paste the four armor item lines: we are defining the items that go in the armor slots. Keep in mind the (5, 0) arguments. The 5 is the new armor set number you have added: since Minecraft has 4 armor sets, you are adding the 5th (if you add more armor, be sure to increase that number to 6). The 0 is where the item is worn: 0 = helmet, 1 = body, 2 = pants, and 3 = boots. Now paste the recipe lines in your public void load(). Watch my video for more details. Pictures (used in video): download the bundle! (16x16)

It's very simple. Just follow my tutorial xD

*This video and code only work for Minecraft 1.2.5 MCP coding!* Updated code is located down below! Paste this line of code in your public void load(). Watch the video for information! Download my "Shiny Gem" here! (16x16)

Gem block: the same idea as making a block, but with a few changes. Copy and paste this line of code in your "public static final" section. You will need to follow my video for a great description of this line and the following lines.
Copy and paste these lines in your public void load():

NAMEblock.blockIndexInTexture = ModLoader.addOverride("/terrain.png", "/items/pic.png");
ModLoader.registerBlock(NAMEblock);
ModLoader.addName(NAMEblock, "In-Game-Name Block");
ModLoader.addRecipe(new ItemStack(NAMEblock, 1), new Object[] {
        "###", "###", "###", Character.valueOf('#'), ITEM });

In your BlockNAME: download my block that I used in my tutorial here!

Updating from 1.2.5 to 1.3, STEPS:
- Download ModLoader 1.3
- Download the new MCP v1.7 (for Minecraft 1.3)
- Get a fresh minecraft.jar
- (Optional) Get a minecraft_server.jar 1.3
- Copy and paste your "bin" and "resources" folders into your "jars" folder (make sure you have modded minecraft.jar with ModLoader)
- Decompile
- Move your old src over to the new src
  *How to move over:* either look for your class files in your old src and move them over, or highlight all, *copy*, and paste into the new src, but DON'T REPLACE
- Recompile
- Look for errors (you might get one)
- Move your pictures/items over to the new location
- Open Eclipse and look for your mod_NAME
- Run Minecraft, and enjoy your mod!

Get the new MCP for Minecraft 1.3 and the downloads here!

Updating to 1.3.2:
- Update your version of MCP to 1.3.2
- Copy your old src (source) into your new modding folder (you can get mine)
- Copy your pictures as well
- Fix errors, if any (look for red lines)

Food: paste this line of code in your public static final section. It's just like making an item, but this time make sure it extends ItemFood. The 6 is how many half hunger points the food fills up, so 2 = 1 full icon and 6 = 3 icons. The 1F is the saturation modifier, which controls how long until you get hungry again; leave it very low and you will starve again quickly!
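If you want to sanity-check that hunger arithmetic outside the game, it is plain division. The class and method below are my own illustrative names, not part of ModLoader:

```java
public class FoodValueDemo {
    // ItemFood's heal amount is counted in half hunger points,
    // so a value of 6 refills three of the ten drumstick icons.
    static double iconsRestored(int healAmount) {
        return healAmount / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(iconsRestored(6)); // 3.0
        System.out.println(iconsRestored(2)); // 1.0
    }
}
```

So the doughnut's 6 translates to three full icons on the hunger bar.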
The true statement is whether you can feed it to a wolf: true = yes, false = no. Now paste this line of code in your public void load(). Download my food item here! (Doughnut)

Smelting: the 1.0F is how much XP you'll receive for smelting successfully. The rest is explained in my tutorial.

Creative tabs: if you have a sword, remove the (CreativeTabs.tabMaterials) and type "(CreativeTabs." instead; when you type that dot "." you should get a list of the tabs available, like Materials, Combat, Misc, Food, Tools, etc.

Custom mobs: download all the requirements! Download Techne here! Download my mob save here! You will need Internet Explorer to right-click the "Download/Website" link on Techne's forum to save it as an application (watch my video). Download my mob texture!

mod_NAME.java: first off, make yourself a new class and replace every NAME with the name you have chosen. In the public void load(), ModLoader registers the mob/entity you are creating. I have left side notes on most lines of code so you can understand what they mean.

EntityNAME: each line is pretty much self-explanatory; watch my video and read the side notes to understand the code.

Here is my ModelGreenMonster. Download my mob texture! You will need to paste the setRotationAngles body into your ModelNAME.java file; mess with the numbers to get different rotation results!

For Minecraft 1.4.4: download the newest ModLoader here! (1.4.4) Download MCP version 7.19 for 1.4.4 here! For Minecraft 1.4.5: download the files here!

Enchantment effect: the code is here. Leave the return as true in order to get the enchantment effect on the item!

Colored names: here is the code that was used in this video. Pay attention to \u00a7E: the E is the placeholder for a color code.
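Outside the game, the prefix mechanics can be seen with ordinary Java strings. Only the '\u00a7' section sign is Minecraft's convention here; the helper name is mine:

```java
public class ColorCodeDemo {
    // Minecraft reads the section sign '\u00a7' plus one character
    // (0-9, a-f) as a formatting code in front of a display name.
    static String colorize(char code, String name) {
        return "\u00a7" + code + name;
    }

    public static void main(String[] args) {
        String yellow = colorize('e', "Shiny Gem");
        System.out.println(yellow.startsWith("\u00a7e")); // true
        System.out.println(yellow.substring(2));          // Shiny Gem
    }
}
```

The two-character prefix is invisible in chat; the client consumes it and recolors the rest of the string.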
The list of color codes is right here: Color codes webpage! Change the E to 1, 2, or 4 and you will get a different color. This method does not work on blocks; I will look into that later on.

Fire aspect: to turn off the fire aspect, put return false;. The 5 is the number of seconds the fire remains on the entity; raise it if you want the mob to stay on fire for a very long time.

Furnace fuel: watch the video for some additional information!

Achievements: these lines of code were used in my video, and I have explained what they mean. Paste the Achievement declarations in your "public class" section, the addAchievementDesc lines in your public void load(), and the takenFromCrafting / onItemPickup methods underneath public void load(); make sure you do not paste them inside public void load(). Follow the video tutorial for more information.

Nether ores: follow the tutorial on making a new WorldGenNether class by copying the source code from WorldGenMinable. Here is the code that you place in your mod file. I left some side notes explaining the obvious. But anyways, hope this helps!

Gravity, unbreakable and glowing blocks: for the gravitational block you'll need to follow along with my tutorial. To make your block unbreakable, add .setBlockUnbreakable() to your "public static final" line. To give your block a light value, add .setLightValue(5) there as well; increase or decrease the 5 to raise or lower the light radius.

Downloads: click here!

1. Setting up MCP, Eclipse, and Java JDK with Forge 1.5.1 / 1.5.2

Download(s):
Minecraft 1.5.1
Minecraft 1.5.2

After making your mod class, paste the code above into it. Watch the video tutorial for information about what some lines mean. Then it's time to make a class for the GemItems; the code shows what you add in that class. You can download the topaz here!
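As a quick aside on the furnace fuel numbers used earlier (200 ticks per smelt, 1600 for coal), the arithmetic checks out in plain Java. The class and method names are illustrative, not ModLoader API:

```java
public class FuelDemo {
    // addFuel returns a burn time in ticks; the furnace spends
    // 200 ticks smelting one item.
    static int itemsSmelted(int burnTicks) {
        return burnTicks / 200;
    }

    public static void main(String[] args) {
        System.out.println(itemsSmelted(1600)); // coal: 8 items per piece
        System.out.println(itemsSmelted(200));  // our fuel: 1 item
    }
}
```

That is why returning 200 from addFuel makes your item about an eighth as useful as coal.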
Download(s): Minecraft 1.5.2

Video: First off, make sure that you have imported everything that is needed in your main class. After importing everything we need, let's make the block: inside your public class brackets, make a new public static Block. Now we need to declare what BlockName is, so inside your public void load() brackets, put the declaration lines. setHardness is how hard your block is to break (to see the values for vanilla blocks, check out the Block.java file), and setResistance is how much resistance the block has against explosions. You can follow the video tutorial for the little things. Now we need to define our new NewBlockClass.java: make a new class, for example NewBlockClass, and paste the lines of code inside. Download my texture here!

Video: Src:

Basic Crafting: in your main class, paste the code in your public void load(). To read this code: we are making a topaz block out of 9 topaz. T = 1 topaz, and (topazblock, 1) means we are crafting a topaz block and only getting one of them. Here is an example using numbers for the crafting grid: where you see "TTT","TTT","TTT", think of it as "123","456","789". That is how you describe the crafting grid in Java. (Watch my video for a better explanation.)

Shapeless Crafting: shapeless crafting means making something in the crafting table without any specific arrangement. (Watch the video for a better explanation.) The coding is a lot like basic crafting, but worded differently. To make a Trio Gem you need a Ruby, a Sapphire, and an Emerald; these gems can be placed in the crafting table in any order to make a Trio Gem. (This mod is copyrighted: TrioGems Mod.)

Video: In order to make an ore in Minecraft Forge we need to make a basic block, and we already made one before (Episode 4). After we have made a new block, we need it to generate around the overworld. We need to make a new world generator!
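Before writing that class, here is the chunk-to-block coordinate arithmetic the generator performs, runnable on its own without any Minecraft classes (the names are mine):

```java
import java.util.Random;

public class OrePlacementDemo {
    // One candidate vein position inside a chunk: the generator turns
    // chunk coordinates into block coordinates (chunk * 16) and then
    // offsets randomly within the 16x16 column.
    static int[] candidate(Random rand, int chunkX, int chunkZ) {
        int blockX = chunkX * 16;
        int blockZ = chunkZ * 16;
        return new int[] {
            blockX + rand.nextInt(16), // X stays inside this chunk
            rand.nextInt(16),          // Y band; raise the bound for higher ore
            blockZ + rand.nextInt(16), // Z stays inside this chunk
        };
    }

    public static void main(String[] args) {
        int[] p = candidate(new Random(), 3, -2);
        System.out.println(p[0] >= 48 && p[0] < 64);   // true: inside chunk 3's X range
        System.out.println(p[2] >= -32 && p[2] < -16); // true: inside chunk -2's Z range
    }
}
```

Because nextInt(16) never reaches 16, every candidate position lands inside the chunk being generated, which is why the real generator never spills ore into neighboring chunks.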
You will have a red underline around this area: WorldGeneratorSCMowns. Hover over it and make a new class in your package named WorldGeneratorSCMowns (or whatever you named it; you can call it WorldGeneratorOreGen to keep it simple). I have left some code out of the class where you see the "//" comment; that will be used in our Nether ore generation if you are interested in that later on. First off, make sure you fix your package import, then rename anything you have changed. So if you changed "WorldGeneratorSCMowns" to something else like "WorldGeneratorMyOres", rename this line of code: public class WorldGeneratorSCMowns implements IWorldGenerator.

If we look at our private void generateSurface, that is where our basic block is made to generate around Minecraft. The switch case is set to 0, which is the overworld dimension, so the ore generates in the overworld. You can mess around with the i < 10 bound in generateSurface: the higher the number, the more often the ore spawns. The next lines are the coordinates where the ores generate at random; I normally leave them at 16x16x16, but you can change 16 to 32 if you would like the veins to generate farther apart from each other. The WorldGenMinable line defines what is being generated: replace YourClass with your main class and YourBlock with the basic block you made. The 4 is the vein size, i.e. how many ore blocks appear together when generating; you can lower or raise it if you like. Other than that, you should be set! Check the video for more info. Download my topaz block picture here!

7. Tools (Sword, Shovel, Pickaxe, Hoe, Axe) and a new tool material, 1.5.2 (New!)

Video: In order to begin making your first Forge tools you need to make a new enum tool material.
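Returning for a moment to the row-string encoding from the Basic Crafting section: the same 3x3 mapping can be exercised in plain Java. Everything here is illustrative; in game, GameRegistry does the real parsing:

```java
import java.util.Map;

public class ShapedRecipeDemo {
    // Expand row strings like "TTT","T T" plus a char-to-ingredient map
    // into a 3x3 grid; a space means an empty slot (null).
    static String[][] expand(String[] rows, Map<Character, String> key) {
        String[][] grid = new String[3][3];
        for (int r = 0; r < rows.length; r++)
            for (int c = 0; c < rows[r].length(); c++)
                grid[r][c] = key.get(rows[r].charAt(c));
        return grid;
    }

    public static void main(String[] args) {
        String[][] g = expand(new String[] { "TTT", "TTT", "TTT" },
                              Map.of('T', "topaz"));
        System.out.println(g[1][1]); // topaz
    }
}
```

Row 0 is the top of the crafting grid and column 0 its left edge, matching the "123","456","789" slot numbering described above.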
What that is: think of diamond, which has the highest durability, strength, etc. You can make your weapons/tools weak or strong; all it takes is a new enum tool material. As an example, the tool material named EnumToolMaterialTopaz here is considered "low powered". For more info about tool materials, check out Minecraft's EnumToolMaterial class. Once you have the enum tool material, it's time to make the tools!

Now that the tools are registered, time to define them. You will get some red underlines; we will clear them up in a bit. First, make sure all your picture file names are replaced: change "Texture_file_name" to your picture file's name. The 9014 is the item ID. Now we have to make a new class for each of SCMownsAxe, SCMownsShovel, SCMownsPickaxe, SCMownsHoe, and SCMownsSword, so make those classes! Read that REMEMBER comment: you will need to change "ItemPickaxe" if you are making an axe, sword, shovel, etc. Don't forget to add the names to the items! Watch the video for a better understanding of everything.

Video: In order to make a creative tab in Minecraft Forge, open your main mod class and paste the following code somewhere in your public static section. Reading it: you are creating a new creative tab, and tabYourTab is the actual tab you are making; change that to something different like tabSCMowns, tabSteven, etc. CreativeTabs.getNextID() looks up the next available ID for a creative tab. Inside the quotes I put "SCMowns Tutorial Mod"; change that to something else, because that name appears when the player hovers over the tab. Make sure to import the following package: net.minecraft.creativetab.CreativeTabs. You will have a red underline on TabSCMownsTutorialMod.
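As an aside on the five numbers behind addToolMaterial: they can be modeled as a plain value type to compare materials. The diamond row is quoted from memory of the vanilla EnumToolMaterial table, so treat it as approximate:

```java
public class ToolMaterialDemo {
    // Mirrors the five numbers passed to EnumHelper.addToolMaterial(name, ...):
    // harvest level, durability, mining speed, bonus damage, enchantability.
    record ToolMaterial(int harvestLevel, int maxUses, float efficiency,
                        int damage, int enchantability) {}

    static final ToolMaterial TOPAZ   = new ToolMaterial(2, 500, 6.0F, 6, 15);
    static final ToolMaterial DIAMOND = new ToolMaterial(3, 1561, 8.0F, 3, 10);

    public static void main(String[] args) {
        // Topaz tools break faster than diamond but enchant better.
        System.out.println(TOPAZ.maxUses() < DIAMOND.maxUses());               // true
        System.out.println(TOPAZ.enchantability() > DIAMOND.enchantability()); // true
    }
}
```

Tuning any one of those five numbers is how you make a material feel "low powered" or overpowered relative to the vanilla tiers.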
What you need to do now is create a new class called TabSCMownsTutorialMod and paste in the code above. Now that you have created it, make sure your package is right and fix any red underlines you come across. You will need to change "return TutorialMod.topaz.itemID;" to an item/block of your choice: that line decides which item or block is displayed on the creative tab. I picked my Topaz Gem to be displayed. After you have found the right block/item to showcase on the tab, change "return "SCMowns Tutorial Mod";". Whatever you put there is displayed on the tab itself (as you can see in the picture, it's written on the tab, above the tools/blocks).

Now that you have changed everything, it's time to put your items and blocks in the new creative tab. To do that, open your item/block classes, like "GemItems" for example, or "SCMownsBasicAxe" or "TopazOre", and replace the old creative tab line with: setCreativeTab(TutorialMod.tabYourTab);. To best understand this episode, watch the video!

* If you would like the src code of everything I have been working on from ep. 1-8, click here for the download! *

Thanks!

Back on MCF baby. I will make an episode on that soon.

I know, I have to make a new mob lol.

Back on MCF baby. I would recommend you to post a topic on my help forums.

Edit: Nvm -.- After I went through more of the videos I found the video on "Making Items Show Up in The Creative Tab".

Hey, did you try to download the link using Internet Explorer? Or are you on a Mac? (If I don't reply quickly, check out my help forums.)
https://www.minecraftforum.net/forums/mapping-and-modding-java-edition/mapping-and-modding-tutorials/1571481-1-5-2-1-5-1-1-4-7-1-3-2-creating-mods-modloader
CC-MAIN-2019-43
refinedweb
5,853
52.15
ASTNG - Abstract Syntax Tree New Generation - is an enhanced Python syntax tree generator. It uses the tree generated from the '_ast' python module and rebuilds a new tree with more information. It is mainly used by Pylint, but also by Pyreverse and some other projects...

Here is a file foo.py:

class Super(object):
    @classmethod
    def instance(cls):
        return cls()

class Sub(Super):
    def method(self):
        print 'method called'

sub = Sub.instance()

Astng builds a tree with the ASTNGBuilder from a file or from a string. The returned tree has a list of nodes in tree.body. Nodes in the dictionary tree.locals store locally defined variables. Finally, inference is an important feature of astng. So let us see how all this works:

>>> from logilab.astng import builder as buildmod
>>> builder = buildmod.ASTNGBuilder()
>>> tree = builder.string_build(open("foo.py").read())
>>> # you can also write:
>>> tree = builder.file_build("foo.py")
>>> print ', '.join(str(n) for n in tree.body)
ClassDef(Super), ClassDef(Sub), Assign()
>>> for key, vals in tree.locals.items():
...     print key, ":", [str(n) for n in vals]
Super : [ClassDef(Super)]
Sub : [ClassDef(Sub)]
sub : [AssName(sub)]
>>> print list(tree['sub'].infer())
[<Instance of .Sub at 0x38697296>]

To understand the tree structure of the analyzed code, we can use the tree_repr method. Here is a tree representation for the sub assignment:

tree.body[2].tree_repr()
Assign()
   targets = [ AssName(sub) ]
   value = Call()
      func = Attribute()
         expr = Name(Sub)
      args = [ ]
      starargs =
      kwargs =

Pylint uses the ASTNG representation for analyzing code. Pyreverse builds nice UML diagrams from ASTNG projects. It is part of the pylint project.

Card #21900 - latest update on 2012/05/29, created on 2010/03/16 by Emile Anclin
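For comparison, here is what the raw tree that ASTNG starts from looks like when built with the standard ast module (the modern successor of '_ast'; the print statement is replaced by pass so the snippet parses on current Python). Note that the raw tree has no locals dictionary and no inference; that is the information ASTNG adds on top:

```python
import ast

# Roughly the foo.py from the article, in Python 3 syntax.
source = """
class Super(object):
    @classmethod
    def instance(cls):
        return cls()

class Sub(Super):
    def method(self):
        pass

sub = Sub.instance()
"""

tree = ast.parse(source)

# The raw tree exposes the same three top-level statements,
# but only as bare node types, without names resolved or inferred.
kinds = [type(node).__name__ for node in tree.body]
print(kinds)  # ['ClassDef', 'ClassDef', 'Assign']
```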
https://www.logilab.org/21900
I. The object

Let's see what the object looks like:

Naive method – DataContractSerializer

Here's how you serialize this thing normally:

This gives a length of 431 characters (all ASCII, so also 431 bytes). Not much for a cookie, but in a forms authentication ticket, it's about 5 times as much (more on that in part 2), which can be a problem.

Binary serialization

My second idea was to mark the object as [Serializable] and use the BinaryFormatter, like this:

Sadly, this gives a binary length of 512, which is 684 characters in Base64. There's still a lot of metadata in it.

Alternative methods

The problem with the above methods is that they store the names of properties along with the raw data. I could have developed a custom serialization method with a BinaryWriter, but that's hard to maintain, so I gave Protocol Buffers a try. Binary length 100, Base64 length 136 - that's quite an improvement. This is because it outputs very short tag identifiers instead of property names, so the output's size is very close to the raw size of the data, and also to what I could have achieved with a BinaryWriter. But I decided against it. I didn't want to add another dependency for a project just for this one usage, and I didn't want to change our MDG generated data classes to make them suitable for this purpose. And of course, I was looking for a challenge.

Binary XML serialization

Then I found out about binary XML. Here's the first try:

Binary length 308, Base64 length 412. Not much of an improvement over the text version (431), but the cool thing is that this format can learn: when it sees some data for the 2nd time, it only writes a short reference. Write the object twice, and it would be quite small, right? Wrong, because the XmlDictionary implementation can't learn automatically.
Even sadder is the fact that even if you teach it manually (as in, you add the names of your properties to it), it won't make a difference, because it only recognizes the exact same XmlDictionaryString instances that you add to it, and not the ones that it gets from the serializer.

So here's my great idea: make an XML dictionary that can learn automatically. Whenever the serializer looks up a string and it's not found in the dictionary, it is added automatically. Here is the source code: LearningXmlDictionary.cs. Here's how to use it: first, you put the dictionary in learning mode and serialize an empty sample object (or several of them). Then you start serializing your real object just like above. The result: binary length 128, Base64 length 172! Sweet. It is pretty close to what Protocol Buffers can do, but it only needs core .NET stuff and a little trick.

You need to be very careful, though. When two learning XML dictionaries don't have the exact same knowledge, all hell breaks loose: a login name ends up in the phone number property, and there is no built-in validation against that. So if you issue a ticket, upgrade your application, the data class changes a little, and a user returns with an old ticket, then you have a problem. First, make sure that the dictionary is always taught the exact same way, and don't leave it in learning mode after that. Second, if the class changes, make sure that you create a new encryption key for the forms ticket (this is the default, by the way) or add some version information in your class as a new property (changed metadata, like the namespace, doesn't work, since it doesn't appear in the serialized data).

Here is a chart to sum it up:

You will see even better improvements in part 2.
http://joco.name/2013/06/01/storing-net-objects-in-cookies-part-1-compact-serialization-with-binary-xml/
in reply to Re^2: No warning when assigning to a variable
in thread No warning when assigning to a variable

Hi Alexander,

Perl is not C. Let's look at a common idiom in Perl:

while(my $line = <>) {
    do_something($line);
}

This is an assignment in a conditional operator. It fortunately does not emit a warning. Let's look at more or less equivalent code in C:

#include <stdio.h>
#include <stdlib.h>

int get_val(void);

int main(void) {
    int i, x, y;
    i = x = y = 0;
    while (x = get_val()) {
        i++;
        printf("in loop\n");
        if(i == 1) {
            exit(1);
        }
    }
    return i;
}

int get_val(void) {
    return(1);
}

Compiled with LANG=C gcc -Wall -pedantic -o warning warning.c it does emit the following:

warning.c: In function 'main':
warning.c:11: warning: suggest parentheses around assignment used as truth value

IMHO, it proves: Perl is not C, C is not Perl. C is a compiled language; Perl is an interpreted language. In my opinion, the above Perl idiom should not emit a warning like C does.

Best regards
McA

P.S.: A ++ for your comparison.
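The usual way to keep the C version warning-free while still assigning inside the condition is the fix gcc itself suggests: an extra pair of parentheses around the assignment, which tells the compiler it is intentional. A small sketch (not from the thread; get_val here is a hypothetical value source that returns 1 three times, then 0):

```c
#include <stdio.h>

/* Hypothetical value source: returns 1 three times, then 0. */
static int calls = 0;
int get_val(void) {
    return calls++ < 3 ? 1 : 0;
}

/* Count loop iterations. The doubled parentheses around the
   assignment are what silence gcc's
   "suggest parentheses around assignment used as truth value". */
int count_vals(void) {
    int x;
    int count = 0;
    while ((x = get_val())) {
        count++;
    }
    return count;
}
```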
http://www.perlmonks.org/?node_id=1049790
im_cmulnorm, im_multiply - multiply two images

Synopsis
       #include <vips/vips.h>

       int im_cmulnorm(in1, in2, out)
       IMAGE *in1, *in2, *out;

       int im_multiply(in1, in2, out)
       IMAGE *in1, *in2, *out;

Description
       These functions operate on two images held by the image descriptors in1 and in2 and write the result to the image descriptor out. The input images in1 and in2 should have the same number of channels and the same sizes; however, they can be of different types. Only the history of the image descriptor pointed to by in1 is copied to out.

       None of the functions checks the result for over/underflow.

Return value
       All functions return 0 on success and -1 on error.

See also
       im_subtract(3), im_lintra(3), im_add(3).

Authors
       N. Dessipris - 22/04/1991
       J. Cupitt (im_multiply) - 22/04/1991

24 April 1991                                                  IM_MULTIPLY(3)
http://huge-man-linux.net/man3/im_cmulnorm.html
Hi,

Thanks for the help. I could get started with the transformation. But the stylesheet that is output still has only the alias I gave and not 'xsl'. I think the namespace-alias hasn't worked. I gave the alias:

<xsl:namespace-alias

and I got:

<?xml version = '1.0' encoding = 'UTF-8'?>
<x:stylesheet
<x:template xmlns:
<x:param/person</x:param>
...

Also, it works only with the Oracle XML parser v2 and not with Xalan. And with the Oracle XML parser v2, the XSL output contains non-printable characters, as follows:

</xsl:template>
<several small rectangular boxes here>
</x:stylesheet>

Also, for any tags I create using xsl:element, it is outputting the namespace along with the tag. Also, the schema namespace I have declared in the source XSL is passed to the output, which I don't want. I suppose ways exist to suppress these extra namespaces getting generated. I would appreciate a solution for the non-printable characters and the namespace-alias problem. I have to mention I am using an unknown version of Xalan. The Oracle parser appears to be current.

Regards,
Omprakash.V

Reply from David Carlisle to xsl-list, 01/17/2005, Subject: Re: [xsl] namespace-alias question:

<x:text

You don't want to use disable-output-escaping here (you almost never want to use that).

> Here I am using x to prefix my stylesheet while using xsl to prefix the stylesheet being outputted

No, you have that backwards.

<xsl:stylesheet xmlns:

Here you have defined the prefix xsl to be the XSLT namespace, so you must use xsl: as the prefix for all the instructions that you want to execute, such as this xsl:stylesheet element and also namespace-alias. You want to say that the x: prefix is an alias for XSLT, so that should be

<xsl:namespace-alias
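For reference, a minimal complete stylesheet of the kind being discussed might look like this (a sketch, not the poster's actual code; the alias namespace URI is made up). The instructions execute under the xsl: prefix, the literal result elements are written under the x: alias, and the namespace-alias declaration makes the serializer emit them in the real XSLT namespace:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://example.org/alias">

  <xsl:namespace-alias stylesheet-prefix="x" result-prefix="xsl"/>

  <xsl:template match="/">
    <!-- This x:stylesheet element is literal output; the alias makes
         the serializer write it in the real XSLT namespace. -->
    <x:stylesheet version="1.0">
      <x:template match="person">
        <x:value-of select="name"/>
      </x:template>
    </x:stylesheet>
  </xsl:template>

</xsl:stylesheet>
```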
http://www.oxygenxml.com/archives/xsl-list/200501/msg00629.html
Is it possible to use an array to invoke an object?

Code:
int[][] a1 = {
    { 1, 2, 3, },
    { 2, 2, 3, },
    { 2, 2, 3, },
    { 4, 4, 4, },
};

for(int i = 0; i <= 3; i++){
    for(int j = 0; j < 3; j++)
        System.out.println(a1[i][j]);
}

So, instead of creating 4 objects:

obj myobj = new obj(1,2,3);
obj myobj2 = new obj(2,2,3);
...

I want to use one object and have the array invoke the object.

Sorry matey, I don't understand, could you maybe re-phrase the question?

Are you asking if you can use a multidimensional array to store objects? If so, then yes you can. I'm not sure what you mean by "invoke an object" though.

Not sure what the terms are? Instantiation... OK, here goes... In the client I want to instantiate the objects with an array. So instead of creating four instances:

obj myobj = new obj(1,2,3);
obj myobj2 = new obj(2,2,3);
...

I want the array to instantiate one object:

int[][] a1 = { { 1, 2, 3, }, { 2, 2, 3, }, { 2, 2, 3, }, { 4, 4, 4, }, };
obj myobj = new obj(array here?[][]);

So the result would be to use one object, with the array feeding the object data. Hope that is clear...

Ok, now I understand... Well, to do that you just need to create a constructor for the class that is capable of taking an array as a parameter.

Code:
public class SomeClass {
    int[][] anIntArray;

    public SomeClass(int[][] anIntArray) {
        // do anything you like with the array here, e.g.:
        this.anIntArray = anIntArray;
    }
}

Thanks Mike, that helps a lot and makes things more clear. But my problem is that the constructor is in another class (the server class):

public TriBoolean(int a, int b, int c){
    this.side_a = a;
    this.side_b = b;
    this.side_c = c;
}

So, to sum up what I want: I have a server class that uses the constructor above.
Then I have a client class in which I want to instantiate the objects using the array:

public class theClient{
    TriBoolean myobj = new TriBoolean(a1[][]);
    ...

So I'm not sure how to set up the constructor as you stated in the server class, and then have the array processed in the client class? Hope I'm still clear...

Not too sure about the coding behind this, but couldn't you just use some kind of for loop and create the new object like this:

MyObj a = new MyObj(array[0][0], array[0][1], array[0][2]);

This would pass ints to the constructor, and if you looped it correctly then it would be OK... I think.

I still don't think I'm totally sure what you want to do. The array can't do any processing though. It is just a kind of box that holds values or references to objects. You can't make it do anything with the values. You would have to write a method that takes all the values and makes the objects for you. Not knowing exactly what the objects are or how they're used, I'm not sure what the best way to do this would be.

The for loop suggestion would certainly work though. The method could store the created objects in an array, and then return the array of objects. This would work with any arbitrary number of objects. The method that does this could be a static method in the TriBoolean class. You pass it the array of values, it creates the objects of itself and passes them back in an array or some collection object.

I'm assuming you are doing this so that you can save time on coding, but either way you are going to have to do a lot of typing... If you do work out how to use an array, then you're still going to have to create the array by hand, which could take a while, unless the array values are sequential or follow some pattern (i.e. not random).

Thanks guys, you both were a great help. Great info, the for loop works! That is what I needed. Mark, I have posted the results of using the array (client class Describe.java). Thanks...
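The static factory method suggested above might look something like this (a sketch; TriBoolean is reduced to just its three sides, since the full class isn't shown in the thread):

```java
import java.util.ArrayList;
import java.util.List;

public class TriBoolean {
    int side_a, side_b, side_c;

    public TriBoolean(int a, int b, int c) {
        this.side_a = a;
        this.side_b = b;
        this.side_c = c;
    }

    /** Create one TriBoolean per row of the array. */
    public static List<TriBoolean> fromArray(int[][] values) {
        List<TriBoolean> result = new ArrayList<TriBoolean>();
        for (int[] row : values) {
            result.add(new TriBoolean(row[0], row[1], row[2]));
        }
        return result;
    }

    public static void main(String[] args) {
        int[][] a1 = { { 1, 2, 3 }, { 2, 2, 3 }, { 2, 2, 3 }, { 4, 4, 4 } };
        List<TriBoolean> triangles = TriBoolean.fromArray(a1);
        System.out.println(triangles.size()); // prints 4
    }
}
```

The loop works for any number of rows, so adding a fifth triangle to the array needs no new code in the client.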
The server class takes in three ints, which are tested in the boolean methods: if the property holds, then true, else... Then the method values are passed to a toString(), and the output is based on the return values of the methods. In the client I would have had to create an object for each triangle that I wanted to test for its property values. Just thought I'd post it... for viewing.
http://forums.devx.com/showthread.php?140385-how-to-invoke-an-object-using-an-array&p=415630
Technical Articles

Dancing with SAP Data Intelligence

In the blog post written by Ingo Peter, we can see how SAP Data Intelligence can leverage SAP HANA PAL to create predictive models by using custom code in Python Operators. In this article, we will show a new way (available as of DI 1911) of using PAL models with out-of-the-box operators.

Introduction

As the end-of-the-year party approaches, your manager gives you, an important data scientist in the company, a critical job: to make sure the music playlist is as fun as possible. This is of extreme importance so that all employees enjoy the party. You think to yourself: how on earth will you do that? After all, you are no DJ. So, you decide to put your machine learning skills to work and create a model to figure that out for you. But that brings up another question: where will you do that? It would be good if this model could be integrated with your in-house software that manages events. You then remember reading about a platform called SAP Data Intelligence in an SAP blog post and decide to use it to tackle this problem as well.

The dataset

As usual, you need to gather some data to train a model. Luckily, you find on the Internet a dataset containing a list of the top 50 Spotify songs of 2019 and some features associated with each of them. You upload the csv file to your company data lake on AWS S3. You open up the Modeler app to create a pipeline that will load the data from the csv into HANA DB. You also take the opportunity to clean it up a bit. As you would like to keep things simple (at least for now), you decide to use only some of the features available in the dataset. The pipeline for that is also simple: just an operator to read the file, a Python script to pre-process the data, and ultimately a HANA client operator to insert the data into the database.
It looks like this: For the Read File operator, the configuration was quite simple: For the Python operator, you wrap it with a group by right clicking it and selecting “Add group”. On the settings of that group, you add tag “pandas”. That will allow you to import pandas package, which makes it so much easier to deal with csv files. Also, you create an input port on the operator called “file” of type “message.file” and an output port called “data” of type “message. The Python code then looks like this: from io import StringIO import pandas as pd def on_file(message): content = StringIO(message.body.decode('iso-8859-1')) df = pd.read_csv(content, sep=",") dataset = df[['Genre','Beats.Per.Minute','Popularity','Length.','Danceability']] dataset['Genre'] = dataset['Genre'].astype('category').cat.codes danceability_col = dataset['Danceability'] danceability_col_cat = danceability_col.copy() fun_threshold = 70 danceability_col_cat[danceability_col < fun_threshold] = 'No Fun' danceability_col_cat[danceability_col >= fun_threshold] = 'Fun' dataset['Danceability'] = danceability_col_cat api.send('data', api.Message(body=dataset.to_csv(header=None))) api.set_port_callback('file', on_file) You then configure the SAP HANA client operator, which is also simple: Last, you connect the operators and run the pipeline. Success! The data is in HANA now. You are off to a great start in this endeavor! Training the model Well, SAP Data Intelligence has an app for that. Without second guessing, you open up ML Scenario Manager (MLSM), and create a new scenario: In the scenario, you click on the + sign on the Pipelines section, choose the template HANA ML Training and give it a name. When created, you are taken to the Modeler UI. Here you do some configurations on the pipeline. In this case, you configure the HANA ML Training operator by selecting a connection, specifying the training and test datasets, select the task and algorithm, and inform the key and target columns. 
Additionally, hyperparameters could be provided to fine-tune the algorithm in JSON format. They can be found in the HANA ML documentation page and are algorithm specific. For example, these are the hyper parameters for a neural network. When ready, the configurations look like the following: That’s it! Going back to MLSM, you select that pipeline and click on execute. Since there were changes to the pipeline, MLSM asks you to create a new version: After that, you execute the pipeline once again. This time, MLSM takes you through some steps in which you can provide a description of the current execution and provide global pipeline configurations. In this case, the name of the artifact (model) that is going to be created. You give it a name and click Save. Then, the training begins: … and in a few moments, you have yourself a shiny new model. Excellent! Deploying an inference pipeline Once again in MLSM, you create a new pipeline, but this time you select the HANA ML Inference template. Just like before, you are then taken to the Modeler: This time you configure the HANA ML Inference operator to connect to a HANA system. That’s right, any HANA system! Not necessarily the one in which the model was trained. The configuration looks like the following: Back to MLSM, you select the pipeline and deploy it: Once again, you create a new version, since you modified the pipeline. The wizard takes you to the point where you need to specify a model that will be used. Here, you select the recently trained model and continue. At the end, you get a URL that you can use as a REST endpoint exposing your model. Consuming the REST endpoint Time to see what this thing can do! 
To simplify things, you do a regular curl by providing the features of a song you want to determine whether people will enjoy: curl --location --request POST 'https://<host>/app/pipeline-modeler/openapi/service/<deploy id>/v1/inference' \ --header 'Content-Type: application/json' \ --header 'If-Match: *' \ --header 'X-Requested-With: Fetch' \ --header 'Authorization: Basic 00000000000000000' \ --data-raw '{ "ID": [1], "GENRE": ["6"], "BEATSPERMINUTE": [117], "POPULARITY": [79], "LENGTH":[121] }' And the response for that one is: {"ID":{"0":1},"SCORE":{"0":"Fun"},"CONFIDENCE":{"0":0.5990922485}} Great! That is it! You did it! Now, you just have to let your boss know the job is done, enjoy the party and get the well deserved promotion! 🙂 Well done! Good One. Thank you. What algorithm is used here? how can we choose the algorithm in the HANA ML Training operator? Hi Joseph, In this case, we used a Hybrid Gradient Boosting Classifier. The algorithm can be selected in the training operator configuration (6th screenshot from the “Training the model” section above). Depending on the task selected, different algorithms are available. Best Regards! We have several schema on the HANA-database. When using the “SAP HANA Client” operator, where can I decide what schema to create my Spotify table in ? Hello Alessandro, What a helpful blog! I tried to implement the same graph in DI, but while inserting into HANA, it throws an error: "failed to create schema '\"DATAHUB\"': SQL Error 258 - insufficient privilege: Detailed info for this error can be found with guid" Where can I check my authroization or how should I fix it? Could you please suggest! Thanks, Indu Khurana.
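From a script, the same response can be unpacked programmatically. The HANA ML inference operator returns column-oriented JSON, where each column name maps row indexes to values; a small sketch using the sample response above:

```python
import json

# Sample response body from the deployed inference endpoint (as above).
raw = '{"ID":{"0":1},"SCORE":{"0":"Fun"},"CONFIDENCE":{"0":0.5990922485}}'

response = json.loads(raw)

# Column-oriented JSON: each key maps row indexes (as strings) to values.
score = response["SCORE"]["0"]
confidence = response["CONFIDENCE"]["0"]
print(score, round(confidence, 2))  # Fun 0.6
```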
https://blogs.sap.com/2020/01/08/dancing-with-data-intelligence/
Investors in CRISPR Therapeutics AG (Symbol: CRSP) saw new options become available today, for the April 24th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the CRSP options chain for the new April 24th contracts and identified the following call contract of particular interest.

The call contract at the $52.50 strike price has a current bid of $4.80. If an investor was to purchase shares of CRSP stock at the current price level of $51.67/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $52.50. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 10.90% if the stock gets called away at the April 24 $52.50 strike.

Considering that the $52.50 strike represents an approximate 2% premium to the current trading price of the stock, there is also the possibility that the covered call contract would expire worthless, in which case the investor would keep both their shares of stock and the premium collected. In that scenario, the premium would represent a 9.29% boost of extra return to the investor, or 67.87% annualized, which we refer to as the YieldBoost.

The implied volatility in the call contract example above is 89%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 252 trading day closing values as well as today's price of $51.67) to be 56%.
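The percentage figures can be reproduced from the stated prices; a quick sanity check of the article's own numbers:

```python
share_price = 51.67   # current CRSP price
strike = 52.50        # call strike
premium = 4.80        # current bid for the call

# YieldBoost: premium as a fraction of the share price,
# earned if the call expires worthless.
yield_boost = premium / share_price
print(f"{yield_boost:.2%}")  # 9.29%

# Total return if the stock is called away at the strike:
# price appreciation up to the strike plus the premium.
total_if_called = (strike - share_price + premium) / share_price
print(f"{total_if_called:.2%}")  # 10.90%
```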
https://www.breathinglabs.com/monitoring-feed/genetics/crsp-april-24th-options-begin-trading/
SoTimeCounter.3iv man page

Name
       SoTimeCounter — timed integer counter

Inherits from
       SoBase > SoFieldContainer > SoEngine > SoTimeCounter

Synopsis
       #include <Inventor/engines/SoTimeCounter.h>

       Inputs from class SoTimeCounter:
            SoSFTime    timeIn
            SoSFShort   min
            SoSFShort   max
            SoSFShort   step
            SoSFBool    on
            SoSFFloat   frequency
            SoMFFloat   duty
            SoSFShort   reset
            SoSFTrigger syncIn

       Outputs from class SoTimeCounter:
            (SoSFShort)   output
            (SoSFTrigger) syncOut

       Methods from class SoTimeCounter:
            SoTimeCounter()

       Methods from class SoEngine:
            static SoType     getClassTypeId()
            virtual int       getOutputs(SoEngineOutputList &list) const
            SoEngineOutput *  getOutput(const SbName &outputName) const
            SbBool            getOutputName(const SoEngineOutput *output, SbName &outputName) const
            SoEngine *        copy() const
            static SoEngine * getByName(const SbName &name)
            static int        getByName(const SbName &name, SoEngineList &list)

Description
       This engine is a counter that outputs numbers, starting at a minimum value, increasing by a step value, and ending with a number that does not exceed the maximum value. When the maximum number is reached, it starts counting from the beginning again.

       The difference between this engine and the SoCounter engine is that this engine also has a timeIn input, which allows the counting cycles to be timed. This engine counts automatically over time; it does not need to be triggered to go to the next step. By default, the timeIn input is connected to the realTime global field. It can, however, be connected to any time source.

       The frequency input field controls how many min-to-max cycles are performed per second. For example, a frequency value of 0.5 means that it will take 2 seconds to complete a single min-to-max cycle.

       The steps in the count cycle do not necessarily all have the same duration. Using the duty input field, you can arbitrarily split the time period of the count cycle between the steps. For example, if there are 5 steps in the cycle, a duty input of (1., 2., 2., 2., 1.)
will make the second, third, and fourth steps take twice as long as the first and last steps. At any time the counter can be reset to a specific value. If you set the reset input field to a value, the engine will continue counting from there. Note that the counter will always output numbers based on the min, max and step values, and setting the reset value does not affect the those input fields. If the reset value is not a legal counter value, the counter will still behave as though it is: If reset is greater than max, the counter is set to max. If reset is less than min, the counter is set to min. If reset is between step values, the counter is set to the lower step. Each time a counting cycle is started, the syncOut output is triggered. This output can be used to synchronize some other event with the counting cycle. Other events can also synchronize the counter by triggering the syncIn input. You can pause the engine, by setting the on input to FALSE, and it will stop updating the output field. When you turn off the pause, by setting on to TRUE, it will start counting again from where it left off. Inputs SoSFTime timeIn Running time. SoSFShort min Minimum value for the counter. SoSFShort max Maximum value for the counter. SoSFShort step Counter step value. SoSFBool on Counter pauses if this is set to FALSE. SoSFFloat frequency Number of min-to-max cycles per second. SoMFFloat duty Duty cycle values. SoSFShort reset Reset the counter to the specified value. SoSFTrigger syncIn Restart at the beginning of the cycle. Outputs (SoSFShort) output Counts min-to-max, in step increments. (SoSFTrigger) syncOut Triggers at cycle start. Methods SoTimeCounter() Constructor File Format/Defaults TimeCounter { min 0 max 1 step 1 on TRUE frequency 1 duty 1 timeIn <current time> syncIn reset 0 } See Also SoCounter, SoElapsedTime, SoEngineOutput
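The duty weights are relative, so each step's share of the cycle period is its weight divided by the sum of all weights. A quick illustration of the documented (1., 2., 2., 2., 1.) example at frequency 0.5, i.e. a 2-second cycle (this is an illustration of the behavior, not Inventor code):

```python
duty = [1.0, 2.0, 2.0, 2.0, 1.0]   # relative step weights
frequency = 0.5                     # cycles per second
period = 1.0 / frequency            # seconds per min-to-max cycle

# Each step lasts period * (its weight / total weight).
total = sum(duty)
durations = [period * w / total for w in duty]
print(durations)  # [0.25, 0.5, 0.5, 0.5, 0.25]
```

The middle three steps indeed take twice as long as the first and last, and the five durations sum to the 2-second cycle.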
https://www.mankier.com/3/SoTimeCounter.3iv
CC-MAIN-2017-47
refinedweb
620
55.03
Hello I'm using Unity Free 4.3.4 for iOS game. I'm having several plugins for my game: Admob, GameAnalytics, Facebookshare. Then I would like to add In App Purchase and it is Soomla. When I add Soomla to my existing project, the game crash when trying to purchase an item. So I try using the same Implementation and codes on an empty newly created project and it work just fine. I try deleting the whole plugin from my existing project and re-add but it still the same. There is one difference. When I build my existing project as xCode, I have to drag some of Soomla's library from Libraries folder to 'Link Binary' to make it runnable. But in my newly created project, I don't have to do anything about this libraries. Where should the problem comes from? Are there any conflicts between plugins? If so how solve it? Or is there any configuration of the plugin that may be wrong? How should I reset it or delete its cache? Thank you very much Hello You should contact the support team of those involved in the development of the plugin (moreover if you paid for it). Btw i did not find the plugins you talk about in the asset store. And yes there could be conflicts between plugins. Try to add the plugins one by one to your empty project to identify those in conflict. One more thing, you said it crashed, unity crashes? you have errors? bug splat? (crashes in editor and after build?) Hello Nerevar thanks for your reply The In App Purchase is here and the admob plugin is here. I believe that the problem comes from Unity side, project setting, or plugins conflict, because it works very well on empty project. I will try adding plugins one by one to see if there are any conflict. One thing to note is that I'm upgrading my game version to 1.1 and version 1.0 is working fine with all the plugins. About the crash, it happens when I test on device when I try purchasing an item (Soomla plugin's. 
Integrate Admob in iOS and windows phone 0 Answers Failed to load 'Assets/Plugins/x86_64/FlexWrapper.dll 1 Answer Unable to insert joystick in 3d game kit lite 0 Answers Assets\Plugins\UnityPurchasing\script\InventoryDemo.cs(4,45): error CS0246: The type or namespace name 'IStoreListener' could not be found (are you missing a using directive or an assembly reference?) 0 Answers com.google.android.gms.ads.MobileAds Issue With Firebase 1 Answer
https://answers.unity.com/questions/743519/a-plugin-crash-on-existing-project-but-works-well.html
CC-MAIN-2019-47
refinedweb
434
76.22