> Ashlesha Shintre wrote:
> > Hi,
> > I'm using the 2.6.14.6 tree and trying to get the kernel running on the
> > Encore M3 board.
> > The kernel crashes during the boot process at the early_initcall.

What is the exact output from the crash?

> > This function doesn't seem to be defined anywhere.

It's not a function, but a macro. The macro is used to annotate a function as being among the list of functions that are called at startup by the "initcall" mechanism: a big loop that sweeps over a symbol table of registered initialization functions and calls them. E.g.

    #include <linux/init.h>

    /* ... */

    int __init my_initialization_function(void)
    {
        printk(KERN_INFO "Hello, world\n");
        return 0;
    }

    early_initcall(my_initialization_function);

The __init tells the kernel build system that your function is not needed after initialization and its memory can be thrown away. The early_initcall arranges for the initialization call. "Early" means that it's in the first group of functions.

If you suspect your kernel is dying during the calling of the initcall functions, you can turn on initcall debugging. Add these parameters to your kernel command line:

    debug initcall_debug

Hope this helps.
http://www.linux-mips.org/archives/linux-mips/2006-09/msg00009.html
Kaboose Queue (kabqueue)

The Kaboose Queue system is designed to handle asynchronous generic tasks (or messages). The main function of this system is to take the load off the request cycle. We use this system to send out e-mail, perform intensive image or video tasks, asynchronous network operations, etc.

We had a few important things in mind when designing this system:

- Performance
- Scalability
- Reliability

The first item is obvious. The system needs to be fast and relatively resource friendly. For this reason, we chose the Starling queue system from Twitter.

The second item, scalability, means that the system should:

- Be distributed. Many machines submitting tasks, and many machines processing these tasks.
- Handle a large number of requests. Each photo upload spawns resize operations for frequently used sizes. Each resize spawns off additional S3 uploads, for example.
- Support task priorities.

The third item requires the system to handle failures, retry common errors, and notify us in case of major failures. Also, the system needs to survive the server dying and coming back, and temporary network errors.

With the help of Starling, and our previous experience with a database-backed message queue system, I think we've achieved our goals. We look forward to the community's feedback.

Requirements

sudo gem install starling
sudo gem install daemons

Installation

script/plugin install

Example usage

First, you will need to create a processor file (foo.rb) in the RAILS_ROOT/app/processors folder. It should be structured like so:

class FooProcessor < Kaboose::Processor
  processes :some_model

  def process
    some_model.some_method
  end
end

Each processor class must implement a process method that defines the action that needs to be processed by the queue. This method has access to the @task instance variable, which is an instance of Kaboose::Task. The processes macro creates an accessor method as a shortcut for accessing ActiveRecord models specified by the model_id option in the task. See Kaboose::Processor's self.processes for more info.

Configuration

You will need a kqueue.yml file in your app's config folder to specify the address and namespace of the system, for example:

address: 127.0.0.1:22122
namespace: some_namespace

Running the Kaboose Queue system

Just run ./script/queue_processor to get a list of options. If you are using monit, here's what worked for us:

check process queue-processor with pidfile /path/to/queue_processor.pid
  group qtp
  start program = "PATH/queue_processor start -d -e production -c CWD -u USER -g GROUP"
  stop program = "PATH/queue_processor stop -d -e production -c CWD"
http://code.google.com/p/kabqueue/
Exercise 6 has us working with more structs. This time we create a dynamically allocated array of structures. This sounds interesting but is not too difficult once you start putting code to compiler. Essentially, you create your array, declare the variables that were mentioned in the text, use the "new" keyword to allocate numDonors instances of our struct, then loop through the patrons and their amounts. See my source below for a clearer explanation: 6. Put together a program that keeps track of monetary contributions to the Society for the Preservation of Rightful Influence. It should ask the user to enter the number of contributors and then solicit the user to enter the name and contribution of each contributor. The information should be stored in a dynamically allocated array of structures. Each structure should have two members: a character array (or else a string object) to store the name and a double member to hold the amount of the contribution. After reading all the data, the program should display the names and amounts donated for all donors who contributed $10,000 or more. This list should be headed by the label Grand Patrons. After that, the program should list the remaining donors. That list should be headed Patrons. If there are no donors in one of the categories, the program should print the word "none." Aside from displaying two categories, the program need do no sorting.
#include <iostream>
#include <string>
using namespace std;

struct contrib {
    string name;
    double amount;
};

int main()
{
    int numDonors = 0;
    int patrons = 0;
    int grandPatrons = 0;

    cout << "Society for the Preservation of Rightful Influence" << "\n\n";
    cout << "Enter number of contributors: ";
    cin >> numDonors;
    cout << "\n";

    contrib *society = new contrib[numDonors];

    // Gather names and amounts
    for (int i = 0; i < numDonors; i++)
    {
        cout << "Enter the name of the contributor: ";
        cin >> ws;                      // skip the leftover newline
        getline(cin, society[i].name);  // names may contain spaces
        cout << "Enter the contribution amount: ";
        cin >> society[i].amount;
    }

    // Count each category
    for (int i = 0; i < numDonors; i++)
    {
        if (society[i].amount >= 10000)
            grandPatrons++;
        else
            patrons++;
    }

    // Display donors of $10,000 or more
    cout << "\nGrand Patrons\n";
    if (grandPatrons == 0)
        cout << "none\n";
    for (int i = 0; i < numDonors; i++)
        if (society[i].amount >= 10000)
            cout << society[i].name << ": $" << society[i].amount << "\n";

    // Display the remaining donors
    cout << "\nPatrons\n";
    if (patrons == 0)
        cout << "none\n";
    for (int i = 0; i < numDonors; i++)
        if (society[i].amount < 10000)
            cout << society[i].name << ": $" << society[i].amount << "\n";

    // free memory
    delete [] society;
    return 0;
}
https://rundata.wordpress.com/2012/12/11/c-primer-chapter-6-exercise-6/
SANSA RDF

Description

SANSA RDF is a library to read RDF files into Spark or Flink. It allows files to reside in HDFS as well as in a local file system and distributes them across Spark RDDs/Datasets or Flink DataSets of triples. We name such an RDD/DataSet a main dataset. The main dataset is based on an RDD/DataSet data structure, which is a basic building block of the Spark/Flink framework. RDDs/DataSets are in-memory collections of records that can be operated on in parallel on large clusters.

Usage

The following Scala code shows how to read an RDF file in N-Triples syntax (be it a local file or a file residing in HDFS) into a Spark RDD:

import net.sansa_stack.rdf.spark.io._
import org.apache.jena.riot.Lang

val spark: SparkSession = ...

val lang = Lang.NTRIPLES
val triples = spark.rdf(lang)(path)

triples.take(5).foreach(println(_))

N-Triples loading options

NTripleReader.load(...) loads N-Triples data from a file or directory into an RDD. The path can also contain multiple paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5],/another/dir,/a/specific/file"

Handling of errors

By default, loading stops once a parse error occurs, i.e. a net.sansa_stack.rdf.spark.riot.RiotException generated by the underlying parser will be thrown. The following options exist:

- STOP: the whole data loading process will be stopped and a net.sansa_stack.rdf.spark.riot.RiotException will be thrown
- SKIP: the line will be skipped but the data loading process will continue; an error message will be logged

Handling of warnings

If the additional checking of RDF terms is enabled, warnings can occur during parsing. For example, a wrong lexical form of a literal w.r.t. its datatype will lead to a warning. The following can be done with those warnings:

- IGNORE: the warning will just be logged to the configured logger
- STOP: similar to the error handling mode, the whole data loading process will be stopped and a net.sansa_stack.rdf.spark.riot.RiotException will be thrown
- SKIP: similar to the error handling mode, the line will be skipped but the data loading process will continue; in addition, an error message will be logged

Checking of RDF terms

Set whether to perform checking of N-Triples - defaults to no checking. Checking adds warnings over and above basic syntax errors. This can also be used to turn warnings into exceptions if the option stopOnWarnings is set to STOP or SKIP.

- IRIs: whether IRIs conform to all the rules of the IRI scheme
- Literals: whether the lexical form conforms to the rules for the datatype
- Triples: check slots have a valid kind of RDF term (parsers usually make this a syntax error anyway)

See also the optional errorLog argument to control the output. The default is to log.

An overview is given in the FAQ section of the SANSA project page. Further documentation about the builder objects can also be found on the ScalaDoc page.

How to Contribute

We always welcome new contributors to the project! Please see our contribution guide for more details on how to get started contributing to SANSA.
https://index.scala-lang.org/sansa-stack/sansa-rdf/sansa-rdf-common/0.6.0?target=_2.11
Hey there,

I am trying to plot multiple Indicators on a heatmap, and produce a table similar to the following:

I tried to plot a single Indicator on a heatmap, but couldn't place it at the desired spot on the plot. How can I plot them in the middle of the boxes? Placing one on the "Morning"/"Monday" box, for instance:

import plotly.express as px
import plotly.graph_objects as go

data = [[1, 25, 30, 50, 1],
        [20, 1, 60, 80, 30],
        [30, 60, 1, 5, 20]]

fig = px.imshow(
    data,
    labels=dict(x="Day of Week", y="Time of Day", color="Productivity"),
    x=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'],
    y=['Morning', 'Afternoon', 'Evening']
)

fig.add_trace(
    go.Indicator(
        mode = "number+delta",
        value = 492,
        delta = {"reference": 512, "valueformat": ".0f"},
        title = {"text": "Users online"},
        domain = {'y': [0, 1], 'x': [0.25, 0.75]})
)

fig.show()

P.S. Just one example will be enough because I will place indicators iteratively.
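Since go.Indicator is positioned through paper-coordinate domain fractions, one way to target a cell is to compute the domain from its row/column index. This is a sketch under two assumptions: the heatmap's axes span the full [0, 1] x [0, 1] paper area (margins will shift things slightly in practice), and rows are drawn top-down, as px.imshow does by default. The helper name cell_domain is mine, not a Plotly API.

```python
def cell_domain(row, col, n_rows, n_cols):
    """Return a paper-coordinate domain dict for one heatmap cell.

    Assumes the heatmap axes cover the whole paper area and that
    row 0 is drawn at the top (px.imshow default orientation).
    """
    # Columns divide the x axis evenly, left to right
    x0, x1 = col / n_cols, (col + 1) / n_cols
    # Paper y runs bottom-up while image rows run top-down, so flip the row
    y0, y1 = (n_rows - 1 - row) / n_rows, (n_rows - row) / n_rows
    return {'x': [x0, x1], 'y': [y0, y1]}

# Domain of the "Morning" / "Monday" cell (row 0, column 0 of a 3x5 grid)
print(cell_domain(0, 0, 3, 5))
```

You could then pass domain=cell_domain(0, 0, 3, 5) to go.Indicator inside your loop; if the figure has sizeable margins, reading the actual axis domains from fig.layout and rescaling these fractions would be more accurate.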
https://community.plotly.com/t/plotting-indicators-on-a-heatmap/58830
In our article on styling Angular components we learned how styles are applied to our component when defining them in different ways. We mentioned that all our component styles are appended to the document head, but would usually end up in the component's template in case we use native Shadow DOM. This article explains not only how we can tell Angular to use native Shadow DOM, but also what the other view encapsulation solutions are that the framework comes with, and why they exist.

Understanding Shadow DOM

Before we get started and take a look at how to use Angular's different view encapsulation types, we need to understand what Shadow DOM actually is, what makes it so awesome and why we want to use it. We won't have a super deep dive here, since there are a lot of great resources out there already. If you want to start from scratch and learn Shadow DOM 101, which you really should in case this is new to you, Eric Bidelman has written one of the best guides over at html5rocks.com.

In one sentence, Shadow DOM is part of the Web Components standard and enables DOM tree and style encapsulation. DOM tree and style encapsulation? What does that even mean? Well, it basically means that Shadow DOM allows us to hide DOM logic behind other elements. In addition to that, it enables us to apply scoped styles to elements without them bleeding out to the outer world. Why is that great? We can finally build components that expose a single (custom) element with hidden DOM logic under the hood, and styles that only apply to that element - a web component.

Just think of an <input type="date"> element. Isn't it nice that we can just use a single tag and the browser renders a whole date picker for us?
Guess with what you can achieve that…

Shadow DOM in Angular

Now that we have an idea of what Shadow DOM is (and trust me, there is so much more to cover), we can take a look at how Angular actually uses it. As we know, in Angular we build components. A component is a controller class with a template and styles that belong to it. Those components can be shared across applications if they are general enough. That means Angular already embraces the idea of building applications in components and making them reusable. However, components in Angular are not web components per se, but they take advantage of them as mentioned earlier.

Whenever we create a component, Angular puts its template into a shadowRoot, which is the Shadow DOM of that particular component. Doing that, we get DOM tree and style encapsulation for free, right? But what if we don't have Shadow DOM in the browser? Does that mean we can't use Angular in those environments? We can. In fact, Angular doesn't use native Shadow DOM by default, it uses an emulation. To be technically correct, it also doesn't create a shadowRoot for our components in case no native Shadow DOM is used. The main reason for that is that most browsers simply don't support Shadow DOM yet, but we should still be able to use the framework. Even better, we can easily tell Angular to use native Shadow DOM if we want. So how is that implemented and what do we need to do?

View Encapsulation Types

Angular comes with view encapsulation built in, which enables us to use Shadow DOM or even emulate it. There are three view encapsulation types:

- ViewEncapsulation.None - No Shadow DOM at all. Therefore, also no style encapsulation.
- ViewEncapsulation.Emulated - No Shadow DOM, but style encapsulation emulation.
- ViewEncapsulation.Native - Native Shadow DOM with all its goodness.

You might wonder why we have three types. Why not just one for native Shadow DOM support and another one that doesn't use Shadow DOM?
Things become clearer when we explore how they affect the way Angular applies styles to components. Let's try them out one by one.

ViewEncapsulation.None

Angular doesn't use Shadow DOM at all. Styles applied to our component are written to the document head. We talked about that in more detail in styling Angular components, but to make a quick recap, here's a zippy component with styles like this (note that we set the encapsulation property in our @Component decorator):

import {ViewEncapsulation} from '@angular/core';

@Component({
  moduleId: module.id,
  selector: 'my-zippy',
  templateUrl: 'my-zippy.component.html',
  styles: [`
    .zippy {
      background: green;
    }
  `],
  encapsulation: ViewEncapsulation.None
})
class ZippyComponent {
  @Input() title: string;
}

And a template like this:

<div class="zippy">
  <div (click)="toggle()" class="zippy__title">
  </div>
  <div [hidden]="!visible" class="zippy__content">
    <ng-content></ng-content>
  </div>
</div>

Will make Angular create a DOM like this:

<!DOCTYPE html>
<html>
  <head>
    <style>
      .zippy {
        background: green;
      }
    </style>
  </head>
  <body>
    <my-zippy>
      <div class="zippy">
        <div (click)="toggle()" class="zippy__title">
          ▾ Details
        </div>
        <div [hidden]="!visible" class="zippy__content">
          <script type="ng/contentStart"></script>
          ...
          <script type="ng/contentEnd"></script>
        </div>
      </div>
    </my-zippy>
  </body>
</html>

Again, this is due to the fact that there's no Shadow DOM. This also means that all the styles apply to the entire document. Or in other words, a component could overwrite styles from another component because its styles are applied to the document head later. That's why this is the unscoped strategy. If there was Shadow DOM, Angular could just write all the styles into the shadowRoot, which would enable style encapsulation. Also note that the <ng-content> tag has been replaced with <script> tags that basically act as markers to emulate content insertion points.

ViewEncapsulation.Emulated

This view encapsulation is used by default.
ViewEncapsulation.Emulated emulates style encapsulation, even if no Shadow DOM is available. This is a very powerful feature in case you want to use a third-party component that comes with styles that might affect your application. What happens to our components, and especially to the styles, when this view encapsulation is used? Well, let's first check if the styles are still written to the document head. Here's what the head looks like with the exact same component but a different strategy:

<head>
  <style>
    .zippy[_ngcontent-1] {
      background: green;
    }
  </style>
</head>

Looks like styles are still written to the document head. But wait, what's that? Instead of the simple .zippy selector that we used, Angular creates a .zippy[_ngcontent-1] selector. So it seems like Angular rewrote our component's styles. Let's see what the component's template looks like:

<my-zippy _ngcontent-0 _nghost-1>
  <div class="zippy" _ngcontent-1>
    <div (click)="toggle()" class="zippy__title" _ngcontent-1>
      ▾ Details
    </div>
    <div [hidden]="!visible" class="zippy__content" _ngcontent-1>
      <script type="ng/contentStart"></script>
      ...
      <script type="ng/contentEnd"></script>
    </div>
  </div>
</my-zippy>

Ha! Angular added some attributes to our component's template as well! We see the _ngcontent-1 attribute which is also used in our rewritten CSS, but we also have _ngcontent-0 and _nghost-1. So what the hell is going on there?

Actually it's quite simple. We want scoped styles without Shadow DOM, right? And that's exactly what happens. Since there's no Shadow DOM, Angular has to write the styles to the head of the document. Okay, that's nothing new, we know that from the unscoped strategy. But in order to enable scoped styles, Angular has to make sure that the component's style selectors only match this particular component and nothing else on the page. That's why it extends the CSS selectors, so they have a higher specificity and don't collide with other selectors defined before at the same time. And of course, to make those selectors actually match, the elements in the template need to be extended as well. That's why we see all those _ngcontent-* and _nghost-* attributes.
Okay cool, now we know how Angular emulates scoped styles, but still, why are those attributes called _ngcontent-* and _nghost-*, and what does the number in the attributes mean? If we take a closer look at the generated template, we can actually see a pattern. The number in the attribute matches the Shadow DOM, or content insertion point, level. The app that we bootstrap is a component that already uses Shadow DOM (emulation) and therefore has a content insertion point. That means our root component is already a host element. However, it doesn't get any additional attribute, because Angular is not in charge of rewriting it. We've written it (maybe) as <app></app> into our index.html and that's it.

Our zippy component is also a host element, which is why it gets the _nghost-1 attribute. Why not _nghost-0? Well, that's the one our root component would get. At the same time, the zippy component also gets a _ngcontent-0 attribute. That's because it is part of the very first content insertion point level in the application, which is the one of our root component. We can confirm that pattern by taking a look at what's inside the zippy element. Every direct child element inside the zippy element is part of the next content insertion point level, which is why they get the _ngcontent-1 attribute. And so on and so forth. Not sure what you think, but in my opinion, this is a very smart approach.

ViewEncapsulation.Native

Last but not least, we have the native Shadow DOM view encapsulation. This one is super simple to understand since it basically just makes Angular use native Shadow DOM. We can activate it the same way we did with the other types. Here's what that looks like:

@Component({
  moduleId: module.id,
  templateUrl: 'my-zippy.component.html',
  styles: [`
    .zippy {
      background: green;
    }
  `],
  encapsulation: ViewEncapsulation.Native
})
...

Okay, that was easy. If we run our code in the browser, we see that no styles are written to the document head anymore.
However, styles now end up in the component's template inside the shadow root. Here's what that looks like:

<my-zippy>
  #shadow-root
  | <style>
  |   .zippy {
  |     background: green;
  |   }
  | </style>
  | <div class="zippy">
  |   <div (click)="toggle()" class="zippy__title">
  |     ▾ Details
  |   </div>
  |   <div [hidden]="!visible" class="zippy__content">
  |     <content></content>
  |   </div>
  | </div>
  "This is some content"
</my-zippy>

In order to get an output like this, we need to tell our browser dev tools to display Shadow DOM when inspecting elements. No weird attributes anymore. Instead we get a nice shadow root, and we can see very nicely how the styles are written into it. From here on, all the rules that apply to plain Shadow DOM apply to our Angular component as well.
https://blog.thoughtram.io/angular/2015/06/29/shadow-dom-strategies-in-angular2.html
For some work that I am doing recently, I need the following operation to be done.

def myfunc(a, b):
    return a*b # some operation here

a = [1,2,3]
b = [2,4,6,8]
print [[myfunc(i, j) for i in a] for j in b]

Can this be done with numpy arrays a and b?

---

You can use numpy broadcasting:

a = np.array([1,2,3])
b = np.array([2,4,6,8])

a = a[:, None]
b = b[None, :]

a * np.log(a/b)

Adding a new axis to a and b (as second and first axis respectively) will make a's shape (3, 1) and b's shape (1, 4). Then, a/b is a 2D (3, 4) array where the i-th row is a[i]/b:

>>> a/b
array([[ 0.5       ,  0.25      ,  0.16666667,  0.125     ],
       [ 1.        ,  0.5       ,  0.33333333,  0.25      ],
       [ 1.5       ,  0.75      ,  0.5       ,  0.375     ]])

Then you can take the pointwise log and multiply by a. Since np.log(a/b) is (3, 4) and a is (3, 1), a will again be broadcast to (3, 4). A small subtlety is that, due to the way broadcasting happens, adding the second axis to b is not mandatory. I prefer writing it out explicitly nevertheless, for clarity.
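To convince yourself the broadcasting expression matches the element-wise loops, here is a quick check using the a * log(a/b) operation from the answer. Note one detail: the broadcast result is indexed [i, j] = f(a[i], b[j]), which is the transpose of the question's [[... for i in a] for j in b] nesting.

```python
import numpy as np

a = np.array([1, 2, 3], dtype=float)
b = np.array([2, 4, 6, 8], dtype=float)

# Broadcast version: (3, 1) op (1, 4) -> (3, 4)
broadcast = a[:, None] * np.log(a[:, None] / b[None, :])

# Loop version, one scalar at a time; outer loop over a so that
# loops[i, j] == a[i] * log(a[i] / b[j]), matching the broadcast layout
loops = np.array([[x * np.log(x / y) for y in b] for x in a])

print(np.allclose(broadcast, loops))
```

The broadcast form avoids the Python-level double loop entirely, which is where the speed-up comes from on large inputs.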
https://codedump.io/share/eI9VK5wqECHm/1/equivalent-using-numpy
Sortable Drag and Drop Discussion

Just a heads up: in your Gemfile, bootstrap 4 beta no longer uses tether for tooltips; it now uses popper.js, which is a dependency of the bootstrap 4 gem.

Yeah! Popper is nice to have instead. The screencast was using an old version of my Rails template which has been updated already, I just hadn't done a git pull recently on that machine. Should update that in the repo.

I'd DEFINITELY be interested in the Vue.js version - I've got an app in mind that it would be very helpful for...

This approach for updating reordered records will produce O(n) queries. If you use Postgres, it is possible to do the same operation with a single update:... The only disadvantage here is that ActiveRecord can't do queries like this (correct me if I'm wrong), so it will take direct SQL execution.

Here's another implementation that avoids excess queries and lets you use the standard update controller action. It uses the insert_at method that acts_as_list provides to move the item within the list:...

# app/models/question.rb
def new_position=(position)
  insert_at(position.to_i)
end

Make sure to add :new_position to permitted params in the questions controller.

# coffeescript
$(document).on 'turbolinks:load', ->
  $('.sortable').sortable
    axis: 'y'
    handle: '.handle'
    update: (e, ui) ->
      data = new FormData
      data.append "question[new_position]", ui.item.index() + 1
      Rails.ajax
        url: "/questions/#{ui.item.data('question-id')}"
        type: "PATCH"
        data: data

(1) Instead of Rails.ajax I changed to $.ajax.
(2) Instead of

//= require jquery-ui/widget
//= require jquery-ui/sortable

I needed to use

//= require jquery-ui/widgets/sortable

(3) My implementation required me to skip the authenticity token on the sort method with:

skip_before_action :verify_authenticity_token, only: [:sort]

How does the code in this tutorial work without skipping the authenticity token verification? Thanks, as always, for such wonderful tutorials!

Some answers to your questions (not exactly in order):

1. Rails.ajax is the new replacement for jQuery's ajax ($.ajax) now that Rails no longer comes with jQuery. Rails.ajax is also smart enough to include the authenticity token in every request, which is why you don't have to do #3.

3. For your code, you should include the authenticity token in the request instead of disabling the check, as you'll open yourself up to security vulnerabilities by turning it off.

2. jQuery-ui may have moved sortable to the widgets folder since I recorded the episode. Things are always changing, so you'll often run into little changes like that.

The same was happening to me when using Rails.ajax(). I found this online and now it works:

$("#sections").sortable({
  update: function(e, ui) {
    Rails.ajax({
      dataType: 'script',
      url: $(this).data("url"),
      type: "PATCH",
      beforeSend: function() { return true },
      data: $(this).sortable('serialize'),
    });
  }
});

Hope you enjoy it!

If you know you are not going to respond to our questions, why do you keep the comment section on the blog?

Hi Chris, this is such a useful video! With a little help from your friendly neighborhood GoRails Slack channel I got it (mostly) working, but for some reason it's not serializing correctly. According to my server log, it is sending PATCH data to the database:

Started PATCH "/tasks/sort" for 127.0.0.1 at 2018-10-20 11:16:06 -0700
Processing by TasksController#sort as */*
  Parameters: {"task"=>["2", "8"]}
  User Load (0.2ms)  SELECT "users".* FROM "users" WHERE "users"."id" = ? ORDER BY "users"."id" ASC LIMIT ?  [["id", 1], ["LIMIT", 1]]
  ↳ /Users/lizbayardelle/.rvm/gems/ruby-2.5.0/gems/activerecord-5.2.1/lib/active_record/log_subscriber.rb:98
  Task Update All (0.2ms)  UPDATE "tasks" SET "position" = 1 WHERE "tasks"."id" = ?  [["id", 2]]
  ↳ app/controllers/tasks_controller.rb:27
  Task Update All (0.1ms)  UPDATE "tasks" SET "position" = 2 WHERE "tasks"."id" = ?  [["id", 8]]
  ↳ app/controllers/tasks_controller.rb:27
Completed 200 OK in 3ms (ActiveRecord: 0.5ms)

But for some reason it always stays in the original order on refresh. Any hints as to what could be going wrong? The only difference between my app and the example app is that I had to have .sortable act on a class (.taskWrapper, not #taskWrapper) because I have the tasks in a partial that's being rendered in more than one place on the page. Could that be affecting it?

How do I make it work with Mongo? I can't make it work with serialize: I see the only params sent to my controller are {"controller"=>"tasks", "action"=>"sort"}

After following along to the end of the video, I still couldn't get my records (feature_package) to update. The params getting passed were Parameters: {"packaged_feature"=>["321", "1"], "id"=>"sort"}; apparently because of the PATCH request it was looking for a specific record id automatically. Changing this to a POST in my ajax request and my routes file worked for me.

Man, that was super-tight. Thank you so much. Really awesome!

Hi Sir, good day! Can we implement this using Rails 6 webpack?

Hey Jaymarc, run yarn add jquery-ui, then in your javascript/packs/application.js:

require("jquery-ui/ui/widget")
require("jquery-ui/ui/widgets/sortable")

$(document).on("turbolinks:load", () => {
  $("#questions").sortable({
    update: function(e, ui) {
      Rails.ajax({
        url: $(this).data("url"),
        type: "PATCH",
        data: $(this).sortable('serialize'),
      });
    }
  });
})

...and the rest is the same as the tutorial, minus the jquery-ui gem. hth

I get Uncaught TypeError: $(...).sortable is not a function after yarn add jquery-ui.

application.js:

require('jquery')
require("@rails/ujs").start()
require("turbolinks").start()
require("@rails/activestorage").start()
require("channels")
require('owl.carousel')
require('isotope-layout')
require('packs/redactor.min')
require('packs/filemanager.min')
require('packs/imagemanager.min')
require("jquery-ui/ui/widget")
require("jquery-ui/ui/widgets/sortable")

var jQueryBridget = require('jquery-bridget');
var Isotope = require('isotope-layout');
jQueryBridget('isotope', Isotope, $);

$(document).on("turbolinks:load", () => {
  $("#document_list").sortable({
    update: function(e, ui) {
      Rails.ajax({
        url: $(this).data('url'),
        type: "PATCH",
        data: $(this).sortable('serialize'),
      });
    }
  });
})

environment.js:

const { environment } = require('@rails/webpacker')
const webpack = require('webpack')

environment.plugins.prepend('Provide',
  new webpack.ProvidePlugin({
    $: 'jquery/src/jquery',
    jQuery: 'jquery/src/jquery',
    jquery: 'jquery/src/jquery'
  })
)

module.exports = environment

This worked for me: start by removing jquery-ui: yarn remove jquery-ui
https://gorails.com/forum/sortable-drag-and-drop-example-gorails
Multifiles

A multifile is a file that contains a set of files, similar to a .zip or .rar archive file. They are meant for containing multiple resources such as models, textures, sounds, shaders, and so on, and Panda can load them directly from the multifiles without having to unpack them first. Many games employ a similar concept of "data" file, such as .upk for Unreal Engine and .pak for Quake Engine.

The multify program

The multify console program creates such files. You can get information about the command-line parameters by running multify with the -h option. This is how the program describes itself:

Usage: multify -[c|r|u|t|x] -f <multifile_name> [options] <subfile_name> ...

multify is used to store and extract files from a Panda Multifile. This is similar to a tar or zip file in that it is an archive file that contains a number of subfiles that may later be extracted. Panda's VirtualFileSystem is capable of mounting Multifiles for direct access to the subfiles contained within, without having to extract them out to independent files first. The command-line options for multify are designed to be similar to those for tar, the traditional Unix archiver utility.

Read Assets

If you want to prepare to read assets from a Multifile directly, you can "mount" it into the virtual file system:

from panda3d.core import VirtualFileSystem
from panda3d.core import Multifile
from panda3d.core import Filename

vfs = VirtualFileSystem.getGlobalPtr()
vfs.mount(Filename("foo.mf"), ".", VirtualFileSystem.MFReadOnly)

If you want to read assets, you can mount a whole directory structure from a webserver.
If your webserver hosts:

Put this in your config.prc:

vfs-mount-url /mydir
model-path /mydir

Or, equivalently, write this code at startup:

vfs.mount(VirtualFileMountHTTP(''), '/mydir', 0)
getModelPath().appendDirectory('/mydir')

and then you can load models like this in your Python code:

model = loader.loadModel('models/myfile.bam')
texture = loader.loadTexture('maps/mytexture.png')

If you want to prepare for reading and writing assets to a Multifile, do the following:

if vfs.mount(Filename("foo.mf"), ".", VirtualFileSystem.MFReadOnly):
    print('mounted')

If you want to prepare for reading and writing assets to a 'subdirectory' Multifile, do the following. Note that "mysys" must always be literally written in any Python code, e.g. "mysys/memfdir/mfbar2.txt":

if vfs.mount(Filename("foo.mf"), "mysys", VirtualFileSystem.MFReadOnly):
    print('mounted')

If you are having problems loading from multifiles, you can list the complete contents of your .mf file with a command like:

multify -tvf mymultifile.mf

Doing a sanity inspection like this can be useful to ensure that your assets are in the right place within the multifile.

Multifile objects

The Multifile class is designed for opening, reading and writing multifiles. You can open a multifile by creating an instance of the class and calling the openRead() method:

from panda3d.core import Multifile

mf = Multifile()
mf.openRead("foo.mf")

The openRead() method opens the multifile as read-only. If you want to make changes to it and write it back to disk, you will need to use the openReadWrite() method. Also, there exists openWrite() to create a new multifile.

If you have made important structural changes to a Multifile, it is recommended to rewrite the multifile using the repack() method. (This won't work if you've opened it using openRead().) If you are uncertain about whether it has become suboptimal, you can call needsRepack(), which returns True if the Multifile is suboptimal and should be repacked.
To write it back to disk, you can use the flush() method, which flushes the changes you've made to the multifile back to disk, or the close() method if you're done with the file.

You can also mount Multifile objects into the VirtualFileSystem without writing them to disk first. Here's an example of how to mount them:

mf = Multifile()
# ... now do something with mf
vfs = VirtualFileSystem.getGlobalPtr()
vfs.mount(mf, ".", VirtualFileSystem.MFReadOnly)

Subfiles

Files that are added to a multifile are called subfiles. You can add existing files to a multifile object using the addSubfile() method. This method takes three arguments: the target filename, the existing source file and the compression level (1-9). There is also updateSubfile(), which does the same thing but, if the file already exists, only updates it if the content is different. There are several other methods which operate on subfiles, which you can find on the Multifile page in the API Reference.

Here are a few examples of working with subfiles:

from panda3d.core import VirtualFileSystem
from panda3d.core import Multifile
from panda3d.core import Filename

m = Multifile()

# Add an existing real os file with compression level 6
m.openReadWrite("foo.mf")
m.addSubfile("bar.txt", Filename("/tmp/bar.txt"), 6)
m.flush()

# Destroy the contents of the multifile
# Add an existing real os file to be the first multifile
m.openWrite("foo.mf")
m.addSubfile("bar.txt", Filename("/tmp/bar.txt"), 6)
m.flush()

# Permanently re-order in ascending order the
# directories and files in the multifile
m.openReadWrite("foo.mf")
m.repack()
m.flush()

# Open a multifile and replace the contents of the multifile
# with new contents
m = Multifile()
m.openReadWrite("foo.mf")
m.updateSubfile("bar.txt", Filename("/tmp/bar2.txt"), 9)
m.flush()

# Open a multifile and extract all files smaller than 3kb
# New real os files are created with the contents of the multifile data
m = Multifile()
m.openRead("foo.mf")
for i in range(m.getNumSubfiles()):
    if m.getSubfileLength(i) < 3 * 1024:
        m.extractSubfile(i, Filename("/tmp/" + m.getSubfileName(i)))

# Find, print and remove a file named bar.txt
barIdx = m.findSubfile("bar.txt")
if barIdx != -1:  # It returns -1 if it doesn't exist
    print(m.readSubfile(barIdx))
    m.removeSubfile(barIdx)
    m.flush()
m.close()

If the foo.mf file contains a bar.egg.pz file, you can load the egg and use it like any other model:

nodepath = loader.loadModel("foo/bar")

Stream-Based

Multifile algorithms are stream-based and not random-based. In a running game, if a message appears in the output saying something similar to "seek error for offset", then a file in the multifile is being accessed by a random-based method. For multifiles and fonts, an example of a random-based file is an .rgb file. An alternative to using an .rgb file is to use a .ttf file instead. An example follows:

# models is the original directory
# models.mf is the new target multifile
multify -c -f models.mf -v models

In the game, load the .ttf file from the multifile models.mf:

font = loader.loadFont("models/arial.ttf")

Encryption

Multifiles can also encrypt your files with a password. To do so, you need to set the encryption flag and password using the setEncryptionFlag() and setEncryptionPassword() methods, before adding, extracting or reading multifiles.

At the OS prompt, to create a password-protected multifile and print out the contents, do the following:
# models is the original directory
# models.mf is the new target multifile
multify -c -f models.mf -ep "mypass" -v models

This code creates a multifile and adds an encrypted file to it:

m = Multifile()
m.openReadWrite("foo.mf")
m.setEncryptionFlag(True)
m.setEncryptionPassword("foobar")
# Add a new file to the multifile
m.addSubfile("bar.txt", Filename("/tmp/bar.txt"), 1)
m.flush()
m.close()

You can read encrypted multifiles the same way:

m = Multifile()
m.openRead("foo.mf")
m.setEncryptionFlag(True)
m.setEncryptionPassword("foobar")
# Prints the contents of the multifile
print(m.readSubfile("bar.txt"))

At the OS prompt, to see the contents of a password-protected multifile, perform:

multify -tvf models.mf -p "mypass"

You can test the reading in of a password-protected multifile, followed by the mounting of the file, using the following code:

from panda3d.core import Multifile

mf = Multifile()
mf.openRead("models.mf")
mf.setEncryptionFlag(True)
mf.setEncryptionPassword("mypass")

from panda3d.core import VirtualFileSystem

vfs = VirtualFileSystem.getGlobalPtr()
if vfs.mount(mf, ".", VirtualFileSystem.MFReadOnly):
    print('mounted')

When running the application, the following should be seen:

mounted

You can check whether a certain subfile is encrypted using the isSubfileEncrypted() method, which takes the subfile index as a parameter. It is possible to have a multifile where different subfiles have different encryption, but you will not be able to mount it with the VirtualFileSystem or use it with the multify tool.

To mount an encrypted file using the virtual file system, pass the password as a parameter to the mount() method:

from panda3d.core import VirtualFileSystem, Filename

vfs = VirtualFileSystem.getGlobalPtr()
vfs.mount(Filename("foo.mf"), ".", vfs.MFReadOnly, "foobar")

To use encryption with the multify tool, run it with the -e option, which will prompt for a password on the command line.
Alternatively, if you also specify the -p "password" option, you can specify it in the command instead of typing it at the prompt.
https://docs.panda3d.org/1.10/python/programming/advanced-loading/multifiles
Emgu Integration with .NET: Integrating Images

OpenCV is a very popular library written in the native C and C++ programming languages for processing images. Programmers familiar with the C# and Visual Basic .NET languages often face challenges in using OpenCV. Luckily for C# developers, a cross-platform .NET wrapper to the OpenCV image processing library is available; it's named Emgu CV. This wrapper allows C# programs to communicate with the native APIs of the underlying library, which was written with unmanaged code. Emgu CV functions can be called from languages such as C#, VB, VC++, Python, and so forth. This wrapper can be compiled in Mono and run on Windows, Linux, Mac OS X, iPhone, iPad and Android devices.

Advantages of Using Emgu CV

Emgu CV libraries are cross-language compatible, and the installation comes with example code for developers to reuse. An online discussion forum is also available if developers have any questions related to implementation. Emgu CV is capable of doing edge detection, grayscale conversion, and histogram equalization.

Installing and Adding the Emgu CV Library

Emgu CV can be downloaded and installed from Sourceforge. It's also available in the NuGet package manager. At the time this article was written, the latest version of Emgu CV is 3.1, but remember that it has a large number of performance issues and bugs. Currently, the most stable release is version 2.4.

Emgu CV DLLs need to be available in your application's bin directory, the output folder where the executables reside. A developer can add the library to an existing .NET application by choosing References → Add Reference → Emgu CV libraries folder (see Figure 1).

Figure 1: Emgu CV Library DLLs

These libraries are required to process images in your .NET applications. They must be available when your application starts to execute, as shown in Figure 2.
Figure 2: Graph API Profile page URL setup

Sample Application S/w Requirements

In this tutorial, I will create a sample Windows application to show you how to detect human faces using your computer camera/webcam and the Emgu CV library. You need the following software for this tutorial:

- Microsoft Visual Studio 2010 or above
- Emgu CV (OpenCV in .NET) library. You have to download and install this.

Step 1

Emgu CV tutorials have a face detection sample. You can find that face detection application example in the following default path: C:\Emgu\emgucv-windesktop 3.2.0.2682\Emgu.CV.Example\FaceDetection. The path may change if you have changed something during installation. You can execute FaceDetection.exe to check how it detects faces.

Step 2

Next, open Visual Studio and create a new Windows project. Add the Emgu CV references and make sure it looks like Figure 3. The references are required for image processing.

Figure 3: Adding the needed Emgu CV references

If you are running an x64 system, you will have to download separate DLLs. To change the build platform, right-click your project file in the solution explorer and select "Properties" at the bottom. Select the "Build" tab from the ribbon bar on the right of this window. There will be an option for Platform Target; with a drop-down menu, change this from x86 to x64 (see Figure 4).

Figure 4: Changing the drop-down menu

Step 3

Add a Windows Forms PictureBox control to display images. Also add a Timer control to raise events at a specific interval.

Step 4

Add the following namespaces in your form:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Drawing;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using Emgu.CV.Cuda;
using Emgu.Util;

Step 5

Initialize two objects, Capture and HaarCascade, and add the following code in the Form1_Load and timer1_Tick events. Finally, adjust the path of the Haar cascade XML file.
namespace MyFaceDetectionForm
{
    public partial class MyFaceDetectionForm : Form
    {
        private Capture cap;
        private HaarCascade haar;

        public MyFaceDetectionForm()
        {
            InitializeComponent();
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            using (Image<Bgr, byte> nextFrame = cap.QueryFrame())
            {
                if (nextFrame != null)
                {
                    // The original listing omitted this body; below is the
                    // typical Emgu 2.x Haar detection step. "pictureBox1" is
                    // assumed to be the PictureBox control added in Step 3.
                    Image<Gray, byte> grayFrame = nextFrame.Convert<Gray, byte>();
                    MCvAvgComp[] faces = grayFrame.DetectHaarCascade(
                        haar, 1.4, 4,
                        HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                        new Size(nextFrame.Width / 8, nextFrame.Height / 8))[0];
                    foreach (MCvAvgComp face in faces)
                        nextFrame.Draw(face.rect, new Bgr(Color.Green), 3);
                    pictureBox1.Image = nextFrame.ToBitmap();
                }
            }
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            cap = new Capture(0);
            haar = new HaarCascade(
                @"C:\Emgu\emgucv-windesktop 3.2.0.2682\etc\haarcascades\haarcascade_frontalface_alt2.xml");
        }
    }
}

Step 6

Execute the program for face detection.

Summary

I hope you have enjoyed reading this article and that the preceding example has given you an idea of how you can use the Emgu CV libraries in your .NET applications to detect human faces. Happy reading, and wait for my next post on OpenCV.
https://www.codeguru.com/csharp/emgu-integration-with-.net-integrating-images.html
I’ve read the AngularJS documentation on the topic carefully, and then fiddled around with a directive. Here’s the fiddle. And here are some relevant snippets:

From the HTML:

<pane bi-{{text}}</pane>

From the pane directive:

scope: {
    biTitle: '=',
    title: '@',
    bar: '='
},

There are several things I don’t get:

- Why do I have to use "{{title}}" with '@' and "title" with '='?
- Can I also access the parent scope directly, without decorating my element with an attribute?
- The documentation says “Often it’s desirable to pass data from the isolated scope via an expression and to the parent scope”, but that seems to work fine with bidirectional binding too. Why would the expression route be better?

I found another fiddle that shows the expression solution too.

Why do I have to use “{{title}}” with ‘@’ and “title” with ‘=’?

- Lukas’s isolated scope blog post (covers @, =, &)
- dnc253’s explanation of @ and =
- my blog-like answer about scopes — the directives section (way at the bottom, just before the Summary section) has a picture of an isolate scope and its parent scope — the directive scope uses @ for one property and = for another
- What is the difference between & vs @ and = in angularJS

The = means bi-directional binding, so a reference to a variable in the parent scope. This means that when you change the variable in the directive, it will be changed in the parent scope as well. @ means the variable will be copied (cloned) into the directive. As far as I know, <pane bi-{{text}}</pane> should work too. bi-title will receive the parent scope variable value, which can be changed in the directive. If you need to change several variables in the parent scope, you could execute a function on the parent scope from within the directive (or pass data via a service). If you would like to see how this works, a live example follows.
var app = angular.module('app', []);

app.controller("myController", function ($scope) {
    $scope.title = "binding";
});

app.directive("jmFind", function () {
    return {
        replace: true,
        restrict: 'C',
        transclude: true,
        scope: {
            title1: "=",
            title2: "@"
        },
        template: "<div><p>{{title1}} {{title2}}</p></div>"
    };
});

@ get as string

- This does not create any bindings whatsoever. You're simply getting the word you passed in as a string.

= two-way binding

- Changes made from the controller will be reflected in the reference held by the directive, and vice versa.

& This behaves a bit differently, because the scope gets a function that returns the object that was passed in. I'm assuming this was necessary to make it work. The fiddle should make this clear.

- After calling this getter function, the resulting object behaves as follows:
  - If a function was passed: the function is executed in the parent (controller) closure when called.
  - If a non-function was passed in: you simply get a local copy of the object that has no bindings.

This fiddle should demonstrate how they work. Pay special attention to the scope functions with get... in the name to hopefully better understand what I mean about &.

There are three ways scope can be added in the directive:

- Parent scope: This is the default scope inheritance. The directive and its parent (the controller/directive inside which it lies) share the same scope. So any changes made to the scope variables inside the directive are reflected in the parent controller as well. You don't need to specify this, as it is the default.
- Child scope: The directive creates a child scope which inherits from the parent scope if you specify the scope variable of the directive as true. Here, if you change the scope variables inside the directive, it won't reflect in the parent scope, but if you change the property of a scope variable, that is reflected in the parent scope, as you actually modified the scope variable of the parent.
Example:

app.directive("myDirective", function(){
    return {
        restrict: "EA",
        scope: true,
        link: function(element, scope, attrs){
            scope.somvar = "new value"; // does not reflect in the parent scope
            scope.someObj.someProp = "new value"; // reflects, as someObj belongs to the parent; we modified it but did not override it
        }
    };
});

- Isolated scope: This is used when you want to create a scope that does not inherit from the controller scope. This happens when you are creating plugins, as this makes the directive generic, since it can be placed in any HTML and is not affected by its parent scope. Now, if you don't want any interaction with the parent scope, you can just specify the scope as an empty object, like:

scope: {} // this does not interact with the parent scope in any way

Mostly this is not the case, as we need some interaction with the parent scope, so we want some of the values/changes to pass through. For this reason we use:

1. "@" (text binding / one-way binding)
2. "=" (direct model binding / two-way binding)
3. "&" (behaviour binding / method binding)

@ means that the changes from the controller scope will be reflected in the directive scope, but if you modify the value in the directive scope, the controller scope variable will not get affected. @ always expects the mapped attribute to be an expression. This is very important, because to make the "@" prefix work, we need to wrap the attribute value inside {{}}.

= is bidirectional, so if you change the variable in the directive scope, the controller scope variable gets affected as well.

& is used to bind a controller scope method so that, if needed, we can call it from the directive.

The advantage here is that the name of the variable need not be the same in the controller scope and directive scope. Example: the directive scope has a variable "dirVar" which syncs with the variable "contVar" of the controller scope.
This gives a lot of power and generalisation to the directive, as one controller can sync with variable v1 while another controller using the same directive can ask dirVar to sync with variable v2.

Below is an example of usage. The directive and controller are:

    "&"
  },
  link: function(element, scope, attrs){
    // do something like $scope.reverse();
    // calling the controller's function
  }
};
});

And the HTML (note the difference for @ and =):

<div my-directive </div>

Here is a link to the blog which describes it nicely.

Simply, we can use:

- @ : for string values, for one-way data binding. In one-way data binding you can only pass a scope value to the directive.
- = : for object values, for two-way data binding. In two-way data binding you can change the scope value in the directive as well as in the HTML.
- & : for methods and functions.

EDIT

In our component definition for Angular version 1.5 and above there are four different types of bindings:

- = two-way data binding: if we change the value, it automatically updates.
- < one-way binding: when we just want to read a parameter from a parent scope and not update it.
- @ is for string parameters.
- & is for callbacks, in case your component needs to output something to its parent scope.

I created a little HTML file that contains Angular code demonstrating the differences between them:

<!DOCTYPE html>
<html>
<head>
    <title>Angular</title>
    <script src=""></script>
</head>
<body ng-
    <div ng-
        <a my-dir</a>
    </div>
    <script>
        angular.module("myApp", [])
            .controller("myCtrl", [function(){
                var vm = this;
                vm.sayHi = function(name){
                    return ("Hey there, " + name);
                }
            }])
            .directive("myDir", [function(){
                var directive = {
                    scope: {
                        attr1: "=",
                        attr2: "@",
                        attr3: "&"
                    },
                    link: function(scope){
                        console.log(scope.attr1);   // logs "Hey there, Juan"
                        console.log(scope.attr2);   // logs "VM.sayHi('Juan')"
                        console.log(scope.attr3);   // logs "function (a){return h(c,a)}"
                        console.log(scope.attr3()); // logs "Hey there, Juan"
                    }
                }
                return directive;
            }]);
    </script>
</body>
</html>

The = way is two-way binding, which lets you have live changes inside your directive. When someone changes that variable outside the directive, you will have the changed data inside your directive; but the @ way is not two-way binding. It works like text: you bind once, and you will only have its value. To understand it more clearly, you can use this great article: AngularJS Directive Scope '@' and '='

@ local scope property is used to access string values that are defined outside the directive.

= In cases where you need to create a two-way binding between the outer scope and the directive's isolate scope, you can use the = character.

& local scope property allows the consumer of a directive to pass in a function that the directive can invoke.

Kindly check the below link, which gives you a clear understanding with examples. I found it really very useful, so I thought of sharing it.

I implemented all the possible options in a fiddle.
It deals with all the options:

scope: { name: '&' },
scope: { name: '=' },
scope: { name: '@' },
scope: { },
scope: true,

Even when the scope is local, as in your example, you may access the parent scope through the property $parent. Assume in the code below that title is defined on the parent scope. You may then access title as $parent.title:

link: function(scope) {
    console.log(scope.$parent.title)
},
template: "the parent has the title {{$parent.title}}"

However, in most cases the same effect is better obtained using attributes.

An example of where I found the "&" notation, which is used "to pass data from the isolated scope via an expression and to the parent scope", useful (and where a two-way data binding could not be used) was in a directive for rendering a special data structure inside an ng-repeat:

<render data = "record"
        deleteFunction = "dataList.splice($index,1)"
        ng- </render>

One part of the rendering was a delete button, and here it was useful to attach a delete function from the outside scope via &. Inside the render directive, the essential part looks like this:

scope: {
    deleteFunction: "&"
},
template: "... <button ng-click='deleteFunction()'>delete</button>"

Two-way data binding, i.e. data = "=", cannot be used, as the delete function would then run on every $digest cycle, which is not good, as the record would be immediately deleted and never rendered.
The expression will be called with the parent's $scope.x and $scope.y. You have the ability to override the parameters! If you set them by call, e.g. <button ng-</button>, then the expression will be called with your parameter x and the parent's parameter y. You can override both.

Now you know why <button ng-</button> works. Because it just calls the expression of the parent (e.g. <myDirective functionFromParent="function1(x)"></myDirective>) and replaces possible values with your specified parameters, in this case x. It could be:

<myDirective functionFromParent="function1(x) + 5"></myDirective>

or

<myDirective functionFromParent="function1(x) + z"></myDirective>

with child call: <button ng-</button>, or even with function replacement: <button ng-</button>. It is just an expression; it does not matter whether it is a function, many functions, or just a comparison. And you can replace any variable of this expression.

Examples. The following work:

Parent has defined $scope.x, $scope.y:

parent template: <myDirective expr="x==y"></myDirective>
directive template: <button ng-</button>
directive template: <button ng-</button>
directive template: <button ng-</button>

Parent has defined $scope.function1, $scope.x, $scope.y:

parent template: <myDirective expr="function1(x) + y"></myDirective>
directive template: <button ng-</button>
directive template: <button ng-</button>
directive template: <button ng-</button>

Directive has $scope.myFn as a function:

directive template: <button ng-</button>

Why do I have to use "{{title}}" with '@' and "title" with '='?

When you use {{title}}, only the parent scope value will be passed to the directive view and evaluated. This is limited to one way, meaning that changes will not be reflected in the parent scope. You can use '=' when you want the changes done in the child directive to be reflected in the parent scope as well. This is two-way.

Can I also access the parent scope directly, without decorating my element with an attribute?
When the directive has a scope attribute in it (scope: {}), then you will no longer be able to access the parent scope directly. But it is still possible to access it via scope.$parent, etc. If you remove scope from the directive, it can be accessed directly.

The documentation says "Often it's desirable to pass data from the isolated scope via an expression and to the parent scope", but that seems to work fine with bidirectional binding too. Why would the expression route be better?

It depends on the context. If you want to call an expression or function with data, you use &, and if you want to share data, you can use the bidirectional way with '='.

You can find the differences between the multiple ways of passing data to a directive at the link below: AngularJS – Isolated Scopes – @ vs = vs &
https://exceptionshub.com/what-is-the-difference-between-and-in-directive-scope-in-angularjs.html
Sample size required for no failures

The function sample_size_no_failures is used to determine the minimum sample size required for a test in which no failures are expected, and the desired outcome is the lower bound on the reliability based on the sample size and desired confidence interval.

API Reference

For inputs and outputs see the API reference.

As an example, consider a scenario in which we want to be sure that a batch of LEDs meets the reliability target for on/off cycles. Testing is for the planned lifetime (1 million cycles) and tested items will have most or all of their lifetime used up during testing, so we can't test everything. How many items from the batch do we need to test to ensure we achieve 99.9% reliability with a 95% confidence interval?

from reliability.Reliability_testing import sample_size_no_failures

sample_size_no_failures(reliability=0.999)

'''
Results from sample_size_no_failures:
To achieve the desired reliability of 0.999 with a 95% lower confidence bound, the required sample size to test is 2995 items.

This result is based on a specified weibull shape parameter of 1 and an equivalent test duration of 1 lifetime.
If there are any failures during this test, then the desired lower confidence bound will not be achieved.
If this occurs, use the function Reliability_testing.one_sample_proportion to determine the lower and upper bounds on reliability.
'''

Based on this result, we need to test 2995 items from the batch and not have a single failure in order to be 95% confident that the reliability of the batch meets or exceeds 99.9%. If we tested each LED for more on/off cycles (let's say 3 million, which is 3 lifetimes), then the number of successful results would only need to be 999. In this way, we can design our qualification test based on the desired reliability, confidence interval, and number of lifetimes that are tested to.
In the event that we suffer a single failure during this test, we will need to adjust the testing method, either by finishing the testing and calculating the lower bound on reliability using the one_sample_proportion test, or by using a sequential_sampling_chart.
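The arithmetic behind these numbers can be checked by hand. With an assumed Weibull shape parameter of 1, a zero-failure test of n items over L lifetimes gives a lower confidence bound satisfying R^(n*L) = 1 - CI, so n = ln(1 - CI) / (L * ln(R)), rounded up. A quick sketch that reproduces the documented results (the helper name is illustrative, not the library's API):

```python
import math

def min_sample_size(reliability, ci=0.95, lifetimes=1):
    """Zero-failure sample size for a lower confidence bound on
    reliability, assuming a Weibull shape parameter of 1."""
    return math.ceil(math.log(1.0 - ci) / (lifetimes * math.log(reliability)))

print(min_sample_size(0.999))               # 2995 (each item tested for 1 lifetime)
print(min_sample_size(0.999, lifetimes=3))  # 999 (each item tested for 3 lifetimes)
```

This also recovers the well-known result that demonstrating 90% reliability with 95% confidence requires 29 failure-free units.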
https://reliability.readthedocs.io/en/stable/Sample%20size%20required%20for%20no%20failures.html
18 November 2009 17:51 [Source: ICIS news]

LONDON (ICIS news)--International Petroleum Investment Co (IPIC) is considering a number of initiatives with different chemical companies, a spokesman with the Abu Dhabi-based company said on Wednesday.

IPIC was reacting to earlier press reports that the group was in discussions with Bayer MaterialScience over the formation of a joint venture to establish a petrochemical plant in Abu Dhabi.

An IPIC spokesman said: "At present there are no firm plans to do anything with Bayer MaterialScience, or any other chemical company. A number of initiatives are under consideration internally, but nothing has been decided."

A Bloomberg report quoted IPIC's managing director, Khadem al-Qubaisi, as saying that IPIC was discussing a venture with Bayer MaterialScience, NOVA Chemicals, Borealis and two South Korean companies to create a petrochemical plant in Abu Dhabi.

State-run IPIC owns 100% of Canada-based commodity chemical group NOVA Chemicals and 64% of Austria-based polyolefins group Borealis.

Abu Dhabi is planning the Chemaweyaat chemical city in the new Mina Khalifa Industrial Zone.

The first phase of the city includes a 1.45m tonne/year ethylene plant, which is to begin production in 2014. The next stages of construction may involve the production of aromatics and phenol.

On his blog, Paul Hodges, chairman of consultancy International eChem, said acquisitions and joint ventures by IPIC would be driven by the need for technology. The Chemaweyaat development "will be based on liquid feeds, rather than ethane", wrote Hodges. "[Chemaweyaat] will therefore allow

On 17 November, IPIC's Al-Qubaisi told ICIS news that he expected to sign a deal to acquire a major European-based petrochemical company by the first quarter of 2010. Al-Qubaisi said he was in talks
http://www.icis.com/Articles/2009/11/18/9265216/ipic-considers-initiatives-with-chemical-companies.html
We've used Django REST framework for a while now. Nothing spectacular yet. In most cases we're only returning a bit of JSON. We chose Django REST framework over Piston because that one was effectively unmaintained at the time. We chose it over tastypie because of the great way in which Django REST framework presents your API in the browser (the "browseable API" they call it), which is very helpful if you want others to actually use your API! And… Django REST framework uses Django's class based views, which we're using a lot. So it matches the style we want to work with. This was a second thing that gave it an edge over tastypie for us.

Well, anyway, most of our Django REST framework 0.3.3 (or 0.4 or whatever) views looked like this:

from djangorestframework.views import View as JsonView

class SomeView(JsonView):

    def get(self, request):
        return {'some': 'dictionary', 'or': ['a', 'list']}

JsonView would take care of converting everything that came in/out of the get() or post() between JSON and whatever Python data structure it represents. Handy.

Since 30 October, version 2.0 is out. Lots of improvements and, from what I read, a better fit for many projects. But, boy, is it backward incompatible. I thought to do a quick migration…

- import djangorestframework is now import rest_framework. Well, that's a sure fire way to make absolutely sure nobody accidentally uses the wrong version.
- JsonView is gone! You cannot just spit out a dict or list anymore in your .get() and get it converted automatically. You must wrap it in a special Response object. This is probably needed, but it doesn't look as nice.

Here's what the first example looks like in version 2.0:

from rest_framework.views import APIView
from rest_framework.response import Response as RestResponse

class SomeView(APIView):

    def get(self, request):
        return RestResponse({'some': 'dictionary', 'or': ['a', 'list']})

It works fine, but it lost a bit of its friendliness, at least for such an utterly simple example.
I looked at the documentation, though, and everything is as simple as JsonView used to be if you work with models. Just two or three lines of code for basic cases and you're done. I'll have to go through some other apps we made, as everything that uses Django REST framework has to change at the same time. Every single app. There's no in-between. I'm afraid it'll be quite a headache…
https://reinout.vanrees.org/weblog/2012/12/04/django-rest-framework-2.html
FPUTWC(3P)                POSIX Programmer's Manual                FPUTWC(3P)

PROLOG
       This manual page is part of the POSIX Programmer's Manual. The Linux
       implementation of this interface may differ (consult the corresponding
       Linux manual page for details of Linux behavior), or the interface may
       not be implemented on Linux.

NAME
       fputwc — put a wide-character code on a stream

SYNOPSIS
       #include <stdio.h>
       #include <wchar.h>

       wint_t fputwc(wchar_t wc, FILE *stream);

DESCRIPTION
       The functionality described on this reference page is aligned with the
       ISO C standard. Any conflict between the requirements described here
       and the ISO C standard is unintentional. This volume of POSIX.1‐2017
       defers to the ISO C standard...

EXAMPLES
       None.

APPLICATION USAGE
       None.

RATIONALE
       None.

FUTURE DIRECTIONS
       None.

SEE ALSO
       Section 2.5, Standard I/O Streams, ferror(3p), fopen(3p), setbuf(3p),
       ulimit(3p)

Pages that refer to this page: wchar.h(0p), fprintf(3p), fputws(3p), fwprintf(3p), putwc(3p), putwchar(3p)
https://www.man7.org/linux/man-pages/man3/fputwc.3p.html
Java | Packages | Question 1

Which of the following is/are true about packages in Java?

1) Every class is part of some package.
2) All classes in a file are part of the same package.
3) If no package is specified, the classes in the file go into a special unnamed package.
4) If no package is specified, a new package is created with the folder name of the class, and the class is put in this package.

(A) Only 1, 2 and 3
(B) Only 1, 2 and 4
(C) Only 4
(D) Only 1 and 3

Answer: (A)

Explanation: In Java, a package can be considered roughly equivalent to a C++ namespace.
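Statements 1 and 3 can be observed at runtime through reflection; a small illustrative sketch (class and method names are made up):

```java
class PackageDemo {
    // Returns a readable package name, treating the unnamed package specially.
    static String packageOf(Class<?> c) {
        Package p = c.getPackage();
        return (p == null || p.getName().isEmpty()) ? "(unnamed package)" : p.getName();
    }

    public static void main(String[] args) {
        // java.lang.String is declared with "package java.lang;"
        System.out.println(packageOf(String.class));      // java.lang
        // This file has no package statement, so the class lands in the unnamed package.
        System.out.println(packageOf(PackageDemo.class));
    }
}
```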
https://www.geeksforgeeks.org/java-packages-question-1/
A guide to logging in Python

Logging is a critical way to understand what's going on inside an application. The same way we document code for future developers, we should direct new software to generate adequate logs for developers and sysadmins. Logs are a critical part of the system documentation about an application's runtime status. When instrumenting your software with logs, think of it like writing documentation for developers and sysadmins who will maintain the system in the future.

Some purists argue that a disciplined developer who uses logging and testing should hardly need an interactive debugger. If we cannot reason about our application during development with verbose logging, it will be even harder to do it when our code is running in production.

This article looks at Python's logging module, its design, and ways to adapt it for more complex use cases. This is not intended as documentation for developers, but rather as a guide to show how the Python logging module is built and to encourage the curious to delve deeper.

Why use the logging module?

A developer might argue: why aren't simple print statements sufficient? The logging module offers multiple benefits, including:

- Multi-threading support
- Categorization via different levels of logging
- Flexibility and configurability
- Separation of the how from the what

This last point, the actual separation of the what we log from the how we log, enables collaboration between different parts of the software. As an example, it allows the developer of a framework or library to add logs and let the sysadmin or person in charge of the runtime configuration decide what should be logged at a later point.

What's in the logging module

The logging module beautifully separates the responsibility of each of its parts (following the Apache Log4j API's approach).
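That separation of the what from the how is visible in a few lines: the logging call only picks a level and a message, while a separately attached handler decides where the record ends up. A minimal sketch using only the standard library (the in-memory stream stands in for a file, socket, or syslog):

```python
import io
import logging

log_destination = io.StringIO()  # stand-in for a file, socket, syslog, ...

logger = logging.getLogger("demo.separation")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(log_destination))

logger.debug("dropped: below the configured INFO threshold")
logger.info("Stock was sold at %s", 42)

print(log_destination.getvalue())  # only the INFO line was emitted
```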
Let's look at how a log line travels around the module's code and explore its different parts.

Logger

Loggers are the objects a developer usually interacts with. They are the main APIs that indicate what we want to log. Given an instance of a logger, we can categorize and ask for messages to be emitted without worrying about how or where they will be emitted.

For example, when we write logger.info("Stock was sold at %s", price), we have the following model in mind: we request a line, and we assume some code is executed in the logger that makes that line appear in the console/file. But what actually is happening inside?

Log records

Log records are packages that the logging module uses to pass all required information around. They contain information about the function where the log was requested, the string that was passed, arguments, call stack information, etc. These are the objects that are being logged; every time we invoke our loggers, we are creating instances of these objects. But how do objects like these get serialized into a stream? Via handlers!

Handlers

Handlers emit the log records into any output. They take log records and handle them as a function of what they were built for. As an example, a FileHandler will take a log record and append it to a file.

The standard logging module already comes with multiple built-in handlers, like:

- Multiple file handlers (TimeRotated, SizeRotated, Watched) that can write to files
- StreamHandler, which can target a stream like stdout or stderr
- SMTPHandler, which sends log records via email
- SocketHandler, which sends log records to a streaming socket
- SyslogHandler, NTEventHandler, HTTPHandler, MemoryHandler, and others

We now have a model that's closer to reality. But most handlers work with simple strings (SMTPHandler, FileHandler, etc.), so you may be wondering how those structured LogRecords are transformed into easy-to-serialize bytes...

Formatters

Let me present the Formatters.
Formatters are in charge of serializing the metadata-rich LogRecord into a string. There is a default formatter if none is provided. The generic formatter class provided by the logging library takes a template and style as input, and placeholders can be declared for all the attributes in a LogRecord object. As an example, '%(asctime)s %(levelname)s %(name)s: %(message)s' will generate logs like:

2017-07-19 15:31:13,942 INFO parent.child: Hello EuroPython

Note that the attribute message is the result of interpolating the log's original template with the arguments provided (e.g., for logger.info("Hello %s", "Laszlo"), the message will be "Hello Laszlo"). All default attributes can be found in the logging documentation.

Filters

The last objects in our logging toolkit are filters. Filters allow for finer-grained control of which logs should be emitted. Multiple filters can be attached to both loggers and handlers. For a log to be emitted, all filters should allow the record to pass. Users can declare their own filters as objects with a filter method that takes a record as input and returns True/False as output.

The logger hierarchy

At this point, you might be impressed by the amount of complexity and configuration that the module is hiding so nicely for you, but there is even more to consider: the logger hierarchy.

We can create a logger via logging.getLogger(<logger_name>). The string passed as an argument to getLogger can define a hierarchy by separating the elements using dots. As an example, logging.getLogger("parent.child") will create a logger "child" with a parent logger named "parent". Loggers are global objects managed by the logging module, so they can be retrieved conveniently anywhere during our project. Logger instances are also known as channels. The hierarchy allows the developer to define the channels and their hierarchy.
After the log record has been passed to all the handlers within the logger, the parents' handlers will be called recursively until we reach the top logger (defined with an empty string as its name) or until a logger has configured propagate = False. Note that the parent logger itself is not called, only its handlers. This means that filters and other code in the parent logger won't be executed. This is a common pitfall when adding filters to loggers.

Recapping the workflow

We've examined the split in responsibility and how we can fine-tune log filtering. But there are two other attributes we haven't mentioned yet:

- Loggers can be disabled, thereby not allowing any record to be emitted from them.
- An effective level can be configured in both loggers and handlers. As an example, when a logger has configured a level of INFO, only INFO levels and above will be passed. The same rule applies to handlers.

With all this in mind, the final flow diagram can be found in the logging documentation.

How to use logging

Now that we've looked at the logging module's parts and design, it's time to examine how a developer interacts with it. Here is a code example:

```python
import logging

def sample_function(secret_parameter):
    logger = logging.getLogger(__name__)  # __name__=projectA.moduleB
    logger.debug("Going to perform magic with '%s'", secret_parameter)
    ...
    try:
        result = do_magic(secret_parameter)
    except IndexError:
        logger.exception("OMG it happened again, someone please tell Laszlo")
    except:
        logger.info("Unexpected exception", exc_info=True)
        raise
    else:
        logger.info("Magic with '%s' resulted in '%s'", secret_parameter, result, stack_info=True)
```

This creates a logger using the module __name__. It will create channels and hierarchies based on the project structure, as Python module paths are joined with dots. The logger variable references the "moduleB" logger, having "projectA" as a parent, which has "root" as its parent.
On line 5 of the example, we see how to perform calls to emit logs. We can use one of the methods debug, info, error, or critical to log using the appropriate level.

When logging a message, in addition to the template arguments, we can pass keyword arguments with specific meaning. The most interesting are exc_info and stack_info. These will add information about the current exception and the stack frame, respectively. For convenience, a method exception is available on logger objects, which is the same as calling error with exc_info=True.

These are the basics of how to use the logger module ʘ‿ʘ. But it is also worth mentioning some uses that are usually considered bad practices.

Greedy string formatting

Using logger.info("string template {}".format(argument)) should be avoided whenever possible in favor of logger.info("string template %s", argument). This is a better practice, as the actual string interpolation happens only if the log will be emitted. Not doing so can lead to wasted cycles when we are logging at a level above INFO, as the interpolation will still occur.

Capturing and formatting exceptions

Quite often, we want to log information about the exception in a catch block, and it might feel intuitive to use:

```python
try:
    ...
except Exception as error:
    logger.info("Something bad happened: %s", error)
```

But that code can give us log lines like Something bad happened: "secret_key", which is not that useful. If we use exc_info as illustrated previously:

```python
try:
    ...
except Exception:
    logger.info("Something bad happened", exc_info=True)
```

it will produce the following:

```
Something bad happened
Traceback (most recent call last):
  File "sample_project.py", line 10, in code
    inner_code()
  File "sample_project.py", line 6, in inner_code
    x = data["secret_key"]
KeyError: 'secret_key'
```

This not only contains the exact source of the exception, but also its type.
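The reason the %-style call is cheaper can be seen on the record itself: the template and arguments are stored separately and only merged when a handler actually formats the record. A short sketch with a hypothetical record-capturing handler:

```python
import logging

captured = []

class CapturingHandler(logging.Handler):
    """Hypothetical handler that stores records instead of emitting them."""
    def emit(self, record):
        captured.append(record)

logger = logging.getLogger("demo.lazy")
logger.setLevel(logging.DEBUG)
logger.addHandler(CapturingHandler())

logger.info("Hello %s", "Laszlo")

record = captured[0]
print(record.msg)           # Hello %s   (template kept as-is)
print(record.args)          # ('Laszlo',)
print(record.getMessage())  # Hello Laszlo  (interpolated on demand)
```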
Configuring our loggers

It's easy to instrument our software, but we also need to configure the logging stack and specify how those records will be emitted. There are multiple ways to configure the logging stack.

BasicConfig

This is by far the simplest way to configure logging. Just doing logging.basicConfig(level="INFO") sets up a basic StreamHandler that will log everything on the INFO and above levels to the console. There are further arguments to customize this basic configuration. This is a simple and practical way to configure small scripts. Note that basicConfig only works the first time it is called in a runtime; if you have already configured your root logger, calling basicConfig will have no effect.

DictConfig

The configuration for all elements and how to connect them can be specified as a dictionary. This dictionary should have different sections for loggers, handlers, formatters, and some basic global parameters. Here's an example:

```python
config = {
    'disable_existing_loggers': False,
    'version': 1,
    'formatters': {
        'short': {
            'format': '%(asctime)s %(levelname)s %(name)s: %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'formatter': 'short',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        '': {
            'handlers': ['console'],
            'level': 'ERROR',
        },
        'plugins': {
            'handlers': ['console'],
            'level': 'INFO',
            'propagate': False
        }
    },
}

import logging.config
logging.config.dictConfig(config)
```

When invoked, dictConfig will disable all existing loggers, unless disable_existing_loggers is set to false. This is usually desired, as many modules declare a global logger that will be instantiated at import time, before dictConfig is called. You can see the schema that can be used for the dictConfig method in the logging documentation. Often, this configuration is stored in a YAML file and loaded from there. Many developers prefer this over using fileConfig, as it offers better support for customization.
Extending logging

Thanks to the way it is designed, it is easy to extend the logging module. Let's see some examples.

Logging JSON

If we want, we can log JSON by creating a custom formatter that transforms the log records into a JSON-encoded string:

```python
import logging
import logging.config
import json

ATTR_TO_JSON = ['created', 'filename', 'funcName', 'levelname', 'lineno', 'module',
                'msecs', 'msg', 'name', 'pathname', 'process', 'processName',
                'relativeCreated', 'thread', 'threadName']

class JsonFormatter:
    def format(self, record):
        obj = {attr: getattr(record, attr) for attr in ATTR_TO_JSON}
        return json.dumps(obj, indent=4)

handler = logging.StreamHandler()
handler.formatter = JsonFormatter()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.error("Hello")
```

Adding further context

On the formatters, we can specify any log record attribute. We can inject attributes in multiple ways. In this example, we abuse filters to enrich the records:

```python
import logging
import logging.config

GLOBAL_STUFF = 1

class ContextFilter(logging.Filter):
    def filter(self, record):
        global GLOBAL_STUFF
        GLOBAL_STUFF += 1
        record.global_data = GLOBAL_STUFF
        return True

handler = logging.StreamHandler()
handler.formatter = logging.Formatter("%(global_data)s %(message)s")
handler.addFilter(ContextFilter())
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.error("Hi1")
logger.error("Hi2")
```

This effectively adds an attribute to all the records that go through that logger. The formatter will then include it in the log line. Note that this impacts all log records in your application, including libraries or other frameworks that you might be using and for which you are emitting logs. It can be used to log things like a unique request ID on all log lines to track requests or to add extra contextual information.

Starting with Python 3.2, you can use setLogRecordFactory to capture all log record creation and inject extra information.
The extra attribute and the LoggerAdapter class may also be of interest.

Buffering logs

Sometimes we would like to have access to debug logs when an error happens. This is feasible by creating a buffered handler that will log the last debug messages after an error occurs. See the following code as a non-curated example:

```python
import logging
import logging.handlers

class SmartBufferHandler(logging.handlers.MemoryHandler):
    def __init__(self, num_buffered, *args, **kwargs):
        kwargs["capacity"] = num_buffered + 2  # +2: one for current, one for pre-pop
        super().__init__(*args, **kwargs)

    def emit(self, record):
        if len(self.buffer) == self.capacity - 1:
            self.buffer.pop(0)
        super().emit(record)

handler = SmartBufferHandler(num_buffered=2, target=logging.StreamHandler(),
                             flushLevel=logging.ERROR)
logger = logging.getLogger(__name__)
logger.setLevel("DEBUG")
logger.addHandler(handler)

logger.error("Hello1")
logger.debug("Hello2")  # This line won't be logged
logger.debug("Hello3")
logger.debug("Hello4")
logger.error("Hello5")  # As error flushes the buffered logs, the two last debugs will be logged
```

For more information

This introduction to the logging library's flexibility and configurability aims to demonstrate the beauty of how its design splits concerns. It also offers a solid foundation for anyone interested in a deeper dive into the logging documentation and the how-to guide. Although this article isn't a comprehensive guide to Python logging, here are answers to a few frequently asked questions.

My library emits a "no logger configured" warning. Check how to configure logging in a library in "The Hitchhiker's Guide to Python."

What happens if a logger has no level configured? The effective level of the logger will then be defined recursively by its parents.

All my logs are in local time. How do I log in UTC? Formatters are the answer! You need to set the converter attribute of your formatter to generate UTC times: use converter = time.gmtime.

Comments

Great article!
Thanks a lot for putting the effort to explain it nicely. I usually just use logging.info, logging.debug, etc. Is there any advantage in using different loggers?

[Author's reply:] Even if you can log using the module-level functions (logging.info), that is basically just logging with the root logger. You can consider `logging.info` identical to `logging.getLogger().info`; it just logs under the root instance. With that said, the question becomes: is there any advantage in not always using the root logger? The answer is tuning. If you are developing a single script it is probably fine (I do it quite often), but if you are developing a larger program or a library you definitely want to use different loggers, so that when you set up your configuration you can decide what to do with the logs coming from that piece of code. As an example, if you develop an HTTP library and log everything on info, using the logger hierarchy allows the users of your library to suppress all info logs by configuring your library's root logger to emit only records above info. You can also attach different handlers and tune accordingly. In brief, my take is: for simple scripts it's probably OK, but for more complex solutions always go with proper logger instances. And thank you for reading it :)

Great article Mario. Lots of info covered there, more detail than the average article. Very appreciated.

[Author's reply:] Thank you for reading it :)

In my judgement, a big oversight of logger documentation and tutorials is coverage of how to do unit testing with logging. In many units of code, the specifications require that a log record with specified contents is emitted. The log record might be emitted only in special cases or always. For instance, a unit of code may be required to a) validate the format of a string and b) emit an error message to a log file if the string has an illegal format. In such a test case, reading and verifying the contents of a log file does double duty.
It a) verifies that the logging function is working correctly and b) also verifies the functionality of the unit under test. On a final note: even for an experienced programmer new to Python, figuring out these procedures has quite a learning curve, and in the end I have no confidence that my solution is very "Pythonic".

Thanks for a great article. I want to translate it into Korean on my personal blog, also using your images. Can I do it? :) (Surely, I will link this original post.) Thanks.

[Author's reply:] Sure thing, this is all CC :)

Hi Mario, great article! Just one question: shouldn't you always use Logger.exception for logging unknown exception info, as it includes exc_info and is more verbose and semantically correct?

[Author's reply:] Hello Deus, thank you for reading it :) About `logging.exception`: indeed, it is just using `logging.error` with `exc_info`. The key to your question is the level you want to log with. If you want to log at the error level, I indeed prefer using `logging.exception` rather than `logging.error` and passing `exc_info`. But where the article is not using `exception`, it is because it wants to log at a different level. Not sure if that answers the question; let me know if it doesn't! :) Regards, Mario

If you want your life to be simplified, jump on daiquiri, a thin wrapper that sets things up in a simpler way.
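The UTC answer from the FAQ above takes one assignment on a formatter; a runnable sketch:

```python
import logging
import time

formatter = logging.Formatter("%(asctime)s %(message)s")
formatter.converter = time.gmtime  # timestamps are now rendered in UTC

record = logging.LogRecord("demo.utc", logging.INFO, __file__, 1,
                           "logged with a UTC timestamp", None, None)
print(formatter.format(record))
```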
https://opensource.com/article/17/9/python-logging
Use PowerShell to Get the SQL Server Error Log

Summary: Microsoft Scripting Guy, Ed Wilson, talks about different ways to use Windows PowerShell to get the SQL Server error log.

Hey, Scripting Guy! I have recently inherited a Microsoft SQL Server 2008 R2 box, and I am concerned about the thing. I am not a DBA, and to be honest the server sort of frightens me a bit. When I look into the Application log, there is lots of information about SQL, but it all seems to be spam, and it does not really seem to tell me much useful information. I am wondering if Windows PowerShell can help me with this server, and if so, where is the real error log for the SQL server?

—PV

Hello PV, Microsoft Scripting Guy, Ed Wilson, is here. As it turns out, a couple of weeks ago I was talking about Windows PowerShell at the SQL Rally in Orlando, Florida. Microsoft SQL Server MVP, Aaron Nelson (aka SQLVariant), the Scripting Wife, and I hosted a Birds of a Feather table one afternoon, and we had the chance to talk to lots of SQL DBAs.

Anyway, as it turns out, I had a rather interesting conversation with Microsoft SQL Server MVP, Allen Kinsel, about using Windows PowerShell to query the SQL Server error log. He stated that Windows PowerShell is too hard for non-developers to use. In some respects, he is correct. In other respects, it all depends on what example you are attempting to follow. With Windows PowerShell, there are many ways to accomplish the same task, and finding the easiest approach to a solution can often be a lot of work. With this as background, PV, let us dive into your question.

By default, the SQL Server error log resides in the log directory under MSSQL. If I am not certain where the SQL Server error log resides, I use my Get-ErrorLogPath function to retrieve the path to the error log. This function is shown here (thanks to Chad Miller for the Add-Type code).
Note: Keep in mind that I prefer using Add-Type instead of the obsolete LoadWithPartialName static method from the Reflection.Assembly class. However, Add-Type attempts to load version 9 of Microsoft.SqlServer.Smo, and that fails on my SQL Server 2008 R2 system. Therefore, it is necessary to use the strong name to attempt to load version 10 of the SQL Management Objects (SMO). If this fails, then I fall back to using Add-Type to load the previous version.

Get-ErrorLogPath function:

```powershell
Function Get-ErrorLogPath
{
 <#
    .Synopsis
     Returns the path to the SQL Error Log
    .Description
     This function returns the path to the SQL Error Log
    .Example
     Get-ErrorLogPath
     Returns the path to the SQL Error Log on default instance of SQL on local machine
    .Example
     Get-ErrorLogPath -SQLServer SQL1
     Returns the path to the SQL Error Log on default instance of SQL on a SQL server named SQL1
    .Parameter SQLServer
     The name and instance of SQL Server
    .Notes
     NAME: Get-ErrorLogPath
     AUTHOR: ed wilson, msft
     LASTEDIT: 05/27/2011 11:34:59
     KEYWORDS: Databases, SQL Server, Add-Type
     HSG: HSG-5-31-11
    .Link
 #Requires -Version 2.0
 #>
 Param([string]$SQLServer = "(local)")
 # SMO assembly loading (strong-name Add-Type with fallback) omitted here
 $server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $SQLServer
 $server.ErrorLogPath
} #end function Get-ErrorLogPath
```

The complete function is uploaded to the Scripting Guys Script Repository, and it can easily be downloaded from there (it will help you avoid cut-and-paste errors and the extraneous HTML artifacts that can appear when attempting to copy code from the blog).

Also by default, the most recent error log is named ERRORLOG and it has no extension. Six archive copies of the error log are maintained; these files have an extension of 1-6. The error log is a plain text file. Unfortunately, it is not formatted as a CSV file, or even as a TSV, although the three columns are separated by spaces (just a variable amount of space).
At first, I played around with using the Import-CSV cmdlet to import the SQL Server error log directly, but the results were not great, and it quickly became really complicated. This is because I needed to skip several rows from the top of the file, and because of the variable spacing between the columns in the remainder of the file. I thought to myself, "Dude, there has got to be an easier way to do this."

I remembered that there is a SQL command called xp_ReadErrorLog, and I found a pretty cool script, ReadSqlErrorLogWithDotNetClasses.ps1, written by Buck Woody on his blog that uses this extended stored procedure via the .NET Framework SQLConnection class.

The script is very useful. It returns the SQL Server error log information, and it is basically ready to go, meaning that I do not need to write the code. This is always an advantage. However, the code itself is a bit confusing for someone who is not very familiar with .NET Framework programming, and ADO.NET in particular. When I was getting my MCDBA, I was hammered with ADO code on the exams, so I personally do not have a problem with the code. But I also remember studying for those exams, and it was a bit confusing to learn all that stuff.

Chad Miller decided to clean up the code a bit, so that the script is a bit more PowerShell-like. In addition, he replaced the SQL Reader with a plain old DataTable. The revised code is shown here.
ReadSqlErrorLogWithDotNetClasses_PartDeux.ps1:

```powershell
$ServerInstance = "(Local)"
$conn = new-object System.Data.SqlClient.SQLConnection `
    $("Server={0};Database=master;Integrated Security=True" -f $ServerInstance)
$conn.Open()
$cmd = new-object system.Data.SqlClient.SqlCommand("xp_ReadErrorLog", $conn)
$ds = New-Object system.Data.DataSet
$da = New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
$da.fill($ds) | out-null
$conn.Close()
$ds.Tables
```

There could be some other issues involved in the previous two examples. For one thing, the xp_ReadErrorLog extended stored procedure is not really very well documented, and I personally have had a few unpleasant surprises in the past when using undocumented features. One reason the product group does not document certain features is because they do not want to support those features. While they may work perfectly fine right now, any hotfix, service pack, or version upgrade could cause those undocumented features to quit working. But, hey, if it works right now, and it is easy to do, then I do not really have a major problem; I will worry about future upgrades when the future comes. Of course, if one wants to use xp_ReadErrorLog, one can easily use other methods to invoke it so that it does not involve using ADO. (I will talk about that tomorrow.)

I also figured that I could use SQL Management Objects (SMO) to get information about the SQL Server error log, and sure enough, Chad Miller added a note to Buck's article where he helpfully posted sample code to accomplish this task. The Server class from the Microsoft.SqlServer.Smo .NET Framework namespace is quite extensive, and it is well documented on MSDN.

SMOSQLErrorLog.ps1:

```powershell
# SMO assembly loading omitted here
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") '(local)'
$server.ReadErrorLog(0)
```

Buck Woody even added a note to his blog post that states you can even use the SQL provider to access the SQL Server error log. This is shown here.
```powershell
$MyServer = get-item SQLSERVER:\SQL\BWOODY1\SQL2K8
$MyServer.ReadErrorLog(0)
```

If you have installed the SQLPSX modules (a free download from CodePlex) on your computer, you can use the Get-SqlErrorLog cmdlet. This is shown here:

```powershell
Import-Module sqlpsx
Get-SqlErrorLog -sqlserver "(local)"
```

Generally, if I am not using some of the "advanced" features of SQLPSX, then I will load only the SQLServer module. This is more efficient than loading all the other modules associated with SQLPSX. The revised command to do this is shown here:

```powershell
Import-Module sqlserver
Get-SqlErrorLog -sqlserver "(local)"
```

PV, that should be enough different ways to access the SQL Server error log to confuse you, or hopefully to aid you in deciding how to best access the SQL Server error log. Tomorrow, I will talk about searching the SQL Server error log for specific errors.
https://devblogs.microsoft.com/scripting/use-powershell-to-get-the-sql-server-error-log/
```python
import sys
import twython

print 'Arguments:', sys.argv[1]

nounsstring = open("/Users/ranmo/Desktop/Nouns.txt", "r").read()
adjectivesstring = open("/Users/ranmo/Desktop/Adjectives.txt", "r").read()
nouns = list()
adjectives = list()
newNouns = list()
number = 0
nouns = nounsstring.split("\r\n")
adjectives = adjectivesstring.split("\n")
results = dict()
name = sys.argv[1]
# The Twython search call is missing from the source here; it set `response`
# using the query in `name` (the surviving fragment ended with `...=200)`).

class tweet(object):
    def __init__(self, result):
        self.results = result
        self.tweet = tweet
        print "hello"

    def isNoun(self, word):
        if word == "":
            return False
        if word.lower() in nouns:
            return True
        else:
            return False
        print "noun"

    def isAdjective(self, word):
        if word == "":
            return False
        if word.lower() in adjectives:
            return True
        else:
            return False

    def newtweet(self):
        self.results = dict()
        for tweet in response:
            mynouns = list()
            myadjectives = list()
            if tweet['retweeted'] is False and tweet['text'][0:2] != "RT":
                words = tweet['text'].split(" ")
                for word in words:
                    if self.isNoun(word):
                        mynouns.append(word)
                    elif self.isAdjective(word):
                        myadjectives.append(word)
                for noun in mynouns:
                    nounindex = words.index(noun)
                    for adjective in myadjectives:
                        adjectiveindex = words.index(adjective)
                        if adjectiveindex < nounindex:
                            self.results[noun] = adjective
        return self.results

a = tweet(results)
results = a.newtweet()
for key in results:
    print results[key] + "\t\t" + key
```

Basically, what I did was use the idea we learned in the last RWET class to transform my midterm project into a "class" in Python. I also did a little revision of the former structure to refine the code. By following the course notes, the work was not that tough or complicated. However, because I had to replant the former structure into the class-based structure, most of the errors I got were indentation issues, which cost me a lot of time. The interesting thing was that after I finished all the debugging, until there were no errors left, I could barely get more than one "result", which confused me a lot. I tried a lot.
(The "count" in "response" is count = 10.) I printed stuff from the top to the bottom of the code and found everything was fine. The turning point came when I checked my friend's Twitter and found a tweet: "You. Are." I suddenly realized that this tweet would be neglected by my program, because "You" is a noun but there is no adjective at all. So I tried turning up the "count" number...
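The adjective-noun pairing at the heart of the script can be pulled out into a standalone function (Python 3 here; the names are mine, not from the original), which makes the behavior described above easy to check: an adjective is attached to a noun only when it appears earlier in the tweet, so a tweet such as "You. Are.", with a noun but no preceding adjective, contributes nothing.

```python
def pair_adjectives(words, nouns, adjectives):
    """For each noun in the word list, record the last adjective (in word
    order) that appears before it, mirroring the nested loops in the class."""
    results = {}
    found_nouns = [w for w in words if w.lower() in nouns]
    found_adjectives = [w for w in words if w.lower() in adjectives]
    for noun in found_nouns:
        noun_index = words.index(noun)
        for adjective in found_adjectives:
            if words.index(adjective) < noun_index:
                results[noun] = adjective
    return results

print(pair_adjectives(["the", "quick", "brown", "fox"],
                      nouns={"fox"}, adjectives={"quick", "brown"}))
# {'fox': 'brown'}
```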
http://www.ranmoo.com/mrrm-2/288
I am currently trying to build a simple web application in Ruby on Rails connecting to the OMDb API. I am new to Ruby and coding in general. I am trying to run a FOR loop to collect all the titles from a search query. Here is my method:

```ruby
def self.findAllTitles(title)
  allresponses = []
  for page in 1..100 do
    url = "...#{title}&page=#{page}"   # base URL truncated in the source
    response = HTTParty.get(url)
    responsebody = JSON.parse(response.body)
    allresponses.concat(responsebody)
  end
  return allresponses
end
```

```erb
<% @responsealltitles.each do |result| %>
  <td><%= result["Title"] %></td>
  <td><%= result["Year"] %></td>
<% end %>
```

Answer: The response you get is not a hash; it is an array of hashes. Please try printing out responsebody[0].

```ruby
responsebody = [{}, {}]
```

If your response body looks like the code above, calling responsebody[0] will take out the first element in the array (that is, the response hash you need).
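The shape the answer describes can be reproduced with a hand-built sample (the data below is made up, not a real OMDb response):

```ruby
# An OMDb-style search result: an array of hashes, one hash per movie.
responses = [
  { "Title" => "Alien",  "Year" => "1979" },
  { "Title" => "Aliens", "Year" => "1986" }
]

# responses[0] is the first hash, so keys are read per element:
first_title = responses[0]["Title"]
titles = responses.map { |r| r["Title"] }

puts first_title       # Alien
puts titles.inspect    # ["Alien", "Aliens"]
```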
https://codedump.io/share/MTVIESi2RjxG/1/simple-api-call-to-parse-json
NPP_StreamAsFile

This page is part of the Gecko Plugin API Reference (Plug-in Side Plug-in API).

Summary: Provides a local file name for the data from a stream.

Syntax:

```c
#include <npapi.h>

void NPP_StreamAsFile(NPP instance, NPStream* stream, const char* fname);
```

Parameters: The function has the following parameters:

- instance: Pointer to the current plug-in instance.
- stream: Pointer to the current stream.
- fname: Pointer to the full path to a local file. If an error occurs while retrieving the data or writing the file, fname may be null.

Description: When the stream is complete, the browser calls NPP_StreamAsFile to provide the instance with a full path name for a local file for the stream. NPP_StreamAsFile is called for streams whose mode is set to NP_ASFILEONLY or NP_ASFILE only in a previous call to NPP_NewStream. If an error occurs while retrieving the data or writing the file, the file name (fname) is null.

See Also: NPP_NewStream, NPP_Write, NPP_WriteReady, NPStream, NPP
You can purchase the STA013 and CS4334 directly from PJRC. Aries makes adaptors that are ideal for connecting the STA013 in a prototype, whose DIP pins are easier to use. The adaptor shown in Figure 2 is available at DigiKey (P/N: A323-ND). This is actually a 32 pin adaptor, so 4 pins are unused with the 28 pin STA013. While an adaptor circuit board is easier, it is also possible to solder wires to all the pins. Here's a student project by Peter D'Antonio and Danial Riiff, with 28 wires soldered.

TODO: contact Peter and Daniel (anyone know their email address??) to ask for permission to display their soldered wires photo.

Update: Scott McNab wrote this nice page with instructions to make your own adaptors.

Connecting the STA013 output pins to TTL level input pins is simple, because the STA013 will output at least 85% of its power supply for a logic high, and a maximum of 0.4 volts for a logic low. Even if the STA013 is run at 2.7 volts, it will still output 2.3 volts, which satisfies the 2.0 volt input requirement of a TTL level input pin. The four CS4334 input signals are all TTL level.

The outputs from a 5 volt powered chip must not be directly connected to the STA013 input pins. The STA013 inputs are not 5 volt tolerant. Each pin has a pair of input protection diodes, and these diodes may conduct a small current. The simplest and easiest way to interface a 5 volt output to the STA013 input pins is with a series resistor, which will limit the current when the 5 volt output is high. There is some input capacitance (3.5 pF) on the input pins, so adding a small capacitor in parallel with the resistor will allow the rapid edge to properly drive the input pin. The values of the resistor and capacitor are not critical; Figure 4 shows 4.7K and 47pF, which limits the steady-state current to less than 1/2 mA, and has been tested. The capacitor is not really necessary if the pin doesn't need to be driven rapidly, such as the RESET pin.
Connecting the SDA signal from a 5 volt microcontroller to the STA013 is a bit more complex, though a simple circuit can often be used. The SDA line is bidirectional: either device can pull down, and a resistor provides the pull-up. Most microcontrollers have TTL thresholds, so a pullup to the 3 volt supply will easily satisfy the 2.0 volt input high requirement. If the microcontroller will never drive the line high (only pull low or tri-state), then no other parts are probably required. This may be the case if the microcontroller has a dedicated I²C pin. In most cases, the microcontroller can drive the line high, and an additional resistor should be added to prevent damage to the STA013. This current limiting resistor should be small, as it will form a resistor divider with the pullup when the microcontroller drives low. If the pin can be put in a tri-state mode, the firmware should be written to use the tri-state for a logic high, or at least tri-state the SDA signal when not transmitting data, to save power.

Philips suggests a somewhat more complex 5V to 3V connection involving a pair of mosfet transistors. Their solution is a much better approach for a general purpose bus. The pair of resistors shown here should only be used with a single microcontroller and the STA013. If you have an N-channel mosfet transistor, the Philips circuit would be better. This is only necessary on the SDA line, since the microcontroller will always generate the clock, so the circuit in Figure 4 will work well for the SCL signal.

The resistors connected to PVDD and PVSS are 4.7 Ohm (not 4.7k). Any value near a few ohms is ok, if 4.7 ohms is not available. The capacitors connected to the crystal may need to be changed to match the resonance of the crystal. Values are usually in the range of 15 to 47 pF, and many crystals are not very sensitive to the exact capacitor values used.
The inductor shown on the CS4334 power supply is intended to block noise from digital circuitry that's also connected to that line. The value of the inductor is not critical, so 100 µH would probably work well. If no inductor is available, an alternative is to create a separate ("clean") supply for the CS4334 with an LM7805 or similar linear-mode voltage regulator.

Here's a table listing how each pin should be connected, which can be used as a "check off" list while wiring the chip on a breadboard.

First, a little background about I²C. I²C is a 2-wire protocol, created by Philips. One line acts as a clock (SCL) and the other carries data (SDA). The protocol defines the ability to have multiple masters initiate communication, but here we'll only worry about the simple and common case where the microcontroller is the only device that controls the bus, and all the other chips (like the STA013) respond to queries initiated by the microcontroller. There are many sites with detailed info about this protocol. There are other faster and somewhat more complex ways to communicate with the STA013, such as writing to multiple consecutive locations. These optimizations are usually of little benefit with the STA013.

The first step should be to check that the STA013 is actually present. Just read from address 0x01. If the read routine returns with an error, then no device sent an ACK bit and there is no chip installed. If there is an ACK, the data returned should always be 0xAC. Any other value means that the STA013 (or the code implementing the communication) isn't working properly. It is also possible to read the STA013 revision code at address 0, but ST gives no useful info about what this means, so it's probably not worth the trouble. The important step is to verify the ACK and 0xAC data at address 1, which means that the STA013 is present and communicating properly.
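The presence check just described might look like this in C. This is only a sketch: `sta013_read` is a hypothetical stand-in for your own bit-banged I²C read routine (stubbed here so the code compiles and runs on a PC); only the register number (1) and the expected 0xAC value come from the text above.

```c
#define STA013_IDENT_REG 0x01   /* IDENT register, per the text above */
#define STA013_IDENT_VAL 0xAC   /* expected identification value */

/* Stand-in for a real bit-banged I2C read. Returns the byte read, or -1
 * if the chip never sent an ACK. This stub pretends a healthy chip is
 * wired up so the sketch can run without hardware. */
static int sta013_read(unsigned char reg)
{
    return (reg == STA013_IDENT_REG) ? STA013_IDENT_VAL : 0x00;
}

/* Returns 1 if an STA013 is present and answering correctly, else 0. */
int sta013_present(void)
{
    int val = sta013_read(STA013_IDENT_REG);
    if (val < 0)
        return 0;   /* no ACK: nothing on the bus, check wiring and reset */
    return val == STA013_IDENT_VAL;
}
```

On real hardware the stub would be replaced by the actual I²C transaction, and a failed check should send you back to the wiring, the reset pulse, and the crystal.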
If you don't get the ACK, check the connections to the STA013, and verify that the STA013 got a good reset pulse and has its crystal oscillating.

The next step is to transmit the "p02_0609.bin" config file provided by ST. This file consists of 2007 address and data pairs. Sending the file is simple: just write a loop that passes each pair to the "sta013_write" function. Each ACK should be checked and the process aborted if any write doesn't receive all of its ACKs.

The exact purpose of the p02_0609.bin file is a bit of a mystery. ST calls it a "config" file on their website. In the app note, they describe it as a "Patch File". Most of the addresses used in the file are registers that are not defined in the data sheet. Probably only a few engineers at ST, who've certainly signed NDAs, really know what this file does. If you, dear reader, have any solid information (not guesswork or speculation), please contact me so I can update this section. Also, if you manage to get a STA013 chip running properly without using p02_0609.bin, please send me an email with details and the date code from your chip, so that I can update this page. Update: a couple of people have reported being able to play low-bitrate files without it.

Update, March 15, 2001: buried deep within the p02_0609.bin is a write to address 16. Address 16 is the soft reboot register. A brief delay is probably required after writing to this register. I've found that the vast majority of STA013 chips work without this delay, but some don't initialize properly and their data request pin stays stuck either high or low. The example code below does not have this delay, and it worked properly with the chip on the protoboard shown in the photo. Hundreds of these chips have worked without this delay on the MP3 player boards, but a few have not been so fortunate. Many people have reported success designing with the STA013 based on this page. This problem appears to be rare, but it does happen.
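Putting the last two points together, the config download loop with its abort-on-missing-ACK check, plus the 0.6 ms settle time after any write to the soft-reboot register, might be sketched as below. This is hedged code: `i2c_write` and `delay_us` are hypothetical placeholders for your own low-level routines (stubbed here so the sketch runs); the register number 16 and the 0.6 ms figure come from the text above.

```c
#include <stddef.h>

#define STA013_SOFT_RESET_REG 16

/* --- stubs standing in for the real hardware routines --- */
static int  writes_done;
static int  delays_done;
static int  i2c_write(unsigned char addr, unsigned char data)
{
    (void)addr; (void)data;
    writes_done++;
    return 0;               /* 0 = all ACKs received, -1 = missing ACK */
}
static void delay_us(unsigned long us) { (void)us; delays_done++; }

/* Write one register, inserting the settle delay after a soft reboot. */
int sta013_write(unsigned char addr, unsigned char data)
{
    int r = i2c_write(addr, data);
    if (r == 0 && addr == STA013_SOFT_RESET_REG)
        delay_us(600);      /* ~0.6 ms for the chip to come back */
    return r;
}

/* One address/data pair from the p02_0609 file. */
struct sta013_cfg { unsigned char addr, data; };

/* Send every pair, aborting at the first write that loses an ACK. */
int sta013_send_config(const struct sta013_cfg *cfg, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (sta013_write(cfg[i].addr, cfg[i].data) != 0)
            return -1;
    return 0;
}
```

On an 8051 the table of pairs would typically live in code space or be streamed from the serial port, but the control flow is the same.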
It may be more prone to happen with a really fast microcontroller, like the Atmel AVR or some 16 bit chips. Code with this delay will appear in the MP3 player's 0.6.2 firmware release, inside the "drivers.asm" file. If you're having a similar problem before the 0.6.2 release, contact me for the code. It's just a simple 0.6 ms delay after sta013_write if the address was 16. Also in 0.6.2 is some code which sends a tiny 1/2 second MP3 of silence to the STA013 and measures how long it takes for the STA013 to consume it. This appears to be the only effective way to detect if a STA013 has been misconfigured, as it will properly respond to I²C queries while it's in this funny state.

The p02_0609.bin file is actually a simple ASCII text file. Why ST used a ".bin" extension is also a mystery. Each line contains two numbers; the first is the address and the second is the data to write to the chip. Here's the first several lines of the file:

    58 1
    42 4
    40 0
    41 0
    32 0
    33 0
    34 0

The example code near the end of this page contains a perl script to convert this data format into intel-hex, which can be programmed into an EEPROM or downloaded to a monitor program. The perl script also adds a length count and simple checksum, which can be used to check the data before sending it to the chip.

Once the config file is sent, the board specific settings must be sent. Here are the settings used in this example, for a 14.7456 MHz crystal and the CS4334 DAC:

It is possible to use the STA013 with other crystals. For example, to use a 10 MHz crystal, use the values in Table 5 of the datasheet, and to use a 14.31818 MHz crystal, use Table 7. ST also provides a ConfigPLL v1.0 utility to compute these parameters for other crystals. It's interesting that register 7 is shown in ConfigPLL, but not in the tables of the datasheet. ConfigPLL seems to always set it to zero. Likewise, the tables list register 5, but ConfigPLL does not.
All the tables show register 5 at 161, so perhaps that's the right setting for any crystal. One thing is certain: the settings shown here (same as in the example code) have been tested and are known to work with the 14.7456 MHz crystal and CS4334 DAC, so using them with 14.7456 MHz and a CS4334 is a very safe bet!

The easiest way to send these settings is to just add them onto the end of the p02_0609.bin file, and let the loop that sends that file do the work. The last several lines of that file are actually writes to some of these registers, so they can be removed and replaced with the configuration you need. The copy of p02_0609.bin in the download section has these settings added, so you can use it with a CS4334 and 14.7456 MHz crystal without any changes.

After you've checked the 0xAC, and sent the config file (with config data appended), then all that's left is telling the chip to start running. This could perhaps be added to the config file as well, though the example shown below does it separately. Just write a 1 to address 114 to tell the STA013 to start running, and then a 1 to address 19 to tell it to start playing. The DATA_REQ pin should be asserted, and when you start feeding valid MP3 data in, the chip will detect the MP3 sample rate, create the correct clocks for the DAC, and start pushing data into it.

The STA013 is designed so that your project never needs to "know" anything about the MP3 file. When the chip requests more data, you need to respond in a timely manner, and feed the data in as rapidly as possible until the STA013 de-asserts its request signal. The chip will make requests more frequently at faster bitrates. The combination of your response time and data rate, if they are slow, may limit the maximum bitrate that can be played.
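The start-up sequence (a 1 to register 114, then a 1 to register 19) and a bounded wait for DATA_REQ can be sketched as follows. This is hedged code: `sta013_write` and `data_req_pin` are hypothetical stand-ins for your register-write routine and port read (stubbed so the sketch runs), and the timeout constant is an arbitrary illustrative value. A real player should treat a timeout here as the "stuck DATA_REQ" misconfiguration symptom mentioned earlier, rather than spinning forever.

```c
#define STA013_RUN_REG   114
#define STA013_PLAY_REG   19
#define DATA_REQ_TIMEOUT 100000UL   /* arbitrary poll limit for this sketch */

/* --- stubs for the real hardware access --- */
static int sta013_write(unsigned char addr, unsigned char data)
{
    (void)addr; (void)data;
    return 0;                       /* pretend every write is ACKed */
}
static unsigned long polls;
static int data_req_pin(void)       /* pretend DATA_REQ rises on poll 5 */
{
    return ++polls >= 5;
}

/* Tell the chip to start running, then start playing. */
int sta013_start(void)
{
    if (sta013_write(STA013_RUN_REG, 1) != 0)
        return -1;
    return sta013_write(STA013_PLAY_REG, 1);
}

/* Wait for DATA_REQ with a poll limit instead of an infinite loop.
 * Returns 0 when the chip requests data, -1 on timeout. */
int sta013_wait_data_req(void)
{
    unsigned long tries;
    for (tries = 0; tries < DATA_REQ_TIMEOUT; tries++)
        if (data_req_pin())
            return 0;
    return -1;
}
```

Once `sta013_wait_data_req` returns 0, the feed loop shown later in this page takes over and shifts bytes in until DATA_REQ drops.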
The good news is that most VBR encodings only use fast bitrates (256 and 320 kbps) for brief times, so in many cases the STA013's input buffering will allow a design that couldn't supply 256 kbps constant bit rate to play variable bit rate files which contain brief bursts at these high speeds.

The STA013 will ignore non-MP3 data (producing no sound), so it's safe to feed entire MP3 files, which may contain ID3 tags, into the STA013 without concern for which part of the file is MP3 bitstream and what portion is the ID3 tag. The chip will just keep requesting more data until it sees valid MP3 information. It's also safe to feed damaged MP3 streams into the STA013, for example truncated files due to partial download. Most damaged MP3 data is completely ignored. Some damaged files will produce a brief chirp sound (usually depending on what follows immediately after the damaged part), but the STA013 will rapidly sync back to good data that's sent after a corrupted part.

One final thing to keep in mind is that the STA013 doesn't "know" about files. You may decide to issue I²C commands between each file, but that is not necessary. A simple player may just send the contents of file after file to the chip, and the STA013 will automatically adjust its settings when it sees data that specifies a different bitrate, sample frequency, mono or stereo mode, etc.

This section was contributed by Ed Schlunder (zilym@NOSPAM.yahoo.com). Ed's examples are written in C. I'll add a comment or two in this smaller green text.

There are three signal lines used for sending the MP3 data to the STA013 decoder: SDI (DATA), SCKR (CLOCK), and DATA_REQ. The STA013 asserts the DATA_REQ line when more MP3 data is needed from the host controller. The host controller then feeds MP3 data, most significant bit first, on the SDI line. Each bit placed on SDI is clocked, by the controller, into the STA013 with the SCKR signal line.
Some example code for sending MP3 data follows:

    char mp3data[];          /* imagine that this buffer is already full of data */
    unsigned long mp3ptr = 0;

    for(;;) {
        while(DATA_REQ) {
            /* STA013 is requesting more data, send a byte */
            for(short j = 7; j >= 0; j--) {
                SCKR = 0;
                SDI = (mp3data[mp3ptr] >> j) & 1;
                SCKR = 1;
            }
            mp3ptr++;
        }
        /* do anything you want here... but don't take too long at it :-) */
    }

The example code near the end of this page has a very similar function named "xmit_mp3_clip" implemented in 8051 assembly. It contains 8 copies instead of a loop, which nearly doubles the speed. Also keep in mind that if something goes wrong with the STA013 and it doesn't assert DATA_REQ, this will turn into an infinite loop. For a "real" player design, some sort of timeout should be added.

The above code assumes that whenever the STA013 asserts DATA_REQ, it can accept at least eight bits of data. The STA013 datasheet never explicitly says this is the case (AFAIK); however, in practice it does seem to hold true. That is very good, because it lets us use the 8051 serial mode 0 to send data more efficiently... In 8051 serial mode 0, the RxD pin shifts data out of the serial buffer (special function register SBUF) as the TxD pin provides clocking for each bit transmitted. If you hooked the STA013 SDI pin to the 8051 RxD pin and STA013 SCKR to 8051 TxD, you could in theory now use the 8051's UART hardware to do the bit shifting that we had to do manually in the code example before. However, there's a catch: the 8051 UART shifts out data least significant bit first, which is the opposite of what the STA013 is expecting.
Therefore, before writing a byte of MP3 data to the SBUF register, we must reverse the bits so that the STA013 receives the bits in the order it expects:

    /* swaps bits MSB<->LSB to put bits in correct order for STA013 SDI */
    unsigned char swapbits(unsigned char i)
    {
        return i >> 7 | (i & 0x40) >> 5 | (i & 0x20) >> 3 | (i & 0x10) >> 1 |
               (i & 0x08) << 1 | (i & 0x04) << 3 | (i & 0x02) << 5 | i << 7;
    }

    char mp3data[];          /* imagine that this buffer is already full of data */
    unsigned long mp3ptr = 0;

    SCON = 0;                /* go into serial mode 0 */
    for(;;) {
        while(DATA_REQ && TI) {
            /* STA013 is requesting more data and our serial port isn't busy */
            TI = 0;          /* clear transmit flag -- serial port busy */
            SBUF = swapbits(mp3data[mp3ptr++]);
        }
        /* do anything you want here, but don't take too long... ;-) */
    }

A faster bit-twiddling version of swapbits:

    unsigned char swapbits(unsigned char i)
    {
        i = ((i & 0xAA) >> 1) | ((i & 0x55) << 1);  /* 67452301 */
        i = ((i & 0xCC) >> 2) | ((i & 0x33) << 2);  /* 45670123 */
        return (i >> 4) | (i << 4);                 /* 01234567 */
    }

Faster still is a lookup table. The lookup table should be implemented in code-space (MOVC). The 8051 serial mode 0 is so fast that the checks on the TI bit are unnecessary... you can't get the data and do a fast table lookup bit swap before all of the previous bits are sent. My old player design used this approach, and you can look at its code, including the "flip_bits" lookup table, to save you the effort of regenerating this table. Look at the code in "idle_loop" for a timing analysis; the best case is 13 cycles, and 8051 mode 0 takes 9 cycles to transmit. A microcontroller with a dedicated SPI port would be even better, because the SPI interface will send data in the proper bit order by default. So, use SPI instead if you have the option.

End of Ed's section. Thanks Ed, that really helped.

The fastest and most efficient method, but also by far the most difficult to design, is to build dedicated hardware to send large blocks of data to the STA013 automatically, without any CPU overhead.
The new MP3 player does this using a Xilinx FPGA chip. The basic idea is that the CPU writes the address and length of a block of MP3 data that resides in memory, and then writes to a "go" bit. The hardware automatically detects when the CPU doesn't need to access the memory, and reads the MP3 data into a shift register, which sends the bits to the STA013 as it requests them. The bits are shifted at about 7 Mbit/sec (very fast, but the STA013 can handle 20 Mbit/sec). Each time the shift register is empty, the hardware makes another read from the next memory location, as soon as there is an opportunity to access the memory when the CPU doesn't need it. When all of that block has been sent, the hardware gives the CPU an interrupt, so that the CPU can give it the next block's addr/len to transmit. For some more details about how this is done, take a look at the memory map page for the new player design. This design allows even a slow (low power) CPU to easily play the fastest MP3 streams (320 kbps), but it is a very difficult approach.

While it is possible to design a very high performance data transfer, the example code below uses the simplest possible method, directly manipulating the port pins. The intention of the example code is to be simple and easy to understand.

The left photo shows a burst of transfer every 26 ms, which suggests that an average of 1664 bits are transferred into the buffer each time, to maintain 64 kbps throughput. Our Sony Mavica camera uses approx 15 ms shutter speed in this lighting, which made the left photo very difficult... it's either a way-too-bright spot a few divisions long, or a weak image if the shutter happens to open between traces (as in the one I finally ended up using). That's why you see a weak trace, when a slow sweep is usually a bright and annoyingly flashing image. The center photo shows that the bursts are about 345 µs long. The right photo shows the effective bit rate.
Bursts of 16 bits are sent at 7.37 MHz, with a delay between each 16 bits while the control state machine arbitrates for access and then reads the DRAM. 32 bits are sent in approx 6.4 µs, for a bit rate of 5 Mbit/sec, which suggests about 1725 bits are loaded into the STA013's input buffer during the 345 µs burst. This second method probably over-estimates the buffer size slightly, since the control state machine must sometimes wait during the bus arbitration (the slight "ghost" image in the right photo), causing the effective bit rate to be somewhat less than 5 Mbit/sec. Still, 1664 and 1725 bits is a 3.7% error (if you assume 1664 is closer to the truth)... not bad for counting divisions on the 'scope screen! These waveforms only measure the input buffer between the point where the STA013 asserts and releases DATA_REQ, which says that the buffer is at least 206 bytes.

TODO: write some descriptive text... the pictures show what it looks like and the screen dumps show how to run with the PM2 monitor. See comments in the code for making a stand-alone version or using with a different development system.

Here's a typical session using the example code. The code is written to run with the PM2 monitor program. It can be easily modified to run under a different development system, or stand-alone. See the comments in the source code for details. The example uses the 8051 development board. Here, the board is booted and the memory and flash are cleared. It's not really necessary to clear the memory, but if anything is residing in the flash memory (other than MP3 data), it must be erased.

    Welcome to PAULMON2 v2.1, by Paul Stoffregen
    See PAULMON2.DOC, PAULMON2.EQU and PAULMON2.HDR for more information.

    Program Name               Location  Type
      List                     1000      External command
      Single-Step              1400      External command
      Memory Editor (VT100)    1800      External command
      STA013 Example           2000      Program

    PAULMON2 Loc:2000 > Clear memory
    First Location: 2000
    Last Location: 3FFF
    Are you sure?
    PAULMON2 Loc:2000 > Erase flash rom
    Erase flash rom, Are you sure?
    Flash rom erased

Three files must be downloaded. The "sta013.hex" was sent first in this example (794 bytes of data). Then the "p02_0609.hex" file was transmitted, and finally one of the MP3 test clips was sent. In this example, it was "faith_hill_clip.mp3", which is several seconds from a well known song. The example code uses the following memory:

Any MP3 data could be downloaded, but the four test clips in the download section are encoded at very low bit rates, so that a 32 kbyte clip can play for several seconds. At the usual 128 kbit/sec MP3 bitrate, 32 kbytes will play for only 2 seconds.

    PAULMON2 Loc:2000 > ^...................................................
    Download completed
    Summary:
      51 lines received
      794 bytes received
      794 bytes written
    No errors detected

    PAULMON2 Loc:2000 > ^...........................................................
    ....................................................................
    Download completed
    Summary:
      127 lines received
      4026 bytes received
      4026 bytes written
    No errors detected

    PAULMON2 Loc:2000 > ^...........................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    ................................................................................
    .....
    Download completed
    Summary:
      1024 lines received
      32735 bytes received
      32735 bytes written
    No errors detected

Once all three files are loaded into memory, just run the program. The example code does some checks to make sure that a valid STA013 config file is present. It does not check the validity of the MP3 data... whatever is in the 0x8000 to 0xFFFF memory range is sent to the STA013 chip.

    PAULMON2 Loc:2000 > Run program
    A - STA013 Example
    run which program(A-A), or ESC to quit: A

    STA013 MP3 Decoder Test Program
    Reseting STA013
    Reading IDENT register: AC Ok
    Checking p02_0609 (config data) at 0x3000 Ok
    Downloading Config Data to STA013 chip: Ok
    Sending RUN Command: Ok
    Sending PLAY Command: Ok
    Playing MP3 Clip From Memory: Ok
    Press any key to restart:

Testing and troubleshooting custom-built circuitry can be difficult and frustrating. The idea behind these MP3 clips is to provide a ready-to-use test for the STA013 and this example code, already converted to intel-hex format for loading into the flash. If data is sent to the STA013 that's been incorrectly translated, it will detect that it's not a valid MP3 stream and do nothing. Having known-to-work MP3 test clips in the correct format eliminates (or at least reduces the likelihood of) one more potential source of trouble. All four of these intel-hex format clips were tested on the example hardware shown. These four MP3 clips are short sections of well-known copyrighted works.
I believe that providing these clips is a fair use of the copyrighted material, because the purpose and character of the use is educational, the amount and substantiality of the portion used in relation to the copyrighted work as a whole is a 5 to 8 second clip of a 3-5 minute song, and the effect of the use upon the potential market for or the value of the copyrighted work is effectively zero.

TODO: add a wrap-up section, with some suggestions about what to do next, links to new and old player.
Title: Hurd Debian-Installer (draft)
Student: Jeremie Koenig <jk@jk.fr.eu.org>, jkoenig@{freenode,oftc}
Mentor: Samuel Thibault (youpi) <sthibault@debian.org>

Abstract: Debian GNU/Hurd is currently installed either using outdated CD images, or from an existing Debian GNU/Linux system using the 'crosshurd' package. The goal of this project is to modify debian-installer and the related packages to produce working Debian GNU/Hurd installation images.

Update 2010-05-24: The proposal has been accepted by Debian. I will keep track of my progress on the SummerOfCode2010/HurdDebianInstaller/JeremieKoenig/Roadmap page.

Background

I'm a 25-year-old Computer Science student in Strasbourg (France), currently finishing my Licence (more or less equivalent to a bachelor's degree I think), after some years of interruption. In particular I worked for some time as a developer on a Debian-based product (an "embedded" backup server with a web interface). This involved creating custom packages and modifying existing ones, automating the system's administration and installation, as well as some C, Perl, PHP and C# programming. Though the work in itself was fun and went rather smoothly, it was not manageable as a part-time job so I had to quit to resume my studies.

I have never been a regular Debian contributor, but I have used it for quite some time (see the list of bugs I reported) and am familiar with the developer tools. I have also done some debian-installer work in the past, namely helping with "oldworld" Macintosh support. I had to tweak the d-i image build scripts, package the miBoot bootloader and create the rsrce package (used to configure miBoot). Unfortunately, there were legal problems with miBoot and the Macintosh boot sector which the floppy used, so my work could not be completely integrated.

As for the Hurd, I am less familiar with its guts, but I am enthusiastic about its overall design and I had tried it in the past.
Preparing this application also involved some looking under the hood and tinkering with it, so I am confident that I would be able to acquire the necessary knowledge without too much of a hassle.

Project overview

Hurd is a micro-kernel based system: most of the code usually found in the kernel of Unix-like systems is implemented in userspace, as a series of cooperating server processes. The filesystem is used as a namespace for the services that the system provides (the servers are said to translate parts of the filesystem), and the principle of capabilities is used to manage access to resources. This design offers great flexibility, as even a normal user can run their own servers, in addition to or as a replacement for the system ones.

I would work on a native debian-installer port for Hurd. Colin Watson and Samuel Thibault have already done some work in this direction, but much remains. Besides the installer itself, the project would involve at least BusyBox, GNU Mach and Hurd. Furthermore, I intend to coordinate with the kFreeBSD people and come up with generic solutions for non-Linux systems whenever possible.

Benefits to Debian

I believe that Debian, the universal operating system, benefits from non-Linux ports in general:
- Its users get more choice and more functionality (FreeBSD jails, KAME -- or the crucial random .signature translator)
- The free software community gets the whole Debian archive built for alternative kernels, which is great to ensure that those packages are portable and that they target POSIX rather than Linux.

More specifically, a Hurd debian-installer would be an important step towards making hurd-i386 a viable Debian architecture (though, admittedly, lots of work would remain). Making debian-installer less tied to a particular kernel would benefit non-Linux Debian architectures in general.

Deliverables

My work would primarily consist of changes to existing software.
I would submit them as I go, as patches against Debian packages and upstream where appropriate. I would also keep track of them, so I can submit them to Google after GSoC is over.

Project details

Current status

Samuel Thibault has some packages and patches which can be used to build "monolithic" debian-installer images, which at some point he could boot into a shell. I have fixed a bug in ext2fs which prevented them from booting, but as of 2010-04-06 another problem with the bootstrap process remains (namely, it hangs after the exec server is started by ext2fs.)

Busybox

I have started working on "clean" patches for porting busybox. My first patches have been accepted upstream with some changes, and the source from the upstream git repository builds on Hurd with a minimal configuration. However, more applets will probably be needed as I progress with the installer. In any case, some applets will need to be disabled in the Debian package on Hurd and kFreeBSD. This could be done either by using system-specific configuration files in the Debian package, or by modifying the build system of busybox: the current target system would be detected, and the irrelevant applets would be disabled through KConfig dependencies.

Genext2fs

The ext2fs translator only supports filesystems with 4KB blocks right now. On the other hand, genext2fs uses a fixed block size of 1024. The problem could be attacked on either side, but the genext2fs route seems much simpler. Samuel Thibault has a patch which changes the hardwired #define from 1024 to 4096, and sets the superblock's "creator OS" field to a system-specific value. In the longer term, however, these would need to be turned into command-line options.

GNU Mach

Mach will need to support using some of its multiboot modules as initial ramdisks. I have had an occasion to wander around the code for iopl, and the "driver" part for this does not seem too hard.
The user interface might be fitted into the boot script language; for instance, a $(use-as-initrd) statement on a module's command line would trigger its usage as a ramdisk device. The decompression could be done either by GRUB or by libstore. In either case the ramdisk could be used read-write.

Hurd

Crosshurd currently does some magic in the "native-install" script to set up translators on a newly installed system. This magic would need to be moved into the hurd package's postinst script so that it happens during d-i installations. Tuning hurd-udeb and further changes might be necessary as well. For instance, there are currently workarounds for the installer in the Debian package's libstore bootstrap code and I strongly suspect that they will need some changes or cleaning up.

Network configuration

Apparently, netcfg has had Hurd support for some time (r548, by David Whedon, starts it; the more recent r62649 by Samuel Thibault adds some fixes). However, I'm not sure that DHCP is supported, which it also isn't in a straightforward way on an installed Hurd system. So the situation would need to be reviewed, and support for configuring DHCP both during the install and on the target system would need to be implemented. For the record, Samuel Thibault has submitted a patch which adds Hurd support to dhcp3.

Partitioning

Not every filesystem can be used for a Hurd installation: support for passive translators is necessary. As a consequence, we have to use ext2 as the root filesystem, with special options at mkfs time as mentioned above for genext2fs. More generally, Hurd only has a subset of the filesystems which Linux understands. Partman would have to be modified to stay within these constraints on Hurd installations.

Bootloader installation

Both os-prober and grub-installer are Linux-specific for now, though there is some support for Hurd as a foreign OS.
The Linux-specific parts would have to be moved into "hook" shell functions, and each system would provide a shell script snippet implementing them. debian/rules could then choose the relevant snippet at build time.

Multilingual console

Mach has a very basic console which could be used in the beginning. Among other interesting features, the Hurd console has some UTF-8 support, and its VGA driver can use any 512 glyphs at once in a dynamic fashion. So while I don't expect it to handle bidi or large glyphs, I believe it could be used for a fair number of languages nonetheless. The Hurd console uses BDF fonts, so a udeb similar to bterm-unifont would have to be created. Also, the keyboard driver does not support any kind of keymaps, so this would need to be implemented and integrated with kbd-chooser and /etc/default/keyboard-configuration.

Installation media

Right now, only the "netboot" and "monolithic" image types have some Hurd-specific configuration, and only "monolithic" can be built (due to missing udeb deps on "netboot", IIRC). Ultimately, more image types should be supported, but this would probably require some shuffling of the pkg-lists to separate some system-specific parts.

Graphical installer

Though it's somewhat sluggish, Xorg runs on Hurd, and porting the graphical installer would be great for languages not supported by the Hurd console.

... and many other cool surprises!

I expect to be discovering much more work as I progress. But if we could know everything in advance, there would be no fun!

Project schedule

With this in mind, here is my attempt at a schedule. I'm confident that I would be able to achieve the first half of it even if an unexpected number of unexpected problems show up...

Travel

I expect to be able (and would be more than willing) to travel to Debconf 10. However, I'm on a tight budget at the moment, so I would need sponsoring for the travel and housing costs.
Other plans

I don't have any other plans for this summer, and I expect to be able to work on this project full-time for the complete duration of the GSoC program. My exams should be finished by May 21, so I don't expect them to be a problem either.

After GSoC

Obviously, even if the project is a success, lots of work would remain on Debian GNU/Hurd. I'm quite fond of Debian, both as an operating system and as an organization, and I have had other Debian-related projects (see my outdated website), so I hope that participating in GSoC would be the foot in the door of my continued involvement
https://wiki.debian.org/SummerOfCode2010/HurdDebianInstaller/JeremieKoenig
CPANPLUS - Command-line access to the CPAN interface

    cpanp
    cpanp -i Some::Module

    perl -MCPANPLUS -eshell
    perl -MCPANPLUS -e'fetch Some::Module'

    ### for programmatic interfacing, see below ###

The CPANPLUS library is an API to the CPAN mirrors and a collection of interactive shells, commandline programs, daemons, etc, that use this API. This documentation will discuss all of these briefly and direct you to the appropriate tool to use for the job at hand.

The CPANPLUS library comes with several command line tools:

cpanp
This is the commandline tool to start the default interactive shell (see SHELLS below), or to do one-off commands. See cpanp -h for details.

cpan2dist.pl
This is a commandline tool to convert any distribution from CPAN into a package in the format of your choice, for example .deb or FreeBSD ports. See cpan2dist.pl -h for details.

cpanpd.pl
This is a daemon that acts as a remote backend to your default shell. This allows you to administrate multiple perl installations on multiple machines using only one frontend. See cpanpd.pl -h for details.

Interactive shells are there for when you want to do multiple queries, browse the CPAN mirrors, consult a distribution's README, etc. The CPANPLUS library comes with a variety of possible shells. You can install third party shells from the CPAN mirrors if the default one is not to your liking.

CPANPLUS::Shell::Default
This is the standard shell shipped with CPANPLUS. The commands cpanp and perl -MCPANPLUS -eshell should fire it up for you. Type h at the prompt to see how to use it.

CPANPLUS::Shell::Classic
This is the emulation shell that looks and feels just like the old CPAN.pm shell.

All the above tools are written using the CPANPLUS API. If you have any needs that aren't already covered by the above tools, you might consider writing your own. To do this, use the CPANPLUS::Backend module. It implements the full CPANPLUS API. Consult the CPANPLUS::Backend documentation on how to use it.

There are various plugins available for CPANPLUS.
Below is a short listing of just a few of these plugins.

As already available in the 0.04x series, CPANPLUS provides various shells (as described in the SHELL section above). There are also 3rd party shells you might get from a cpan mirror near you, such as:

- A shell using libcurses
- A shell using the graphical toolkit Tk

As already available in the 0.04x series, CPANPLUS can provide a hook to install modules via the package manager of your choice. Look in the CPANPLUS::Dist:: namespace on cpan to see what's available. Installing such a plugin will allow you to create packages of that type using the cpan2dist program provided with CPANPLUS, or by saying, to create for example debian distributions:

    cpanp -i Acme::Bleach --format=debian

There are a few package manager plugins available and/or planned already; they include, but are not limited to:

- Allows you to create packages for FreeBSD ports.
- Allows you to create .deb packages for Debian linux.
- Allows you to create packages for MandrakeLinux.
- Allows you to create packages in the PPM format, commonly used by ActiveState Perl.

New in the 0.05x series is the possibility of scripting the default shell. This can be done by using its dispatch_on_input method. See the CPANPLUS::Shell::Default manpage for details on that method. Also, soon it will be possible to have a .rc file for the default shell, making aliases for all your commonly used functions. For example, you could alias 'd' to do this:

    d --fetchdir=/my/downloads

or you could make the re-reading of your sourcefiles force a refetch of those files at all times:

    x --update_source
You may also optionally specify another shell to use for this invocation (which is a good way to test other shells):

    perl -MCPANPLUS -e 'shell Classic'

Shells are only designed to be used on the command-line; use of shells for scripting is discouraged and completely unsupported.

For frequently asked questions and answers, please consult the CPANPLUS::FAQ manual.

This module by Jos Boumans <kane. Please see the AUTHORS file in the CPANPLUS distribution for a list of Credits and Contributors.

CPANPLUS::Backend, CPANPLUS::Shell::Default, CPANPLUS::FAQ, cpanp, cpan2dist.pl
http://search.cpan.org/~kane/CPANPLUS-0.059_01/lib/CPANPLUS.pm
Build startup_cortexA15MPcore using GNU toolchain. URGENT.
Mazen Ezzeddine Jul 3, 2014 5:33 PM

Dear all, I am trying to build the example startup_cortexA15MPcore (provided with DS-5) using the GNU toolchain. The example is originally designed to be built with the ARM standard tools, and the following variables are hence defined:

    CC=armcc
    AS=armasm
    LD=armlink
    AR=armar
    FE=fromelf

With what GNU tools should I substitute CC, AS, LD, AR, FE so that the example can be built using the GNU toolchain? Does the below work:

    CC=arm-none-eabi-gcc
    AS=arm-none-eabi-as
    LD=arm-none-eabi-ld

How about AR and FE? Please help. Thank you very much.

Re: Build startup_cortexA15MPcore using GNU toolchain. URGENT.
mweidmann Jul 3, 2014 4:27 PM (in response to Mazen Ezzeddine)

AR refers to "armar", which is the ARM utility for creating libraries. FE refers to "fromelf", which is a tool for inspecting ELF objects and images, and for converting them to other formats. I haven't got a copy of the example to hand, but I doubt it actually needs either. Switching to GCC/GNU tools, I foresee two problems. GAS (the GNU assembler) and armasm (ARM's assembler) use different syntax, which makes the .s files for one incompatible with the other. Meaning you'll have to port it. It's not too difficult (from limited past experience) but can be fiddly. Helps if you know one of the syntaxes well to start with. The other potential problem is libraries. The calls you make to set up and initialise the C library are different.

Re: Build startup_cortexA15MPcore using GNU toolchain. URGENT.
Mazen Ezzeddine Jul 4, 2014 2:04 PM (in response to mweidmann)

Dear Martin, Many thanks for your helpful reply. Great, I believe I can port the ARM .as files into the GNU .as files, and I am aware of their major syntax differences. Could you please provide more information about the issue of libraries, or route me to a helpful related document? Thank you very much.
Re: Re: Build startup_cortexA15MPcore using GNU toolchain. URGENT.
jensbauer Aug 20, 2014 11:48 PM (in response to Mazen Ezzeddine)

If you're lucky, you could use the C Pre-Processor in GNU as a wrapper around your ARM assembly files. Say you have foo.as and you want to assemble it using GAS; then wrap the C Pre-Processor around GAS for files ending in ".S". To wrap GAS in the C Pre-Processor, use GCC with the '-x assembler-with-cpp' switch:

    arm-none-eabi-gcc <some parameters> -x assembler-with-cpp <other parameters>

Make yourself an .S file like the following:

    #define KEYWORD replacement
    ...
    ...
    #define KEYWORDn replacement_n
    ...

After all your #define macros, you can include your ARM-assembly file:

    .include "foo.as"

... or even ...

    #include "foo.as"

Of course, it will not handle all cases, so you may experience the need for modifying the .as files, so they will become "portable". Also have a look at GAS's macros. They're quite powerful, and may be able to override keywords in your .as file too.

    .macro macroname,param1,param2
    .ifnb \param1
    ...
    .else
    .endif
    .endm
https://community.arm.com/thread/6201
Is there a way to get the list of axes, or enumerate them somehow, defined in the InputManager? I know I can get them if I know their name. I want to get all the defined ones, to find their names and such.

What would be the point? Axes can't be dynamically added, so just keep a list of it somewhere in your application.

Answer by Sarkahn · Jul 21, 2015 at 07:58 AM

Yup.

Edit: As pointed out by Bunny in the comments - this will only work in the editor and relies on reflection - it could easily break in future versions of Unity if they randomly decide to rename or restructure their input classes/data. However there is no real alternative with the existing (as of 2017.1) input system.

    using UnityEngine;
    using System.Collections;
    using UnityEditor;

    public class ReadInputManager
    {
        public static void ReadAxes()
        {
            var inputManager = AssetDatabase.LoadAllAssetsAtPath("ProjectSettings/InputManager.asset")[0];
            var obj = new SerializedObject(inputManager);
            var axisArray = obj.FindProperty("m_Axes");

            for (int i = 0; i < axisArray.arraySize; ++i)
            {
                var axis = axisArray.GetArrayElementAtIndex(i);

                var name = axis.FindPropertyRelative("m_Name").stringValue;
                var axisVal = axis.FindPropertyRelative("axis").intValue;
                var inputType = (InputType)axis.FindPropertyRelative("type").intValue;

                Debug.Log(name);
                Debug.Log(axisVal);
                Debug.Log(inputType);
            }
        }

        public enum InputType
        {
            KeyOrMouseButton,
            MouseMovement,
            JoystickAxis,
        };

        [MenuItem("Assets/ReadInputManager")]
        public static void DoRead()
        {
            ReadAxes();
        }
    }

If you have a look at the Input Manager the rest should be pretty easy to figure out.

Great post! It should be noted that this only works in the editor, not in a build.

Thank you so much!! I always wanted to find a way to get a list of them, this is perfect for what I needed it for!

The answer is actually "no". This solution only works inside the editor and not in a build, as it uses classes from the UnityEditor namespace. So this can't be used in your actual game, only inside editor scripts. Of course you can write an editor script which you would invoke once to store that information in an asset so it can be used at runtime.
Additionally this "solution" uses reflection to access built-in classes and internal datafields. Those are not documented, and therefore the internal structure can change without any warning between Unity releases, which could break this solution. Don't get me wrong, there's no real alternative out there since scripting support for the InputManager is long overdue. However it should be pointed out that this is actually a hacky workaround solution.

ps: they are currently working on a new, better input system. However, until it's ready to be included in a stable version, you have to use the old one, at least in actual production builds.

True, I will edit my answer to clarify that.

Answer by RevolutionNow · Sep 07, 2014 at 11:39 AM

I'm not sure exactly how, but I think that you need to use GetComponent and for loops :) these two videos might help

Maybe I'm mistaken, but I don't think you really understand what the question is about. The InputManager is not a GameObject's Component. We are talking about the panel we get in the inspector when we select Edit -> Project Settings -> Input. And the question is what's the way to access its properties from a script, if there's one.

Answer by glantucan · Sep 01, 2014 at 06:34 AM

Hi, I'm struggling with the same issue here. It would be really handy to be able to get that list from a script.

Answer by guavaman · Oct 03, 2014 at 03:11 AM

Nope, there's no way to do it. Unity's InputManager is extremely limited and doesn't allow you to do anything at all during runtime except get input.

Update: Yes, it's possible in the editor as Sarkahn's post shows, but not in a game.
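One workaround that goes outside the editor entirely (my own sketch, not from the thread): when a project uses text serialization, ProjectSettings/InputManager.asset is a plain YAML text file, so a build script can scrape the axis names from it. A rough Python sketch against made-up sample data:

```python
import re

# Made-up excerpt of what a text-serialized InputManager.asset looks like.
SAMPLE_ASSET = """\
  m_Axes:
  - serializedVersion: 3
    m_Name: Horizontal
    type: 0
    axis: 0
  - serializedVersion: 3
    m_Name: Vertical
    type: 0
    axis: 1
"""

def axis_names(asset_text):
    # Every axis entry carries an "m_Name:" field; collect the values.
    return re.findall(r"m_Name:\s*(\S+)", asset_text)

print(axis_names(SAMPLE_ASSET))  # ['Horizontal', 'Vertical']
```

Unlike the reflection approach, this runs outside Unity and in builds, but it obviously breaks if the project uses binary serialization.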
https://answers.unity.com/questions/566736/get-list-of-axes.html
py.setup_int_pin_wake_up() has no effect when a button is pushed

I have installed a simple push button on my board connected to GND and P9 (G16) of the external IO header. My little test code looks like this:

    def goSleep():
        print("Do not wake up by accelerometer")
        py.setup_int_wake_up(False, False)
        print("Setting P9 to listen for wakeup on falling edge")
        py.setup_int_pin_wake_up(False)
        py.setup_sleep(60)
        print("Going into deepsleep, no backup power for GPS")
        py.go_to_sleep(False)

    try:
        py = Pytrack()
        time.sleep(10)
        print('goSleep launched')
        goSleep()
    except Exception as exc:
        sys.print_exception("Exception in system init: {0}".format(exc))

No matter whether I put py.setup_int_pin_wake_up(True) or py.setup_int_pin_wake_up(False), the button does not wake up the board. So, to be sure my button was working, I tested with:

    def pin_handler(arg):
        time.sleep(0.5)
        if int(p_in()) == 0:
            print("irq on %s" % (arg.id()))

    try:
        py = Pytrack()
        time.sleep(10)
        print('P9 button enabled')
        p_in = Pin('P9', mode=Pin.IN, pull=Pin.PULL_UP)
        p_in.callback(Pin.IRQ_FALLING, pin_handler)
    except Exception as exc:
        sys.print_exception("Exception in system init: {0}".format(exc))

which outputs as expected:

    irq on P9

I tried other combinations, like enabling the accelerometer (that works), but the button never wakes up the board.

My system: (sysname='WiPy', nodename='WiPy', release='1.17.3.b1', version='v1.8.6-849-83e2f7f on 2018-03-19', machine='WiPy with ESP32')

The Pytrack firmware is the latest pytrack_0.0.8.dfu.

What am I missing?

@fsergeys yes, we physically cut the header for that pin, so the only things that remain connected are the sensor and the PIC on the Pysense in our case. The LoPy was just bringing havoc when in deep sleep mode. You may want to experiment by using jumper cables between the WiPy and PyTrack at first, connecting everything but that pin, but I'm pretty sure you'll get better results that way.
Of course things would be very different with a WiPy 3, LoPy 4 or any of the other modules that implement deep sleep directly, rather than having to rely on an external board.

@jcaron So, do I understand well that you physically cut off the P9 pin on the LoPy? Then you got better behaviour on P9 (external IO) on the Pytrack when connected to the 3V3 sensor pin (pin 4 of the external IO)? Just want to be sure before I go down that route.

@fsergeys when the Pytrack enters deep sleep mode, it powers down the WiPy, which means P9 is actually more or less pulled to ground. You would need to have your button connected to the 3v3_sensors pin rather than GND to have a difference in the signal, though you probably need a resistor somewhere in there as well. Note that the state of P9 during sleep is a bit weird, and in my experience you end up with bizarre voltages, which depending on the sensor used can result in levels which are not really recognised as high or low when they should be. There are also parasite signals at the time you go to sleep or wake up. The solution we went for is to actually cut P9 on the LoPy. Note that it means you can only read the state of the button via the PyTrack's PIC once you do that, but it does work pretty well.
https://forum.pycom.io/topic/2951/py-setup_int_pin_wake_up-has-no-effect-when-a-button-is-pushed/
On Mar 18, 11:11 am, "R. David Murray" <rdmur... at bitdance.com> wrote:
> I don't have any wisdom on the metaclass/decorator stuff, but what
> about slightly reformulating the interface? Instead of having the
> programmer type, eg:
>
> @GET
> def foo(self): pass
>
> @POST
> def foo(self): pass
>
> have them type:
>
> def GET_foo(self): pass
> def POST_foo(self): pass
>
> It's even one less character of typing (the <cr> :)
>
> --
> R. David Murray

David, would you believe that I just posted about this very idea? It doesn't seem to have shown up yet, though.

This idea works from the perspective of being trivially easy to implement. I can easily write a metaclass that looks in the namespace for methods with names that start with GET_ or POST_, or I can override __getattribute__ to do the lookup that way. However, there are a couple of weaknesses that I see with this approach.

First, from a purely aesthetic point of view, prepending the desired verb to the method name just looks a bit ugly. Also, it makes it difficult to avoid duplicating code elegantly when one piece of logic should deal with more than one verb. So, if I want to have one method that works for GET and POST, I can do this:

    def GET_foo(self):
        # Do stuff for GET

    def POST_foo(self):
        return self.GET_foo()

but then I feel like I am cluttering my controller code with unneeded functions when writing

    @GET
    @POST
    def foo(self):
        # Blah blah blah

would be so much neater. Or, I could allow method signatures like:

    def GET_POST_foo(self):
        # Blah, blah, blah

But now my code to parse and manipulate or do lookups on method names is much more complicated. Also, it introduces difficult ambiguities in the case that an action of the controller has the same name as an HTTP verb. These ambiguities can be coded around, but doing so makes the code more and more crufty and prone to breakage. I don't want to build too much of a Rube Goldberg machine here, right?
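For what it's worth, the name-prefix lookup described above can be sketched in a few lines of plain Python (names like `dispatch` are mine, not from the thread):

```python
class Controller:
    """Toy dispatcher for the GET_foo / POST_foo naming convention."""

    def dispatch(self, verb, action):
        # Look up e.g. GET_foo on the instance; no metaclass is needed
        # for the simple case, though __getattribute__ would work too.
        handler = getattr(self, "%s_%s" % (verb, action), None)
        if handler is None:
            raise LookupError("no handler for %s %s" % (verb, action))
        return handler()

class MyController(Controller):
    def GET_foo(self):
        return "did GET stuff"

    def POST_foo(self):
        # The duplication complained about above: POST just forwards.
        return self.GET_foo()

c = MyController()
print(c.dispatch("GET", "foo"))   # did GET stuff
print(c.dispatch("POST", "foo"))  # did GET stuff
```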
What I really want to do is use Python's metaprogramming facilities to provide an elegant solution to a problem. Unfortunately, I really don't think that it is going to work out in any way that is really satisfying.
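The stacked-decorator interface the thread wishes for is also only a few lines; here is a hedged sketch (invented names, not anyone's production code) in which @GET/@POST simply tag the function with the verbs it accepts:

```python
def verb(name):
    # Decorator factory: tag the function with an accepted HTTP verb.
    # Stacked decorators accumulate verbs on the same function object.
    def decorate(func):
        if not hasattr(func, "_verbs"):
            func._verbs = set()
        func._verbs.add(name)
        return func
    return decorate

GET = verb("GET")
POST = verb("POST")

class Controller:
    def dispatch(self, http_verb, action):
        handler = getattr(self, action, None)
        if handler is None or http_verb not in getattr(handler, "_verbs", ()):
            raise LookupError("no handler for %s %s" % (http_verb, action))
        return handler()

class MyController(Controller):
    @GET
    @POST
    def foo(self):
        return "one handler, two verbs"

c = MyController()
print(c.dispatch("GET", "foo"))   # one handler, two verbs
print(c.dispatch("POST", "foo"))  # one handler, two verbs
```

Because bound methods proxy attribute access to the underlying function, the dispatcher can read the `_verbs` tag straight off `handler`, and one method body serves any number of verbs.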
https://mail.python.org/pipermail/python-list/2009-March/529347.html
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.

I attempted to send this to both java and gcc lists yesterday, but there seems to have been a mailer glitch ...

-----Original Message-----
From: Boehm, Hans
To: 'gcc@gcc.gnu.org'; 'java@gcc.gnu.org'
Sent: 5/17/02 7:30 PM

On Linux/IA64, and probably other platforms, the trunk gcc miscompiles the isWhitespace() method in libjava/java/lang/Character. Apparently the bit-wise & in a return is the problem. This is a regression from 3.1. It fails with the current CVS and failed with the one from a few days ago.

Reduced test case (only isWhitespace is miscompiled, the rest is scaffolding):

    class mychar {
        public static boolean isWhitespace(char ch) {
            int attr = readChar(ch);
            return (((1 << attr) & 0x7000) != 0);
        }

        private static char readChar(char ch) {
            if (ch < 128)
                return (char) 9;
            return 9999;
        }

        public static void main(String[] argv) {
            if (isWhitespace('4'))
                System.out.println("FAIL 1");
            if (isWhitespace('a'))
                System.out.println("FAIL 2");
        }
    }

Offending RTL instruction (partial diff of trunk (-) against 3.1 output (+)):

    (insn 57 56 59 (set (reg:DI 359)
    -        (zero_extend:DI (subreg:QI (reg:SI 355) 0))) -1 (nil)
    +        (and:DI (subreg:DI (reg:SI 355) 0)
    +            (const_int 1 [0x1]))) -1 (nil)
        (nil))

In both 3.1 and the trunk, the code in the return is compiled as a variable right shift of 0x7000. In the 3.1 version, this is anded against 0x1. In the trunk version the and is dropped. I didn't yet track this down much further, but it fails at -O0, and the initial rtl is already wrong. It works correctly if I compile to a class file and then interpret with gij.

Any ideas?

Hans
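For reference, the expected behaviour of the reduced test case is easy to check outside gcj; the following is my Python transcription of the same bit test (not part of the original report):

```python
def read_char(ch):
    # Mirrors mychar.readChar(): characters below 128 map to attribute 9.
    return 9 if ord(ch) < 128 else 9999

def is_whitespace(ch):
    attr = read_char(ch)
    # 0x7000 has only bits 12..14 set, so the test is true only for
    # attr in {12, 13, 14}; attr is 9 for both test characters.
    return ((1 << attr) & 0x7000) != 0

# A correct compilation therefore prints neither FAIL line.
print(is_whitespace('4'), is_whitespace('a'))  # False False
```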
https://gcc.gnu.org/legacy-ml/gcc/2002-05/msg01752.html
File System module not compiling?

Hi, I am trying to start writing a basic VFS module for linux that can be imported directly into the kernel using insmod, but I get the following error when ...

Code:

    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/pagemap.h>
    #include <linux/fs.h>
    #include <asm/atomic.h>
    #include <asm/uaccess.h>

    static DECLARE_FSTYPE_DEV(pipe_fs_type, "pipe", pipe_read_super);

    static int __init init_pipe_fs(void)
    {
        return 0;
    }

    static void __exit exit_pipe_fs(void)
    {
    }

    module_init(init_pipe_fs)
    module_exit(exit_pipe_fs)

Code:

    obj-m += module.o

    all:
    	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

    clean:
    	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

If anyone knows why I am getting this error please respond. I am making my own operating system and would much rather access my file system easily, rather than rely on a user-based program to read through it.

The string constant is, from what I can see, "pipe". Look at the definition of the DECLARE_FSTYPE_DEV() macro to see why this may be happening. Complex macros are a serious PITA, and this sort of situation is not uncommon. Since you don't state which kernel and distribution you are using, that is as good as you can get from me... Sometimes, real fast is almost as good as real time. Just remember, Semper Gumbi - always be flexible!
http://www.linuxforums.org/forum/kernel/200151-file-system-module-not-compiling.html
The Q3ListView class implements a list/tree view. More...

#include <Q3ListView>

Inherits: Q3ScrollView.

This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.

The Q3ListView class implements a list/tree view.

The simplest pattern of use is to create a Q3ListView, add some column headers using addColumn() and create one or more Q3ListViewItem or Q3CheckListItem objects with the Q3ListView as parent. Further nodes can be added to the list view object (the root of the tree) or as child nodes to Q3ListViewItems.

The main setup functions are addColumn(), setColumnWidthMode(), setAllColumnsShowFocus(), setRootIsDecorated(), setTreeStepSize() and setSorting().

You can iterate over visible items using Q3ListViewItem::itemBelow(); over a list view's top-level items using Q3ListViewItem::firstChild() and Q3ListViewItem::nextSibling(); or every item using a Q3ListViewItemIterator. See the Q3ListViewItem documentation for examples of traversal.

An item can be moved amongst its siblings using Q3ListViewItem::moveItem(). The available selection modes are described in the Q3ListView::SelectionMode documentation. The default is Single selection, which you can change using setSelectionMode().

Because Q3ListView offers multiple selection it must display keyboard focus and selection state separately. Therefore there are functions both to set the selection state of an item (setSelected()) and to set which item displays keyboard focus (setCurrentItem()).

Q3ListView emits two groups of signals; one group signals changes in selection/focus state and one indicates selection. The first group consists of selectionChanged() (applicable to all list views), selectionChanged(Q3ListViewItem*) (applicable only to a Single selection list view), and currentChanged(Q3ListViewItem*). The second group consists of doubleClicked(Q3ListViewItem*), returnPressed(Q3ListViewItem*), rightButtonClicked(Q3ListViewItem*, const QPoint&, int), etc.

The list view can be navigated either using the mouse or the keyboard. Clicking a - icon closes an item (hides its children) and clicking a + icon opens an item (shows its children). The keyboard controls cover the same operations: the arrow keys move the current item, and the + and - keys open and close it.
Note that the list view's size hint is calculated taking into account the height and width to produce a nice aspect ratio. This may mean that you need to reimplement sizeHint() in some cases. Warning: The list view assumes ownership of all list view items and will delete them when it does not need them any more. See also Q3ListViewItem and Q3CheckListItem. This typedef is used in Q3ListView's API for values that are OR'd combinations of StringComparisonMode values. See also StringComparisonMode. This enum describes whether a rename operation is accepted if the rename editor loses focus without the user pressing Enter. This enum describes how the list view's header adjusts to resize events which affect the width of the list view. This enumerated type is used by Q3ListView to indicate how it reacts to selection by the user. In other words, Single is a real single-selection list view, Multi a real multi-selection list view, Extended is a list view where users can select multiple items but usually want to select either just one or a range of contiguous items, and NoSelection is a list view where the user can look but not touch. This enum type is used to set the string comparison mode when searching for an item.. See also ComparisonFlags. This enum type describes how the width of a column in the view changes. See also setColumnWidth(), setColumnWidthMode(), and columnWidth(). This property holds whether items should show keyboard focus using all columns. If this property is true all columns will show focus and selection states, otherwise only column 0 will show focus. The default is false. Setting this to true if it's not necessary may cause noticeable flicker. Access functions: This property holds the number of parentless (top-level) Q3ListViewItem objects in this Q3ListView. Holds the current number of parentless (top-level) Q3ListViewItem objects in this Q3ListView. Access functions: See also Q3ListViewItem::childCount(). 
This property holds the number of columns in this list view. Access functions: See also addColumn() and removeColumn(). This property holds what action to perform when the editor loses focus during renaming. If this property is Accept, and the user renames an item and the editor loses focus (without the user pressing Enter), the item will still be renamed. If the property's value is Reject, the item will not be renamed unless the user presses Enter. The default is Reject. Access functions: This property holds the advisory item margin that list items may use. The item margin defaults to one pixel and is the margin between the item's edges and the area where it draws its contents. Q3ListViewItem::paintFocus() draws in the margin. Access functions: See also Q3ListViewItem::paintCell(). This property holds whether the list view is in multi-selection or extended-selection mode. If you enable multi-selection, Multi, mode, it is possible to specify whether or not this mode should be extended. Extended means that the user can select multiple items only when pressing the Shift or Ctrl key at the same time. The default selection mode is Single. Access functions: See also selectionMode(). This property holds whether all, none or the only the last column should be resized. Specifies whether all, none or only the last column should be resized to fit the full width of the list view. The values for this property can be one of: NoColumn (the default), AllColumns or LastColumn. Warning: Setting the resize mode should be done after all necessary columns have been added to the list view, otherwise the behavior is undefined. Access functions: See also Q3Header and header(). This property holds whether the list view shows open/close signs on root items. Open/close signs are small + or - symbols in windows style, or arrows in Motif style. The default is false. Access functions: This property holds the list view's selection mode. 
The mode can be Single (the default), Extended, Multi or NoSelection. Access functions: See also multiSelection. This property holds whether the list view header should display a sort indicator. If this property is true, an arrow is drawn in the header of the list view to indicate the sort order of the list view contents. The arrow will be drawn in the correct column and will point up or down, depending on the current sort direction. The default is false (don't show an indicator). Access functions: See also Q3Header::setSortIndicator(). This property holds whether this list view should show tooltips for truncated column texts. The default is true. Access functions: This property holds the number of pixels a child is offset from its parent. The default is 20 pixels. Of course, this property is only meaningful for hierarchical list views. Access functions: Constructs a new empty list view called name with parent parent and widget attributes f. This constructor sets the Qt::WA_StaticContents and the Qt::WA_NoBackground attributes to boost performance when drawing Q3ListViewItems. This may be unsuitable for custom Q3ListViewItem classes, in which case Qt::WA_StaticContents and Qt::WA_NoBackground should be cleared on the viewport() after construction. See also QWidget::setAttribute(). Destroys the list view, deleting all its items, and frees up all allocated resources. Adds a width pixels wide column with the column header label to the list view, and returns the index of the new column. All columns apart from the first one are inserted to the right of the existing ones. If width is negative, the new column's WidthMode is set to Maximum instead of Manual. See also setColumnText(), setColumnWidth(), and setColumnWidthMode(). This is an overloaded function. Adds a width pixels wide new column with the header label and the icon to the list view, and returns the index of the column. If width is negative, the new column's WidthMode is set to Maximum, and to Manual otherwise.
See also setColumnText(), setColumnWidth(), and setColumnWidthMode(). Adjusts the column col to its preferred width Reimplemented from QWidget::changeEvent(). Removes and deletes all the items in this list view and triggers an update. See also triggerUpdate(). Sets all the items to be not selected, updates the list view as necessary, and emits the selectionChanged() signals. Note that for Multi selection list views this function needs to iterate over all items. See also setSelected() and setMultiSelection(). This signal is emitted whenever the user clicks (mouse pressed and mouse released) in the list view. item is the clicked list view item, or 0 if the user didn't click on an item. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. This is an overloaded function. This signal is emitted whenever the user clicks (mouse pressed and mouse released) in the list view. item is the clicked list view item, or 0 if the user didn't click on an item. pnt is the position where the user has clicked in global coordinates. If item is not 0, c is the list view column into which the user pressed; if item is 0 c's value is undefined. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. This signal is emitted when the item has been collapsed, i.e. when the children of item are hidden. See also setOpen() and expanded(). Returns the alignment of column column. The default is Qt::AlignAuto. See also setColumnAlignment() and Qt::Alignment. Returns the text of column c. See also setColumnText(). Returns the width of column c. See also setColumnWidth(). Returns the WidthMode for column c. See also setColumnWidthMode(). Reimplemented from Q3ScrollView::contentsContextMenuEvent(). Reimplemented from Q3ScrollView::contentsDragEnterEvent(). Reimplemented from Q3ScrollView::contentsDragLeaveEvent(). Reimplemented from Q3ScrollView::contentsDragMoveEvent(). Reimplemented from Q3ScrollView::contentsDropEvent(). 
Reimplemented from Q3ScrollView::contentsMouseDoubleClickEvent(). Processes the mouse double-click event e on behalf of the viewed widget. Reimplemented from Q3ScrollView::contentsMouseMoveEvent(). Processes the mouse move event e on behalf of the viewed widget. Reimplemented from Q3ScrollView::contentsMousePressEvent(). Processes the mouse press event e on behalf of the viewed widget. Reimplemented from Q3ScrollView::contentsMouseReleaseEvent(). Processes the mouse release event e on behalf of the viewed widget. This signal is emitted when the user invokes a context menu with the right mouse button or with special system keys. If the keyboard was used item is the current item; if the mouse was used, item is the item under the mouse pointer or 0 if there is no item under the mouse pointer. If no item is clicked, the column index emitted is -1. pos is the position for the context menu in the global coordinate system. col is the column on which the user pressed, or -1 if the signal was triggered by a key event. This signal is emitted whenever the current item has changed (normally after the screen update). The current item is the item responsible for indicating keyboard focus. The argument is the newly current item, or 0 if the change made no item current. This can happen, for example, if all the items in the list view are deleted. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. See also setCurrentItem() and currentItem(). Returns the current item, or 0 if there isn't one. See also setCurrentItem(). This slot handles auto-scrolling when the mouse button is pressed and the mouse is outside the widget. This signal is emitted whenever an item is double-clicked. It's emitted on the second button press, not the second button release. item is the list view item on which the user did the double-click. This signal is emitted when a double-click occurs. It's emitted on the second button press, not the second button release. 
The item is the Q3ListViewItem the button was double-clicked on (which could be 0 if it wasn't double-clicked on an item). The point where the double-click occurred is given in global coordinates. If an item was double-clicked on, column is the column within the item that was double-clicked; otherwise column is -1. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. If the user presses the mouse on an item and starts moving the mouse, and the item allows dragging (see Q3ListViewItem::setDragEnabled()), this function is called to get a drag object and a drag is started unless dragObject() returns 0. By default this function returns 0. You should reimplement it and create a Q3DragObject depending on the selected items. Reimplemented from Q3ScrollView::drawContentsOffset(). Calls Q3ListViewItem::paintCell() and Q3ListViewItem::paintBranches() as necessary for all list view items that require repainting in the cw pixels wide and ch pixels high bounding rectangle starting at position cx, cy with offset ox, oy. Uses the painter p. This signal is emitted when a drop event occurs on the viewport (not onto an item). e provides all the information about the drop. Ensures that item i is visible, scrolling the list view vertically if necessary and opening (expanding) any parent items if this is required to show the item. See also itemRect() and Q3ScrollView::ensureVisible(). Reimplemented from QObject::eventFilter(). Redirects the event e relating to object o, for the viewport to mousePressEvent(), keyPressEvent() and friends. This signal is emitted when item has been expanded, i.e. when the children of item are shown. See also setOpen() and collapsed(). Finds the first list view item in column column that matches text and returns the item, or returns 0 if no such item could be found. Pass OR-ed together ComparisonFlags values in the compare flag, to control how the matching is performed.
The default comparison mode is case-sensitive, exact match. Returns the first item in this Q3ListView. Returns 0 if there is no first item. A list view's items can be traversed using firstChild() and nextSibling() or using a Q3ListViewItemIterator. See also itemAt(), Q3ListViewItem::itemBelow(), and Q3ListViewItem::itemAbove(). Reimplemented from QWidget::focusInEvent(). Reimplemented from QWidget::focusOutEvent(). Returns the Q3Header object that manages this list view's columns. Please don't modify the header behind the list view's back. You may safely call Q3Header::setClickEnabled(), Q3Header::setResizeEnabled(), Q3Header::setMovingEnabled(), Q3Header::hide() and all the const Q3Header functions. Hides the column specified at column. This is a convenience function that calls setColumnWidth(column, 0). Note: The user may still be able to resize the hidden column using the header handles. To prevent this, call setResizeEnabled(false, column) on the list views header. See also setColumnWidth(). Reimplemented from QWidget::inputMethodQuery(). Inserts item i into the list view as a top-level item. You do not need to call this unless you've called takeItem(i) or Q3ListViewItem::takeItem(i) and need to reinsert i elsewhere. See also Q3ListViewItem::takeItem() and takeItem(). Inverts the selection. Only works in Multi and Extended selection modes. Returns true if this list view item has children and they are not explicitly hidden; otherwise returns false. Identical to item->isOpen(). Provided for completeness. Returns true if an item is being renamed; otherwise returns false. Returns true if the list view item i is selected; otherwise returns false. See also Q3ListViewItem::isSelected(). Returns the list view item at viewPos. Note that viewPos is in the viewport()'s coordinate system, not in the list view's own, much larger, coordinate system. itemAt() returns 0 if there is no such item. 
Note that you also get the pointer to the item if viewPos points to the root decoration (see setRootIsDecorated()) of the item. To check whether or not viewPos is on the root decoration of the item, you can do something like this:

Q3ListViewItem *i = itemAt(p);
if (i) {
    if (p.x() > header()->sectionPos(header()->mapToIndex(0)) +
            treeStepSize() * (i->depth() + (rootIsDecorated() ? 1 : 0)) +
            itemMargin() ||
        p.x() < header()->sectionPos(header()->mapToIndex(0))) {
        ; // p is not on root decoration
    } else {
        ; // p is on the root decoration
    }
}

This might be interesting if you use this function to find out where the user clicked and if you want to start a drag (which you do not want to do if the user clicked onto the root decoration of an item).

See also itemPos(), itemRect(), and viewportToContents().

Returns the y-coordinate of item in the list view's coordinate system. This function is normally much slower than itemAt() but it works for all items, whereas itemAt() normally works only for items on the screen. This is a thin wrapper around Q3ListViewItem::itemPos().

See also itemAt() and itemRect().

Returns the rectangle on the screen that item item occupies in viewport()'s coordinates, or an invalid rectangle if item is 0 or is not currently visible. The rectangle returned does not include any children of the rectangle (i.e. it uses Q3ListViewItem::height(), rather than Q3ListViewItem::totalHeight()). If you want the rectangle to include children you can use something like this:

QRect r(listView->itemRect(item));
r.setHeight(qMin(item->totalHeight(), listView->viewport()->height() - r.y()));

Note the way it avoids too-high rectangles. totalHeight() can be much larger than the window system's coordinate system allows. itemRect() is comparatively slow. It's best to call it only for items that are probably on-screen.

This signal is emitted when item has been renamed to text, e.g. by in-place renaming, in column col.

See also Q3ListViewItem::setRenameEnabled().
This is an overloaded function. This signal is emitted when item has been renamed, e.g. by in-place renaming, in column col. See also Q3ListViewItem::setRenameEnabled(). Reimplemented from QWidget::keyPressEvent(). Returns the last item in the list view tree. Returns 0 if there are no items in the Q3ListView. This function is slow because it traverses the entire tree to find the last item. Reimplemented from QWidget::minimumSizeHint(). This signal is emitted whenever the user clicks (mouse pressed and mouse released) in the list view at position pos. button is the mouse button that the user pressed, item is the clicked list view item or 0 if the user didn't click on an item. If item is not 0, c is the list view column into which the user pressed; if item is 0 c's value is undefined. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. This signal is emitted whenever the user presses the mouse button in the list view at position pos. button is the mouse button which the user pressed, item is the pressed list view item or 0 if the user didn't press on an item. If item is not 0, c is the list view column into which the user pressed; if item is 0 c's value is undefined. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. This signal is emitted when the user moves the mouse cursor onto item i, similar to the QWidget::enterEvent() function. This signal is emitted when the user moves the mouse cursor from an item to an empty part of the list view. Paints rect so that it looks like empty background using painter p. rect is in widget coordinates, ready to be fed to p. The default function fills rect with the viewport()->backgroundBrush(). This signal is emitted whenever the user presses the mouse button in a list view. item is the list view item on which the user pressed the mouse button, or 0 if the user didn't press the mouse on an item.
Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. This is an overloaded function. This signal is emitted whenever the user presses the mouse button in a list view. item is the list view item on which the user pressed the mouse button, or 0 if the user didn't press the mouse on an item. pnt is the position of the mouse cursor in global coordinates, and c is the column where the mouse cursor was when the user pressed the mouse button. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. Removes the column at position index. Removes the given item. Use takeItem() instead. Repaints item on the screen if item is currently visible. Takes care to avoid multiple repaints. Reimplemented from QWidget::resizeEvent(). Ensures that the header is correctly sized and positioned when the resize event e occurs. This signal is emitted when Enter or Return is pressed. The item parameter is the currentItem(). This signal is emitted when the right button is clicked. The item is the Q3ListViewItem the button was clicked on (which could be 0 if it wasn't clicked on an item). The point where the click occurred is given in global coordinates. If an item was clicked on, column is the column within the item that was clicked; otherwise column is -1. This signal is emitted when the right button is pressed. The item is the Q3ListViewItem the button was pressed on (which could be 0 if it wasn't pressed on an item). The point where the press occurred is given in global coordinates. If an item was pressed on, column is the column within the item that was pressed; otherwise column is -1. If select is true, all the items get selected; otherwise all the items get unselected. This only works in the selection modes Multi and Extended. In Single and NoSelection mode the selection of the current item is just set to select. Returns the selected item if the list view is in Single selection mode and an item is selected. 
If no items are selected or the list view is not in Single selection mode this function returns 0. See also setSelected() and setMultiSelection(). This signal is emitted whenever the set of selected items has changed (normally before the screen update). It is available both in Single selection and Multi selection mode but is most useful in Multi selection mode. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. See also setSelected() and Q3ListViewItem::setSelected(). This is an overloaded function. This signal is emitted whenever the selected item has changed in Single selection mode (normally after the screen update). The argument is the newly selected item. In Multi selection mode, use the no argument overload of this signal. Warning: Do not delete any Q3ListViewItem objects in slots connected to this signal. See also setSelected(), Q3ListViewItem::setSelected(), and currentChanged(). Sets column column's alignment to align. The alignment is ultimately passed to Q3ListViewItem::paintCell() for each item in the list view. For horizontally aligned text with Qt::AlignLeft or Qt::AlignHCenter the ellipsis (...) will be to the right, for Qt::AlignRight the ellipsis will be to the left. See also columnAlignment() and Qt::Alignment. Sets the heading of column column to label. See also columnText(). This is an overloaded function. Sets the heading of column column to icon and label. See also columnText(). Sets the width of column column to w pixels. Note that if the column has a WidthMode other than Manual, this width setting may be subsequently overridden. See also columnWidth(). Sets column c's width mode to mode. The default depends on the original width argument to addColumn(). See also columnWidthMode() and Q3ListViewItem::width(). Reimplemented from Q3ScrollView::setContentsPos(). Sets item i to be the current item and repaints appropriately (i.e. highlights the item). 
The current item is used for keyboard navigation and focus indication; it is independent of any selected items, although a selected item can also be the current item. See also currentItem() and setSelected(). Sets item to be open if open is true and item is expandable, and to be closed if open is false. Repaints accordingly. See also isOpen(), Q3ListViewItem::setOpen(), and Q3ListViewItem::setExpandable(). If selected is true the item is selected; otherwise it is unselected. If the list view is in Single selection mode and selected is true, the currently selected item is unselected and item is made current. Unlike Q3ListViewItem::setSelected(), this function updates the list view as necessary and emits the selectionChanged() signals. See also isSelected(), setMultiSelection(), isMultiSelection(), setCurrentItem(), and setSelectionAnchor(). Sets the selection anchor to item, if item is selectable. The selection anchor is the item that remains selected when Shift-selecting with either mouse or keyboard in Extended selection mode. See also setSelected(). Sets the sorting column for the list view. If column is -1, sorting is disabled and the user cannot sort columns by clicking on the column headers. If column is larger than the number of columns the user must click on a column header to sort the list view. See also sortColumn() and setSorting(). Sets the sort order for the items in the list view to order. See also sortOrder() and setSorting(). Sets the list view to be sorted by column column in ascending order if ascending is true or descending order if it is false. If column is -1, sorting is disabled and the user cannot sort columns by clicking on the column headers. If column is larger than the number of columns the user must click on a column header to sort the list view. Reimplemented from QWidget::showEvent(). Reimplemented from QWidget::sizeHint(). Sorts the list view using the last sorting configuration (sort column and ascending/descending).
Returns the column by which the list view is sorted, or -1 if sorting is disabled. See also setSortColumn() and sortOrder(). Returns the sorting order of the list view items. See also setSortOrder() and sortColumn(). This signal is emitted when Space is pressed. The item parameter is the currentItem(). Starts a drag. Removes item i from the list view; i must be a top-level item. The warnings regarding Q3ListViewItem::takeItem() apply to this function, too. See also insertItem(). Triggers a size, geometry and content update during the next iteration of the event loop. Ensures that there'll be just one update to avoid flicker. Updates the sizes of the viewport, header, scroll bars and so on. Warning: Don't call this directly; call triggerUpdate() instead. Reimplemented from Q3ScrollView::viewportResizeEvent().
http://doc.trolltech.com/main-snapshot/q3listview.html
Network Working Group                                        A. Valencia
Request for Comments: 2341                                 M. Littlewood
Category: Historic                                              T. Kolar
                                                           Cisco Systems
                                                                May 1998

              Cisco Layer Two Forwarding (Protocol) "L2F"

Status of Memo

This memo describes a historic protocol for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Abstract

Virtual dial-up allows many separate and autonomous protocol domains to share common access infrastructure including modems, Access Servers, and ISDN routers. Previous RFCs have specified protocols for supporting IP dial-up via SLIP [1] and multiprotocol dial-up via PPP [2]. This document describes the Layer Two Forwarding protocol (L2F) which permits the tunneling of the link layer (i.e., HDLC, async HDLC, or SLIP frames) of higher level protocols. Using such tunnels, it is possible to divorce the location of the initial dial-up server from the location at which the dial-up protocol connection is terminated and access to the network provided.
Table of Contents

1.0 Introduction
1.1 Conventions
2.0 Problem Space Overview
2.1 Initial Assumptions
2.2 Topology
2.3 Virtual dial-up Service - a walk-through
3.0 Service Model Issues
3.1 Security
3.2 Address allocation
3.3 Authentication
3.4 Accounting
4.0 Protocol Definition
4.1 Encapsulation within L2F
4.1.1 Encapsulation of PPP within L2F
4.1.2 Encapsulation of SLIP within L2F
4.2 L2F Packet Format
4.2.1 Overall Packet Format
4.2.2 Packet Header
4.2.3 Version field
4.2.4 Protocol field
4.2.5 Sequence Number
4.2.6 Packet Multiplex ID
4.2.7 Client ID
4.2.8 Length
4.2.9 Packet Checksum
4.2.10 Payload Offset
4.2.11 Packet Key
4.2.12 Packet priority
4.3 L2F Tunnel Establishment
4.3.1 Normal Tunnel Negotiation Sequence
4.3.2 Normal Client Negotiation Sequence
4.4 L2F management message types
4.4.1 L2F message type: Invalid
4.4.2 L2F_CONF
4.4.3 L2F_OPEN, tunnel establishment
4.4.4 L2F_OPEN, client establishment
4.4.5 L2F_CLOSE
4.4.6 L2F_ECHO
4.4.7 L2F_ECHO_RESP
4.5 L2F Message Delivery
4.5.1 Sequenced Delivery
4.5.2 Flow Control
4.5.3 Tunnel State Table
4.5.4 Client State Table
5.0 Protocol Considerations
5.1 PPP Features
5.2 Termination
5.3 Extended Authentication
5.4 MNP4 and Apple Remote Access Protocol
5.5 Operation over IP/UDP
6.0 Acknowledgments
7.0 References
8.0 Security Considerations
9.0 Authors' Addresses
10.0 Full Copyright Statement

1.0 Introduction

The traditional dial-up network service on the Internet is for registered IP addresses only. A new class of virtual dial-up application which allows multiple protocols and unregistered IP addresses is also desired on the Internet. Examples of this class of network application are support for privately addressed IP, IPX, and AppleTalk dial-up via SLIP/PPP across existing Internet infrastructure.
The support of these multiprotocol virtual dial-up applications is of significant benefit to end users and Internet Service providers as it allows the sharing of very large investments in access and core infrastructure and allows local calls to be used. It also allows existing investments in non-IP protocol applications to be supported in a secure manner while still leveraging the access infrastructure of the Internet.

It is the purpose of this RFC to identify the issues encountered in integrating multiprotocol dial-up services into an existing Internet Service Provider's Point of Presence (hereafter referred to as ISP and POP, respectively), and to describe the L2F protocol which permits the leveraging of existing access protocols.

2.0 Problem Space Overview

In this section we describe in high level terms the scope of the problem that will be explored in more detail in later sections.

2.1 Initial Assumptions

We begin by assuming that Internet access is provided by an ISP and that the ISP wishes to offer services other than traditional registered IP address based services to dial-up users of the network. We also assume that the user of such a service wants all of the security facilities that are available to him in a dedicated dial-up configuration. In particular, the end user requires:

+ End System transparency: Neither the remote end system nor his home site hosts should require any special software to use this service in a secure manner.

+ Authentication as provided via dial-up PPP CHAP or PAP, or through other dialogs as needed for protocols without authentication (e.g., SLIP). This will include TACACS+ and RADIUS solutions as well as support for smart cards and one-time passwords. The authentication should be manageable by the user independently of the ISP.

+ Addressing should be as manageable as dedicated dial-up solutions. The address should be assigned by the home site and not the ISP.
+ Authorization should be managed by the home site as it would in a direct dial-up solution.

+ Accounting should be performed both by the ISP (for billing purposes) and by the user (for charge-back and auditing).

2.2 Topology

Shown below is a generic Internet with Public switched Telephone Network (PSTN) access (i.e., async PPP via modems) and Integrated Services Digital Network (ISDN) access (i.e., synchronous PPP access). Remote users (either async PPP or SLIP, or ISDN) will access the Home LAN as if they were dialed into the Home Gateway, although their physical dial-up is via the ISP Network Access Server.

[The original ASCII figure, garbled in this copy, shows remote users [U] dialing into ISP Network Access Servers [N] at the edge of the PSTN and ISDN clouds; the clouds connect through routers [R] across the Internet to the Home Gateway [H], which attaches to the Home LAN(s) [L].]

[H] = Home Gateway
[L] = Home LAN(s)
[R] = Router
[U] = Remote User
[N] = ISP Network Access Server ("NAS")

2.3 Providing Virtual dial-up Services - a walk-through

To motivate the following discussion, this section walks through an example of what might happen when a Virtual dial-up client initiates access.

The Remote User initiates a PPP connection to an ISP via either the PSTN or ISDN. The Network Access Server (NAS) accepts the connection and the PPP link is established. The ISP undertakes a partial authentication of the end system/user via CHAP or PAP. Only the username field is interpreted to determine whether the user requires a Virtual dial-up service. It is expected-- but not required--that usernames will be structured (e.g. littlewo@cisco.com). Alternatively, the ISP may maintain a database mapping users to services. In the case of Virtual dial-up, the mapping will name a specific endpoint, the Home Gateway. If a virtual dial-up service is not required, standard access to the Internet may be provided.
If no tunnel connection currently exists to the desired Home Gateway, one is initiated. L2F is designed to be largely insulated from the details of the media over which the tunnel is established; L2F requires only that the tunnel media provide packet oriented point-to-point connectivity. Obvious examples of such media are UDP, Frame Relay PVC's, or X.25 VC's. Details for L2F operation over UDP are provided in section 5.5. The specification for L2F packet formats is provided in section 4.2, and the message types and semantics starting in section 4.4.

Once the tunnel exists, an unused Multiplex ID (hereafter, "MID") is allocated, and a connect indication is sent to notify the Home Gateway of this new dial-up session. The Home Gateway either accepts the connection, or rejects it. Rejection may include a reason indication, which may be displayed to the dial-up user, after which the call should be disconnected.

The initial setup notification may include the authentication information required to allow the Home Gateway to authenticate the user and decide to accept or decline the connection. In the case of CHAP, the set-up packet includes the challenge, username and raw response. For PAP or text dialog (i.e., for SLIP users), it includes username and clear text password. The Home Gateway may choose to use this information to complete its authentication, avoiding an additional cycle of authentication.

For PPP, the initial setup notification may also include a copy of the LCP CONFACKs sent in each direction which completed LCP negotiation. The Home Gateway may use this information to initialize its own PPP state (thus avoiding an additional LCP negotiation), or it may choose to initiate a new LCP CONFREQ exchange. If the Home Gateway accepts the connection, it creates a "virtual interface" for SLIP or PPP in a manner analogous to what it would use for a direct-dialed connection.
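The walk-through above allocates an unused MID for each new dial-up session, and section 4.2.6 recommends cycling through the entire 16-bit namespace to reduce aliasing between previous and current sessions. The sketch below is my own illustration of such an allocator, not code from the RFC: it reserves MID 0 for the tunnel itself and advances a free-running counter so that recently closed MIDs are not immediately reused.

```cpp
#include <cstdint>
#include <set>

// Hypothetical MID allocator for one L2F tunnel. MID 0 is reserved for
// tunnel-state traffic, so the counter skips it; in-use MIDs are also
// skipped. The uint16_t counter wraps naturally modulo 2^16.
class MidAllocator {
    uint16_t next_ = 1;
    std::set<uint16_t> inUse_;
public:
    uint16_t allocate() {
        // Walk forward past MID 0 and any MID still assigned to a session.
        while (next_ == 0 || inUse_.count(next_))
            ++next_;
        inUse_.insert(next_);
        return next_++;
    }
    void release(uint16_t mid) { inUse_.erase(mid); }
};
```

Note that a freed MID is not handed out again until the counter wraps around to it, matching the aliasing-reduction recommendation; a production allocator would also need to fail gracefully when all 65535 MIDs are busy.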
With this "virtual interface" in place, link layer frames may now pass over this tunnel in both directions. Frames from the remote user are received at the POP, stripped of any link framing or transparency bytes, encapsulated in L2F, and forwarded over the appropriate tunnel. The Home Gateway accepts these frames, strips L2F, and processes them as normal incoming frames for the appropriate interface and protocol. The "virtual interface" behaves very much like a hardware interface, with the exception that the hardware in this case is physically located at the ISP POP. The other direction behaves analogously, with the Home Gateway encapsulating the packet in L2F, and the POP stripping L2F before transmitting it out the physical interface to the remote user. At this point, the connectivity is a point-to-point PPP or SLIP connection whose endpoints are the remote user's networking application on one end and the termination of this connectivity into the Home Gateway's SLIP or PPP support on the other. Because the remote user has become simply another dial-up client of the Home Gateway access server, client connectivity can now be managed using traditional mechanisms with respect to further authorization, protocol access, and filtering. Accounting can be performed at both the NAS as well as the Home Gateway. This document illustrates some Accounting techniques which are possible using L2F, but the policies surrounding such Accounting are outside the scope of this specification. Because L2F connect notifications for PPP clients contain sufficient information for a Home Gateway to authenticate and initialize its LCP state machine, it is not required that the remote user be queried a second time for CHAP authentication, nor that the client undergo multiple rounds of LCP negotiation and convergence. These techniques are intended to optimize connection setup, and are not intended to deprecate any functions required by the PPP specification. 
3.0 Service Model Issues

There are several significant differences between the standard Internet access service and the Virtual dial-up service with respect to authentication, address allocation, authorization and accounting. The details of the differences between these services and the problems presented by these differences are described below. The mechanisms used for Virtual Dial-up service are intended to coexist with more traditional mechanisms; it is intended that an ISP's POP can simultaneously service ISP clients as well as Virtual dial-up clients.

3.1 Security

For the Virtual dial-up service, the ISP pursues authentication only to the extent required to discover the user's apparent identity (and, by implication, the desired Home Gateway). The Home Gateway must also protect itself against unauthorized parties attempting to establish tunnels to the Home Gateway. Tunnel establishment involves an ISP-to-Home Gateway authentication phase to protect against such attacks.

3.2 Address Allocation

For an Internet service, the user accepts that the IP address may be allocated dynamically from a pool of Service provider addresses. This model often means that the remote user has little or no access to their home network's resources, due to firewalls and other security policies applied by the home network to accesses from external IP addresses.

For the Virtual dial-up service, the Home Gateway can exist behind the home firewall, allocating addresses which are internal (and, in fact, can be RFC1597 addresses, or non-IP addresses). Because L2F tunnels exclusively at the frame layer, the actual policies of such address management are irrelevant to correct Virtual dial-up service; for all purposes of PPP or SLIP protocol handling, the dial-in user appears to have connected at the Home Gateway.

3.3 Authentication

The authentication of the user occurs in three phases; the first at the ISP, and the second and optional third at the Home gateway. The ISP uses the username to determine that a Virtual dial-up service is required and initiate the tunnel connection to the appropriate Home Gateway.
Once a tunnel is established, a new MID is allocated and a session initiated by forwarding the gathered authentication information. The Home Gateway undertakes the second phase by deciding whether or not to accept the connection. The connection indication may include CHAP, PAP, or textual authentication information. Based on this information, the Home Gateway may accept the connection, or may reject it (for instance, it was a PAP request and the username/password are found to be incorrect). Once the connection is accepted, the Home Gateway is free to pursue a third phase of authentication at the PPP or SLIP layer. These activities are outside the scope of this specification, but might include an additional cycle of LCP authentication, proprietary PPP extensions, or textual challenges carried via a TCP/IP telnet session. 3.4 Accounting It is a requirement that both the Access gateway and the Home Gateway can provide accounting data and hence both may count packets, octets and connection start and stop times. Since Virtual dial-up is an access service, accounting of connection attempts (in particular, failed connection attempts) is of significant interest. The Home Gateway can reject new connections based on the authentication information gathered by the ISP, with corresponding logging. For cases where the Home Gateway accepts the connection and then continues with further authentication, the Home Gateway might subsequently disconnect the client. For such scenarios, the disconnection indication back to the ISP may also include a reason. Because the Home Gateway can decline a connection based on the authentication information collected by the ISP, accounting can easily draw a distinction between a series of failed connection attempts and a series of brief successful connections. Lacking this facility, the Home Gateway must always accept connection requests, and would need to exchange a number of PPP packets with the remote system. 
4.0 Protocol Definition

The protocol definition for Virtual dial-up services requires two areas of standardization:

+ Encapsulation of PPP packets within L2F. The ISP NAS and the Home gateway require a common understanding of the encapsulation protocol so that SLIP/PPP packets can be successfully transmitted and received across the Internet.

+ Connection management of L2F and MIDs. The tunnel must be initiated and terminated, as must MIDs within the tunnel. Termination includes diagnostic codes to assist in the diagnosis of problems and to support accounting.

While providing these services, the protocol must address the following required attributes:

+ Low overhead. The protocol must impose a minimal additional overhead. This requires a compact encapsulation, and a structure for omitting some portions of the encapsulation where their function is not required.

+ Efficiency. The protocol must be efficient to encapsulate and deencapsulate.

+ Protocol independence. The protocol must make very few assumptions about the substrate over which L2F packets are carried.

+ Simple deployment. The protocol must not rely on additional telecommunication support (for instance, unique called numbers, or caller ID) to operate.

4.1 Encapsulation within L2F

4.1.1 Encapsulation of PPP within L2F

The PPP packets may be encapsulated within L2F. The packet encapsulated is the packet as it would be transmitted over a physical link. The following are NOT present in the packet:

+ Flags
+ Transparency data (ACCM for async, bit stuffing for sync)
+ CRC

The following ARE still present:

+ Address and control flags (unless negotiated away by LCP)
+ Protocol value

4.1.2 Encapsulation of SLIP within L2F

SLIP is encapsulated within L2F in much the same way as PPP. The transparency characters are removed before encapsulating within L2F, as is the framing.
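For async PPP, the stripped-versus-retained rules of section 4.1.1 can be sketched concretely. The following is my own illustration, not code from the RFC, and assumes standard async-HDLC framing (0x7E flag bytes, 0x7D/XOR-0x20 transparency escapes, 2-byte trailing FCS); the address/control bytes and the protocol value are left in place, as the section requires.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal sketch of preparing a received async-HDLC PPP frame for L2F
// encapsulation: drop the 0x7E framing flags, undo the 0x7D transparency
// escapes, and strip the trailing 2-byte CRC (FCS). Address/control and
// the protocol field are kept.
std::vector<uint8_t> pppToL2fPayload(const std::vector<uint8_t>& frame) {
    std::vector<uint8_t> out;
    bool escaped = false;
    for (uint8_t b : frame) {
        if (b == 0x7E) continue;                 // framing flag: not carried
        if (b == 0x7D) { escaped = true; continue; }
        if (escaped) { b ^= 0x20; escaped = false; }
        out.push_back(b);
    }
    if (out.size() >= 2)
        out.resize(out.size() - 2);              // trailing FCS: not carried
    return out;
}
```

On the reverse path the NAS would re-add framing, transparency, and a freshly computed FCS before transmitting to the remote user.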
4.2 L2F Packet Format 4.2.1 Overall Packet Format The entire encapsulated packet has the form: --------------------------------- | | | L2F Header | | | --------------------------------- | | | Payload packet (SLIP/PPP) | | | --------------------------------- | | | L2F Checksum (optional) | | | --------------------------------- 4.2.2 Packet Format An L2F packet has the form: 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |F|K|P|S|0|0|0|0|0|0|0|0|C| Ver | Protocol |Sequence (opt)|\ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\ | Multiplex ID | Client ID | | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | L2F | Length | Offset (opt) | |Header +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | | Key (opt) | / +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+/ + (payload) | + ..... | + ..... | + ..... | + (payload) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | L2F Checksum (optional) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4.2.3 Version field The Ver ("Version") field represents the major version of the L2F software creating the packet. It MUST contain the value 001. If Ver holds a value other than 1, or any bits are non-zero after bit S but before bit C, this corresponds to a packet containing extensions not understood by the receiving end. The packet is handled as an invalid packet as defined in 4.4.1. 4.2.4 Protocol field The Protocol specifies the protocol carried within the L2F packet. Legal values (represented here in hexadecimal) are: Value Type Description 0x00 L2F_ILLEGAL Illegal 0x01 L2F_PROTO L2F management packets 0x02 L2F_PPP PPP tunneled inside L2F 0x03 L2F_SLIP SLIP tunneled inside L2F If a packet is received with a Protocol of L2F_ILLEGAL or any other unrecognized value, it MUST be treated as an illegal packet as defined in 4.4.1. 
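The header layout above can be exercised with a small pack/parse sketch. Everything below is illustrative (the helper names are ours, and the optional Checksum field and Offset generation are not supported), but the field order and flag bit positions follow the diagram in section 4.2.2:

```python
import struct

# Flag bits live in the first 16-bit word of the header (4.2.2):
# F K P S, eight zero bits, C, then the 3-bit Ver field (MUST be 1).
F_BIT, K_BIT, P_BIT, S_BIT, C_BIT = 0x8000, 0x4000, 0x2000, 0x1000, 0x0008
VER = 0x0001

L2F_PROTO, L2F_PPP, L2F_SLIP = 0x01, 0x02, 0x03

def pack_l2f(proto, mid, clid, payload, seq=None, key=None):
    """Build an L2F packet (no Checksum, no Offset field)."""
    flags = VER
    if seq is not None:
        flags |= S_BIT
    if key is not None:
        flags |= K_BIT
    body = struct.pack("!B", proto)
    if seq is not None:
        body += struct.pack("!B", seq & 0xFF)
    body += struct.pack("!HH", mid, clid)
    # Length covers the whole packet except an optional trailing checksum.
    length = 2 + len(body) + 2 + (4 if key is not None else 0) + len(payload)
    body += struct.pack("!H", length)
    if key is not None:
        body += struct.pack("!I", key)
    return struct.pack("!H", flags) + body + payload

def parse_l2f(pkt):
    (flags,) = struct.unpack_from("!H", pkt, 0)
    if flags & 0x0007 != VER:
        raise ValueError("unknown version: invalid packet (section 4.4.1)")
    off = 2
    proto = pkt[off]; off += 1
    seq = None
    if flags & S_BIT:
        seq = pkt[off]; off += 1
    mid, clid, length = struct.unpack_from("!HHH", pkt, off); off += 6
    pad = 0
    if flags & F_BIT:
        (pad,) = struct.unpack_from("!H", pkt, off); off += 2
    key = None
    if flags & K_BIT:
        (key,) = struct.unpack_from("!I", pkt, off); off += 4
    return dict(proto=proto, seq=seq, mid=mid, clid=clid, key=key,
                payload=pkt[off + pad:length])
```

A fixed-size header with optional fields gated by flag bits is what gives the protocol its "compact encapsulation" property from section 4.0: the minimum header is only nine octets.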
4.2.5 Sequence Number

The Sequence number is present if the S bit in the L2F header is set to 1. This bit MUST be 1 for all L2F management packets. It MAY be set to 1 for non-L2F management packets. If a non-L2F management packet is received with the S bit set, all future L2F packets sent for that MID MUST have the S bit set (and, by implication, be sent using sequence numbers). For instance, the Home Gateway might choose to force sequenced packet delivery if it detects an NCP opening for a protocol which can not operate with out-of-sequence packets.

The Sequence number starts at 0 for the first sequenced L2F packet. Each subsequent packet is sent with the next increment of the sequence number. The sequence number is thus a free running counter represented modulo 256. There is a distinct Sequence number state (i.e., counter) for each distinct MID value.

For packets with the S bit set, each received sequence number is compared, modulo 256, against the highest sequence number received so far for that MID; a value is considered less than or equal to the highest received value if it lies within the window of 128 values ending at that number. For instance, if the highest sequence number received was 15, then packets with sequence numbers 0 through 15, as well as 144 through 255, would be considered less than or equal to, and would be silently discarded. Otherwise it would be accepted.

4.2.6 Packet Multiplex ID

The Multiplex ID ("MID") identifies a particular connection within the tunnel. Each new connection is assigned a MID currently unused within the tunnel. It is recommended that the MID cycle through the entire 16-bit namespace, to reduce aliasing between previous and current sessions. A MID value which has been previously used within a tunnel, has been closed, and will now be used again, must be considered as an entirely new MID, and initialised as such.

The MID with value 0 is special; it is used to communicate the state of the tunnel itself, as distinct from any connection within the tunnel. Only L2F_PROTO packets may be sent using an MID of 0; if any other type is sent on MID 0, the packet is illegal and MUST be processed as defined in 4.4.1.
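The modulo-256 acceptance rule of section 4.2.5 reduces to a one-line window check; a sketch (the function name is ours):

```python
def seq_acceptable(last, incoming):
    """Modulo-256 comparison from section 4.2.5.

    `last` is the highest sequence number received so far for the MID.
    An incoming value is "less than or equal to" it, and hence silently
    discarded, when it falls in the 128-value window ending at `last`;
    anything in the following 128 values is accepted.
    """
    return 0 < (incoming - last) % 256 <= 128
```

With `last = 15` this discards 0 through 15 and 144 through 255, matching the worked example in the text, and it handles counter wraparound (e.g. 255 followed by 0) for free.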
4.2.7 Client ID

The Client ID ("CLID") is used to assist endpoints in demultiplexing tunnels when the underlying point-to-point substrate lacks an efficient or dependable technique for doing so directly. Using the CLID, it is possible to demultiplex multiple tunnels whose packets arrive over the point-to-point media interleaved, without requiring media-specific semantics.

When transmitting the L2F_CONF message (described below), the peer's CLID must be communicated via the Assigned_CLID field. This MUST be a unique non-zero value on the sender's side, which is to be expected in the Home Gateway's L2F_CONF response, as well as all future non-L2F_CONF packets received. The CLID value from the last valid L2F_CONF message received MUST be recorded and used as the CLID field value for all subsequent packets sent to the peer. Packets with an unknown Client ID MUST be silently discarded.

For the initial packet sent during tunnel establishment, where no L2F_CONF has yet been received, the CLID field MUST be set to 0. Thus, during L2F_CONF each side is told its CLID value. All later packets sent, tagged with this CLID value, serve as a tag which uniquely identifies this peer.

4.2.8 Length

Length is the size in octets of the entire packet, including header, all fields present, and payload. Length does not reflect the addition of the checksum, if one is present. The packet should be silently discarded if the received packet is shorter than the indicated length. Additional bytes present in the packet beyond the indicated length MUST be silently ignored.

4.2.9 Packet Checksum

The Checksum is present if the C bit is present in the header flags. It is a 16-bit CRC as used by PPP/HDLC (specifically, FCS-16 [3]). It is applied over the entire packet starting with the first byte of L2F flags, through the last byte of payload data. The checksum is then added as two bytes immediately following the last byte of payload data.
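The FCS-16 named in 4.2.9 is the PPP/HDLC CRC of RFC 1662 (reflected polynomial 0x8408, initial value 0xFFFF, final complement). A bit-serial sketch follows; real implementations normally use a 256-entry lookup table instead, and the least-significant-byte-first ordering below is the HDLC convention, assumed rather than stated by this specification:

```python
def fcs16_register(data: bytes, fcs: int = 0xFFFF) -> int:
    """Running FCS-16 register, as used by PPP/HDLC per RFC 1662."""
    for b in data:
        fcs ^= b
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs

def append_fcs(data: bytes) -> bytes:
    """Complement the register and append it, least-significant byte
    first, as the trailing two-byte checksum."""
    fcs = fcs16_register(data) ^ 0xFFFF
    return data + bytes([fcs & 0xFF, fcs >> 8])

def fcs_ok(frame: bytes) -> bool:
    # Running the register over data-plus-FCS yields the constant
    # "good FCS" residue 0xF0B8 when the frame is intact (RFC 1662).
    return fcs16_register(frame) == 0xF0B8
```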
4.2.10 Payload Offset

The Offset is present if the F bit is set in the header flags. This field specifies the number of bytes past the L2F header at which the payload data is expected to start. If it is 0, or the F bit is not set, the first byte following the last byte of L2F header is the first byte of payload data. It is recommended that data skipped due to the payload offset be initialized to 0's.

For architectures where it is more efficient to have the payload start at an aligned 32-bit boundary with respect to the L2F header, it is recommended that the F bit be set, and an offset of 0 be used.

4.2.11 Packet Key

The Key field is present if the K bit is set in the L2F header. The Key is based on the authentication response last given to the peer during tunnel creation (the details of tunnel creation are provided in the next section). It serves as a key during the life of a session to resist attacks based on spoofing. If a packet is received in which the Key does not match the expected value, the packet MUST be silently discarded. Such handling takes precedence over 4.4.1.

The Key value is generated by taking the 128-bit authentication response from the peer, interpreting it as four adjacent 32-bit words in network byte order, XOR'ing these words together, and using the resulting 32-bit value as the Key.

4.2.12 Packet Priority

If the P bit in the L2F header is set, this packet is a "priority" packet. When possible for an implementation, a packet received with the P bit should be processed in preference to previously received unprocessed packets without the P bit. The P bit may be set by an implementation based on criteria beyond the scope of this specification. However, it is recommended that PPP keepalive traffic, if any, be sent with this bit set.

4.3 L2F Tunnel Establishment

When the point-to-point link is first initiated between the NAS and the Home Gateway, the two endpoints establish and authenticate a tunnel; only once the tunnel is up can individual client sessions be carried within it.
4.3.1 Normal Tunnel Negotiation Sequence

The establishment sequence is best illustrated by a "typical" connection sequence. Detailed description of each function follows, along with descriptions of the handling of exceptional conditions.

1. NAS->GW: Proto=L2F, Seq=0, MID=0, CLID=0, Key=0
            L2F_CONF
            Name: NAS_name
            Challenge: Rnd1
            Assigned_CLID: 22

The NAS decides that a tunnel must be initiated from the NAS to the GW. An L2F packet is sent with the Proto field indicating an L2F management message is contained. Because the tunnel is being initiated, Key is set to 0. The sequence number starts at 0; the MID is 0 to reflect the establishment of the tunnel itself. Since the NAS has not yet received an L2F_CONF, the CLID is set to 0. The body of the packet specifies the claimed name of the NAS, and a challenge random number which GW will use in authenticating itself as a valid tunnel endpoint. Assigned_CLID is generated to be a value not currently assigned out to any other tunnel to any other Home Gateway.

2. GW->NAS: Proto=L2F, Seq=0, MID=0, CLID=22, Key=0
            L2F_CONF
            Name: GW_name
            Challenge: Rnd2
            Assigned_CLID: 73

The Key continues to be 0 during this phase of tunnel establishment. The body contains the Home Gateway's name, its own random number challenge, and its own Assigned_CLID for the NAS to place in the CLID field of future packets. The CLID is generated in an analogous manner to that of the NAS. After this, all packets received from the NAS must be tagged with a CLID field containing 73, and all packets sent to the NAS must be tagged with a CLID field containing 22.

3. NAS->GW: Proto=L2F, Seq=1, MID=0, CLID=73, Key=C(Rnd2)
            L2F_OPEN
            Response: C(Rnd2)

The NAS responds with its Key now set to reflect the shared secret. The Key is a CHAP-style hash of the random number received; each packet hereafter will reflect this calculated value, which serves as a key for the life of the tunnel. Both the Home Gateway and the NAS use such Keys for the life of the tunnel.
The Key is a 32-bit representation of the MD5 digest resulting from encrypting the shared secret; the full MD5 digest is included in the L2F_OPEN response, in the "response" field.

4. GW->NAS: Proto=L2F, Seq=1, MID=0, CLID=22, Key=C(Rnd1)
            L2F_OPEN
            Response: C(Rnd1)

The Home Gateway provides closure of the key from the NAS, reflected in both the Key field as well as the "response" field. The tunnel is now available for clients to be established.

4.3.2 Normal Client Negotiation Sequence

This section describes the establishment of a Virtual dial-up client on a NAS into a Home Gateway. It assumes a tunnel has been created in the way described in 4.3.1. The client for this example is a PPP client configured for CHAP. Treatment of Checksum, Length, and Offset are as in 4.3.1.

1. NAS->GW: Proto=L2F, Seq=2, MID=1, CLID=73, Key=C(Rnd2)
            L2F_OPEN
            Type: CHAP
            Name: CHAP-name
            Challenge: Rnd3
            Response: <Value received, presumably C(Rnd3)>
            ID: <ID used in challenge>

The NAS has received a call, tried CHAP with a challenge value of Rnd3, and found that the client responded. The claimed name led the NAS to believe it was a Virtual dial-up client hosted by the Home Gateway. The next free MID is allocated, and the information associated with the CHAP challenge/response is included in the connect notification.

2. GW->NAS: Proto=L2F, Seq=2, MID=1, CLID=22, Key=C(Rnd1)
            L2F_OPEN

The Home Gateway accepts the connection; traffic is now free to flow in either direction between the remote client and the home site. The contents are uninterpreted data, HDLC in this case. Data traffic, since it is not the L2F protocol, does not usually use the Seq field, which is set to 0 in non-L2F messages (see the S bit in section 4.2.5 for details on an exception to this).
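The C(Rnd) values in the traces above are CHAP-style MD5 responses, folded down to 32 bits when they are placed in the packet Key field. A sketch combining the hash input described in section 4.4.2 with the fold of section 4.2.11 (function names are ours):

```python
import hashlib
import struct

def tunnel_response(assigned_clid: int, secret: bytes, challenge: bytes) -> bytes:
    """128-bit response carried in L2F_OPEN_RESP: MD5 over the low byte
    of the Assigned_CLID, the shared secret, and the received challenge
    bytes (section 4.4.2)."""
    return hashlib.md5(bytes([assigned_clid & 0xFF]) + secret + challenge).digest()

def key_from_response(resp: bytes) -> int:
    """Fold the 128-bit response into the 32-bit packet Key by XOR'ing
    its four big-endian 32-bit words (section 4.2.11)."""
    w1, w2, w3, w4 = struct.unpack("!4I", resp)
    return w1 ^ w2 ^ w3 ^ w4
```

Each side computes the response from the challenge it received, so the NAS carries C(Rnd2) and the Home Gateway C(Rnd1) for the life of the tunnel.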
4.4 L2F management message types

When the Protocol field of an L2F packet specifies L2F management (L2F_PROTO), the body of the packet is encoded as a message type octet followed by zero or more sub-options, each introduced by its own sub-option octet. The message types and their sub-options are:

Hex Value   Abbreviation    Description
---------   ------------    -----------
0x00        Invalid         Invalid message
0x01        L2F_CONF        Request configuration
  0x02        L2F_CONF_NAME   Name of peer sending L2F_CONF
  0x03        L2F_CONF_CHAL   Random number peer challenges with
  0x04        L2F_CONF_CLID   Assigned_CLID for peer to use
0x02        L2F_OPEN        Accept configuration
  0x01        L2F_OPEN_NAME   Name received from client
  0x02        L2F_OPEN_CHAL   Challenge client received
  0x03        L2F_OPEN_RESP   Challenge response from client
  0x04        L2F_ACK_LCP1    LCP CONFACK accepted from client
  0x05        L2F_ACK_LCP2    LCP CONFACK sent to client
  0x06        L2F_OPEN_TYPE   Type of authentication used
  0x07        L2F_OPEN_ID     ID associated with authentication
  0x08        L2F_REQ_LCP0    First LCP CONFREQ from client
0x03        L2F_CLOSE       Request disconnect
  0x01        L2F_CLOSE_WHY   Reason code for close
  0x02        L2F_CLOSE_STR   ASCII string description
0x04        L2F_ECHO        Verify presence of peer
0x05        L2F_ECHO_RESP   Respond to L2F_ECHO

4.4.1 L2F message type: Invalid

If a message is received with this value, or any value higher than the last recognized option value, or if an illegal packet as defined by other parts of this specification is received, the packet is considered invalid. The packet MUST be discarded, and an L2F_CLOSE of the entire tunnel MUST be requested. Upon receipt of an L2F_CLOSE, the tunnel itself may be closed. All other received messages MUST be discarded. An implementation MAY close the tunnel after an interval of time appropriate to the characteristics of the tunnel.

Note that packets with an invalid Key are discarded, but disconnect is not initiated. This prevents denial-of-service attacks. Invalid option types within a message MUST be treated as if the entire message type was invalid.

4.4.2 L2F_CONF

The L2F_CONF message type is used to establish the tunnel between the NAS and the Home Gateway. MID is always set to 0. The body of such a message starts with the octet 0x01 (L2F_CONF), followed by all three of the sub-options below.
The L2F_CONF_NAME sub-option MUST be present. It is encoded as the octet 0x02, followed by an octet giving the length of the sender's name, followed by that many bytes of ASCII name.

The L2F_CONF_CHAL sub-option MUST be present. It is encoded as the octet 0x03, followed by a non-zero octet, followed by a number of bytes specified by this non-zero octet. The challenge value should be generated using whatever techniques provide the highest quality of random numbers available to a given implementation.

The L2F_CONF_CLID sub-option MUST be present. It is encoded as the octet 0x04, followed by four bytes of Assigned_CLID value. The Assigned_CLID value is generated as a non-zero 16-bit integer value unique across all tunnels which exist on the sending system. The least significant two octets of Assigned_CLID are set to this value, and the most significant two octets MUST be set to 0.

The CLID field is sent as 0 in the initial L2F_CONF packet from NAS to Home Gateway, and otherwise MUST be sent containing the value specified in the Assigned_CLID field of the last L2F_CONF message received. Key MUST be set to 0 in all L2F_CONF packets, and no key field is included in the packet.

When sent from a NAS to a Home Gateway, the L2F_CONF is the initial packet in the conversation. When sent from the Home Gateway to the NAS, an L2F_CONF indicates the Home Gateway's recognition of the tunnel creation request. The Home Gateway MUST provide its name and its own challenge in the message body.

In all packets following the L2F_CONF, the Key MUST be set to the CHAP-style hash of the received challenge bytes. The CHAP-style hash is done over the concatenation of the low 8 bits of the assigned CLID, the secret, and the challenge value. Generation of the 32-bit key value is discussed in section 4.2.11.

4.4.3 L2F_OPEN, tunnel establishment

The L2F_OPEN message is used to provide tunnel setup closure (for a MID of 0) or to establish a client connection within a tunnel previously established by L2F_CONF and L2F_OPEN messages (MID not equal to 0). This section describes tunnel establishment; section 4.4.4 following describes clients established within the tunnel.

An L2F_OPEN for tunnel establishment MUST contain only the sub-option 0x03, L2F_OPEN_RESP.
This option MUST be followed by the octet 0x10, specifying the size of the 128-bit MD5 digest resulting from encrypting the challenge value in the L2F_CONF, along with the low byte of the Assigned_CLID. After this byte MUST be the sixteen bytes of the generated MD5 digest.

If during tunnel establishment an L2F_OPEN is received with an incorrect L2F_OPEN_RESP, the packet MUST be silently discarded. It is recommended that such an event generate a log event as well.

4.4.4 L2F_OPEN, client establishment

An L2F_OPEN (with non-zero MID) sent from the NAS to the Home Gateway indicates the presence of a new dial-in client. When sent back from the Home Gateway to the NAS, it indicates acceptance of the client. This message starts with the octet 0x02. When sent from the NAS, it may contain further sub-options. When sent from the Home Gateway, it may not contain any sub-options. All further discussion of sub-options in this section applies only to the NAS to Home Gateway direction.

The L2F_OPEN_TYPE sub-option MUST be present. It is encoded as the octet 0x06, followed by a single byte describing the type of authentication the NAS exchanged with the client in detecting the client's claimed identification. Implicit in the authentication type is the encapsulation to be carried over the life of the session. The authentication types are:

0x01 Textual username/password exchange for SLIP
0x02 PPP CHAP
0x03 PPP PAP
0x04 PPP no authentication
0x05 SLIP no authentication

The L2F_OPEN_NAME sub-option is encoded as the octet 0x01, followed by an octet specifying the length of the name, followed by the indicated number of bytes of the name. This field MUST be present for any authentication type except 0x04 (None). It MUST contain the name specified in the client's authentication response.

The L2F_OPEN_CHAL sub-option is encoded as the octet 0x02, followed by an octet specifying the length of the challenge sent, followed by the challenge itself.
This field is only present for CHAP, and MUST contain the challenge value sent to the client by the NAS.

The L2F_OPEN_RESP sub-option is encoded as the octet 0x03, followed by an octet specifying the length of the response received, followed by the client's response to the challenge. For CHAP, this field contains the response value received by the NAS. For PAP or textual authentication, it contains the clear text password received from the client by the NAS. This field is absent for authentication 0x04 "None".

The L2F_ACK_LCP1 and L2F_ACK_LCP2 sub-options are encoded as the octets 0x04 and 0x05 respectively, each followed by two octets in network byte order giving the length of the LCP data, followed by a copy of that LCP packet. L2F_ACK_LCP1 specifies a copy of the closing CONFACK received from the client, and L2F_ACK_LCP2 specifies a copy of the closing CONFACK sent to the client by the NAS.

The L2F_REQ_LCP0 sub-option is encoded as the octet 0x08, followed by two octets in network byte order specifying the length of the LCP CONFREQ initially received from the client. This may be used by the Home Gateway to detect capabilities of the client which were negotiated away while starting LCP with the NAS. Detection of such options may be used by the Home Gateway to decide to renegotiate LCP.

The L2F_OPEN_ID sub-option is encoded as the octet 0x07, followed by a single octet. This sub-option is only present for CHAP; the single octet contains the CHAP Identifier value sent to the client during the CHAP challenge.

The Home Gateway may choose to ignore any sub-option of the L2F_OPEN, and accept the connection anyway. The Home Gateway would then have to undertake its own LCP negotiations and authentication. To maximize the transparency of the L2F tunnel, it is recommended that extra negotiations and authentication be avoided if possible.

4.4.5 L2F_CLOSE

The L2F_CLOSE message may be sent by either peer. When sent by the Home Gateway in response to an L2F_OPEN, it indicates that the Home Gateway has declined the connection. When sent with a non-zero MID, it indicates the termination of that client within the tunnel.

The L2F_CLOSE_WHY sub-option is encoded as the byte 0x01, followed by four bytes in network byte order holding a bit mask of reasons for the disconnect; for example, the bit 0x00000100 indicates the wrong multilink PPP destination was reached. The L2F_CLOSE_STR sub-option is encoded as the byte 0x02, followed by an ASCII string describing the reason for the close, suitable for logging.
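The client L2F_OPEN sub-options of section 4.4.4 compose into a simple type-length-value body. A sketch using the sub-option codes from the table in section 4.4 (which lists L2F_OPEN_ID as 0x07); the function name and keyword arguments are ours:

```python
def l2f_open_client_body(auth_type, name=None, challenge=None,
                         response=None, chap_id=None):
    """Assemble the body of a client L2F_OPEN (section 4.4.4)."""
    body = bytes([0x02, 0x06, auth_type])          # L2F_OPEN + L2F_OPEN_TYPE
    if name is not None:
        body += bytes([0x01, len(name)]) + name    # L2F_OPEN_NAME
    if challenge is not None:
        body += bytes([0x02, len(challenge)]) + challenge   # L2F_OPEN_CHAL
    if response is not None:
        body += bytes([0x03, len(response)]) + response     # L2F_OPEN_RESP
    if chap_id is not None:
        body += bytes([0x07, chap_id])             # L2F_OPEN_ID
    return body
```

For a CHAP client (auth type 0x02), name, challenge, response, and CHAP identifier would all be supplied, mirroring step 1 of the 4.3.2 trace.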
4.4.6 L2F_ECHO

Transmission of L2F_ECHO messages is optional. If an implementation transmits L2F_ECHO messages, it MUST NOT transmit more than one such request each second. The payload size MUST be 64 bytes or less in length. It is recommended that at least 5 L2F_ECHO messages be sent without response before an implementation assumes that its peer has terminated.

The L2F_ECHO message is encoded as the single byte 0x04. It may be sent by either side once the tunnel is established. MID MUST be 0. An L2F_ECHO_RESP (documented below) MUST be sent back in response.

4.4.7 L2F_ECHO_RESP

All implementations MUST respond to L2F_ECHO, using L2F_ECHO_RESP. The received packet MUST be returned to the sender, with the message type changed to 0x05 (L2F_ECHO_RESP) and the remaining header fields updated as for any transmitted packet.

4.5 L2F Message Delivery

L2F is designed to operate over point-to-point unreliable links. It is not designed to provide flow control of the data traffic, nor does it provide reliable delivery of this traffic; each protocol tunnel carried via L2F is expected to manage flow control and retry itself. Thus, it is only L2F control messages which must be retransmitted; this process is described in this section.

4.5.1 Sequenced Delivery

L2F control messages are sent with the S bit set and thus carry sequence numbers, so duplicated or reordered control messages are detected and discarded. Section 4.2.5 describes the process in detail.

4.5.2 Flow control

L2F control messages are expected to be exchanged lock-step. Thus, per-client activities can not occur until tunnel setup is complete. Neither can one client be serviced until the L2F message exchange is complete for a previous client. Thus, it is expected that rarely--if ever--should a flow control action be required. If the input queue of L2F control messages reaches an objectionable level for an implementation, the implementation may silently discard all messages in the queue to stabilize the situation.

4.5.3 Tunnel State table

The following enumerates the handling of L2F messages for tunnel creation in state table format. Events name an L2F_ message type (the L2F_ portion of the named message is omitted to permit a more compact table). A star ("*") matches any event not otherwise matched for the named state.
A NAS starts at initial state Start0, sending a packet before waiting for its first event. A Home Gateway starts at Start1, waiting for an initial packet to start service. If an event is not matched for a given state, the packet associated with that event is silently discarded. Tunnel establishment (MID == 0), NAS side. State Event Action New State ----- ----- ------ --------- Start0 Send CONF Start1 Start1 CONF Send OPEN Start2 Start1 timeout 1-3 Send CONF Start1 Start1 timeout 4 Clean up tunnel (done) Start2 OPEN (initiate 1st client) Open1 Start2 timeout 1-3 Send OPEN Start2 Start2 timeout 4 Clean up tunnel (done) Open1 OPEN Send OPEN Open1 Open1 CLOSE Send CLOSE Close1 Open1 no MIDs open Send CLOSE Close2 Close1 CLOSE Send CLOSE Close1 Close1 timeout 4 Clean up tunnel (done) Close2 CLOSE Clean up tunnel (done) Close2 timeout 1-3 Send CLOSE Close2 Close2 timeout 4 Clean up tunnel (done) Tunnel establishment (MID == 0), Home Gateway side. State Event Action New State ----- ----- ------ --------- Start0 CONF Send CONF Start1 Start1 CONF Send CONF Start1 Start1 OPEN Send OPEN Open1 Start1 timeout 4 Clean up tunnel (done) Open1 OPEN Send OPEN Open1 Open1 OPEN (MID > 0) (1st client, below) Open2 Open1 CLOSE Send CLOSE Close1 Open1 timeout 4 Clean up tunnel (done) Open2 OPEN (MID > 0) (below) Open2 Open2 CLOSE Send CLOSE Close1 Close1 CLOSE Send CLOSE Close1 Close1 timeout 4 Clean up tunnel (done) 4.5.4 Client State table This table is similar to the previous one, but enumerates the states for a client connection within a tunnel in the opened state from 4.5.3. As this sequence addresses clients, MID will be non-zero. Client establishment (MID != 0), NAS side. 
State   Event         Action                New State
-----   -----         ------                ---------
Start0                Send OPEN             Start1
Start1  OPEN          (enable forwarding)   Open1
Start1  CLOSE         Clean up MID          (MID done)
Start1  timeout 1-3   Send OPEN             Start1
Start1  timeout 4     Clean up MID          (MID done)
Start1  client done   Send CLOSE            Close2
Open1   OPEN          (no action)           Open1

Client establishment (MID != 0), Home Gateway side.

State   Event         Action                New State
-----   -----         ------                ---------
Start0  OPEN          Send OPEN             Open1
Start0  OPEN (fail)   Send CLOSE            Close3
Open1   OPEN          Send OPEN             Open1
Close3  OPEN          Send CLOSE            Close3
Close3  timeout 4     Clean up MID          (MID done)

5. Protocol Considerations

Several aspects of operation over L2F, while outside the realm of the protocol description itself, serve to clarify the operation of L2F.

5.1 PPP Features

PPP features such as compression are negotiated end-to-end (between the dial-in client on one end, and the Home Gateway on the other), with L2F continuing to simply ship HDLC frames back and forth. For similar reasons, PPP echo requests, NCP configuration negotiation, and even termination requests, are all simply tunneled HDLC frames.

5.2 Termination

As L2F simply tunnels link-layer frames, it does not detect frames like PPP TERMREQ. L2F termination in these scenarios is driven from a protocol endpoint; for instance, if a Home Gateway receives a TERMREQ, its action will be to "hang up" the PPP session. It is the responsibility of the L2F implementation at the Home Gateway to convert a "hang up" into an L2F_CLOSE action, which will shut down the client's session in the tunnel cleanly. L2F_CLOSE_WHY and L2F_CLOSE_STR may be included to describe the reason for the shutdown.

5.3 Extended Authentication

One-time password cards have become very common. To the extent the NAS can capture and forward the one-time password, L2F operation is compatible with password cards. For the most general solution, an arbitrary request/response exchange must be supported.
In an L2F environment, the protocol must be structured so that the NAS can detect the apparent identity of the user and establish a tunnel connection to the Home Gateway, where the arbitrary exchange can occur.

5.4 MNP4 and Apple Remote Access Protocol

L2F appears compatible with Apple's ARAP protocol. Its operation under L2F has not been described simply because this experimental RFC does not have a corresponding implementation of such operation.

5.5 Operation of IP and UDP

L2F tries to be self-describing, operating at a level above the particular media over which it is carried. However, some details of its connection to media are required to permit interoperable implementations. This section describes the issues which have been found when operating L2F over IP and UDP.

L2F uses the well-known UDP port 1701 [4]. The entire L2F packet, including payload and L2F header, is sent within a UDP datagram. The source and destination ports are the same (1701), with demultiplexing being achieved using CLID values.

It is legal for the source IP address of a given CLID to change over the life of a connection, as this may correspond to a peer with multiple IP interfaces responding to a network topology change. Responses should reflect the last source IP address for that CLID.

IP fragmentation may occur as the L2F packet travels over the IP substrate. L2F makes no special efforts to optimize this. A NAS implementation MAY cause its LCP to negotiate for a specific MRU, which could optimize for NAS environments in which the MTUs of the path over which the L2F packets are likely to travel have a consistent value.

6.0 Acknowledgments

L2F uses a packet format inspired by GRE [5]. Thanks to Fred Baker for consultation, Dave Carrel for consulting on security aspects, and to Paul Traina for philosophical guidance.

7.0 References

[1] Romkey, J., "A Nonstandard for Transmission of IP Datagrams over Serial Lines: SLIP", RFC 1055, June 1988.
[2] Simpson, W., "The Point-to-Point Protocol (PPP)", STD 51, RFC 1661, July 1994.

[3] Simpson, W., "PPP in HDLC-like Framing", STD 51, RFC 1662, July 1994.

[4] Reynolds, J., and J. Postel, "Assigned Numbers", STD 2, RFC 1700, October 1994.

[5] Hanks, S., Li, T., Farinacci, D., and P. Traina, "Generic Routing Encapsulation (GRE)", RFC 1701, October 1994.

8.0 Security Considerations

Security issues are discussed in Section 3.1.

9.0 Authors' Addresses

Tim Kolar
Cisco Systems
170 West Tasman Drive
San Jose CA 95134-1706

Morgan Littlewood
Cisco Systems
170 West Tasman Drive
San Jose CA 95134-1706

Andy Valencia
Cisco Systems
170 West Tasman Drive
San Jose CA 95134-1706
http://www.faqs.org/rfcs/rfc2341.html
CC-MAIN-2015-48
refinedweb
7,348
53.61
GetTokenByPosition(string, string, int)

Return the i'th token in the string, by delimiter.

string GetTokenByPosition(
    string sString,
    string sDelimiter,
    int nPos
);

Parameters

sString - The string to be split up into parts (tokens).
sDelimiter - The delimiter used to split up the string.
nPos - The position (=number) of the token that is to be returned.

Description

This function returns the i'th token within a given string. The string is split up into tokens, using a specified delimiter. GetTokenByPosition() requires three arguments: the string that is to be split up into tokens (sString), the delimiter used to split up the string (sDelimiter) and the position (=number) of the token that is to be returned (nPos). The delimiter MUST be a string containing a single character. The first token in the string is at position nPos = 0; the second at nPos = 1; etc. If there is no token at the specified position, an empty string "" is returned.

GetTokenByPosition("I|am|sloppy||programmer", "|", 2) will return Token[2] = "sloppy"

Known Bugs

Contrary to the description found in the include file, the delimiter must be a single character (see remarks above).

Requirements

#include "x0_i0_stringlib"

Version

???

See Also

author: motu99, editors: Mistress, Kolyana
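For readers outside NWScript, the documented semantics can be mirrored in a few lines of Python (the function name here is ours):

```python
def get_token_by_position(s: str, delimiter: str, pos: int) -> str:
    """Mimic of the Lexicon's GetTokenByPosition: split on a
    single-character delimiter and return the pos'th token (0-based),
    or "" when there is no token at that position.  Adjacent delimiters
    produce empty tokens, as in the "I|am|sloppy||programmer" example."""
    tokens = s.split(delimiter)
    return tokens[pos] if 0 <= pos < len(tokens) else ""
```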
http://palmergames.com/Lexicon/Lexicon_1_69/function.GetTokenByPosition.html
CC-MAIN-2015-14
refinedweb
196
56.76
27 August 2010 16:29 [Source: ICIS news]

SAO PAULO (ICIS)--Chemical output rose in July to the highest level since January 2000, according to the association. This year has seen greater domestic demand than in 2009, Abiquim said.

When compared with June, July output increased 9.05%, Abiquim said. July domestic sales grew 12.25% compared with June, while the year over year figure was not revealed.

For the period of January through July, industrial chemical output increased by 10.56% year over year, and domestic sales grew by 8.05%, the trade group said.

Abiquim said figures were
http://www.icis.com/Articles/2010/08/27/9389036/brazil-july-chemical-output-rises-3.39-year-on-year.html
CC-MAIN-2014-42
refinedweb
102
71.21
calagrid not working Datagrid not working The code here is working fine, apart from the fact that that I'm using netbeans 6.5 and the servlet v2.5 and struts 1.1.... working. please help me out Struts Struts When Submit a Form and while submit is working ,press the Refresh , what will happen in Struts...: calender in struts - Struts calender in struts when i execute the following code ,that is working properly if i convert to struts html tags that code is not working please help me to rectify the UITapgesturerecognizer not working UITapgesturerecognizer not working uitapgesturerecognizer Error - Struts Error Hi, I downloaded the roseindia first struts example and configured in eclips. It is working fine. But when I add the new action and I create the url for that action then "Struts Problem Report Struts has detected Why this is not working...? Why this is not working...? import java.util.*; public class Family { int size_of_family=0; public Person[] members=new Person[size_of_family]; Person d = new Person(); public Family (int size_of_family){ members = new 2 Validation in URL. I am unable to get the validator tags working for the newuser registration page. After the validator tags are not working. Even if I make a null...Struts 2 Validation Hello,I have been learning struts. I have Dynamic-update not working in Hibernate. Dynamic-update not working in Hibernate. Why is dynamic update not working in hibernate? Dynamic-update is not working. It means when you are running your update, new data is added in your table in place Working of POS Terminal Working of POS Terminal Hi there, thanks for this post. Just curious how actual POS terminal will interact with PHP. PHP doesn't provide hardware interaction. Thanks for reply. Regards UIWebView zoom not working UIWebView zoom not working Hi, I don't know why UIWebView zoom not working? Tell the solution. Thanks Hi, Open the .xib file and set scalesPageToFit to YES. Thanks Struts Tutorials . 
Using the Struts Validator Follow along as Web development expert Brett... application programming.With the Validator, you can validate input in your Struts... application development using Struts. I will address issues with designing Action php <? ?> tag not working php tag not working why PHP tags not working in my application? This might happen when your shortopentag is turned off. So, you will have to turn it on by changing the status of this PHP tag in php.ini file. Here text struts struts Hi what is struts flow of 1.2 version struts? i have struts applicatin then from jsp page how struts application flows Thanks Kalins Naik Please visit the following link: Struts Tutorial working of a div tag in html working of a div tag in html !DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> Multi Threading is not working in that why...? Multi Threading is not working in that why...? import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.util.Date; public
http://roseindia.net/tutorialhelp/comment/17143
CC-MAIN-2015-22
refinedweb
511
60.41
Hello, I am an absolute beginner that just started, and i need help with online course homework.

Instruction (Using Pseudocode): Pseudocode with Input and Output. The goal is to write a program that asks the user to answer a joke and then print the response to the screen. The pseudocode for this program could look like:

Input: Ask the user what computers like to eat.
Output: Display two messages. One will state the user's guess. The other will share the correct answer.

My code:

#Pseudocode Sample
def main():
    answer = input("What part of a computer does a spider use?")
    print("Your guess was" + answer + ".")
    print("The correct answer was webcam."

main()

Output:

ParseError: bad input on line 8
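The parse error is the unclosed parenthesis on the final print() call. A fixed sketch of the same program (splitting the messages into a small helper is my addition, so they can be checked without typing input; it also fixes the missing space after "was"):

```python
# The ParseError comes from the missing ')' on the last print() call.
def responses(answer):
    # Build the two messages the assignment asks for.
    return ["Your guess was " + answer + ".",   # note the space after "was"
            "The correct answer was webcam."]   # ')' closed this time

def main():
    answer = input("What part of a computer does a spider use? ")
    for line in responses(answer):
        print(line)

# main() is only called when running interactively.
```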
https://www.freecodecamp.org/forum/t/i-need-help-for-my-online-course-homework/228639
CC-MAIN-2018-43
refinedweb
119
76.72
pci.ids(5)                     The PCI Utilities                    pci.ids(5)

NAME
       pci.ids - list of known identifiers related to PCI devices

DESCRIPTION
       Devices on the PCI bus are identified by a combination of a vendor ID
       (assigned by the PCI SIG) and a device ID (assigned by the vendor).
       Both IDs are 16-bit integers and the device itself provides no
       translation to a human-readable string. In addition to the vendor and
       device IDs, devices also report several other identifiers:

       · Device class and subclass (two 8-bit numbers)

       · Programming interface (an 8-bit number whose meaning is specific to
         the subclass)

       · Subsystem, which identifies the assembly in which the device is
         contained. A typical example is an Ethernet add-in card: the device
         is the Ethernet controller chip, while the card plays the role of
         the subsystem. Subsystems have their own vendor ID (from the same
         namespace as device vendors) and subsystem ID. Generally, the
         meaning of the subsystem ID depends on the device, but there are
         cases in which a single subsystem ID is used for many devices -
         e.g., laptop motherboards.

       The PCI utilities use the pci.ids file to translate all these numeric
       IDs to strings. The pci.ids file is generated from the PCI ID
       database, which is maintained at ⟨⟩. If you find any IDs missing from
       the list, please contribute them to the database. You can use the
       update-pciids command to download the current version of the list.
       Alternatively, you can use lspci -q to query the database online.

       The pci.ids file is a text file in plain ASCII, interpreted line by
       line. Lines starting with the hash sign are treated as comments and
       ignored. Comments regarding a specific entry are written immediately
       before the entry.

       Vendor entries start with a 4-digit hexadecimal vendor ID, followed by
       one or more spaces, and the name of the vendor extending to the end of
       the line. Device entries are placed below the vendor entry.
       Each device entry consists of a single TAB character, a 4-digit
       hexadecimal device ID, followed by one or more spaces, and the name of
       the device extending to the end of the line. Subsystem entries are
       placed below the device entry. They start with two TAB characters, a
       4-digit hexadecimal vendor ID (which must be defined elsewhere in the
       list), a single space, a 4-digit hexadecimal subsystem ID, one or more
       spaces, and the name of the subsystem extending to the end of the
       line.

       Class entries consist of "C", one space, a 2-digit hexadecimal class
       ID, one or more spaces, and the name of the class. Subclasses are
       placed below the corresponding class, indented by a single TAB,
       followed by a 2-digit hexadecimal subclass ID, one or more spaces, and
       the name of the subclass. Programming interfaces are below the
       subclass, indented by two TABs, followed by a 2-digit hexadecimal
       prog-if ID, one or more spaces, and the name.

       There can be device-independent subsystem IDs, although the web
       interface of the database does not support them yet. They start with a
       subsystem vendor line consisting of "S", one space, and a 4-digit
       hexadecimal vendor ID (which must correspond to an already listed
       vendor). Subsystems follow on subsequent lines, each indented by one
       TAB, followed by a 4-digit hexadecimal subsystem ID, one or more
       spaces, and the name of the subsystem.

       To ensure extensibility of the format, lines starting with an
       unrecognized letter followed by a single space are ignored, and so are
       all following TAB-indented lines.

SEE ALSO
       lspci(8), update-pciids

                                                                    pci.ids(5)

Pages that refer to this page: pcilib(7), lspci(8)
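The indentation rules above can be checked mechanically. A small sketch of mine (not part of pciutils) that classifies lines of a pci.ids-style file purely by their leading characters:

```python
# Classify a line of a pci.ids-style file using the indentation rules
# described above. Labels like "device-or-subclass" reflect that one TAB
# means "device" under a vendor but "subclass" under a class entry.
def classify(line):
    if not line.strip() or line.startswith("#"):
        return "comment"
    if line.startswith("C "):
        return "class"
    if line.startswith("S "):
        return "subsystem-vendor"
    if line.startswith("\t\t"):
        return "subsystem-or-progif"
    if line.startswith("\t"):
        return "device-or-subclass"
    return "vendor"

sample = [
    "# comment line",
    "8086  Intel Corporation",
    "\t1237  440FX - 82441FX PMC [Natoma]",
    "\t\t8086 1237  Example subsystem",
    "C 06  Bridge",
]
print([classify(l) for l in sample])
# → ['comment', 'vendor', 'device-or-subclass', 'subsystem-or-progif', 'class']
```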
https://www.man7.org/linux/man-pages/man5/pci.ids.5.html
CC-MAIN-2020-29
refinedweb
606
52.09
The tkinter package is the standard Python interface to the Tk GUI toolkit.

Most of the time, the tkinter module is all you really need, but a number of additional modules are available as well, tkinter.constants being one of the most important. Importing tkinter will automatically import tkinter.constants, so, usually, to use Tkinter all you need is a simple import statement:

import tkinter

Or, more often:

from tkinter import *

For more extensive information on the packer and the options that it can take, see the man pages and page 183 of John Ousterhout's book. For example:

class App(Frame):
    def __init__(self, master=None):
        Frame.__init__(self, master)

A number of widgets require "index" parameters to be passed. These are used to point at a specific place in a Text widget, or to particular characters in an Entry widget, or to particular menu items in a Menu widget. Entry widgets have options that refer to character positions in the text being displayed. You can use these tkinter functions to access these special points in text widgets.

Some options and methods for menus manipulate specific menu entries. Anytime a menu index is needed for an option or a parameter, you may pass in:
https://docs.python.org/3.2/library/tkinter.html
CC-MAIN-2018-05
refinedweb
180
54.02
The UPC Specification does not address language interoperability issues, and so the interfaces described here should be considered Berkeley UPC extensions.

NOTE: if -pthreads is used to generate UPC executables, differences emerge between C and UPC global variables, and function calls become the only interface from other languages into UPC. See the Programming Hybrid Applications with Pthreads section.

The 'main()' function can live either in UPC code, or in an object written in another language. If 'main' is in a UPC file, the UPC runtime is bootstrapped normally, and no special logic is needed at program exit. If 'main()' does not live in UPC code, 'bupc_init()' or 'bupc_init_reentrant()' must be called at startup to bootstrap the Berkeley UPC runtime, and 'bupc_exit()' should be used at program exit. These functions are available by #including <bupc_extern.h>:

    #include <bupc_extern.h>

    int main(int argc, char** argv)
    {
        int exitcode = 0;
        bupc_init(&argc, &argv);
        ... rest of program...
        bupc_exit(exitcode);
    }

The 'bupc_init_reentrant()' function must be used when '-pthreads' is used. It also supports non-pthreaded applications, and so it is preferred for portability. The call to 'bupc_init()' (or 'bupc_init_reentrant()') should be the first statement in 'main()'. The semantics of any code appearing before it is implementation-defined (for example, it is undefined how many threads of control will run that code, or whether stdin/stdout/stderr are functional). Similarly, no code should follow a call to 'bupc_exit()': it is not defined if such code will even be reached (hint: it won't). See the comments in <bupc_extern.h> for more information.

If regular 'exit()' (or a return from main) is performed from non-UPC code, instead of 'bupc_exit()', the result is as if 'upc_global_exit()' had been called--i.e., program termination will occur immediately, for all threads, without the final UPC barrier that occurs during normal UPC program termination.
Calling '_exit' is strongly discouraged--program behavior is undefined in this case, and even process cleanup is not guaranteed (i.e. zombie processes may be left behind).

The path needed to include <bupc_extern.h> is always available by calling 'upcc -print-include-dir'. Thus a C file that #includes <bupc_extern.h> can be portably compiled with

    gcc -I`upcc -print-include-dir` -c main.c

To have C code refer to a UPC variable or call a UPC function, have the C file #include a header file that declares the UPC variables/functions (none of which may contain any UPC-specific constructs, like 'strict'), and then the C code can refer to them normally. You may use the '__UPC__' macro (always defined during UPC compilation) to hide UPC-specific constructs:

    #ifndef MY_UPC_HEADER_H
    #define MY_UPC_HEADER_H

    /* Variables/Routines exported to C routines */
    extern int foo;
    int get_bar();
    void set_bar(int newval);

    /* UPC-specific declarations */
    #ifdef __UPC__
    extern shared int bar;
    extern strict shared long myarray[THREADS * 2];
    #endif /* __UPC__ */

    #endif /* MY_UPC_HEADER_H */

Your C objects must be compiled with a C compiler that is compatible with the one used by Berkeley UPC (use 'upcc -version' to see which compiler upcc uses).
Then pass both your C and UPC object files (and/or any C libraries you wish to use) to upcc to link:

    upcc -c foo.upc
    gcc -c bar.c
    upcc foo.o bar.o -lm
    upcrun -n 2 a.out

    #ifndef MY_UPC_HEADER_H
    #define MY_UPC_HEADER_H

    /* Variables/Routines exported to C/C++ routines */
    #ifdef __cplusplus
    extern "C" {
    #endif

    extern int foo;
    int get_bar();
    void set_bar(int newval);

    #ifdef __cplusplus
    } /* end "extern" */
    #endif

    /* UPC-specific declarations */
    #ifdef __UPC__
    extern shared int bar;
    extern strict shared long myarray[THREADS * 2];
    #endif /* __UPC__ */

    #endif /* MY_UPC_HEADER_H */

Similarly, a C++ header that is also #included by UPC code would need to use the same extern "C" wrapper around any symbols intended for use by UPC, and any C++-specific constructs would need to be hidden from UPC by putting them within an '#ifdef __cplusplus' block. C++ objects must be built with a C++ compiler that is compatible with the C compiler used by Berkeley UPC. At link time, 'upcc' additionally needs to be pointed at the C++ linker that should be used for the final link step: use the '-link-with' flag for this:

    upcc -c foo.upc
    g++ -c bar.cc
    upcc -link-with=g++ foo.o bar.o -lsomeC++library
    upcrun -n 2 a.out

NOTE: if -pthreads is used to generate UPC executables, differences emerge between C and UPC global variables, and function calls become the only interface from other languages into UPC. See the Programming Hybrid Applications with Pthreads section.

As with C objects, you should generally be able to simply pass FORTRAN objects/libraries to the upcc linker:

    upcc -c foo.upc
    f77 -c bar.f
    upcc foo.o bar.o -lsomeFortranLibrary
    upcrun -n 2 a.out

Note: different systems use different methods for linking together C and FORTRAN. You may need to consult your system documentation. You may select a different linker for upcc to use via the '-link-with' flag, and/or pass any linker-specific flags needed via upcc's '-Wl,' flag. See the upcc man page.
Note: at present, compiling mixed MPI/UPC applications requires that the Berkeley UPC runtime be configured and built with 'CC' and 'MPI_CC' set to a C MPI compiler (and 'CXX' set to a C++ MPI compiler, unless '--disable-udp' is passed). Such a runtime will always use an MPI compiler to compile UPC programs, even those which do not use MPI: you may thus wish to keep a separate runtime installation specifically for compiling MPI/UPC programs. Finally, MPI interoperability is not provided for all systems and network types, and '-pthreads' is not supported. See the 'INSTALL' document in the runtime for more information.

UPC files which contain calls to MPI functions must '#include <mpi.h>', and must be compiled with the '-uses-mpi' flag. If any objects in a UPC application contain calls to MPI, the flag must be passed to upcc at link time:

    upcc -uses-mpi -c foo.upc
    mpicc -c bar.c
    upcc -uses-mpi foo.o bar.o

On some systems, '-lmpi' must also be passed to 'upcc' in order for MPI symbols to be resolved. If C++ MPI objects are linked into a UPC application, the '-link-with' flag must additionally be used at link time to point upcc at a C++ MPI compiler to use for linking.

    upcc -uses-mpi -c foo.upc
    mpic++ -c bar.cc
    upcc -uses-mpi -link-with=mpic++ foo.o bar.o

When '-uses-mpi' is used to link a UPC application, the 'MPI_Init()' and 'MPI_Finalize()' calls are handled by the UPC runtime--these calls should not appear in client code. If 'main()' exists in a non-UPC file, calls to 'bupc_init()' and 'bupc_exit()' should replace any calls to the MPI init/finalize functions. Certain UPC network types (notably 'mpi' and 'vapi') use MPI under the covers. For this reason, non-UPC MPI objects should be compiled with the same MPI compiler family as is used by upcc. To see the MPI C compiler used by upcc, use 'upcc -print-mpicc'. Both MPI and UPC cause network communication. At present, the network traffic generated by MPI is not coordinated with that generated by UPC code.
As a result, it is quite easy to cause network deadlock when mixing MPI and UPC, unless the following protocol is strictly observed:

    upcc -c upc.upc
    gcc -c c.c
    g++ -c c++.cc
    mpiCC -c mpi.cc
    f77 -c fortran.f
    upcc -uses-mpi -link-with=mpiCC upc.o c.o c++.o mpi.o fortran.o -lFortranLib -lC++lib -lClib
    upcrun -n 2 a.out

The result of this is that when '-pthreads' is used, C/C++/FORTRAN code can not refer to UPC global variables. However, UPC functions which have a C interface (no UPC-specific constructs) can still be called. Note, though, that UPC functions should only be called from pthreads that were created by the UPC runtime: pthreads created by user calls to 'pthread_create' cannot safely call UPC routines.

When '-pthreads' is used in a hybrid application, any non-UPC code linked into the application must be thread-safe (i.e. capable of being referenced by multiple threads at the same time), if you plan to call it concurrently from more than one UPC thread. Many MPI libraries are not safe to use with pthreads, and those that are often require special initialization. At present Berkeley UPC does not support hybrid MPI/UPC applications with '-pthreads', but support may be added for reentrant MPI libraries in the future. See the Using Pthreads section of the UPC User's Manual for more details on '-pthreads' usage.
http://upc.lbl.gov/download/berkeley_upc-2.4.0/docs/html/user/interoperability.html
crawl-001
refinedweb
1,410
64
How to use C# if else statements

The conditional statement if..else in C# is used to check a condition given in the head of the if statement and to make a decision based on that condition. The condition is built from comparison operators as well as logical operators. The else branch is optional, so the statement can be used in two ways:

if (condition)
    statement;

if (condition)
    statement;
else
    statement;

If the condition is true, control goes to the body of the if block, that is, the program executes the code inside the if block. If the condition is false, control goes to the next level: if you provide an else block, the program executes the code block of the else statement; otherwise control goes to the next line of code. If you want to check more than one condition at the same time, you can use an else if statement:

if (condition)
    statement;
else if (condition)
    statement;
else
    statement;

Just take a real-time example: we have a mark list and we want to analyze the grading of each student. In this case we can use if..else conditional statements. Following are the grading rules for the student:

1) If the marks are greater than or equal to 80, the student gets higher first class
2) If the marks are less than 80 and greater than or equal to 60, the student gets first class
3) If the marks are less than 60 and greater than or equal to 40, the student gets second class
4) If all the above conditions fail and the marks are less than 40, the student has failed.

Now let's implement these conditions in a C# program.
1:  if (totalMarks >= 80) {
2:      MessageBox.Show("Got Higher First Class ");
3:  }
4:  else if (totalMarks >= 60) {
5:      MessageBox.Show("Got First Class ");
6:  }
7:  else if (totalMarks >= 40) {
8:      MessageBox.Show("Just pass only");
9:  }
10: else {
11:     MessageBox.Show("Failed");
12: }

Line 2: if the total marks are greater than or equal to 80, show the message "Got Higher First Class"
Line 4: check whether the total marks are greater than or equal to 60
Line 5: if so, show the message "Got First Class"
Line 7: check whether the total marks are greater than or equal to 40
Line 8: if so, show the message "Just pass only"
Line 10: if those three conditions failed, the program goes to the next coding block
Line 11: if all conditions fail, show the message "Failed"

using System;
using System.Windows.Forms;

namespace WindowsApplication1
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            int totalMarks = 59;

            if (totalMarks >= 80) {
                MessageBox.Show("Got Higher First Class ");
            }
            else if (totalMarks >= 60) {
                MessageBox.Show("Got First Class ");
            }
            else if (totalMarks >= 40) {
                MessageBox.Show("Just pass only");
            }
            else {
                MessageBox.Show("Failed");
            }
        }
    }
}

In this C# example totalMarks is 59, so when you execute the program you will get "Just pass only" in the message box.
http://csharp.net-informations.com/statements/csharp-if-else.htm
CC-MAIN-2016-26
refinedweb
475
67.49
Hi all! I'm trying to build Beast, a sound composer/synth. I have no problems with the building process, except that it needs OSS devices to build. So, someone will have SND_PCM_OSS statically compiled, maybe others (like me) have a module, and someone won't have the support at all! ALSA support is a separate add-on which has to be compiled later. So, what am I supposed to do? Put a disclaimer in the PKGBUILD? Check if uid==0, then add something that checks the module's existence, and if yes try to load it?

Question #2 (I think it's a FAQ, but I don't want to mess things up too much, so I prefer to ask again): installation tries to overwrite three files in /usr/share/mime: XMLnamespaces, globs and magic. Should I let it do that? I think not. But maybe it adds necessary info to run the program... I don't know.

Thanks for your answers, I looked in every doc file I could find and all over the program's site, but I couldn't find anything useful...

dreaming in digital / living in realtime / thinking in binary / talking in ip / welcome to our world...

Offline

OSS: compile the program using the snd_pcm_oss module, and then in the post_install function make it say that you need OSS to run the program. With the base kernel everybody has the snd_pcm_oss module.

MIME: the files that conflict are owned by the package shared-mime-info (run pacman -Qo /path/to/file to check this), so you should remove these files from your package and add a dependency on shared-mime-info. Then compare the files. If the files your package wants to install have some info that isn't in the shared-mime-info package's files, then open a bug report.

Offline
I don't know if OSS support is really a requirement to build BEAST, but you could make a note in the top of the PKGBUILD, it's very usual to notice those things in top of the PKGBUILD with some comments. The PKGBUILD is one of the first places to look at if something doesn't work. About the MIME stuff: you have to check if the ./configure option has an option to disable mime updates, or disable database updates. It depends on the package. In the postinstall and postremove, you need to run update-mime-database to make sure MIME types are correct after installation or removal. Offline Pajaro: maybe my explaination wasn't very good... what I meant is that you *can* install the package, you can't *build* it if you don't have that module/support. If you use ALSA, you can just install Bse-ALSA (another package I'm building, which depends on Beast). About the MIME files, they're different. XMLnamespaces is just a line (and Beast's version is empty, so I could just ignore it). Looking at magic and globs, they should be appended to the originals, I think. Or should the ahred-mime-info package include those information anyway? JGC: the problem is just this, we shouldn't build as root, but we must if we want it to compile fine everywhere. Fakeroot is usually good, but it's fake I'm sure OSS is a requirement 'cause you can't have ALSA at compile-time, and the only useful ./configure option is --prefix. When I didn't have the module loaded, the build failed saying I didn't have any sound device for which to build. I just had to load the module, and all went fine. As I just said, no options about MIME, read what I wrote to Pajaro. dreaming in digital / living in realtime / thinking in binary / talking in ip / welcome to our world... Offline
https://bbs.archlinux.org/viewtopic.php?id=9644
CC-MAIN-2018-17
refinedweb
674
73.27
> > This is not my point. Whether you have an order by or not,
> > lucene will compute the score of all hits anyway. So, no order by does
> > not mean that lucene does not order: it orders on score (but of course
> > you already know that :-) ) So, my thing holds with and without
> > order by.

Marcel Reutegger wrote:
> WRT lucene this is correct. but the same is not true for JCR.
> if there is no order by the implementation is free to return
> the nodes in any order.

True, but lucene will sort it anyway according to score AFAICS. So, the implementation is free to return it in any order, but AFAIU, lucene still returns the Hits sorted according to score. And, this is of course important in the case of text searches with contains().

> I did a quick test and wrote a custom IndexSearcher (see
> below), which collects only the first n matching documents.
> the test query then executed much faster because the number
> of DescendantSelfAxisScorer.isValid() calls dropped drastically.
>
> There is one drawback though. you don't know the total number
> of results. in this case it might be OK to return -1 for the
> RangeIterator.getSize().

Yes, true. And, we have to take into account that people might have an AccessManager

> the order by is more difficult to solve. what we could try is
> order the result of the sub query first and then run the
> descendant axis test against the context nodes.
> DescendantSelfAxisQuery does not add nodes to the sub query
> but only limits the set subsequent ordering can be skipped.
> this requires that we need to pass along ordering information
> with the scorer. e.g. index-order, relevance, property.

That is what I meant with the 'lazy filter', in which we start filtering according to paths *after* the fast initial result set returned by lucene.

> In any case we should create a jira issue for it.
I can fetch some snippets of this discussion and add it to JCR-1196 [Queries for DescendantSelfAxisWeight/ChildAxisQuery are currently very heavy and become slow pretty quickly], or do you want a new issue?

Ard

> regards
> marcel
>
>
> public class JackrabbitIndexSearcher extends IndexSearcher {
>
>     private final IndexReader reader;
>
>     public JackrabbitIndexSearcher(IndexReader r) {
>         super(r);
>         this.reader = r;
>     }
>
>     // inherit javadoc
>     public TopDocs search(Weight weight, Filter filter, int nDocs)
>             throws IOException {
>         TopDocCollector collector = new TopDocCollector(nDocs);
>         Scorer scorer = weight.scorer(reader);
>         if (scorer != null) {
>             while (scorer.next() && nDocs-- > 0) {
>                 collector.collect(scorer.doc(), scorer.score());
>             }
>         }
>         return collector.topDocs();
>     }
> }
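Stripped of the Lucene types, the early-termination trick in the custom IndexSearcher above is just this (a plain-Java sketch of mine, no Lucene involved):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the early-termination idea: stop consuming matches once nDocs
// have been collected, instead of scoring every hit in the index.
class TopN {
    static <T> List<T> firstN(Iterator<T> matches, int nDocs) {
        List<T> collected = new ArrayList<>();
        while (matches.hasNext() && collected.size() < nDocs) {
            collected.add(matches.next());
        }
        // the total number of matches stays unknown, hence "getSize() == -1"
        return collected;
    }
}
```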
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200711.mbox/%3CF8E386B54CE3E6408F3A32ABB9A7908A57599D@hai02.hippointern.lan%3E
CC-MAIN-2015-11
refinedweb
409
66.33
hello... u might think this question is too basic...but i really want a good explanation on 'this' object reference in java. although i read about it in so many books but up till now i still couldnt really understand 'this'. can someone please help me??? tx!

No question is stupid, essentially! The "this" keyword basically refers to the current instance of the class. For example, if you have a book class with the id and name variables then this.id would refer to the id variable and this.name would refer to the name variable. So the effect of String theName = this.name; would be the same as doing String theName = name;. This may seem silly but there are greater uses for it, which I cannot really think of at the moment (in the middle of doing Networks revision for my 2nd year Computer Science exams). Hope that helps, if not E-mail me at meethoss@btinternet.com.

Meethoss

thanx alot for ur explanation Meethoss! u have helped me to understand 'this' with ur simple example..... all da best for ur exams!

yeah, "this" is sometimes a headache...

public class About_THIS
{
    String string;

    public About_THIS( String string )
    {
        ......
    }

    public About_THIS( String string, String string2 )
    {
        this( string2 );
        string = this.string;
    }

the above is clear, right? but what if you have an inner class?

    private class Inner_Class //this is inside the class above...
    {
        String string;

        Inner_Class( String string )
        {
            string = this.string;
        }
    }
}

-- this would obviously refer to the inner class's string variable, not the string in the class that it's in. so what if you want to refer to the outer class's string variable? i once encountered this and had to change the inner class's variable name so i won't have to puzzle it out. but i have read a solution to this and i can't remember it now. would someone please refresh my memory? sheesh........
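The construct the last poster is trying to recall is Java's qualified this: Outer.this refers to the enclosing instance from inside an inner class. A small sketch (class and field names are mine):

```java
// Qualified this: from inside an inner class, Outer.this.field reaches the
// enclosing instance's field even when the names shadow each other.
class Outer {
    String s = "outer";

    class Inner {
        String s = "inner";

        String both() {
            // plain this.s is the inner field; Outer.this.s is the outer one
            return this.s + "/" + Outer.this.s;
        }
    }

    public static void main(String[] args) {
        Outer o = new Outer();
        System.out.println(o.new Inner().both()); // prints inner/outer
    }
}
```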
=)
http://forums.devx.com/showthread.php?138151-this-object-reference&p=408302
CC-MAIN-2015-18
refinedweb
338
77.13
The structure is simple, still a few interesting points are touched.

Client

Given io, the app ASIO io_context, and the server hostname as a string, the client tries this block, and on failure just outputs the exception to the console.

namespace ba = boost::asio;
using ba::ip::tcp;

// ...
tcp::socket socket{ io };                                      // 1
tcp::resolver resolver{ io };
ba::connect(socket, resolver.resolve(host, ECHO_PORT_STR));    // 2
// ...
ba::write(socket, ba::buffer(request, reqLen));                // 3
char reply[CLIENT_MAX_LEN];                                    // 4
size_t repLen = ba::read(socket, ba::buffer(reply, reqLen));   // 5
// ...

1. Create an ASIO TCP socket and a resolver on the current io_context.
2. Then resolve() the resolver on the host and port of the echo server (in my case, localhost:50014), and use the resulting endpoints to establish a connection on the socket.
3. If the connection holds, write to the socket the data we previously put in the char buffer named request, for a size of reqLen.
4. We reserve a comfortably large buffer in which to store the server reply. Since we are writing an echo application, we know that the size of the data we are about to get back from the server should be the same as the size we have just sent. This simplifies our code to the point that we can do a single read for the complete data block.
5. Use the socket for reading from the server. We use the buffer, and the size of the data we sent, as discussed in (4).

At this point we could do whatever we want with the data we read in reply with size repLen.

Server loop

Once we create an acceptor on the ASIO io_context, specifying as endpoint the IP protocol we want (here I used version 4) and the port number, we loop forever, creating a new socket through a call to accept() on the acceptor each time a request comes from a client, and passing it to the session() function that is going to run in a new thread.
tcp::acceptor acceptor{ io, tcp::endpoint(tcp::v4(), ECHO_PORT) };
for (;;) {
    std::thread(session, acceptor.accept()).detach();
}

Notice that each thread created in the loop survives the exiting of the block only because it is detached. This is both handy and frightening. In production code, I would probably push them into a collection instead, so that I could explicitly kill any one that stopped behaving properly.

Server session

Since we don't know the size of the data sent by the client, we should be ready to split it up and read it in chunks.

for (;;) {
    char data[SERVER_MAX_LEN];                                 // 1
    bs::error_code error;
    size_t len = socket.read_some(ba::buffer(data), error);    // 2
    if (error == ba::error::eof) {
        return;
    } else if (error) {
        throw bs::system_error(error);
    }
    ba::write(socket, ba::buffer(data, len));                  // 3
}

1. To better see the effect, I have chosen a ridiculously small size for the server data buffer.
2. The data coming from the client is split into chunks by read_some() on the socket created by the acceptor. When the read is completed, read_some() sets the passed Boost system error to the eof error. When we detect it, we know that we can terminate the session. Any other error says that something went wrong.
3. If read_some() set no error, we use the current chunk of data to do what the server should do. In this case, we just echo it back to the client.

Full C++ code on GitHub. The original source is the official Boost ASIO tutorial, divided in two parts, client and server.
http://thisthread.blogspot.com/2018/03/boost-asio-echo-tcp-synchronous-client.html
CC-MAIN-2018-43
refinedweb
588
62.58
Solution

Arithmetic Slices II - Subsequence

Dynamic Programming Solution

- sub_problem[i,d] = number of arithmetic sequences of length 2 or more which end at index i and have difference d. Note we said length 2 or more, not 3.
- Say we have an array called cache where every element of cache is a dictionary. The key for the dictionary is the difference d and the value is the number of sequences which end at i with difference d.
- sub_problem[i,d] = sub_problem[j,d] + 1 iff A[i]-A[j] == d. The 1 in this equation represents the new subsequence of size 2 comprising A[j], A[i]. Imagine the array [2,2,3,4]. Say we are at index 2. Now we want to find the distribution of all sequences which end at index 2. For indices 1 and 0, we notice the size 2 sequences (2,3) and (2,3). These two are added to the count of all subsequences ending at i.
- We maintain a count of all size 2 subsequences and call it subs_len_2. Once we are done processing an index, we add all subsequences which end at index i to result. Finally we remove the count of size 2 subsequences from result and return the result.

from collections import defaultdict

class Solution(object):
    def numberOfArithmeticSlices(self, A):
        """
        :type A: List[int]
        :rtype: int
        """
        cache = [defaultdict(int) for _ in range(len(A))]
        result, subs_len_2 = 0, 0
        for i in range(1, len(A)):
            for j in range(i-1, -1, -1):
                curr_diff = A[i]-A[j]
                cache[i][curr_diff], subs_len_2 = cache[i][curr_diff]+1, subs_len_2+1
                if curr_diff in cache[j]:
                    cache[i][curr_diff] += cache[j][curr_diff]
            result += sum(cache[i].values())
        return result - subs_len_2

Succinct Code

- The above code can be made a bit more clever so that we do not need to maintain a count of size 2 subsequences.
class Solution1(object):
    def numberOfArithmeticSlices(self, A):
        """
        :type A: List[int]
        :rtype: int
        """
        cache = [{} for _ in range(len(A))]
        result = 0
        for i in range(1, len(A)):
            for j in range(i-1, -1, -1):
                curr_diff = A[i]-A[j]
                cache[i].setdefault(curr_diff, 0)
                cache[i][curr_diff] += 1
                if curr_diff in cache[j]:
                    cache[i][curr_diff] += cache[j][curr_diff]
                    result += cache[j][curr_diff]
        return result
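A quick standalone check of the succinct variant on the [2,2,3,4] example from the explanation (re-typed here as a plain function for illustration):

```python
# Standalone re-statement of the succinct solution above, checked against
# the [2,2,3,4] example: the two valid slices are (2,3,4) twice, once
# through each of the leading 2s.
def count_slices(A):
    cache = [{} for _ in range(len(A))]
    result = 0
    for i in range(1, len(A)):
        for j in range(i - 1, -1, -1):
            d = A[i] - A[j]
            cache[i][d] = cache[i].get(d, 0) + 1
            if d in cache[j]:
                cache[i][d] += cache[j][d]
                result += cache[j][d]  # counts only subsequences of length >= 3
    return result

print(count_slices([2, 2, 3, 4]))  # → 2
```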
https://discuss.leetcode.com/topic/74727/python-solution-with-detailed-explanation
CC-MAIN-2017-34
refinedweb
384
62.17
27 November 2012 13:29 [Source: ICIS news]

HOUSTON (ICIS)--Jacobs has won a contract for work on a sulphur recovery unit (SRU) awarded by VKG. Jacobs said that the unit will be part of VKG's investment to upgrade a shale oil refinery complex in Estonia. Under the deal, Jacobs will license its technology and will provide the basic design package, training, as well as commissioning services for the new SRU, it said. Financial details and timelines for construction and completion were not disclosed. VKG is described as an Estonian oil shale processor. The company's activities span the entire shale oil chain - from mining and processing to marketing.
http://www.icis.com/Articles/2012/11/27/9618686/us-jacobs-wins-contract-for-sulphur-recovery-unit-in.html
Today I'll show how to build a Windows Service in C# using Visual Studio 2010.

First, open Visual Studio and create a Windows Service project. Visual Studio will create a WindowsService.cs file; let's add some code to it:

```csharp
public WindowsService()
{
    this.ServiceName = "My Windows Service";
    this.EventLog.Source = "My Windows Service";
    this.EventLog.Log = "Application";

    this.CanHandlePowerEvent = true;
    this.CanHandleSessionChangeEvent = true;
    this.CanPauseAndContinue = true;
    this.CanShutdown = true;
    this.CanStop = true;

    if (!EventLog.SourceExists("My Windows Service"))
        EventLog.CreateEventSource("My Windows Service", "Application");
}

static void Main()
{
    ServiceBase.Run(new WindowsService());
}

protected override void OnStart(string[] args)
{
    base.OnStart(args);
    File.Create(@"C:\servicefile.txt");
}
```

My Windows Service simply creates a file when it starts; you can write any other logic instead.

Now we need to add an installer for our service. Right-click the project to add another class, call it WindowsServiceInstaller.cs, and put this code inside it (the constructor wires up a process installer and a service installer; the service name must match the one set in WindowsService):

```csharp
[RunInstaller(true)]
public class WindowsServiceInstaller : Installer
{
    public WindowsServiceInstaller()
    {
        ServiceProcessInstaller processInstaller = new ServiceProcessInstaller();
        ServiceInstaller serviceInstaller = new ServiceInstaller();

        // Run the service under the local system account
        processInstaller.Account = ServiceAccount.LocalSystem;

        // Service information; ServiceName must match WindowsService.ServiceName
        serviceInstaller.DisplayName = "My Windows Service";
        serviceInstaller.StartType = ServiceStartMode.Automatic;
        serviceInstaller.ServiceName = "My Windows Service";

        this.Installers.Add(processInstaller);
        this.Installers.Add(serviceInstaller);
    }
}
```

OK, we have the installer, so our service can be registered with Windows. Now we need to add to our solution a Setup Project, which will actually install and uninstall our Windows Service. Once you are done, right-click the setup project and choose Add Project Output:

Then click on Primary Output and choose your Windows Service project as shown below:

Then right-click the Setup Project and click View -> Custom Actions:

Add Primary Output to each folder in Custom Actions as shown below:

Now build your Windows Service project and then build the Setup Project. Once the build succeeds, you are ready to install your Windows Service.
Right-click the Setup project and choose Install (you can also choose Uninstall later when needed):

To check whether you have installed your Windows Service, run the Services manager and look for the service in the service list:

Download the source code of this example (Visual Studio 2010 project).

Thank you for this clear and understandable article. I usually find that it's simpler to explore something by example, rather than reading the guides.

Absolutely agree with you, Karina.

Thanks, very clear and concise.

Very easy to understand the first part, but could you explain how to get the Setup Project? I'm using VS 2012 and I do not have it. Any final solution with full source code using VS 2012 and InstallShield Limited Edition or Windows Installer XML?

I have never tried to create a service before. I get some errors; I think you assume some other code will be there. Can you provide the sample code? Thanks in advance.

You are welcome to download the sample code. The link is at the bottom of the article.

The install option is disabled. I am not able to install it.

Build your setup first, then try to install.

Nice to read your article! I am looking forward to sharing your adventures and experiences.
http://www.codearsenal.net/2012/07/windows-service-in-c-sharp-with-setup.html?showComment=1400224927160
Java Technology Fundamentals Learn about the Java programming language and platform, and how to create applications. 2008-05-09T04:03:44-07:00 Apache Roller Weblogger The 2008 JavaOne Conference: For New Developers dananourie 2008-04-24T11:25:30-07:00 2008-04-24T11:25:30-07:00 Read how new developers and students can benefit from the 2008 JavaOne Conference this year, including free entry for some. <p>by Dana Nourie</p> <p>The <a href="">2008 JavaOne Conference</a>, with events from May 5 to 9 in San Francisco, California, is a great place for new developers to learn about many <a href="">Java technologies</a> and see how these technologies fit together. In addition, you can learn about other technologies and scripting languages, such as <a href="">Ruby and Groovy</a>. </p> <p>The conference kicks off on Monday, May 5, with <a href="">Java University</a>, which consists of classes you can take to learn how to program for the Java platform and how to incorporate other technologies and use tools, such as the <a target="_blank" href="">NetBeans IDE</a>. </p> <p>The rest of the week is filled with technical sessions that last about one hour each and that cover many different topics. In addition, the JavaOne conference also has Birds-of-a-Feather (BOF) sessions where you can hear what fellow developers think, what they're doing professionally, and where they want to see technologies go. 
It's a week jam-packed with technical how-to and information that can be hard to find elsewhere.</p> <p>This year, Sun will allow some student developers to attend the <a href="">JavaOne conference for free!</a></p> <p>Recommended sessions for students include the following: <table width="96%" cellspacing="0" cellpadding="5" border="1" bgcolor="#ffffff" align="center"> <tbody><tr bgcolor="#5382a1"> <th align="left" colspan="1" class="middle_10"><font color="white">Session ID</font></th> <th align="left" colspan="1" class="middle_10"><font color="white">Session Title</font></th> <th align="left" colspan="1" class="middle_10"><font color="white">Speaker Name and Company</font></th> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-5925</td> <td valign="top" class="middle_10">A City-Driving Robotic Car Named Tommy Jr.</td> <td valign="top" class="middle_10">Paul Perrone, Perrone Robotics</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-5841</td> <td valign="top" class="middle_10">Project Aura: Recommendation for the Rest of Us</td> <td valign="top" class="middle_10">Stephen Green and Paul Lamere, Sun Microsystems</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6656</td> <td valign="top" class="middle_10">Extreme GUI Makeover: Swing Meets FX</td> <td valign="top" class="middle_10">Christopher Campbell and Shannon Hickey, Sun Microsystems</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6611</td> <td valign="top" class="middle_10">Filthy-Rich Clients: Filthier, Richer, Clientier</td> <td valign="top" class="middle_10">Romain Guy, Google; Chet Haase, Adobe</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-5286</td> <td valign="top" class="middle_10">Introduction to Web Beans</td> <td valign="top" class="middle_10">Gavin King, JBoss</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6169</td> <td valign="top" class="middle_10">Spring Framework 2.5: New and 
Notable</td> <td valign="top" class="middle_10">Rod Johnson, SpringSource</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6528</td> <td valign="top" class="middle_10">Listen and Speak: Teach Your Old Device New Tricks</td> <td valign="top" class="middle_10">Charles Hemphill, Conversay; Steve Rondel, Conversay</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6623</td> <td valign="top" class="middle_10">More "Effective Java"</td> <td valign="top" class="middle_10">Joshua Bloch, Google</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-5579</td> <td valign="top" class="middle_10">Closures Cookbook</td> <td valign="top" class="middle_10">Neal Gafter, Google</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-5165</td> <td valign="top" class="middle_10">Programming With Functional Objects in Scala</td> <td valign="top" class="middle_10">Martin Odersky, EPFL</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-5152</td> <td valign="top" class="middle_10">Overview of the JavaFX Script Programming Language</td> <td valign="top" class="middle_10">Christopher Oliver, Sun Microsystems</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6050</td> <td valign="top" class="middle_10">Comparing JRuby and Groovy</td> <td valign="top" class="middle_10">Neal Ford, ThoughtWorks</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-4842</td> <td valign="top" class="middle_10">Designing an MMORPG With Project Darkstar</td> <td valign="top" class="middle_10">Jeffrey Kesselman, Sun Microsystems</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6807</td> <td valign="top" class="middle_10">What's New in Ajax</td> <td valign="top" class="middle_10">Dion Almaer, Ajaxian; Ben Galbraith, MediaBank</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-6537</td> <td valign="top" class="middle_10">Applications for the Masses by the Masses: Why 
Engineers Are an Endangered Species</td> <td valign="top" class="middle_10">Girish Balachandran and Todd Fast, Sun Microsystems</td> </tr> <tr class="white"> <td valign="top" class="middle_10">TS-5249</td> <td valign="top" class="middle_10">The NetBeans Ruby IDE: You Thought Rails Development Was Fun Before</td> <td valign="top" class="middle_10">Brian Leonard and Tor Norbye, Sun Microsystems</td> </tr> </tbody></table> </p> <p>Additionally, you can attend fun sessions on how Java technologies are being used in the world of science: Mapping Mars (TS-6608); Universal Translator -- Breaking the Communication Barrier With JSAPI (TS-5908); Pushing Java OpenGL (JOGL) to the Limit With Stellarium (TS-4964); and Mars Rover Operations Imaging and Mapping With Java Technology (BOF-5044).</p> <p>If you're interested in gaming, check out Video Game Development on the Java Platform: Past, Present, and Future of Java Technology Games (BOF-5832); Creating Games on the Java Platform with the jMonkeyEngine (TS-5711); Using Comet to Create a Two-Player Web Game (BOF-6584); and Project Wonderland: A Toolkit for Building 3-D Virtual Worlds (TS-6125).</p> <p>Check the <a href="">2008 JavaOne Conference</a> home page for more information, including a complete <a href="">content catalog</a> searchable by keyword, speaker name, track, and other parameters.</p><p>If you can't make it to the conference this year, you can read about the sessions as <a title="2008 JavaOne Conference Blog" href="">Sun writers blog</a> from the trenches, sharing technical information and their impressions of the conference. </p><p>Hope to see you there! </p> Young Developers Section in the New to Java Programming Center dananourie 2008-04-15T14:56:31-07:00 2008-04-15T14:56:31-07:00 The <a href="" title="New to Jaa Programming Center">New to Java Programming Center</a> is delighted to present a new section just for <a href="" title="Young Developers">young developers</a>. 
<p>Programming isn't just for adults any longer. Young people are learning programming languages from the earliest ages and up. The <a href="" title="New to Jaa Programming Center">New to Java Programming Center</a> is delighted to present a new section just for <a href="" title="Young Developers">young developers</a>, that lists tools and web sites that focus on teaching young developers how to program using the Java programming language, as well as languages developed for ease of use.</p><p>See <a href="" title="Young Developers">Young Developers</a> in the <a href="" title="New to Jaa Programming Center">New to Java Programming Center.</a> </p> Building an Ajax Chat Room with the Ajax Transaction Dynamic Faces Component dananourie 2008-04-09T12:07:55-07:00 2008-04-09T12:07:55-07:00 In this <a href="" title="Build an Ajax Chat Room in NetBeans">tutorial</a>, you build an Ajax chat room web application with components that are themselves Ajax-unaware, also known as POJC (Plain Old JavaServer Faces Components). 
<p>In this <a href="" title="Build an Ajax Chat Room in NetBeans">tutorial</a>, you build an Ajax chat room web application with components that are themselves Ajax-unaware, also known as POJC (Plain Old JavaServer Faces Components).<br /><br />Contents<br />- Tutorial Requirements<br />- Creating the Project<br />- Configuring the Deployment Descriptor<br />- Adding the Dynamic Faces Component Library to the Project<br />- Adding Code to the Application Bean<br />- Adding Code to Store the Username<br />- Adding the transcript Property to Page1.java<br />- Creating the User Interface<br />- Configuring Ajax Transactions for Sending Comments and Polling<br />- Setting JavaScript Properties of the Body and Form Components<br />- Adding JavaScript<br />- Deploying the Project</p><p> </p><p><b>This tutorial works with the following technologies and resources</b></p> <table cellpadding="1" border="1"> <tbody> <tr> <td valign="top">JavaServer Faces Components/<br /> Java EE Platform</td> <td valign="top"><!-- <img src="../../../images/articles/60/web/spacer.png" alt="works with" height="15" hspace="3" width="14">1.2 with Java EE 5*<br> --> <img width="14" hspace="3" height="15" alt="works with" src="" />1.1 with J2EE 1.4 </td></tr> <tr> <td valign="top"><a href="">Travel Database</a></td> <td valign="top" colspan="4"><img width="14" hspace="3" height="15" alt="works with" src="" />Not Required</td> </tr> </tbody></table> <!-- END RESOURCE MATRIX --><!-- END INTRO -----------------------------------------------------------------------------------------* --><!-- ======================================================================================== --><!-- ======================================================================================== --> <br /> <h2><a name="01"></a>Tutorial Requirements</h2> <!-- Intro paragraph for this topic --------------------------------------------------------------------* --> <p>Before you begin, you need to install the following software on your computer:</p> <ul><li>NetBeans IDE 6.0 with Web and Java EE functionality (included in the Web and Java EE download and the All download).
(<a href="">download</a>)</li><li>Visual Web Samples Plugin. This plugin includes the Dynamic Faces Component Library (0.2). Follow the instructions in the section entitled "Installing the Plugin" in <a href="">Installing the Visual Web Samples Plugin</a>. Be sure to select the entries for both the Visual Web JSF Post Release Samples and the Visual Web JSF Backwards Compatibility Kit.</li></ul> <br /> <!-- ======================================================================================== --><!-- BEGIN FIGURE COMPONENT --> <img width="599" height="505" border="1" title="Ajax Chat Room Web Application" alt="Ajax Chat Room Web Application" src="" /><p><br /><br /><a href="" title="Build an Ajax Chat Room in NetBeans">Follow the tutorial</a><br /><br />**********************************<br /><br /></p><h3><b>Sun Student Courses</b></h3><p><br /><a href="" title="Real World Technologies">Real World Technologies: NetBeans GUI Builder, JRuby, JavaFX, and JavaME</a> is an introduction level course that leads you to the basics of JavaFX, NetBeans GUI, JRuby, and JavaME programming. When you finish this course you will earn a certificate.</p><p> </p><p> </p> Trail: The Extension Mechanism dananourie 2008-03-26T10:01:43-07:00 2008-03-26T10:01:43-07:00 Learn about the extension mechanism, which enables the runtime environment to find and load extension classes without the extension classes having to be named on the class path. The extension mechanism was introduced as a new feature in the Java<font size="-2"><sup>TM</sup></font> 1.2 platform. The extension mechanism provides a standard, scalable way to make custom APIs available to all applications running on the Java platform. As of the Java 1.3 platform release, <em>Java extensions</em> are also referred to as <em>optional packages</em>. This trail may use both terms interchangeably. <p> <em>Extensions</em>. <p> Since this mechanism extends the platform's core API, its use should be judiciously applied. 
Most commonly it is used for well standardized interfaces such as those defined by the Java Community Process<font size="-2"><sup>SM</sup></font>, although it may also be appropriate for site-wide interfaces. <p> <p><center><IMG SRC="" WIDTH="461" HEIGHT="293" ALIGN="BOTTOM" ALT="This figure shows the relationships between Application, Java Platform, and Extensions."></center></p> <p> As the diagram indicates, extensions act as "add-on" modules to the Java platform. Their classes and public APIs are automatically available to any applications running on the platform. <p> The extension mechanism also provides a means for extension classes to be downloaded from remote locations for use by applets. <p> Extensions are bundled as Java Archive (JAR) files, and this trail assumes that you are familiar with the JAR file format. If you're not up to speed on JAR files, you might want to review some JAR-file documentation before proceeding with the lessons in this trail: <ul> <li>The <a class="TutorialLink" target="_top" href="">Packaging Programs in JAR Files</a> lesson in this tutorial. <li>The <a class="OutsideLink" target="_blank" href="">JAR Guide</a> in the JDK<font size="-2"><sup>TM</sup></font> documentation.</ul> <p> This trail has two lessons: <P> <h3> <a class="TutorialLink" target="_top" href="">Creating and Using Extensions</a></h3> <blockquote> This section shows you what you need to do to add an extension to your Java platform and how applets can benefit from the extension mechanism by downloading remote extension classes. </blockquote> <P> <h3> <a class="TutorialLink" target="_top" href="">Making Extensions Secure</a></h3> <blockquote> This section describes security privileges and permissions that are granted to extensions on your platform. You'll see how to use the Java platform's security architecture if you're writing extension classes of your own.
</blockquote> <h3>Additional Documentation</h3> You can find further information about extensions in the <a class="OutsideLink" target="_blank" href="">The Java Extensions Mechanism</a> section of the JDK documentation. </blockquote> <p> ***********</p> <p> <a href="">Download NetBeans IDE 6.1 Beta – Win Big!</a><br> Download Sun NetBeans IDE 6.1 Beta to preview for yourself. Post a blog describing your experience using NetBeans and you could win $500.</P> Creating a Simple Web Application Using a MySQL Database dananourie 2008-03-10T10:35:53-07:00 2008-03-10T10:35:53-07:00 This document describes how to create a simple web application that connects to a MySQL database server. <p>This document describes how to create a simple web application that connects to a MySQL database server. It also covers some basic ideas and technologies in web development, such as <a href="">JavaServer Pages</a>™ (JSP), <a href="">JavaServer Pages Standard Tag Library</a>™ (JSTL), the Java Database Connectivity™ (JDBC) API, and two-tier, client-server architecture. This tutorial is designed for beginners who have a basic understanding of web development and are looking to apply their knowledge using a MySQL database.</p> <p><a href="">MySQL</a> is a popular Open Source database management system commonly used in web applications due to its speed, flexibility and reliability. MySQL employs SQL, or <em>Structured Query Language</em>, for accessing and processing data contained in databases.</p> <p>This tutorial continues from the <a href="../ide/mysql.html">Connecting to a MySQL Database</a> tutorial and assumes that you already have a connection to a MySQL database created and registered in NetBeans IDE. The table data that is included in <a href="">ifpwafcad.sql</a> is also required for this tutorial. This SQL file creates two tables, <tt>Subject</tt> and <tt>Counselor</tt>, then populates them with sample data.
Save this file to a local directory, then open it in NetBeans IDE and run it on the MySQL database. The database used in this tutorial is named <tt>MyNewDatabase</tt>.</p> <p> Read <a href="">Creating a Simple Web Application Using a MySQL Database</a>. </p> Easy Web Site Creation in the NetBeans IDE dananourie 2008-02-19T19:29:08-08:00 2008-02-19T19:29:08-08:00. by Dana Nourie <p>Last year I attended a <a href="" title="Tech Days">Tech Days</a> event and learned about the ease of use of <a href="" target="_blank">jMaki widgets </a> for web site building, and then I gave a chat in <a href="" target="_blank">Second Life</a> on the topic of web programming using the <a href="" target="_blank">NetBeans IDE</a>, including what I had learned at Tech Days. This article is based on those talks, showing how incredibly easy it is to create a web site in NetBeans through drag-and-drop without writing code, and how you can gradually learn Java programming by adding to your <a href="" title="JavaServer Pages (JSP)">JavaServer Pages (JSP)</a>, and creating other features or programs that may be added to your site.</p> <p>This article is aimed at new developers and programmers, and developers new to the NetBeans IDE. To follow the examples, you must have the following software installed on your computer:</p> <ul> <li> The <a href="">Java Standard Edition Platform (Java SE)</a> (Note, you can download the JDK with the NetBeans IDE).</li> <li> The <a href="" target="_blank">NetBeans IDE 6.0</a> or greater.</li> </ul> <div><b>NetBeans IDE Benefits for Web Site Developers</b></div> <div class="contentdivider"> <table class="grey4" border="0" cellpadding="0" cellspacing="0" width="100%"> <tbody><tr><td><img src="" alt=" " border="0" height="4" width="1"></td></tr> </tbody></table> </div> <p>As many of you know, web applications often require many different programming languages, and a way of combining various technologies. 
For instance, you may use HTML and CSS for your page formatting, JavaScript for some rollover buttons, and a Java servlet or JSP to process a form. The latter is a good way to learn the syntax of the Java programming language and is a great entry point if you are new to the Java platform.</p> <p>One of the wonderful things about the NetBeans IDE is that you don't have to know all the languages or how to combine the technologies. NetBeans handles the languages, and combines technologies seamlessly for you. In addition, the NetBeans IDE has some wonderful drag-and-drop widgets from various built-in palettes. For instance, you can drag and drop HTML components to create a form, Swing components to create great looking buttons or menus, or drop in interactive Ajax components using <a href="" target="_blank" title="jMaki">jMaki widgets</a>.</p> <p>The web site you see below in Figure 1 is not beautiful, nor is it a design I recommend. However, all of its components were simply dragged onto a page, and are fully functional, requiring no code to be written from scratch. You can do a lot of web site creation in the NetBeans IDE with very little programming.</p> <!-- BEGIN IMAGE WITH CAPTION --> <table border="0" cellpadding="2" cellspacing="0" width="350"> <tr> <td class="grey3" align="center"> <img src="" alt="" height="164" width="350"> <div class="pad3"> <span class="dkcaption1"><b>Figure 1:</b><i> Sample Web Site</i> </span> </div> </td> </tr> </table> <span class="sp10"> </span><br /> <!-- END IMAGE WITH CAPTION --> <p>Notice the clock that keeps time (on the right), a form (on the left) that gathers data from users with the all important CAPTCHA (the image above the submit button) to prevent spam, a tab layout in the center that makes for nice organization, and a fisheye effect on the photos at the top. No programming was needed for any of these. 
All of these features were added through drag and drop, which is far less time consuming than coding those components yourself. </p> <p>There are also many services available that you can simply drop onto a page, then add URLs, or whatever you need to include to pull in that service, such as with a mashup. For instance, adding RSS feeds to your page is very easy.</p> <p>NetBeans also handles writing to and pulling data from a database, as described in the <a href="" target="_blank">Using Databound Components to Access a Database</a> tutorial. </p> <div class="contentdivider"> <table class="grey4" border="0" cellpadding="0" cellspacing="0" width="100%"> <tbody><tr><td><img src="" alt=" " border="0" height="4" width="1"></td></tr> </tbody></table> </div> <p> To create the web site shown above, read the <a href="">rest of this article</a> </p> Core Java: Volume I, Fundamentals (Book Review) dananourie 2008-02-15T09:26:08-08:00 2008-02-15T09:26:08-08:00 Read this review of Core Java: Volume I, Fundamentals, and download a sample chapter. <p>"Since 1995, Sun Microsystems has released seven major revisions of the Java Development Kit... The Application Programming Interface (API) has grown from about 200 to over 3,000 classes," according to <i>Core Java: Volume I, Fundamentals</i> by Cay S. Horstmann and Gary Cornell. As you can imagine, this book has grown from the first edition of 672 pages to this eighth edition of 864 pages. This edition focuses on <a href="">Java Platform, Standard Edition 6</a> (Java SE 6). </p> <p>. </p> <p>. </p> <p>. </p> <p> Of course, once you have these basics and have written a few applications, you'll need to package and deploy your application or applet so that others can use it. 
As a staff writer for <a href="">java.sun.com</a>, I frequently hear developers asking how they should package their applications, so I thought that <a href="">Chapter 10, Deploying Applications and Applets (PDF)</a> would be an excellent chapter to post as a sample for the book. </p> <p>. </p> <p> Read the rest of the book review and download the sample chapter <a href="">here.</a> SDN Chat: Meet the Writers of java.sun.com dananourie 2008-02-12T15:52:36-08:00 2008-02-12T15:52:36-08:00 Please join us in <a href="">Sun's Developer Playground</a> in <a href="">Second Life</a> on Thursday, <b>February 14 at 10am PST</b> to meet the writers of java.sun.com. Please join us in <a href="">Sun's Developer Playground</a> in <a href="">Second Life</a> on Thursday, <b>February 14 at 10am PST</b> to meet the writers of java.sun.com. The Singleton Pattern dananourie 2008-02-04T13:30:42-08:00 2008-02-04T13:30:42-08:00 Learn what the Singleton pattern is and when it's used. A design pattern is a general solution to a common problem in software design. The idea is that the solution gets translated into code, and that the code can be applied in different situations where the problem occurs. Discussion of design patterns started with the book <em><a href="" style="color: rgb(153, 153, 204); font-size: 13px; text-decoration: none; font-family: Arial,Verdana,Helvetica,Sans-serif;">Design Patterns: Elements of Reusable Object-Oriented Software</a></em>. <br><br>. <br><br>. <br><br> If you know the one instance being created will be a subclass, make the parent class abstract and provide a method to get the current instance. An example of this is the <code>Toolkit</code> class in the AWT package.
The constructor for <code>Toolkit</code> is public (the default constructor in this particular case): <br> <pre> public Toolkit() </pre> and the class has a <code>getDefaultToolkit()</code> method for getting the specific subclass -- in this case, the subclass is platform-specific: <br> <pre> public static Toolkit getDefaultToolkit() </pre> On a Linux platform with the Sun Java runtime, the specific subclass is of type <code>sun.awt.X11.XToolkit</code>. However you don't need to know that because you only access the class through its common abstract parent class, <code>Toolkit</code>. <br><br> The <code>Collator</code> class is another example of this pattern, with a slight difference. It offers two <code>getInstance()</code> methods. The no-argument version gets the <code>Collator</code> for the default locale. You can pass in your own locale to get the instance of the <code>Collator</code> for that locale. Request the <code>Collator</code> for the same locale multiple times and you get back the same <code>Collator</code> instance. The constructor itself is protected. Similar ways of restricting class creation can be found throughout the J2SE standard libraries. <br><br> At this point you might think that restricting access to the constructor of a class automatically makes it a Singleton. It doesn't. A case in point is the <code>Calendar</code> class. The <code>Calendar</code> class constructor is protected, and the class offers a <code>getInstance()</code> method to get an instance of the class. However, each call to <code>getInstance()</code> gets a new instance of the class. So that isn't a Singleton. 
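The Calendar behavior described above is easy to verify; this short sketch (the class name is illustrative) shows that two getInstance() calls yield distinct objects:

```java
import java.util.Calendar;

public class SingletonCheck {
    // Calendar has a protected constructor and a getInstance() accessor,
    // which makes it look like a Singleton -- but each call builds a
    // brand-new object, so it is not one.
    static boolean sameInstance() {
        Calendar a = Calendar.getInstance();
        Calendar b = Calendar.getInstance();
        return a == b;
    }

    public static void main(String[] args) {
        System.out.println(sameInstance());   // prints "false"
    }
}
```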
<br><br> When you create your own <code>Singleton</code> class, make sure that only a single instance is ever created: <br> <pre> public class MySingleton { private static final MySingleton INSTANCE = new MySingleton(); private MySingleton() { } public static final MySingleton getInstance() { return INSTANCE; } } </pre> The static method, <code>getInstance()</code>, returns the single instance of the class. Note that even if the single instance needs to be a subclass, you don't have to change the API. <br><br> Theoretically, you don't need the <code>getInstance()</code> method because the <code>INSTANCE</code> variable could be public. However, the <code>getInstance()</code> method does provide flexibility in case of future system changes. Good virtual machine implementations should inline the call to the static <code>getInstance()</code> method. <br><br> That's not quite all there is to creating a Singleton. If you need to make your Singleton class <code>Serializable</code>, you must provide a <code>readResolve()</code> method: <br> <pre> /** * Ensure Singleton class */ private Object readResolve() throws ObjectStreamException { return INSTANCE; } </pre> With the <code>readResolve()</code> method in place, deserialization results in the one (and only one) object -- the same object as produced by calls to the <code>getInstance()</code> method. If you don't provide a <code>readResolve()</code> method, an instance of the object is created each time you deserialize the object. <br><br>. <br><br>. <br><br> Note that Singletons are only guaranteed to be unique within a given class loader. If you use the same class across multiple distinct enterprise containers, you'll get one instance for each container. <br><br> <code>BorderFactory</code> class. The class has a series of static methods returning different types of <code>Border</code> objects. 
It hides the implementation details of the subclasses, allowing the factory to directly call the constructors for the interface implementations. Here's an example of <code>BorderFactory</code> in use: <br> <pre> Border line = BorderFactory.createLineBorder(Color.RED); JLabel label = new JLabel("Red Line"); label.setBorder(line); </pre> Here, the fact that <code>BorderFactory</code> creates a <code>LineBorder</code>, or how <code>BorderFactory</code> does that, is hidden from the developer. In this particular example, you can directly call the <code>LineBorder</code> constructor, but in many cases of using the Factory pattern, you can't. <br><br> Frequently, the class implementing the Singleton pattern returns an object to use as a Factory to create instances of a different class. This is exemplified by the <code>PopupFactory</code> class in the way it creates <code>Popup</code> objects. <br><br> To get the Singleton factory, you call the <code>getSharedInstance()</code> method of <code>PopupFactory</code>: <br> <pre> PopupFactory factory = PopupFactory.getSharedInstance(); </pre> Then you create a <code>Popup</code> object from the factory by calling the factory's <code>getPopup()</code> method, passing in the parent component, its contents, and position: <br> <pre> Popup popup = factory.getPopup(owner, contents, x, y); </pre> You'll find the Factory pattern used frequently in a security context. In the following example, a certificate factory is obtained for a particular algorithm, then a certificate for a stream is generated: <br> <pre> FileInputStream fis = new FileInputStream(filename); CertificateFactory cf = CertificateFactory.getInstance("X.509"); Collection c = cf.generateCertificates(fis); </pre> As shown with <code>BorderFactory</code>, the Factory pattern does not have to be used with the Singleton pattern. However the two patterns are frequently used together. 
<p> ******* </p> <p> <a href=""> Student Developers</a> </p> <p> Where can you find hot technologies, open-source communities, and job opportunities? Sun is looking for students who are ready to innovate and create the future. <a href="">Read More</a>.</p> <p> Lesson: JavaBeans Concepts dananourie 2008-02-01T11:44:33-08:00 2008-02-01T11:44:33-08:00 Learn the concepts of JavaBeans technology, when you would need to use JavaBeans, and the different ways they are used in applications. <P>The JavaBeans™ architecture is based on a component model which enables developers to create software units called <EM class="Emphasis">components</EM>. Components are self-contained, reusable software units that can be visually assembled into composite components, applets, applications, and servlets using visual application builder tools. JavaBean components are known as <EM class="Emphasis">beans</EM>. <P>A set of APIs describes a component model for a particular language. The JavaBeans API <a class="APILink" target="_blank" href=""><code>specification</code></a> describes the core of the JavaBeans component architecture in detail. <P>Beans are dynamic in that they can be changed or customized. Through the design mode of a builder tool you can use the Properties window of the bean to customize the bean and then save (persist) your beans using visual manipulation. You can select a bean from the toolbox, drop it into a form, modify its appearance and behavior, define its interaction with other beans, and combine it and other beans into an applet, application, or a new bean. <p> The following list briefly describes key bean concepts. <ul> <li>Builder tools discover a bean's features (that is, its properties, methods, and events) by a process known as <EM class="Emphasis">introspection</EM>. Beans support introspection in two ways: <ul> <li>By adhering to specific rules, known as <em>design patterns</em>, when naming bean features.
The <a class="APILink" target="_blank" href=""><code>Introspector</code></a> class examines beans for these design patterns to discover bean features. The <code>Introspector</code> class relies on the <EM class="Emphasis">core reflection</em> API. The trail <a class="TutorialLink" target="_top" href="../../reflect/index.html">The Reflection API</a> is an excellent place to learn about reflection. <li>By explicitly providing property, method, and event information with a related <em>bean information</em> class. A bean information class implements the <code>BeanInfo</code> interface. A <code>BeanInfo</code> class explicitly lists those bean features that are to be exposed to application builder tools. </UL> <li><em>Properties</em> are the appearance and behavior characteristics of a bean that can be changed at design time. Builder tools introspect on a bean to discover its properties and expose those properties for manipulation. <p> <li>Beans expose properties so they can be customized at design time. <em>Customization</em> is supported in two ways: by using property editors, or by using more sophisticated bean customizers. <p> <li>Beans use <em>events</em> to communicate with other beans. A bean that is to receive events (a listener bean) registers with the bean that fires the event (a source bean). Builder tools can examine a bean and determine which events that bean can fire (send) and which it can handle (receive). <p> <li><em>Persistence</em> enables beans to save and restore their state. After changing a bean's properties, you can save the state of the bean and restore that bean at a later time with the property changes intact. The JavaBeans architecture uses Java Object Serialization to support persistence. <p> <li>A bean's <em>methods</em> are no different from Java methods, and can be called from other beans or a scripting environment. By default all public methods are exported. </ul> <p> Beans vary in functionality and purpose. 
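To make the first introspection mechanism concrete, the following sketch (with a made-up <code>TemperatureBean</code>) lets the <code>Introspector</code> discover a property purely from the <code>getXxx</code>/<code>setXxx</code> naming pattern, with no <code>BeanInfo</code> class supplied:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// A made-up minimal bean: a public no-argument constructor plus a
// getXxx/setXxx accessor pair is all the Introspector needs to
// recognize a property named "celsius".
class TemperatureBean {
    private double celsius;

    public TemperatureBean() {
    }

    public double getCelsius() {
        return celsius;
    }

    public void setCelsius(double celsius) {
        this.celsius = celsius;
    }
}

class IntrospectionDemo {
    public static void main(String[] args) throws Exception {
        // Stop introspection at Object.class so inherited Object
        // features (such as the "class" property) are not reported.
        BeanInfo info =
            Introspector.getBeanInfo(TemperatureBean.class, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            // Discovered from the getCelsius/setCelsius design pattern.
            System.out.println(pd.getName() + " : " + pd.getPropertyType());
        }
    }
}
```

Running the demo lists the single <code>celsius</code> property of type <code>double</code>; this is exactly what a builder tool does before populating its Properties window.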
You have probably met some of the following beans in your programming practice: <UL> <LI>GUI (graphical user interface)</LI> <LI>Non-visual beans, such as a spelling checker</LI> <LI>Animation applet</LI> <LI>Spreadsheet application</LI> </UL> <p>Read more from the <a href="">JavaBeans Trail</a> </p> <p> New to Java Programming Center Redesign dananourie 2008-01-24T11:04:52-08:00 2008-01-24T11:04:52-08:00 The <a href="">New to Java Programming Center</a> has recently been redesigned to accommodate more articles and tutorials, community information, and has been organized so the information you need is easy to find. <p> The <a href="">New to Java Programming Center</a> has recently been redesigned to accommodate more articles and tutorials, community information, and has been organized so the information you need is easy to find. </p> <p> If you are new to the Java platform, you can use the <a href="">New to Java Programming Center</a> to get your computer set up with the Java platform, learn the syntax of the Java programming language, and learn how to develop applications for desktops, the web, or mobile devices. </p> <p> The information listed on and linked to from the <a href="">New to Java Programming Center</a> is free. In addition, it lists links to Sun courses and certification training that you can buy. </p> Getting Started With the Hiking Log Application dananourie 2008-01-16T11:17:25-08:00 2008-01-16T11:17:25-08:00 Before getting into the details of the database design of the new Hiking Log application, let's make sure that your setup is up-to-date and includes the necessary plugins. <p> by John Zukowski</p> <p>Before getting started, make sure you have <a href="" target="_blank">JDK 6</a> installed on your system. After you have set up the JDK on your local system, follow the <a href="" target="_blank">Download link</a> from the <a href="" target="_blank">NetBeans IDE 6.0</a> home page to get the newest version of the IDE.
<p>After downloading, go through the IDE Installer, making sure to read the updated licensing agreement. The 21 MB Java SE download will expand to just under 100 MB once installed, so be sure to have enough free disk space. The other releases require even more space. <p><a href=""><img src="" width="400"></a> <p>Visit the <a href="" target="_blank">jMaki home page</a> to find documentation and demos. <p>That's really it for installation and getting started, assuming you'll be using Sun's JDK 6. If not, you'll need to get the <a href="" target="_blank">Derby libraries</a> from Apache, which is what will be used here. If you prefer, you can use <a href="" target="_blank">MySQL</a>, though this blog will not offer instructions for use of that open-source database. </p> Introduction to the Swing Application Framework dananourie 2008-01-14T09:56:30-08:00 2008-01-14T09:56:30-08:00 <p>This guide is an introduction to the support in NetBeans IDE 6.0 for the Swing Application Framework.</p> <p>Introduction: Swing Application Framework in NetBeans IDE 6</p> <p>The Swing Application Framework is a light framework that simplifies the creation and maintenance of small- to medium-sized Java desktop applications. The framework consists of a Java class library that supports constructs for things such as the following:</p> <p> * Remembering state between sessions.<br/> * Easier managing of actions, including running as background tasks and specifying blocking behavior.<br/> * Enhanced resource management, including resource injection for bean properties.</p> <p>The IDE supports the development of applications based on the Swing Application Framework in the following ways:</p> <p> * Providing the Java Desktop Application project template, which contains skeleton implementations of the main framework features. This template enables you to choose from the following two shells:<br/> o Basic Application.
Provides a basic frame, some sample menu items, status bar, and mechanisms for managing actions and resources.<br/> o Database Application. Provides all of the features in the Basic Application shell plus all of the features necessary for a simple database application with create, read, update, and delete features. See Building a Java Desktop Database Application for an example of how this shell can be used.<br/> * Integration of framework features in the IDE's GUI Builder. Swing Application Framework applications can be designed in the IDE like any other Swing application.<br/> * Generating your application's UI text and other resources in .properties files.<br/> * Providing a special property editor for actions in which you can associate an action with keyboard accelerators, text, and a tooltip. In addition, you can configure properties for when it is selected or enabled and you can set the action to run asynchronously.<br/> * Automatically packaging the Swing Application Framework library into your project's dist/lib folder when you build your application in the IDE.</p> <p><a href="">Read this full tutorial</a></p> Help Define What Comes After ABook dananourie 2007-12-13T15:17:39-08:00 2007-12-13T15:17:39-08:00 Read on for an idea for a new project, and leave feedback in the comments for how you would like the next series of blog postings to develop. <p>The <a href="">Abook application</a> that this blog has discussed over the last several months has reached its end. It was a nice project to show, but several aspects of it kept the discussion from getting too interesting, like all the database fields as strings. </p> <p>It is now time for a new direction. Instead of just jumping right into that new project, we're going to take a longer view of things, with a goal of getting some input from readers. That way, instead of just throwing out there what we writers think that you, the readers, might want to learn about, you tell us.
Then, over the next several months, we'll dive into different aspects of the project. </p> <p>Read on for an idea for a new project, and leave feedback in the comments for how you would like the next series of blog postings to develop. </p> <p><b>Project Ideas</b> </p> <p>For the new project, instead of enhancing the address book to include things like personal photos or birthdays, it seems best to take a different direction. How about if we create an application such as a trip log or a hiking log? The database would store things like location, distance traveled, photos, and comments, perhaps even a list of people you traveled with or records of how much water you drank. Then imagine filling the database over several months and what you can do with the data, from generating a hiking-log report to printing reports for specific hiking trips. We can even incorporate the use of drag and drop or XML for data transfer. Graphic charts can also be added if we want to show weekly trends. </p> <p>Lastly, we can use the neat <a href="" target="_blank">jMaki</a> tools to drag and drop in a map for the different hiking trips. All this can be done in the latest version of the <a href="" target="_blank">NetBeans IDE</a>. </p> <p>Let us know if you like this idea for a group of tutorials that focus on building a hiking-log application, or if there is something else you'd rather learn. Give us your ideas in the comments area below, then together we will build an application over a series of blog postings. </p> <p>In the end, it should make for a most interesting project. We'll get the database design locked down first, then we can create the necessary user interface to maintain the database and present that information in new and interesting ways. 
</p> <p> ****** </p> <p><b>New Certification Courses (For Experienced Developers)</b></p> <p> <a href="">Sun Certified Programmer for the Java Platform, Standard Edition 6</a> -- This certification exam is for programmers experienced in using the Java programming language. Achieving this certification provides clear evidence that a programmer understands the basic syntax and structure of the Java programming language and can create Java technology applications that run on server and desktop systems using Java SE 6 technology.</p> <p><a href="">Sun Certified Programmer for the Java Platform, Standard Edition 6 Upgrade Exam (CX-310-066)</a> -- This certification exam is for programmers experienced in using the Java programming language. Achieving this certification provides evidence that a programmer understands the basic syntax and structure of the Java programming language and can create Java technology applications that run on server and desktop systems using Java SE 6 technology. Programmers who already hold an SCJP certification for an earlier platform version can take this SCJP 6 Upgrade exam. </p> ******<br> Join us on Tuesday, December 18, at 9 a.m. PST and 7 p.m. PST, in <a href="" target="_blank">Second Life</a> at the <a href="" target="_blank">Sun Microsystems Developer Playground</a> to discover how easy it is to learn the Java platform through web programming. Watch a demo and have your questions answered. </p> ****** Creating Games for Mobility in the NetBeans IDE dananourie 2007-12-11T11:38:31-08:00 2007-12-11T11:38:31-08:00 No one needs to tell you that games are wildly popular on all platforms. Now, game development has gotten even easier in the Java platform space. <p> No one needs to tell you that games are wildly popular on all platforms. Now, game development has gotten even easier in the Java platform space. With the latest release of the <a href="">NetBeans Mobility Pack 6.0</a>, you can learn to create games, using Scenes, Sprites, and Tiled Layers. </p> <p> <img src="" width="600"> <p> Read the <a href="">tutorial</a>.
</p> <p> **************<br> Join us at the <a href="" target="_blank">Sun Microsystems Developer Playground</a> to discover how easy it is to learn the Java platform through web programming. Watch a demo and have your questions answered. Get NetBeans 6! dananourie 2007-12-03T14:53:50-08:00 2007-12-03T14:53:50-08:00 NetBeans 6 is now available for download! Java Technology Fundamentals Second Life Chat dananourie 2007-11-30T10:40:04-08:00 2007-11-30T10:40:04-08:00 <p>Missed the Java Fundamentals chat? Read our discussion here, where we talk about NetBeans, how beginners can learn the Java platform, how scripting languages fit in, and what resources are available.</p> <p><img src="" align="right">The Java Fundamentals chat took place within <a href="">Second Life</a>.</p> <p>There were some questions I was not able to answer, and those I've passed off to co-workers so that I can answer those questions in the future. </p> <p>We will be having another <b>Java Fundamentals chat Dec 5, Wed at 7PM, PST</b>. Please join us to ask any questions you have about where and how you can learn the Java platform through Sun's resources.</p> <p>Chat from Second Life:</p> <p>[9:02] Heidi SunMicrosystems: Let's get started!<br/> [9:02] Heidi SunMicrosystems: Good morning and welcome to SDN's chat on Java Fundamentals. I'm Heidi Dailey, Web Marketing Manager for Sun Developer Network<br/> [9:03] Java Mug whispers: Mmmmm...hot cup of Java from Sun Microsystems, Inc.<br/> [9:03] Heidi SunMicrosystems: It's my great pleasure to introduce Dana Nourie, staff writer and web manager for java.sun.com<br/> [9:03] Heidi SunMicrosystems: and a personal friend!<br/> [9:03] Dana: Thank you, Heidi!<br/> [9:03] Dana: And thanks to all of you for coming today!<br/> [9:03] Heidi SunMicrosystems: With that I'll turn it over to Dana.<br/> [9:04] Heidi SunMicrosystems: :)<br/> [9:04] Dana: Thanks so much for coming.
I'm really thrilled to be here.<br/> [9:04] Dana: I've been at Sun now for almost 9 years, working on the java.sun.com website.<br/> [9:04] Duffy Lomu: Thanks.<br/> [9:04] Dana: We noticed we had a good-sized audience of new developers, people new to Java technologies, so . ..<br/> [9:05] Dana: we now have many resources available for you to learn about the Java platform.<br/> [9:05] FLOSSGeek Raymond: fantastic<br/> [9:05] Dana: The first is Java Fundamentals, a newsletter you can subscribe to through RSS.<br/> [9:05] FLOSSGeek Raymond: ok<br/> [9:05] Dana: We also have the New to Java Programming Center, and a number of other places.<br/> [9:06] Heidi SunMicrosystems: Okay<br/> [9:06] Heidi SunMicrosystems: Let's open the floor to questions.<br/> [9:06] Heidi SunMicrosystems: Remember if you have a question type @<br/> [9:06] FLOSSGeek Raymond: Could we have some URLs of these locations?<br/> [9:06] Heidi SunMicrosystems: and I'll call you<br/> [9:06] Heidi SunMicrosystems: Yes, Raymond<br/> [9:06] Dana: Absolutely<br/> [9:06] Heidi SunMicrosystems: We'll be passing out note cards<br/> [9:06] Heidi SunMicrosystems: good questions!<br/> [9:06] Heidi SunMicrosystems: during the chat :)<br/> [9:07] Dana: We have a whole list of them for you. <br/> [9:07] Erik Leominster: @<br/> [9:07] Heidi SunMicrosystems: Go Erik<br/> [9:07] Erik Leominster: I notice it's not yet possible to use Java here in-world, any plans to change that?<br/> [9:07] FLOSSGeek Raymond: What is the best way to begin getting into this web techs and Ajax with java? As a beginner perspective, should I jump into web programming straight away or just stick with command line apps to start with?<br/> [9:08] curiousman Abramovic: @ why Sun create website about Java learning? something off java.sun.com something like Java?<br/> [9:08] Dana: For Ajax we do have a good resource page for you. Ajax is a good place to start with programming if you're new. 
Web programming of any kind is a good place for beginners to jump in.<br/> [9:09] Heidi SunMicrosystems: Curious, please re ask your question<br/> [9:09] Heidi SunMicrosystems: Go curious<br/> [9:09] curiousman Abramovic: Why did sun create a new website about Java learning?<br/> [9:09] FLOSSGeek Raymond: I'm not totally new to programming, I do Actionscript flash stuff at work but I don't like it as it is not open source<br/> [9:10] Dana: Well, we didn't actually create a new site, we added to the old one. . .<br/> [9:10] Heidi SunMicrosystems: Scooter, ask your question<br/> [9:10] FLOSSGeek Raymond: I've always liked Java though, but don't know where to start of in the web area.<br/> [9:10] Dana: The first site addressed more advanced developers, so I put together the New to Java Programming Center for beginners<br/> [9:10] Scooter Back: I just started a JAVA class in college. I am a proficient PHP, Perl/CGI, and Bash scripting. What complications, if any, will I face while trying to learn the syntax of JAVA? And what similarities do you think will help me?<br/> [9:11] FLOSSGeek Raymond: @sorry about that I wasn't aware of the "@" thing<br/> [9:11] Dana: Java goes with scripts very well now! Be sure to download JDK 6, and NetBEans as you can use Java and scripting languages together now!<br/> [9:11] Heidi SunMicrosystems: no worries<br/> [9:11] Dana: Also check out JRuby and Java<br/> [9:11] Heidi SunMicrosystems: Curious, you had a follow up question?<br/> [9:11] Scooter Back: Yes, I grabbed NetBeans and have successfully compiled my first "hello world" program.<br/> [9:11] Dana: Great!<br/> [9:12] Dana: NetBeans saves a lot of coding time. Do check it out.<br/> [9:12] curiousman Abramovic: Have you ever seen the MS tutorials on MSDN or Code4fun? 
Why doesn't Sun make something like those?<br/> [9:12] Dana:<br/> [9:12] Dana: We do have quite a lot of tutorials like those on other sites.<br/> [9:12] curiousman Abramovic: @<br/> [9:13] Java Mug whispers: Mmmmm...hot cup of Java from Sun Microsystems, Inc.<br/> [9:13] Heidi SunMicrosystems: go curious<br/> [9:13] Dana: We have online tutorials, and articles.<br/> [9:13] Dana: Be sure to check the JSC front page at least weekly.<br/> [9:13] Dana: We also send out Tech Tips for intermediate programmers twice monthly. You can subscribe through RSS<br/> [9:13] Scooter Back: Will do<br/> [9:13] Croft Ashbourne: Sorry to jump in without putting my hand up, but.... Curiousman, are you asking why Sun hasn't made something like those? I'm not quite clear on your question, sorry.<br/> [9:14] Scooter Back: thanks<br/> [9:14] Heidi SunMicrosystems: Curious?<br/> [9:14] curiousman Abramovic: yes, something that motivate new programmer, something that is well organized.<br/> [9:15] curiousman Abramovic: take look at here its like a paragraph<br/> [9:15] Dana: Yes, that is what the New to Java Programming Center does . . 
..<br/> [9:15] Dana: I have it organized by steps, but I am reorganizing it as well<br/> [9:15] Dana: We also have the Java Tutorial, have you seen that?<br/> [9:16] Heidi SunMicrosystems: I want to turn the tables a bit<br/> [9:16] curiousman Abramovic: Yes, I have seen it<br/> [9:16] Heidi SunMicrosystems: and ask you all what you level of Java experience is<br/> [9:16] Heidi SunMicrosystems: are you all new to Java?<br/> [9:16] Dana: There is a really good Java EE tutorial as well.<br/> [9:16] Erik Leominster: No I'm not new.<br/> [9:16] Jhonny Claxton: @what are disadvantage and the advantage of Java?<br/> [9:17] FLOSSGeek Raymond: I wouldn't say new, but forgot a lot off it.<br/> [9:17] Croft Ashbourne: I'm more of a mechanic, that is sysadmin, so my programming as a whole is fairly limited in scope.<br/> [9:17] Dana: For Ajax:<br/> [9:17] Dana: The advantages of Java are . . .<br/> [9:17] Dana: that it's a fairly easy language to learn . . .<br/> [9:17] Dana: that it is object oriented so well organized and easy to create objects and use them<br/> [9:17] Dana: It's very popular, and now just as fast as any other language, and it's FREE.<br/> [9:17] Dana: It's now Open Source, so you guys can add to the language as you learn<br/> [9:17] Heidi SunMicrosystems: So not brand spanking new<br/> [9:17] BertQuijalvo SunMicrosystems: I've been programming with Java for 10 years.<br/> [9:17] FLOSSGeek Raymond: @thks<br/> [9:17] Genji Nakajima: I'll be in Sun's Java class next week. 
The class for experienced programmers<br/> [9:17] Heidi SunMicrosystems: Oh Bert!<br/> [9:18] Siddhartha Fonda: Personally I've been programming with the SE for about 6 years now, but I've not yet had a business reason to use EE, although I'd like to, as well as curious about ME.<br/> [9:18] Heidi SunMicrosystems: So we have a lot of experience folks in here.<br/> [9:18] Heidi SunMicrosystems: Go JayR<br/> [9:18] curiousman Abramovic: About your Java Map, it`s really great good job, why don't you implement that completely online?<br/> [9:19] JayR Cela: Am I correct that Java is a runtime and if so how did you manage to speed it up?<br/> [9:19] Dana: I believe it is implemented online. Maybe I don't understand what you mean.<br/> [9:19] Dana: Our engineers are working on speeding up the runtime always. That is always and priority and we've seen huge improvement in just the last year.<br/> [9:19] JayR Cela: Let me think on a way to rephrase that ? / i will try again later<br/> [9:20] Heidi SunMicrosystems: Okay JayR<br/> [9:20] Heidi SunMicrosystems: Next question?<br/> [9:20] FLOSSGeek Raymond: @How are Sun dealing with .net its taking over my Uni<br/> [9:20] Heidi SunMicrosystems: GO Jayr<br/> [9:20] JayR Cela: OK, I believe I meant to ask or say is it an interpreted language?<br/> [9:20] FLOSSGeek Raymond: @Its a shame Java is so much better<br/> [9:21] Dana: I'm not sure what you mean by taking over your UNI? We are always seeking to improve Java and get it to do what developers need.<br/> [9:21] JayR Cela: so is not necessarily... 
to be compiled<br/> [9:21] FLOSSGeek Raymond: @The university I work for seem to be not teaching as much Java now and turning to .net<br/> [9:21] Heidi SunMicrosystems: Folks, please be sure to use the @<br/> [9:21] Heidi SunMicrosystems: before submitting your questions :)<br/> [9:21] Heidi SunMicrosystems: and then wait for me to call you<br/> [9:21] Dana: That's interesting as .net seems to have lost favor hugely in the professional world<br/> [9:21] Heidi SunMicrosystems: that way we don't miss any<br/> [9:22] Dana: I never learned .net, so I can't compare<br/> [9:22] Heidi SunMicrosystems: Go Floss<br/> [9:22] FLOSSGeek Raymond: Do you have any idea of the final release of NetBeans?<br/> [9:22] FLOSSGeek Raymond: 6<br/> [9:22] Dana: Yes, I believe on Dec 3rd<br/> [9:22] Dana: I'm looking forward to it myself!<br/> [9:22] Heidi SunMicrosystems: Yes.<br/> [9:23] FLOSSGeek Raymond: great<br/> [9:23] Dana: I love NetBeans!<br/> [9:23] Dana: Anyone who has used GridBagLayout, loves NetBeans:-)<br/> [9:23] Dana: Or any layout for that matter!<br/> [9:23] Woody Jameson: I come to Java from 'C' so the language is no problem for me, but there are so many class libraries it's hard to know where to start. Any suggestions?<br/> [9:23] Heidi SunMicrosystems: Go Woody<br/> [9:24] Dana: Just focus on what it is you want to do, then look up the library you need for that one thing, then move to the next . . .<br/> [9:24] Dana: otherwise, it can be really overwhelming<br/> [9:24] FLOSSGeek Raymond: yes the APIs are overwhelming<br/> [9:24] Heidi SunMicrosystems: flossgeek, you had another question?<br/> [9:24] Dana: Play in NetBeans. Drag and Drop, then put functionality in your buttons and so forth<br/> [9:24] FLOSSGeek Raymond: Is there any tutorials on using the APIs more effectively?<br/> [9:25] Dana: Well, you know, not really. 
I think that's a good Idea and I will look into that further.<br/> [9:25] Heidi SunMicrosystems: Amadeo, go!<br/> [9:26] Amadeo Spinotti: Would NetBeans 6 support Jboss Seam 2 projects in any way?<br/> [9:26] Dana: That I don't know. Do a search on the NetBeans site for JBoss, and see what comes up.<br/> [9:27] Heidi SunMicrosystems: I'll look into it further and get back to you<br/> [9:27] Duffy Lomu: I think I have seen demos in NetBeans using seams on the web<br/> [9:27] Heidi SunMicrosystems: Curious, your turn<br/> [9:27] curiousman Abramovic: Can we say Java is a great platform for just Enterprise platform? And it`s loosing desktop and web platform?<br/> [9:27] Dana: It is a great Enterprise platform, but it's also great for the desktop now too!<br/> [9:28] Dana: The GUI runs very fast these days, and is super easy to put together.<br/> [9:28] Heidi SunMicrosystems: Zarl, go ahead<br/> [9:28] Zarl Wunderlich: Apple did not ship a Java 6 version with OSX 10.5. Does sun have any plans to release a Java version for OSX ?<br/> [9:28] Dana: I don't know. I will look into it, and put a link in the New to Java Center when I find out, so keep checking.<br/> [9:28] Heidi SunMicrosystems: Good question<br/> [9:29] Heidi SunMicrosystems: Flossgeek, you are up<br/> [9:29] FLOSSGeek Raymond: JavaFX, what exactly does Sun want to achieve with it, will it compete with the Flash engine?<br/> [9:30] Dana: We are working hard on the JavaFX family. 
We do have tutorials out now, but as far as what it is going to compete with, I'm not exactly sure.<br/> [9:30] Dana: It is gaining in popularity greatly and developers are having fun with it.<br/> [9:30] Heidi SunMicrosystems: That is a good question.<br/> [9:30] Heidi SunMicrosystems: We will be holding another JavaFX chat in the new year.<br/> [9:30] Heidi SunMicrosystems: please stay tuned for that<br/> [9:30] Heidi SunMicrosystems: Curious!<br/> [9:30] FLOSSGeek Raymond: thanks<br/> [9:31] Heidi SunMicrosystems: While we are waiting<br/> [9:31] Heidi SunMicrosystems: ]I want to ask if you use the java forums to network with other developers<br/> [9:32] Heidi SunMicrosystems: If not, would it interest you?<br/> [9:32] Heidi SunMicrosystems: Or do you have another way to network that you find more appealing?<br/> [9:33] Erik Leominster: I'm sorry, which forums are you referring to?<br/> [9:33] Croft Ashbourne: I think that may just answer the question? ;)<br/> [9:33] Dana:<br/> [9:33] Heidi SunMicrosystems: Anybody else?<br/> [9:33] Heidi SunMicrosystems: For him, yes<br/> [9:33] FLOSSGeek Raymond: I actually prefer mailing lists<br/> [9:33] Heidi SunMicrosystems: So newsletters?<br/> [9:33] Dana: The Java forums are a great place for you to taking programming questions to and get answers from developers<br/> [9:34] Heidi SunMicrosystems: bulletins/tech tips?<br/> [9:34] Domchi Underwood: I usually ask questions in appropriate mailing list / web forum etc as my questions are usually with specific framework ... say Spring. 
:) Java newsgroup is great, but too much traffic mostly.<br/> [9:34] Dana: Fundamentals teaches the basics of Java programming<br/> [9:34] Heidi SunMicrosystems: Ah, ok<br/> [9:34] Heidi SunMicrosystems: Curious go ahead<br/> [9:34] Dana: Tech Tips teaches intermediate coding<br/> [9:34] Dana: Enterprise Tech Tips teaches the enterprise<br/> [9:35] Dana: They're really great learning tools and you can subscribe through RSS or visit the site to read online<br/> [9:35] Dana: And they're free but worth a million<br/> [9:35] Heidi SunMicrosystems: Curious?<br/> [9:35] Dana: JavaFX:<br/> [9:36] Heidi SunMicrosystems: Okay, go flossgeek<br/> [9:36] FLOSSGeek Raymond: Is the forums available as mailing list as well?<br/> [9:36] Dana: No, but I believe you can sign up for email notifications<br/> [9:36] FLOSSGeek Raymond: ok<br/> [9:36] Scooter Back: @<br/> [9:36] Heidi SunMicrosystems: Go Scooter<br/> [9:36] Dana: I also want to be sure you folks know about Tech Days<br/> [9:37] Dana: It's a world traveling conference where you can learn in person for two days in certain cities<br/> [9:36] Scooter Back: Are there forums on the site that are oriented for the new guy like myself?<br/> [9:37] Dana: Yes, the New to Java Forum is the link I posted:<br/> [9:37] Dana:<br/> [9:37] Java Mug whispers: Mmmmm...hot cup of Java from Sun Microsystems, Inc.<br/> [9:37] Scooter Back: got it, thanks<br/> [9:37] Heidi SunMicrosystems: Go JayR<br/> [9:37] JayR Cela: all this free stuff and open source in general / what is the business model , and how does Sun Micro determine the ROI on giving all these tools away<br/> [9:38] Heidi SunMicrosystems: Hey Rikart!<br/> [9:38] Dana: Sun does also sell services and you can pay for online classes, CD tutorials, and in person Java courses<br/> [9:38] Dana: And of course we sell servers!<br/> [9:38] JayR Cela: lol / ok / good answer<br/> [9:38] Heidi SunMicrosystems: go zarl<br/> [9:39] FLOSSGeek Raymond: The open source model is a service one not a 
product selling<br/> [9:39] Zarl Wunderlich: Is there an "official" statement from Sun about the Google Android phone project ? Its using Java as the developing language, but not the VM or the ME APIs<br/> [9:39] Dana: There may be but I'm sorry, I don't know<br/> [9:39] Heidi SunMicrosystems: Absolutely<br/> [9:39] Dana: It sounds like maybe we should do a Java mobility chat one day???<br/> [9:39] Siddhartha Fonda: Oh, I'd be interested in that<br/> [9:39] Croft Ashbourne: - pretty solid insight into what the company is doing. From where I'm sitting it seems like Sun is going to be focusing heavily on providing services... but of course I'm just another Sun customer speculating.<br/> [9:39] Heidi SunMicrosystems: on the calendar<br/> [9:40] Heidi SunMicrosystems: go Erik<br/> [9:40] Erik Leominster: Why is Sun here in SL, and is it working?<br/> [9:40] Dana: We have a whole site on mobility if that is the area you want to focus on<br/> [9:40] Dana:<br/> [9:41] Fiona May is Online<br/> [9:41] Heidi SunMicrosystems: So here's a question for you<br/> [9:41] Heidi SunMicrosystems: would you all be interested in Java Courses taught in second life in a virtual classroom<br/> [9:42] FLOSSGeek Raymond: yes<br/> [9:42] FLOSSGeek Raymond: very much so<br/> [9:42] Woody Jameson: Yep<br/> [9:42] Domchi Underwood: YES! :)<br/> [9:42] Scooter Back: absolutely!!!<br/> [9:42] curiousman Abramovic: yes<br/> [9:42] Scooter Back: raises both hands!<br/> [9:42] Erik Leominster: sure!<br/> [9:42] curiousman Abramovic: cool<br/> [9:42] Heidi SunMicrosystems: Okay<br/> [9:42] Dana: Me too!<br/> [9:42] FLOSSGeek Raymond: lol<br/> [9:42] Heidi SunMicrosystems: We get the picture loud and clear<br/> [9:42] Siddhartha Fonda: *SMASH* OH YEAH!!!<br/> [9:43] Scooter Back: Sheesh, I'd even build a classroom on my land! 
lol<br/> [9:43] Siddhartha Fonda: me too<br/> [9:43] Fiona May: we have the classrooms<br/> [9:43] Heidi SunMicrosystems: Okay so we can count on y'all to show up<br/> [9:43] Scooter Back: make a group, so we can be aware of events like that<br/> [9:43] Heidi SunMicrosystems: Join the Sun Developer Network group<br/> [9:43] Scooter Back: well there ya go!<br/> [9:43] Heidi SunMicrosystems: you'll be in the loop on all chats, events, classes<br/> [9:44] Heidi SunMicrosystems: parties, conferences<br/> [9:44] Heidi SunMicrosystems: the whole thing<br/> [9:44] Heidi SunMicrosystems: :)<br/> [9:44] Domchi Underwood: Post the thing in events...<br/> [9:44] curiousman Abramovic: great<br/> [9:45] curiousman Abramovic: What`s your plan for WPF and Vista Platform? and how do you compete WPF?<br/> [9:45] Dana: I'm sorry, that is out of my area. I just focus on teaching the Java platform to developers.<br/> [9:46] Heidi SunMicrosystems: And she does a great job at it too<br/> [9:46] curiousman Abramovic: thanks :-)<br/> [9:46] Heidi SunMicrosystems: Siddhartha, your question<br/> [9:46] Siddhartha Fonda: Ok I was curious if there a tool to aid in migrating up versions, like we are on 4 and I need to move to 5 and possibly 6<br/> [9:46] Siddhartha Fonda: Like say, something I can run against my code to tell me what I need to change<br/> [9:46] Dana: Yes, we have a whole site dedicated to that. I'll get the URL for you. . . <br/> [9:47] Dana: Upgrade:<br/> [9:46] Siddhartha Fonda: SE I'm talking here<br/> [9:47] Siddhartha Fonda: sweet<br/> [9:47] Heidi SunMicrosystems: Go curious<br/> [9:47] curiousman Abramovic: Are you going to update the Java Map?<br/> [9:47] Dana: What part of it needs to be updated?<br/> [9:47] Dana: Do you have a link so I can be sure we're talking about the same map?<br/> [9:47] curiousman Abramovic: about JavaME and Add Java FX<br/> [9:48] Dana: Ah, I will check and make sure those are added. 
Yes!<br/> [9:48] Dana: Thanks<br/> [9:48] Heidi SunMicrosystems: Next question?<br/> [9:48] Siddhartha Fonda: @<br/> [9:48] curiousman Abramovic: Thanks<br/> [9:48] Heidi SunMicrosystems: Go Siddhartha<br/> [9:49] Siddhartha Fonda: Thanks Heidi, I'm about to embark on studying for SCJP, it's currently on java 5, what is the time line for upgrading that exam to 6?<br/> [9:49] JayR Cela: @<br/> [9:49] Siddhartha Fonda: ok cool, thanks<br/> [9:49] Dana: I'll have to check on that. I don't know.<br/> [9:49] Heidi SunMicrosystems: and I'll get you the info<br/> [9:50] JayR Cela: how does Java compete with or can it be used together with MONO? the reason I ask this is that LL plans to integrate Mono into LSL / any thoughts on this issue?<br/> [9:50] Fiona May: JayR we are currently in dialog with LL over various topics this is one of them<br/> [9:50] Dana: Hmm, I'm afraid I don't know what MONO is so I can't say.<br/> [9:51] Heidi SunMicrosystems: any comment on JayR's question?<br/> [9:51] JayR Cela: Mono is basically a scripting language<br/> [9:51] FLOSSGeek Raymond: I thought Mono was a language like C# from Novell<br/> [9:51] JayR Cela: O.S. and cross platform<br/> [9:52] JayR Cela: Fiona / thank you<br/> [9:52] Scooter Back: I have so many questions, but I don't know how to word them. I'm so new to Java that I don't know what I can't do! lol<br/> [9:52] Dana: Scooter, I recommend that you start by reading Fundamentals, and looking through the New to Java center to get familiar with the terminology.<br/> [9:53] Dana: Also be sure to read through the Java forums to see the typical problems new developers have with Java.<br/> [9:52] Domchi Underwood: Any chance of Java/Swing on a prim? 
O :)<br/> [9:52] Heidi SunMicrosystems: Scooter: use the resources on the card you were given to start out<br/> [9:52] Croft Ashbourne: Fwiw, off Wikipedia and about Linden Labs and LSL: "The new engine executing scripts uses Mono (the open source implementation of the Microsoft .NET framework) as the virtual machine for scripts running on the servers."<br/> [9:52] Fiona May: for those of you who do not know what mono is this may help:.<br/> [9:53] Croft Ashbourne grins<br/> [9:53] Heidi SunMicrosystems: I have one last question for you<br/> [9:53] Heidi SunMicrosystems: If/when we do classes in world what would you want to see taught?<br/> [9:53] FLOSSGeek Raymond: OOP<br/> [9:54] Dana: You can always email me too, so I can point you to an online article or tutorial<br/> [9:54] Scooter Back: agrees, OOP<br/> [9:54] curiousman Abramovic: OOP<br/> [9:54] Siddhartha Fonda: J2EE beginner<br/> [9:54] curiousman Abramovic: OOP Fundamentals<br/> [9:54] Siddhartha Fonda: J2ME beginner<br/> [9:54] Domchi Underwood: Beginner Java - I think there's a lot of interest in that.<br/> [9:54] Dana: We have a wonderful Java EE tutorial online<br/> [9:54] FLOSSGeek Raymond: and Web<br/> [9:54] Siddhartha Fonda: Java with XML and AJAX<br/> [9:54] Dana: Yes, Java EE is great. 
Start small like with servlets and JavaServer Pages.<br/> [9:54] Dana: Yes, and you can add XML and a<br/> [9:54] Dana: Ajax<br/> [9:55] Dana: NetBeans can build a good website now.<br/> [9:55] Dana: I'll be writing a tutorial on that, but their site has some good tutorials too<br/> [9:55] Scooter Back: is NetBeans WYSIWYG?<br/> [9:55] Dana: Yes!<br/> [9:55] Dana: But you can also go into code mode if you don't like that<br/> [9:55] Teleport-Beam 1.1a14: Thank you for using HPC-Teleporter-System.<br/> [9:56] Siddhartha Fonda: You're right, the tutorials are a great place to start -- I personally would get more out of a class that, say, teaches how to build a sample app, or discusses advanced aspects of some particular topic.<br/> [9:56] Scooter Back: See how new I am! lol<br/> [9:56] Siddhartha Fonda: I keep forgetting about the tutorials. LOL<br/> [9:56] Dana: We do have classes, and I put that link in your Note<br/> [9:56] Heidi SunMicrosystems: Scooter, we are here to help!<br/> [9:56] Heidi SunMicrosystems: Folks, we are coming up on the hour<br/> [9:56] Scooter Back: That I realize, or I would have been laughed out of here<br/> [9:56] Dana: Also, in Fundamentals I advertise web, CD, and in person courses you can take through Sun<br/> [9:56] Heidi SunMicrosystems: Is there any last question?<br/> [9:57] Heidi SunMicrosystems: Burning thought?<br/> [9:57] curiousman Abramovic: Is there any meeting?<br/> [9:57] Scooter Back: @ last one, I promise<br/> [9:57] Heidi SunMicrosystems: How do you mean?<br/> [9:57] Scooter Back: Are there tutors available, so while I'm taking my class, I can ask someone live?<br/> [9:57] Siddhartha Fonda: I just have one last suggestion -- do more of these sessions!:)<br/> [9:57] Heidi SunMicrosystems: Go Scooter<br/> [9:57] Heidi SunMicrosystems: Sidd...stay tuned! 
Two chats next week<br/> [9:58] Heidi SunMicrosystems: One with Dana, one focus group with Me!<br/> [9:58] Siddhartha Fonda: awesome Heidi!<br/> [9:58] Dana: There isn't a live chat resource, but you can look stuff online on the website<br/> [9:58] JayR Cela: I agree / more of these is a wonderful idea :_)<br/> [9:58] Fiona May: go Heidi!!!!!!!!!<br/> [9:58] Heidi SunMicrosystems: Domchi, go ahead<br/> [9:58] Domchi Underwood: Anything you are allowed to say on talks with LL?<br/> [9:58] Croft Ashbourne: I have one too, that isn't at all Java related... move all the partner training stuff in here so I can run SL at work and have a legitimate reason to do so. ;)<br/> [9:58] Heidi SunMicrosystems: Fiona?<br/> [9:59] Croft Ashbourne: My comment was quite tongue in cheek, so feel free to ignore it.<br/> [9:59] Fiona May: I wish Croft<br/> [9:59] Heidi SunMicrosystems: Can we talk about our talks with LL?<br/> [9:59] Heidi SunMicrosystems: Time is up<br/> [9:59] Dana: I'm really thrilled with your questions. I can take these back to my group so we can continue helping you in learning Java<br/> [9:59] Fiona May: Actually its a much more fun environment - I wish I could<br/> [9:59] Heidi SunMicrosystems: I want to thank Dana for joining us<br/> [9:59] Dana: Please do join us again<br/> [9:59] Fiona May: lobby your Sun account managers<br/> [9:59] Heidi SunMicrosystems: I want to thank all of you for attending<br/> [9:59] Dana: Thank you Heidi. IT was fun!<br/> [9:59] FLOSSGeek Raymond: thanks<br/> [10:00] Heidi SunMicrosystems: and I want to remind you to join the SUn Developer Network group here in SL<br/> [10:00] JayR Cela: thank you Dana / and Heidi<br/> [10:00] Heidi SunMicrosystems: We'll be heading over to Club Java for a post chat meet n' greet<br/> [10:00] Heidi SunMicrosystems: all are welcome<br/> [10:00] Siddhartha Fonda: thanks Dana, thanks Heidi<br/> [10:00] Croft Ashbourne: Yes, thanks, this was a nice experience. 
Do it again and I'll grab a few of the real Java-heads at work and have them show up.<br/> [10:00] Heidi SunMicrosystems: Terrific!<br/> [10:00] Dana: The more the merrier!<br/> [10:00] Heidi SunMicrosystems: Depending on my day.<br/> [10:00] Heidi SunMicrosystems: LOL<br/> [10:00] Heidi SunMicrosystems: Transcripts available in next 24 hours<br/> [10:01] Scooter Back: I don't want to leave<br/> [10:01] Heidi SunMicrosystems: Then come on over to Club Java<br/> [10:01] LaureenHudson SunMicrosystems: Heidi, where's Club Java?<br/> [10:01] Heidi SunMicrosystems: I'll tp you<br/> [10:01] Fiona May: one moment we will send landmarks<br/> [10:01] Scooter Back: cool<br/> [10:01] Cygnus Theater: Requesting teleport-list... Please wait, Genji Nakajima...<br/> [10:01] Christopher Carter: who ever needs a LM for club java just let Me know<br/> [10:01] FLOSSGeek Raymond: me<br/> [10:01] Yornifer Miles: Hey Fiona...hey Chris! :)<br/> [10:01] Scooter Back: me<br/> [10:02] curiousman Abramovic: Thanks, great experience<br/> [10:02] Fiona May: Hey Yornifer<br/> [10:02] LaureenHudson SunMicrosystems: Thanks RichardNg! 
</p> <p>Note card given to attendees:</p> <p>Java Technology Resources</p> <p><a href="">New to Java Programming Center</a></p> <p><a href="">Java Tutorial (Getting Started)</a></p> <p><a href="">Java Technology Fundamentals (Subscribe through RSS)</a></p> <p><a href="">Core Java Tech Tips (Subscribe through RSS)</a></p> <p><a href="">Enterprise Tech tips (Subscribe through RSS)</a></p> <p><a href="">Mobility Tech Tips (Subscribe through RSS)</a></p> <p><a href="">Java Essential Forums (New to Java)</a></p> <p><a href="">Sun Java In-Person, Virtual, Web, and CD Courses</a></p> <p><a href="">Java Sun</a></p> <p><a href="">Sun Tech Days ( Year Round World Conference)</a></p> <p><a href="">JavaOne Conference (Annually)</a></p> <p><a href="">Java User Groups (JUGS)</a></p> <p><a href="">NetBeans.org</a></p> <p><a href="">Java Upgrade</a> </p> The Abook Database dananourie 2007-11-20T10:00:32-08:00 2007-11-20T17:46:24-08:00 In several previous issues of Java Technology Fundamentals, we looked at various parts of the <a href="">Abook application</a>. In this article, we look at the Abook database. by John Zukowski <p> In several previous issues of Java Technology Fundamentals, we looked at various parts of the <a href="">Abook application</a>. If you missed those articles, they are listed at the end of this one. In this article, we look at the Abook database. </p> <p> The database behind the Abook application is rather simple. If a database does not yet exist when you start the application, the application creates one. You don't even need a database server, have to add your own database driver, or need to modify the application to connect to your local installation of some database system. </p> <p> All of that comes prepackaged with the application and the Java Runtime Environment, thanks to the <a href="" target="_blank">Apache Derby project</a>. 
Sun provides <a href="" target="_blank">JavaDB</a> as its supported distribution of Derby with <a href="" target="_blank">Java SE 6</a>, though the <code>derby.jar</code> file is packaged with the <a href="">Abook project</a>. </p> <p> To see whether the database needs to be created, the system looks for the necessary files in the user's home directory under an <code>.addressbook</code> subdirectory. This is done with the help of the <code>derby.system.home</code> system property, set by the application itself. When the files are not found, an <code>APP.ADDRESS</code> table will be created with all the necessary fields previously seen in the user interface (UI) screens. Looking in either of the Dao classes found in the <code>abook.db</code> package -- <code>ContactDao</code> or <code>AddressBookDao</code> -- you'll find the <code>CREATE TABLE</code> string used to create the table: </p> <pre> private static final String ... </pre> <p> <a href="" target="_blank">Java DB</a>, Sun's supported distribution, is a standard part of the Java SE 6 release. In other words, you don't have to install your own database any more for applications. Java DB is not only for toy applications. It is fully capable for real-world use. And like the Java platform, it runs wherever you have the platform available. </p> <p> <a href="" target="_blank">NetBeans</a> provides some additional tools to help you connect to a database and to view or manipulate what is available there. However, because Abook uses the embedded version of Derby, you cannot connect to it from the NetBeans IDE. </p> <p> If instead you configure Derby using a more typical client-server architecture, where the server is running separately from the client application, you can use the NetBeans IDE to attach to that server and monitor or manipulate what's available in the database. If you are interested in trying out JavaDB with NetBeans in this fashion, use this <a href="" target="_blank">NetBeans tutorial</a>.
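The body of that string is truncated in this excerpt. Purely as an illustration (the APP.ADDRESS table name comes from the article, but the column list below is hypothetical and not taken from the Abook source), a Derby CREATE TABLE string held as a DAO constant might look like this:

```java
public class ContactDaoSketch {
    // Hypothetical schema -- the real ContactDao defines its own columns.
    static String createAddressTableSql() {
        return "CREATE TABLE APP.ADDRESS ("
             + "ID INT NOT NULL GENERATED ALWAYS AS IDENTITY PRIMARY KEY, "
             + "DISPLAY_NAME VARCHAR(100), "
             + "EMAIL VARCHAR(100))";
    }

    public static void main(String[] args) {
        System.out.println(createAddressTableSql());
    }
}
```

Running main simply prints the SQL; in an application the string would typically be executed once, at first startup, through a java.sql.Statement.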
A second tutorial, <a href="" target="_blank">Connecting a GUI to a Java DB Database With NetBeans IDE</a>, is also available. And if you are using the NetBeans 6.0 beta version, use the tutorial <a href="" target="_blank">Building a Java Desktop Database Application</a>. </p> <p> Previous articles on Abook: </p> <p> <a href="">Generating UML from the NetBeans IDE</a> </p> <p> <a href="">Getting to Know Sequence Diagrams</a> </p> <p> <a href="">Getting to Know Abook</a> </p> <p> ********* </p> <p> <a href="" target="_blank"><img src="" alt="Sun Microsystems Developer Playground - Dana Nourie" width="100" height="88" border="0" align="left" /></a> <b><a href="" target="_blank">Java Technology Fundamentals Chat</a></b><br /> Join Dana Nourie November 29 at 9-10 AM PDT in <a href="" target="_blank">Second Life</a> at the <a href="">Sun Microsystems Developer Playground</a> to chat about how you can learn the Java platform. </p> <br> Understanding Scope and Managed Beans dananourie 2007-11-15T09:22:58-08:00 2007-11-15T09:22:58-08:00 <p>In developing applications, either for the desktop or the Internet, you need to get a good understanding of the concepts of scope, which is the lifespan of an object, one that may last just for that page, for an entire session, or for the duration of an application. In addition, managed beans can save you coding time and are necessary for dealing with the data of your application. A managed bean is a JavaBeans object that stores data in either request scope, session scope, or application scope.</p> <p>To understand these concepts fully, read the following tutorial and create the sample application.</p> <p>In this tutorial, you use the NetBeans Visual Web Pack to create an applica
http://blogs.sun.com/JavaFundamentals/feed/entries/atom
Re: Windows Services GUI From: Iain Mcleod (mcleodia_at_dcs.gla.ac.uk) Date: Thu, 7 Oct 2004 22:08:17 +0100 See the thread entitled: Open a form from a Windows Service It's very recent and discusses exactly what you are talking about. I shall paste the reply below: Regards Iain Rob, SQL Server has a number of Windows Services (I see at least 3) that run, plus it has a UI program that runs. The services are listed under Services in the "Control Panel - Administrative Tools". The UI program (for SQL Server), called "Service Manager", is under Programs - Startup. The easiest way to have a service accept a "command" (from its UI) is to override the ServiceBase.OnCustomCommand method and have it call the same procedure your Timer.Elapsed event handler calls. Then you can use ServiceController.ExecuteCommand to invoke this custom command. Note I would probably define an Enum of CustomCommands that my service supported so it's easier to keep track of them. A custom command for OnCustomCommand is an integer between 128 & 256, which also means you can have multiple custom commands defined. Remember that ServiceController can control services on your local machine as well as services on remote machines. Note you may need to configure the various machines to allow remote control of services. An alternative, more flexible method, which also entails more work, is to enable your service for .NET Remoting. You could either make it a .NET Remoting Server, in which case you call a method to have it perform some action, or a .NET Remoting Client, and possibly handle an "update data event" on your server remoting object that says to update data... Both of the custom commands & remoting with a service are discussed in Matthew MacDonald's book "Microsoft Visual Basic .NET Programmer's Cookbook" from MS Press. You can also use WMI (Windows Management Instrumentation) via the classes in the System.Management namespace to monitor your Windows service.
Here is a recent MSDN article on WMI & .NET: In addition to/instead of WMI you can also simply use Performance Counters & Event Logs to keep track of your service doing work. See System.Diagnostics.EventLog & System.Diagnostics.PerformanceCounter. Hope this helps Jay "Rob" <Rob@discussions.microsoft.com> wrote in message news:FAD6128A-26B3-403B-8D20-ABCB611253D0@microsoft.com... > Thank you for your information. This is a new area for me. > > But I am curious as to how applications such as SQL Server, MSN Messenger, > etc. use the Notify Icons. When the service starts is there a program > whose > UI is hidden? I need to have some form of UI for a service I am > constructing > and I am looking for the best way to go about it. > > "Herfried K. Wagner [MVP]" wrote: > >> "Rob" <Rob@discussions.microsoft.com> schrieb: >> > Can a form be opened from a Windows service? >> >> Typically services do not have a UI. If they do, that's a separate >> application that communicates with the service over remoting, sockets, >> etc. >> >> -- >> Herfried K. Wagner [MVP] >> <URL:> >> >> "Christopher Kurtis Koeber" <c_koeber@myrealbox.com> wrote in message news:ecEFs8KrEHA.3608@TK2MSFTNGP14.phx.gbl... > What's so embarrassing about someone asking for help using your name? It > could be just a coincidence that this person has the same name as you. > > Christopher > > "kimpton" <kimpton@community.nospam> wrote in message > news:1AFC4B4E-9970-4850-B964-752211299913@microsoft.com... >> hey! >> >> please don't post embarrassing questions with a name the same as mine :) >> >> "Kimpton" wrote: >> >>> I am trying to write a program with VB.NET to run as a service. I have a >>> good >>> grasp of VB.NET but have never made a service before. I am struggling to >>> get a >>> GUI interface with a system tray icon working correctly in a service. >>> Please >>> Thanks! > >
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vb/2004-10/1726.html
This section describes some of the basics about working with applets that use the JDBC Thin driver. It begins with a simple example of coding a JDBC applet and then describes what you must do to allow the applet to connect to a database. This includes how to use the Oracle8 Connection Manager or signed applets if you are connecting to a database that is not running on the same host as the web server. It also describes how your applet can connect to a database through a firewall. The section concludes with how to package and deploy the applet. Except for importing the JDBC interfaces to access JDBC entry points, you write a JDBC applet like any other Java applet. Depending on whether you are coding your applet for a JDK 1.1.1 browser or a JDK 1.0.2 browser, there are slight differences in the code that you use. In both cases, your applet must use the JDBC Thin driver, which connects to the database with the TCP/IP protocol. If you are targeting a JDK 1.1.1 browser (such as Netscape 4.x or Internet Explorer 4.x), then you must: Import the java.sql package into your program. The java.sql package contains the standard JDBC 1.22 interfaces and is part of the standard JDK 1.1.1 class library. Register the oracle.jdbc.driver.OracleDriver() class and specify the driver name in the connect string as thin. If you are targeting a JDK 1.0.2 browser (such as Netscape 3.x or Internet Explorer 3.x), then you must: Import the jdbc.sql package into your program. The jdbc.sql package is not a part of the standard JDK 1.0.2 class library. It is a separate library that you download as part of the JDBC distribution. The jdbc.sql package was created because JDK 1.0.2 browsers do not allow packages starting with the string "java" to be downloaded. As a work-around, the java.sql package has been renamed to jdbc.sql. This renamed package is shipped with the Oracle JDBC product. Register the oracle.jdbc.dnlddriver.OracleDriver() class and specify the driver name in the connect string as dnldthin.
The following sections illustrate the differences in coding an applet for a JDK 1.1.1 browser compared with a JDK 1.0.2 browser. If you are coding an applet for a JDK 1.1.1 browser, then import the JDBC interfaces from the java.sql package and load the Oracle JDBC Thin driver. import java.sql.*; public class JdbcApplet extends java.applet.Applet { Connection conn; // Hold the connection to the database public void init() { // Register the driver DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver()); // Connect to the database conn = DriverManager.getConnection ("jdbc:oracle:thin:scott/tiger@www-aurora.us.oracle.com:1521:orcl"); ... } } For more information on connecting to the database, see "Opening a Connection to a Database". If you are coding an applet for a JDK 1.0.2 browser, then import the JDBC interfaces from the jdbc.sql package and load the downloadable driver. import jdbc.sql.*; public class JdbcApplet extends java.applet.Applet { Connection conn; // Hold the connection to the database public void init () { // Register the driver DriverManager.registerDriver (new oracle.jdbc.dnlddriver.OracleDriver()); // Connect to the database conn = DriverManager.getConnection ("jdbc:oracle:dnldthin:scott/tiger@www-aurora.us.oracle.com:1521:orcl"); ... } } This section includes the following subsections: The most common task of an applet using the JDBC driver is to connect to and query a database. Because of applet security restrictions, an applet can open TCP/IP sockets only to the host from which it was downloaded (this is the host on which the web server is running). This means that your applet can connect only to a database that is running on the same host as the web server. In this case, the applet can connect to the database directly; no additional steps are required. This section begins with describing the simplest case, connecting to a database on the same host from which the applet was downloaded (that is, the same host as the web server). It then describes the two different ways in which you can connect to a database running on a different host.
You specify the database in the connect string of the getConnection() method in the DriverManager class. The connect string includes the username, password, host, port, and SID of the database. If you are connecting to a database on a host other than the one on which the web server is running, then you must overcome the applet's security restrictions. You can do this by using either the Oracle8 Connection Manager or signed applets. Figure 5-1 illustrates the relationship between the applet, the Oracle8 Connection Manager, and the database. Using the Oracle8 Connection Manager requires two steps that are described in these sections: You must install the Connection Manager on the web server host. You install it from the Oracle8 distribution media. Please refer to the Net8 Administrator's Guide if you need more help to install the Connection Manager. On the web server host you must create a CMAN.ORA file in the [ORACLE_HOME]/NET8/ADMIN directory. The options you can declare in a CMAN.ORA file include firewall and connection pooling support. Please refer to the Net8 Administrator's Guide for more information on the options you can enter in a CMAN.ORA file. For example: cman = (ADDRESS = (PROTOCOL=TCP) (HOST=web-server-host) (PORT=1610)) cman_profile = (parameter_list = ...) You can find a description of the options listed in the CMAN.ORA file in the Net8 Administrator's Guide. This configuration supports the situation illustrated in Figure 5-1, with the Connection Manager running on the web server host. If your browser supports JDK 1.1.x (for example, Netscape 4.0), then you can use signed applets. Signed applets can request socket connection privileges to other machines. To set this up, your code must request the connection privilege before opening the connection. If you are using Netscape, then your code would include a statement like this: netscape.security.PrivilegeManager.enablePrivilege("UniversalConnect"); Connection conn = DriverManager.getConnection(...); For more information on writing applet code that asks for permissions, see Netscape's Introduction to Capabilities Classes. See the Netscape documentation for information on obtaining and installing a certificate.
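As a concrete sketch of the connect-string format used throughout this section (the host, port, and SID below are placeholders, not a real database):

```java
public class ThinConnectString {
    // Assemble a Thin-driver connect string of the form
    // jdbc:oracle:thin:user/password@host:port:sid
    static String thinUrl(String user, String password,
                          String host, int port, String sid) {
        return "jdbc:oracle:thin:" + user + "/" + password
             + "@" + host + ":" + port + ":" + sid;
    }

    public static void main(String[] args) {
        // Placeholder values; the result is what you would pass to
        // DriverManager.getConnection in an applet.
        System.out.println(thinUrl("scott", "tiger", "dbhost.example.com", 1521, "orcl"));
    }
}
```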
For a complete example of a signed applet that uses the Netscape Capabilities classes, see "Creating Signed Applets". Under normal circumstances, an applet that uses the JDBC Thin Driver cannot access the database through a firewall. In general, the purpose of a firewall is to prevent requests from unauthorized clients from reaching the server. In the case of applets trying to connect to the database, the firewall prevents the opening of a TCP/IP socket to the database. You can solve this problem by using a Net8-compliant firewall and connecting through it. The following sections describe these topics: Firewalls are rule-based. They have a list of rules that define which clients can connect, and which cannot. Firewalls compare the client's hostname with the rules, and based on this comparison, either grant or deny the client's connection request. Connecting through a firewall requires two steps that are described in the following sections: The instructions in this section assume that you are running a Net8-compliant firewall. Java applets do not have access to the local system (that is, they cannot get the hostname or environment variables locally), so the hostname the Thin driver presents to the firewall is a bogus value; your firewall rules must not depend on the client's hostname. To package your applet for deployment, you need your applet class files and the JDBC driver classes file (this will be either classes111.zip if you are targeting the applet to a browser running JDK 1.1.1, or classes102.zip if you are targeting the applet to a browser running JDK 1.0.2). Follow these steps: 1. Extract classes111.zip (or classes102.zip) into an empty directory. 2. If you are targeting a browser running the JDK 1.0.2, then DELETE the packages listed in the left-hand column of the following table. Next, ensure that the packages listed in the right-hand column are present. All of the packages listed in the table are included in the JDBC distribution. 3. Package your applet classes and the driver classes into a single zip (or .jar) file. To target a browser running the JDK 1.1.1, the single zip file should contain: the contents of classes111.zip, and the oracle/jdbc/driver/OracleDatabaseMetaData.class file.
Note that this file is very large and might have a negative impact on performance. If you do not use DatabaseMetadata entry points, omit this file. To target a browser running the JDK 1.0.2, the single zip file should contain: classes102.zip (minus the files you deleted in Step 2) and the jdbc interface files from the jdbc.sql package in the classes/jdbc/sql directory of the JDBC distribution. In the APPLET tag of your HTML page, the CODE parameter specifies the name of the applet class, and the CODEBASE parameter specifies the directory containing the applet's class files. For example, if you set CODEBASE to "my_Dir/Applet_Samples", then this would indicate that the applet resides in the my_Dir/Applet_Samples directory. The ARCHIVE parameter is optional and specifies the name of the archive file (either a .zip or .jar file) that contains the applet classes and resources the applet needs. Oracle recommends the use of a .zip file. The communication between an applet that uses the JDBC Thin driver and the Oracle database happens on top of Java TCP/IP sockets. In a JDK 1.0.2-based web browser, such as Netscape 3.0, an applet can open sockets only to the host from which it was downloaded. For Oracle8 this means that the applet can only connect to a database running on the same host as the web server. If you want to connect to a database running on a different host, then you must connect through the Oracle8 Connection Manager. For more information, see "Using the Oracle8 Connection Manager". In a JDK 1.1.1-based web browser, such as Netscape 4.0, an applet can request socket connection privileges and connect to a database running on a different host from the web server host. In Netscape 4.0 you perform this by signing your applet (that is, writing a signed applet), then opening your connection as follows: netscape.security.PrivilegeManager.enablePrivilege("UniversalConnect"); connection = DriverManager.getConnection("jdbc:oracle:thin:scott/tiger@dlsun511:1721:orcl"); Please refer to your browser documentation for more information on how to work with signed applets. You can also refer to "Using Signed Applets".
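Putting the deployment parameters together, a hypothetical APPLET tag might look like this (the class, directory, and archive names are placeholders, not values from this guide):

```html
<APPLET CODE="JdbcApplet.class" CODEBASE="my_Dir/Applet_Samples"
        ARCHIVE="applet_files.zip" WIDTH="400" HEIGHT="300">
</APPLET>
```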
http://www.csee.umbc.edu/help/oracle8/java.815/a64685/advanc2.htm
Django recipe: Remove newlines from a text block I like to keep a Django app called "toolbox" or "utils" in my projects where I store odds and ends. One recent addition is this quickie function for removing any newline characters found in a block of text. One place I find it useful is when I'm looking to dump scraped or user-submitted data to a spreadsheet or CSV. Imagine a file like "toolbox/templatetags/misc_tags.py" within a Django project that looks like so: from django import template from django.utils.safestring import mark_safe from django.template.defaultfilters import stringfilter from django.utils.text import normalize_newlines register = template.Library() def remove_newlines(text): """ Removes all newline characters from a block of text. """ # First normalize the newlines using Django's nifty utility normalized_text = normalize_newlines(text) # Then simply remove the newlines like so. return mark_safe(normalized_text.replace('\n', ' ')) remove_newlines.is_safe = True remove_newlines = stringfilter(remove_newlines) register.filter(remove_newlines) It then can be called in the template environment... {% load misc_tags %} {{ foo|remove_newlines }} ...or in the shell. >>> from toolbox.templatetags.misc_tags import remove_newlines >>> text = "Line one\nLine two\nLine three\nLine four" >>> remove_newlines(text) 'Line one Line two Line three Line four' That's the whole trick. If there's something I screwed up—a common event—feel free to let me know.
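Outside a Django project you can approximate the same behavior in plain Python. This is only a sketch: normalize_newlines folds \r\n and \r into \n, which the re.sub call reproduces here:

```python
import re

def remove_newlines(text):
    # Fold Windows (\r\n) and old-Mac (\r) line endings into \n,
    # roughly what django.utils.text.normalize_newlines does.
    normalized = re.sub(r'\r\n|\r', '\n', text)
    # Then swap each newline for a single space.
    return normalized.replace('\n', ' ')

print(remove_newlines("Line one\r\nLine two\rLine three\nLine four"))
# -> Line one Line two Line three Line four
```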
http://palewi.re/posts/2009/09/01/django-recipe-remove-newlines-text-block/
Today I published a new Ruby on Rails sample, the Office 365 VCF Import/Export Sample. It's basically a simple tool that allows a user to export contacts from her Office 365 mailbox to vCard files, or import vCard files into her contacts. But wait, there's more! Instead of implementing the Contacts API in the sample, I created a separate Ruby gem and implemented a portion of the Mail, Calendar, and Contacts APIs. That gem is published on rubygems.org for you to install and use in your projects. vCard Import/Export The look and feel of the sample app owes a LOT to the great work done by Michael Hartl in his Ruby on Rails Tutorial. The sample app basically uses the Contacts API in two ways: to create new contacts and to get existing contacts. Getting the contacts list When the user signs in, the app presents a list of existing contacts, sorted alphabetically by display name. The list is paged, showing 30 contacts at a time. Here's what that looks like using the ruby_outlook gem: # Maximum 30 results per page. view_size = 30 # Set the page from the query parameter. page = params[:page].nil? ? 1 : params[:page].to_i # Only retrieve display name. fields = [ "DisplayName" ] # Sort by display name sort = { :sort_field => 'DisplayName', :sort_order => 'ASC' } # Call the ruby_outlook gem wrapped_contacts = outlook_client.get_contacts user.access_token, view_size, page, fields, sort The wrapped_contacts variable is a JSON hash of the returned contacts. Getting details on a single contact When the user clicks the "Export" button for a contact, the app gets all fields for that single contact, using its Id. Here's how that's done with the ruby_outlook gem: outlook_client = RubyOutlook::Client.new # Call the ruby_outlook gem to get the contact from its ID contact = outlook_client.get_contact_by_id current_user.access_token, contact_id The contact variable is a JSON hash of all the fields of the contact. That is used to build a vCard stream, which the user can download. 
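The view_size/page pair above maps naturally onto OData-style paging parameters. As a sketch (the helper below is hypothetical, not part of the ruby_outlook gem):

```ruby
# Hypothetical helper: translate a 1-based page number and a page size
# into the $top/$skip query parameters a paged contacts request would use.
def paging_params(page, view_size)
  { '$top' => view_size, '$skip' => (page - 1) * view_size }
end

puts paging_params(2, 30).inspect  # page 2 skips the first 30 contacts
```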
Creating a new contact When the user clicks the "Import" button on the main page, they are given the option to enter vCard data manually, or to open a local vCard file for import. When the user clicks "Import" on this page, the vCard data is transformed into a JSON hash conforming to the Contact entity defined by the Contact API. It then uploads that to the server via the ruby_outlook gem: outlook_client = RubyOutlook::Client.new response = outlook_client.create_contact current_user.access_token, contact The contact variable contains the JSON hash of the new contact, and the response variable contains the JSON hash that's returned by the server after creation. More on the ruby_outlook gem The gem doesn't just implement the Contacts API. It implements the following: - Contacts API: CRUD, including a "get by ID" function. - Calendar API: CRUD, including a "get by ID" function and a calendar view function. - Mail API: CRUD, including a "get by ID" function. Also implements a send mail function. You may have noticed that there's a lot here that isn't implemented. For the stuff that isn't there, I created a make_api_call function that allows you to call any part of the API that you need. If you look at the source for the gem, you'll notice that all of the implemented functions (like get_contacts for example), use the make_api_call to do the actual work. Following their example, you can implement any other API call you want. Let's take a look at an example. # method (string): The HTTP method to use for the API call. # Must be 'GET', 'POST', 'PATCH', or 'DELETE' # url (string): The URL to use for the API call. Must not contain # the host. For example: '/api/v1.0/me/messages' # token (string): access token # params (hash) a Ruby hash containing any query parameters needed for the API call # payload (hash): a JSON hash representing the API call's payload. Only used # for POST or PATCH. 
def make_api_call(method, url, token, params = nil, payload = nil)

Using the gem to create a folder

The gem currently has no functions for working with mail folders. However, you can use the make_api_call function to do the work. The details on creating a folder are documented here. Using that information, you can do the following to create a subfolder in the Inbox:

outlook_client = RubyOutlook::Client.new
create_folder_url = '/api/v1.0/me/folders/inbox/childfolders'
new_folder_payload = { 'DisplayName' => 'New Subfolder' }
create_result = outlook_client.make_api_call('POST', create_folder_url, token, nil, new_folder_payload)

Using similar methods you should be able to call anything that the APIs support.

Go download the gem, the sample, or both! As always, I'd love to hear your feedback in the comments or on Twitter (@JasonJohMSFT). Feel free to report issues or submit pull requests on GitHub!

Links
- ruby_outlook gem:
- ruby_outlook source:
- Office 365 VCF Import/Export Sample source:
- Ruby and Outlook APIs Tutorial:

Hello. I'm working on a project where I'd like to be able to connect my app users to their @outlook.com accounts. I've successfully registered my app, connected via the API, received calendar and contacts data, etc. What's confusing me is that this all works when I connect with my @subdomain.onmicrosoft.com account, but I can't seem to connect my separate @outlook.com account to my app. Did I not configure something correctly in Azure? The idea is that general consumers would have an @outlook.com account, and I need them to be able to link that account to my app… Thank you!

Hi, I want to know how to send a calendar event creation request in a Ruby on Rails application.

Try this walkthrough; calendar events are at the end.
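For GET calls, the params hash maps straight onto OData query options. As a sketch (not from the blog post; the filter values and the token variable are assumptions), fetching the five most recent unread messages might look like this. The request-building part is split into its own method so it can be shown without a live mailbox:

```ruby
# Build the URL and OData query options for "five most recent unread
# messages". Kept separate from the network call so it is easy to test.
def unread_messages_request(count = 5)
  url = '/api/v1.0/me/messages'
  params = {
    '$filter'  => 'IsRead eq false',
    '$orderby' => 'DateTimeReceived desc',
    '$top'     => count
  }
  [url, params]
end

# With a real access token in `token`, the call itself would then be:
#   outlook_client = RubyOutlook::Client.new
#   url, params = unread_messages_request
#   result = outlook_client.make_api_call('GET', url, token, params)
```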
Handling False Positives in PVS-Studio and CppCat

It occurred to me recently to reanalyze the Newton Game Dynamics physics engine. The project's code is very high-quality, so almost no genuine bugs were detected, but I did get a few dozen false positives. It might seem there's nothing to write about, but there is: this article is about how to handle false positives and how to avoid them, and the Newton Game Dynamics project is a good example to demonstrate that on.

Project analysis

The number of diagnostic messages the PVS-Studio analyzer generated on this project is as follows:
- 48 first-level messages;
- 79 second-level messages;
- 261 third-level messages (off by default).

All of these refer to the general analysis set of diagnostic rules (GA). That makes a total of 127 warnings to examine. The CppCat analyzer generates just as many messages. Further in the article, I won't distinguish between PVS-Studio and CppCat: both provide essentially identical false positive suppression mechanisms. PVS-Studio does have a few more of them, but that doesn't affect the whole picture much. Note. To learn about the differences between the functional capabilities of PVS-Studio and CppCat, follow this link.

It took me around three hours to write down all the necessary samples for the article and get rid of all the warnings. It would have taken no more than an hour if I had not had to write down the examples. This suggests that the difficulty of fighting false positives is exaggerated. It's true that they hinder and distract you, and it's true that large projects contain piles of false positives. Nevertheless, getting rid of them is not difficult at all. The result: CppCat generates 0 warnings; so does PVS-Studio. We could of course turn on the third-level or 64-bit diagnostics, but they are not as interesting. First of all, you need to eliminate the warnings the analyzer flags as most crucial.
And that is already a large step toward higher quality of your code. It is what you should start with. If you turn on all the diagnostics at once, you won't have the patience to go through all of them; this is a common mistake of novice programmers. Remember that "more" doesn't mean "better".

Analysis report review

The PVS-Studio and CppCat analyzers don't group or sort diagnostic messages. You don't need this when you use them regularly: if a tool detects 2 or 3 bugs in new code, there is nothing to group and sort, while implementing this feature would only complicate the interface. When using the tool for the first time, you can sort messages by diagnostic number by clicking on the header of the column with the diagnostic code. We decided not to make this automatic. Warnings are displayed in the same order as files are analyzed, which allows you to start viewing messages without waiting for the analysis to finish. If messages were sorted automatically while the analysis is running, they would "jump" all over the table and you wouldn't be able to handle them until the analysis is over. Thus, sorting by message type (diagnostic number) is most useful at the first stages, and that is what I will do here. It allows me to quickly find false positives of one type and eliminate them, significantly simplifying the work and reducing the initial setup time.

Handling diagnostic messages

If any of the false positive suppression methods don't seem clear enough, see the corresponding section in the documentation:
- For PVS-Studio - Suppression of false alarms.
- For CppCat - the documentation coming with the distribution package.

Warnings No. 1, No. 2

void dgWorldDynamicUpdate::CalculateJointsVelocParallelKernel (....) { ....
dgVector velocStep2 (velocStep.DotProduct4(velocStep)); dgVector omegaStep2 (omegaStep.DotProduct4(omegaStep)); dgVector test ((velocStep2 > speedFreeze2) | (omegaStep2 > omegaStep2)); .... } Diagnostic message: V501 There are identical sub-expressions to the left and to the right of the '>' operator: omegaStep2 > omegaStep2 dgworlddynamicsparallelsolver.cpp 546 The "omegaStep2 > omegaStep2" expression looks suspicious. I cannot say for sure if there is a genuine error here or not. Because this comparison can also be found in another file, I guess it's not a bug but the programmer's conscious intention. Let's assume it's not an error. I have marked these two fragments with a special comment: dgVector test ((velocStep2 > speedFreeze2) | (omegaStep2 > omegaStep2)); //-V501 From now on, the V501 warning will not be generated for these fragments. Warning No. 3 dgInt32 dgWorld::CalculatePolySoupToHullContactsDescrete(....) const { .... dgAssert (dgAbsf(polygon.m_normal % polygon.m_normal - dgFloat32 (1.0f)) < dgFloat32 (1.0e-4f)); .... } Diagnostic message: V501 There are identical sub-expressions to the left and to the right of the '%' operator: polygon.m_normal % polygon.m_normal dgnarrowphasecollision.cpp 1921 The analyzer is both right and wrong about this code fragment. On the one hand, the "polygon.m_normal % polygon.m_normal" expression is very suspicious indeed. On the other hand, the analyzer just can't figure out that this is a test to check the '%' operator implemented in a class. So the code is actually correct. Let's help the analyzer by adding a comment: dgAssert (dgAbsf(polygon.m_normal % polygon.m_normal - //-V501 dgFloat32 (1.0f)) < dgFloat32 (1.0e-4f)); Warning No. 4 static void PopupateTextureCacheNode (dScene* const scene) { .... if (!(info->IsType(dSceneCacheInfo::GetRttiType()) || info->IsType(dSceneCacheInfo::GetRttiType()))) { .... 
} Diagnostic message: V501 There are identical sub-expressions 'info->IsType(dSceneCacheInfo::GetRttiType())' to the left and to the right of the '||' operator. dscene.cpp 125 One and the same condition is checked twice. Suppose the second check is redundant. To get rid of the false warning, I fixed the code in the following way: if (!(info->IsType(dSceneCacheInfo::GetRttiType()))) { Warning No. 5 dFloat dScene::RayCast (....) const { .... dFloat den = 1.0f / ((globalP1 - globalP0) % (globalP1 - globalP0)); //-V501 .... } Diagnostic message: V501 There are identical sub-expressions '(globalP1 - globalP0)' to the left and to the right of the '%' operator. dscene.cpp 1280 The variables globalP0 and globalP1 are instances of the 'dVector' class. Because of that, this code makes sense and the analyzer's worry is all for nothing. Let's mark the code with the comment: dFloat den = 1.0f / ((globalP1 - globalP0) % (globalP1 - globalP0)); //-V501 Although the analyzer is wrong, this code still can't be called neat. I suggest implementing special functions or something else for such cases. Warnings No. 6 - No. 15 dgInt32 dgCollisionCompound::CalculateContactsToCompound ( ...., dgCollisionParamProxy& proxy) const { .... dgCollisionInstance childInstance (*subShape, subShape->GetChildShape()); .... proxy.m_referenceCollision = &childInstance; .... m_world->CalculateConvexToConvexContacts(proxy); .... } Diagnostic message: V506 Pointer to local variable 'childInstance' is stored outside the scope of this variable. Such a pointer will become invalid. dgcollisioncompound.cpp 1815 The function receives a reference to an object of the 'dgCollisionParamProxy' type. A pointer to a local variable is written into this object, and the analyzer warns that it is potentially dangerous. After leaving the function, this pointer can't be used because the local variable it points to will be destroyed by then. There is no error in this particular case. 
The pointer is used only while the variable exists. I don't feel like using comments to suppress such warnings. You see, there are 9 more of them, all of the same kind. So let's do it another way. All the lines these false positives are generated on contain a variable named 'proxy'. We can write one single comment to suppress all these warnings at once: //-V:proxy:506 It should be added into some file that gets included into all the other files. In our case, the file "dgPhysicsStdafx.h" will do best. From now on, the V506 warning won't be displayed for any lines containing the word 'proxy'. This mechanism was initially implemented to suppress warnings in macros. But it doesn't actually matter if a word serves as a name for a macro or some other entity (a variable, function, class, etc.). The principle behind it is simple: if a string contains a specified substring, the corresponding warning is not displayed. Warning No. 16 The following sample is a lengthy one. There's nothing of much interest about it, so you may skip it. We have a vector class: class dgVector { .... union { __m128 m_type; __m128i m_typeInt; dgFloat32 m_f[4]; struct { dgFloat32 m_x; dgFloat32 m_y; dgFloat32 m_z; dgFloat32 m_w; }; struct { dgInt32 m_ix; dgInt32 m_iy; dgInt32 m_iz; dgInt32 m_iw; }; }; .... }; And we have the following piece of code where vector members are filled with values by the memcpy() function: DG_INLINE dgMatrix::dgMatrix (const dgFloat32* const array) { memcpy (&m_front.m_x, array, sizeof (dgMatrix)) ; } Diagnostic message: V512 A call of the 'memcpy' function will lead to overflow of the buffer '& m_front.m_x'. dgmatrix.h 118 The analyzer doesn't like that more bytes are written into the variable of the 'dgFloat32' type than it actually occupies. Far from neat, this practice, however, works well and is widely used. The function is actually filling with values the variables m_x, m_y, m_z, and so on. 
I was not attentive enough for the first time and just fixed the code in the following way: memcpy(m_front.m_f, array, sizeof(dgMatrix)); I thought that only one vector was copied, while the size of the 'm_f' array is just the same as that of the vector. But at the next launch, the analyzer drew my attention to the code once again. There were actually 4 vectors to be copied, not one. And it is 4 vectors that the 'dgMatrix' class contains: class dgMatrix { .... dgVector m_front; dgVector m_up; dgVector m_right; dgVector m_posit; .... } I don't know how to make this code neat and short, so I decided to leave it all as it had been before and just added the comment: memcpy (&m_front.m_x, array, sizeof (dgMatrix)) ; //-V512 Warnings No. 17, No. 18 void dgWorldDynamicUpdate::UpdateDynamics(dgFloat32 timestep) { dgWorld* const world = (dgWorld*) this; dgUnsigned32 updateTime = world->m_getPerformanceCount(); m_bodies = 0; m_joints = 0; m_islands = 0; m_markLru = 0; world->m_dynamicsLru = world->m_dynamicsLru + DG_BODY_LRU_STEP; m_markLru = world->m_dynamicsLru; .... } Diagnostic message: V519 The 'm_markLru' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 91, 94. dgworlddynamicupdate.cpp 94 The 'm_markLru' variable is first initialized by 0 and then 'world->m_dynamicsLru' is written into it. There is no error here. To get rid of the warning, I removed the variable initialization by zero. Just in the same way I fixed one more code fragment. The corresponding diagnostic message: V519 The 'm_posit' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 1310, 1313. customvehiclecontrollermanager.cpp 1313 Warnings No. 19, No. 
20

dgFloat32 dgCollisionConvexPolygon::GetBoxMinRadius () const { return m_faceClipSize; }
dgFloat32 dgCollisionConvexPolygon::GetBoxMaxRadius () const { return m_faceClipSize; }

Diagnostic message: V524 It is odd that the body of 'GetBoxMaxRadius' function is fully equivalent to the body of 'GetBoxMinRadius' function. dgcollisionconvexpolygon.cpp 88

Two functions whose names contain the words 'Min' and 'Max' are implemented in the same way. The analyzer finds this suspicious, but everything is OK here. To eliminate the false positive, I implemented one function through the other:

dgFloat32 dgCollisionConvexPolygon::GetBoxMaxRadius () const { return GetBoxMinRadius(); }

In the same way I handled the functions GetBoxMinRadius/GetBoxMaxRadius implemented in the 'dgCollisionScene' class.

Warning No. 21

dgInt32 AddFilterFace (dgUnsigned32 count, dgInt32* const pool)
{
  ....
  for (dgUnsigned32 i = 0; i < count; i ++) {
    for (dgUnsigned32 j = i + 1; j < count; j ++) {
      if (pool[j] == pool[i]) {
        for (i = j; i < count - 1; i ++) {
          pool[i] = pool[i + 1];
        }
        count --;
        i = count;
        reduction = true;
        break;
      }
    }
  }
  ....
}

Diagnostic message: V535 The variable 'i' is being used for this loop and for the outer loop. Check lines: 105, 108. dgpolygonsoupbuilder.cpp 108

We have two loops here. One of them uses the variable 'i' as a counter; the other, the variable 'j'. Inside these loops, one more loop is sometimes run, and it uses the variable 'i' as a counter too. The analyzer doesn't like this, though there is no error here. When the internal loop is executed, the external loops are stopped:
- the loop over the 'j' variable is stopped by the 'break' operator;
- the loop over the 'i' variable is stopped through the assignment "i = count".

The analyzer failed to figure out these nuances. The fragment above is a fine example of code that works but smells. I used a comment to eliminate the false positive:

for (i = j; i < count - 1; i ++) { //-V535

Warnings No. 22 - No.
25 DG_INLINE dgMatrix::dgMatrix (const dgVector& front) { .... m_right = m_right.Scale3 (dgRsqrt (m_right % m_right)); m_up = m_right * m_front; .... } Diagnostic message: V537 Consider reviewing the correctness of 'm_right' item's usage. dgmatrix.h 143 The analyzer generates the V537 warning when it comes across a suspicious mixture of variables whose names contain the words "right", "left", "front", and the like. This diagnostic proved unsuccessful for this project as the analyzer generated 4 warnings on absolutely safe code. In that case, we can completely turn this diagnostic off in PVS-Studio's settings. In CppCat, you can't turn off single diagnostics, so we have to use an alternative method. All the lines that triggered the false positives contain the word "right". I added the following comment into the file "dgStdafx.h": //-V:right:537 Warning No. 26 Notice the comment. int pthread_delay_np (struct timespec *interval) { .... /* * Most compilers will issue a warning 'comparison always 0' * because the variable type is unsigned, * but we need to keep this * for some reason I can't recall now. */ if (0 > (wait_time = secs_in_millisecs + millisecs)) { return EINVAL; } .... } Diagnostic message: V547 Expression is always false. Unsigned type value is never < 0. pthread_delay_np.c 119 The comment tells us that it's not a bug, but the programmer's conscious intention. Well, in that case we only have to suppress the warning by a comment: if (0 > (wait_time = secs_in_millisecs + millisecs)) //-V547 Warning No. 27 typedef unsigned long long dgUnsigned64; dgUnsigned64 m_mantissa[DG_GOOGOL_SIZE]; dgGoogol::dgGoogol(dgFloat64 value) :m_sign(0) ,m_exponent(0) { .... m_mantissa[0] = (dgInt64 (dgFloat64 ( dgUnsigned64(1)<<62) * mantissa)); // it looks like GCC have problems with this dgAssert (m_mantissa[0] >= 0); .... } Diagnostic message: V547 Expression 'm_mantissa[0] >= 0' is always true. Unsigned type value is always >= 0. 
dggoogol.cpp 55 The analyzer shares GCC's opinion that something is wrong with this code (see the comment in the code). The check "dgAssert(m_mantissa[0] >= 0)" makes no sense: an unsigned variable is always equal to or larger than zero. 'dgAssert' doesn't actually check anything. Programmers tend to be lazy. They would rather write a comment than spend some time to investigate an issue and fix a mistake. I fixed the code so that 'dgAssert' executed a correct check. For this purpose, I had to add a temporary signed variable: dgInt64 integerMantissa = (dgInt64(dgFloat64( dgUnsigned64(1) << 62) * mantissa)); dgAssert(integerMantissa >= 0); m_mantissa[0] = integerMantissa; Warnings No. 28 - No. 31 void dgRedBackNode::RemoveFixup (....) { .... if (!ptr) { return; } .... ptr->SetColor(RED) ; ptr->RotateLeft (head); tmp = ptr->m_right; if (!ptr || !tmp) { return; } .... } Diagnostic message: V560 A part of conditional expression is always false: !ptr. dgtree.cpp 215 The '!ptr' expression is always false. The reason is that the 'ptr' pointer has already been checked for being null before. If it was found to be null, the function would be left. The second check looks even sillier because of the pointer being dereferenced before it: "tmp = ptr->m_right;". I eliminated the false positive by removing the second meaningless check. The code now looks like this: if (!ptr) { return; } .... tmp = ptr->m_right; if (!tmp) { return; } .... In the same way I fixed 3 other code fragments. By the way, this code could additionally trigger the V595 warning. I felt too lazy to check it for sure, but if we miss a couple of warnings in the end of the article, be aware that it will be just because of that. Warnings No. 32, No. 33 DG_INLINE bool dgBody::IsCollidable() const; void dgBroadPhase::AddPair (dgBody* const body0, dgBody* const body1, const dgVector& timestep2, dgInt32 threadID) { .... bool kinematicBodyEquilibrium = (((body0->IsRTTIType(dgBody::m_kinematicBodyRTTI) ? 
true : false) & body0->IsCollidable()) | ((body1->IsRTTIType(dgBody::m_kinematicBodyRTTI) ? true : false) & body1->IsCollidable())) ? false : true; .... } Diagnostic message: V564 The '&' operator is applied to bool type value. You've probably forgotten to include parentheses or intended to use the '&&' operator. dgbroadphase.cpp 921 This code smells. I don't think I understand why the programmer would need such a complicated and obscure check. I rewrote it, and the code became a bit shorter and more readable. Besides, I got rid of the false warning. bool kinematicBodyEquilibrium = !((body0->IsRTTIType(dgBody::m_kinematicBodyRTTI) && body0->IsCollidable()) || (body1->IsRTTIType(dgBody::m_kinematicBodyRTTI) && body1->IsCollidable())); There was one more V564 warning, and I simplified the corresponding code fragment too: V564 The '&' operator is applied to bool type value. You've probably forgotten to include parentheses or intended to use the '&&' operator. dgbroadphase.cpp 922 Warnings No. 34 - No. 37 class dgAIWorld: public dgAIAgentGraph { .... }; typedef struct NewtonAIWorld{} NewtonAIWorld; NewtonAIWorld* NewtonAICreate () { TRACE_FUNCTION(__FUNCTION__); dgMemoryAllocator* const allocator = new dgMemoryAllocator(); NewtonAIWorld* const ai = (NewtonAIWorld*) new (allocator) dgAIWorld (allocator); return ai; } Diagnostic message: V572 It is odd that the object which was created using 'new' operator is immediately casted to another type. newtonai.cpp 40 That's a pretty strange way to store objects: creating an object of the 'dgAIWorld' class and casting it explicitly to the ' NewtonAIWorld' type. I didn't feel like figuring out why it had been done - there must have been some reason; I simply suppressed this warning by a comment in this and 3 other functions. Warning No. 38 void dgCollisionCompound::EndAddRemove () { .... if (node->m_type == m_node) { list.Append(node); } if (node->m_type == m_node) { stack.Append(node->m_right); stack.Append(node->m_left); } .... 
} Diagnostic message: V581 The conditional expressions of the 'if' operators situated alongside each other are identical. Check lines: 952, 956. dgcollisioncompound.cpp 956 The analyzer doesn't like one and the same condition being checked twice on end. Perhaps there is some typo here. What if this code was meant to look like this: if (node->m_type == m_node) { .... } if (node->m_type == m_FOO) { .... } However, that code sample is alright. To get rid of the false positive, we should fix the code. I don't think I will violate the program execution logic by leaving only one check: if (node->m_type == m_node) { list.Append(node); stack.Append(node->m_right); stack.Append(node->m_left); } Warning No. 39 void dSceneGraph::AddEdge (....) { .... if ((!parentLink && !childLink)) { .... } Diagnostic message: V592 The expression was enclosed by parentheses twice: '((!parentLink &&!childLink))'. One pair of parentheses is unnecessary or misprint is present. dscenegraph.cpp 209 It's OK, just redundant parentheses. I removed them: if (!parentLink && !childLink) { Warnings No. 40 - No. 44 dgVector dgCollisionCylinder::SupportVertex (....) const { dgAssert (dgAbsf ((dir % dir - dgFloat32 (1.0f))) < dgFloat32 (1.0e-3f)); .... } Diagnostic message: V592 The expression was enclosed by parentheses twice: '((dir % dir - dgFloat32(1.0f)))'. One pair of parentheses is unnecessary or misprint is present. dgcollisioncylinder.cpp 202 It's alright, just redundant parentheses. I removed them so that the analyzer didn't worry: dgAssert (dgAbsf (dir % dir - dgFloat32 (1.0f)) < dgFloat32 (1.0e-3f)); This line was replicated to 4 other code fragments through the Copy-Paste method. I fixed those too. Warnings No. 45 - No. 65 void ptw32_throw (DWORD exception) { .... ptw32_thread_t * sp = (ptw32_thread_t *) pthread_getspecific (ptw32_selfThreadKey); sp->state = PThreadStateExiting; if (exception != PTW32_EPS_CANCEL && exception != PTW32_EPS_EXIT) { exit (1); } .... 
if (NULL == sp || sp->implicit)
  ....
}

Diagnostic message: V595 The 'sp' pointer was utilized before it was verified against nullptr. Check lines: 77, 85. ptw32_throw.c 77

The V595 diagnostic works in the following way: the analyzer considers code suspicious if a pointer is first dereferenced and then checked for being null. There are certain nuances and exceptions to this rule, but that is the general principle. Here we have just such a case: the 'sp' variable is first dereferenced in the expression "sp->state" and only then checked for being null. The analyzer has detected 20 more fragments like that. In each particular case, we should act differently: in some fragments I placed the check before the dereferencing operation, and in others I simply removed it.

Note

False V595 warnings are very often triggered by macros of the following pattern:

#define FREE(p) { if (p) free(p); }

In this particular case, the analyzer will figure out the programmer's intention and keep silent. But in general, the following code pattern may trigger false positives:

p->foo();
FREE(p);

In these cases, I recommend throwing such macros away completely. The FREE() macro shown above is absolutely meaningless and even harmful. Firstly, you don't have to check the pointer for being null: the free() function handles null pointers correctly, and the same is true for the 'delete' operator. That's why the FREE() macro is not needed at all. Secondly, it is dangerous: the macro argument is evaluated twice, so if we extract pointers from an array, it may cause an error. For example, in FREE(ArrayOfPtr[i++]) one pointer will be checked, and the next one will be freed.

Warning No. 66

void dgCollidingPairCollector::Init ()
{
  dgWorld* const world = (dgWorld*) this;
  // need to expand teh buffer is needed
  world->m_pairMemoryBuffer[0];
  m_count = 0;
}

Diagnostic message: V607 Ownerless expression 'world->m_pairMemoryBuffer[0]'.
dgcontact.cpp 342 The comment tells us that the "world->m_pairMemoryBuffer[0]" expression makes sense. The analyzer, however, doesn't know that and generates a false positive. I removed it by adding a comment: world->m_pairMemoryBuffer[0]; //-V607 A nicer solution would be to add a special method expanding the buffer. Then the code would look something like this: void dgCollidingPairCollector::Init () { dgWorld* const world = (dgWorld*) this; world->m_pairMemoryBuffer.ExpandBuffer(); m_count = 0; } We don't need the comment anymore - the code says it all by itself. The analyzer doesn't generate any warnings, and everything's fine. Warning No. 67 dgGoogol dgGoogol::Floor () const { .... dgUnsigned64 mask = (-1LL) << (64 - bits); .... } Diagnostic message: V610 Undefined behavior. Check the shift operator '<<. The left operand '(- 1LL)' is negative. dggoogol.cpp 249 You cannot shift negative numbers to the left - it leads to undefined behavior. To learn more about that, see the article "Wade not in unknown waters. Part three". I fixed the code in the following way: dgUnsigned64 mask = (~0LLU) << (64 - bits); Warnings No. 68 - No. 79 void dGeometryNodeSkinModifierInfo::RemoveUnusedVertices( const int* const vertexMap) { .... dVector* vertexWeights = new dVector[m_vertexCount]; dBoneWeightIndex* boneWeightIndex = new dBoneWeightIndex[m_vertexCount]; .... delete boneWeightIndex; delete vertexWeights; } Diagnostic messages: - V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] boneWeightIndex;'. dgeometrynodeskinmodifierinfo.cpp 97 - V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] vertexWeights;'. dgeometrynodeskinmodifierinfo.cpp 98 Square brackets are missing near the 'delete' operators. It is a mistake and it must be fixed. 
The correct code will look as follows: delete [] boneWeightIndex; delete [] vertexWeights; The analyzer found 10 more fragments like that, and I fixed them all. Warning No. 80 #if defined(_MSC_VER) /* Disable MSVC 'anachronism used' warning */ #pragma warning( disable : 4229 ) #endif typedef void (* PTW32_CDECL ptw32_cleanup_callback_t)(void *); #if defined(_MSC_VER) #pragma warning( default : 4229 ) #endif Diagnostic message: V665 Possibly, the usage of '#pragma warning(default: X)' is incorrect in this context. The '#pragma warning(push/pop)' should be used instead. Check lines: 733, 739. pthread.h 739 This is a bad way to suppress warnings, especially if this code refers to the library. To find out the reason why and how to fix this code, see the description of the V665 diagnostic. I fixed the code by using "warning(push)" and " warning(pop)": #if defined(_MSC_VER) /* Disable MSVC 'anachronism used' warning */ #pragma warning( push ) #pragma warning( disable : 4229 ) #endif typedef void (* PTW32_CDECL ptw32_cleanup_callback_t)(void *); #if defined(_MSC_VER) #pragma warning( pop ) #endif Warnings No. 81 - No. 99 dgAABBPointTree4d* dgConvexHull4d::BuildTree (....) const { .... const dgBigVector& p = points[i]; .... varian = varian + p.CompProduct4(p); .... } Diagnostic message: V678 An object is used as an argument to its own method. Consider checking the first actual argument of the 'CompProduct4' function. dgconvexhull4d.cpp 536 The analyzer doesn't like the calls of the X.Foo(X) pattern. Firstly, it may be a typo. Secondly, the class may not be ready for handling itself. In this particular case, the code is correct. The false positive should be suppressed. We could use the following comment, for example: varian = varian + p.CompProduct4(p); //-V678 But that's a bad idea. The analyzer has generated 18 more warnings of this kind, and you don't want to add so many comments into the code. 
Fortunately, all the 19 warnings refer to calls of the functions CompProduct3() or CompProduct4. So you can write only one comment to suppress all the V678 warnings in lines containing the substring "CompProduct": //-V:CompProduct:678 I placed this comment in the file dgStdafx.h. Warnings No. 100 - No. 119 The 'dgBaseNode' class contains pointers: class dgBaseNode: public dgRef { .... dgBaseNode (const dgBaseNode &clone); .... private: .... dgBaseNode* parent; dgBaseNode* child; dgBaseNode* sibling; }; Because of that, it has a full-blown copy constructor: dgBaseNode::dgBaseNode (const dgBaseNode &clone) :dgRef (clone) { Clear (); for (dgBaseNode* obj = clone.child; obj; obj = obj->sibling) { dgBaseNode* newObj = (dgBaseNode *)obj->CreateClone (); newObj->Attach (this); newObj->Release(); } } Diagnostic message: V690 The 'dgBaseNode' class implements a copy constructor, but lacks the the '=' operator. It is dangerous to use such a class. dgnode.h 35 The "Law of the Big Two" is violated here: the copy constructor is present, but the copy assignment operator = is missing. It will result in the compiler simply copying pointers' values while executing assignment, which will in its turn give birth to hard-to-find bugs. Even if the = operator is not used currently, this code is potentially dangerous as you may very easily make a mistake. There is only one correct way to fix it all - implement the = operator. If this operator is not needed according to the code's logic, you can declare it private. The analyzer has found 18 more classes with the = operator missing (or not forbidden). There is also one strange class whose meaning and purpose I failed to figure out: struct StringPool { char buff[STRING_POOL_SIZE]; StringPool () { } StringPool (const StringPool &arg) { } }; I simply suppressed the false positive by a comment: struct StringPool //-V690 { .... }; Note 1. 
C++11 has new keywords to make it simpler to forbid the use of copy constructors and copy assignment operators, or to tell the compiler that the copy constructor or = operator it generates by default is correct. What I mean are =default and =delete. To learn more about these, see the C++ FAQ.

Note 2. In many programs, copy constructors or copy assignment operators are implemented even though they are not needed. I mean the situation when an object can be easily copied by the compiler. Here is a simple artificial example:

struct Point
{
  int x, y;
  Point &operator=(const Point &p)
  {
    x = p.x;
    y = p.y;
    return *this;
  }
};

This code contains a = operator that no one needs. The "Law of the Big Two" is violated here, and the analyzer generates the warning. To avoid writing one more unnecessary function (the copy constructor), we need to delete the = operator. Here is an excellent short and correct class:

struct Point
{
  int x, y;
};

Warnings No. 120 - No. 125

We have 6 more warnings of different types left. I failed to cope with them as I am absolutely unfamiliar with the code. I can't figure out whether I'm dealing with a genuine bug or a false positive. Besides, even if it is an error, I still don't see how to fix it. I didn't feel like worrying my head off about it and simply marked them as false positives.

Warnings No. 126 - No. 127

Two warnings have been "lost". It's OK. You see, one and the same suspicious code fragment may sometimes trigger 2 or even 3 warnings, so one fix can eliminate several warnings at once. For example, the V595 warnings might have disappeared because of the fixes related to the V560 diagnostic (see warnings No. 28 - No. 31).

Conclusions

As you can see, there are pretty few false positives as such. Most warnings point out smelly code fragments: they work indeed, but they are still pretty strange, hard to read and maintain. What can confuse the analyzer is even more likely to confuse a human.
Many of the code fragments the analyzer didn't like can be rewritten. Doing so will not only eliminate a warning, but also make the code clearer. For those cases when the analyzer is obviously wrong, it provides a number of methods of false-positive suppression. They are described in detail in the documentation. I hope I have managed to demonstrate in this article that handling false positives is not nearly as difficult as it might seem at first. I wish you luck in mastering our static code analyzers.
http://www.viva64.com/en/b/0263/
lp:~matsubara/maas/fix-maas-cc-sed-precise-sru

- Get this branch: bzr branch lp:~matsubara/maas/fix-maas-cc-sed-precise-sru

Branch merges

- Raphaël Badin (community): Approve on 2013-04-11
- Diff: 12 lines (+1/-1), 1 file modified: debian/maas-cluster-controller.postinst (+1/-1)

Related bugs

Related blueprints

Branch information

- Owner: Diogo Matsubara
- Status: Development

Recent revisions

- 170. By Diogo Matsubara on 2013-04-11
  fix sed quoting on maas-cluster-controller.postinst
- 169. By Andres Rodriguez on 2013-03-20
  Releasing 1.2+bzr1373+dfsg-0ubuntu1~12.04.1 to precise-proposed
- 168. By Andres Rodriguez on 2013-03-20
  Rebase against latest quantal
- 167. By Andres Rodriguez on 2013-03-08
  * MAAS Stable Release Update (LP: #1109283). See changelog entry below. (>= 1.3.1-4ubuntu1.7).
  - debian/copyright: Update copyright to reflect libraries license.
  * http://lists.ubuntu.com/archives/ubuntu-devel-announce/2013-February/001012.html
  * New upstream release:
  - MAAS file storage mechanism is shifting from a single shared namespace to a per-user namespace. Operators of the majority of MAAS systems will not notice any change. However, operators of the most complex installations may find that a new "shared-environment" user is created, and that some resources are reassigned to it, such as API credentials and SSH public keys. This provides a transitional environment that mimics the behaviour of a shared namespace.
- 166. By Andres Rodriguez on 2013-02-22
  Update changelog for latest bzr version
- 165. By Andres Rodriguez on 2013-02-20
  Update changelog and copyright
- 164. By Andres Rodriguez on 2013-02-20
  * Continue to ship yui3 and raphael with MAAS.
  - debian/patches/04_precise_no_yui_root.patch: Add.
  - debian/control: Drop dependencies on yui3 and raphael.
  - debian/source/include-binaries: Add to not FTBFS
- 163. By Raphaël Badin on 2013-02-19
  Remove the debian/tests dir and remove the XS-Testsuite header.
- 162.
By Andres Rodriguez on 2013-02-02
  * debian/maas-dhcp.maas-dhcp-server.upstart: leases file should be owned by user/group 'dhcpd' instead of root.
  * debian/control: Force dependency version for python-django to (>> 1.3.1-4ubuntu1.4).
- 161. By Andres Rodriguez on 2013-02-01
  Update to match Quantal changelog

Branch metadata

- Branch format: Branch format 7
- Repository format: Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on: lp:maas/trunk
https://code.launchpad.net/~matsubara/maas/fix-maas-cc-sed-precise-sru
One of the advantages of interpreted languages, like Python, is that when your vendor ships a tool that uses Python… you can see how the Python bits actually work. Or maybe seeing how the sausage is made is a disadvantage?

Delana's vendor tool needs to determine the IP address of the computer in a platform independent way. The "standard" way of doing this in Python is to check the computer's hostname then feed that into a function like one of these which turns it into IP addresses. Those methods should behave more-or-less the same on both Windows and Linux, so checking the IP address should be a simple and short function. Let's see how Delana's vendor did it:

def get_ip_address(ifname):
    os_type = 'windows'
    if sys.platform == 'linux' or sys.platform == 'linux2':
        os_type = 'linux'
    else:
        if sys.platform == 'win32':
            os_type = 'windows'
    if os_type == 'linux':
        import fcntl
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        f = fcntl.ioctl(s.fileno(), 35093,
                        struct.pack('256s', ifname[:15]))[20:24]
        ip = socket.inet_ntoa(f)
        return ip
    if os_type == 'windows':
        ip = [(s.connect(('8.8.8.8', 80)), s.getsockname()[0], s.close())
              for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]
        return ip

Let's start with the OS check. We set os_type = 'windows', then check the platform. If it's 'linux' or 'linux2', our os_type is 'linux'. Otherwise, if the platform is 'win32', we set os_type to 'windows', which is what it already was. The entire 'windows' branch of the if isn't needed. Also, the "proper" way of writing it would be elif. Okay, that's silly, but fine.

What if we want to check the IP on Linux? Well, this solution pulls in the fcntl library, which is a Python wrapper around the fcntl.h collection of syscalls. They open a socket, get the filehandle, and then do a cryptic ioctl operation which I'm sure I'd understand if I pored over the manpages for a bit. A quick call to inet_ntoa turns the integer representing the IP back into a string representing the IP.
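For contrast, the hostname-based "standard" approach mentioned above fits in a few lines. This is a hedged sketch, not the vendor's code; the function name and the loopback fallback are mine:

```python
import socket

def get_ip_address_portable():
    # Resolve this machine's hostname to an IPv4 address. This is the
    # usual cross-platform idiom; on boxes whose hostname maps to
    # loopback in /etc/hosts it happily returns 127.0.0.1, which is
    # one reason code like the vendor's goes to more trouble.
    try:
        return socket.gethostbyname(socket.gethostname())
    except socket.gaierror:
        # Assumed fallback for hosts whose hostname doesn't resolve.
        return "127.0.0.1"
```

Note that `gethostbyname` is IPv4-only, and the answer depends entirely on how the resolver is configured.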
From this, I get the sense that the original developer is a C programmer who's doing Python against their will. And more to the point, a Linux C programmer who has an understanding of low-level Linux syscalls. Which is also supported by their approach to this question on Windows.

Before we talk about the Python code, let's just talk about their process. They open a socket to one of Google's DNS servers, then call getsockname() to get the IP address. Now, they're not the only ones to use this basic approach. Since this is a UDP socket that they're opening, they're not actually starting a connection; nothing actually gets sent to Google's DNS server (and they could use any IP, since we don't need a valid route). It's weird, but not wrong.

But then there's the actual line, which is such an utter mess of a way of writing this that I need to break it into bits to explain the abuse of Python syntax constructs. Let's start with this bit, as this is easy:

socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

That just creates the socket. But we wrap it up in a generator expression:

for s in [ socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]

We turn that socket into a list of one item, and then say that we're going to iterate across that, to generate a list of tuples:

[(s.connect(('8.8.8.8', 80)), s.getsockname()[0], s.close()) for s in [ socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]

And this is where we're really going into a special place with this code. We have three operations we want to perform. We want to open the socket, check the IP address, and then close the socket. This code does that inside of a tuple: the first item in the tuple is the connect, then we getsockname, then we close. Finally, we grab the 0th element of the list (the tuple itself), and then the 1st element in the tuple: the result of s.getsockname()[0].
The "turn the socket into a list and iterate across that list to generate a new list of one tuple" seems like an attempt to code-golf this down into one line. Maybe they were so angry about having to do a Windows version that they didn't want to spend any more lines of code on it than they had to? I don't know, but I have a bad feeling this isn't the last time I'm going to encounter someone performing a sequence of operations in a tuple like this.
https://thedailywtf.com/articles/a-proper-mode-of-address
Hi, I have not been able to find this info anywhere. Is there a way that we can have the Summary and Description fields pre-populated with a text template when an end-user clicks on the "+ Create Issue" button? We would want the text used to pre-populate those fields to depend on the Issue Type (e.g. Bug, Story, Epic). We used something like this in Redmine () before we switched to Jira. In that case we had to select the template we wanted to use and then insert it in. I'm fine with the same functionality of that Redmine plugin, but it would be cool if the fields could be prepopulated. Thanks! -Rich

Options are javascript to set a value if none exists, or the Behaviours plugin to do it. Behaviours script example:

def summary = getFieldById("summary")
if (! summary.getValue()) {
    summary.setFormValue("Summary template...")
}

@Jamie Echlin [Adaptavist] I'm new to JIRA, where do I do this? (Feel free to point me to a documentation link)

If you are using ScriptRunner, then

This is only available for ScriptRunner Server, right? not for the cloud version? which is odd, considering that this question had jira cloud as a tag on it

@Jamie Echlin [Adaptavist] can we also prepopulate the description in the subtask? I want some content to be already there (prefilled) in the description field. For example, can you have "Test" filled into the description field by default?

I need caps to say THANK YOUUUUU JAMIE... I finally found the solution to populate the summary during the ticket creation.

I'm using workflow post-functions to sort of do this without any plugins. It may not prepopulate the fields at the create screen, but once you create the issue those fields are populated via that workflow action. I would just omit those fields that need to be prepopulated from the create screen altogether and have the workflow fill it in.
Later you can edit those fields through an edit screen or workflow function. Hrm, but you can't create an issue without a summary... also seems painful having to do a transition just to enter summary and description. Sadly, I've also looked everywhere and this basic option is missing in Jira. What I use instead is a simple chrome addon that with a few keystrokes does the job for.
https://community.atlassian.com/t5/Jira-questions/Pre-filled-fields-upon-quot-Create-issue-quot/qaq-p/310954
[ ] Harald Wellmann edited comment on OPENJPA-1784 at 9/8/10 9:37 AM: ------------------------------------------------------------------ The attachment MapUpdate.patch contains a unit test exhibiting the problem. A test which updates a map key passes. Another test which only update the map value fails. was (Author: hwellmann): Unit test exhibiting the problem. A test which updates a map key passes. Another test which only update the map value fails. > Map value updates not flushed > ----------------------------- > > Key: OPENJPA-1784 > URL: > Project: OpenJPA > Issue Type: Bug > Components: jdbc > Affects Versions: 2.0.1 > Reporter: Harald Wellmann > Attachments: MapUpdate.patch > > > I have an entity with a map element collection where the map value is an Embeddable. > @Embeddable > public class LocalizedString { > private String language; > private String string; > // getters and setters omitted > } > > @Entity > public class MultilingualString { > @Id > private long id; > @ElementCollection(fetch=FetchType.EAGER) > private Map<String, LocalizedString> map = new HashMap<String, Localized. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/openjpa-dev/201009.mbox/%3C19645419.75391283953057793.JavaMail.jira@thor%3E
By this time you know the routine. Put the answers in a directory named "assignment6". Include a README file and any other files (Python, data files, etc.) needed to do the assignment. Email me a gzip'ed or bzip'ed tar file of the directory, or a zip file.

When will I grade the previous homework? I like grading even less than you like doing the homework! Maybe when Janet starts teaching ontologies I'll have the time and inclination...

Part 1

This will build on the generators lecture from last week. Put your function definitions in the Python file named "codon_functions.py". Make sure the file can be imported and used as a module. Once all the functions are implemented you can test the module by putting test_codon_functions.py in your assignment directory and running it as "python test_codon_functions".

Generate codons

Write a generator function named "get_codons" which takes a DNA sequence as a string and yields each 3-letter codon. Note: The sequence might not have a multiple of 3 bases. If that happens you must exclude the final 1- or 2-base term. To help, here is code to test your function.
def test_get_codons(): for seq, expected_codons in ( # These lengths are a multiple of 3 ("", []), ("ATC", ["ATC"]), ("ATCGAT", ["ATC", "GAT"]), ("ATCGTGCATAGACTATGCAATATACCG", ["ATC","GTG","CAT","AGA","CTA","TGC","AAT","ATA","CCG"]), # These lengths are not a multiple of 3 ("A", []), ("TC", []), ("TTTC", ["TTT"]), ("TTTCC", ["TTT"]), ("AAATTTCCCGGGATCG", ["AAA", "TTT", "CCC", "GGG", "ATC"]), ): codons = list(get_codons(seq)) if codons != expected_codons: raise AssertionError("Codons for %r was %r, expected %r" % (seq, codons, expected_codons)) # This checks that the 'get_codon' function returns a generator gen = get_codons("ATC") if gen.next() != "ATC": raise AssertionError("Could not get the codon") try: gen.next() except StopIteration: pass else: raise AssertionError("Only supposed to find one codon") def test(): test_get_codons() if __name__ == "__main__": test() print "All tests passed." Generate translated bases Write a generator function named "translate_codons" which converts each codon from get_codons() into protein residues. The standard translation table is table = { 'TTT': 'F', 'TTC': 'F', 'TTA': 'L', 'TTG': 'L', 'TCT': 'S', 'TCC': 'S', 'TCA': 'S', 'TCG': 'S', 'TAT': 'Y', 'TAC': 'Y', 'TGT': 'C', 'TGC': 'C', 'TGG': 'W', 'CTT': 'L', 'CTC': 'L', 'CTA': 'L', 'CTG': 'L', 'CCT': 'P', 'CCC': 'P', 'CCA': 'P', 'CCG': 'P', 'CAT': 'H', 'CAC': 'H', 'CAA': 'Q', 'CAG': 'Q', 'CGT': 'R', 'CGC': 'R', 'CGA': 'R', 'CGG': 'R', 'ATT': 'I', 'ATC': 'I', 'ATA': 'I', 'ATG': 'M', 'ACT': 'T', 'ACC': 'T', 'ACA': 'T', 'ACG': 'T', 'AAT': 'N', 'AAC': 'N', 'AAA': 'K', 'AAG': 'K', 'AGT': 'S', 'AGC': 'S', 'AGA': 'R', 'AGG': 'R', 'GTT': 'V', 'GTC': 'V', 'GTA': 'V', 'GTG': 'V', 'GCT': 'A', 'GCC': 'A', 'GCA': 'A', 'GCG': 'A', 'GAT': 'D', 'GAC': 'D', 'GAA': 'E', 'GAG': 'E', 'GGT': 'G', 'GGC': 'G', 'GGA': 'G', 'GGG': 'G', } stop_codons = [ 'TAA', 'TAG', 'TGA', ], start_codons = [ 'TTG', 'CTG', 'ATG', ]Various codon tables are available in Biopython so you should use that instead. 
The library also handles the ambiguous encodings so "ATH" (where "H" means "A", "C", or "T") encodes for isoleucine because "ATA", "ATC" and "ATT" all code for isoleucine. Here are examples of use: >>> from Bio.Data import CodonTable >>> table = CodonTable.ambiguous_dna_by_name["Standard"] >>> table.forward_table["ATG"] 'M' >>> table.forward_table["ATA"] 'I' >>> table.forward_table["ATH"] 'I' >>> table.stop_codons ['TAA', 'TAG', 'TGA'] >>> table.forward_table["TAA"] Traceback (most recent call last): File "<stdin>", line 1, in ? File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ python2.3/site-packages/Bio/Data/CodonTable.py", line 557, in __getitem__ raise KeyError, codon # it's a stop codon KeyError: 'TAA' >>>The KeyError is used for anything which can't be translated, including stop codons. Your function must test if the codon is a stop codon and return a "*" for that case. Here is the new test code for this function def test_translate_codons(): # A quick test to make sure the function allows lists of codons residues = list(translate_codons( ["GAG", "AAG", "TTG", "GCT", "GAT"])) if residues != ["E", "K", "L", "A", "D"]: raise AssertionError("Residues for list data was %r, expected %r" % (residues, ["E", "K", "L", "A", "D"]))", "*", "K", "*", "K"]), # Test the ambiguous codes ("ATH", ["I"]), ("AGRAGY", ["R", "S"]), ("AAAAGRTAGATHC", ["K", "R", "*", "I"]), ): residues = list(translate_codons(get_codons(seq))) if residues != expected_residues: raise AssertionError("Residues for %r was %r, expected %r" % (seq, residues, expected_residues)) # This checks that the 'translate_codons' function uses an iterator # and returns a generator def yield_codons(): yield "GCT" raise AssertionError("I didn't ask for the second codon") gen = translate_codons(yield_codons()) if gen.next() != "A": raise AssertionError("Could not get the codon") try: # Make sure it raises the expected gen.next() except AssertionError: pass else: raise AssertionError("There was a 
second item?") def test(): test_get_codons() test_translate_codons() I'll get picky before anyone corrects me - biologically speaking translations occur from RNA to protein so this skips the transcription step. Translate to stop codon Translations stop at the stop codon. Add the function "translate_codons_to_stop" which takes a codon iterator and yields the translated protein residue until the end of the input or up to the stop codon. Here's some test, which isn't much changed from "test_translate_codons". def test_translate_codons_to_stop():"]), ("GAGAAATGA", ["E", "K"]), # Test the ambiguous codes ("ATH", ["I"]), ("AGRAGY", ["R", "S"]), ("AAAAGRTAGATHC", ["K", "R"]), ): residues = list(translate_codons_to_stop(get_codons(seq))) if residues != expected_residues: raise AssertionError("Residues for %r was %r, expected %r" % (seq, residues, expected_residues)) # This checks that the 'translate_codons_to_stop' function uses an iterator # and returns a generator def yield_codons(): yield "GCT" raise AssertionError("I didn't ask for the second codon") gen = translate_codons_to_stop(yield_codons()) if gen.next() != "A": raise AssertionError("Could not get the codon") try: # Make sure it raises the expected gen.next() except AssertionError: pass else: raise AssertionError("There was a second item?") Translate reading frames (optional) The previous function assumes the first codon is part of the translation. Translation actually starts at a start codon and goes to the end codon. The translation may occur in one of 6 different reading frames. ( See here for background.) To get the start codons from the CodonTable >>> table.start_codons ['TTG', 'CTG', 'ATG'] >>> For this assignment assume that get_codons() returns codons for the proper reading frame. 
In your "codon_functions.py" module add a new function named "translate_reading_frame" which takes the output from get_codons and returns the translated region from the first start codon up to the end codon or the end of the sequence. The start codon is included in the protein sequence but the stop residue is not.

Hint: There are many ways to implement this. One of the simpler ways uses the iter() function, which converts a list or iterator into an iterator. (Generator functions are one type of iterator.) A nice thing about iterators is that when an iterator is used in a for loop it starts from where it left off in the previous for loop. Lists, by comparison, start again from the beginning. Here's an example of the difference:

>>> seq = "Hello Cape Town!"
>>> seq_iter = iter(seq)
>>>
>>> def show_word(seq_iter):
...     for c in seq_iter:
...         if c == " ":
...             break
...         print repr(c),
...     print
...
>>> show_word(seq_iter)
'H' 'e' 'l' 'l' 'o'
>>> show_word(seq_iter)
'C' 'a' 'p' 'e'
>>> show_word(seq_iter)
'T' 'o' 'w' 'n' '!'
>>> show_word(seq_iter)

>>> show_word(seq)
'H' 'e' 'l' 'l' 'o'
>>> show_word(seq)
'H' 'e' 'l' 'l' 'o'
>>> show_word(seq)
'H' 'e' 'l' 'l' 'o'
>>>

Another way is to use a composition of two of the functions from the itertools module. This is probably the most elegant solution but it takes some practice to think about how to solve it this way. If you have the time, try this approach as well and let me know what you think about them.
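If you get stuck on the itertools hint, one possible shape for the composition is dropwhile plus takewhile. Treat this as a sketch only, with a toy codon table passed in by the caller; a real solution would use the Biopython table and the module's other functions:

```python
from itertools import dropwhile, takewhile

# Standard-table start and stop codons, as listed earlier in the text.
START_CODONS = {"TTG", "CTG", "ATG"}
STOP_CODONS = {"TAA", "TAG", "TGA"}

def translate_reading_frame_sketch(codons, table):
    # Drop codons until the first start codon appears...
    in_frame = dropwhile(lambda c: c not in START_CODONS, codons)
    # ...then keep translating until a stop codon or the end of input.
    for codon in takewhile(lambda c: c not in STOP_CODONS, in_frame):
        yield table[codon]
```

The start codon is translated (so the sketch matches the test cases above, which expect the leading "M" or "L"), while the stop codon is consumed but never looked up in the table.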
Here is test code for your new function def test_translate_reading_frame(): for (seq, expected_orf) in ( ("", []), # Has a start and an end ("CTGAAATGA", ["L", "K"]), # start codon is the 3rd residue ("ATHAGYCTGAAATGA", ["L", "K"]), # make sure it does not read multiple reading frames ("ATHAGYCTGAAATGAATHAGYCTGAAATGA", ["L", "K"]), # Test a different start codon; go to the end (no stop codon) ("AAAATGGGGGCT", ["M", "G", "A"]), # Make sure only one start codon is accepted ("AAAATGGGGATGGCT", ["M", "G", "M", "A"]), ): orf = list(translate_reading_frame(get_codons(seq))) if orf != expected_orf: raise AssertionError("ORF for %r was %r, expected %r" % (seq, orf, expected_orf)) # Check that the translate_reading_frame function works with # both lists and iterables orf = translate_reading_frame(["ATH", "AGY", "CTG", "AAA", "TGA"]) x = orf.next() if x != "L": raise AssertionError("First residue is %r" % (x,)) x = orf.next() if x != "K": raise AssertionError("Second residue is %r" % (x,)) try: x = orf.next() except StopIteration: pass else: raise AssertionError("Unexpected third residue %r" % (x,)) Remember, once you've created your "codon_functions.py" module you should test it with the test_codon_functions.py test code. Part 2 This section applies only to those who want to practice doing HTML. If you feel comfortable already with HTML then in your README write "I am comfortable using HTML." Write an HTML page named "bio.html" containing a short description of yourself. It should have the following: - A <title> with a relevant title - Your name as a centered <h1> element - An <h2> field titled "Education" In this section write one paragraph about the school you most recently attended. Make sure the name of the school is hyperlinked to the school's home page. For example, I went to Florida State for my undergraduate studies. - An <h2> field titled "Links" In this section make a itemized list of hyperlinks (use the <ul> and <li> elements) to four bioinformatics resources. 
One must be NCBI. For the NCBI link use the image at ( ) and make the image a hyperlink to NCBI's home page. - An <h2> field titled "Exchange rates" In this section make an HTML table listing the exchange rates between the different countries and the rand. You can use the table at xe.com as a guide. Your table only needs to list US Dollars (USD), UK pounds (GBP), Euros (EUR) and Swedish crowns (SEK). The result should look like Currency Unit | Units per ZAR | ZAR per Unit --------------+----------------+--------------- USD | 0.1599155622 | 6.2533000924 GBP | 0.0865696418 | 11.5513935311 EUR | 0.1271463895 | 7.8649500309 SEK | 1.1786189282 | 0.8484506536 --------------+----------------+---------------but using the HTML <table> elements. The headers on the top must use the <th> element. Use #F0F0F0 as the background color for the USD and EUR rows and #FFFFFF for the GBP and SEK rows. Optionally make the outlines drawn as above, with vertical separators only between the two columns (not at the edges) and horizontal separators only between the row headers and the data and at the end of the data. (This is actually rather difficult; the pure HTML solution only works under Mozilla, or you need to use CSS.) - An <h2> field titled "HTML entities" Using the HTML entities for Latin-1 and symbols, write the following: - The four suites in a deck of cards are hearts (♥), clubs (♣), spades (♠) and diamonds (♦). - 1 £ == 202 ¥ Part 3 - making HTML pages A - make a table You will write a command-line program named "lengths.py" which makes an HTML page describing the sequence length of records from a FASTA file. The input will be from a FASTA-formatted file specified on the command-line. It will have 0 or more FASTA records. The first word of the title is the sequence id. For example, if the title is "gi|12345 Golem/Hobbit hybrid" then the sequence id is "gi|12345". Your program will generate the HTML file named "lengths.html". 
That file will contain a title and an h1 header based on the input FASTA filename. For example, if the FASTA filename is "br_sequences.fasta" then you might use the text "br_sequences.fasta record properties". It will contain an HTML table with information about the record number (starting at 1), record id, and total sequence length. (The point of this is to generate HTML tables, not compute sequence properties.) Here's an example of what it might look like:

Optional: Use the following FASTA file as input to your program.

>"silly<!-- strange FASTA record #1
ACDE
>αβ--> strange FASTA record #2
PPPP

Did it cause strange output? If so you'll need to escape the special characters. Use cgi.escape().

Optional: use the generator-based FASTA record reader from today's lecture instead of the older fasta_reader module.

B - HTML with a generated image

Modify the hydrophobicity plot code to make a new command-line program named "hydroplot.py". This will take a FASTA-formatted filename on the command line and generate two files based on the first record in the FASTA file. One output file is "hydroplot.png", which is the hydrophobicity plot saved to a PNG file. (Use the code I gave in my lecture for making a PNG instead of displaying to the screen.) You should use either the triangle or the Savitzky-Golay filter and may use a hard-coded window size of 19. The other is "hydroplot.html", which contains some basic information about the input record. You may decide for yourself what it should be. This HTML file must embed an image of "hydroplot.png" and that image must be a hyperlink to the PNG itself.

Optional: did you use cgi.escape in the right places?

Optional: Include a table of the start and end positions (in biologist coordinates) of each predicted transmembrane helix. Biologist coordinates start at 1 and include the end coordinate, so the Python range [4,8) corresponds to the biologist coordinates [5,8].
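The Python-range-to-biologist-coordinate rule in that last optional item is easy to flip by accident; a throwaway helper (my naming) makes it explicit:

```python
def to_biologist_coords(start, end):
    # Convert a 0-based half-open Python range [start, end) into
    # 1-based inclusive biologist coordinates: [4, 8) -> (5, 8).
    return (start + 1, end)
```

Only the start moves: adding 1 for the 1-based origin, while the half-open end already equals the inclusive 1-based end.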
For both A and B, include your test files with the assignment and in your README tell me what I need to do to run your programs with your test files.
http://www.dalkescientific.com/writings/NBN/assignment_6.html
This got back-burnered for awhile, but here's the fixed up copy from the last round of feedback. Thanks to everyone that's given input. It's all been helpful and I think this copy reflects everything that was discussed last time. If there's no major changes requested, the next time will be in diff format for Documentation/ inclusion.

-- Ubuntu - - 1394 - -

How to become a kernel driver maintainer
----------------------------------------

This document explains what you must know before becoming the maintainer of a portion of the Linux kernel. Please read SubmittingPatches, SubmittingDrivers and CodingStyle, also in the Documentation/ directory.

With the large amount of hardware available for Linux, it's becoming increasingly common for drivers for new or rare hardware to be maintained outside of the main kernel tree. Usually these drivers end up in the kernel tree once they are stable, but many times users and distribution maintainers are left with collecting these external drivers in order to support the required hardware.

The purpose of this document is to provide information for the authors of these drivers to eventually have their code in the mainline kernel tree.

Why should I submit my driver?
------------------------------

This is often the question a driver maintainer is faced with. Most driver authors really don't see the benefit of having their code in the main kernel. Some even see it as giving up control of their code. This is simply not the case, and the end result is always beneficial to users and developers alike.

The primary benefit is availability. When people want to compile a kernel, they want to have everything there in the kernel tree. No one (not even kernel developers) likes having to search for, download, and build external drivers out-of-tree (outside the stock kernel source). It's often
It's oftendifficult to find the right driver (one known to work correctly), and iseven harder to find one that works on the kernel version they arebuilding.The benefit to users compiling their own kernel is immense. The benefit todistributions is even greater. Linux distributions already have a largeamount of work to provide a kernel that works for most users. If a driverhas to be provided for users that isn't in the primary kernel source, itadds additional work to maintaining (tracking the external driver,patching it into the build system, often times fixing build problems).With a driver in the kernel source, it's as simple as tracking the mainkernel tree.This assumes that the distribution finds your driver worth the time ofdoing all this. If they don't, then the few users needing your driver willprobably never get it (since most users are not capable of compiling theirown modules).Another benefit of having the driver in the kernel tree is to promote thehardware that it supports. Many companies that have written drivers fortheir hardware to run under Linux have not yet taken the leap to placingthe driver in the main kernel. The "Old Way" of providing downloadabledrivers doesn't work as well for Linux, since it's almost impossible toprovide pre-compiled versions for any given system. Having the driver inthe kernel tree means it will always be available. It also means thatusers wishing to purchase hardware that "Just Works" with Linux will havemore options. A well-written and stable driver is a good reason for a userto choose that particular type of hardware.Having drivers in the main kernel tree benefits everyone.What should I do to prepare for code submission?------------------------------------------------First you need to inspect your code and make sure it meets criteria forinclusion. Read Documentation/CodingStyle for help on proper coding format(indentation, comment style, etc). 
It is strongly suggested that your driver builds cleanly when checked by the "sparse" tool. You will probably need to annotate the driver so sparse can tell that it is following the kernel's rules for address space accesses and endianness. Adding these annotations is a simple, but time-consuming, operation that often exposes real portability problems in drivers.

There are also many targets in the kernel build system (KBuild) that will help check your code as well. These targets are listed if you type "make help" in the kernel tree. Some targets of note are checkstack, buildcheck and namespacecheck. You can also add C=1 to the make arguments, in order to use the sparse tool for checking your code.

Once you have properly formatted the code, you also need to check a few other areas. Most drivers include backward compatibility for older kernels (usually ifdef's with LINUX_VERSION_CODE). This backward compatibility needs to be removed. It's considered a waste of code for the driver to be backward compatible within the kernel source tree, since it is going to be compiled with a known version of the kernel.

Proper location in the kernel source needs to be determined. Find drivers similar to yours, and use the same location. For example, USB network drivers go in drivers/usb/net/, and filesystems go in fs/.

The driver should then be prepared for building from the main source tree in this location. A proper Makefile and Kconfig file in the Kbuild format should be provided. Most times it is enough to just add your entries to existing files. Some things to keep in mind for your Kconfig entry:

- Make sure your entry depends on the subsystems and options your driver needs, so that it is only offered if they are built. For example, a wireless network driver may need to "depend on NET && IEEE80211". Also, if your driver is specific to a certain architecture, be sure the Kconfig entry reflects this. DO NOT force your driver to a specific architecture simply because the driver is not written portably.
Make sure you provide useful help text for every entry you add to Kconfig, so that users of your driver will be able to read about what it does, what hardware it supports, and perhaps find a reference to more extensive documentation.

More info on the kbuild system is available in Documentation/kbuild/ in the kernel source tree.

Lastly, you'll need to create an entry in the MAINTAINERS file. It should reference you or the team responsible for the code being submitted (this should be the same person/team submitting the code). Also include, if available, a mailing list that should be used for correspondence.

Code review
-----------

Once your patches are ready, you can submit them to the linux-kernel mailing list. However, since most drivers fall under some subsystem (net, usb, etc), it is often more appropriate to send them to the mailing list for that subsystem (see the MAINTAINERS file for help finding the correct address).

The code review process is there for two reasons. First, it ensures that only good code, that follows current APIs and coding practices, gets into the kernel. The kernel developers know you have good intentions of maintaining your driver, but too often a driver is submitted to the kernel, and some time later becomes unmaintained. Then developers who are not familiar with the code or its purpose are left with keeping it compiling and working. So the code needs to be readable, and easily modifiable.

Secondly, the code review helps you to make your driver better. The people looking at your code have been doing Linux kernel work for years, and are intimately familiar with all the nuances of the code. They can help with locking issues as well as big-endian/little-endian and 64-bit portability. Be prepared to take some heavy criticism. It's very rare that anyone comes out of this process without a scratch. Usually code review takes several tries.
You'll need to follow the suggested changes, and make these to your code, or have clear, acceptable reasons for *not* following the suggestions. Code reviewers are generally receptive to reasoned argument. If you do not follow a reviewer's initial suggestions, you should add descriptive comments to the appropriate parts of the driver, so that future contributors can understand why things are in a possibly unexpected state. Once you've made the changes required, resubmit. Try not to take it personally. The suggestions are meant to help you, your code, and your users (and is often times seen as a rite of passage).

What is expected of me after my driver is accepted?
---------------------------------------------------

The real work of maintainership begins after your code is in the tree. This is where some maintainers fail, and is the reason the kernel developers are so reluctant to allow new drivers into the main tree.

There are two aspects of maintaining your driver in the kernel tree. The obvious first duty is to keep your code synced to the kernel source. This means submitting regular patch updates to the linux-kernel mailing list and to the particular tree maintainer (e.g. Linus or Andrew). Now that your code is included and properly styled and coded (with that shiny new driver smell), it should be fairly easy to keep it that way.

The other side of the coin is keeping changes in the kernel synced to your code. Often times, it is necessary to change a kernel API (driver model, USB stack changes, networking subsystem change, etc). These sorts of changes usually affect a large number of drivers. It is not feasible for these changes to be individually submitted to the driver maintainers. So instead, the changes are made together in the kernel tree. If your driver is affected, you are expected to pick up these changes and merge them with your temporary development copy.
Usually this job is made easier if you use the same source control system that the kernel maintainers use (currently, git), but this is not required. Using git, however, allows you to merge more easily.

There are times where changes to your driver may happen that are not the API type of changes described above. A user of your driver may submit a patch directly to Linus to fix an obvious bug in the code. Sometimes these trivial and obvious patches will be accepted without feedback from the driver maintainer. Don't take this personally. We're all in this together. Just pick up the change and keep in sync with it. If you think the change was incorrect, try to find the mailing list thread or log comments regarding the change to see what was going on. Then email the patch author about the change to start a discussion.

How should I maintain my code after it's in the kernel tree?
------------------------------------------------------------

The suggested, and certainly the easiest, method is to start a git tree cloned from the primary kernel tree. In this way, you are able to automatically track the kernel changes by pulling from Linus' tree. There is more reading available elsewhere on maintaining a kernel git tree. Whatever you decide to use for keeping your kernel tree, just remember that the kernel tree source is the primary code. Your repository should mainly be used for queuing patches, and doing development. Users should not have to regularly go to your source in order to get a stable and usable driver.
We have a public class C network that our company owns. We're moving from T-1 lines to cable; most of my public servers will be moving to a data center, but I still need the public class C network working. Can this be done with a home-type router on a Comcast connection?

Routing an IP network needs to be done through the BGP protocol. So either your ISP lets you announce BGP routes yourself, or the ISP does it for you. I'm not a Comcast expert, but I doubt that that kind of service is part of their "home" package ;)

Just to clear up some of the stupidity in this thread...
1) Yes, you can OWN a /24. It's called a "legacy netblock", in other words, pre-IRR. Google knows. It's owned, because courts have established property rights in legacy netblocks.
2) A /24 can be "internationally routed" - /24s are still accepted into the routing tables of every member of the Internet Default-Free Zone (i.e. Tier 1 ISPs and others who reach all of the Default-Free Zone).
3) Anyone (who speaks BGP to the Internet DFZ) can route your /24, it's just a question of the business relationship you have with a given ISP. And if you speak BGP, you can originate your /24.
The More You Know(tm).

Technically, I don't see that it would be a problem. I suggest talking to your service manager about your needs for routing to your own /24 over whatever infrastructure your new ISP will hook you up to.

There is a way to do this. You have to have a very cooperative ISP (in addition to Comcast) and, at least in my case, a colocated system there. The company (Linkline) added my /24 to their BGP table and pointed it at my colocated system, which knows how to route packets for (part of) that network over a VPN link to my system that sits behind a Verizon FIOS connection. I've been using this setup for over three years now and it works, although I certainly don't recommend it for high-volume or low-latency applications. I use it for incoming email and ssh, primarily, and a couple of other things.
That pretty much is it - no way. You can route whatever you want, but unless the ISP supports it this is like trying to order a Mercedes by calling a pizza service - it will get you nowhere. What you route through YOUR router is totally irrelevant unless the ISP forwards the packets further. Now, let's see:

> We have a public class C network that our company owns

No. You don't own anything - no one does. IP addresses are not owned, they are assigned, like a rental car. That said, a C network is NEVER assigned to someone - RIPE's smallest block that one can get assigned is 4096 IP addresses. You got a C network from your provider for use with them. You kill your provider - the C network goes back to the provider, much as you return a car from a rental agency when you end the rental.
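The block sizes quoted in these answers are easy to check: a /24 ("class C") holds 256 addresses, and the 4096-address assignment mentioned above corresponds to a /20. A quick sketch with Python's standard ipaddress module (the 203.0.x.x prefixes are documentation-style placeholders, not the asker's network):

```python
import ipaddress

# A "class C" network is a /24: 256 addresses.
print(ipaddress.ip_network("203.0.113.0/24").num_addresses)   # 256

# The smallest-assignment size mentioned above, 4096 addresses, is a /20.
print(ipaddress.ip_network("203.0.112.0/20").num_addresses)   # 4096
```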
When .NET 3.5 was released, a lot of people wondered why ASP.NET 3.5 wouldn't show up in IIS. Well, with .NET 4.5 you might be a bit more confused…

So first, .NET 4.5 will not show up in IIS. But now, if you check the Microsoft.NET framework folder, you will see that you will not have a .NET 4.5 folder either.

Well, why? If you have read my previous post, by now you would have understood that there are two ways the .NET framework is upgraded:

1. Side-by-side release - like v1.1 and v2.0. These releases are completely independent of each other.
2. Enhancements - like v3.0 and v3.5.

.NET 3.5 and .NET 3.0 are just additions to .NET 2.0. So the v2.0 folder has all the .NET 2.0 files, and the v3.0 and v3.5 folders have all the files that are required for the enhancements, like WCF and LINQ. If you wanted to use these enhancements in ASP.NET, your web.config files had to explicitly have references to these 3.5 assemblies.

.NET 4 is a side-by-side upgrade, which means it can exist independent of v1.1 and v2.0. .NET 4.5 is an enhancement, but unlike 3.0 or 3.5 it will not be separated out. It is an in-place upgrade, which means once you install .NET 4.5, the v4.0 folder will be updated to contain all the .NET 4.5 files.

Does that mean you wiped out .NET 4.0 from your machine? Well, yes and no.

Yes, because the installation updates the v4.0 folder to .NET 4.5. My machine has both VS 2010 and VS 2012 installed. If I launch the VS 2010 Command Prompt and run the C# compiler, it will say 4.5. There is no compiler for 4.0 after the upgrade.

No, because even though you now have a single updated folder, you can control which version of .NET 4 your application will use. Visual Studio 2012 provides you an option to target either .NET 4 or .NET 4.5. When you switch the Target framework in Visual Studio, two things happen:

1. The config file will reflect the targetFramework. A .NET 4.5 web.config will have an entry similar to

<compilation debug="true" targetFramework="4.5"/>

and if you choose .NET 4 it will be

<compilation debug="true" targetFramework="4.0"/>

Similarly, in a Windows application the version is controlled in its app.config file with

<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5"/>

2. The reference assemblies for the common namespaces like System are updated. The reference assemblies are still separate for .NET 4 and .NET 4.5.

Do you have to upgrade your existing ASP.NET 4 web application to .NET 4.5 because of this? No. If you have an ASP.NET web application built using VS 2010, it will have a compilation tag with targetFramework="4.0" in its web.config already, which means it will continue to work fine even after the .NET 4.5 upgrade. In case you want to use the new .NET 4.5 features like async, that's when you will have to upgrade your web application.

Very helpful post. Thank you.

Thanks. Makes things much clearer.

Really like that you are providing such info on .NET. Being enrolled in .NET freshers training, I really thank that you are providing such information. Thanks a lot.

Thanks for the info! That made me feel a little better about the upgrade process.

"In case you want to use the new .NET 4.5 features like async that's when you will have to upgrade your web application." Isn't it that once 4.5 is installed, everything that used 4.0 before will use 4.5? Do we need to upgrade our web application manually?

Good blog. I have solved my prob. Thanks.
Game geeks will love it
By octav on Apr 09, 2007

import f3.ui.*;
import f3.ui.canvas.*;
import java.lang.System;
import java.lang.Math;
...

operation Lines.moveBallTo(to:LinesCell) {
    var path:LinesCell* = [to];
    while (to.marker <> 0) {
        to = getMinimumAround(to);
        insert to as first into path;
    }
    // OK. We have Path
    activeCell = -1;
    var frm:LinesCell = path[0];
    to = path[sizeof path - 1];
    var l:Integer = sizeof path;
    if (l == 2) {
        to.ballColor = frm.ballColor;
        frm.busy = false;
        to.busy = true;
        changed = true;
        dropBalls();
        return;
    }
    // Clearing FROM
    to.ballColor = frm.ballColor;
    frm.busy = false;
    for (cell in path[indexof . < l-1]) (dur (l-2)*100 linear) {
        cell.marker = -10; // Field in movement
    }
    for (cell in path[indexof . < l]) (dur (l-2)*200 linear) {
        cell.marker = -1; // Stop markup
        if ((cell.x == to.x) and (cell.y == to.y)) {
            // OK. We are at the end
            to.busy = true;
            for (c in fld[n|n.marker == -10]) { c.marker = -1; }
            changed = true;
            dropBalls();
        }
    }
}

Take a look at the screenshot of the game running on my Mac.

Posted by Vojtech on April 09, 2007 at 02:56 AM PDT
Posted by guest on April 09, 2007 at 12:31 PM PDT
Posted by guest on May 09, 2007 at 03:54 PM PDT
Posted by guest on May 15, 2007 at 01:39 AM PDT
Image

The Image component is used to display images. Image composes CBox, so you can use all the style props and add responsive styles as well.

import { CImage } from "@chakra-ui/vue";

<c-box>
  <c-image src="path-to-img.jpg" alt="an example image" />
</c-box>

The size of the image can be adjusted using the size prop (the sizes and sources below are illustrative placeholders).

<c-stack is-inline>
  <c-image size="100px" src="path-to-img.jpg" />
  <c-image size="150px" src="path-to-img.jpg" />
  <c-image size="200px" src="path-to-img.jpg" />
</c-stack>

In a Vue CLI project you might find that using relative assets doesn't load the image paths correctly in Chakra's image components. This is because vue-loader converts relative paths like @/assets/path-to-img.jpg into require functions automatically for you at build time. Unfortunately, this is not the case when it comes to custom components from an installed component library like Chakra UI Vue. You can circumvent this issue by using require('@/assets/path-to-img.jpg'). Below is the correct way to require relative assets for the CImage and CAvatar components.

<!-- ❌ Incorrect -->
<c-image src="@/assets/path-to-img.jpg" />

<!-- ✅ Correct -->
<c-image :src="require('@/assets/path-to-img.jpg')" />

You can provide a fallback image for when there is an error loading the src of the image. You can also opt out of this behavior by passing the ignoreFallback prop.

<c-image src="path-to-img.jpg" fallback-
With the InkCanvas element in WPF you can create stunning inking experiences for Tablet PC users (mouse users can play along, too!). Several very cool scenarios can be enabled just by writing markup - without any additional code behind.

Here is a first example: an ink input field (takes input from either stylus or mouse) that has a dynamic reflection effect. While the user is inking, the reflection gets updated dynamically, in real time. Implemented entirely in markup.

The key feature (besides InkCanvas) here is the VisualBrush. The bottom part of the scene is a Rectangle that gets painted with a VisualBrush that is bound to the ink input control. As a result it gets updated dynamically as the user draws onto the control. Below is the relevant piece of markup. The full XAML file is attached to this post. You can load it into XamlPad to play with it, or just open it in IE and ink away.

<Rectangle.Fill>
  <VisualBrush Visual="{Binding ElementName=inkBorder}">
    <VisualBrush.RelativeTransform>
      <TransformGroup>
        <ScaleTransform ScaleX="1" ScaleY="-1" />
        <TranslateTransform Y="1" />
      </TransformGroup>
    </VisualBrush.RelativeTransform>
  </VisualBrush>
</Rectangle.Fill>

More information about creating reflections using VisualBrush can be found in this MSDN How-To topic. Additional How-To topics on ink in WPF are also available on MSDN.

Next post in this series: Fun with Ink & Xaml - Part 2: Zoom and Scroll
Then i discovered that your exaplecode runs in IE and it seams that it is running from the web server how is that? I'm a little bit confused here could you please clarify this to me... Best Regards Johan Hi Johan, The Ink Reflection sample is a WPF-specific sample that takes advantage of WPF's VisualBrush feature. Silverlight's current feature set is a subset of WPF's features and VisualBrush is one of the features that are not yet available in the Silverlight platform. As you pointed out, you can create reflections from images and videos by using the ImageBrush and VideoBrush. There is currently not a direct way to reflect the content of a Canvas in Silverlight. You would have to duplicate and manage the content - as well as any updates to the content - yourself, in your code. For this ink scenario, you'd have to create a second InkPresenter in the bottom half - and then when user draws a stroke, you'd have update the reflection from your code. To answer your other question: When you click on the attachment to this post (InkReflections.xaml) the .xaml file gets pushed down to your browser. If your machine has WPF installed, then the sample will run directly in the browser as a so called 'XBAP' (XAML Browser Application). You can also create compiled XBAP's in Visual Studio that have C# or VB.NET code behind the XAML markup and deploy that via the browser. If your app doesn't have any code-behind (like my little sample here) then you don't even need to compile it. You can just deploy the .xaml and compilation will happen on the file. More information about XBAP's: Please let me know if you have more questions. Thanks, Stefan Wick Hey Stefan, I am brand new to WPM and Silverlight... question for you... if Silverlight is a subset of WPF... what tools do you use to view the WPF elements such as the Visual Brush? I tried copying the xaml code into Expression Blend 2 (beta) but failed. I get an error: The name "VisualBrush" does not exist in the namespace... 
Is there a tool like Expression Blend for viewing WPF xaml? Hi Tad, the shipping version of Microsoft Expression Blend can be used to view/edit/create WPF XAML: It should also work in the Expression Blend 2 Preview release. Just make sure you create a WPF project (as oppossed to a Silverlight project). You can also use XamlPad (a very simple, lightweight tool) to view and edit WPF XAML. It ships as part of the Windows SDK. Hope this helps. I have received several question from folks about my earlier post on Ink Reflections in WPF . People One of the things missing from Silverlight 3 is WPF’s Visualbrush . Visualbrush basically allows you If you would like to receive an email when updates are made to this post, please register here RSS Trademarks | Privacy Statement
I've seen the following code in a couple of Python projects, in __main__.py:

import sys

if __package__ is None and not hasattr(sys, 'frozen'):
    # direct call of __main__.py
    import os.path
    path = os.path.realpath(os.path.abspath(__file__))
    sys.path.insert(0, os.path.dirname(os.path.dirname(path)))

What is the purpose of this boilerplate?

The test for __package__ lets the code run when package/__main__.py has been run with a command like python __main__.py or python package/ (naming the file directly, or naming the package folder's path), not the more normal way of running the main module of a package, python -m package. The other check (for sys.frozen) tests if the package has been packed up with something like py2exe into a single file, rather than being in a normal file system.

What the code does is put the parent folder of the package into sys.path. That is, if __main__.py is located at /some/path/to/package/__main__.py, the code will put /some/path/to in sys.path. Each call to dirname strips one item off the right side of the path ("/some/path/to/package/__main__.py" => "/some/path/to/package" => "/some/path/to").
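The dirname arithmetic described above is easy to verify directly, using the same illustrative path:

```python
import os.path

path = "/some/path/to/package/__main__.py"

print(os.path.dirname(path))                   # /some/path/to/package
print(os.path.dirname(os.path.dirname(path)))  # /some/path/to
```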
This is my first Arduino project. It is quite simple to make - it will take about half an hour. You can make it too by following the steps given below, and have fun. It gave me great joy when I finally made it; now it's your turn. Just follow the steps under the description.

Step 1: Requirements

Arduino Uno board
Bluetooth module HC-05
4WD shield (Adafruit)
Jumper wires
Two 9V batteries (1. Arduino board, 2. external power supply to the shield)
Battery cap with 9V pin jack

Step 2: Connection

Mount the shield on the Arduino board and connect the Bluetooth module to the Arduino, or directly to the shield by soldering onto it. Connect RX to TX, TX to RX, ground to ground, VCC to VCC as shown in the figure, and also connect the motors to M1 (left) and M2 (right).

Step 3: Coding

//*** 1- Documentation
// This program is used to control a robot car using an app that
// communicates with Arduino through a Bluetooth module.

#include <AFMotor.h>

// creates two objects to control terminals 3 and 4 of the motor shield
AF_DCMotor motor1(3);
AF_DCMotor motor2(4);

Step 4: Uploading

Remove the Bluetooth module before uploading the code.

Step 5: Power Supply

Connect a 9V battery to the Arduino board. You can also connect a 9V battery to the shield as an external power supply to increase the power to your motors - remove the power jumper shown in the figure.

Step 6: Install App

Install the Bluetooth RC Controller app through the Play Store. Open the app; it asks for permission to turn on Bluetooth - allow it. The first-time password is 1234 or 0000. Then click on the gear shown in the app and choose "connect to car". The red blinking button in the app turns green; now you are connected to your car. Play with it and enjoy your project. Thank you for reading this Instructable.

12 Discussions

1 year ago
Hi sir. Can u help me to edit the same bluetooth control car with arduino by adding
1. ultrasonic sensor to sense the obstacle
2. 2 leds green when forward
3. 2 leds red when reverse
4. Siren by using buzzer
5. 1 led stop yellow

2 years ago
Hi, I am getting a lot of code errors, could you please help?
C:\Users\user\AppData\Local\Temp\ccqB0Z6e.ltrans0.ltrans.o: In function `__static_initialization_and_destruction_0':
C:\Users\user\Documents\BTcar\CarCode/CarCode.ino:6: undefined reference to `AF_DCMotor::AF_DCMotor(unsigned char, unsigned char)'
C:\Users\user\Documents\BTcar\CarCode/CarCode.ino:7: undefined reference to `AF_DCMotor::AF_DCMotor(unsigned char, unsigned char)'
C:\Users\user\AppData\Local\Temp\ccqB0Z6e.ltrans0.ltrans.o: In function `Stop':
C:\Users\user\Documents\BTcar\CarCode/CarCode.ino:73: undefined reference to `AF_DCMotor::setSpeed(unsigned char)'
C:\Users\user\Documents\BTcar\CarCode/CarCode.ino:74: undefined reference to `AF_DCMotor::run(unsigned char)'
C:\Users\user\Documents\BTcar\CarCode/CarCode.ino:75: undefined reference to `AF_DCMotor::setSpeed(unsigned char)'
C:\Users\user\Documents\BTcar\CarCode/CarCode.ino:76: undefined reference to `AF_DCMotor::run(unsigned char)'

Reply 2 years ago
And remove the Bluetooth module before uploading the code.

Reply 2 years ago
Maybe you should first install the AFMotor library. It is easy to get through the Adafruit official site.

2 years ago
Thanks for posting. Will save me time trying to figure it out.

Reply 2 years ago
My pleasure, my dear friend.

2 years ago
Will it work or not? Because my car is not working.

Reply 2 years ago
Definitely it will work if you follow the instructions properly.

2 years ago
Can you tell the use of the shield and the connections in a bit more detail? Also, can this be made without the shield?

Reply 2 years ago
If you are using the shield I mentioned, then you just mount it pin to pin as marked on the Arduino and the shield. It is quite easy to use.

2 years ago
Thanks for appreciating me
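The sketch in Step 3 shows only the motor setup; the setup()/loop() bodies are not included above. Typically the loop reads one character per command from the Bluetooth serial link and drives the motors accordingly. Here is that dispatch step sketched as plain C++ so it can be exercised off the board - the command letters are an assumption, since the Instructable doesn't list the app's protocol:

```cpp
#include <cassert>   // only needed for the quick self-check
#include <string>

// Hypothetical command set - the letters are an assumption about the
// Bluetooth RC Controller protocol, not taken from the Instructable.
std::string dispatch(char cmd) {
    switch (cmd) {
        case 'F': return "forward";   // run motor1 + motor2 FORWARD
        case 'B': return "backward";  // run motor1 + motor2 BACKWARD
        case 'L': return "left";      // run only the right-hand motor
        case 'R': return "right";     // run only the left-hand motor
        default:  return "stop";      // RELEASE both motors
    }
}
```

On the real board, each branch would call setSpeed()/run() on the two AF_DCMotor objects instead of returning a string.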
Hello everyone, I was playing around with Java Swing since I wanted to learn more about it. I have 2 .java files, as follows:

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

public class Easy extends JFrame {

    JTextArea text = new JTextArea();
    JPanel panel = new JPanel(new GridLayout(2, 2));
    JButton button1 = new JButton("Start");

    public Easy() {
        panel.add(text);
        panel.add(button1);
        add(panel, BorderLayout.CENTER);
        button1.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent ae) {
                // add code to call the other class and make the JTextArea act as a console
            }
        });
    }

    public static void main(String arg[]) {
        Easy frame = new Easy();
        frame.setSize(300, 100);
        frame.setVisible(true);
    }
}

Both codes work fine individually. Now, when I run the file Easy and click on the "Start" button, I want the class AddNumber to be implemented. I mean to say that, instead of AddNumber running on the console, is there any way I could make AddNumber run in the JTextArea I have created in the first class upon clicking the "Start" button? I thought maybe by action listener? But I'm not sure. Is there any other way to make my JTextArea act as a console for the other .java file? Thanks
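One common way to make a JTextArea act as a console is to redirect System.out into it through a custom OutputStream, and then simply call the other class's code from the button's ActionListener. This is a sketch, not the only approach; the class name below is made up:

```java
import java.io.OutputStream;
import java.io.PrintStream;
import javax.swing.JTextArea;

// An OutputStream that appends every byte it receives to a JTextArea.
class TextAreaOutputStream extends OutputStream {
    private final JTextArea target;

    TextAreaOutputStream(JTextArea target) {
        this.target = target;
    }

    @Override
    public void write(int b) {
        // Appends directly for simplicity; strictly, Swing components should
        // only be touched on the EDT (e.g. via SwingUtilities.invokeLater).
        target.append(String.valueOf((char) b));
    }
}
```

Inside actionPerformed you could then do something like System.setOut(new PrintStream(new TextAreaOutputStream(text), true)); followed by a call into the other class, so anything it prints with System.out lands in the text area.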
qtextistream.3qt man page

QTextIStream — Convenience class for input streams

Synopsis

All the functions in this class are reentrant when Qt is built with thread support.

#include <qtextstream.h>

Inherits QTextStream.

Public Members

QTextIStream ( const QString * s )
QTextIStream ( QByteArray ba )
QTextIStream ( FILE * f )

Description

The QTextIStream class is a convenience class for input streams.

Member Function Documentation

QTextIStream::QTextIStream ( const QString * s )
Constructs a stream to read from the string s.

QTextIStream::QTextIStream ( QByteArray ba )
Constructs a stream to read from the array ba.

QTextIStream::QTextIStream ( FILE * f )
Constructs a stream to read from the file f.

(qtextistream.3qt, Qt version 3.3.8)

Referenced By

The man page QTextIStream.3qt(3) is an alias of qtextistream.3qt(3).
The CPAN is one of the most prolific collections of open source software on the Internet. At the time of noding this, there are nearly 3,000 authors and over 10,000 distributions [1]. CPAN is often cited as a reason why Perl is the language of choice, as for every programming task, someone has nearly always already written a module that will help. There's also the benefit of sharing code with others without the need to host or publicise it. Releasing to CPAN will allow others to benefit, and users can provide feedback. Such is the way of the Perl community.

In 1993, Jarkko Hietaniemi launched CPAN as an effort to encourage Perl code reuse. At the time, CPAN was merely an FTP server with anonymous read access and named accounts for update. The way to get your module known about was to publicise it yourself, and to register the namespace with "The Module List" - a great long HTML page listing the modules, abstracts, authors and download links.

In 1997, search.cpan.org was launched. This has all but replaced the module list as the way of publicising and discovering modules [2]. There are indexes by author, distribution and module, and search.cpan.org is often the place to go to browse documentation before downloading.

PAUSE is the Perl Authors Upload SErver. This provides a front end to CPAN for authors. Having received a module as an upload, this is checked for basic integrity, the author is sent an email, and a request is queued to the indexer. The various CPAN mirrors around the world will detect this upload, and receive the new distribution version.

Distributions follow a standard installation procedure, similar to the GNU four-stage installation process (configure, build, test, install):

perl Makefile.PL
make
make test
make install

or in some cases:

perl Build.PL
./Build
./Build test
./Build install

The last stage is usually run as root, or the admin account that has write access to the Perl directories. It is possible to specify installation to your own private directory instead.
The first stage may report warnings for other modules not installed. These dependencies need to be resolved before your distribution can be built, tested or installed. The purpose of the third stage is to exercise the functionality of the module, as a check that all is hunky dory. It is possible to skip this step and plough ahead with the install.

The process of resolving dependencies manually can get very tedious. To save this hassle, the module CPAN.pm automates the process, having asked the user for the necessary config information. CPAN.pm ships with perl, and updates are available via CPAN. It is invoked in one of the following ways:

cpan
perl -MCPAN -e 'shell()'

CPANPLUS is a rewrite of CPAN.pm, to make more use of pure perl code and less use of external shell commands. CPANPLUS also provides an automated test suite, used by the cpan-testers group to test all fresh uploads. It is invoked in one of the following ways:

cpanp
perl -MCPANPLUS -e 'shell()'

For problems with the install, such as failing tests, it's worth checking the cpan-testers reports, linked from the CPAN page. The documentation should include details of how to contact the author. Even if it doesn't, there's authorname@cpan.org. Also, there's a Request Tracker site for the whole of CPAN - it's worth raising a ticket there, as everyone will be able to see it. RT is also a good place to post patches.

As the entry bar to authoring is very low, it may be the case that 90% of CPAN is complete rubbish. But I cite Ted Sturgeon: it's the other 10% that counts! Besides the cpan-testers with their automated smoke test, there's also a published CPAN kwalitee index for each module. There's even a special place for joke modules: the ACME:: namespace. Modules generally come with their own unit tests. This is not always the case, and The Phalanx Project aims to address this lack for a number of key modules. When it comes to documentation, AnnoCPAN is a place to add notes to an existing module.
Despite the incentive to reuse code, there's a great proliferation of wheel reinventing, which some may regard as unhealthy. But TIMTOWTDI is, after all, part of Perl's philosophy. Finding the right module can often be non-trivial, and probably accounts for much traffic on Perlmonks. Flexibility is the name of the game in Perl, and people go to great lengths to argue their corner. However, too much flexibility is a problem from the marketing point of view. Last November (2005), I attended an evening on web frameworks, which had talks on Catalyst, Django and Ruby on Rails. Perl's Catalyst lost that debate, as the other two presentations were very slick, and would have appealed to everybody, not just the geeks. Perl's lego approach with CPAN modules is causing a credibility gap, but there are those inside the Perl community, myself included, that want to do something about this.

Present trends have seen the perl 5 core become very stable. This is largely thanks to the work put in by Nicholas Clark release managing in the past few years. This trend is continuing, with what would be new core features appearing as CPAN modules instead. This is especially true of perl 6, which is being prototyped by Pugs. This is available now, whereas Perl 6 is still a few years away from being released. Dead, no; it's pining for the fjords ;).

When it comes to porting perl 5 modules to perl 6, there will ultimately need to be a separate place (CP6AN) or namespace for the new modules, as they won't play with perl 5. However, perl 6 does promise to be able to interoperate fully with perl 5 modules.
https://everything2.com/title/Comprehensive+Perl+Archive+Network
Hello, I'm using NetBeans 6.9 on GNU/Linux (Ubuntu 10.10). When I run this:

Code:
#include <iostream>

using namespace std;

void showname();
void showage();

int main(){
    showname();
    showage();
    return 0;
}

void showname(){
    int name;
    cout << "What is your name?" << endl;
    cin >> name;
    cout << "Your name is " << name << endl;
}

void showage(){
    int age;
    cout << "What age are you?" << endl;
    cin >> age;
    cout << "You are " << age << " years old." << endl;
}

...all I get is:

Your name is 0
What age are you?
You are 0 years old.

when I enter my name. Please help. (It should be obvious what output I'm expecting to get.)
http://cboard.cprogramming.com/cplusplus-programming/138507-void-returning-functions.html
It’s been a week since we announced the first public EAP build for AppCode, and now we can say its adoption has beaten our highest hopes. We’ve received a lot of constructive feedback and that’s what we like most – working hard if we have interesting stuff to do. So we’ve got a new build for you to try. The most noticeable changes are:

- Mercurial SCM support
- New intention actions (including De Morgan's law)
- Profiling agent (i.e. AppCode can now be asked to profile itself when slow and send a CPU/memory snapshot to JetBrains)
- Many bugfixes

As usual, the latest build is available on the EAP page. Develop with pleasure!

-JetBrains AppCode team

Hi, great news, am a long time customer of JetBrains IDEs (IDEA 7, 8, 9, PhpStorm). I have a few questions:
1) Does AppCode support (refactoring, debugging) Objective-C++ mixed with C++ (templates, namespaces, V-inheritance – the lot)?
2) Are you supporting LLVM and GCC, or just one?
3) Will you be able to run the IDE on Windows/Linux?
4) Any plans for Android support?

Hi.
1. We don’t have any plans for supporting C++ at this moment.
2. We’re not aware of gcc or llvm yet; AppCode uses Xcode for building.
3. No, AppCode will be available only for Mac OS.
4. IntelliJ IDEA already has Android support, please look at

It would be great if AppCode supported C and C++ as well.

AppCode will support C because Objective-C is a subset of C.

ObjC is a superset of C

Sure

Thanks for the update. Looking for integration with a data modeler.

Pity about the lacking C++ support. I’m currently looking into porting some Java code to C++ as my first C++ project, and it seems there’s no single superior C++ IDE for Mac. So I’m sure there’s a spot for JetBrains there.

Actually, we are working on basic C++ support at the moment. Of course, in scope of macOS/iOS applications only.

How does De Morgan’s law fit into this upgrade?

@Bill please check out Settings | Intention Actions. That provides a brief insight into what a given lightbulb offers to transform.

@Maxim Got it – thanks!
https://blog.jetbrains.com/objc/2011/04/a-week-later-new-stuff-to-try/
However, if you take a look at the code for that method, you will notice there is no code to create any Task, and the return statement returns the result of a sum of two decimal variables. The code just returns a decimal, but the async modifier will wrap and unwrap the decimal variable under the hood. We don't need to think about either tasks or continuations; the code is pretty similar to synchronous code. The method represents a very common case in which we need to wait for I/O-bound operations to compute a final result and return it. The method performs the following steps:

- Call the GetSubTotalAsync method with an asynchronous execution to compute the subtotal.
- Call the GetOtherChargesAsync method with an asynchronous execution to compute the other charges.
- Compute the sum of the subtotal plus the other charges.
- Calculate the sales tax amount based on the subtotal plus other charges computed in the aforementioned step.
- Return the sum of the subtotal plus other charges plus the sales tax amount.

The code that retrieves the subtotal uses the await keyword to make the call to the GetSubTotalAsync method with an asynchronous execution.

var subTotal = await GetSubTotalAsync();

The next line won't be executed until the GetSubTotalAsync method finishes its execution and the subTotal variable has the value returned from the method, or an exception occurs. Thus, the code behaves just like synchronous code. I will dive deeper into exception handling later.

You can think of the GetSubTotalAsync method as the typical I/O-bound operation that you don't want to block the UI until it finishes. In this case, I used the new DownloadStringTaskAsync method of the WebClient class. .NET Framework 4.5 added to the WebClient class, and to other .NET classes, many methods with the Async suffix that can be called with the await keyword.
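The listing itself is not part of this excerpt; based on the five steps above, the method plausibly looks like the following sketch (reconstructed, not the article's verbatim code; it assumes a `using System.Threading.Tasks;` directive):

```csharp
// Sketch of GetCalculatedTotalAsync as the five steps describe it.
public static async Task<decimal> GetCalculatedTotalAsync(decimal salesTax)
{
    var subTotal = await GetSubTotalAsync();         // 1. await the subtotal
    var otherCharges = await GetOtherChargesAsync(); // 2. await other charges
    var baseAmount = subTotal + otherCharges;        // 3. sum them
    var salesTaxAmount = baseAmount * salesTax;      // 4. tax on that sum
    return baseAmount + salesTaxAmount;              // 5. grand total
}
```

Notice the return statement really is just a sum of decimal values; the compiler wraps it in the `Task<decimal>` the signature declares.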
The code just downloads the contents of Dr. Dobb's home page with an asynchronous execution and converts the first character to a decimal value. Forgive me for the simplicity; I want to keep the example as simple as possible while providing some real-life I/O-bound operations. The GetOtherChargesAsync method does the same thing as the GetSubTotalAsync method, but it takes the second character of the string instead of the first one. Again, the idea was to have two real I/O-bound operations.

The following listing shows the code for MainWindow.xaml. The simple UI displays a button and a listbox on a WPF Window (see Figure 1).

<Window x:Class="WpfAsyncAwaitSample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
        <Button Name="UpdateTotal" Content="Add total" HorizontalAlignment="Left" Height="66" Margin="10,10,0,0" VerticalAlignment="Top" Width="252" Click="UpdateTotalClick"/>
        <ListBox Name="TotalList" HorizontalAlignment="Left" Height="220" Margin="10,91,0,0" VerticalAlignment="Top" Width="498" />
    </Grid>
</Window>

Figure 1. The UI design with a button and a listbox.

The button calls an asynchronous method and doesn't block the UI. The following listing shows the code for MainWindow.xaml.cs, the interaction logic for MainWindow. It's very simple: the UpdateTotalClick event handler calls MathService.GetCalculatedTotalAsync with an asynchronous execution and then adds the calculated total to the TotalList listbox. Notice that the event handler method declaration includes the async modifier because it uses the await keyword to execute MathService.GetCalculatedTotalAsync. To keep things simple, I didn't create an MVVM (short for Model-View-ViewModel) WPF solution.
namespace WpfAsyncAwaitSample
{
    using System.Windows;

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            this.InitializeComponent();
        }

        private async void UpdateTotalClick(object sender, RoutedEventArgs e)
        {
            //// Sales Tax = 8.5% = 0.0850
            const decimal salesTax = 0.0850M;
            var total = await MathService.GetCalculatedTotalAsync(salesTax);
            TotalList.Items.Add(total);
        }
    }
}
http://www.drdobbs.com/tools/using-asynchronous-methods-in-aspnet-45/240008768?pgno=3
this was used in 1.4. libguile-1.6 .c source uses it as #ifndef, however both snarf.h and guile-snarf have in the meantime been changed to not define this. 1.6 snarfing differs from 1.4 primarily in that there are now two kinds: init and doc, as opposed to merely init. in defining the names of the specialization-trigger macros, we now remember usage of this old friend (i.e., also as guard). in the end, it is better to migrate self-guard naming internally, and include that info in the data stream only (snarfing programs generate program fragments opaque to all re-snarfing). so, this is sort of a good-bye from public view for SCM_MAGIC_SNARFER the cpp macro. [band plays.] anyone know what the official namespaces are that are allowed for programs that munge the cpp macro space? [macro looks up, wizened and ready for the inevitable coup. it is prepared to don a blade, but it has been heard that the really crafty old dictators (the ones who stayed alive) mingled and mewed and were the only ones allowed to slit their own throats.] thi
http://lists.gnu.org/archive/html/guile-devel/2002-03/msg00136.html
Last night I presented VB Whidbey at the .NET Developer Association General Meeting here in Redmond. I was on after Dave Winer and Robert Scoble spoke on blogs and RSS, etc. It is no coincidence that my blog debuts the very next day. Thanks to Dave and Robert for inspiring me.

My hour was all demos. I showed several IDE enhancements, including:
- Faster startup
- Not having to be prompted for project location when you start
- Snap lines to help you line up controls
- Editing control Names and Text on the form all at once rather than having to individually edit them in the Properties Window
- Error correction options. Let's say you type Dim iVar as integr. You'll get a blue squiggly and a smart tag. Click the smart tag and we offer suggestions as to what you may have meant. Select Dim iVar as integer from the list and we'll change your code to that. Or you create a property and then make it ReadOnly. You can't have a Set in a ReadOnly property. We'll prompt you and offer to either remove the ReadOnly or remove the Set.
- If you rename form1.vb to GroovyForm.vb we'll change the class name from form1 to GroovyForm. (Actually, this was in my notes to show, but I don't remember if I did it.)

I showed debugging enhancements, including:
- Edit and Continue, including how we grey out code that you can't edit if you then want to continue
- You have a class VBGuy and you Dim oBoy as VBGuy. Set a breakpoint and when you hover over oBoy you will see a dropdown that lists the properties of oBoy and their values. You don't have to dive into the debug windows to see that. You can also change the values right there in your code. This is nice because you don't have to stop looking at your code.
- I showed snippets, collections of precanned code we will ship in the box. I was not able to show how you create your own because that doesn't work in the build I was using.
- And I showed the My namespace and code like My.Computer.Name or My.Forms.Form1.Text.
The My namespace is really just wrappers around existing Framework classes. So you can do things like print with less code:

My.Computer.Printers.DefaultPrinter.WriteLine("Wicked cool")
My.Computer.Printers.Print(True)

And here is code to copy a file:

My.Computer.FileIO.CopyFile("C:\BigFile.zip", "C:\Backup", False, True)

This is going to be a very popular feature! You can use the My shortcuts and of course you can use the .NET Framework classes. Your call.

Tomorrow I will talk about the Data features I showed.

Robert Green, a PM on the Visual Basic team, has recently started up a blog.
http://blogs.msdn.com/rgreen_msft/archive/2004/02/10/71037.aspx
Divide N segments into two non-empty groups such that given condition is satisfied

Given N segments (or ranges) represented by two non-negative integers L and R. Divide these segments into two non-empty groups such that there are no two segments from different groups that share a common point. If it is possible to do so, assign each segment a number from the set {1, 2}; otherwise print Not Possible.

Examples:

Input: arr[][] = {{5, 5}, {2, 3}, {3, 4}}
Output: 2 1 1
Since the 2nd and 3rd segments have one point in common, i.e. 3, they should be contained in the same group.

Input: arr[][] = {{3, 5}, {2, 3}, {1, 4}}
Output: Not Possible
All segments should be contained in the same group, since every pair has a common point. As the other group would then be empty, the answer is Not Possible.

Prerequisites: Merge Overlapping Intervals

Approach: Using the concept of merging overlapping intervals, we can assign the same group to all segments that overlap, alternating the group number between maximal merged blocks. To merge overlapping segments, sort all the segments by their left endpoints (breaking ties by the original indexes) while remembering each segment's original index. Then, iterate over the segments and check whether the current segment overlaps with the previous merged segment. If it does, merge them into one segment; if it doesn't, start a new one. At last, check if one of the groups is empty. If one of them is empty, the answer is Not Possible; otherwise print all the assigned values of the segments.
Below is the implementation of the above approach:

Python3

# Python3 program to divide N segments
# into two non-empty groups such that
# the given condition is satisfied

# Structure to hold the elements of a segment
class segment:
    def __init__(self):
        self.l = None    # left index
        self.r = None    # right index
        self.idx = None  # index of the segment

# Function to print the answer if it exists,
# using the concept of merging overlapping segments
def printAnswer(v, n):
    seg = [segment() for i in range(n)]

    # Assigning values from the list v
    for i in range(n):
        seg[i].l = v[i][0]
        seg[i].r = v[i][1]
        seg[i].idx = i

    # Sort the segments by their left endpoints,
    # breaking ties by the original index
    seg.sort(key = lambda x: (x.l, x.idx))

    # Resultant array
    res = [0] * n

    # Let's denote the first group with 0 and
    # the second group with 1

    # Current merged segment
    prev = 0

    # Assigning the first group to the first segment
    res[seg[prev].idx] = 0

    for i in range(1, n):

        # If the current segment overlaps with
        # the previous merged segment, merge it
        if seg[i].l <= seg[prev].r:

            # Assigning the same group value
            res[seg[i].idx] = res[seg[prev].idx]
            seg[prev].r = max(seg[prev].r, seg[i].r)
        else:

            # Change the group number and start
            # a new merged segment
            res[seg[i].idx] = res[seg[prev].idx] ^ 1
            prev += 1
            seg[prev] = seg[i]

    # Check whether one of the groups is empty
    one, two = 0, 0
    for i in range(n):
        if not res[i]:
            one += 1
        else:
            two += 1

    # If both groups are non-empty
    if one and two:
        for i in range(n):
            print(res[i] + 1, end = " ")
        print()
    else:
        print("Not Possible")

# Driver Code
if __name__ == "__main__":
    v = [[1, 2], [3, 4], [5, 6]]
    n = len(v)
    printAnswer(v, n)

# This code is contributed by Rituraj Jain

Output:

1 2 1

Time Complexity: O(n * log n), where n is the number of segments
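As a sanity check, here is a compact, self-contained re-implementation of the grouping (the helper name split_segments is mine, not from the article) run against the three examples above:

```python
def split_segments(segments):
    """Return 1/2 group labels per segment, or None when every segment
    falls into a single overlapping block (i.e. "Not Possible")."""
    order = sorted(range(len(segments)), key=lambda i: segments[i])
    res = [0] * len(segments)
    group = 0
    reach = segments[order[0]][1]   # right end of the current merged block
    res[order[0]] = group
    for i in order[1:]:
        l, r = segments[i]
        if l <= reach:              # overlaps the current block: same group
            reach = max(reach, r)
        else:                       # disjoint: new block, flip the group
            group ^= 1
            reach = r
        res[i] = group
    if len(set(res)) < 2:           # one group would be empty
        return None
    return [g + 1 for g in res]

assert split_segments([(5, 5), (2, 3), (3, 4)]) == [2, 1, 1]
assert split_segments([(3, 5), (2, 3), (1, 4)]) is None
assert split_segments([(1, 2), (3, 4), (5, 6)]) == [1, 2, 1]
```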
https://www.geeksforgeeks.org/divide-n-segments-into-two-non-empty-groups-such-that-given-condition-is-satisfied/
Serving Static Files

It’s often useful to load static files like images and videos when creating components and stories. Storybook provides a few ways to do that.

1. Via Imports

You can import any media assets by simply importing (or requiring) them, as shown below.

import React from 'react';
import { storiesOf } from '@storybook/react';

import imageFile from './static/image.png';

const image = {
  src: imageFile,
  alt: 'my image',
};

storiesOf('<img />', module)
  .add('with an image', () => (
    <img src={image.src} alt={image.alt} />
  ));

This is enabled with our default config. But if you are using a custom Webpack config, you need to add the file-loader into your custom Webpack config.

2. Via a Directory

You can also configure a directory (or a list of directories) for searching static content when you are starting Storybook. You can do that with the -s flag. See the following npm script on how to use it:

{
  "scripts": {
    "start-storybook": "start-storybook -s ./public -p 9001"
  }
}

Here ./public is our static directory. Now you can use static files in the public directory in your components or stories like this.

import React from 'react';
import { storiesOf } from '@storybook/react';

const imageAlt = 'my image';

// assume image.png is located in the "public" directory.
storiesOf('<img />', module)
  .add('with an image', () => (
    <img src="/image.png" alt={imageAlt} />
  ));

You can also pass a list of directories separated by commas without spaces instead of a single directory.

{
  "scripts": {
    "start-storybook": "start-storybook -s ./public,./static -p 9001"
  }
}

3. Via a CDN

Upload your files to an online CDN and just reference them. In this example we’re using a placeholder image service.

import React from 'react';
import { storiesOf } from '@storybook/react';

storiesOf('<img />', module)
  .add('with an image', () => (
    <img src="" alt="My CDN placeholder" />
  ));

Absolute versus relative paths

Sometimes, you may want to deploy your Storybook into a subpath. In this case, you need to have all your images and media files with relative paths. Otherwise, Storybook cannot locate those files.

If you load static content via importing, this is automatic and you do not have to do anything.

If you are using a static directory, then you need to use relative paths to load images.
https://storybook.js.org/docs/configurations/serving-static-files/
Authorization

Once a project is set up, you will still have to follow a few steps before you can actually use Pyrogram to make API calls. This section provides all the information you need in order to authorize yourself as a user or bot.

User Authorization

In order to use the API, Telegram requires that users be authorized via their phone numbers. Pyrogram automatically manages this process; all you need to do is create an instance of the Client class by passing to it a session_name of your choice (e.g.: "my_account") and call the run() method:

from pyrogram import Client

app = Client("my_account")
app.run()

This starts an interactive shell asking you to input your phone number (including your country code) and the phone code you will receive on your devices that are already authorized or via SMS:

Enter phone number: +39**********
Is "+39**********" correct? (y/n): y
Enter phone code: 32768
Logged in successfully as Dan

After successfully authorizing yourself, a new file called my_account.session will be created, allowing Pyrogram to execute API calls with your identity. This file will be loaded again when you restart your app, and as long as you keep the session alive, Pyrogram won’t ask you again to enter your phone number.

Important: Your *.session file is personal and must be kept secret.

Note: The code above does nothing except ask for credentials and keep the client online; hit CTRL+C now to stop your application and keep reading.

Bot Authorization

Bots are a special kind of user that are authorized via their tokens (instead of phone numbers), which are created by the Bot Father. Bot tokens replace the users’ phone numbers only — you still need to configure a Telegram API key with Pyrogram, even when using bots.

The authorization process is automatically managed. All you need to do is choose a session_name (can be anything, usually your bot username) and pass your bot token using the bot_token parameter. The session file will be named after the session name, which will be my_bot.session for the example below.

from pyrogram import Client

app = Client(
    "my_bot",
    bot_token="123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11"
)
app.run()
https://docs.pyrogram.org/start/auth
C#: Functional warts

Introduction

Some features of C# make functional programming difficult. Let’s call them warts. Warts make it difficult to reason about code and thus to write correct code.

Wart 1: null

Tony Hoare invented the null pointer and later apologised for doing so. Null breaks parametricity. In the presence of null, an implementation of the following type cannot assume that x is always a value of type T, because the domain of T in C# includes all values of type T plus the null pointer.

string Mystery<T, TJsonShow>(T x) where TJsonShow : IJsonShow<T>

Null breaks more than parametricity. It makes it difficult to reason in a principled way about monomorphic functions too. An implementation of the type

string Mystery(string x)

cannot assume that x is a string, because the set of inhabitants of the string type in C# includes null.

Wart 2: reflection

Reflection breaks parametricity. Reflection enables recovering type information at runtime. With that capability, a type like Mystery below cannot provide the guarantees that parametric types should give.

string Mystery<T, TEq>(T x, T y, TEq eq) where TEq : IEqualityComparer<T>

Without reflection, Mystery is guaranteed to only ever call the IEqualityComparer methods on eq. With reflection, Mystery can do just about anything, like the following crazy implementation does.

string Mystery<T, TEq>(T x, T y, TEq eq)
    where TEq : IEqualityComparer<T>
{
    if (x is int) return "x is an int";
    if (x is string) return ((string) x).ToUpper();
    return eq.Equals(x, y).ToString();
}

Trivially, default(T) also breaks parametricity by enabling construction of values outside the presence of a new() constraint. It does this from type information gained at runtime; hence, reflection.

Wart 3: object methods

Object methods break parametricity. In C# every type inherits from object. This means methods on object are callable on values of parametric type parameters, even when that makes no sense. For example, every value except null has a ToString method, which means this function has an implementation that uses its argument.

string Mystery<T>(T x)

Without object methods (and any of the other warts in this post), there is only one kind of implementation of Mystery: one that returns a constant string without touching x.

string Mystery<T>(T x)
{
    return "asdf";
}

With object methods, implementations like the following are possible.

string Mystery<T>(T x)
{
    return x.ToString();
}

This second implementation is problematic because it is defined on any T, including types for which ToString does not make sense. It should really be typed something like the following, where the interface IToString provides the ToString method.

string Mystery<T>(T x) where T : IToString

Or, to use the previous post’s type class encoding:

string Mystery<T, TToString>(T x, TToString str) where TToString : IToString<T>

Wart 4: mutation

Mutation interferes with equational reasoning. Variables in mathematics are immutable. In the function f(x) = x * x, x is called a variable, not because the equation can mutate its value, but because the equation can be evaluated for different values of x.

Evaluate f(3): f(3) = 3 * 3 => f(3) = 9.
Evaluate f(a + 2): f(a + 2) = (a + 2) * (a + 2) => f(a + 2) = a^2 + 4a + 4.

Once the value of x is known it can be substituted wherever x appears. If x were mutable and the implementation of f mutated it, evaluation by substitution would not work; instead the equation would have to be run or stepped through to work out what it was doing.

Programs that avoid mutation are easier to reason about. A trivial example is equality. Assuming immutability, if x == y now, then x == y always. With mutation, if x == y now, there is no guarantee that x == y still holds n clock cycles later, because one or both of them may have been mutated in the meantime. An equality relation between mutable values is somewhat nonsensical.

Another example is validation. Let bool IsValid<T>(T x) be a function that determines whether a T is valid according to some specification. If x is immutable, a true return value from IsValid(x) guarantees that x is valid forever: x can be used anywhere a valid T is required. If x is mutable there is no such guarantee, and if x is mutable and more than one thread has access to it, all bets are off.

Wart 5: side effects

Similar to mutation, side effects mess with equational reasoning. Without mutation and side effects, string Mystery(int x) will always return the same value for any given value of x.

Mystery(3) == Mystery(3) // guaranteed

With side effects this guarantee vanishes. For example, neither of the following implementations fulfils the above guarantee, because their return values depend on external state which may change between calls.

string Mystery1(int x)
{
    return DateTime.Now.AddSeconds(x).ToString();
}

string Mystery2(int x)
{
    return string.Join(
        ",", File.ReadLines("C:\\Windows\\win.ini").Take(x));
}

Five warts are enough for one day.
http://jedahu.github.io/blog/2014-05-16-csharp-functional-warts.html
> Uche Ogbuji writes: > > I should note that I quite hope that leading-underscore-means-private > > is indeed not normative, and never will be so. The leading underscore > > is the most readable way to escape symbol names that would clash with > > an applicable naming convention. > > The use of a single "_" at the front of a name in Python has always > (at least since about Python 1.2) been used to filter out names that > should not be used outside a module. It is more than an idiom. The > concept is enforced in statements of the form "from foo import *". It > isn't absolutely enforced. You can still say "from foo import _thing". Everything I hear is that it's a weak restriction. Maybe that doesn't matter. > Arguing that other languages use "_" to eliminate name conflicts is a > red-herring when talking about Python. Python namespaces make this > unnecessary. It is annoying to some Python users to see an idiom from > another language carried over to Python. In this case the carryover > directly conflicts with a Python language feature. All true, but we all use other languages to buttress our arguments when it suits us, and protest against bleeding in idioms from other languages when it doesn't. All's fair in flame and dialectic, no? > >). > > Why not use append a "_" to the name to distinguish it from a keyword. > The name "class_" would be as readable and it wouldn't conflict > with the longstanding Python rule for leading "_" characters. That was what I said in my last parenthesis. It's a bit less natural to me, but that's just viscera and there's no need for you to pay attention to it. > >. > > The special meaning of _* is defined here: > > > The rule is also discussed in section 6.1 of the Tutorial. Still only a weak restriction, but in the interests of moving on to productive work, I think it's time for me to concede this particular argument. So what do we do about the underscores in the DOM binding? --
https://mail.python.org/pipermail/xml-sig/2000-June/002685.html
CC-MAIN-2014-10
refinedweb
346
74.39
Editorial Why Red Hat will go bust because of Ubuntu Download the whole article as PDF Short URL: Write a full post in response to this! I don’t like writing controversial editorials. Controversy is an effective means to get a lot of accesses: most people seem to enjoy reading controversial articles, maybe because they like torturing themselves. (And yes, I used to read a lot of Maureen O’Gara’s articles myself!). Besides, controversy is a double edged sword: there’s very little chance that I would ever go back to those sites! And yet here I am.. - 816 Why Red Hat will go bust because of Ubuntu Submitted by Robert Pogson on Tue, 2006-08-01 20:17.Vote! In business, timing is everything. When RedHat started Fedora, there was no immediate revenue stream possible from the desktop. Look at Caldera! They had a very smooth distro and changed course. Thank goodness, RedHat had more sense. They did, in a way, abandon the desktop, but they have used Fedora as a testbed to keep their iron hot, and they have done good work with K12LTSP to make one of the smoothest GNU/Linux distros specialized for schools. That has planted seeds. Many students will have experienced GNU/Linux in schools thanks to RedHat. Many businesses will have experienced the reliability of GNU/Linux through RedHat on the server. RedHat has the ability to grow onto the desktop by several avenues: influences in education and widespread acceptance of GNU/Linux in business ( see ). RedHat is in it for the long haul and I would not count them out so quickly. They are huge in America and in a few other parts of the planet, such as India. Ubuntu definitely has mindshare among the passionate and soon-to-be-passionate GNU/Linux lovers, and it is a more global movement. Mark Shuttleworth has definitely earned a big return in passion for a relatively small investment and the time is right to catch the wave. I am still convinced that RedHat can also catch the wave. 
Migrations/new installations are going to Linux at such a rate that there will be room for several profitable/widespread distros. --- A problem is an opportunity. Why Red Hat will go bust because of Ubuntu Submitted by Frihet on Wed, 2006-08-02 13:00.Vote! When Red Hat threw away its desktop community I was a support-paying customer. I dropped Red Hat the day the news was released and moved my machines to Mandrake, where I cheerfully paid for support until Ubuntu 6.06 hit the street. Now I send money to Ubuntu. At that time, I wrote a short blurb in which I said Red Hat had forgotten how Microsoft knocked off IBM. Of course there were many reasons, but one was key. That key reason was that Microsoft conquered IBM from the desktop, not from the server room or the board room. Little guys everywhere were using Windows for Workgroups to get stuff done without the red tape and resistance they got from the centralized IT department. They took risks and simply ignored corporate policy. It was evolution in action. Darwin (had he been a hacker) would have been proud. I thought SUSE was going to win the battle with Red Hat, and so I wrote, "If the desktop OS comes in a green box, the server OS will come in a green box." I was wrong about SUSE; it looks like the box will be brown. It didn't happen to Solaris Submitted by PhillipFayers on Tue, 2006-08-01 21:02.Vote! Many people, me included, thought that the same thing would happen with Solaris when Sun's desktop market declined due to competition from Linux (and the lousy price/performance of SPARC). It didn't and in fact Solaris is experiencing a bit of a resurgance in use at the moment. I realise the situation isn't quite the same. Firstly Sun had a high end hardware market which kept Solaris alive in the lean times. Secondly there isn't really a lot of room in the market for profitable Linux distributions. Thirdly its easier to switch between Linux distributions than it was for people to switch from Solaris to Linux. 
But Redhat has a lot of mindshare in the Enterprise markets and it'll take quite a while for Ubuntu to challenge that. --- Phillip Fayers Red Hat not so easy to dislodge Submitted by danwalsh on Tue, 2006-08-01 22:15.Vote! Before i go on let me say 'All hail Ubuntu', i am a believer. However, Red hat is not going anywhere but up for some time. Sitting in the corporate world the Red Hat edge is the infrastructure that supports corporate IT. Big IT will not go anywhere near unsopported products (this is corporate law). Sure, lots of GNU/Linux exsists across big IT shops but when it goes into production it MUST be supported. At this point it all comes down to who bought the CIO lunch last week......its likely he or she had a Red Hat business card ;-) Regards, DW Won't happen overnight, certainly ... Submitted by Terry Hancock on Wed, 2006-08-02 08:18.Vote! But Ubuntu is making significant progress in the support area. The "long term support" option is positively unheard of in the computer industry. Six years might as well be "lifetime" in an industry where you pitch the hardware in the bin every two years (not that I do that -- I'm the one who cleans out the bins and keeps them running until the battery-backed RAM and realtime clocks go belly up or the board gets cracked. Or at least I used to, I'm getting spoiled to merely 5-yr-old equipment lately). Seriously, though, Ubuntu is looking to become stiff competition and Red Hat is going to have to make changes if it wants to stay in the game, let alone on top. One thing of course that Red Hat does have is more than six years of existence -- that six year policy would be more convincing coming from a company that had been around longer than Canonical has. As conservative as corporate buyers are, that's going to matter quite a bit. long term support Submitted by dsas on Wed, 2006-08-02 17:13.Vote! Uhm, there is 3 years support for desktop packages and 5 years support for server packages. 
Solaris 7 (considered a server OS) is still supported by Sun and was released in 1998; you can't buy it now, though. Solaris 8 is still supported and purchasable and was released in 2000. Both beat Ubuntu 6.06 server's 5 years of support. Windows 98 has only just lost security support 8 years after being released, beating Ubuntu 6.06 desktop's 3 years of support.

Not just Corporate IT uses Red Hat servers
Submitted by Anonymous visitor on Thu, 2006-08-10 17:07.
I work in corporate IT and have brought that mindset home. I run CentOS4 (don't want to pay the EL4 license) for my home servers and Ubuntu desktop. In my opinion, it's the best of both worlds. I've used Red Hat since RH7.2. The EL4 server is rock solid. However, the RH or Fedora desktop has always been overly bloated. This is where Ubuntu wins. It's lean and to the point. Just what I need, and it worked from the start without any additions. Down to my TV card. Now that Dapper Drake has LTS (Long Term Support) and OpenOffice 2.0 with great Office software compatibility, we finally have a real desktop. To balance the discussion: Red Hat servers and Ubuntu desktops. It is just my opinion and preference.

Ubuntu definitely is interesting
Submitted by Terry Hancock on Tue, 2006-08-01 22:25.
I don't know if I would be so bold as to predict the demise of Red Hat, but Ubuntu is certainly going to give them a run for the money. Ubuntu has started out much more in tune with the philosophy of free software, and that gives them a strong lead with their core market. People will be passionate about Ubuntu in ways that people usually aren't about Red Hat. When I started working with GNU/Linux, I did it from a "principles first" approach. I read Richard Stallman's "GNU Manifesto" and his announcement of the GNU project, and I became very interested in the theory of free software development and what it was capable of. I'm a skeptical reader, though, and I don't really believe things like this until I see them work.
On the other hand, I don't discount them, either. So my immediate inclination was to go full steam ahead with the idea and see just how strong an idea it was. So, instead of going with Red Hat, which was a kind of amphibious, proprietary/free creature at the time, I went with Debian, because it was a full community-driven "for users, by users" distribution, and the whole mechanism was under free licenses. I figured I didn't want to get behind the idea until I had proven to myself that it wasn't just fairy gold. The going was incredibly hard at first -- but I expected that. I knew, first of all, that ANY such total switch in operating system was going to hurt. I knew that because I'd already switched from TRS-80 to IBM to Mac to BSD Unix to Solaris to MS-DOS to Windows 3.1 (variously at home, work, and school), and had even tried Win 95 on a new job. Also, I knew Debian was an all-volunteer project, and I've been a member of enough of those (Boy Scouts, National Space Society, Planetary Society, even Junior Achievement, a couple of PTAs, and church groups) to know what that's like. What I was surprised by, though, was not the little foibles when compared with full-blown corporate enterprises, but rather the fact that you could compare it with full-blown corporate enterprises! Clearly the free software method worked. So when I first heard of Ubuntu trying to create a more end-user-desktop–and–corporate–friendly distribution built on Debian, I had a strong feeling that I was going to like it from the start. Ubuntu has taken that wisdom and added the right amount of corporate backing in a very savvy way. Shuttleworth has avoided spoiling the process (a tricky thing to do!), and he's got momentum. Most commercial GNU/Linux companies seem to have a lot of conflict between "community" and "business" goals, but Ubuntu seems to know how to cut with the grain so that the "community" and "business" goals are the same. 
Of course, for a space junkie like me, it can't possibly hurt that the founder is one of the world's first space tourists! In any case, Ubuntu is the first commercial GNU/Linux distribution to seriously tempt me away from mainline Debian.

Not so sure Ubuntu will be successful
Submitted by tf on Wed, 2006-08-02 14:00.
Having used Ubuntu Breezy for a few months and then used Dapper, I felt a lot of disappointment. To my mind, the latest Ubuntu is a lot too buggy to be used for something serious. Furthermore, it doesn't provide you with configuration tools like YaST or Webmin: if you don't like the default configuration, then you need to change every config file by hand. And finally, they have no clear focus: they seemed to be a desktop-oriented distro and all of a sudden they created a server version and released it with Long Term Support although it was the first version. The same is true for their live-CD installer, and there have been many reports on the Net that it is buggy. So, all in all, I don't think they're doing themselves good advertising with their latest distribution.

This review shows my frustration with Ubuntu
Submitted by Anonymous visitor on Wed, 2006-08-09 14:06.
I installed Ubuntu on my dad's PC and it had tons of issues that are still wasting my time because he keeps on calling for help!

RE: This review shows my frustration with Ubuntu
Submitted by Siraaj (not verified) on Fri, 2007-04-13 04:35.
My mom (49) and little sister (15) have been using Ubuntu 6.06 on a 500MHz PIII since a little after its release, and called me for help once. Wanna know what the problem was? The cable modem was unplugged! :-) Hear, hear!

Ubuntu is not the end-all OS
Submitted by Anonymous visitor on Mon, 2006-08-14 22:20.
I think that, for me, the biggest difference is the .deb/.rpm issue.
After abandoning SuSE and their confusing sysconfig system, I obtained a copy of Ubuntu 5 from a regular customer at the coffee-bean slingery which signed my checks at the time. I will say that the install was cake. Pleasantly, I found that it did not take a lot of time. If you've used SuSE you know you can grow a beard before it installs. I liked the desktop and thought it was clean. Then I tried to tweak it to my personal needs. Nothing was there! No video, properly working Firefox, missing Perl mods, missing dev libraries! It seemed that I couldn't find/understand things that I used to be able to handle just by screwing with /etc. But, it's got repositories, so all of the files are there, right? Sure, but when I wanted to play with them, I couldn't find them. When I downloaded .deb packages, I couldn't figure out how to get them to work. When I tried to convert .rpm to .deb, all hell broke loose. Enough was enough. With Fedora 5, I did have to do some tweaking to set up the box, but when yum does an install, also from those wonderful repos, I know where the executables are and where the config files are. They were quite easy to find. When I compile from source, I don't get errors. I can script, surf, listen to music, watch movies, create documents, run a test server, the whole bit. It's nice to feel like I have control over the box, which is why I switched away from Windows in the first place. Ubuntu may be great for someone that is used to working with Debian, but for an rpm guy, it doesn't make a lot of sense. The end-all would be a deb/rpm hybrid system, where the configuration is not some wacky, new idea.

For me, the key is availability and usability
Submitted by jonathanbrickman0000 on Wed, 2006-08-02 14:17.
For me, the primary reason I use and prefer Ubuntu is availability and usability of all of the bits which I and my users need.
Some commentators on Ubuntu complain about the standard repositories, but that does not matter to me: for me, the relevant datum is the fact that I have yet to find a major Linux software package for which someone has not made a good, reliable, and functional Ubuntu .deb. Under most Linuxes I have tried, I have to consider using outdated RPMs with prayer; under Ubuntu, I look far more for the best repository for the package I want than I do for the proper .deb. This is the best possible way, I think: the Ubuntu community includes a very careful subcommunity of package repositories, which is different and, I suggest, flatly better than any other Linux. And for less-than-major packages, I have yet to find a current .deb or binary installer I cannot load using Ubuntu. As long as I have reliable software to load, I'm happy, and the .deb system has been tremendously better in my experience than the RPM system. I used the RPM system on and off for years, in more than one distro including a few versions from Red Hat, and had to regularly get in and fix its own damages. When the RPM system first came out, it looked like it should work wonderfully... but bugs crept in which they never worked out, and today, if I load an RPM into a Linux, I have to wonder how much manual munging of the result I am going to have to do, almost exactly like the Slackware .tgz I started out with. I have yet to have this problem with the .deb system.

Ubuntu Technical Objections
Submitted by Rick Stockton on Wed, 2006-08-02 22:31.
For power users such as myself, the depth of Ubuntu's software repositories is fantastic. But I have a BIG problem with the default 'anyone can sudo' management scheme. And for me, installing EVERYTHING into a single partition is a show-stopper: my Dapper install does crash from time to time, and I get really nervous about whether the sole partition will come up OK when it does that.
With others, such as my main computer, I can easily 'tar' my important $HOME stuff to backup file(s) in /opt, a partition which doesn't get filled with temporary files as I do my work. /usr and /opt are otherwise read-only; I like it that way.

Why Red Hat will go bust because of Ubuntu
Submitted by toaster on Thu, 2006-08-03 06:37.
Like Frihet, I too was a support-paying customer of Red Hat until they dropped their desktop community. I moved to SuSE, and stayed. The bottom line is that people will use what works, what has good support, and is easy to configure. Ubuntu, Xandros and a few select other distros have those core criteria. Of them all, though, Ubuntu alone has captured the imagination of the alternate-OS crowd (and let's face it, many of them (us) are system administrators), and has a product that is not only cool but very, very functional. I spend a lot of time evaluating server and desktop software for clients; I usually test each and every distro as it's released. Two stand out with neon lights blazing: Ubuntu, and SimplyMEPIS 6, based on the Ubuntu core. Red Hat died and was buried several years ago.

Why it won't happen
Submitted by rmjb on Thu, 2006-08-03 17:36.
At least not yet. When choosing a server flavour of Linux we are limited. Limited to what our hardware vendor will support on the server and limited to what our application vendor will support. SAP is not certified on Ubuntu. EMC most likely will not certify on Ubuntu, at least not yet. Red Hat and SuSE have spent the last few years building expensive relationships with these hardware and software vendors; it will take some time for Ubuntu to do that. - rmjb

But they are making that investment
Submitted by Terry Hancock on Sat, 2006-08-05 06:05.
There's no doubt in my mind that you are right about the time it's going to take, but Ubuntu has been making the necessary effort in a way that Debian proper is (perhaps constitutionally) unable to do.
For example, IBM recently included Ubuntu in the supported distributions for its database products. True, that's one win out of dozens that are needed, but it's a start. I think Canonical has the will and ability to follow through, so I think it's just a matter of time. Of course, that doesn't necessarily mean the downfall of Red Hat. There's room for more than one top-flight commercial Linux distribution in the world. Ubuntu is the only one I would personally take seriously, though, because of my long-time affinity for and experience with Debian.

I don't think your editorial
Submitted by Scott Carpenter on Sun, 2006-08-06 14:37.
I don't think your editorial was overly inflammatory, Terry. I'm just getting going with using GNU/Linux, largely because I believe in the principle of free software. (It's not for convenience -- it's a lot of work to make the move.) I have Fedora Core 5 on an old P2 in my basement and it installed pretty well. I haven't done very much with it yet to form an attachment or preference. As I read about how Ubuntu is striving more to really be a free distribution, it seems like an easy choice to make to at least give Ubuntu a try. From the little I know about the subject, I get the sense that things are shifting this way. I'd think that "mind share" (if we must use that term!) is especially important in the free/open source community. Even if large companies like the one I work for are currently using Red Hat, things have a way of changing from the ground up. That's how GNU/Linux got into the enterprise in the first place, isn't it? ---- Scott C.

Fedora Core is 100% Free and OpenSource unlike Ubuntu
Submitted by Anonymous visitor on Mon, 2006-08-21 19:52.
"As I read about how Ubuntu is striving more to really be a free distribution, it seems like an easy choice to make to at least give Ubuntu a try."
If you're concerned about freedom then you should be aware that Ubuntu packages non-free, proprietary software to drive video cards and other hardware devices (in the "Restricted" component). Fedora Core, on the other hand, explicitly will not do this. Ubuntu's supporting of this sort of hardware makes it less likely that there will be sufficient market demand from Linux users placed on hardware manufacturers. In all the posts above I have read vague, generalised assertions that Ubuntu is a "better desktop" with no specific data to back this up, or even a description of what the poster feels a "better desktop" should be. Having used Red Hat for a long time (5.2 on to FC6Test2 now) and also tried out MacOSX, Win3.1, Win95, Win2K, WinXP and Debian, SuSE, Mandrake and now Ubuntu, I am honestly unimpressed with any of them and think that the longer you spend with any one particular system the more comfortable you'll be with tweaking it to your own needs. Fedora Core comes with a vast array of repositories to obtain any piece of Free Software that I wish (and if I were foolish enough to search for non-Free software there are repositories of that run by people that feel there's a need for it). I'd honestly like to know from some of the posters above:
1) What specifically is lacking in Fedora Core for the "desktop" experience?
2) Why do you feel comfortable with a distro that includes non-Free packages?
Oisin Feeley

Non-free != evil
Submitted by tomviolin on Fri, 2006-10-06 09:11.
I do definitely prefer to use free and open-source software, when possible. I also happen to believe that the core OS (i.e. kernel) should always be free and open-source. However, I also recognize, having been involved with software development for many years, that "non-free" has its place in the world. If a hardware manufacturer will only support Linux -- at least for now -- with a closed-source driver, so be it.
At least they are supporting Linux, in a world that is still so dominated by Windows that Linux could be easily ignored. Same thing for application software. If a company wants to sell a closed-source product to protect trade secrets, and the product does the job for me and is well-supported, I'm not going to rule it out on philosophical grounds. So, for me, a distribution that offers both free and non-free packages is a plus; it allows me the freedom to choose the best of both worlds.

Why is it not listed on the "Free GNU/Linux Distributions page"
Submitted by Anonymous visitor on Mon, 2006-11-06 20:21.
lists free distributions, and Fedora isn't on there, so I never considered it to be. Any idea why?

I meant *Tony*, not Terry!
Submitted by Scott Carpenter on Sun, 2006-08-06 14:41.
Sorry. (Darn all those Terry comments on the same page :-)

No worries at all :-D
Submitted by Tony Mobily on Sun, 2006-08-06 21:03.
Hello, No problem at all Scott! Thanks for your comment :-D Merc.

Why Red Hat will go bust because of Ubuntu
Submitted by lokeey on Tue, 2006-08-08 11:31.
So true... so true! I busted my Linux cherry with Red Hat (5), and the last version that I used, which in my opinion was the best release of Fedora, does not even compare or stack up to Ubuntu. I have already migrated my Fedora 4 server to Dapper and also switched my laptop to Ubuntu/Kubuntu as well. Fedora 5 did not "WOW" me at all! It would be interesting to see the number of Fedora systems compared to Ubuntu - and not downloads, but actual users currently running either of these two.

You are just a biased Ubuntu fanatic
Submitted by Anonymous visitor on Wed, 2006-08-09 14:03.
Fedora is NOT bad. It is just a matter of preference. In my opinion Fedora is better because a lot of programs just come in RPM format and are made for RHEL, and they don't work properly on Ubuntu. Fedora gives me more applications to choose from, and RHEL RPMs work on it as they should.
I am biased
Submitted by Tony Mobily on Wed, 2006-08-09 14:18.
Hello, I am biased because I like Ubuntu 10 times more than Red Hat. However, I am not a fanatic. I would love to have a list of programs that "don't work properly on Ubuntu". Merc.

Where are we going?
Submitted by Anonymous visitor on Tue, 2006-08-08 21:02.
When I knew nothing, I heard about this thing called Linux which was a free version of Unix. Wow! I want some of that, I thought. I managed with limited skills to install it from somewhere at the time, way back before there were easy installs or a decent GUI. So, it worked and I could practice my very basic Unix skills on it. Then I wanted more, and found I could get a better package by buying one. Buying it? I thought it was free! OK, so I understood someone had had to do the work to package it and that was worth money to me. I used several distributions and still have SuSE 8 installed somewhere. Then along came Ubuntu; quite simply, it worked and it's free - someone finally got it right! But - here my cynical side comes out. Why? What's the long-term plan? The guys who are giving us this great stuff are being paid, and someone somewhere happily sends me a bunch of CDs whenever I ask - this all costs money. I'd love to understand how this is a viable business, and if it is, will there come a time when I have to start paying? Still, Ubuntu having a larger global market share than Windows would have a kind of poetic justice....

free
Submitted by Terry Hancock on Wed, 2006-08-09 04:05.
Are you serious? Red Hat, SUSE, and Ubuntu all have exactly the same business model: they sell support contracts. You can download free versions of all of them from various places online, or make as many copies as you like of the disks. But if you want somebody to solve your problems for you and help you out, then you pay for a support contract.
As individual, hobbyist users, we're unlikely to blow money on that, but it all translates to money for a company: whether it's money paid to an inside expert or contracted to an outside one, it all comes off the balance sheet. Ubuntu, like Red Hat and SUSE (and MEPIS and Mandriva and many, many others), basically makes money by being that line item. In a way, of course, you're a "free rider", but in reality you're more like "advertising", as well as "quality control". Individual users test a distribution and they help it gain mindshare -- both contribute to the product's success in the business environment, where success is easier to monetize. You can of course try to get a similar win from individual desktop users, which tends to be the point of "package" distributions like MEPIS and some early versions of Red Hat. There are also "value added" distributions like Linspire and Xandros, which contain non-free packages which effectively make license fees possible, but it's not all that clear that their method is that successful.

2 words
Submitted by Anonymous visitor on Wed, 2006-08-09 02:23.
Won't happen - in the corporation of which I am a part, Red Hat rules. We have hundreds of Red Hat servers, and growing every day. Ubuntu might sometime rule desktops, but not corporate servers. Dapper Drake - gimme a break, what idiot thinks up these names?

"Red Hat rules" -- but why??
Submitted by Anonymous visitor on Fri, 2006-08-11 14:21.
I adopted Red Hat Enterprise Server for our business because the consensus out there seemed to be that it was "the best." (Well, to be fair, maybe a toss-up between RHEL and Debian was the feedback I saw.) So, I agree that the PERCEPTION is that RHEL "rules". But my experience so far has been that a perception is all it is. We've had plenty of driver issues with staples of enterprise computing such as 3ware, software compatibility issues with standard enterprise components such as mod_perl, and just a bunch of headaches in general.
I have no substantial experience with other Linux systems, but I feel like they HAVE to be better. Maybe that's not true, but it's my perception (which is what drives future purchases). We are entrenched with RHEL now, and that may prevent us from switching in the near future, but if I had to do it again, I'd try Debian, SuSE, or Ubuntu. IMHO, the only reason Red Hat still rules is momentum. If they don't start doing a better job, they will lose that momentum, and a superior product will prevail. Sincerely, James

Deliberate controversy
Submitted by Anonymous visitor on Wed, 2006-08-09 05:45.
I don't like writing controversial editorials. I'm sure you don't, but you know it's a necessary evil as a magazine editor or a journalist, to generate the necessary hits for ad dollars. I started with Linux using Fedora Core 3, and now I'm using a mix of Gentoo, OpenSUSE, Debian and Ubuntu. Throughout my short experience with Linux (over 18 months now), I've learnt to ignore the majority of opinions, editorials, podcasts, etc. I find they're just a BIG waste of time, as I feel they don't push Linux or open source in an active manner. What I'm saying is, such things aren't as effective as writing an application that fills a niche for open source in general. (Sometimes, they cause more damage than promote a positive outlook.) If something is "Windows only", I sure would like to bring an application that would fill this niche into the open-source world. I don't doubt a LOT of folks would appreciate what I did... It's better than getting your "10 minutes of fame" on the web.

Yeah right
Submitted by Tony Mobily on Wed, 2006-08-09 14:12.
Hi,

>I'm sure you don't, but you know it's a necessary evil as a magazine editor or a journalist, to generate the necessary hits for ad dollars.

Yeah, sure. You can see the HUNDREDS of controversial articles we publish all the time to generate necessary hits for ad dollars - not.
>What I'm saying is, such things aren't as effective as writing an application that fills a niche for open source in general.

Opinions are necessary for the world to _work_ and for people to think. Not everybody wants to think, unfortunately.

>I don't doubt a LOT of folks would appreciate what I did... It's better than getting your "10 minutes of fame" on the web.

I have run Free Software Magazine with Dave Guard for 2 years. The last thing I need is "10 minutes of fame". Please check your facts before posting. Merc.

2 words
Submitted by Anonymous visitor on Thu, 2006-08-10 23:27.
Won't happen. Red Hat has its place in the enterprise; I know, I work with over 400 of them every day. You guys are mad because Red Hat needed to make money to stay in business. Are you going to feel the same way if Ubuntu starts charging? You are looking for a handout... moochers.

Why Red Hat will go bust because of Ubuntu
Submitted by merdmann on Mon, 2006-08-14 09:38.
I also think that the timing was right for Red Hat. They made their move when many sysops were trained on and used to Red Hat. Many servers ran Red Hat at that time. Few controllers will allow their IT staff to spend expensive working hours on re-setting up the servers with another Linux distro. I think Red Hat will be in widespread use for the next ten years (at least). Beside that... "...First of all: Red Hat was my first love..." Mine was called Julia.

Oracle Corp is supporting Red Hat Linux
Submitted by Anonymous visitor on Fri, 2006-08-18 21:57.
Oracle Corp is now supporting Red Hat Linux. Oracle Application Server 10G is developed on Red Hat Linux. A pretty compelling reason would be needed to switch from Red Hat Linux. CY

Re: Oracle Corp and rumors on Ubuntu
Submitted by Anonymous visitor on Tue, 2006-10-24 00:12.
I Googled into this discussion because of rumors about Oracle plans to compete against RHEL with Ubuntu... Whether this is true or not - your post was on 08/18 and it is just 10/22 now ...
just a measure of how fast things change in this industry. I started with Fedora on the second partition of my new Dell laptop... and went through sheer hell with the drivers... and ultimately had to uninstall/clean up the machine... A friend suggested running Linux as a virtual server inside the Windows environment. Ubuntu, RHEL, Debian - whoever - the key for large-scale adoption, IMHO, is driver certification for specific desktop hardware configurations - either right at the vendor's site (a la Dell), or as a pre-certification process before install. My 2c.

Except they didn't -- fast indeed
Submitted by Terry Hancock on Mon, 2006-10-30 18:02.
In fact, just a few days after your post, Larry Ellison of Oracle announced his company's intent to fork a version of Red Hat and compete against RHEL with that. Sounds crazy, but that's apparently what the man said.

SuSE
Submitted by Anonymous visitor on Thu, 2006-08-31 11:57.
I was a Red Hat fan myself, but with problems mounting using Fedora on my desktop, I thought of moving to another Linux distribution. There was a huge appreciation for Ubuntu. So I did go to Ubuntu. It was crazy. But then I switched to SuSE 10.1 when a friend of mine recommended it. Everything started working straight out of the box. SuSE is definitely a superb desktop Linux version. Kabeer Ahmed.

Ubuntu still has a way to go on its "long march"
Submitted by Anonymous visitor on Mon, 2006-09-04 00:06.
I wasn't as impressed with Ubuntu 6.06 or any of its earlier incarnations as I wanted to be. I had a difficult time getting it to work on my new laptop. It had a pretty GUI, even though it is still way too buggy to be taken seriously. While I consider Ubuntu to be still too buggy, it does show a lot of promise. I look forward to its next incarnation. Its progress has been phenomenal since it first started not too long ago. On the other hand, SUSE 10.1 really impressed me.
It installed on the laptop with no showstoppers or major problems and actually ran much smoother than the 10.0 version, so much so that it should almost be version 11.0. The only black mark was that idiotic Beagle application (I uninstalled it almost immediately) - what a useless piece of software! It's as bad as KWallet. I've always used kfind or just plain old find if I needed to look for something on my disk. Why reinvent the wheel? Beagle only allows people to be mentally lazy and slipshod in how they organize their files.

Why Red Hat . . .
Submitted by jons on Tue, 2006-09-05 09:29.
Controversial editorial? No, I don't think so! Ubuntu (Dapper) was not my first Linux experience, nor my best, but it was a turning point. I'd wanted to jump into Linux for quite some time, but I couldn't afford a large expense, timewise. Since then, I've kept my eyes on Ubuntu (& Xubuntu) & I gotta' tell ya, these people are SERIOUS! Their 'live CD' just flat works on everything that I've tried it on, so far. tf: Try the 6.06, try it, you'll like it, I think. Big improvement in 6.06. It's really easy on my old hardware, which may be something that 'corporate' hasn't thought of yet; what a way to 'make' money by NOT spending any! Come on people, IF Ubuntu, Red Hat, or any other flavor will let business keep using their old hardware another year or two, hey, just another six months, what a savings THAT would be! Yes, there are concerns about support, but you PAY for it, just like every other business expense. So the bottom line is, does it cost MORE, or does it cost LESS? Anonymous coward: Yes, a "new" laptop, but did you try it on an older one? And your dad won't call you if he had Windoze instead? dsas: You mentioned Win98 & its 'long-term support'; but how much did it cost to license 250 seats (or more) w/ that, compared to a Linux flavor? & as for Red Hat, it's in the game to stay, but so is Microsoft, & it's not even in the same league anymore, IMHO.
No, this isn't about the 'demise' of Red Hat, or about one of the other Linuxes, although some may pine for one of them. What we're talking about is what a lot of Linux aficionados have been looking forward to for quite some time; Linux, of any variety, is finally getting some real traction where it counts, in the corporate world. All those Linux desktops DO make a difference. You like Ubuntu, or SUSE, or Debian, or Red Hat, that's fine. Right now, there's a way to go, but time will shake out one or two that will rise above the others, whether it's the best or not. (Personally, I still think OS/2 was better than Windoze.) But the point is that 'open source,' of some kind, is in the hunt. Some of you don't think that 'corporate' will switch, but compare software (IT) w/ personnel; if they'll 'dump' the old hands to save money up front, how long before they figure they can swap out software the same way? Sooner or later, Ubuntu, Red Hat, or maybe one of the others IS going to go big at one of the big names in business. Everyone else will be watching, & if it looks like that business is doing well w/ Linux, others will follow. In that regard, the Linux version will matter less than how well that 'early adopter' does in the marketplace, where success is dependent on the winds of fortune & fate as much as anything else. Just an opinion. [:^) Keep looking up!] jons

Ubuntu or Red Hat
Submitted by asokadd on Tue, 2006-09-12 09:11.
I learned all my Linux from Red Hat. My hat is always off to them for making the impossible possible. My turning to SuSE is for a different reason. I have up to seven operating systems on my home computer (including Windows), but not Ubuntu. Knoppix is my best utility. UBUNTU has its teething problems yet, but I don't want to discourage the effort they are making. Good luck, and long live all the distributions and their administrators. Without your efforts the computer world would be boring. Flaming is little good for your soul but not your heart.
That is my advice as a sympathetic doctor (by profession). Visit my site at Or (British Council) Author name as Dr.S.B.Asoka Dissanayake

I use Red Hat AND Ubuntu, but prefer Ubuntu
Submitted by Anonymous visitor on Tue, 2006-09-19 18:38.
I wanted to learn/use Linux 6 years ago when I started a small web-consulting business. I had precious few resources and did not want to spend them on licensing fees for operating systems, database servers, development tools, etc., etc. I paid $800 to build a robust (enough) server and $15 for 6 CDs of Red Hat 5.5, I think it was??? Wow! What a great price to get up and going! I soon got frustrated after several attempts to install/configure, and gave up to go back to Microsoft. What I could not afford most of all was the time I was pouring into this! OK, a few years later I got the urge to try Linux again. A co-worker (I went back to a corporate IT department) gave me a copy of Ubuntu. Funny name, pictures of half-naked people on the CD sleeve... what the hell was this? He told me to just install it and see if I like it (4.10 - Warty Warthog). He also told me of a FANTASTIC site that every Ubuntu newbie needs to visit - ubuntuguide.org. Anyway, I got it installed and IT JUST WORKED! (To be fair, I imagine that the newer Linux distros have similar "ease of install" features.) Anyway, Ubuntu made me a Linux convert! I now have 3 servers, 2 desktops and 1 laptop running 6.06 Dapper Drake. OK, the real reason I prefer Ubuntu... At work we have 2 RHEL 9 boxes. I am admin on them. When I try to install some software to evaluate (document mgmt system, BPM suite, CRM app, etc.) I inevitably have to install/upgrade packages like MySQL, PHP, Perl, etc. I kept getting dependency issues with the RPMs. It took several extra hours tracking down misc packages to satisfy these dependencies. In a way I liked it - it honed my Linux sysadmin skills - but my boss doesn't like the extra time it takes to accomplish this.
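[Editor's aside: the RPM dependency chase this poster describes is, at bottom, a dependency-graph walk that apt-style tools automate. The toy sketch below illustrates the depth-first resolution idea only; the package names and the deps_of table are invented for illustration and are not real apt or rpm internals.]

```shell
# Toy resolver illustrating what an apt-style tool does behind the scenes:
# walk the declared dependency graph depth-first and schedule prerequisites
# before the packages that need them. The table below is invented.
deps_of() {
  case "$1" in
    webapp) echo "php mysql" ;;
    php)    echo "libc" ;;
    mysql)  echo "libc" ;;
    *)      echo "" ;;
  esac
}

ORDER=""
resolve() {
  for dep in $(deps_of "$1"); do
    resolve "$dep"              # install prerequisites first
  done
  case " $ORDER " in
    *" $1 "*) ;;                # already scheduled, skip duplicates
    *) ORDER="$ORDER $1" ;;
  esac
}

resolve webapp
echo "install order:$ORDER"     # -> install order: libc php mysql webapp
```

A real resolver also handles version constraints and conflicts, which is exactly where the manual "track down misc packages" routine gets painful.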
Let me say this, we are a mixed shop and always will be - Windows and Linux. I prefer it this way. It keeps me current on more skills and more marketable. Anyway, I decided to spend some company money and take some Linux classes. The only thing that my company would pay for was SUSE. Red Hat training was more expensive. Anyway, I took some SLES training and saw the future, and the future was YAST!!! Ok, YAST was cool, but I soon ran into problems with it as well. I set up a SLES box at work and had some YAST problems. The system hung up sometimes while doing updates. I could not get sound to work (this is not a big deal for a server, but it is for a desktop). Anyway, at home I ALWAYS had Ubuntu working and was becoming more proficient with it. By this time I had a desktop and a server with Ubuntu serving up a few websites and CMS sites for outside consulting work. I just set up an Ubuntu box here at work to test out an open-source DMS called KnowledgeTree DMS and it is working beautifully.

I currently use RHEL 9 and Ubuntu 6.06. I must say that I prefer Ubuntu because of its ease and simplicity. I like the fact that it is installed from 1 CD and I can apt-get from repositories on the web for updates. I like apt-get vs. rpm, and I think that is the main reason people will like Ubuntu vs. Red Hat. Don't get me wrong, Red Hat isn't going away, just like Windows is not going away. I work for a VERY large govt agency (20,000+ users) and we will ALWAYS have a healthy mixture of Microsoft and open-source solutions. I am very proud and extremely excited that we are investigating more and more uses for open-source software. We are in the process of selecting an OSS desktop to start experimenting with and see if we can, not phase out Windows, but phase in open-source software. As US taxpayers we should all be happy that our Govt is exploring ways to cut back on licensing fees paid to any one vendor. This is not only going on in Europe, Asia, and Africa, but here as well.
Can you tell I like the word anyway?

Anonymous coward
Submitted by Anonymous visitor on Thu, 2006-09-28 17:13.

    void RedHatCheck() {
        if (Ubuntu.kills(RedHat)) {
            RedHat = Ubuntu = Junk;
            return (Rise(SuSE) || Rise(openSuSE));
        } // end if
    }

I still think SuSE / OpenSuSE is the Linux distro "done right". I fail to see what the big hype is about Ubuntu, in much the same way that I think Firefox is an over-hyped browser that's not really "THAT" phenomenal (I use Opera btw and it kicks major FF butt!)

Ubuntu vs Red Hat
Submitted by Anonymous visitor on Sat, 2006-09-30 01:23.

Ubuntu > Red Hat all the way; the stability and performance are just great. Keep it up!

Not Fedora
Submitted by Anonymous visitor on Wed, 2006-10-04 00:40.

I think Fedora is not good because of its short life. Some 2 years ago I installed FC3 on our servers and now we are having problems because there are no new versions of the binary RPMs. Now I have jumped from WinXP to Ubuntu on my PC and it worked like a charm, and now I'm trying it on the server side. Just one installer CD is a good idea. I think this editorial is very interesting for 'normal' sysadmins like me because we can get to know all of your opinions and points of view. Thanks, Pablo

The main reason I'm going
Submitted by Anonymous visitor on Wed, 2006-10-04 14:19.

The main reason I'm going with Ubuntu is the long-term support for my desktop and servers. As much as I like RH, I'm afraid they simply made themselves too expensive for the small guy.

Thanks for the discussion
Submitted by atlantia on Thu, 2006-10-19 10:05.

One more comment, because Ubuntu has caught my desktop distro slot. I am not a sysadmin, only an amateur who has been caught up in the idea of some college student's gift to the planet that seemed to have the support of some incredibly creative and giving people.
Working with Windows at work and then at home, I read some Eric Raymond through a link online, played with an early Red Hat (which I could never get to install after months of effort), then spent hours trying to install Mandrake, which finally took (thank God), and had to play with some drivers to get it to work on an old notebook. Then I found Mepis, then SimplyMepis (due to it being the distro of choice of the local LUG). Definitely liked deb vs. rpm for the reasons listed above. Then I heard about MS (Mark Shuttleworth - kind of like the co-opting of those two letters) and Ubuntu - a space traveler who would send me a bunch of CDs for my relatives and friends in those slick covers - just boot with it and try it... (And me burning the ISO and flawlessly installing on only the second try made me feel like a god.) And I actually felt like I was contributing in my own minuscule way to the promotion of a Linux distribution that even a long-term noob like me feels confident about.

It truly came home now when I watch my partner use it on her laptop at home - a dual boot. Even though she thinks the whole open source movement is... nice... it just had to work on her laptop as easily as Windows. Now, after going back and forth for a few months, she rarely uses Windows and actually does her own updating, and has even spent some time in a terminal to install Flash in Firefox. She now is passing out the CDs in their nifty covers to friends at work, who may or may not play with them. I know, a little off topic, and the reason the masses may come to use Ubuntu is that most of us can drop in the CD and it works - can't say that for most of the distros I have tried. Better than Red Hat? Can't say definitively - I'm not a professional.
Ubuntu just works every time I have exposed someone to the wonders of Linux - I know I'll get flamed if I start espousing the free CDs and the slick packaging; it's just that I can't tell you how many people seriously consider trying Linux because somehow MS (see how it works...) gave it away for free in a nifty package that even my partner can praise. Thanks Linus and Richard and Mark and Tony and, yes, You, for making the world a little more fun.

Utter Ignorance
Submitted by Anonymous visitor on Sun, 2007-01-14 08:46.

Most of these responses are ignorant of fact. Red Hat has ALWAYS been considered a server distro. Don't get me wrong, I like Ubuntu and run it on one of my machines. However, Red Hat is still far more reputable than Ubuntu. Ubuntu, like ANY Linux distro, is Linux (which is a kernel), and usually includes the options to install other packages. ANY Linux distro can be made as powerful as the next. The only real significant differences between the different distros are these: UI and/or presentation, support, business model, contribution to products which may be distributed with the product, configuration, and so on. Again, Linux is a kernel. It is not a desktop or a server. I really hope that from now on when people want to make arguments, they do it by first studying and understanding the facts of which they are speaking for or against.

Confused as hell....
Submitted by redenex on Fri, 2007-03-09 10:47.

Hiya guys! I am a n00b to the Linux world; I just scrapped my XP last weekend and got on with Ubuntu 6.06. I am pretty happy with the performance, though I will need to educate myself more about Linux. Now, I plan to do some web development, and two friends have recommended Red Hat to me, and one has even brought a few CDs and helped me with installation, but on the major forums that I visit, I get the idea that Debian is a more stable system and Ubuntu is good enough for my requirements! Can someone help me clear my insanity? :) Cheers and good day!
Russian Admin
Submitted by Anonymous visitor on Sat, 2007-03-31 12:57.

Red Hat is alive and will live on, especially in Russia! reviewer

Red Hat vs. Ubuntu
Submitted by sjuras on Mon, 2007-04-09 21:02.

As a computer science student I have to say that Ubuntu 6.10 is the best Linux distribution for novices and intermediate users. It contains all the applications you need - a web browser, presentation, document and spreadsheet software, and instant messaging - packed in a nice GUI similar to Windows; it's user friendly and anyone can install it and start working. For experienced users, Slackware is a better choice. Sasa Juras - my blog

Ubuntu / RHEL
Submitted by Philippe Laquet (not verified) on Tue, 2007-10-23 23:23.

[troll]FreeBSD?... :p[/troll]

Ubuntu was the "best" for you because your experience started with it - I am not sure that any one of those distributions is "better" than the others. Essential matters (IMHO) for an enterprise-class distribution:

* The editor's support (quality AND location, a.k.a. "presence")
* Hardware and software support (with vendors' contributions, support, drivers...)
* How long the life cycle and support for the distribution itself will be

You can easily find almost exactly the same packages, GUI, and update utilities even if the packaging is different; you can do the same on a RHEL and on an Ubuntu Server... The difference, today, is that Red Hat has much more experience, partnerships, and coverage in enterprise environments. Maybe it will change in the near future, but for the moment, some may ask: "why should I switch to Ubuntu instead of my current RHEL subscriptions, support, technical teams' practice...?" I have been using Debian/Red Hat/Ubuntu/RHEL for 10 years - for personal and professional needs.

That was just my humble opinion.
http://www.freesoftwaremagazine.com/articles/editorial_13/
Geertjan's Blog Random NetBeans Stuff 2014-03-07T08:41:10+00:00 Apache Roller "NetBeans Platform for Beginners" is Leanpub Poster Child Geertjan 2014-03-07T08:00:19+00:00 2014-03-07T08:00:51+00:00 <p>If you've gone to the <a href="">Leanpub.com</a> site over the past few days, you'll have noticed that "<a href="">NetBeans Platform for Beginners</a>" is the current Leanpub poster child:</p> <p><a href=""><img style="border: 1px solid black;" src="" /></a> </p> <p>I.e., the landing page currently promotes "NetBeans Platform for Beginners". That's no surprise, since "NetBeans Platform for Beginners" is the number 1 earner over the past 7 days, i.e., it's doing really well:<br /></p> <p><a href=""><img style="border: 1px solid black;" src="" /></a> </p> <p>More on the book can be found in a new article on jaxenter entitled "<a href="">What's the relevance of the Java desktop in a world that's tilting towards mobile?</a>", as well as <a href="">a great book review in Tushar Joshi's blog</a>.</p> <p>Reading the book and have feedback? Or just have some questions about it? 
Go here, the authors are very responsive and helpful:</p> <p><a href=""></a><br /></p> Sidebar for NetBeans Platform Applications Geertjan 2014-03-06T15:28:11+00:00 2014-03-06T15:59:25+00:00 <pre>import java.awt.BorderLayout; import java.awt.Color; import java.awt.Container; import java.awt.Dimension; import java.awt.Frame; import javax.swing.JPanel; import javax.swing.JRootPane; import javax.swing.border.LineBorder; import org.openide.windows.OnShowing; import org.openide.windows.WindowManager; @OnShowing public class Startable implements Runnable { @Override public void run() { Frame mainWindow = WindowManager.getDefault().getMainWindow(); Container contentPane = ((JRootPane) mainWindow.getComponents()[0]).getContentPane(); contentPane.add(BorderLayout.WEST, new SideBarPanel()); } private class SideBarPanel extends JPanel { public SideBarPanel() { setPreferredSize(new Dimension(40, 40)); setBorder(new LineBorder(Color.RED, 2)); } } }</pre> <p>The result is the red border on the left of the screenshot below, which could just as easily be BorderLayout.EAST, of course, or any other place:</p> <p><img src="" style="border: 1px solid black;" /></p> <p>On the NetBeans Platform mailing list, Djamel Torche from Algeria recently showed off a screenshot of an app that has a sidebar on which the solution above is based:</p> <p> <a href=""><img src="" style="border: 1px solid black;" /></a><br /></p> NetBeans Community: We Need Your Help! Geertjan 2014-03-05T12:37:03+00:00 2014-03-05T12:39:59+00:00 <p>The NetBeans IDE 8 release cycle is nearing its end. 
Would be wonderful if everyone out there would use <a href="">NetBeans IDE 8.0 RC 1</a>, specifically that release (not Beta, not a nightly build, not anything else) for a few days and then fill out the release survey referred to below.<br /></p> <p><a href=""><img width="539" height="213" style="border: 1px solid black;" src="" /></a> </p> <p><a href=""></a></p> <p>To orientate yourself on the key goodies provided by NetBeans 8, read <a href="">The Top 5 Features of NetBeans IDE 8</a> on JAXEnter.com.<br /></p> Quick Pic: Maven in NetBeans IDE 8 Geertjan 2014-03-04T08:00:00+00:00 2014-03-05T19:29:29+00:00 <a href=""><img src="" /></a> Say Hello to JavaLand! Geertjan 2014-03-03T14:00:55+00:00 2014-03-03T14:17:02+00:00 <p><a href="">JavaLand</a> is a new Java/JVM-oriented conference happening in Germany from <b>March 25 to 26, 2014</b>. It covers Core Java, JVM languages, Enterprise Java, Tools, Software architecture, Security, Front-end technologies, and Trends around the complete Java/JVM ecosystem.<br /> </p> <p><a href=""><img src="" /></a><br /> </p> <p>The conference is organized by the German Oracle User Group (DOAG e.V.) and supported by the iJUG (<a href="" class="moz-txt-link-freetext"></a>), the association of over 18 German-speaking JUGs in Germany, Switzerland, and Austria. One of the interesting things about JavaLand is the venue -- the Phantasialand theme park (<a href="" class="moz-txt-link-freetext"></a>).<br /> <br /> I had the chance to talk to Markus Eisele (<a href="">@myfear</a>, <a href="">blog.eisele.net</a>), the content chair for JavaLand, about it in more detail.<br /> <br /> <img align="left" src="" /><b>How is JavaLand different from all the other Java conferences, in Germany and the rest of the world?</b> <br /> <br /> First of all, it is a brand new conference happening for the very first time. 
It was in the making for quite a while now, and the biggest concern was about how to deliver added value beyond the major conferences which are there already.<br /> </p> <ul> <li> The first differentiation is that we're not commercially oriented. We decided to be community-driven on many levels and this paradigm influences everything around and inside the JavaLand. <br /><br /></li> </ul> <img align="left" src="" /><b>What else is there to expect from JavaLand?</b><br /> <br /> I can promise that you will be continually struggling to find the one and only schedule for you, since there will be so many options!<br /> <br /> There's still time to register! What are you waiting for?<br /> <br /> <a href="" class="moz-txt-link-freetext"></a><br /> Quick Search Embedded East Geertjan 2014-03-02T09:03:48+00:00 2014-03-02T09:05:12+00:00 The requirement is to have an Action that shows the Quick Search field. One approach <a href="">was described here</a> and here's another one. 
<p><img style="border: 1px solid black;" src="" /> </p> <p>Here's how.</p> <pre><font size="1">public final class SearchAction implements ActionListener { JPanel panel = new JPanel(); @Override public void actionPerformed(ActionEvent e) { JButton closeButton = createButton(); Action qs = Actions.forID( "Edit", "org.netbeans.modules.quicksearch.QuickSearchAction"); Component quickSearchToolbarItem = ((Presenter.Toolbar) qs).getToolbarPresenter(); panel.add(quickSearchToolbarItem); panel.add(closeButton); Frame mainWindow = WindowManager.getDefault().getMainWindow(); Container contentPane = ((JRootPane) mainWindow.getComponents()[0]).getContentPane(); contentPane.add(BorderLayout.EAST, panel); mainWindow.validate(); } private JButton createButton() { JButton closeButton = new JButton(); closeButton.setText("x"); closeButton.setFocusPainted(false); closeButton.setMargin(new Insets(0, 0, 0, 0)); closeButton.setContentAreaFilled(false); closeButton.setBorderPainted(false); closeButton.setOpaque(false); closeButton.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { Frame mainWindow = WindowManager.getDefault().getMainWindow(); Container contentPane = ((JRootPane) mainWindow.getComponents()[0]).getContentPane(); panel.removeAll(); contentPane.remove(panel); mainWindow.validate(); } }); return closeButton; } }</font></pre> Runtime Look and Feel Switching Geertjan 2014-03-01T22:07:26+00:00 2014-03-01T22:14:28+00:00 Following on from my earlier <a href="">Options Window Color Analysis</a>, here's the Search field in the Options window, also darkened, important for air traffic systems that need to avoid the white glare of default UI text fields. 
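<p>The same darkening idea can be tried outside the NetBeans Platform as well. The following is a minimal, hypothetical Swing sketch (plain JTextField, no NetBeans APIs; the class name is made up): a look-and-feel default overridden via UIManager before any component exists is picked up by components created afterwards.</p>

```java
import java.awt.Color;
import javax.swing.JTextField;
import javax.swing.UIManager;

public class DarkTextFieldSketch {

    public static void main(String[] args) {
        // Override the look-and-feel default *before* creating components;
        // text fields constructed afterwards read this value when their
        // UI delegate installs its default colors.
        UIManager.put("TextField.background", Color.LIGHT_GRAY);

        JTextField searchField = new JTextField("search...");
        System.out.println(searchField.getBackground());
    }
}
```

<p>Note that components created before the put keep their old colors; a full runtime switcher also has to walk the existing component tree (for example via SwingUtilities.updateComponentTreeUI) to refresh what is already on screen.</p>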
<p><img src="" /> </p> <p>The key to this is this:</p> <pre>UIManager.put("TextField.background", Color.LIGHT_GRAY);</pre> <p>Below is code for a runtime look and feel switcher.</p> <font size="1" face="courier new,courier,monospace">@OnShowing<br />@ActionID(</font> <p>"<a href="">NetBeans Platform for Beginners</a>" is selling really well. If you look at the <a href="">Leanpub Bestsellers page</a>, you'll see it's second in "earnings in last 7 days". And the reviews are really good, too; for example, here's a review from a newbie to the NetBeans Platform:</p> <p><img src="" /> </p> <p>And here's the review of a very experienced NetBeans Platform developer:</p> <p><img src="" /> </p> <p>And there's a growing list of other reviews at the bottom of the book page, <a href="">check it out</a>, also from Benno Markiewicz and Michael Bishop. </p> Top 5 Features of NetBeans 8 Geertjan 2014-02-26T08:22:47+00:00 2014-02-28T08:26:50+00:00 <p><a href="">The release candidate for NetBeans 8 is available.</a></p> <p><a href=""><img src="" /></a> </p> <p>The top 5 features of the release:</p> <ul> <li><b>Tools for Java 8 Technologies.</b> Especially code analyzers and converters for moving to lambdas, functional operations, and method references, as well as one-click deployment and profiling for IoT, i.e., on embedded devices, such as Raspberry Pi.<br /><br /></li> <li><b>Tools for Java EE Developers. </b>Here the focus is on out-of-the-box PrimeFaces application and artifact generators. End-to-end development of Java EE applications, from code generators to Java editor enhancements supporting PrimeFaces developers. In addition, there's TomEE support for the first time, and the new WildFly plugin by the Red Hat guys.<br /><br /></li> <li><b>Tools for Maven. </b>Maven is really the key feature of NetBeans IDE, since it is the meeting point of so many different technologies and tools. 
No plugins needed in NetBeans IDE for this, just open the folder containing your POM and the IDE does the rest, defining the project in the IDE completely from your POM.<br /><br /></li> <li><b>Tools for JavaScript. </b>Lots of love for AngularJS in NetBeans 8, especially focused on connecting model, controller, and view classes with each other via code completion and hyperlinks.<br /><br /></li> <li><b>Tools for HTML5. </b>Grunt and Karma out of the box!<br /></li> </ul> <p>Also, many other enhancements, and new tools, for PHP developers and C/C++ developers too. <br /></p> <p>For more details, see <a href="">The top 5 features of NetBeans IDE 8</a> on JAXEnter.com.</p> Big Day for the NetBeans Platform Geertjan 2014-02-25T14:51:45+00:00 2014-02-25T17:08:45+00:00 <p>Obviously today is a very big day for the <a href="">NetBeans Platform</a> and all its users around the world. Two new books have been published on <a href="">Leanpub</a> especially for users of the NetBeans APIs. (To understand what Leanpub is all about, <a href="">watch this YouTube introduction</a>.)</p> <p><img align="left" style="padding: 10px; border: 1px;" src="" />In the first book, which is a completed book on Leanpub, entitled "<a href="">NetBeans Platform for Beginners</a>" (361 pages!), the authors take you on a walk through all the key NetBeans APIs, together with many example exercises and a free set of samples available on GitHub, with an <a href="">open discussion forum</a> included. You'll be creating well-architected and pluggable Java desktop applications before you're even fully aware of what you're doing.</p> <p>What strikes me about this book is that it gives a very good <em>weighting</em>. </p> <p>By the end of the book, you'll have a really thorough understanding of what the NetBeans Platform wants to do for you and how your application maps to its idioms. 
</p> <p><img align="left" style="padding: 10px; border: 1px;" src="" /> The second book is, if anything, even more interesting. Its value proposition lies in your involvement with its writing. It is not a complete book. It is called "<a href="">Exercises in Porting to the NetBeans Platform</a>".</p> <p>In this particular case, it is the hope of the authors that readers get involved. Contribute small example applications that encompass a problem you're facing in porting to the NetBeans Platform. Then the authors will dedicate the next chapter of their book to your problem scenario.</p> <p>The book teaches you to <i>think</i> in terms of NetBeans Platform idioms, i.e., it applies all the principles of the first book to porting scenarios that can easily be followed and learned from. On the <a href="">feedback page of the second book</a>, tell the authors what scenario the next chapter of the book should focus on.</p> <p>All in all, great news for the NetBeans Platform. Really comprehensive texts for getting started, also available bundled together at a reduced price as a "<a href="">NetBeans Platform Starter Kit</a>". </p> <p>Anyone out there read it and have opinions to share? Some pretty positive reviews are already <a href="">available on the page</a>, by Benno Markiewicz, Donald A. Evett, Michael W. Bishop, and Sean Phillips.</p> Of Toolbars and Banners Geertjan 2014-02-24T11:38:06+00:00 2014-02-24T11:47:44+00:00 From some code by Djamel Torche, here's how to create a toolbar along the left side of the NetBeans Platform. <pre><font size="1">); } } }</font></pre> YouTube: JPA Model Code Generator for NetBeans IDE Geertjan 2014-02-21T08:00:00+00:00 2014-02-23T00:32:32+00:00 <p>Awesome new plugin for NetBeans IDE!</p> <p> <iframe width="420" height="315" frameborder="0" src=""> </iframe> </p> <p><a href=""> </a></p> <p>Get the plugin here:</p> <p><a href=""></a><br /></p> Terence "ANTLR Guy" Parr Invitation! 
Geertjan 2014-02-20T08:00:00+00:00 2014-02-23T00:44:05+00:00 Terence Parr, awesome superhero language guru guy and definitely related somehow to Jim Carrey, as well as being the author of one of the best blog entries ever written, <a href="'s+death+greatly+exaggerated">Report of GUI's death greatly exaggerated</a>, has an important announcement to make, and here it is: <blockquote style="border: 2px solid #666666; padding: 8px; background-color: #cccccc;">Calling all coders who love to discuss computer language design and implementation! Whether you're functional, imperative, or declarative, come hang out for open discussion, lightning talks, and some formal talks. <p>Meet other language geeks and get some socialization! Experts and beginners are welcome. You might learn, teach, eat, or drink something. </p> <p>For fun, I might even take the show on the road to tech spots like Seattle, Bangalore, or Superman's hideout at the North Pole.</p> <p><a href=""></a> </p> </blockquote> <p>The first event is on Thursday, March 6, 2014 at 19.00 in San Fran:</p> <p><a href=""> </a></p> Interview with Authors of "NetBeans Platform for Beginners" (Part 2) Geertjan 2014-02-18T20:29:54+00:00 2014-02-19T08:29:46+00:00 <img align="right" src="" style="border: 1px solid black;" /> <p>In <a href="">part 1</a>, Jason Wexbridge and Walter Nyland were interviewed about the book they're working on about the NetBeans Platform.</p> <p><a href=""></a> <br /></p> <p> I caught up with them again, in the final stages of working on their book, to ask them some questions. <br /></p> <p><b>Hi Jason and Walter, how are things going with the book?</b></p> <p><font color="blue"><b>Jason:</b></font> Great. 
Really just finishing touches at this point.</p> <p><font color="green"><b>Walter:</b></font> It's been an intense time, but we're coming to the end of it now.</p> <p><b>It's not a pamphlet, then?</b></p> <p><font color="green"><b>Walter:</b></font>.</p> <p><font color="blue"><b>Jason:</b></font>.</p> <p> <b>What do you consider to be the highlights of the book?</b></p> <p><font color="green"><b>Walter:</b></font> Well, to me, the highlight is that it is exactly the book I would have wanted to have when I started working with the NetBeans Platform.</p> <p><font color="blue"><b>Jason:</b></font>.</p> <p><b>What about JavaFX? Source editor? Maven?</b></p> <p><font color="blue"><b>Jason:</b></font> Out of scope for the current book, though definitely the focus of the books we'll be working on next.</p> <p><b>Wow, awesome news. Great work. Any final thoughts?</b></p> <p><font color="green"><b>Walter:</b></font>!</p> <p><a href=""></a></p> Embedded Development with Lua in NetBeans Geertjan 2014-02-17T18:54:21+00:00 2014-02-18T20:15:50+00:00 <p>Lua is intended to be embedded into other applications. One hears a lot of buzz around it in the context of embedded development. Recently, the team providing the professional <a href="">Lua Glider</a> has made a subset available as a free extension to NetBeans IDE, consisting of an editor, debugger, and profiler.<br /></p> <p><img src="" style="border: 1px solid black;" /><br /> </p> <p>Download and install it here:</p> <p><a href=""></a></p> <p>And <a href="">here's a tutorial</a> I've been using to learn Lua.</p> Visual Database Modeling in NetBeans Geertjan 2014-02-14T08:00:00+00:00 2014-02-18T20:15:35+00:00 <p> There's an extremely interesting and professional new plugin, by Gaurav Gupta, who also created the really nice <a href="">JBPM tools</a> for NetBeans recently. 
It's a modeling tool that lets you generate Java classes from diagrams such as this:<br /></p> <p><img src="" style="border: 1px solid black;" /><br /> </p> <p>Get it here and see the explanatory screenshots on the page: <a href=""></a><br /></p> Seamless Aspose Integration with NetBeans IDE Geertjan 2014-02-13T08:00:00+00:00 2014-02-16T17:10:39+00:00 <p><a href="">Aspose</a> makes file format APIs and components available in multiple languages and for various technologies, including Java. <br /></p> <p>Recently the Aspose team made a great plugin available to the NetBeans community as a way to highlight their technology. Go to Tools | Plugins and you'll find the Aspose plugin available, while it is also on the <a href="">NetBeans Plugin Portal</a>, where it is described too.<br /></p> <p><img src="" /> </p> <p>You'll find a new project template in the Java category in the New Project dialog:<br /></p> <p><img src="" /> </p> <p>On the next step, you specify the name of the project and when you click Next again you're able to access Java libraries from the Aspose site:<br /></p> <p><img src="" /> </p> <p>That, in turn, creates the project below, adding the JARs from Aspose to the classpath:<br /></p> <p><img src="" /> </p> <p>Then you can go to the New File dialog and you'll see "Aspose Example" in the Java category:<br /></p> <p><img src="" /> </p> <p>Now you can choose from heaps of Aspose samples:<br /></p> <p><img src="" /> </p> <p>For example, now I have code for creating e-mails via the Aspose solution for doing so:<br /></p> <p><img src="" /> </p> <p>Congratulations for creating such seamless integration with NetBeans IDE, Aspose team! Aspose is now included on the NetBeans Partner page:</p> <p><a href=""></a><br /></p> New NetBeans Book by Oracle Press Geertjan 2014-02-12T16:47:01+00:00 2014-02-16T16:56:43+00:00 Build and distribute business web applications that target both desktop and mobile devices. 
<p>Cowritten by Java EE and NetBeans IDE experts, "Java EE and HTML5 Enterprise Application Development" fully explains cutting-edge, highly responsive design tools and strategies.</p> <p>Find out how to navigate NetBeans IDE, construct HTML5 programs, employ JavaScript APIs, integrate CSS and WebSockets, and handle security. </p> <p>This Oracle Press guide also offers practical coverage of SaaS, MVVM, and RESTful concepts.</p> <p><a href=""><img src="" style="border: 1px solid black;" /></a> </p> Two Recent IDE Polls -- Odd Similarities Geertjan 2014-02-11T19:23:41+00:00 2014-02-12T20:08:30+00:00 <p>Over the years, in this blog, I have frequently questioned the difference between <i>data </i>and <i>information</i>. All the way back in 2008, I even discussed <a href="">statistics in the context of (my childhood hero) John McEnroe</a>. How many out there can claim to have been arguing about statistics consistently and coherently for as long as I have? Indeed, not many.<br /></p> <p> Data is numbers, graphs, statistics, ups, downs, etc. Meanwhile, information is the <i>meaning</i> of data, that is, the conclusions that can be drawn from it. </p> <p>What does it <u><i>mean</i></u> that the statistics <i><u>over the past month</u> </i>show, both on jaxenter.de <b>AND</b> on java.net, that NetBeans IDE is (in the case of <a href="">jaxenter.de</a>) the best IDE and (in the case of <a href="">java.net</a>) the IDE in which most coding is done?<br /></p> <p><img src="" style="border: 1px solid black;" /></p> <p><img src="" style="border: 1px solid black;" /></p> <p>In both cases, interestingly, <i>more</i> votes were cast in the polls above than in other recent polls on those sites, which signifies, surely, that there is more <i>passion</i> around this topic than others that have been dealt with in polls on those sites. <br /></p> <p>But the point is that the <u>correlation</u> between these two different data points is surely interesting. 
Scoff if you like, but scoff meaningfully. How can the above two polls, on different sites, with different people responding to them, be explained?</p> Centralized Search Action Geertjan 2014-02-10T12:35:11+00:00 2014-02-11T12:37:54+00:00 Here's how to show the Quick Search textfield in a dialog by clicking a menu item. <pre><font size="1">import java.awt.BorderLayout;
import java.awt.Component;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.Action;
import javax.swing.JPanel;
import org.openide.DialogDescriptor;
import org.openide.DialogDisplayer;
import org.openide.awt.ActionID;
import org.openide.awt.ActionReference;
import org.openide.awt.ActionRegistration;
import org.openide.awt.Actions;
import org.openide.util.NbBundle.Messages;
import org.openide.util.actions.Presenter;

@ActionID(</font></pre> James Webb Space Telescope on NetBeans Geertjan 2014-02-07T22:11:56+00:00 2014-02-08T22:14:03+00:00 <p>Great screenshot by Sean Phillips, Duke's Choice Award winner, that provides a visualization of the James Webb Space Telescope Contact Analysis. It uses JavaFX 8 and NetBeans Platform 8 as its basis.<br /></p> <p><a href=""><img width="625" height="351" style="border: 1px solid black;" src="" /></a> </p>
<br /></p> <p>The Java editor in the IDE lets you convert the above to the below, using the "Use lambda expression" hint:</p> <pre>SaveCookie s = () -> { System.out.println("save to database..."); }; </pre> <p>Cleaned up even further, it can be simplified to this, though it's kind of unlikely you're going to have one statement in the method:</p> <pre>SaveCookie s = () -> System.out.println("save to database...");</pre> <p>On the other hand, you could point to a method, where the actual save functionality is found:</p> <pre>SaveCookie s = () -> save();</pre> LambdaFicator - From Imperative to Functional Code through Automated Refactoring Geertjan 2014-02-05T08:00:00+00:00 2014-02-09T07:55:10+00:00 <p>The origins of the awesome Java hints for migrating to JDK 8 in NetBeans 8 comes from the tooling that came out of the research shown below!</p> <p> <iframe width="420" height="315" frameborder="0" src=""> </iframe> </p> <p><a href=""></a><br /></p> YouTube: Starting a Project with the Vaadin NetBeans Plugin Geertjan 2014-02-04T08:00:00+00:00 2014-02-09T08:53:20+00:00 <p>Get started with Vaadin in NetBeans IDE, today!</p> <p> <iframe width="420" height="315" frameborder="0" src=""> </iframe> </p> <p><a href=""></a><br /></p> BufferedReader.lines Geertjan 2014-02-03T15:17:48+00:00 2014-02-03T15:24:23+00:00 <p><a href="--">BufferedReader.lines</a> is kind of interesting, letting you turn a BufferedReader into a <tt>java.util.Stream</tt> in Java 8. Here's some small experiments. 
</p> <p>Print out the number of lines in a file:<br /></p> <pre>public static void main(String[] args) { try (BufferedReader reader = Files.newBufferedReader( Paths.get("myfile.txt"), StandardCharsets.UTF_8)) { <b>System.out.println(reader.lines().count());</b> } catch (IOException ex) { } }</pre> <p>Print out all the lines:<br /></p> <pre>public static void main(String[] args) { try (BufferedReader reader = Files.newBufferedReader( Paths.get("myfile.txt"), StandardCharsets.UTF_8)) { <b>reader.lines().forEach(System.out::println);</b> } catch (IOException ex) { } }</pre> <p>Print out the longest line in a file:<br /></p> <pre>public static void main(String[] args) { try (BufferedReader reader = Files.newBufferedReader( Paths.get("myfile.txt"), StandardCharsets.UTF_8)) { <b>System.out.println(reader .lines() .mapToInt(String::length) .max() .getAsInt());</b> } catch (IOException ex) { } }</pre> RedMonk at FOSDEM: Lies, Damned Lies, and Statistics Geertjan 2014-02-02T14:29:51+00:00 2014-02-02T15:49:18+00:00 <p>Imagine you switch on the TV and you're just in time to catch the start of the weather report. The reporter is smiling cheerfully into the camera, while saying: </p> <blockquote style="border: 2px solid #666666; padding: 8px; background-color: #cccccc;">"We haven't been able to measure weather statistics in the <i>whole</i> country. In fact, we don't really know exactly how large the country is. Nevertheless, that being said, based on the weather statistics that we <i>did</i> manage to get hold of, here is the state of the weather today."</blockquote> <p>That, honestly, is what happened at FOSDEM yesterday in a packed out (people standing in the aisles, sitting on the floor, lined up outside in the corridor) <a href="">RedMonk</a> session entitled "What a Long Strange Trip It's Been: The Past, Present and Future of Java". 
(<a href="">Here's the abstract.</a>)</p> <p>Now, Steve O'Grady was perfectly frank in saying exactly what he said (<a href="">and says it again in his blog article</a>), [...] <i>nor have any clue about how to do so</i>.</p> <p>Personally, I have no reason to make this point other than a concern that we really <u>do</u> need usable and independent research to be done on the ubiquity of programming languages (and frameworks and libraries and IDEs), since the conclusions reached by RedMonk <i>favor</i> Java, which is my personal favorite language, the one I have been supporting and promoting for the past 10 years.</p> <p>In other words, I have nothing to gain by calling RedMonk's bluff. (I'd argue that even if you call your own bluff, it's still bluff.) They reached a conclusion I would have <i>wanted</i>:</p> FOSDEM: Constantin Drabo from FasoJUG Geertjan 2014-02-01T08:00:00+00:00 2014-02-03T14:09:46+00:00 <p>Constantin is, aside from being a NetBeans fan, a Fedora fan. Above we're at the Fedora booth and between us is Jaroslav, the Fedora product manager.</p> <p>It was great to meet you at last, Constantin!<br /></p> Awesome Blogs (Part 1): Geertjan 2014-02-01T06:09:07+00:00 2014-02-01T08:23:00+00:00 <p>I'm starting a new occasional series where I highlight awesome blogs around the webiverse.</p> <p>And a great one, what a pleasure to kick the series off with this one, is the brand new <a href=""></a> by Brett Ryan, from Melbourne, Australia.</p> <ul> <li>In <a href="">Migrating to JDK 8 with NetBeans</a>, his first blog entry, he shows how to set up and use NetBeans 8 to refactor applications to use JDK 8. It is well written, clear, has lots of screenshots, and is just a great read.
<br /><br /></li> <li>In <a href="">The Power of the Arrow</a> he shows 5 examples of code where he starts with a pre-Java 8 construct and then moves it step by step via NetBeans 8 to a new Java 8 lambdafied style, with all the intermediate steps included, and the motivation for why making the move makes sense. <br /></li> </ul> <p>And he says cool things about NetBeans 8, such as: </p> <blockquote style="border: 2px solid #666666; padding: 8px; background-color: #cccccc;">"I started."</blockquote> <p>I'm really looking forward to seeing what's next in Brett's blog! <br /></p> YouTube: "Smart Migration to JDK 8" Geertjan 2014-01-31T01:22:28+00:00 2014-01-31T01:23:05+00:00 <p>Don't know where to start with JDK 8? Here's a complete kit of info getting you in the right frame of mind for upgrading your code to leverage the Java 8 language changes.</p> <p>Feedback is welcome!</p> Slides: "Smart Migration to JDK 8" Geertjan 2014-01-29T23:25:46+00:00 2014-01-29T23:40:58+00:00 <p>A new slide deck for a screencast I am working on, planning to publish it tomorrow, on how to effectively migrate to JDK 8. I.e., you simply must have tools to do so, otherwise it's a nightmare.</p> <p>All the slides with "Demo" on them are the points where I will switch into NetBeans IDE 8 and do a small demo.</p> <p>Probably a lot of the above will change quite a bit.</p> <p>The slideshare location is here: <a href=""></a></p> <p>On a related note, take a look at this brilliant new article by Brett Ryan on migrating with NetBeans 8 to JDK 8:</p> <a href=""></a>
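As a footnote to the BufferedReader.lines entry above: the blog's three snippets all read from a file on disk. Here is a self-contained variant of the "longest line" experiment (class and method names are mine, not from the original posts) that wraps a StringReader, so it can be pasted into any scratch class and run. It also returns the longest line itself rather than just its length.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.Comparator;

public class LinesDemo {

    // Return the longest line in the given text, or "" for empty input.
    static String longestLine(String text) {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.lines()
                         .max(Comparator.comparingInt(String::length))
                         .orElse("");
        } catch (IOException ex) {
            // close() declares IOException; it cannot happen for a StringReader
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        String text = "short\na much longer line\nmid";
        System.out.println(longestLine(text)); // prints "a much longer line"
    }
}
```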
http://blogs.oracle.com/geertjan/feed/entries/atom
Created on 2017-02-09 00:17 by vstinner, last changed 2017-03-14 21:38 by vstinner. This issue is now closed. Subset of the (almost) rejected issue #29259 (tp_fastcall), attached patch adds _PyMethod_FastCall() and uses it in call_method() of typeobject.c. The change avoids the creation of a temporary tuple for Python functions and METH_FASTCALL C functions. Currently, call_method() calls method_call() which calls _PyObject_Call_Prepend(), and calling method_call() requires a tuple for positional arguments. Example of benchmark on __getitem__(): 1.3x faster (-22%). $ ./python -m perf timeit -s 'class C:' -s ' def __getitem__(self, index): return index' -s 'c=C()' 'c[0]' Median +- std dev: 130 ns +- 1 ns => 102 ns +- 1 ns See also the issue #29263 "Implement LOAD_METHOD/CALL_METHOD for C functions". Maybe PyObject_Call(), _PyObject_FastCallDict(), etc. can also be modified to get the following fast-path: + if (Py_TYPE(func) == &PyMethod_Type) { + result = _PyMethod_FastCall(func, args, nargs); + } But I don't know how common it is to get a PyMethod_Type object in these functions, nor the code of the additional if. Maybe, we can skip Method object entirely using _PyObject_GetMethod(). Currently it is used only in LOAD_METHOD. But PyObject_CallMethod(), _PyObject_CallMethodId(), PyObject_CallMethodObjArgs(), _PyObject_CallMethodIdObjArgs() can use it too. > But PyObject_CallMethod(), _PyObject_CallMethodId(), PyObject_CallMethodObjArgs(), _PyObject_CallMethodIdObjArgs() can use it too. CallMethod[Id]ObjArgs() can use it easily. But format support is not so easy. 
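To see at the Python level what the C patch avoids, compare the two ways of spelling a special-method call. This sketch is mine, not from the issue; it only illustrates the temporary bound-method object being discussed.

```python
# Calling a special method through the instance materializes a temporary
# bound-method object (a PyMethodObject at the C level); looking the
# function up on the type and passing self explicitly does not.

class C:
    def __getitem__(self, index):
        return index

c = C()

# Roughly what c[0] resolves to: a bound method is created first.
bound = c.__getitem__            # temporary method object
assert bound(0) == 0

# The "unbound" style used by the patch: a plain function from the type,
# with the instance passed as the first argument.
unbound = type(c).__getitem__    # plain function, no method object
assert unbound(c, 0) == 0

# Both spellings produce the same result.
assert c[0] == unbound(c, 0) == 0
```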
callmethod.patch: + ../python.default -m perf compare_to default.json patched2.json -G --min-speed=1 Slower (5): - logging_silent: 717 ns +- 9 ns -> 737 ns +- 8 ns: 1.03x slower (+3%) - fannkuch: 1.04 sec +- 0.01 sec -> 1.06 sec +- 0.02 sec: 1.02x slower (+2%) - call_method: 14.5 ms +- 0.1 ms -> 14.7 ms +- 0.1 ms: 1.02x slower (+2%) - call_method_slots: 14.3 ms +- 0.3 ms -> 14.6 ms +- 0.1 ms: 1.02x slower (+2%) - scimark_sparse_mat_mult: 8.66 ms +- 0.21 ms -> 8.76 ms +- 0.25 ms: 1.01x slower (+1%) Faster (17): - scimark_lu: 433 ms +- 28 ms -> 410 ms +- 24 ms: 1.06x faster (-5%) - unpickle: 32.9 us +- 0.2 us -> 31.7 us +- 0.3 us: 1.04x faster (-4%) - sqlite_synth: 10.0 us +- 0.2 us -> 9.77 us +- 0.24 us: 1.03x faster (-3%) - telco: 21.1 ms +- 0.7 ms -> 20.6 ms +- 0.4 ms: 1.03x faster (-2%) - unpickle_list: 8.22 us +- 0.18 us -> 8.02 us +- 0.17 us: 1.03x faster (-2%) - json_dumps: 30.3 ms +- 0.8 ms -> 29.6 ms +- 0.4 ms: 1.02x faster (-2%) - nbody: 245 ms +- 6 ms -> 240 ms +- 5 ms: 1.02x faster (-2%) - meteor_contest: 207 ms +- 2 ms -> 203 ms +- 2 ms: 1.02x faster (-2%) - scimark_fft: 738 ms +- 14 ms -> 727 ms +- 17 ms: 1.02x faster (-2%) - pickle_pure_python: 1.27 ms +- 0.02 ms -> 1.25 ms +- 0.02 ms: 1.01x faster (-1%) - django_template: 401 ms +- 4 ms -> 395 ms +- 5 ms: 1.01x faster (-1%) - sqlalchemy_declarative: 317 ms +- 3 ms -> 313 ms +- 4 ms: 1.01x faster (-1%) - json_loads: 64.2 us +- 1.0 us -> 63.4 us +- 1.0 us: 1.01x faster (-1%) - nqueens: 270 ms +- 2 ms -> 267 ms +- 2 ms: 1.01x faster (-1%) - crypto_pyaes: 234 ms +- 1 ms -> 231 ms +- 3 ms: 1.01x faster (-1%) - chaos: 300 ms +- 2 ms -> 297 ms +- 4 ms: 1.01x faster (-1%) - sympy_expand: 1.01 sec +- 0.01 sec -> 1.00 sec +- 0.01 sec: 1.01x faster (-1%) Benchmark hidden because not significant (42) I'm sorry, callmethod.patch is tuned other place, and causing SEGV. 
method_fastcall2.patch is tuning same function (call_method() in typeobject.c), and uses trick to bypass temporary method object (same to _PyObject_GetMethod()). $ ./python -m perf timeit --compare-to `pwd`/python.default -s 'class C:' -s ' def __getitem__(self, index): return index' -s 'c=C()' 'c[0]' python.default: ..................... 155 ns +- 4 ns python: ..................... 111 ns +- 1 ns Median +- std dev: [python.default] 155 ns +- 4 ns -> [python] 111 ns +- 1 ns: 1.40x faster (-28%) > method_fastcall2.patch is tuning same function (call_method() in typeobject.c), and uses trick to bypass temporary method object (same to _PyObject_GetMethod()). Oh, great idea! That's why I put you in the nosy list ;-) You know better than me this area of the code. > Median +- std dev: [python.default] 155 ns +- 4 ns -> [python] 111 ns +- 1 ns: 1.40x faster (-28%) Wow, much better than my patch. Good job! Can we implement the same optimization in callmethod() of Objects/abstract.c? Maybe add a "is_method" argument to the static function _PyObject_CallFunctionVa(), to only enable the optimization for callmehod(). method_fastcall3.patch implement the trick in more general way. (I haven't ran test yet since it's midnight. I'll post result later.) method_fastcall4.patch: Based on method_fastcall3.patch, I just added call_unbound() and call_unbound_noarg() helper functions to factorize code. I also modified mro_invoke() to be able to remove lookup_method(). I confirm the speedup with attached bench.py: Median +- std dev: [ref] 121 ns +- 5 ns -> [patch] 82.8 ns +- 1.0 ns: 1.46x faster (-31%) method_fastcall4.patch looks clean enough, and performance benefit seems nice. I don't know current test suite covers unusual special methods. Maybe, we can extend test_class to cover !unbound (e.g. @classmethod) case. method_fastcall4.patch benchmark results. It's not the first time that I notice that fannkuch and nbody benchmarks become slower. 
I guess that it's effect of changing code placement because of unrelated change in the C code. Results don't seem significant on such macro benchmarks (may be random performance changes due to code placement). IMHO the change is worth it! "1.46x faster (-31%)" on a microbenchmark is significant and the change is small. $ python3 -m perf compare_to /home/haypo/benchmarks/2017-02-08_15-49-default-f507545ad22a.json method_fastcall4_ref_f507545ad22a.json -G --min-speed=5 Slower (2): - fannkuch: 900 ms +- 20 ms -> 994 ms +- 10 ms: 1.10x slower (+10%) - nbody: 215 ms +- 3 ms -> 228 ms +- 4 ms: 1.06x slower (+6%) Faster (3): - scimark_lu: 357 ms +- 23 ms -> 298 ms +- 8 ms: 1.19x faster (-16%) - scimark_sor: 400 ms +- 11 ms -> 355 ms +- 12 ms: 1.12x faster (-11%) - raytrace: 1.05 sec +- 0.01 sec -> 984 ms +- 15 ms: 1.07x faster (-6%) Benchmark hidden because not significant (59): (...) +1 Though this is a rather large and impactful patch, I think it is a great idea. It will be one of the highest payoff applications of FASTCALL, broadly benefitting a lot of code.. New changeset 7b8df4a5d81d by Victor Stinner in branch 'default': Optimize slots: avoid temporary PyMethodObject Raymond Hettinger: "+1 Though this is a rather large and impactful patch, I think it is a great idea. It will be one of the highest payoff applications of FASTCALL, broadly benefitting a lot of code." In my experience, avoiding temporary tuple to pass positional arguments provides a speedup to up 30% faster in the best case. Here it's 1.5x faster because the optimization also avoids the creation of temporary PyMethodObject. ." I reviewed Naoki's patch carefully, but in fact it isn't as big as I expected. In Python 3.6, call_method() calls tp_descr_get of PyFunction_Type which creates PyMethodObject. The tp_call of PyMethodObject calls the function with self, nothing crazy. The patch removes a lot of steps and (IMHO) makes the code simpler than before (when calling Python methods). 
I'm not saying that such change is bugfree-proof :-) But we are far from Python 3.7 final, so it's the right time to push such large optimization. Naoki: "method_fastcall4.patch looks clean enough, and performance benefit seems nice." Ok, I pushed the patch with minor changes: * replace "variants:" with "Variants:" * rename lookup_maybe_unbound() to lookup_maybe_method() * rename lookup_method_unbound() to lookup_method() "I don't know current test suite covers unusual special methods." What do you mean by "unusual special methods"? "Maybe, we can extend test_class to cover !unbound (e.g. @classmethod) case." It's hard to test all cases, since they are a lot of function types in Python, and each slot (wrapper in typeobject.c) has its own C implementation. But yeah, in general more tests don't harm :-) Since the patch here optimizes the most common case, a regular method implemented in Python, I didn't add a specific test with the change. This case is already very well tested, like everything in the stdlib, no? -- I tried to imagine how we could avoid temporary method objects in more cases like Python class methods (using @classmethod), but I don't think that it's worth it. It would require more complex code for a less common case. Or do someone see other common cases which would benefit of a similar optimization? patch looks good to me. New changeset be663c9a9e24 by Victor Stinner in branch 'default': Issue #29507: Update test_exceptions Oh, I was too lazy to run the full test suite, I only ran a subset and I was bitten by buildbots :-) test_unraisable() of test_exceptions fails. IHMO the BrokenRepr subtest on this test function is really implementation specific. To fix buildbots, I removed the BrokenRepr unit test, but kept the other cases on test_unraisable(): change be663c9a9e24. See my commit message for the full rationale. In fact, the patch changed the error message logged when a destructor fails. 
Example:
---
class Obj:
    def __del__(self):
        raise Exception("broken del")
    def __repr__(self):
        return "<useful repr>"

obj = Obj()
del obj
---
Before, contains "<useful repr>":
---
Exception ignored in: <bound method Obj.__del__ of <useful repr>>
Traceback (most recent call last):
  File "x.py", line 3, in __del__
    raise Exception("broken del")
Exception: broken del
---
After, "<useful repr>" is gone:
---
Exception ignored in: <function Obj.__del__ at 0x7f10294c3110>
Traceback (most recent call last):
  File "x.py", line 3, in __del__
    raise Exception("broken del")
Exception: broken del
---
There is an advantage. The error message is now better when repr(obj) fails. Example:
---
class Obj:
    def __del__(self):
        raise Exception("broken del")
    def __repr__(self):
        raise Exception("broken repr")

obj = Obj()
del obj
---
Before, raw "<object repr() failed>" with no information on the type:
---
Exception ignored in: <object repr() failed>
Traceback (most recent call last):
  File "x.py", line 3, in __del__
    raise Exception("broken del")
Exception: broken del
---
After, the error message includes the type:
---
Exception ignored in: <function Obj.__del__ at 0x7f162f873110>
Traceback (most recent call last):
  File "x.py", line 3, in __del__
    raise Exception("broken del")
Exception: broken del
---
Technically, slot_tp_finalize() can call lookup_maybe() to get a bound method if the unbound method failed. The question is if it's worth it? In general, I dislike calling too much code to log an exception, since it's likely to raise a new exception. It's exactly the case here: logging an exception raises a new exception (in repr())! Simpler option: revert the change in slot_tp_finalize() and document that it's deliberate to get a bound method to get a better error message. The question is a tradeoff between performance and correctness.
I checked typeobject.c: there is a single case where we use the result of lookup_maybe_method()/lookup_method() for something else than calling the unbound method: slot_tp_finalize() calls PyErr_WriteUnraisable(del), the case discussed in my previous message which caused test_exceptions failure (now fixed). Thanks for finishing my draft patch, Victor. callmetohd2.patch is same trick for PyObject_CallMethod* APIs in abstract.c. It fixes segv in callmethod.patch. And APIs receiving format string can do same trick when format is empty too. As I grepping "PyObject_CallMethod", there are many format=NULL callers. New changeset e5cd74868dfc by Victor Stinner in branch 'default': Issue #29507: Fix _PyObject_CallFunctionVa() callmethod2.patch: I like that change on object_vacall(), I'm not sure about the change on PyObject_CallMethod*() only for empty format string. I suggest to split your patch into two parts, and first focus on object_vacall(). Do you have a benchmark for this one? Note: I doesn't like the name I chose for object_vacall(). If we modify it, I would suggest to rename it objet_call_vargs() instead. Anyway, before pushing anything more, I would like to take a decision on the repr()/test_exceptions issue. What do you think Naoki? New changeset b1dc6b6d5fa20f63f9651df2e7986a066c88ff7d by Victor Stinner in branch 'master': Issue #29507: Fix _PyObject_CallFunctionVa() > I'm not sure about the change on PyObject_CallMethod*() only for empty format string. There are many place using _PyObject_CallMethodId() to call method without args. Maybe, we should recommend to use _PyObject_CallMethodIdObjArgs() when no arguments, and replace all caller in cpython? performance benefit is small. >> I'm not sure about the change on PyObject_CallMethod*() only for empty format string. > > There are many place using _PyObject_CallMethodId() to call method without args. 
I'm more interested by an optimization PyObject_CallMethod*() for any number of arguments, as done in typeobject.c ;-) > performance benefit is small. > Are you using PGO+LTO compilation? Without PGO, the noise of code placement can be too high. In your "perf stat" comparisons, I see that "insn per cycle" is lower with the patch, which sounds like a code placement issue like a performance issue with the patch. Yes, I used --enable-optimization this time. But my patch is not good for branch prediction of my CPU in this time. I'm willing Object/call.c solves such placement issue. BTW, since benefit of GetMethod is small, how about this? * Add _PyMethod_FastCallKeywords * Call it from _PyObject_FastCall* _PyObject_FastCall* can use FASTCALL C function and method (PyCFunction), and Python function (PyFunction). Python method (PyMethod) is last common callable PyObject_FastCall* can't use FASTCALL. > I'm willing Object/call.c solves such placement issue. I also *hope* that a call.c file would *help* a little bit, but I'm not sure that it will fix *all* code placement issues. I created the issue #29524 with a patch creating Objects/call.c. call_method() of typeobject.c has been optimized by the commit 516b98161a0e88fde85145ead571e13394215f8c. I consider that the initial issue is now fixed. I created the issue #29811 to discuss optimizing callmethod() of call.c which is more complex.
https://bugs.python.org/issue29507
Converting a Xen Linux virtual machine to KVM

The virt-v2v tool converts virtual machines, including their disk images and metadata, from foreign hypervisors for use with Red Hat Enterprise Linux with KVM managed by libvirt, Red Hat Virtualization, and Red Hat OpenStack Platform. This article provides instructions for converting and importing a Red Hat Enterprise Linux virtual machine from a Xen host. On successful completion of the conversion process, virt-v2v creates a new libvirt domain XML file for the converted VM with the same name as the original VM. The VM can be started using libvirt tools, such as virt-manager or virsh.

Note: If virt-v2v is run as a non-root user, the virt-manager application will not detect the converted VM. This is because virt-v2v saves converted VMs to the current user's namespace, but libvirt maintains separate namespaces for the VMs for each user.

Prerequisites

Ensure that the virtual machine is stopped prior to running the conversion process. The following minimum system resources must be available:

- Minimum network speed 1Gbps
- Disk space: sufficient space to store the VM's disk image, plus 1 GB
- Sufficient free space in the VM's file system according to the following table:

Procedure

Step 1

Install the virt-v2v packages and their dependencies:

# yum install virt-v2v

Step 2

Configure password-less SSH access to the remote Xen host from the virt-v2v conversion server.

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):

# ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-36jmUA3rn2JS/agent.24318; export SSH_AUTH_SOCK;
SSH_AGENT_PID=24319; export SSH_AGENT_PID;
echo Agent pid 24319;
# SSH_AUTH_SOCK=/tmp/ssh-36jmUA3rn2JS/agent.24318; export SSH_AUTH_SOCK;
# SSH_AGENT_PID=24319; export SSH_AGENT_PID;
# echo Agent pid 24319;
Agent pid 24319

# ssh-add
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)

Step 3

Export the LIBGUESTFS_BACKEND=direct environment variable on the conversion server.

# export LIBGUESTFS_BACKEND=direct

Step 4

Add your SSH public key to the /root/.ssh/authorized_keys file on the Xen host.

# ssh-copy-id root@xen-example-host.com
root@xen-example-host.com's password:
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@xen-example-host.com'" and check to make sure that only the key(s) you wanted were added.

Step 5

Test the SSH configuration:

# ssh root@xen-example-host.com
Last login: Tue Nov 1 14:39:21 2016 from ovpn-123-45.brq.redhat.com

If SSH has been configured successfully, the conversion server logs you in to the Xen host and does not request any password.

Step 6

To convert a Xen Linux virtual machine, use the following command, and replace:

- testguest with the intended name for the converted VM
- $pool with the storage pool that the converted VM should use
- $format with the disk format the converted VM should use, such as qcow2 or raw

# virt-v2v -ic 'xen+ssh://root@xen.example.com' testguest -os $pool -of $format

For a list of virt-v2v parameters, see the virt-v2v man page.

Step 7

Observe the conversion progress. The following example shows the progress of virt-v2v converting a Red Hat Enterprise Linux 5 virtual machine running on a Xen host:

# virt-v2v -ic "xen+ssh://root@xen-sample-host.com" -os default rhel5-para -of raw
[   0.0] Opening the source -i libvirt -ic xen+ssh://root@xen-sample-host.com rhel5-para
[   0.0] Creating an overlay to protect the source from being modified
[   0.0] Opening the overlay
[  69.0] Initializing the target -o libvirt -os default
[  69.0] Inspecting the overlay
[  80.0] Checking for sufficient free disk space in the guest
[  80.0] Estimating space required on target for each disk
[  80.0] Converting Red Hat Enterprise Linux Server release 5.11 to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 136.0] Mapping filesystem data to avoid copying unused and blank areas
[ 137.0] Closing the overlay
[ 137.0] Checking if the guest needs BIOS or UEFI to boot
[ 137.0] Copying disk 1/1 to /var/lib/libvirt/images/rhel5-para-sda (raw)
(100.00/100%)
[ 599.0] Creating output metadata
Pool default refreshed
Domain rhel5-para defined from /tmp/v2vlibvirt8e77ed.xml
[ 599.0] Finishing off

Step 8

Confirm that the VM has been imported correctly:

# virsh list --all

If this command lists the new Linux virtual machine, you have successfully converted and imported the VM. Boot the VM and confirm its full functionality before deleting the original VM or migrating active services.

Known Issues

- Currently, virt-v2v cannot directly access a Xen VM, or any VM located remotely over SSH, if that VM's disks are located on host block devices. To work around this problem, copy the VM over to the conversion server using the separate virt-v2v-copy-to-local tool. For more information about block devices, see the virt-v2v man page. (BZ# 1283588)

For converting virtual machines from other sources, you can see the following articles:

- Converting a VMware vCenter Linux virtual machine to KVM
- Converting a VMware vCenter Windows virtual machine to KVM
- Converting VMware guests to import to Red Hat OpenStack Platform
- Exporting a guest virtual machine from VMware as an OVA file and importing it into KVM

In addition, you can refer to the virt-v2v man page and the virt-v2v upstream documentation.
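If you have several Xen guests to migrate, the per-guest command from Step 6 can be driven by a small wrapper. The helper below is my own sketch, not part of the article or of virt-v2v itself; it only assembles the command line using the same -ic/-os/-of options documented above, and echoes it so the loop can be dry-run first.

```shell
#!/bin/sh
# Build the virt-v2v command for one guest. Arguments:
#   $1 guest name, $2 storage pool, $3 output format (raw or qcow2)
# XEN_URI is assumed to point at your Xen host.
XEN_URI="${XEN_URI:-xen+ssh://root@xen.example.com}"

v2v_cmd() {
    echo "virt-v2v -ic '${XEN_URI}' $1 -os $2 -of $3"
}

# Dry-run for a couple of guests; pipe the output through sh (or drop
# the echo inside v2v_cmd) to actually run the conversions in sequence.
for guest in rhel5-para rhel6-hvm; do
    v2v_cmd "$guest" default raw
done
```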
- Product(s): Red Hat Enterprise Linux

1 Comments

Connecting to the Xen hypervisor over ssh requires the use of ssh keys (no password/key prompt) and the use of ssh-agent. This can be accomplished by running the ssh-agent and ssh-add commands shown in Step 2 from the conversion server. Then run virt-v2v as described.
https://access.redhat.com/articles/1353783
To those who have programmed before, simple C programs shouldn't be too hard to read. Suppose you call this program basics.c

#include <stdio.h>
#include <stdlib.h>

int mean(int a, int b)
{
    return (a + b)/2;
}

int main()
{
    int i, j;
    int answer;

    /* comments are done like this */
    i = 7;
    j = 9;
    answer = mean(i, j);
    printf("The mean of %d and %d is %d\n", i, j, answer);
    exit(0);
}

Note that the source is free-format and case matters. All C programs need a main function where execution begins. In this example some variables local to main are created and assigned (using `=' rather than `:='. Also note that `;' is a statement terminator rather than a separator as it is in Pascal). Then a function mean is called that calculates the mean of the arguments given it. The types of the formal parameters of the function (in this case a and b) should be compatible with the actual parameters in the call. The initial values of a and b are copied from the variables mentioned in the call (i and j). The function mean returns the answer (an integer, hence the `int' before the function name), which is printed out using printf. The on-line manual page describes printf fully. For now, just note that the 1st argument to printf is a string in which format strings are embedded; %d for integers, %f for reals and %s for strings. The variables that these format strings refer to are added to the argument list of printf. The `\n' character causes a newline. C programs stop when the end of main is reached or when exit is called. This program can be compiled using `cc -Aa -o basics basics.c'. The `-o' option renames the resulting file basics rather than the default a.out. Run it by typing `basics'. A common mistake that beginners make is to call their executable `test'. Typing test is likely to run the test facility built into the shell, producing no output, rather than the user's program. This can be circumvented by typing ./test but one might just as well avoid program names that might be names of unix facilities.
If you're using the ksh shell then typing `whence program_name' will tell you whether there's already a facility with that name.
http://www-h.eng.cam.ac.uk/help/tpl/languages/C/teaching_C_old/node3.html
Deploy a Kubernetes Cluster using Kubespray

Kubernetes is the next big thing. If you are here I am assuming you already know of Kubernetes. If you don't, you'd better get started soon. Kubernetes, also called K8s, is a system for automating deployment, scaling and management of containerized applications: basically, a container orchestration tool. Google had been using Borg, an in-house container orchestration tool they had built, for years; in 2014 they open-sourced the technology as Kubernetes and advanced it along with the Linux Foundation. So let's get started.

There are multiple ways to set up a Kubernetes cluster. One of them is using Kubespray, which uses Ansible. The official Github link for installation via kubespray is just about crisp but has a lot of reading between the lines. I spent days getting this right. If you are getting started with Kubernetes then you can follow the steps.

We would be going through the following to deploy the cluster.

- Infra Requirements
- Network Configuration
- Installations
- Set configuration parameters for Kube cluster
- Deploy the Kube Cluster

I have created a 1 master, 2 node cluster. I have used another machine to deploy the whole cluster, which I call my base machine. So, I would require 4 VMs (1 base machine and 3 for my Kubernetes cluster). Since I already have an AWS account, I will be using it to spin up 4 Ubuntu machines. You may choose to use Google Cloud or Microsoft Azure.

Infra Requirements

Create the following infra on AWS.

Base machine: Used to clone the kubespray repo and trigger the ansible playbooks from there.

Base Machine
Type: t2.micro (1 core x 1 GB memory)
OS: Ubuntu 16.04
Number of Instances: 1

Cluster Machines
Create the 3 instances in one shot so that they remain in the same security group and subsequent changes in the security group will reflect on the whole cluster.
Type: t2.small (1 core x 2 GB RAM)
OS: Ubuntu 16.04
Number of Instances: 3

Using t2.micro for the cluster machines will fail.
There is a check in the installation which fails further installation if the memory is not sufficient. Also, when you create your instances on AWS, create a new pem file for the cluster. A pem file is like a private key used for authentication. Save this pem file as K8s.pem on your local machine. This will be used later by you/ansible to ssh into the cluster machines.

Network Configurations

On the AWS console, in the EC2 section:

- Click on the security group corresponding to any instance of the cluster (since they all belong to the same security group)
- Click on Inbound rules
- Click on Edit and under Type click on "All Traffic" to allow internal communication within the cluster

Installations

Tools to be installed on the Base Machine:

- Ansible v2.4
- Python-netaddr
- Jinja 2.9

Install the latest ansible on Debian-based distributions:

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Check Installation:

$ ansible --version

Install Jinja 2.9 (or newer)

Execute the below commands to install Jinja 2.9 or upgrade existing Jinja to version 2.9:

$ sudo apt-get install python-pip
$ pip2 install jinja2 --upgrade

Install python-netaddr:

$ sudo apt-get install python-netaddr

Allow IPv4 forwarding

You can check whether IPv4 forwarding is enabled or disabled by executing the below command:

$ sudo sysctl net.ipv4.ip_forward

If the value is 0 then IPv4 forwarding is disabled. Execute the below command to enable it.
$ sudo sysctl -w net.ipv4.ip_forward=1

Check Firewall Status

$ sudo ufw status

If the status is active then disable it using the following:

$ sudo ufw disable

Set configuration parameters for the Kube Cluster

Clone the kubespray github repository

Clone the repo onto this base machine:

$ git clone

Copy the key file into the Base machine

Navigate into the kubespray folder:

$ cd kubespray

Now you can either copy the pem file which you used to create the cluster on AWS into this directory from your local machine OR just copy the contents into a new file on the base machine. Navigate to the location where you have downloaded your pem file from AWS when you created your cluster. This I have downloaded on my local machine (which is different from the base machine and the cluster machines). View the contents of the K8s.pem file on your local machine using the command line:

$ cat K8s.pem

Copy the contents of the file.

Connect / ssh onto the Base machine

On the Base Machine:

$ vim K8s.pem

This will create and open a new file by the name K8s.pem. Paste the contents here. To save, hit the Esc key and then type :wq

Change permissions of this file:

$ chmod 600 K8s.pem

Modify the inventory file as per your cluster

Copy the sample inventory and create your own duplicate as per your cluster:

$ cp -rfp inventory/sample inventory/mycluster

Since I will be creating a 1 master, 2 node cluster, I have accordingly updated the inventory file.

Update the Ansible inventory file with the inventory builder. Run the following 2 commands to update the inventory file. Replace the sample IPs with the private IPs of the newly created instances before running the command:

$ declare -a IPS=(172.31.66.164 172.31.72.173 172.31.67.194)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Now edit/verify the hosts.ini file to ensure there is one master and 2 nodes as shown below. Keep only node1 under the [kube-master] group and node2 and node3 under the [kube-node] group.
Hosts.ini file:

[all]
node1 ansible_host=172.31.66.164 ip=172.31.66.164
node2 ansible_host=172.31.67.194 ip=172.31.67.194
node3 ansible_host=172.31.72.173 ip=172.31.72.173

[kube-master]
node1

[kube-node]
node2
node3

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]

[vault]
node1
node2
node3

The above is how the file finally looks.

Verify other kube cluster configuration parameters
Review and change parameters under ``inventory/mycluster/group_vars``:
$ vim inventory/mycluster/group_vars/all.yml
Change the value of the variable 'bootstrap_os' from 'none' to 'ubuntu' in the file all.yml. Save and exit the file.
Make the necessary changes in the k8s-cluster.yml file, if any:
$ vim inventory/mycluster/group_vars/k8s-cluster.yml
Save and exit the file.

Deploy Kubespray with Ansible Playbook
$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml --private-key=K8s.pem --flush-cache -s

Check your Deployment
Now SSH into the Master Node and check your installation.
Command to fetch nodes in the namespace 'kube-system':
$ kubectl -n kube-system get nodes
Command to fetch services in the namespace 'kube-system':
$ kubectl -n kube-system get services

Wohhoooo!!! We are done!!! You now have your kubernetes cluster up and running.
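As a side note, the inventory builder step above essentially expands the IP list into per-node host entries. Here is a minimal shell sketch of that mapping (illustrative only; the real inventory.py also assigns groups and may order the nodes differently, as it did above):

```shell
# Mimic what contrib/inventory_builder/inventory.py writes into the [all]
# section of hosts.ini, using the same sample private IPs as above.
gen_all_section() {
  local i=1
  for ip in "$@"; do
    echo "node$i ansible_host=$ip ip=$ip"
    i=$((i + 1))
  done
}

gen_all_section 172.31.66.164 172.31.72.173 172.31.67.194
```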
https://medium.com/@iamalokpatra/deploy-a-kubernetes-cluster-using-kubespray-9b1287c740ab
10 days ago Jochen Topf committed the new flex backend into osm2pgsql. This new backend gives you full control over what data goes where, and the opportunity to massage it a little bit before committing it to the db. This is useful not only because we can move processing from rendering time to import time (which, let's be honest, we could already do some of with the --tag-transform-script option), but also because we can move away from the 4-table setting of the original osm2pgsql mode (line, point, polygon, roads), ignore data we don't want in the db, or even create new data.

This new backend works exactly like osm2pgsql. There are two stages; the first one goes through all the nodes, ways and relations (in that order), and the second one only through ways and relations. For each one of these types you can define a function, process_[type](). process_node() is only going to be called in stage 1, but process_way() and process_relation() are going to be called in both stages. The functions, of course, can figure out which stage they are in; my advice is to split the functions into process_node_stage1() and process_node_stage2() and call them from process_node() to make that clearer. An object is processed in stage 2 only if it was marked in stage 1. For more details, please read the docs, and definitely learn Lua.

My first task for this is bus routes. The last time I put some of my scarce spare time into my fork I managed to display bus routes, but it was ugly: overlapping dash-based lines that were difficult to follow. Compare that to a real map of a (quite small) bus network, and you get the idea that my take wasn't going to be useful.

Let's try to think how this can be achieved. What we need is a way to take each way and count how many bus lines go through it. Bus lines are represented as relations, from which we can take the members. So the first step must be to find all the members of a relation. This can be done in process_relation_stage1().
We mark all the ways in a bus relation, and we also somehow associate the bus to the way. In stage 2, we process all the marked ways (ways with buses), we count how many buses go through each one, we create a new way for rendering each bus line, and we displace this new line based on which 'slot' it belongs to, so the lines are parallel. I have a first version of such an algorithm. It's based on one of the examples. Let's take it apart:

local tables = {}

tables.routes = osm2pgsql.define_way_table('routes', {
    { column = 'tags',   type = 'hstore' },
    { column = 'offset', type = 'float' },
    { column = 'ref',    type = 'text' },
    { column = 'colour', type = 'text' },
    { column = 'way',    type = 'linestring' },
})

We declare a table that will store ways, called routes. It will store the original tags of the way in hstore format, plus an offset column that will tell the renderer how much to displace the line, plus a ref and a colour; and of course, the way itself.

pt_ways = {}

function osm2pgsql.process_relation(relation)
    -- Only interested in relations with type=route, route=bus and a ref
    -- TODO: other pt routes too
    if relation.tags.type == 'route' and relation.tags.route == 'bus' and relation.tags.ref then
        for _, member in ipairs(relation.members) do
            if member.type == 'w' then
                -- print(relation.id, member.ref, relation.tags.ref)
                if not pt_ways[member.ref] then
                    pt_ways[member.ref] = {}
                end

                -- insert() is the new append()
                table.insert(pt_ways[member.ref], relation)
                osm2pgsql.mark_way(member.ref)
            end
        end
    end
end

We process the relations. Like I said before, we take all the members that are ways, we store this bus line in an array indexed by the original way id, and we mark the way for processing in stage 2. Notice that the bus line's ref is in the tags.ref index (more on this later), while its id is in the ref index. This last part was confusing to me.
function sort_by_ref(a, b)
    return a.tags.ref < b.tags.ref
end

function osm2pgsql.process_way(way)
    -- Do nothing for ways in stage1, it'll be in relations where the magic starts
    if osm2pgsql.stage == 1 then
        return
    end

    -- We are now in stage2
    local routes = pt_ways[way.id]
    table.sort(routes, sort_by_ref)

We do nothing for ways in stage 1, and in stage 2 we sort the routes to give them a consistent ordering when rendering consecutive ways.

    local line_width = 2.5
    local offset = 0
    local side = 1
    local base_offset
    local offset
    local index
    local ref
    local slot
    local shift

    if #routes % 2 == 0 then
        base_offset = line_width / 2
        shift = 1
    else
        base_offset = 0
        shift = 0
    end

    for index, route in ipairs(routes) do
        -- index is 1 based!
        slot = math.floor((index - shift) / 2)
        offset = (base_offset + slot * line_width) * side

        if side == 1 then
            side = -1
        else
            side = 1
        end

This is the part of the algo that calculates the offset. It was refined after a couple of iterations and it seems to work fine with odd and even amounts of bus lines. line_width will be moved to the style later, so I can apply widths depending on ZL. In short, we're assigning slots from the center to the outside, alternating sides.

        -- TODO: group by colour
        -- TODO: generic line if no colour
        row = {
            tags = way.tags,
            ref = route.tags.ref,
            colour = route.tags.colour,
            offset = offset,
            geom = { create = 'line' }
        }
        tables.routes.add_row(tables.routes, row)
    end
end

And this is the end. We set the row values and add it to our table. It's that simple :)

Now we run osm2pgsql with the flex backend:

$
Reading in file: monaco-latest.osm.pbf
Processing: Node(46k 0.4k/s) Way(3k 3.78k/s) Relation(10 10.00/s) parse time: 125s
Node stats: total(46745), max(7199027992) in 124s
Way stats: total(3777), max(770935077) in 1s
Relation stats: total(215), max(10691624) in 0s
Entering stage 2...
Creating id indexes...
Creating id index on table 'routes'...
Creating id indexes took 0 seconds
Lua program uses 0 MBytes
Entering stage 2 processing of 458 ways...
Entering stage 2 processing of 0 relations...
Clustering table 'routes' by geometry...
Using native order for clustering
Creating geometry index on table 'routes'...
Analyzing table 'routes'...
All postprocessing on table 'routes' done in 0s.
Osm2pgsql took 129s overall

Now, for the render part, it's twofold; a layer definition...:

SELECT way, COALESCE(colour, 'purple') as color, ref as route, "offset"
FROM routes
ORDER BY ref

... and the style itself:

#routes {
  [zoom >= 14] {
    line-width: 1;
    line-color: @transportation-icon;
    line-join: round;
    line-offset: [offset];

    [zoom >= 17] {
      line-color: [color];
      line-width: 2;
    }
  }
}

Quite simple too! That's because all the data is already prepared for rendering. All the magic happens at import time. Here's the same region as before:

Now, two caveats: This last thing means that if you want to change the style, you most probably will need to reimport the data. It must have taken me some 20 iterations until I got the data in a way I could use for rendering; that's why I tested with an extract of Monaco :) I also used a separate db from the main render database, but maybe just another table would be enough.

Second, you have to specify all the data you want to save; there is no compatibility with the current rendering database, so you will also need to base your code on the compatible example.
In my tests, I just imported the data the usual way:

$
Mid: loading persistent node cache from nodes.cache
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Using lua based tag processing pipeline with script openstreetmap-carto.lua
Setting up table: planet_osm_point
Setting up table: planet_osm_line
Setting up table: planet_osm_polygon
Setting up table: planet_osm_roads
Reading in file: monaco-latest.osm.pbf
Processing: Node(46k 0.8k/s) Way(3k 3.78k/s) Relation(210 105.00/s) parse time: 61s
Node stats: total(46745), max(7199027992) in 58s
Way stats: total(3777), max(770935077) in 1s
Relation stats: total(215), max(10691624) in 2s
Stopping table: planet_osm_nodes
Stopped table: planet_osm_nodes in 0s
Stopping table: planet_osm_ways
Stopped table: planet_osm_ways in 0s
Stopping table: planet_osm_rels
Stopped table: planet_osm_rels in 0s
Sorting data and creating indexes for planet_osm_point
Sorting data and creating indexes for planet_osm_roads
Sorting data and creating indexes for planet_osm_polygon
Sorting data and creating indexes for planet_osm_line
Using native order for clustering
Using native order for clustering
Using native order for clustering
Using native order for clustering
Copying planet_osm_point to cluster by geometry finished
Creating geometry index on planet_osm_point
Creating indexes on planet_osm_point finished
Copying planet_osm_roads to cluster by geometry finished
Creating geometry index on planet_osm_roads
Creating indexes on planet_osm_roads finished
All indexes on planet_osm_point created in 0s
Completed planet_osm_point
All indexes on planet_osm_roads created in 0s
Completed planet_osm_roads
Copying planet_osm_polygon to cluster by geometry finished
Creating geometry index on planet_osm_polygon
Creating indexes on planet_osm_polygon finished
All indexes on planet_osm_polygon created in 0s
Completed planet_osm_polygon
Copying planet_osm_line to cluster by geometry finished
Creating geometry index on planet_osm_line
Creating indexes on planet_osm_line finished
All indexes on planet_osm_line created in 0s
Completed planet_osm_line
Osm2pgsql took 64s overall

One thing to notice is that it took half of the time of the flex backend.
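As a postscript, the slot/offset maths in process_way() can be sanity-checked outside of osm2pgsql. Here it is re-implemented in Python (my own sketch, not part of the Lua script; line_width hard-coded to the same 2.5):

```python
import math

def route_offsets(n, line_width=2.5):
    """Compute the per-route offsets the Lua code assigns: slots are
    filled from the centre outwards, alternating sides."""
    if n % 2 == 0:
        base_offset, shift = line_width / 2, 1
    else:
        base_offset, shift = 0.0, 0
    offsets, side = [], 1
    for index in range(1, n + 1):  # Lua's ipairs() is 1-based
        slot = math.floor((index - shift) / 2)
        offsets.append((base_offset + slot * line_width) * side)
        side = -side
    return offsets

print(route_offsets(3))  # [0.0, -2.5, 2.5]: middle line centred, the others beside it
print(route_offsets(4))  # [1.25, -1.25, 3.75, -3.75]
```

With an odd number of routes one line sits exactly on the way and the rest pair up around it; with an even number, the two innermost lines straddle the centre, which is what makes the parallel rendering look right.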
https://www.grulic.org.ar/~mdione/glob/posts/bus-routes-with-osm2pgsql-flex-backend/
Base class for popup widgets.

#include <Wt/WPopupWidget>

A popup widget anchors to another widget, for which it usually provides additional information or assists in editing, etc. The popup widget will position itself relative to the anchor widget by taking into account available space, and switching sides if necessary to fit the widget into the current window. For example, a vertically anchored widget will by default be a "drop-down", positioning itself under the anchor widget, but it may also choose to position itself above the anchor widget if space is lacking below.

isTransient(): Returns whether the popup is transient.

orientation(): Returns the orientation.

Reimplemented in Wt::WSuggestionPopup, and Wt::WDialog.

setAnchorWidget(): Sets an anchor widget. A vertical popup will show below (or above) the widget, while a horizontal popup will show right (or left) of the widget.

setDeleteWhenHidden(): Lets the popup delete itself when hidden. When this is enabled, the popup will delete itself when hidden. You need to take care that when overriding setHidden(), the popup may thus be deleted from within WPopupWidget::setHidden(). The default value is false.

setHidden(): Reimplemented from Wt::WCompositeWidget. Reimplemented in Wt::WDialog, and Wt::WMessageBox.

setTransient(): Sets the transient property. A transient popup will automatically hide when the user clicks outside of the popup. When autoHideDelay is not 0, then it will also automatically hide when the user moves the mouse outside the widget for longer than this delay.

shown(): Signal emitted when the popup is shown. This signal is emitted when the popup is being shown because of a client-side event (not when setHidden() or show() is called).
http://www.webtoolkit.eu/wt/wt-3.3.8/doc/reference/html/classWt_1_1WPopupWidget.html
On 10/9/06, Clinton Begin <clinton.begin@gmail.com> wrote: > >> I haven't, but I think the JavaBeans standard is actually much more > >> pervasive than most folks realize > > Which is a very good thing. I'm quite familiar with the full spec, and I'm > very happy that 99% of Java developers only know (or at least use) 1% of it. I suppose I get the part about using only a small percentage of the capabilities, but I'd personally be happier if more developers had a more complete understanding of the spec. Even (especially?) if they come to the conclusion that it's to be avoided. > >> How about the possibility of plugging-in custom Probe/ProbeFactory > >> impls? I'm assuming that would be a feature request? > > Yes...but to be honest, that wouldn't come any time soon. There is already > an interface called Probe that I beleve most of the iBATIS code uses as a > facade in front of ClassInfo. I'm sure it wouldn't be a high priority. The Probe interface is fine, it looks like the real roadblock to custimization is actually ProbeFactory. > Cheers, > Clinton > > > On 10/9/06, Kris Schneider <kschneider@gmail.com> wrote: > > On 10/9/06, Clinton Begin <clinton.begin@gmail.com> wrote: > > > Only two reasons: > > > > > > 1) Performance. At the time (not sure about now) ClassInfo was much > faster > > > than BeanUtils. > > > > Understood. I wasn't suggesting that iBATIS should use BeanUtils, it > > was just an example of extending JavaBeans while maintaining > > compatibility. > > > > > 2) More important -- I never thought in a million years anyone would > ever > > > touch that nightmare of an API ... BeanInfo. > > > > > > Seriously...I would strongly recommend you avoid that stuff. You're > > > wandering a path full of complexity and verbosity with very little > benefit. > > > > > > Don't fall for "Sun Standards". I did, and iBATIS is worse for it. 
> > > > I haven't, but I think the JavaBeans standard is actually much more > > pervasive than most folks realize, especially in the J2EE world (and > > no, I'm not confusing JavaBeans with EJB). I won't claim to have seen > > widespread use of the BeanInfo facilities though. > > > > How about the possibility of plugging-in custom Probe/ProbeFactory > > impls? I'm assuming that would be a feature request? > > > > > Cheers, > > > Clinton > > > > Thanks for the feedback. > > > > > On 10/9/06, Kris Schneider <kschneider@gmail.com> wrote: > > > > Not sure if this should be a dev list discussion, but I'll start it > > > > here and see where it leads... > > > > > > > > I guess I never realized this, but iBATIS doesn't seem to honor > > > > BeanInfo classes. For example, given this class: > > > > > > > > public class Foo { > > > > private String name; > > > > public String name() { return this.name; } > > > > public void name(String name) { this.name = name; } > > > > } > > > > > > > > and its BeanInfo: > > > > > > > > import java.beans.*; > > > > import java.lang.reflect.*; > > > > public class FooBeanInfo extends SimpleBeanInfo { > > > > private static final Class BEAN_CLASS = Foo.class; > > > > public PropertyDescriptor[] getPropertyDescriptors() { > > > > PropertyDescriptor[] props = null; > > > > try { > > > > Method nameReadMethod = > > > BEAN_CLASS.getDeclaredMethod("name", null); > > > > Method nameWriteMethod = > > > > BEAN_CLASS.getDeclaredMethod("name", new Class[] { > > > String.class }); > > > > props = new PropertyDescriptor[] { new > > > > PropertyDescriptor("name", nameReadMethod, nameWriteMethod) }; > > > > } catch (NoSuchMethodException ignore) { > > > > } catch (IntrospectionException ignore) { > > > > } > > > > return props; > > > > } > > > > } > > > > > > > > com.ibatis.common.beans.ClassInfo reports that the > bean > > > has a single, > > > > read-only property called "class". 
However, java.beans.Introspector > > > > reports that it has a read/write property called "name". Was there a > > > > reason to ignore the existing JavaBeans framework and implement custom > > > > introspection? I can understand the need to extend the core JavaBeans > > > > framework to support features like List-based indexed properties or > > > > mapped properties, but that can be done without breaking > > > > compatibility. For example, Jakarta Commons BeanUtils implements both > > > > of those previously mentioned features, but also honors BeanInfo. > > > > > > > > In addition, if someone wanted to override the current implementation, > > > > I don't see a way to plug in a different Probe/ProbeFactory. Is there > > > > any way to do that? > > > > > > > > Thanks. > > > > > > > > -- > > > > Kris Schneider <mailto:kschneider@gmail.com> > > > > -- > > Kris Schneider <mailto: kschneider@gmail.com> -- Kris Schneider <mailto:kschneider@gmail.com>
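To see the Introspector behaviour being discussed in action, here is a self-contained sketch (not from the original thread; the classes are nested so that the Introspector can find the BeanInfo under its binary name, BeanInfoDemo$FooBeanInfo):

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.beans.SimpleBeanInfo;
import java.lang.reflect.Method;

public class BeanInfoDemo {
    // Same shape as the Foo in the thread: accessors that do not follow
    // the get/set naming convention.
    public static class Foo {
        private String name;
        public String name() { return this.name; }
        public void name(String name) { this.name = name; }
    }

    // Explicit BeanInfo mapping name()/name(String) to a "name" property.
    public static class FooBeanInfo extends SimpleBeanInfo {
        @Override
        public PropertyDescriptor[] getPropertyDescriptors() {
            try {
                Method read = Foo.class.getDeclaredMethod("name");
                Method write = Foo.class.getDeclaredMethod("name", String.class);
                return new PropertyDescriptor[] {
                    new PropertyDescriptor("name", read, write)
                };
            } catch (NoSuchMethodException | IntrospectionException e) {
                return null;
            }
        }
    }

    public static void main(String[] args) throws IntrospectionException {
        BeanInfo info = Introspector.getBeanInfo(Foo.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName());   // the explicit BeanInfo is honored
        }
    }
}
```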
http://mail-archives.apache.org/mod_mbox/ibatis-user-java/200610.mbox/%3C3d242a3d0610091947t3157921cy94b63f7e8fcf64a8@mail.gmail.com%3E
Hi, I want to dynamically instantiate new MBeans from a sar-deployed "service-mbean". What is the best way for the service-mbean to get a reference to its MBeanServer? I could do it like this:

private MBeanServer getMBeanServer() {
    List srvrList = MBeanServerFactory.findMBeanServer(null);
    return (MBeanServer) srvrList.iterator().next();
}

but do I have a guarantee that the find method returns a list with only one element? What if, for some strange reason, I should have more than one MBean server? I could also search through the list for an MBean server which recognizes the ObjectName of the already registered service-mbean (which I call from), but is there an easier/better/safer way?

Implement MBeanRegistration, you get your MBeanServer at preRegister.

Regards,
Adrian

Just what I needed... Thanks a lot!
-Marius
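Adrian's suggestion boils down to something like the following sketch (the class name is invented for illustration; only the javax.management types are real):

```java
import javax.management.MBeanRegistration;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of the suggested approach: the JMX agent hands the MBean its own
// MBeanServer via the MBeanRegistration callbacks, so there is no need to
// guess with MBeanServerFactory.findMBeanServer(null).
public class DynamicRegistrar implements MBeanRegistration {
    private MBeanServer server;

    public ObjectName preRegister(MBeanServer server, ObjectName name) {
        this.server = server;   // remembered for later dynamic registrations
        return name;
    }
    public void postRegister(Boolean registrationDone) {}
    public void preDeregister() {}
    public void postDeregister() {}

    public MBeanServer getServer() {
        // later: server.registerMBean(...) to spawn new MBeans dynamically
        return server;
    }
}
```

Because preRegister() receives exactly the server the bean is being registered with, this also works correctly when more than one MBeanServer exists in the JVM.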
https://developer.jboss.org/thread/50629
A boilerplate module setup using Browserify, Babel, Mocha, and Chai

Over the past few months I've been taking a look at a few of the ES2015 features that have landed in the JavaScript language. There are some excellent additions that will help JS developers write more succinct code. My personal favorites at the moment are Template literals and Default parameters. Being able to specify a default parameter without a load of boilerplate code at the top of a function is a real win, and no more concatenating strings when you need to include a variable. So while experimenting with the new features, I figured I'd also bring a few other technologies together to build a very simple module boilerplate that I could then use on future projects.

Purpose of the boilerplate

The initial purpose was to experiment with the ES2015 import statement, but this then branched out to include Babel and Browserify. Once those were implemented, I figured I would also roll in Mocha, Chai, and jQuery. There was a big question in my head on how these could all work together to create a simple workflow, which could then be built upon in the future. My checklist of what I wanted to achieve with the experiment was:

- Ability to use ES2015+ features
- Split the code into separate modules, for easier maintenance and testability
- Be able to bundle the JavaScript in an efficient manner
- Work directly with the DOM using the querySelector methods, or even jQuery since I sometimes require old IE support
- Be testable using a popular testing framework

Technologies

A list of the technologies used and their purpose is below:

ES2015 (the language formerly known as ES6, formerly known as Harmony)

ES2015 is the latest iteration of the JavaScript language. The year is now being used in the name, so it is easy to distinguish when the changes in the language were added (in theory!). So as you would expect there's also an ES2016, and ES2017 in the pipeline.
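As a quick taste of the two favourites mentioned above, default parameters and template literals together look like this (a standalone sketch, not taken from the boilerplate itself):

```javascript
// Default parameter: no `name = name || 'world';` boilerplate needed.
// Template literal: no string concatenation with `+`.
function greet(name = 'world') {
  return `Hello, ${name}!`;
}

console.log(greet());        // Hello, world!
console.log(greet('Matt'));  // Hello, Matt!
```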
These latest iterations add a whole host of useful language features and improvements to make it easier to write JavaScript. The full ES2015 language specification can be found here.

Babel

The problem with introducing new language features on the web platform is that you still need to support older browsers. As the saying goes, you can't break the web. Because of this, nobody wants to write some lovely ES2015 code, and then have to write another version just for older browsers. Older browsers won't know how to interpret these new features, and the page will break. This is where Babel comes to the rescue. Babel is an ES2015 compiler that will transform the latest features back into ES5-compatible code so older browsers will still work. It even has the ability to polyfill methods that older browsers don't support; so there's no reason not to start learning and using ES2015 today.

Mocha & Chai

Mocha is a very popular JavaScript testing framework that runs on Node.js. It allows a developer to write and run a set of tests across their JavaScript code, and accurately report the results. Any tests that fail can then be debugged and fixed. Chai is a BDD / TDD assertion library that can be used with any testing framework, but is often used in conjunction with Mocha. It comes with a whole set of APIs that allow you to write your tests in whatever way you prefer (Should, Expect, Assert). Using both libraries together can be a real life-saver when it comes to refactoring code. It is very quick and easy to see when new code changes have broken functionality (assuming you have written your tests correctly!).

jQuery

I don't think I really need to explain this one, do I? Everyone knows jQuery; I'm sure it has saved most of us from cross-browser hell in the past! I have included it here because I often have projects that still require IE8 support.
In my opinion it is much easier to include jQuery than to pick out a host of micro-libs to do the same job, with all their custom APIs and documentation.

Browserify

Browserify is the "glue" that brings all of these technologies together. Browserify allows you to write code like you would in Node.js (using the require() method). This then gets bundled up into a single JavaScript file that the browser can execute. Browserify deals with all the code dependencies for you, and in the case of this demo also compiles ES2015 back to ES5 using Babel, mentioned above.

Simple module example

The code below shows a very simple example of how modules are imported into the main index.js file; Browserify then does the rest for you. The init() method kicks off the DOM interaction in module 1, and a sample unique array is imported in from module 2. The functionality of each module is hidden away in separate files, allowing you to easily modularise your code. Using this setup, it is easy to create private variables and methods inside the modules. A developer then only needs to expose what is required to the wider application using the export keyword.

I've created a very simple sample page you can view using the grunt command. Grunt compiles everything and then launches a small BrowserSync http server that can be used to view the page. For more information on how to run the code see the Github readme.

Note: If you prefer to use Gulp instead of Grunt, this is fully possible. I simply used Grunt in the demo as I already had most of the code written from previous projects. This allowed me to focus on the JavaScript, rather than workflow tooling.

Issues

There was a fair bit of trial and error with the initial setup to get it working as I wanted. My main issue occurred when it came to implementing Mocha, Chai, and jQuery.
Testing jQuery in a Node environment became a real pain because of two errors I ran into: ReferenceError: $ is not defined and Error: jQuery requires a window with a document.

The first, $ is not defined, was easily solved. In the browser environment I had jQuery loaded as a global object via its own script. You then tell Browserify this is the case using "jquery": "global:jQuery" in the package.json file. This of course didn't work in Node since it isn't a browser environment, so the script wasn't loaded. Adding import $ from 'jquery'; inside the module fixed that.

The second, jQuery requires a window with a document, took a good few hours to fix. I tried everything I could find on Stack Overflow including jsdom and mocha-jsdom; but no matter what setup I tried, the error still occurred. In the end it was installing jsdom-global that fixed it. Install it, create yourself a mocha.opts file for your custom settings, and you are ready to go. Then you can either run Mocha from the command line using npm run test with the following settings in your package.json:

Or what I prefer to do is run the tests directly in my IDE. JetBrains WebStorm is my editor of choice; just remember you may need to point your IDE towards the mocha.opts file in the test settings.

Conclusion

This is my first iteration of a module system I plan to use on new projects going forwards, and it will continue to evolve to fit particular project requirements as they are discovered. You can find all the code, along with the very simple example, on GitHub. Feel free to use it if you like, or even suggest improvements. I'm always open to constructive criticism, so raise an issue or get in contact with me via the contact page.
https://nooshu.github.io/blog/2016/12/16/browserify-boilerplate-module/
Hello, I have an Arduino UNO and a Seeed bluetooth shield. I am using the same sample code that everybody has been using, downloaded from the website, and correctly set up the jumpers (so that in the code where I have RxD 6, I connect BT_TX 6 and Digital, and for TxD 7, I connect BT_RX 7 and Digital). I also tried moving these numbers and the jumpers around, but that didn't seem to have any different effect. Code included below.

My problem is that I cannot get from the double green flashing (idle mode) to the alternating red/green blinking (pairable mode), which would indicate that I can pair it with other bluetooth devices. I have confirmed from several computers and my phone that the Seeed shield is not discoverable. Can anyone offer any help or suggestions? Thank you!

#include <SoftwareSerial.h>   // Software Serial Port
#define RxD 6
#define TxD 7
...
blueToothSerial.print("\r\n+STNA=SeeedBTSlave\r\n"); // set the bluetooth name as "SeeedBTSlave"
...
https://forum.arduino.cc/t/seeed-bluetooth-shield-cannot-get-into-pairing-mode-alternating-red-green-blink/173886
Hello! A couple of days ago I wrote about tiny personal programs, and I mentioned that it can be fun to use "secret" undocumented APIs where you need to copy your cookies out of the browser to get access to them. A couple of people asked how to do this, so I wanted to explain how because it's pretty straightforward. We'll also talk a tiny bit about what can go wrong, ethical issues, and how this applies to your undocumented APIs.

As an example, let's use Google Hangouts. I'm picking this not because it's the most useful example (I think there's an official API which would be much more practical to use), but because many sites where this is actually useful are smaller sites that are more vulnerable to abuse. So we're just going to use Google Hangouts because I'm 100% sure that the Google Hangouts backend is designed to be resilient to this kind of poking around. Let's get started!

step 1: look in developer tools for a promising JSON response

I start out by going to Google Hangouts and opening the network tab in Firefox developer tools and looking for JSON responses. You can use Chrome developer tools too. Here's what that looks like:

The request is a good candidate if it says "json" in the "Type" column.

I had to look around for a while until I found something interesting, but eventually I found a "people" endpoint that seems to return information about my contacts. Sounds fun, let's take a look at that.

step 2: copy as cURL

Next, I right click on the request I'm interested in, and click "Copy" -> "Copy as cURL". Then I paste the curl command in my terminal and run it. Here's what happens.

$ curl ' -X POST ........ (a bunch of headers removed)
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.

You might be thinking – that's weird, what's this "binary output can mess up your terminal" error?
That’s because by default, browsers send an Accept-Encoding: gzip, deflate header to the server, to get compressed output. We could decompress it by piping the output to gunzip, but I find it simpler to just not send that header. So let’s remove some irrelevant headers. step 3: remove irrelevant headers Here’s the full curl command line that I got from the browser. There’s a lot here! I start out by splitting up the request with backslashes ( \) so that each header is on a different line to make it easier to work with: curl ' \ -X POST \ -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:96.0) Gecko/20100101 Firefox/96.0' \ -H 'Accept: */*' \ -H 'Accept-Language: en' \ -H 'Accept-Encoding: gzip, deflate' \ -H 'X-HTTP-Method-Override: GET' \ -H 'Authorization: SAPISIDHASH REDACTED' \ -H 'Cookie: REDACTED' -H 'Content-Type: application/x-www-form-urlencoded' \ -H 'X-Goog-AuthUser: 0' \ -H 'Origin: \ -H 'Connection: keep-alive' \ -H 'Referer: \ -H 'Sec-Fetch-Dest: empty' \ -H 'Sec-Fetch-Mode: cors' \ -H 'Sec-Fetch-Site: same-site' \ -H 'Sec-GPC: 1' \ -H 'DNT: 1' \ -H 'Pragma: no-cache' \ -H 'Cache-Control: no-cache' \ -H 'TE: trailers' \ -' This can seem like an overwhelming amount of stuff at first, but you don’t need to think about what any of it means at this stage. You just need to delete irrelevant lines. I usually just figure out which headers I can delete with trial and error – I keep removing headers until the request starts failing. In general you probably don’t need Accept*, Referer, Sec-*, DNT, User-Agent, and caching headers though. In this example, I was able to cut the request down to this: curl ' \ -X POST \ -H 'Authorization: SAPISIDHASH REDACTED' \ -H 'Content-Type: application/x-www-form-urlencoded' \ -H 'Origin: \ -H 'Cookie: REDACTED'\ -' So I just need 4 headers: Authorization, Content-Type, Origin, and Cookie. That’s a lot more manageable. 
step 4: translate it into Python

Now that we know what headers we need, we can translate our curl command into a Python program! This part is also a pretty mechanical process, the goal is just to send exactly the same data with Python as we were with curl. Here's what that looks like. This is exactly the same as the previous curl command, but using Python's requests. I also broke up the very long request body string into an array of tuples to make it easier to work with programmatically.

import requests
import urllib

data = [
    ('personId', '101777723'),  # I redacted these IDs a bit too
    ('personId', '117533904'),
    ('personId', '111526653'),
    ('personId', '116731406'),
    (')
]

response = requests.post('
    headers={
        'X-HTTP-Method-Override': 'GET',
        'Authorization': 'SAPISIDHASH REDACTED',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Origin': '
        'Cookie': 'REDACTED',
    },
    data=urllib.parse.urlencode(data),
)
print(response.text)

I ran this program and it works – it prints out a bunch of JSON! Hooray! You'll notice that I replaced a bunch of things with REDACTED, that's because if I included those values you could access the Google Hangouts API for my account which would be no good.

and we're done!

Now I can modify the Python program to do whatever I want, like passing different parameters or parsing the output. I'm not going to do anything interesting with it because I'm not actually interested in using this API at all, I just wanted to show what the process looks like. But we get back a bunch of JSON that you could definitely do something with.

curlconverter looks great

Someone commented that you can translate curl to Python (and a bunch of other languages!) automatically with curlconverter, which looks amazing – I've always done it manually. I tried it out on this example and it seems to work great.

figuring out how the API works is nontrivial

I don't want to undersell how difficult it can be to figure out how an unknown API works – it's not obvious!
I have no idea what a lot of the parameters to this Google Hangouts API do! But a lot of the time there are some parameters that seem pretty straightforward, like requestMask.includeField.paths=person.email probably means “include each person’s email address”. So I try to focus on the parameters I do understand more than the ones I don’t understand.

this always works (in theory)

Some of you might be wondering – can you always do this? The answer is sort of yes – browsers aren’t magic! All the information browsers send to your backend is just HTTP requests. So if I copy all of the HTTP headers that my browser is sending, I think there’s literally no way for the backend to tell that the request isn’t sent by my browser and is actually being sent by a random Python program.

Of course, we removed a bunch of the headers the browser sent so theoretically the backend could tell, but usually they won’t check. There are some caveats though – for example a lot of Google services have backends that communicate with the frontend in a totally inscrutable (to me) way, so even though in theory you could mimic what they’re doing, in practice it might be almost impossible. And bigger APIs that encounter more abuse will have more protections.

Now that we’ve seen how to use undocumented APIs like this, let’s talk about some things that can go wrong.

problem 1: expiring session cookies

One big problem here is that I’m using my Google session cookie for authentication, so this script will stop working whenever my browser session expires. That means that this approach wouldn’t work for a long running program (I’d want to use a real API), but if I just need to quickly grab a little bit of data as a 1-time thing, it can work great!

problem 2: abuse

If I’m using a small website, there’s a chance that my little Python script could take down their service because it’s doing way more requests than they’re able to handle.
So when I’m doing this I try to be respectful and not make too many requests too quickly. This is especially important because a lot of sites which don’t have official APIs are smaller sites with fewer resources. In this example obviously this isn’t a problem – I think I made 20 requests total to the Google Hangouts backend while writing this blog post, which they can definitely handle.

Also if you’re using your account credentials to access the API in an excessive way and you cause problems, you might (very reasonably) get your account suspended. I also stick to downloading data that’s either mine or that’s intended to be publicly accessible – I’m not searching for vulnerabilities.

remember that anyone can use your undocumented APIs

I think the most important thing to know about this isn’t actually how to use other people’s undocumented APIs. It’s fun to do, but it has a lot of limitations and I don’t actually do it that often. It’s much more important to understand that anyone can do this to your backend API! Everyone has developer tools and the network tab, and it’s pretty easy to see which parameters you’re passing to the backend and to change them.

So if anyone can just change some parameters to get another user’s information, that’s no good. I think most developers building publicly available APIs know this, but I’m mentioning it because everyone needs to learn it for the first time at some point :)
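One last aside on step 4: the array-of-tuples request body is easy to sanity-check offline, because urllib.parse.urlencode happily repeats a key that appears multiple times, which is exactly what the repeated personId parameters need. (The IDs here are made up, like the redacted ones above.)

```python
import urllib.parse

# a hypothetical request body with a repeated parameter, like the
# personId fields in the Hangouts request above
data = [
    ('personId', '101777723'),
    ('personId', '117533904'),
    ('requestMask.includeField.paths', 'person.email'),
]

body = urllib.parse.urlencode(data)
print(body)
# personId=101777723&personId=117533904&requestMask.includeField.paths=person.email
```

This is also why the tuple list is nicer than a dict here: a dict can only hold one value per key, so the repeated personId entries would collapse.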
https://jvns.ca/blog/2022/03/10/how-to-use-undocumented-web-apis/?utm_campaign=Weekly%20Vue.js%20News&utm_medium=email&utm_source=Revue%20newsletter
Hello! So I got this Ethernet shield and came home all sparkly-eyed, bursting with anticipation of what I can do with this thing. And apparently the answer is “not much” or I’m doing something wrong.

Basically, whenever the Arduino (Duemilanove) encounters the instruction to read from a client it hangs. That is, no more instructions seem to be executed afterwards until I restart it or turn it off and on. Whatever else is programmed before reading from a client works fine (as far as I can tell), including writing to a client or server. This happens with all the example programs that come with Arduino 018, with or without going through a router, with or without a 9V DC power supply, and via telnet or a browser.

Example: Arduino loaded with the following program:

#include <Ethernet.h>

byte mac[] = { 0x02, 0x04, 0x06, 0x06, 0x05, 0x01 };
byte ip[] = { 192, 168, 1, 177 };
byte mask[] = { 255, 255, 1, 0 };
byte gw[] = { 192, 168, 1, 1 };

Server server(23);

void setup()
{
  Ethernet.begin(mac, ip, mask, gw);
  server.begin();
  Serial.begin(9600);
}

void loop()
{
  Client client = server.available();
  if (client) {
    Serial.println("reading c");
    int c = client.read();
    Serial.print("c=");
    Serial.println(c);
    delay(1);
    client.stop();
  }
}

I’m telnetting into 192.168.1.177 on port 23 and sending a couple of characters. The serial monitor displays “reading c” and nothing more. Monitoring the network with wireshark indicates that whenever I send some data through telnet to the Arduino it is being sent, and the Arduino sends back ACK via TCP. Indeed the TX and RX diodes blink on the ethernet shield when they should. But nothing else seems to be happening on the processor.

So… can someone point out to me where I’m being an idiot? Or is the unit broken or something? (Also, this is my first ever crying whining help-me post, so sorry if I forgot to mention something.)
https://forum.arduino.cc/t/ethernet-shield-hangs-on-client-read/27741
Hello, I am very new to writing C++ and have started trying to write a very basic calculator, capable of timesing, dividing, plussing or minusing two whole numbers. I could be way off with this effort, but this is the code so far. And the only errors I get are in the 'if' sections, and the error is: 'ISO C++ forbids comparison between pointer and integer'

Hope someone can tell me what i'm doing wrong, and how to fix the error.

Code:
#include <iostream>

using namespace std;

int main()
{
    int number1;
    char var;
    int number2;

    cout<<"Welcome to 'Phil's Calculator'\n";
    cin.get();
    cout<<"\nPlease choose a number \n";
    cin>> number1;
    cin.get();
    cout<<"\nNow choose your variable. *,/,+ or - \n";
    cin>> var;
    cin.get();
    cout<<"\nAnd finally, choose your second number \n";
    cin>> number2;
    cin.get();

    if ( var == "*")
    {
        cout<< number1*number2;
    }
    else if ( var == "/")
    {
        cout<< number1/number2;
    }
    else if ( var == "+")
    {
        cout<< number1+number2;
    }
    else if ( var == "-")
    {
        cout<< number1-number2;
    }
    else
        cout<<"sorry, this program cannot do that";
}

Thanks Phil
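For reference, that compiler message comes from comparing a char (an integer type) against a string literal like "*", which is a pointer to const char. Comparing against character literals in single quotes, like '*', is what the code wants. A minimal sketch of the same comparisons, pulled into a hypothetical helper function for illustration:

```cpp
#include <cassert>

// same operator dispatch as in the calculator above, but comparing
// the char against char literals ('*') instead of string literals ("*")
int apply_op(int a, char op, int b)
{
    if (op == '*')
        return a * b;
    else if (op == '/')
        return a / b;   // note: still assumes b is not zero
    else if (op == '+')
        return a + b;
    else
        return a - b;
}
```

In the original program, each comparison like var == "*" would become var == '*'.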
http://cboard.cprogramming.com/cplusplus-programming/77992-help-simple-calculator-beginner.html
> >>the reason why i ask is because it's not standard practice i see
> >>people doing this with Python,
>
> Why?

i've been programming in Python continously/full-time for the past 11 yrs at 4+ companies, and i'll stand by my statement... i just don't see it in the code, nor do i do this myself. i cannot tell you .

> Does this mean checking an object to None is also outdated?

again, i'm not sure what you mean here as my reply never inferred this. although i am curious... how *do* you check an object to None? (this is a completely tangential thread, but i'm curious if the beginners -- not the experts -- are familiar with the difference between "if obj == None" vs. "if obj is None".)

> Setting it to None immediately puts the object in a state where the user
> knows they need to re-initialize it before using it again.

there is no more "object" since you replaced it with None. i'll assume you mean the variable. the only difference between this and using del is that you leave the name in the namespace (where it holds a reference to None).

> del will only reclaim that var at a later time, when the gc kicks in.

again, you said "var" but i'm assuming you mean object. the var is removed from the namespace and the object's reference count decremented. the gc won't claim it if there are additional aliases or references to the object.

> Of course, if that object won't be used again in the same scope, does it
> really matter ?

then just let it go out of scope. there's no need to set it to anything else if it won't be used again... you're just wasting CPU cycles and memory access then.

cheers,
-wesley
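The "if obj == None" vs. "if obj is None" distinction raised above is easy to demonstrate: == can be intercepted by a class's __eq__ method, while is always compares object identity. A small illustration (the Sneaky class is made up for the example):

```python
class Sneaky:
    # deliberately claims equality with everything, including None
    def __eq__(self, other):
        return True

obj = Sneaky()
print(obj == None)  # True -- __eq__ lies
print(obj is None)  # False -- identity cannot be faked
```

This is why "is None" is the idiomatic test: there is exactly one None object, so an identity check is both faster and immune to operator overloading.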
https://mail.python.org/pipermail/tutor/2008-June/062693.html
Expressions

Expression -- from my dictionary:

expression: Mathematics a collection of symbols that jointly express a quantity : the expression for the circumference of a circle is 2πr.

In gross general terms: Expressions produce at least one value.

In Python, expressions are covered extensively in the Python Language Reference. In general, expressions in Python are composed of a syntactically legal combination of Atoms, Primaries and Operators.

Python expressions from Wikipedia

Examples of expressions: literals and syntactically correct combinations with Operators and built-in functions, or calls of user-written functions:

>>> 23
23
>>> 23l
23L
>>> range(4)
[0, 1, 2, 3]
>>> 2L*bin(2)
'0b100b10'
>>> def func(a): # Statement, just part of the example...
...     return a*a # Statement...
...
>>> func(3)*4
36
>>> func(5) is func(a=5)
True

Statement from Wikipedia: In computer programming a statement can be thought of as the smallest standalone element of an imperative programming language. A program is formed by a sequence of one or more statements. A statement will have internal components (e.g., expressions).

Python statements from Wikipedia

In gross general terms: Statements Do Something and are often composed of expressions (or other statements).

The Python Language Reference covers Simple Statements and Compound Statements extensively.

The distinction between "Statements do something" and "expressions produce a value" can become blurry however:

- if is usually a statement, such as if x<0: x=0, but you can also have a conditional expression like x=0 if x<0 else 1, which is an expression. In other languages, like C, this form is an operator: x=x<0?0:1;
- def func(a): return a*a is an expression when used but made up of statements when defined.
- A procedure in Python is a function that returns None: def proc(): pass. Syntactically, you can use proc() as an expression, but that is probably a bug...
- func(x=2); Is that an Expression or Statement?
(Answer: Expression used as a Statement with a side-effect.) The form func(x=2) is illegal in Python (or at least it has a different meaning: func(a=3) sets the named argument a to 3).
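One practical way to see the split, using only the standard library: eval() accepts expressions, because they produce a value, while a statement such as an assignment makes it raise a SyntaxError.

```python
# an expression produces a value, so eval() can evaluate it
value = eval("2 * 3 + 1")
print(value)  # 7

# an assignment is a statement, not an expression, so eval() rejects it
try:
    eval("x = 1")
    statement_rejected = False
except SyntaxError:
    statement_rejected = True
print(statement_rejected)  # True
```

(Statements can be run from a string too, but that takes exec(), which tellingly returns None rather than a value.)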
https://pythonpedia.com/en/knowledge-base/4728073/what-is-the-difference-between-an-expression-and-a-statement-in-python-
gethostbyname, gethostbyaddr, gethostent, sethostent, endhostent, herror - get network host entry

#include <netdb.h>
extern int h_errno;

struct hostent *gethostbyname(name)
char *name;

struct hostent *gethostbyname2(name, af)
char *name;
int af;

struct hostent *gethostbyaddr(addr, len, type)
char *addr;
int len, type;

struct hostent *gethostent()

sethostent(stayopen)
int stayopen;

endhostent()

herror(string)
char *string;

Gethostbyname, gethostbyname2, and gethostbyaddr each return a pointer to a structure of type hostent, whose members include:

h_addrtype
    The type of address being returned; usually AF_INET.

h_length
    The length, in bytes, of the address.

h_addr_list
    A zero terminated array of network addresses for the host. Host addresses are returned in network byte order.

h_addr
    The first address in h_addr_list; this is for backward compatibility.

When using the nameserver, gethostbyname will search for the named host in the current domain and its parents unless the name ends in a dot. If the name contains no dot, and if the environment variable ``HOSTALIASES'' contains the name of an alias file, the alias file will first be searched for an alias matching the input name. See hostname(7) for the domain search procedure and the alias file format. Endhostent closes the TCP connection.

h_errno holds the error return status from gethostbyname. It can have the following values:

NETDB_INTERNAL
    This indicates an internal error in the library, unrelated to the network or name service. errno will be valid in this case; see perror.

FILES
/etc/hosts

SEE ALSO
resolver(3), hosts(5), hostname(7), named(8).

BUGS
All information is contained in a static area so it must be copied if it is to be saved. Only the Internet address format is currently understood.
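As a quick way to poke at the behavior described above without writing C, Python's socket module wraps this same resolver interface; gethostbyname_ex() even exposes the hostent-style triple of official name, aliases, and address list. The lookup of "localhost" below normally resolves through /etc/hosts without any network access, though the exact output depends on the system's host database.

```python
import socket

# socket.gethostbyname() mirrors the C gethostbyname(): it performs the
# same hosts-file/nameserver lookup and returns one IPv4 address
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1

# socket.gethostbyname_ex() exposes more of struct hostent:
# (h_name, aliases, address list)
name, aliases, addr_list = socket.gethostbyname_ex("localhost")
print(name, aliases, addr_list)
```

Like the C call, this returns Internet (IPv4) addresses only; getaddrinfo() is the modern, protocol-independent replacement.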
http://www.linuxonlinehelp.com/man/gethostbyaddr.html
Facebook has become one of the most used methods to get users to sign in to your application, and in today’s post we’ll set up the Facebook native plugin through Ionic Native. That way we can use the Facebook app to sign in our users, instead of having them log in through a browser.

To make things easier, we’re going to break down the process into three different parts:

- Step #1: We’ll log into our Facebook developer account, create a new app and get the credentials.
- Step #2: We’ll go into our Firebase console and enable Facebook Sign-In with the credentials from the first step.
- Step #3: We’ll write the code to authorize the user through Facebook and then authenticate that user into our Firebase app.

By the way, at the end of this post, I’m going to link a Starter Template that already has Google & Facebook authentication ready to go, all you’d need to do is add your credentials and run npm install. With that in mind, let’s start!

Step #1: Facebook Developer Console

The first thing we need to do is to create a new application in Facebook’s developer dashboard; this app is the one that Facebook will use to ask our users for their permission when we try to log them into our Ionic application. For that you’ll need to go to and create a new app.

Once you click on the button, you’ll get a short form pop up; add the name you want for your app and the contact email that will be public to users. Once we finish creating our app, it will take you to the app’s dashboard, where you can see the app’s ID. Take note of that ID, we’ll need it when it’s time to install the Facebook plugin.

Install the Cordova Plugin

Now that you created your app on Facebook, we need to install the Cordova plugin in our Ionic app so we can have our app calling the Facebook Sign-In widget.
For that, open your terminal and type (all in the same line):

$ ionic plugin add cordova-plugin-facebook4 --variable APP_ID="123456789" --variable APP_NAME="myApplication"

You’ll need to replace the values of APP_ID and APP_NAME with your real credentials. You can find both of those inside your Facebook Developers Dashboard.

It’s a bit of a pain to work with Cordova plugins; luckily the great Ionic Team created Ionic Native, which is a wrapper for the Cordova plugins so we can use them in a more “Angular/Ionic” way. So the next thing we need to do is install the facebook package from Ionic Native, open your terminal again and type:

$ npm install --save @ionic-native/facebook

After we finish installing it, we need to tell our app to use it; that means we need to import it in our app.module.ts file:

import { SplashScreen } from '@ionic-native/splash-screen';
import { StatusBar } from '@ionic-native/status-bar';
import { Facebook } from '@ionic-native/facebook';

@NgModule({
  ...,
  providers: [
    SplashScreen,
    StatusBar,
    Facebook
  ]
})
export class AppModule {}

Add your Platforms to Facebook

Once everything is set up in our development environment, we need to let Facebook know which platforms we’ll be using (if it’s just web, iOS, or Android). In our case, we want to add two platforms, iOS and Android.

To add the platforms, go ahead and inside your Facebook dashboard click on Settings; then, right below the app’s information you’ll see a button that says Add Platform, click it. Once you click the button, you’ll see several options for the platforms you’re creating. For iOS you’ll need the Bundle ID, which you can find in your app’s config.xml:

<widget id="co.ionic.facebook435" version="0.0.1" xmlns="" xmlns:

Please, I beg you, change co.ionic.facebook435 (or what you got there) for something that’s more “on brand” with your app or your business.
NOTE: Not kidding, go to Google Play and do a search for “ionicframework”, you’ll see a couple of hundred apps that didn’t change the default package name 😛

Once you add the Bundle ID, just follow the process to create the app and then do the same for Android; the difference is that instead of Bundle ID, Android calls it “Google Play Package Name.”

Step #2: Enable Facebook Sign-In in Firebase

Now that everything is set up on Facebook’s side, we need to go into our Firebase console and enable Facebook authentication for our app.

To enable Facebook, you’ll need to go to your Firebase Console and locate the app you’re using. Once you’re inside the app’s dashboard, you’re going to go into Authentication > Sign-In Method > Facebook and click the Enable toggle. Once you do, it’s going to ask you for some information, specifically your Facebook app ID and secret key; you’ll find both of those inside your Facebook console, under your app’s settings.

Step #3: Authenticate Users

It is entirely up to you in which step of your app’s process you want to authenticate your users, so I’m going to give you the code so you can just copy it into whichever page you’re using.

The first thing we need to do is to get the authorization from Facebook; to do that, we need to import the Facebook plugin from Ionic Native and ask our user to log in:

import { Facebook } from '@ionic-native/facebook';

constructor(public facebook: Facebook) {}

facebookLogin(): Promise<any> {
  return this.facebook.login(['email']);
}

That function right there takes care of opening the Facebook native widget and asking our user to authorize our application to use their data for login purposes. Now we need to handle the response. The function response will provide us with a Facebook access token we can then pass to Firebase:

import firebase from 'firebase';
import { Facebook } from '@ionic-native/facebook';

constructor(public facebook: Facebook) {}

firebaseLogin(): Promise<any> {
  return this.facebookLogin().then(response => {
    const facebookCredential = firebase.auth.FacebookAuthProvider
      .credential(response.authResponse.accessToken);
    return firebase.auth().signInWithCredential(facebookCredential);
  });
}

Let’s break down the code above.
const facebookCredential = firebase.auth.FacebookAuthProvider
  .credential(response.authResponse.accessToken);

First, we’re using the line above to create a credential object we can pass to Firebase. Then we need to pass that credential object to Firebase:

firebase.auth().signInWithCredential(facebookCredential);

This bit of code, firebase.auth().signInWithCredential(facebookCredential), makes sure your user creates a new account in your Firebase app and then authenticates the user into the Ionic app, storing some authentication information (like tokens, email, provider info, etc.) in local storage.

Next Steps

By now you have a fully working sign-in process with Facebook using Firebase. I know it’s a lot of configuration, but it’s because we want our users to have an amazing experience using our apps. Now I want you to download a Starter Template I built; it has both Facebook and Google Sign-In working out of the box, all you need to do is add your credentials from Facebook and Google 🙂 (If the link doesn’t show the price as $0 just use the discount code IonicBlog to manually set it as $0)
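As an aside, the control flow in step #3 is plain promise chaining: authorize with Facebook, build a credential from the access token, then sign in. The same shape is easy to see with stand-in objects; note that every name below is a mock placeholder for illustration, not the real plugin or Firebase API.

```javascript
// mock stand-ins for the Facebook plugin and Firebase (not the real APIs)
const facebook = {
  login: async (scopes) => ({ authResponse: { accessToken: 'fake-token' } }),
};
const firebase = {
  credentialFrom: (token) => ({ token }),
  signInWithCredential: async (cred) => ({ user: 'signed-in-with:' + cred.token }),
};

// same three-step flow as the Ionic code: login -> credential -> sign in
async function firebaseLogin() {
  const response = await facebook.login(['email']);
  const credential = firebase.credentialFrom(response.authResponse.accessToken);
  return firebase.signInWithCredential(credential);
}

firebaseLogin().then(result => console.log(result.user)); // signed-in-with:fake-token
```

If any step rejects (for example the user cancels the Facebook dialog), the whole chain rejects, which is why it is worth adding a .catch() around the real call.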
https://blog.ionicframework.com/ionic-firebase-facebook-login/
A relaxed chat room about all things Scala (2 & 3 both). Beginner questions welcome.

applies "yo" (as manually specifying that type makes it work). But I do mean to recall there has been such a problem mentioned somewhere (some issue on the tracker?) where one had to store things in a val and pass that val to the method for it to type it correctly.

ValueOf for it.

Singleton

Please, anyone has a quick hint on how to replicate the three lines of scala2 in scala3? 🙏🏻

I will try to give a more specific example. I would like to have an Expr[CC] which is equivalent to instantiating a new instance of CC, yet generic.

case class CC(i: Int)
val expected = '{ new CC(1) }

Attempts to generalize that:

val ts = TypeRepr.of[CC] // or Type[CC] ...
val attempt1 = '{ new ts(1) } // something as simple as this fails
val attempt2 = Apply(
  Select.unique(New(Ident(TermRef(ts, "???"))), "<init>"),
  List(Literal(IntConstant(1)))
).asExprOf[CC]
// I have no idea how to instantiate this

List[Expr[_]])

Type[CC])

def findGapsForSorted(x: List[Int]): List[Tuple2[Int, Int]] =
  def solveFor(xs: List[Int]): List[Tuple2[Int, Int]] =
    val midpoint = xs.length / 2
    if xs.length > 2 then
      val first = xs.head
      val last = xs.last
      if (first + xs.length - 1 >= last) then
        List((first, last))
      else
        solveFor(xs.slice(0, midpoint)) ++ solveFor(xs.slice(midpoint, xs.length))
    else xs match
      case Nil => List.empty
      case List(one) => List((one, one))
      case start :: end :: Nil =>
        if (start + 1) >= end then List((start, end))
        else List((start, start), (end, end))
      case _ => ???
  solveFor(x)
    .foldLeft(List.empty) { (state, el) =>
      state match
        case h :+ (start, end) if end + 1 == el._1 => h :+ (start, el._2)
        case xs => xs :+ el
    }

The List operations used here (length, ++, :+) might make this routine slow for large inputs.

Hello there! Is there some way to get at an object's runtime TypeTag in Scala 3? (I'm not using macros. I just want to output it, in order to provide a helpful message in some unit tests.)
Is it safe to add the 2.13 version of the scala-reflect API to my libraryDependencies with cross-compilation? Or is there a better way of doing this in Scala 3?

NoSuchMethodError trying to call this Java method from Scala 3. @smarter is there anything about this Java type that makes you suspicious?

3.0.0 so maybe it was just added?
https://gitter.im/scala/scala?at=60abb9d8850bfa2d3bdef7ea
typecheck-decorator 1.3

Flexible explicit run-time type checking: class types, collection types, fixed-length collections and type predicates can be annotated and are checked at run time.

As of Python 3.5, PEP 484 specifies that annotations should be types and their normal use will be type checking. Many advanced types (such as Sequence[int]) can now be defined via the typing module, which is also available at PyPI for earlier versions of Python 3. The present module supports these typing annotations, but it predates Python 3.5 and therefore has other forms of type specification (via type predicates) as well. Many of these are equivalent, but some are more powerful.

Here is a more complex example:

import typecheck as tc

@tc.typecheck
def foo2(record:(int,int,bool), rgb:tc.re("^[rgb]$")) -> tc.any(int,float) :
    # don't expect the following to make much sense:
    ...

The first and third of these are expressible with typing annotations as well, the second is not. The closest approximation would look like this:

import typing as tg
import typecheck as tc

@tc.typecheck
def foo2(record:tg.Tuple[int,int,bool], rgb:str) -> tg.Union[int,float] :
    """rgb must be one of "r","g","b"."""
    ...

(but meant-to-be-too-long string is not detected)
foo2(None, "R")  # Wrong: None is no tuple (but meant-to-be-illegal character is not detected)

Other kinds of annotations:

- tc.optional(int) or tg.Optional[int] will allow int and None,
- tc.enum(1, 2.0, "three") allows to define ad-hoc enumeration types,
- tc.map_of(str, tc.list_of(Person)) or tg.Mapping[str, tg.MutableSequence[Person]] describe dictionaries or other mappings where all keys are strings and all values are homogeneous lists of Persons,
- and so on.

Tox-tested on CPython 3.3, 3.4, 3.5.
Find the documentation at

- Author: Dmitry Dvoinikov, Lutz Prechelt
- Keywords: type-checking
- License: BSD License
- Categories:
  - Development Status :: 5 - Production/Stable
  - Intended Audience :: Developers
  - License :: OSI Approved :: BSD License
  - Programming Language :: Python :: 3
  - Programming Language :: Python :: 3.3
  - Programming Language :: Python :: 3.4
  - Programming Language :: Python :: 3.5
  - Topic :: Software Development :: Documentation
  - Topic :: Software Development :: Quality Assurance
- Package Index Owner: prechelt
- DOAP record: typecheck-decorator-1.3.xml
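The typing-module spellings mentioned above can be inspected with the standard library alone, which is handy for seeing what a run-time checker would actually test against. This sketch uses only typing, not typecheck-decorator itself:

```python
import typing as tg

# the annotations used in the examples above
RecordType = tg.Tuple[int, int, bool]
ResultType = tg.Union[int, float]
MaybeInt = tg.Optional[int]

# typing.get_args() shows the component types behind each annotation
print(tg.get_args(ResultType))   # (<class 'int'>, <class 'float'>)
print(tg.get_args(RecordType))   # (<class 'int'>, <class 'int'>, <class 'bool'>)

# Optional[int] is just shorthand for Union[int, None]
print(MaybeInt == tg.Union[int, None])  # True
```

Predicate-style annotations such as tc.re("^[rgb]$") have no typing equivalent, which is the "some are more powerful" point made above.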
https://pypi.python.org/pypi/typecheck-decorator