CS::RenderManager::PostEffectManager Class Reference

Helper for post processing effects usage in render managers. More...

#include <csplugincommon/rendermanager/posteffects.h>

Detailed Description

Helper for post processing effects usage in render managers. Provides a simple way to render the screen to a texture and then use a number of full screen passes with settable shaders to get the desired effect.

To use post processing effects, rendering of the main context has to be redirected to a target managed by the post processing manager. After drawing the scene, another call applies the effects. Example:

```cpp
// Set up post processing manager for the given view
postEffects.SetupView (renderView);
// Set up start context
RenderTreeType::ContextNode* startContext = renderTree.CreateContext (renderView);
// Render to a target for later postprocessing
startContext->renderTargets[rtaColor0].texHandle = postEffects.GetScreenTarget ();
// ... draw stuff ...
// Apply post processing effects
postEffects.DrawPostEffects ();
```

Post processing setups form a graph of effects (with nodes called "layers" for historic reasons). Each node has one output and multiple inputs. An input can be the output of another node or the render of the current scene. Post processing setups are usually read from an external source using PostEffectLayersParser. Example:

```cpp
const char* effectsFile = cfg->GetStr ("MyRenderManager.Effects", 0);
if (effectsFile)
{
  PostEffectLayersParser postEffectsParser (objectReg);
  postEffectsParser.AddLayersFromFile (effectsFile, postEffects);
}
```

A setup is not required to use a post processing manager. If no setup is provided, the scene is just drawn to the screen.

Post processing managers can be "chained": the output of one manager serves as the input of the following, "chained" post processing manager instead of the normally rendered scene. Notably, using HDR exposure effects involves chaining a post processing manager for HDR to another post processing manager. Example:

```cpp
hdr.Setup (...);
// Chain HDR post processing effects to normal effects
postEffects.SetChainedOutput (hdr.GetHDRPostEffects());
// Just use postEffects as usual; chained effects are applied transparently
```

Definition at line 104 of file posteffects.h.

Member Function Documentation

- Add an effect pass with custom input mappings (several overloads; one is defined at line 324 of file posteffects.h).
- Add an effect pass that uses the last added layer as the input.
- Discard (and thus cause recreation of) all intermediate textures.
- Remove all layers.
- Draw post processing effects after the scene was rendered to the handle returned by GetScreenTarget().
- Get the render target that is ultimately rendered to. Definition at line 312 of file posteffects.h.
- Get the texture format used for the intermediate textures.
- Get the layer that was added last. Definition at line 289 of file posteffects.h.
- Get the output texture of a layer.
- Get the SV context used for rendering.
- Get the layer representing the "screen" a scene is rendered to. Definition at line 286 of file posteffects.h.
- Get the texture to render a scene to for post processing.
- Initialize.
- Remove a layer.
- Return whether the screen space is flipped in the Y direction. This usually happens when rendering to a texture due to post effects.
- Chain another post effects manager to this one. The output of this manager is automatically used as input to the next.
- Set the render target that is ultimately rendered to. Setting this on a post effects manager in a chain effectively sets the output target of the last chain member. Definition at line 304 of file posteffects.h.
- Set the texture format for the intermediate textures.
- Set up the post processing manager for a view. Returns whether the manager has changed. If true, some values, such as the screen texture, must be re-obtained from the manager. perspectiveFixup returns a matrix that should be applied after the normal perspective matrix (this is needed because the screen texture may be larger than the desired viewport, so the projection must be corrected for that).

The documentation for this class was generated from the following file:

- csplugincommon/rendermanager/posteffects.h

Generated for Crystal Space 2.0 by doxygen 1.6.1
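To see why a perspective fixup is needed when the render target texture is larger than the viewport, here is a toy sketch (in Python, not Crystal Space code; the function name and matrix simplification are illustrative assumptions): coordinates produced by the normal projection must be scaled by viewport/texture size so later passes sample only the used region of the texture.

```python
def perspective_fixup(viewport_w, viewport_h, texture_w, texture_h):
    """Illustrative per-axis scale factors applied after the normal
    projection when the screen texture is larger than the viewport."""
    return (viewport_w / texture_w, viewport_h / texture_h)

# An 800x600 viewport rendered into a 1024x1024 texture
# (e.g. rounded up to a power of two):
sx, sy = perspective_fixup(800, 600, 1024, 1024)
print(sx, sy)  # → 0.78125 0.5859375
```

In the real API the correction is a full matrix multiplied after the perspective matrix; the scale factors above are just the diagonal of that idea.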
http://www.crystalspace3d.org/docs/online/api-2.0/classCS_1_1RenderManager_1_1PostEffectManager.html
```python
class Solution(object):
    def canFindSum(self, nums, target, ind, n, d):
        if target in d:
            return d[target]
        if target == 0:
            d[target] = True
        else:
            d[target] = False
        if target > 0:
            for i in xrange(ind, n):
                if self.canFindSum(nums, target - nums[i], i + 1, n, d):
                    d[target] = True
                    break
        return d[target]

    def canPartition(self, nums):
        s = sum(nums)
        if s % 2 != 0:
            return False
        return self.canFindSum(nums, s / 2, 0, len(nums), {})
```

Here is the Java version:

```java
public class Solution {
    public boolean canPartition(int[] nums) {
        Arrays.sort(nums);
        int sum = 0;
        for (int num : nums) sum += num;
        if (sum % 2 == 1) return false;
        return backtracking(nums, 0, sum, 0);
    }

    private static boolean[] choices = { true, false };

    private boolean backtracking(int[] nums, int sumSoFar, int sum, int k) {
        for (boolean choice : choices) {
            if (choice) {
                sumSoFar += nums[k];
            } else {
                sumSoFar -= nums[k];
            }
            if (sumSoFar == sum / 2) return true;
            if (k + 1 == nums.length || sumSoFar > sum / 2) return false;
            if (backtracking(nums, sumSoFar, sum, k + 1)) return true;
        }
        return false;
    }
}
```

Hey man... I think our dear admin just added a test case yesterday... And your code got TLE for that one...

@whglamrock Thanks, I just added memoization to my code using a dict.

Thank you so much! But why does memoizing only the target work? Shouldn't the memoization also be associated with the index?

@chamberlian1990 @lekzeey Index i means we have the available nums in nums[i:n]; if we can't find a possible answer in previous steps with more candidates, surely we can't find one with fewer candidates. Check my post here if my explanation still confuses you.

@realisking oh that makes sense lol
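For readers on Python 3 (no xrange, and / is true division), the memoized approach above can be sketched as a standalone function, keeping the target-only memo that the thread discusses:

```python
def can_partition(nums):
    """Return True if nums can be split into two subsets with equal sums."""
    total = sum(nums)
    if total % 2:
        return False
    memo = {}  # target -> whether it is reachable (target-only key, as discussed)

    def can_find_sum(target, ind):
        if target in memo:
            return memo[target]
        if target == 0:
            memo[target] = True
            return True
        memo[target] = False
        if target > 0:
            for i in range(ind, len(nums)):
                if can_find_sum(target - nums[i], i + 1):
                    memo[target] = True
                    break
        return memo[target]

    return can_find_sum(total // 2, 0)

print(can_partition([1, 5, 11, 5]))  # → True  ([1, 5, 5] and [11])
print(can_partition([1, 2, 3, 5]))   # → False (total is odd)
```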
https://discuss.leetcode.com/topic/62286/python-backtracking-with-memoization-solution
In article <1i4bm7a.1s51u591kreqe4N%aleax at mac.com>,
Alex Martelli <aleax at mac.com> wrote:

>Chris Mellon <arkanes at gmail.com> wrote:
>>
>> and I'll punch a kitten before I accept having to read
>> Python code guessing if something is a global, a local, or part of
>> self like I do in C++.
>
>Exactly: the technical objections that are being raised are bogus, and
>the REAL objections from the Python community boil down to: we like it
>better the way it is now. Bringing technical objections that are easily
>debunked doesn't _strengthen_ our real case: in fact, it _weakens_ it.
>So, I'd rather see discussants focus on how things SHOULD be, rather
>than argue they must stay that way because of technical difficulties
>that do not really apply.
>
>The real advantage of making 'self' explicit is that it IS explicit, and
>we like it that way, just as much as its critics detest it. Just like,
>say, significant indentation, it's a deep part of Python's culture,
>tradition, preferences, and mindset, and neither is going to go away (I
>suspect, in fact, that, even if Guido somehow suddenly changed his mind,
>these are issues on which even he couldn't impose a change at this point
>without causing a fork in the community). Making up weak technical
>objections (ones that ignore the possibilities of __get__ or focus on
>something so "absolutely central" to everyday programming practice as
>inspect.getargspec [!!!], for example;-) is just not the right way to
>communicate this state of affairs.

While you have a point, I do think that there is no solution that allows
the following code to work with no marker distinguishing the local bar
from the instance's bar; the only question is how you want to mark up
your code:

    class C:
        def foo(self, bar):
            print bar, self.bar

    x = C()
    x.bar = 'abc'
    x.foo()

From that standpoint, there is a technical objection involved.

Of course, the solution can be any of a number of different marker systems (or even choosing to have no marker and not permitting a method to access both object attributes and locals of the same name), and Python has chosen to force the use of a marker for all object attributes, while also choosing to have disjoint namespaces WRT globals and locals for any one name.
--
Aahz (aahz at pythoncraft.com)           <*>
"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer
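To make the namespace point concrete, here is a small runnable Python 3 variant of the quoted example (the argument 'xyz' is added so the call succeeds), showing how the explicit self marker keeps a local and an instance attribute of the same name both accessible:

```python
class C:
    def foo(self, bar):
        # 'bar' resolves to the local (the parameter);
        # 'self.bar' resolves to the instance attribute.
        # The explicit marker is what disambiguates them.
        return bar, self.bar

x = C()
x.bar = 'abc'
print(x.foo('xyz'))  # → ('xyz', 'abc')
```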
https://mail.python.org/pipermail/python-list/2007-September/420984.html
patrickrock has asked for the wisdom of the Perl Monks concerning the following question:

Well, the first thing I would do is use the US Postal Service Web API. You need to request permission, but they almost always grant it (there's even a checkbox on the application that says you'll be using it to cleanse address databases). This will allow you to send the addresses as they exist in your DB, and get the USPS normalized "official" address back. For example:

    4321 Somewhere St. #305
    St. Louis, MO 98765

becomes:

    4321 SOMEWHERE ST APT 305
    SAINT LOUIS, MO 98765-0123

Variations on that address should result in the same canonical address. You can then compare to see if there are duplicate canonical addresses with a simple string equality. You may need to do a little data cleaning, like removing multiple spaces ($address =~ s{\s+}{\x20}g;, for example) before running the compare, but this should catch the vast majority of your duplicates.

Anima Legato .oO all things connect through the motion of the mind

Cheers - L~R

On further review of this module, it appears the Parse::RecDescent grammar for US addresses could use some TLC. The author, Kim Ryan, appears to be from down under, and complex US addresses don't seem to get parsed correctly. I bet someone here can improve it though ;-)

When I was looking at the postal regulations for my periodicals license for The Perl Review, I discovered that there is a whole industry that does just this: give them a list and they give it back to you with duplicates removed and addresses cleaned up. This service comes as a true service (someone else does the work) or off-the-shelf software. Depending on how much work you have to do and how much it is worth to the company, you might want to skip doing this yourself. However, if you program it yourself, part of the solution (along with what people have already mentioned) is getting the canonical address.
People will often give you their version of their address (for instance, I can't remember if my street is a Road or an Avenue, so I use them interchangeably). The US Postal Service has all sorts of data and tools to help you figure out what it should be based on the zip code. Other post offices have similar things (the Royal Mail address lookup kicks ass). From there you get closer to finding the duplicates than just looking at the address the customer gave you or someone keyed in. Good luck and let us know how it turns out. First of all you need to define what "fuzzily matching" means. From your sample data I'd say you want "same city/state/zip with the same street name and number", or something close to that. Once you've decided on that, you need to tokenize the address and map what you've got in the DB into a canonical form. Split on spaces and write a little parser with logic something like: Once you've parsed everything into this canonical form push the real data onto a hash-of-arrays keyed by the canonical form (for large enough data sets you may want to use DB_File or the like). Then the last step is to print the canonical form and the raw data for any keys for which there's more than one entry for the given canonical form. I wrote one of these in C years ago, before I knew Perl. We got the job because the mailing house didn't fancy paying hundreds of thousands of dollars for a commercial US address deduplicator that didn't work particularly well on Australian addresses. The job took months, was a fixed price contract, and I think we lost money on that one. I remember that squeezing out high performance when de-duplicating millions of addresses was a challenge. The obvious general approach is to parse the addresses into a canonical internal form -- then use that to compare addresses. 
This sort of software is necessarily riddled with heuristics and ambiguities and can never be perfect -- for example, does "Mr John and Bill Camel" mean "Mr John" and "Mr Bill Camel", or "Mr John Camel" and "Mr Bill Camel"? For performance reasons you can't afford to compare every address with every other one, so you need to break them into "buckets" and compare all addresses in each bucket. How do you choose the buckets? Not sure, but I remember bucketing on post code worked out quite well for us. This thread may be of interest: "Fuzzy matching of postal addresses", comp.lang.python of 17-jan-2005. Update: Kim Ryan has years of commercial experience in this field, so I suggest you check out his CPAN modules.

Occasionally I have to de-dup mailing lists we receive from the government. Since my lists often include duplicates of the same address and duplicate names (different addresses), I use a variety of methods. First I get the list with identical names and cities (uppercasing both to avoid case issues). I suppose that probably gets false positives, but we would prefer that in our case. Then I take all the addresses and pull the ones whose first 7 characters match. It is pretty hard to have the same first 7 characters and not be a duplicate. Again, that is with full uppercasing of all fields. This also produces false positives, but combining it with a name match makes it pretty efficient. Depending on the list and the source I vary the number I choose, normally choosing a couple of numbers and hand sampling until I reach what I feel is a happy medium. These methods have helped me narrow 15k addresses down into a more accurate 10k. I think any time you try matching like this you are going to get false positives, but if you are looking for a way to flag some for human intervention then this can be a pretty good test. Best of luck.

Sorry, I meant to mention that the benefit of these is no regex, just plain old DB functions that are easy and quickly handled.
For better results you could create a new column and do some normalization like suggested by some above. One case you might want to watch out for that won't be caught by sorting is "Fifth St." vs. "5th St.". You could clean those up with a simple substitution hash ({ '1st' => 'First', '2nd' => 'Second', ... }), or have a look at the modules Lingua::EN::Numbers and Lingua::EN::Numbers::Ordinate and write something to handle the general case. -b

What about taking the first number, the second word (and the first two letters of the third word) and looking to see if that string occurs multiple times? It won't find all duplicates, but it should find some, I guess....

```perl
package AddressParser;

use DBI qw(:sql_types);
use DBUtil;
use Utils;
use Log::Log4perl qw(get_logger);
use strict;
use warnings;

my $us_zip_re          = qr/^\s*(([\w.]+\W)+)\s*([a-zA-Z]{2})\s*((\d{5})[ -]?(\d{4})?)(\d{2})?\s*$/;
my $can_post_code_re   = qr/^\s*((\w+\W)+)\s*([a-zA-Z]{2})\s+([a-zA-Z]\d[a-zA-Z][ -]?\d[a-zA-Z]\d)\s*$/;
my $foreign_canada_re  = qr/^\s*((\w+\W)+)\s*([a-zA-Z]+)\s+CANADA\s+([a-zA-Z]\d[a-zA-Z][ -]?\d[a-zA-Z]\d)\s*$/i;
my $foreign_romania_re = qr/^\s*((\w+\W)+)\s*ROMANIA\s+(\d{4,}[ -]?(\d{4})?)\s*$/i;
my $foreign_france_re  = qr/^\s*((\w+\W)+)\s*FRANCE\s+(\d{4,}[ -]?(\d{4})?)\s*$/i;
my $street_re          = qr/^\s*([a-zA-Z#]+\s*\d+\s+)?([a-zA-Z]?\d+[a-zA-Z]?)\s+(\S.*)$/;
my $po_box_re          = qr/^\s*((\w+\W+)*\s*[bB][oO][xX])\s*(\d+)/;

#---------------------------------------------------------------------
# Constructor
#---------------------------------------------------------------------
sub new {
    my $self = { _state_table => undef };
    my ($class, %arg) = @_;
    bless $self, $class;
    # $self->_init_state_table;
    return $self;
}

sub _init_state_table {
    my ($self) = @_;
    my $dbh;
    my $sth;
    my $query;
    my @row;
    my $key;
    my $name;

    $query = <<QUERY_END
select abbrev, name from state
QUERY_END
;
    $dbh = DBUtil::fetch_connection('shared');
    $sth = $dbh->prepare($query);
    $sth->execute;
    while (@row = $sth->fetchrow_array) {
        ($key, $name) = @row;
        $self->{_state_table}{$key} = $name;
    }
    $sth->finish;
}

sub insert_zip {
    my ($self, $beta_address) = @_;
    my $zipcd;
    my $line;
    my @lines;

    ($zipcd, @lines) = @$beta_address;
    if (!defined($zipcd)) {
        $zipcd = '';
    }
    $zipcd =~ s/^\s+//;
    $zipcd =~ s/\s+$//;
    for (my $i = 0; $i < scalar(@lines); $i++) {
        if (defined($lines[$i])) {
            $lines[$i] =~ s/ZIPCD/$zipcd/;
        }
    }
    return \@lines;
}

sub find_address_lines {
    my ($self, $lines_ref) = @_;
    my $line;
    my $line_index;
    my $street_line_index = -1;
    my $zip_line_index = -1;

    $line_index = scalar(@$lines_ref);
    foreach $line (reverse(@$lines_ref)) {
        if (defined($line)) {
            # Find the street line, but only if we've already found the zip line
            if (($zip_line_index > -1) &&
                (($line =~ /$street_re/) || ($line =~ /$po_box_re/))) {
                $street_line_index = $line_index - 1;
            }
            if (($line =~ /$us_zip_re/) ||
                ($line =~ /$can_post_code_re/) ||
                ($line =~ /$foreign_canada_re/) ||
                ($line =~ /$foreign_romania_re/) ||
                ($line =~ /$foreign_france_re/)) {
                $zip_line_index = $line_index - 1;
            }
        }
        $line_index--;
    }
    if ($zip_line_index == -1) {
        get_logger()->error("Couldn't find zip code");
    }
    if ($street_line_index == -1) {
        get_logger()->error("Couldn't find street");
    }
    if (($zip_line_index == -1) || ($street_line_index == -1)) {
        get_logger()->error("\n\t" . join("\n\t",
            (map { defined($_) ? $_ : '' } @$lines_ref)));
        return (undef, undef);
    }
    else {
        return (@$lines_ref[$street_line_index], @$lines_ref[$zip_line_index]);
    }
}

sub parse {
    my ($self, $street_line, $city_line) = @_;
    my $address = Address->new();

    if ($street_line =~ /$street_re/) {
        $address->set('street_number', $2);
        $address->set('street_name', $3);
    }
    elsif ($street_line =~ /$po_box_re/) {
        $address->set('street_number', $3);
        $address->set('street_name', $1);
    }
    else {
        get_logger()->error("Couldn't parse street: $street_line");
        return undef;
    }

    if ($city_line =~ /$us_zip_re/) {
        $address->set('city', $1);
        $address->set('state', $3);
        $address->set('zip', $4);
        $address->set('country', 'US');
    }
    elsif ($city_line =~ /$can_post_code_re/) {
        $address->set('city', $1);
        $address->set('state', $3);
        $address->set('zip', $4);
        $address->set('country', 'CA');
    }
    elsif ($city_line =~ /$foreign_canada_re/) {
        $address->set('city', $1);
        $address->set('zip', $4);
        $address->set('country', 'CA');
    }
    elsif ($city_line =~ /$foreign_romania_re/) {
        $address->set('city', $1);
        $address->set('zip', $3);
        $address->set('country', 'RO');
    }
    elsif ($city_line =~ /$foreign_france_re/) {
        $address->set('city', $1);
        $address->set('zip', $3);
        $address->set('country', 'FR');
    }
    else {
        get_logger()->error("Couldn't parse city: $city_line");
        return undef;
    }
    return $address;
}

sub parse_address {
    my ($self, $address_lines, $address_type) = @_;
    my $street_line;
    my $city_line;

    if (defined($address_type) && ($address_type eq 'BETA')) {
        $address_lines = $self->insert_zip($address_lines);
    }
    ($street_line, $city_line) = $self->find_address_lines($address_lines);
    if (!defined($street_line) || !defined($city_line)) {
        return undef;
    }
    return $self->parse($street_line, $city_line);
}

1;
```

Merge/purge deduping services go for about $6.00/thousand names. Plus, a lot of the services can zip+4 your zip codes and standardize all the fields.
You can also do things like a household match versus a name match, or update zip codes if the names in your list are a couple of years old. I would pay the $600.00 and let the pros do it.
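The canonicalize-then-bucket approach described in this thread can be sketched in a few lines (Python here for brevity; the substitution table and field names are illustrative assumptions, not the USPS rules):

```python
import re

# Illustrative normalization table. A real canonicalizer would use USPS
# data: street suffixes, secondary unit designators, ordinals, etc.
SUBS = {"st.": "st", "street": "st", "apt": "#",
        "fifth": "5th", "saint": "st"}

def canonical(addr):
    """Lowercase, isolate '#', collapse whitespace, apply SUBS."""
    addr = addr.lower().replace("#", " # ")
    words = re.sub(r"\s+", " ", addr.strip()).split(" ")
    return " ".join(SUBS.get(w, w) for w in words)

def find_duplicates(rows):
    """Bucket rows by (zip, canonical street); report buckets with >1 entry.

    Bucketing on zip keeps the comparisons cheap, as suggested above."""
    buckets = {}
    for zipcd, street in rows:
        buckets.setdefault((zipcd, canonical(street)), []).append(street)
    return [v for v in buckets.values() if len(v) > 1]

rows = [("98765", "4321 Somewhere Street Apt 305"),
        ("98765", "4321 somewhere st. #305"),
        ("98765", "100 Other Rd")]
print(find_duplicates(rows))
# → [['4321 Somewhere Street Apt 305', '4321 somewhere st. #305']]
```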
https://www.perlmonks.org/index.pl/?node_id=426737
Logger

The core of the framework ships with an inbuilt logger built on top of pino (one of the fastest logging libraries for Node.js). You can import and use the Logger as follows:

```ts
import Logger from '@ioc:Adonis/Core/Logger'

Logger.info('An info message')
Logger.warn('A warning')
```

During an HTTP request, you must use the ctx.logger object. It is an isolated child instance of the logger that adds the unique request id to all the log messages. Make sure to enable request id generation by setting generateRequestId = true inside the config/app.ts file.

```ts
Route.get('/', async ({ logger }) => {
  logger.info('An info message')
  return 'handled'
})
```

Config

The configuration for the logger is stored inside the config/app.ts file under the logger export. The options are the same as documented by the pino logger. Following are the bare minimum options required to configure the logger.

```ts
{
  name: Env.get('APP_NAME'),
  enabled: true,
  level: Env.get('LOG_LEVEL', 'info'),
  redact: {
    paths: ['password', '*.password'],
  },
  prettyPrint: Env.get('NODE_ENV') === 'development',
}
```

name
The name of the logger. The APP_NAME environment variable uses the name property inside the package.json file.

enabled
Toggle switch to enable/disable the logger.

level
The current logging level. It is derived from the LOG_LEVEL environment variable.

redact
Remove/redact sensitive paths from the logging output. Read the redact section.

prettyPrint
Whether or not to pretty-print the logs. We recommend turning off pretty printing in production, as it has some performance overhead; use a separate process for that instead.

In a nutshell, this is how logging works:

- You can log at different levels using the Logger API, for example: Logger.info('some message').
- The logs are always sent to stdout.
- You can redirect the stdout stream to a file, or use a separate process to read and format the logs.

Logging in Development

Since logs are always written to stdout, there is nothing special required in the development environment.
Also, AdonisJS will automatically pretty print the logs when NODE_ENV=development.

Logging in Production

In production, you would want to stream your logs to an external service like Datadog or Papertrail. Following are some of the ways to send logs to an external service. There is an additional operational overhead of piping the stdout stream to a service, but the trade-off is worth the performance boost you receive. Make sure to check the pino benchmarks as well.

Using Pino Transports

The simplest way to process the stdout stream is to use pino transports. All you need to do is pipe the output to the transport of your choice. For demonstration, let's install the pino-datadog npm package to send logs to Datadog.

```sh
npm i pino-datadog
```

Next, start the production server and pipe the stdout output to pino-datadog.

```sh
node build/server.js | ./node_modules/.bin/pino-datadog --key DD_API_KEY
```

Redact values

You can redact/remove sensitive values from the logging output by defining a path to the keys to remove. For example: removing the user password from the logging output.

```ts
{
  redact: {
    paths: ['password'],
  }
}
```

The above config will remove the password from the merging object.

```ts
Logger.info({ username: 'virk', password: 'secret' }, 'user signup')
// output: {"username":"virk","password":"[Redacted]","msg":"user signup"}
```

You can define a custom placeholder for the redacted values, or remove them altogether from the output.

```ts
{
  redact: {
    paths: ['password'],
    censor: '[PRIVATE]'
  }
}

// or remove the property
{
  redact: {
    paths: ['password'],
    remove: true
  }
}
```

Check out the fast-redact package to view the expressions available for the paths array.

Logger API

Following is the list of available methods/properties on the Logger module. All of the logging methods accept the following arguments:

- The first argument can be a string message, or an object of properties to merge with the final log message.
- If the first argument was a merging object, then the second argument is the string message.
- The rest of the parameters are the interpolation values for the message placeholders.

```ts
import Logger from '@ioc:Adonis/Core/Logger'

Logger.info('hello %s', 'world')
// output: {"msg":"hello world"}

Logger.info('user details: %o', { username: 'virk' })
// output: {"msg":"user details: {\"username\":\"virk\"}"}
```

Define a merging object as follows:

```ts
import Logger from '@ioc:Adonis/Core/Logger'

Logger.info({ username: 'virk' }, 'user signup')
// output: {"username":"virk","msg":"user signup"}
```

You can pass error objects under the err key.

```ts
import Logger from '@ioc:Adonis/Core/Logger'

Logger.error({ err: new Error('signup failed') }, 'user signup')
// output: {"err":{"type":"Error","message":"signup failed","stack":"..."},"msg":"user signup"}
```

Following is the list of logging methods:

- Logger.trace
- Logger.debug
- Logger.info
- Logger.warn
- Logger.error
- Logger.fatal

isLevelEnabled
Find if a given logging level is enabled inside the config file.

```ts
Logger.isLevelEnabled('info')
Logger.isLevelEnabled('trace')
```

bindings
Returns an object containing all the current bindings, cloned from the ones passed in via Logger.child().

```ts
Logger.bindings()
```

child
Create a child logger instance. You can create the child logger with a different logging level as well.

```ts
const childLogger = Logger.child({ level: 'trace' })
childLogger.info('an info message')
```

You can also define custom bindings for a child logger. The bindings are added to the logging output.

```ts
const childLogger = Logger.child({ userId: user.id })
childLogger.info('an info message')
```

level
The current logging level value, as a string.

```ts
console.log(Logger.level)
// info
```

levelNumber
The current logging level value, as a number.

```ts
console.log(Logger.levelNumber)
// 30
```

levels
An object of logging labels and values.

```ts
console.log(Logger.levels)
/**
{
  labels: {
    '10': 'trace',
    '20': 'debug',
    '30': 'info',
    '40': 'warn',
    '50': 'error',
    '60': 'fatal'
  },
  values: {
    trace: 10,
    debug: 20,
    info: 30,
    warn: 40,
    error: 50,
    fatal: 60
  }
}
*/
```

pinoVersion
The version of Pino.

```ts
console.log(Logger.pinoVersion)
// '6.11.2'
```
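Because pino maps each level label to a number (see the levels object above), a check like isLevelEnabled reduces to a numeric comparison: a message is emitted when its level value is at least the logger's configured level. A minimal sketch of that rule (in Python; the function name is illustrative, not the pino implementation):

```python
# pino's default level labels -> numeric values, from the levels table above
LEVELS = {"trace": 10, "debug": 20, "info": 30,
          "warn": 40, "error": 50, "fatal": 60}

def is_level_enabled(current_level, queried_level):
    """A message at queried_level is emitted iff its numeric value is
    at least the logger's configured level."""
    return LEVELS[queried_level] >= LEVELS[current_level]

print(is_level_enabled("info", "warn"))   # → True  (40 >= 30)
print(is_level_enabled("info", "trace"))  # → False (10 <  30)
```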
https://docs-adonisjs-com.pages.dev/guides/logger
The QTranslator class provides internationalization support for text output. More...

#include <QTranslator>

It is possible to look up a translation using translate() (as tr() and QApplication::translate() do). The translate() function takes up to three parameters: the context, the source text, and an optional disambiguation string.

A translator can be removed with the QApplication::removeTranslator() function and reinstalled with QApplication::installTranslator(). It will then be the first translation to be searched for matching strings.

load() loads the translation file filename; it returns true if the translation was successfully loaded.

This function overloads translate(). Returns the translation for the key (context, sourceText, disambiguation). If none is found, also tries (context, sourceText, ""). If that still fails, returns an empty string. If n is not -1, it is used to choose an appropriate form for the translation (e.g. "%n file found" vs. "%n files found").
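The role of the n parameter can be sketched abstractly (in Python; this is a simplified model of how a translation with two English numerus forms behaves, not Qt code): the count selects one of the stored forms, and %n is replaced by the count.

```python
def translate_plural(forms, n):
    """Pick a numerus form by count and substitute %n.

    A simplified model: English has two forms (singular, plural);
    other languages may store more."""
    form = forms[0] if n == 1 else forms[1]
    return form.replace("%n", str(n))

forms = ("%n file found", "%n files found")
print(translate_plural(forms, 1))  # → '1 file found'
print(translate_plural(forms, 3))  # → '3 files found'
```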
http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qtranslator.html
LoadPlugin (Avisynth Compatibility)

avs.LoadPlugin(string path)

Load an Avisynth 2.5 (32 bit only), 2.6 (32 and 64 bit) or Avisynth+ (32 and 64 bit) plugin. If successful, the loaded plugin's functions will end up in the avs namespace.

Note that in the case of Avisynth+ there's no way to use the formats combined with alpha or higher bitdepth packed RGB. Coincidentally, there are no plugins that use this in a meaningful way yet.

The compatibility module can work with a large number of Avisynth's plugins. However, the wrapping is not complete, so the following things will cause problems:

- The plugin tries to call env->invoke(). These calls are ignored when it is safe to do so, but otherwise they will most likely trigger a fatal error.
- Plugins trying to read global variables. There are no global variables.

If there are function name collisions, functions will have a number appended to them to make them distinct. For example, if three functions are named func then they will be named func, func_2 and func_3. This means that Avisynth functions that have multiple overloads (rare) will give each overload a different name.

Note that if you are really insane you can load Avisynth's VirtualDub plugin loader and use VirtualDub plugins as well.

Beware of Python's escape character; this will fail:

    LoadPlugin(path='c:\plugins\filter.dll')

Correct ways:

    LoadPlugin(path='c:/plugins/filter.dll')
    LoadPlugin(path=r'c:\plugins\filter.dll')
    LoadPlugin(path='c:\\plugins\\filter.dll')
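Why the first form fails: in an ordinary Python string literal, \f is the form-feed escape, so the path is silently corrupted before LoadPlugin ever sees it; raw strings, doubled backslashes, and forward slashes all avoid this. A quick check in plain Python:

```python
bad = 'c:\plugins\filter.dll'        # '\f' becomes a form-feed character
good_raw = r'c:\plugins\filter.dll'  # raw string: backslashes kept verbatim
good_esc = 'c:\\plugins\\filter.dll' # explicitly escaped backslashes

print('\x0c' in bad)         # → True (the path no longer contains '\filter')
print(good_raw == good_esc)  # → True (both spell the intended path)
```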
https://www.vapoursynth.com/doc/functions/loadpluginavs.html
Using these libraries makes a data scientist’s life very easy Python has tools for all stages of the life cycle of a data science project. Any data science project has the following 3 stages inherently included in it. - Data Collection - Data Modelling - Data Visualization Python provides very neat tools for all 3 of these stages. Data Collection 1) Beautiful Soup When data collection involves scraping data off of the web, python provides a library called beautifulsoup. from bs4 import BeautifulSoup soup = BeautifulSoup(html_doc, 'html.parser') This library parses a web page and stores its contents neatly. For example, it will store the title separately. It will also store all the <a> tags separately which will provide you with very neat list of URLs contained within the page. As an example let us look at a simple web page for the story of Alice’s Adventure in Wonderland. Clearly we can see a few html elements there which we can scrape. - Heading — The Dormouse’s story - Page text - Hyperlinks — Elsie, Lacie and Tillie. Soup makes it easy to extract this information soup.title # <title>The Dormouse's story</title> soup.title.string # u'The Dormouse's story' soup.p # <p class="title"><b>The Dormouse's story</b></p> for link in soup.find_all('a'): print(link.get('href')) # # # print(soup.get_text()) # The Dormouse's story # # Once upon a time there were three little sisters; and their names were # Elsie, # Lacie and # Tillie; # and they lived at the bottom of a well. # # ... For pulling data out of HTML and XML files this is an excellent tool. It provides idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or sometimes even days of work. 2) Wget Downloading data , especially from the web, is one of the vital tasks of a data scientist. Wget is a free utility for non-interactive download of files from the Web. Since it is non-interactive, it can work in the background even if the user isn’t logged in. 
It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. So the next time you want to download a website or all the images from a page, wget is there to assist you. >>> import wget >>>>> filename = wget.download(url) 100% [................................................] 3841532 / 3841532 >>> filename 'razorback.mp3' 3) Data APIs Apart from the tools that you need to scrape or download data, you also need actual data. This is where data APIs help. A number of APIs exist in python that let you download data for free e.g. Alpha Vantage provides real-time and historical data for global equities, forex and cryptocurrencies. They have data for upto 20 years. Using alpha vantage APIs we can, for example extract data for bitcoin daily values and plot it from alpha_vantage.cryptocurrencies import CryptoCurrencies import matplotlib.pyplot as plt cc = CryptoCurrencies(key='YOUR_API_KEY',output_format='pandas') data, meta_data = cc.get_digital_currency_daily(symbol='BTC', market='USD') data['1a. open (USD)'].plot() plt.tight_layout() plt.title('Alpha Vantage Example - daily value for bitcoin (BTC) in US Dollars') plt.show() Other similar API examples are Data Modelling As mentioned in this article, data cleaning or balancing is an important step before data modelling. 1)Imbalanced-learn Imabalanced-learn is one such tool to balance datasets. A dataset is imbalanced when one class or category of data has disproportionately larger samples than other categories. This can cause huge problems for classification algorithms which may end up being biased towards the class that has more data. e.g. A command called Tomek-Links from this library helps balance the dataset. from imblearn.under_sampling import TomekLinks tl = TomekLinks(return_indices=True, ratio='majority') X_tl, y_tl, id_tl = tl.fit_sample(X, y) 2) Scipy Ecosystem — NumPy The actual data processing or modelling happens through python’s scipy stack. 
Python's SciPy Stack is a collection of software specifically designed for scientific computing in Python. The SciPy ecosystem contains a lot of useful libraries, but NumPy is arguably the most powerful tool among them. It is the most fundamental package, around which the scientific computation stack is built; NumPy stands for Numerical Python. It provides an abundance of useful features for operations on matrices. Anyone who has used MATLAB will immediately realize that NumPy is not only as powerful as MATLAB but also very similar in its operation.

3) Pandas

Pandas is a library that provides data structures to handle and manipulate data. A 2-dimensional structure called a DataFrame is the most popular one. Pandas is a perfect tool for data wrangling. It is designed for quick and easy data manipulation, aggregation, and visualization.

Data Visualization

1) Matplotlib

Another package from the SciPy ecosystem, tailored for the easy generation of simple and powerful visualizations, is Matplotlib. It is a 2D plotting library which produces publication-quality figures in a variety of hardcopy formats. An example of Matplotlib output:

    import numpy as np
    import matplotlib.pyplot as plt

    p1 = plt.bar(ind, menMeans, width, yerr=menStd)
    p2 = plt.bar(ind, womenMeans, width, bottom=menMeans, yerr=womenStd)
    plt.ylabel('Scores')
    plt.title('Scores by group and gender')
    plt.xticks(ind, ('G1', 'G2', 'G3', 'G4', 'G5'))
    plt.yticks(np.arange(0, 81, 10))
    plt.legend((p1[0], p2[0]), ('Men', 'Women'))
    plt.show()

2) Seaborn

Seaborn is a Python data visualization library based on Matplotlib. It primarily provides a high-level interface for drawing attractive and informative statistical graphics, and it is often used for visualizations such as heat maps.

3) MoviePy

MoviePy is a Python library for video editing: cutting, concatenations, title insertions, video compositing, video processing, and creation of custom effects.
It can read and write all common audio and video formats, including GIF.

Bonus NLP Tool - FuzzyWuzzy

This funny-sounding tool is a very useful library when it comes to string matching. One can quickly implement operations like string comparison ratios, token ratios, etc.

    >>> fuzz.ratio("this is a test", "this is a test!")
    97
    >>> fuzz.partial_ratio("this is a test", "this is a test!")
    100
    >>> fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
    91
    >>> fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
    100

Python has a huge wealth of information and tools to perform data science projects. It is never too late to explore!
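Incidentally, fuzzywuzzy's simple ratio can be approximated with nothing but the standard library: by default it is backed by difflib.SequenceMatcher, with the 0-1 ratio scaled to 0-100. A quick sketch (the function name here is invented for illustration):

```python
# Sketch: fuzz.ratio() is, by default, difflib.SequenceMatcher's ratio
# scaled to a 0..100 integer score.
from difflib import SequenceMatcher

def simple_ratio(a: str, b: str) -> int:
    """Return a 0-100 similarity score, like fuzz.ratio."""
    return round(100 * SequenceMatcher(None, a, b).ratio())

print(simple_ratio("this is a test", "this is a test!"))  # 97, matching fuzz.ratio above
```

This also explains why fuzzywuzzy suggests installing python-Levenshtein: it is a faster drop-in replacement for the pure-Python difflib backend.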
https://aigraduate.com/python-tools-for-a-beginner-data-scientist/
CC-MAIN-2020-45
refinedweb
1,054
57.98
I'm working on a door that shuts when the player hits a collider that's near it. I wrote a script that doesn't get any errors and works, but the door still won't rotate. Here is the script.

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class DoorClose : MonoBehaviour
    {
        public GameObject Trigger;
        public GameObject Door;
        public GameObject Player;
        public Transform Dooor;

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Player"))
            {
                Debug.Log("IT WORKS!!");
                Dooor.transform.Rotate(0, 60, 0);
            }
        }
    }

Any help would be appreciated.

Answer by SFoxx28 · Jan 17, 2018 at 04:50 AM

I'm not near a computer with Unity installed so I can't test this out locally, but try this instead underneath the Debug.Log:

    Door.transform.Rotate(0, 60, 0);

Make sure you pass a reference of the door object you want to rotate from the editor to the script. I think that you are rotating the transform alright, but you are not rotating the door game object. Try it out and let me know if that worked.

It still doesn't work. I should have mentioned that I tried that before. The script is on the trigger, if that helps.

Does the function get called?

Not sure what you mean. I'm still using the script you see above, except using the GameObject instead of the Transform. The Debug.Log works perfectly, but the door won't rotate.
https://answers.unity.com/questions/1455691/rotate-an-object-when-two-others-collide.html
Import Woosmap JSON

- Project and Private Key
- Import With Woosmap Console
- Import With Woosmap API
- HTTP library for Python
- Open and Post JSON File
- Useful Links

In this tutorial you will learn how to import a native Woosmap JSON file into your Project, using the Woosmap Console or the Woosmap Data API.

- Application to upload your file: Woosmap Console
- Sample JSON file: foodmarkets.json
- Python script: woosmapjson_import.py

Project and Private Key

If you haven't created a project yet, you need to create one. Projects are containers for your Assets and associated analytics usage. They let you manage your Asset data programmatically, use the REST and JavaScript APIs, and display the dealer/store locator widget.

There are two kinds of keys:

- Public keys
- Private keys

Public keys are used to implement Woosmap features on the client side. They allow you to retrieve your Asset data and benefit from the read-only capabilities of Woosmap APIs. Private keys allow you to manage integrations on the server side and perform creation of new and updates of existing Assets. The public key is automatically generated when you add a new project to your organization, while you need to create the private key manually.

Import With Woosmap Console

Once created, you can push a native Asset dataset using the UPLOAD JSON button. Just select the file you wish to upload with the POST method (for testing, download this sample file: foodmarkets.json). Please note that a JSON file containing more than 500 assets will be split into slices. On error, each data slice will independently fail to load; this may result in only partial data being loaded.

Import With Woosmap API

If you want to have more control over your data management, you can develop a custom script or application implementing the dedicated Woosmap Data API. Here are the prerequisites for the following sample:

- Python 2.7 or greater.
- The pip package management tool.
- Access to the internet.
- A Woosmap account.

HTTP library for Python.
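A minimal sketch of what the "Woosmap API Wrapper" helper used below might look like, built on the requests HTTP library. The hostname constant, endpoint path, key parameter name, and batching helper are illustrative assumptions, not the official client; refer to the Woosmap Data API reference for the real surface:

```python
# Hypothetical helper for the Woosmap Data API (names and endpoint assumed).
WOOSMAP_API_HOSTNAME = 'api.woosmap.com'
WOOSMAP_PRIVATE_API_KEY = 'YOUR-PRIVATE-API-KEY'
BATCH_SIZE = 500  # uploads are split into slices of at most 500 assets


def endpoint(hostname=WOOSMAP_API_HOSTNAME, key=WOOSMAP_PRIVATE_API_KEY):
    """Build the (assumed) stores endpoint, authenticated by private key."""
    return 'https://{0}/stores/?private_key={1}'.format(hostname, key)


def chunk(assets, size=BATCH_SIZE):
    """Split a large asset list into upload slices of at most `size` items."""
    return [assets[i:i + size] for i in range(0, len(assets), size)]


class Woosmap:
    """Minimal Data API helper: delete existing assets, post new ones."""

    def __init__(self):
        import requests  # the HTTP library introduced above
        self.session = requests.Session()

    def delete(self):
        # Wipe existing assets before re-importing.
        return self.session.delete(endpoint())

    def post(self, assets):
        # Send the assets as the JSON body of a POST request.
        response = None
        for batch in chunk(assets):
            response = self.session.post(endpoint(), json={'stores': batch})
        return response

    def end(self):
        self.session.close()
```

With this in place, the import function below can create the helper, wipe the existing assets, and post the new ones.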
Open and Post JSON File

The content of your Assets attributes needs to be sent in the body of the HTTP POST request. Just open the JSON file with Python's native json module and call the Woosmap API wrapper described above with the content of the JSON as a parameter.

    import json

    def import_assets():
        WOOSMAP_JSON_FILE = 'foodmarkets.json'
        # create the helper before the try block so `finally` can always call end()
        woosmap_api_helper = Woosmap()
        with open(WOOSMAP_JSON_FILE, 'rb') as f:
            try:
                assets = json.loads(f.read())
                woosmap_api_helper.delete()
                response = woosmap_api_helper.post(assets['stores'])
                if response.status_code >= 400:
                    response.raise_for_status()
                else:
                    print('Successfully imported')
            except requests.exceptions.HTTPError as http_exception:
                if http_exception.response.status_code >= 400:
                    print('API Error: {0}'.format(http_exception.response.text))
                else:
                    print('Error requesting API: {0}'.format(http_exception))
            except Exception as exception:
                print('Failed importing Assets! {0}'.format(exception))
            finally:
                woosmap_api_helper.end()

Useful Links

- Login to Woosmap Console
- Woosmap Data API Quick Start
- Python Script: woosmapjson_import.py
- Sample JSON File: foodmarkets.json
https://developers.woosmap.com/support/manage-assets/upload-woosmapjson-file/
Support DHCPv6 stateless and stateful mode in Dnsmasq¶

As [Spec_RADVD] proposes radvd as the preferred reference implementation for IPv6 Router Advertisements and SLAAC, this spec allows tenant VMs to obtain stateful or stateless DHCPv6 addresses from Dnsmasq when the ipv6_address_mode of a tenant subnet is set.

Launchpad blueprint:

Problem description¶

1. When dhcpv6 stateful is set as the IPv6 address mode of a tenant subnet, the OpenStack admin wishes tenant VMs to obtain an IPv6 address and optional info (such as DNS info) from the OpenStack network service.

2. When dhcpv6 stateless is set as the IPv6 address mode of a tenant subnet, router advertisement is taken care of by either an external router or OpenStack-managed RADVD. The OpenStack admin wishes tenant VMs to obtain an IPv6 address by SLAAC and optional info from the OpenStack network service.

(Diagram, simplified: dnsmasq runs on the qdhcp-xxxx port inside the dhcp namespace and serves the IPv6 address and optional info to the VM; RADVD runs on the qr-xxxx port inside the router namespace and serves Router Advertisements. Both namespaces and the VM attach to the network through br-int and br-eth0 on their respective nodes.)

Proposed change¶

According to [DNSMASQ_MANPAGE]:

1. DHCPv6 stateless mode: dnsmasq in 'static' mode with the '--dhcp-optsfile' option specified can be leveraged as a simple stateless DHCP server.

2. DHCPv6 stateful mode: dnsmasq in 'static' mode with the '--dhcp-hostsfile' and '--dhcp-optsfile' options specified can be leveraged as a stateful DHCP server.

REST API impact¶

In the Icehouse release, ipv6_ra_mode and ipv6_address_mode were introduced in the subnet API to enable tenant network IPv6 support. REST API change for IPv6 modes in Icehouse reference: [IPv6_MODES_REST]

This blueprint will implement the functionality required to satisfy IPv6 Subnets with ipv6_address_mode set to 'dhcpv6-stateful' or 'dhcpv6-stateless'.

Other end user impact¶

Other deployer impact¶

This change will change the behavior of Neutron in specific configurations, when the IPv6 attributes for Subnets are set. Previously, the attributes were no-ops.

Implementation¶

Subnets will be created with 'ipv6_address_mode' set to 'dhcpv6-stateful' or 'dhcpv6-stateless'. If no dnsmasq process is running for the subnet's network, Neutron will launch a new dnsmasq process on the subnet's dhcp port in the 'qdhcp-' namespace. If a dnsmasq process was previously launched, Neutron will restart it with the new configuration. Neutron will also update and restart the dnsmasq process when the subnet gets updated.

Work Items¶

Break down this code review into smaller patches, and submit new code reviews for dhcpv6 stateful/stateless mode.
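To make the two modes concrete, dnsmasq invocations of roughly the following shape would be used. The interface name, tag, prefix, and file paths are illustrative placeholders, not the exact arguments Neutron generates:

```
# DHCPv6 stateful: addresses are assigned from --dhcp-hostsfile,
# options (e.g. DNS servers) come from --dhcp-optsfile.
dnsmasq --interface=tap-xxxx \
        --dhcp-range=set:tag0,2001:db8:1::,static,64,86400s \
        --dhcp-hostsfile=/path/to/host \
        --dhcp-optsfile=/path/to/opts

# DHCPv6 stateless: the VM obtains its address via SLAAC (RA from radvd
# or an external router); dnsmasq only answers Information-Request
# messages with the options from --dhcp-optsfile.
dnsmasq --interface=tap-xxxx \
        --dhcp-optsfile=/path/to/opts
```

In both cases dnsmasq runs inside the 'qdhcp-' namespace, bound to the subnet's dhcp port.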
Dependencies¶

The assumption of this spec is that Router Advertisement is provided by either the provider network router or the OpenStack Network Service (one implementation is RADVD [Spec_RADVD]).

Bump the DNSMASQ version to 2.63 to support the IPv6 tag. [DNSMASQ_VERSION]

Testing¶

Add unit tests to support Subnets created with the ipv6_address_mode set to 'dhcpv6-stateful' or 'dhcpv6-stateless'. Verify that dnsmasq can be launched with the expected mode and options, in the correct namespace. Add tempest API and scenario tests for stateful and stateless dhcpv6. Reference: [API_TESTS_IPV6]

References¶

- Spec_RADVD(1,2) Spec for adding support for radvd for IPv6 SLAAC
- DNSMASQ_MANPAGE
- API_TESTS_IPV6 Add IPv6 API test cases for Neutron Subnet API 254
- DNSMASQ_VERSION DNSMASQ minimum version for IPv6
- IPv6_MODES_REST REST API change for IPv6 modes in Icehouse
http://specs.openstack.org/openstack/neutron-specs/specs/juno/ipv6-dnsmasq-dhcpv6-stateless-stateful.html
I've been working crazy hours updating my Silverlight course for version 2 and expanding it with lots of new material. With the PDC coming up in less than a week, I've also been working on some cool tips and tricks demonstrating some of the lesser-known but potentially useful features of Silverlight 2. Each day leading up to the show I'm going to try to post one juicy tip or trick. Here's the first one.

Silverlight's System.Windows.Browser namespace contains "HTML bridge" classes allowing managed code to integrate with the browser DOM. Among other things, you can use these classes to grab managed references to HTML DOM elements; call JavaScript functions from C#; call C# methods from JavaScript; process DOM events in C#; and handle events fired from C# with JavaScript.

Whenever you call from one language to another, one issue that's sure to rear its head is how data, particularly data types that don't have an equivalent on the other side, is marshaled. The marshaling rules in Silverlight are complex but powerful. For example, when you call from C# into JavaScript, you can pass instances of managed types like this:

    // Define the type
    [ScriptableType]
    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // Create a Person instance and pass it to JavaScript
    Person person = new Person { Name = "Adam", Age = 18 };
    HtmlPage.Window.Invoke("js_func", person);

Silverlight's marshaling layer uses reflection on the C# side to discover the particulars of the Person class, and then it creates a look-alike type on the JavaScript side. In JavaScript, you can consume a Person object like this:

    function js_func(arg)
    {
        var name = arg.Name;
        var age = arg.Age;
    }

So far, so good. Passing instances of custom types from C# to JavaScript couldn't be easier. The marshaling layer even honors by-ref and by-val semantics, so class instances are passed by reference and value types are passed by value.
Now suppose you want to pass a Person object from JavaScript to C#. You might do this on the JavaScript side, assuming the method is named CSharp_Method and it's exposed by a scriptable type instance named "page":

    var person = new Object();
    person.Name = 'Adam';
    person.Age = 18;
    var control = document.getElementById('SilverlightControl');
    control.content.page.CSharp_Method(person);

But in C#, the object comes through as a generic ScriptObject, which forces you to write code like this:

    public void CSharp_Method(ScriptObject arg)
    {
        string name = arg.GetProperty("Name").ToString();
        int age = Int32.Parse(arg.GetProperty("Age").ToString());
    }

This is where HtmlPage's rather obscure RegisterCreateableType method comes in handy. In C#, use RegisterCreateableType to turn Person into a class that can be instantiated in client-side script:

    HtmlPage.RegisterCreateableType("Person", typeof(Person));

Then, in JavaScript, create a Person instance and pass it to C# like this:

    var person = control.content.services.createObject('Person');
    person.Name = 'Adam';
    person.Age = 18;
    control.content.page.CSharp_Method(person);

The C# method can now be written as follows:

    public void CSharp_Method(Person person)
    {
        string name = person.Name;
        int age = person.Age;
    }

Now the Person object really is a Person object, and you can consume it with all the benefits of strong typing. Does that mean you won't be needing the ScriptObject class any more? Hardly. Stay tuned for cool Silverlight trick #2, which uses ScriptObject to allow two Silverlight control instances to communicate without involving JavaScript.

I have an example as you presented in calling a JavaScript function from C# (in Silverlight).
It works if I pass a string, but I get an InvalidOperationException when I try to pass an object. I've defined the object as you described. Have there been changes to Silverlight, or can you suggest where I might look for the problem?
http://www.wintellect.com/CS/blogs/jprosise/archive/2008/10/20/cool-silverlight-trick-1.aspx
Tutorial custom placing of windows and doors

Introduction

This tutorial shows how to place custom designed Arch Windows and Arch Doors in a building model. It uses the Draft Workbench, the Arch Workbench, and the Sketcher Workbench. Common tools used are: Draft Grid, Draft Snap, Draft Wire, Arch Wall, Arch Window, and Sketcher NewSketch. The user should be familiar with constraining sketches. This tutorial was inspired by the tutorials by jpg87 posted in the FreeCAD forums.

Setup

1. Open FreeCAD, create a new empty document, and switch to the Arch Workbench.

2. Make sure your units are set correctly in the menu Edit → Preferences → General → Units. For example, MKS (m/kg/s/degree) is good for dealing with distances in a typical building; moreover, set the number of decimals to 4, to consider even the smallest fractions of a meter.

3. Use the Draft ToggleGrid button to show a grid with enough resolution. You can change the grid appearance in the menu Edit → Preferences → Draft → Grid and snapping → Grid. Set lines at every 50 mm, with major lines every 20 lines (every meter), and 1000 lines in total (the grid covers an area of 50 m x 50 m).

4. Zoom out of the 3D view if you are too close to the grid. Now we are ready to create a simple wall on which we can position windows and doors.

Placing a wall

5. Use the Draft Wire tool to create a wire. Go counterclockwise.
- 5.1. First point at (0, 4, 0); in the dialog enter 0 m Enter, 4 m Enter, 0 m Enter.
- 5.2. Second point at (2, 0, 0); in the dialog enter 2 m Enter, 0 m Enter, 0 m Enter.
- 5.3. Third point at (4, 0, 0); in the dialog enter 4 m Enter, 0 m Enter, 0 m Enter.
- 5.4. Fourth point at (6, 2, 0); in the dialog enter 6 m Enter, 2 m Enter, 0 m Enter.
- 5.5. Fifth point at (6, 5, 0); in the dialog enter 6 m Enter, 5 m Enter, 0 m Enter.
- 5.6. On the number pad press A to finish the wire.
- 5.7. On the number pad press 0 to get an axonometric view of the model.
- Note: make sure the Relative checkbox is disabled if you are giving absolute coordinates.
- Note 2: the points can also be defined with the mouse pointer by choosing intersections on the grid, with the help of the Draft Snap toolbar and the Draft Grid method.
- Note 3: you can also create shapes programmatically by scripting in Python. Beware that most functions expect their input in millimeters.

    import FreeCAD
    import Draft

    p = [FreeCAD.Vector(0.0, 4000.0, 0.0),
         FreeCAD.Vector(2000.0, 0.0, 0.0),
         FreeCAD.Vector(4000.0, 0.0, 0.0),
         FreeCAD.Vector(6000.0, 2000.0, 0.0),
         FreeCAD.Vector(6000.0, 5000.0, 0.0)]
    w = Draft.makeWire(p, closed=False)

6. Select the DWire and click the Arch Wall tool; the wall is immediately created with a default width (thickness) of 0.2 m, and a height of 3 m.

Base wire for the wall

Wall constructed from the wire

Placing preset doors and windows

7. Click the Arch Window tool; as preset select Simple door, and change the height to 2 m.
- 7.1. Change the snapping to Draft Midpoint, and try selecting the bottom edge of the frontal wall; rotate the standard view as necessary to help you pick the edge and not the wall face; when the midpoint is active, click to place the door.
- 7.2. Click the Arch Window tool again, and place another door, but this time at the midpoint of the rightmost wall; rotate the standard view as necessary.

Snapping to the midpoint of the bottom edge of the wall to place the door

- Note: the Sill height is the distance from the floor to the lower edge of the element. For doors the Sill height is usually 0 m, as doors normally touch the floor; on the other hand, windows usually have a separation of 0.5 m to 1.5 m from the floor. The Sill height can only be set when initially creating the window or door from a preset. Once the window or door is inserted, modify its placement by editing the DataPosition vector [x, y, z] of the underlying Sketcher Sketch.

Creating custom doors and windows

8.
Switch to the Sketcher Workbench; select the part of the wall to the right that has no door; click on Sketcher NewSketch; select FlatFace as attachment method. If the existing geometry obstructs your view, click on Sketcher ViewSection to remove it.

9. Draw a fancy sketch containing three closed wires. Make sure to provide constraints for all wires.
- 9.1. The outside wire is the biggest one, and will define the main dimensions of the window object, and the size of the hole created when it's embedded in an Arch Wall. Make sure the dimensions are named appropriately, for example, Width and Height. A constraint also defines the curvature of the outer wire; give it an appropriate name, like HeightCurve.
- 9.2. The second wire is offset from the outer wire, and together with it defines the width of the fixed frame of the window. Name the offset appropriately, for example, FrameFixedOffset. It will be used for both the top vertical and horizontal offsets. The bottom offset, if set to zero, will result in the fixed frame touching the bottom of the window; this can be used to model a door instead of a window. Give it an appropriate name, like FrameFixedBottom.
- 9.3. The third, innermost wire is offset from the second wire, and together with it defines the frame of the window that can open. The innermost wire also defines the size of the glass panel. Again, give meaningful names to these offsets, for example, FrameInnerOffset and FrameInnerBottom.
- 9.4. In order to build the sketch successfully, use horizontal (Sketcher ConstrainHorizontal) and vertical (Sketcher ConstrainVertical) constraints for the straight sides; use auxiliary construction geometry (Sketcher ToggleConstruction), and tangential constraints (Sketcher ConstrainTangent) to correctly place the circular arcs at the top.
As the window in this case is symmetrical, consider equality (Sketcher ConstrainEqual), symmetric (Sketcher ConstrainSymmetric), and point-on-object (Sketcher ConstrainPointOnObject) constraints where they make sense.

Constraints for the outer wires of the sketch that form the window

Constraints for the inner wires of the sketch that form the window

10. Once the sketch is fully constrained, press Close to exit the sketch (Sketcher LeaveSketch).
- 10.1. Since a face of the wall was selected during the initial step of creating the sketch, the sketch is co-planar with that face; however, it may be in the wrong position, away from the wall. If this is the case, adjust DataPosition within DataAttachment Offset. Set DataPosition to [4 m, 1 m, 0 m] so the sketch is centered in the wall, one meter above the floor level.
- 10.2. You can see the named constraints under DataConstraints. The values can be modified to see the sketch change dimensions immediately.

Window sketch moved to the desired position on the wall

Named constraints of the sketch, which can be modified without going inside the sketch

11. Change back to the Arch Workbench and, with the new Sketch002 selected, use Arch Window. A window will be created, and will make a hole in the wall. The window is made from a custom sketch, and not from a preset, so it needs to be edited in order to correctly display its components, that is, the fixed frame, the inner frame, and the glass panel.

Custom window created from the sketch; it still doesn't have a proper frame, nor glass

Setting up the custom window

12. In the tree view select Sketch002 underlying Window, and press Space, or change the property ViewVisibility to True.

13. Double click Window in the tree view to start editing it.
- 13.1. Inside the Window elements dialog there are two panes, Wires and Components. There are three wires, Wire0, Wire1, and Wire2, and one component, Default.
The wires refer to the closed loops that were drawn in the sketch; the components define the areas in the sketch that will be extruded to create frame or glass panels with real thicknesses; these areas are delimited by the wires. A window created from a preset already has two components, OuterFrame and Glass. The custom window needs to be edited to have a similar structure.

Dialog to edit a window or a door

- 13.2. Click on Default, and click the Remove button to eliminate it.
- 13.3. Click Add; this shows the properties of a new component, like Name, Type, Wires, Thickness, Offset, Hinge, and Opening mode. Give a name, such as OuterFrame, choose Frame for Type, and click on Wire0 and then Wire1; they should highlight in the 3D viewport. Add a small value for Thickness, 15 mm, and check the checkbox to add the default value. This default value is the length assigned to the DataFrame property; a similar default can be assigned to the DataOffset property. Click the +Create/update component button to finish editing the component.
- 13.4. Click Add; give another name, such as InnerFrame, choose Frame for Type, and click on Wire1 and then Wire2. Add a sensible Thickness, 60 mm, and Offset, 15 mm. Then click the +Create/update component button.
- 13.5. Click Add; give another name, such as Glass, choose Glass panel for Type, and click on Wire2. Add a sensible Thickness, 10 mm, and Offset, 40 mm. Then click the +Create/update component button. If any of the three components needs to be modified, select it and press Edit; modifications are only saved after pressing the +Create/update component button.

Editing a previously defined component of a window or a door

- 13.6. If everything is set, click Close to finish editing the window. The sketch may become hidden again, but the window will show distinct solid elements for the OuterFrame, the InnerFrame, and the Glass.
Give a value of 100 mm to DataFrame to assign a default thickness, which will be added to the value specified in the OuterFrame component.

Property view of the window to add default Frame length, Offset length, and other options

Finished window with appropriate components embedded in the wall

Duplicating the custom window

14. In the tree view, select Window and its underlying Sketch002. Then go to Edit → Duplicate selection, and answer No if asked to duplicate unselected dependencies. A new Window001 and Sketch003 will appear in the same position as the original elements.

15. Select the new Sketch003. Go to the DataMap Mode property, and click on the ellipsis next to the FlatFace value. In the 3D viewport select the left side of the wall which doesn't have any element; rotate the standard view as necessary. Change the Attachment offset to [-1 m, 0 m, 0 m] to center the window, and click OK. The sketch and the window should appear in a new position.
- Note: the attachment operation can also be performed by changing to the Part Workbench, and then using the menu Part → Attachment.

Dialog to edit the attachment plane of the sketch

16. You may adjust the dimensions of the new window by changing the named parameters in Sketch003 under DataConstraints, for example, set Height to 2 m, and Frame Fixed Bottom to 0 m. Then press Ctrl+R to recompute the model. If the wall doesn't show a bigger hole for the new window, select the wall in the tree view, right click and choose Mark to recompute, then press Ctrl+R again.

17. These operations have changed the position of the new window, but the opening in the wall doesn't look correct. It is slanted, that is, the hole is not perpendicular to the face of the wall, and it may even cut other parts of the wall. The problem is that Window001 has retained the DataNormal information of the original Window.

Incorrect opening in the wall due to bad Normal of the window

Normals of doors and windows

18.
Each Arch Window object controls the extrusion of its body, and the opening that is created in its host wall, by means of DataNormal. The normal is a vector [x, y, z] that indicates a direction perpendicular to a wall.

When a window or door preset is created with the Arch Window tool directly over an Arch Wall, the normal is automatically calculated, and the resulting window or door is correctly aligned; the first two objects, Door and Door001, were created in this way. In a similar way, when a sketch is created by selecting a planar surface, it is oriented on this plane. Then, when the Arch Window tool is used, the window will use as its normal the direction perpendicular to the sketch. This was the case with the third object, the custom Window.

If the window already exists and needs to be moved, as was the case with the duplicated Window001 object, the sketch needs to be remapped to another plane; doing this moves both the sketch and the window, but the latter doesn't automatically update its normal, so it has incorrect extrusion information. The normal then needs to be calculated manually and written to DataNormal. The three values of the normal vector are calculated as follows.

    x = -sin(angle)
    y = cos(angle)
    z = 0

Where angle is the angle of the local Z axis of the sketch with respect to the global Y axis. When a sketch is created, it always has two axes, a local X (red) and a local Y (green). If the sketch is mapped to the global XY working plane, these axes are aligned with the global ones; but if the sketch is mapped on the global XZ or YZ planes, as is common with windows and doors (the sketches are "standing up"), then the local Z (blue) forms an angle with the global Y axis; this angle varies from -180 to 180 degrees. The angle is considered positive if it opens counterclockwise, and negative if it opens clockwise, starting from the global Y axis.
Local coordinates of a sketch that is "standing up", that is, mapped to the global XZ plane

Intended directions of the normals for each door and window

If we look at the geometry created so far, we see the following normals.

Door
- The local Z is aligned with the global Y, therefore the angle is zero. The normal vector is

    x = -sin(0) = 0
    y = cos(0) = 1
    z = 0

or DataNormal is [0, 1, 0].

Door001
- The local Z is rotated 90 degrees from the global Y, therefore the angle is 90 (positive, because it opens counterclockwise). The normal vector is

    x = -sin(90) = -1
    y = cos(90) = 0
    z = 0

or DataNormal is [-1, 0, 0].

Window
- The local Z is rotated 45 degrees from the global Y, therefore the angle is 45 (positive, because it opens counterclockwise). The normal vector is

    x = -sin(45) = -0.7071
    y = cos(45) = 0.7071
    z = 0

or DataNormal is [-0.7071, 0.7071, 0].

Window001
- The local Z direction is found by using the Draft Dimension tool and measuring the angle that the wall trace (Wire) makes with the global Y axis, or any line aligned to it. This angle is 26.57; the desired angle is the complement of this, so 90 - 26.57 = 63.43. This means the local Z axis is rotated 63.43 degrees from the global Y, therefore the angle is -63.43 (negative, because it opens clockwise). The normal vector is

    x = -sin(-63.43) = 0.8943
    y = cos(-63.43) = 0.4472
    z = 0

Therefore DataNormal should be changed to [0.8943, 0.4472, 0].

After doing these changes, recompute the model with Ctrl+R. If the wall doesn't update the hole, select it in the tree view, right click and choose Mark to recompute, then press Ctrl+R again.

19. The orientation of the extrusion of the window is resolved, together with the opening in the wall.

Correct opening in the wall due to proper Normal of the window

Final remarks

20. As demonstrated, the initial placement of the Arch Window is very important.
The user should either

- use the Arch Window tool to insert and automatically align a preset to a wall, or
- map a sketch to the desired wall, and build the window after that.

If a window already exists, and it needs to be moved, the supporting sketch should be remapped to a new plane, and the DataNormal of the window needs to be recalculated. The new normal direction can be obtained by measuring the angle of the new wall with respect to the global Y axis, considering whether this angle is positive (counterclockwise) or negative (clockwise), and using a simple formula.

x = -sin(angle)
y = cos(angle)
z = 0

To confirm that the operations are correct, the absolute magnitude of the normal vector should be one. That is,

abs(N) = 1 = sqrt(x^2 + y^2 + z^2)
abs(N) = 1 = sqrt(sin^2(angle) + cos^2(angle) + z^2)
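The formula is easy to check in Python (angles in degrees, as in the tutorial; the function name here is just for illustration):

```python
import math

def window_normal(angle_deg):
    """Return the [x, y, z] normal for a window whose sketch's local Z
    axis is rotated angle_deg degrees from the global Y axis
    (positive counterclockwise, negative clockwise)."""
    a = math.radians(angle_deg)
    return [-math.sin(a), math.cos(a), 0.0]

# The four objects from the tutorial (values are approximate):
print(window_normal(0))       # Door:      [0, 1, 0]
print(window_normal(90))      # Door001:   [-1, 0, 0]
print(window_normal(45))      # Window:    [-0.7071, 0.7071, 0]
print(window_normal(-63.43))  # Window001: [0.8943, 0.4472, 0]

# Sanity check: the magnitude must always be 1 (up to floating point).
n = window_normal(-63.43)
print(math.sqrt(sum(c * c for c in n)))
```

The computed values can then be typed directly into the DataNormal property.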
https://wiki.freecadweb.org/index.php?title=Tutorial_custom_placing_of_windows_and_doors&oldid=629391
The C Preprocessor vs D Back when C was invented, compiler technology was primitive. Installing a text macro preprocessor onto the front end was a straightforward and easy way to add many powerful features. The increasing size & complexity of programs have illustrated that these features come with many inherent problems. D doesn't have a preprocessor; but D provides a more scalable means to solve the same problems. - Header Files - #pragma once - #pragma pack - Macros - Conditional Compilation - Code Factoring - #error and Static Asserts - Template Mixins Header Files The C Preprocessor Way C and C++ rely heavily on textual inclusion of header files. This frequently results in the compiler having to recompile tens of thousands of lines of code over and over again for every source file, an obvious source of slow compile times. What header files are normally used for is more appropriately done using a symbolic, rather than textual, insertion. This is done with the import statement. Symbolic inclusion means the compiler just loads an already compiled symbol table. The needs for macro "wrappers" to prevent multiple #inclusion, funky #pragma once syntax, and incomprehensible fragile syntax for precompiled headers are simply unnecessary and irrelevant to D. #include <stdio.h> The D Way D uses symbolic imports: import core.stdc.stdio; #pragma once The C Preprocessor Way C header files frequently need to be protected against being #include'd multiple times. To do it, a header file will contain the line: #pragma once or the more portable: #ifndef __STDIO_INCLUDE #define __STDIO_INCLUDE ... header file contents #endif The D Way Completely unnecessary since D does a symbolic include of import files; they only get imported once no matter how many times the import declaration appears. #pragma pack The C Preprocessor Way This is used in C to adjust the alignment for structs.
The D Way For D classes, there is no need to adjust the alignment (in fact, the compiler is free to rearrange the data fields to get the optimum layout, much as the compiler will rearrange local variables on the stack frame). For D structs that get mapped onto externally defined data structures, there is a need, and it is handled with: struct Foo { align (4): // use 4 byte alignment ... } Macros Preprocessor macros add powerful features and flexibility to C. But they have a downside: - Macros have no concept of scope; they are valid from the point of definition to the end of the source. They cut a swath across .h files, nested code, etc. When #include'ing tens of thousands of lines of macro definitions, it becomes problematical to avoid inadvertent macro expansions. - Macros are unknown to the debugger. Trying to debug a program with symbolic data is undermined by the debugger only knowing about macro expansions, not the macros themselves. - Macros make it impossible to tokenize source code, as an earlier macro change can arbitrarily redo tokens. - The purely textual basis of macros leads to arbitrary and inconsistent usage, making code using macros error prone. (Some attempt to resolve this was introduced with templates in C++.) - Macros are still used to make up for deficits in the language's expressive capability, such as for "wrappers" around header files. Here's an enumeration of the common uses for macros, and the corresponding feature in D: - Defining literal constants: The C Preprocessor Way #define VALUE 5 The D Way enum int VALUE = 5; - Creating a list of values or flags: The C Preprocessor Way int flags; #define FLAG_X 0x1 #define FLAG_Y 0x2 #define FLAG_Z 0x4 ... flags |= FLAG_X; The D Way enum FLAGS { X = 0x1, Y = 0x2, Z = 0x4 }; FLAGS flags; ... flags |= FLAGS.X; - Distinguishing between ascii chars and wchar chars: The C Preprocessor Way #if UNICODE #define dchar wchar_t #define TEXT(s) L##s #else #define dchar char #define TEXT(s) s #endif ...
dchar h[] = TEXT("hello"); The D Way dchar[] h = "hello"; D's optimizer will inline the function, and will do the conversion of the string constant at compile time. - Supporting legacy compilers: The C Preprocessor Way #if PROTOTYPES #define P(p) p #else #define P(p) () #endif int func P((int x, int y)); The D Way By making the D compiler open source, it will largely avoid the problem of syntactical backwards compatibility. - Type aliasing: The C Preprocessor Way #define INT int The D Way alias INT = int; - Using one header file for both declaration and definition: The C Preprocessor Way #define EXTERN extern #include "declarations.h" #undef EXTERN #define EXTERN #include "declarations.h" In declarations.h: EXTERN int foo; The D Way The declaration and the definition are the same, so there is no need to muck with the storage class to generate both a declaration and a definition from the same source. - Lightweight inline functions: The C Preprocessor Way #define X(i) ((i) = (i) / 3) The D Way int X(ref int i) { return i = i / 3; } The compiler optimizer will inline it; no efficiency is lost. - Assert function file and line number information: The C Preprocessor Way #define assert(e) ((e) || _assert(__LINE__, __FILE__)) The D Way assert() is a built-in expression primitive. Giving the compiler such knowledge of assert() also enables the optimizer to know about things like the _assert() function never returns. - Setting function calling conventions: The C Preprocessor Way #ifndef _CRTAPI1 #define _CRTAPI1 __cdecl #endif #ifndef _CRTAPI2 #define _CRTAPI2 __cdecl #endif int _CRTAPI2 func(); The D Way Calling conventions can be specified in blocks, so there's no need to change it for every function: extern (Windows) { int onefunc(); int anotherfunc(); } - Hiding __near or __far pointer weirdness: The C Preprocessor Way #define LPSTR char FAR * The D Way D doesn't support 16 bit code, mixed pointer sizes, and different kinds of pointers, and so the problem is just irrelevant.
- Simple generic programming: The C Preprocessor Way Selecting which function to use based on text substitution: #ifdef UNICODE int getValueW(wchar_t *p); #define getValue getValueW #else int getValueA(char *p); #define getValue getValueA #endif The D Way D enables declarations of symbols that are aliases of other symbols: version (UNICODE) { int getValueW(wchar[] p); alias getValue = getValueW; } else { int getValueA(char[] p); alias getValue = getValueA; } Conditional Compilation The C Preprocessor Way Conditional compilation is a powerful feature of the C preprocessor, but it has its downside: - The preprocessor has no concept of scope. #if/#endif can be interleaved with code in a completely unstructured and disorganized fashion, making things difficult to follow. - Conditional compilation triggers off of macros - macros that can conflict with identifiers used in the program. - #if expressions are evaluated in subtly different ways than C expressions are. - The preprocessor language is fundamentally different in concept than C, for example, whitespace and line terminators mean things to the preprocessor that they do not in C. The D Way D supports conditional compilation: - Separating version specific functionality into separate modules. - The debug statement for enabling/disabling debug harnesses, extra printing, etc. - The version statement for dealing with multiple versions of the program generated from a single set of sources. - The if (0) statement. - The /+ +/ nesting comment can be used to comment out blocks of code. Code Factoring The C Preprocessor Way It's common in a function to have a repetitive sequence of code to be executed in multiple places. Performance considerations preclude factoring it out into a separate function, so it is implemented as a macro. For example, consider this fragment from a byte code interpreter: unsigned char *ip; // byte code instruction pointer int *stack; int spi; // stack pointer ...
#define pop() (stack[--spi]) #define push(i) (stack[spi++] = (i)) while (1) { switch (*ip++) { case ADD: op1 = pop(); op2 = pop(); result = op1 + op2; push(result); break; case SUB: ... } } This suffers from numerous problems: - The macros must evaluate to expressions and cannot declare any variables. Consider the difficulty of extending them to check for stack overflow/underflow. - The macros exist outside of the semantic symbol table, so remain in scope even outside of the function they are declared in. - Parameters to macros are passed textually, not by value, meaning that the macro implementation needs to be careful to not use the parameter more than once, and must protect it with (). - Macros are invisible to the debugger, which sees only the expanded expressions. The D Way D neatly addresses this with nested functions: ubyte* ip; // byte code instruction pointer int[] stack; // operand stack int spi; // stack pointer ... int pop() { return stack[--spi]; } void push(int i) { stack[spi++] = i; } while (1) { switch (*ip++) { case ADD: op1 = pop(); op2 = pop(); push(op1 + op2); break; case SUB: ... } } The problems addressed are: - The nested functions have available the full expressive power of D functions. The array accesses already are bounds checked (adjustable by compile time switch). - Nested function names are scoped just like any other name. - Parameters are passed by value, so there's no need to worry about side effects in the parameter expressions. - Nested functions are visible to the debugger. Additionally, nested functions can be inlined by the implementation resulting in the same high performance that the C macro version exhibits. #error and Static Asserts Static asserts are user defined checks made at compile time; if the check fails the compile issues an error and fails. The C Preprocessor Way The first way is to use the #error preprocessing directive: #if FOO || BAR ... code to compile ...
#else #error "there must be either FOO or BAR" #endif This has the limitations inherent in preprocessor expressions (i.e. integer constant expressions only, no casts, no sizeof, no symbolic constants, etc.). These problems can be circumvented to some extent by defining a static_assert macro (thanks to M. Wilson): #define static_assert(_x) do { typedef int ai[(_x) ? 1 : 0]; } while(0) and using it like: void foo(T t) { static_assert(sizeof(T) < 4); ... } This works by causing a compile time semantic error if the condition evaluates to false. The limitations of this technique are a sometimes very confusing error message from the compiler, along with an inability to use a static_assert outside of a function body. The D Way D has the static assert, which can be used anywhere a declaration or a statement can be used. For example: version (FOO) { class Bar { const int x = 5; static assert(Bar.x == 5 || Bar.x == 6); void foo(T t) { static assert(T.sizeof < 4); ... } } } else version (BAR) { ... } else { static assert(0); // unsupported version } Template Mixins D template mixins superficially look just like using C's preprocessor to insert blocks of code and parse them in the scope of where they are instantiated. But the advantages of mixins over macros [...]
https://docarchives.dlang.io/v2.071.0/pretod.html
Creating javadoc like document from XML Schema(\*.xsd) By Katsumii-Oracle on Dec 20, 2007 This is a sequel to my similar tool for DTD entry. There is one for XML Schema called xsddoc. This time, I will use a XML Schema defining BPEL which I found at OASIS site. $ gtar xvzf xsddoc-1.0.tar.gz $ cd xsddoc-1.0 $ wget -nd $ sed 's/\^M//' bin/xsddoc > bin/xsddoc.unix // that's Control-M $ chmod +x bin/xsddoc.unix $ mkdir bpeldoc $ bin/xsddoc.unix -t "BPEL Execution" -o bpeldoc -verbose ws-bpel_executable.xsd xsddoc starting. process attribute {}lang from file process attribute {}space from file process attribute {}base from file process attribute {}id from file process attributeGroup {}specialAttrs from file process element {}process from file ws-bpel_executable.xsd process complexType {}tProcess from file ws-bpel_executable.xsd [...] $ du -sk ws-bpel_executable.xsd bpeldoc/\* 46 ws-bpel_executable.xsd 5 bpeldoc/help-doc.html //it created a lot of data so I'm not uploading them to blog. Sorry! 1709 bpeldoc/http___docs.oasis-open.org_wsbpel_2.0_process_executable 46 bpeldoc/http___ 40 bpeldoc/index-all.html 1 bpeldoc/index.html 21 bpeldoc/overview-all.html 1 bpeldoc/overview-namespaces.html 7 bpeldoc/schema-summary.html 2 bpeldoc/stylesheet.css $ firefox bpeldoc/index.html There you go! The xsddoc page says there's a new commercial development called xnsdoc. It may produce better output. You should also check out the Adivo TechWriter for XML (). It's an awesome tool for the price. Posted by J.H. on January 23, 2009 at 06:42 PM JST # Thanks for the info. Looks interesting. Posted by Katsumi INOUE on January 26, 2009 at 02:44 AM JST #
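What xsddoc walks over is essentially the set of global declarations in the schema (elements, complex types, attributes, and so on). You can peek at the same inventory with a few lines of standard-library Python; the tiny inline schema below is a toy stand-in, not the real BPEL file (for that, you'd use ET.parse("ws-bpel_executable.xsd").getroot() instead):

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def schema_inventory(xsd_text):
    """Return {kind: [names]} for the global declarations of a schema."""
    root = ET.fromstring(xsd_text)
    inventory = {}
    for child in root:
        kind = child.tag.replace(XS, "")  # element, complexType, attribute, ...
        name = child.get("name")
        if name:
            inventory.setdefault(kind, []).append(name)
    return inventory

# A minimal stand-in schema for demonstration.
sample = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="process" type="tProcess"/>
  <xs:complexType name="tProcess"/>
</xs:schema>"""

print(schema_inventory(sample))
# {'element': ['process'], 'complexType': ['tProcess']}
```

Each of those names corresponds to one of the "process element ... from file" lines in the xsddoc output above.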
https://blogs.oracle.com/LetTheSunShineIn/entry/creating_javadoc_like_document_from
#pragma directives are used within the source program to request certain kinds of special processing. The #pragma directive is part of the standard C and C++ languages, but the meaning of any pragma is implementation-defined. The front end recognizes several pragmas. The following are described in detail in the template instantiation section of this chapter: #pragma instantiate #pragma do_not_instantiate #pragma can_instantiate and two others are described in the section on precompiled header processing: #pragma hdrstop #pragma no_pch The front end also recognizes #pragma once, which, when placed at the beginning of a header file, indicates that the file is written in such a way that including it several times has the same effect as including it once. Thus, if the front end sees #pragma once at the start of a header file, it will skip over it if the file is #included again. A typical idiom is to place an #ifndef guard around the body of the file, with a #define of the guard variable after the #ifndef: #pragma once // optional #ifndef FILE_H #define FILE_H ... body of the header file ... #endif The #pragma once is marked as optional in this example, because the front end recognizes the #ifndef idiom and does the optimization even in its absence. #pragma once is accepted for compatibility with other compilers and to allow the programmer to use other guard-code idioms. By default (this can be changed under a custom porting arrangement), #pragma pack is recognized. It is used to specify the maximum alignment allowed for nonstatic data members of structs and classes, even if that alignment is less than the alignment dictated by the member's type. The basic syntax is: #pragma pack(n) #pragma pack() where argument n, a power-of-2 value, is the new packing alignment that is to go into effect for subsequent declarations, until another #pragma pack is seen. 
The second form cancels the effect of a preceding #pragma pack(n) and either restores the default packing alignment specified by the --pack_alignment command-line option or, if the option was not used, disables the packing of structs and classes. In addition, an enhanced syntax is also supported in which keywords push and pop can be used to manage a stack of packing alignment values - for instance: #pragma pack (push, xxx) #include "xxx.h" #pragma pack (pop, xxx) which has the effect saving the current packing alignment value, processing the include file (which may leave the packing alignment with an unknown setting), and restoring the original value. By default (this can be changed under a custom porting arrangement), #pragma ident is recognized, as is #ident: #pragma ident "string" #ident "string" Both are implemented by recording the string in a pragma entry and passing it to the back end. By default (this can be changed under a custom porting arrangement), #pragma weak is not recognized. When it is, its form is: #pragma weak name1 [ = name2 ] where name1 is the name to be given "weak binding" and is a synonym for name2 if the latter is specified. The entire argument string is recorded in the pragma entry and passed to the back end.
http://www.comeaucomputing.com/4.0/docs/userman/pragma.html
1. The sample is a dialog application, which launches the property page dialog. Below is the screen shot of the hosting dialog: The below screen shot is the property page: Note that the sample has two pages, and this will be sufficient for the reader to add more pages to their own property page dialog. When you click the Settings… button in the main dialog, the property page dialog will be opened. Once you change any one of the default values in the displayed dialog, the Apply button will be enabled. Clicking the Apply button makes your change permanent, regardless of whether you later cancel the dialog or click OK. You can also save the changes by clicking the OK button. Then what is the use of the Apply button? In the real world, if you want to show the changes visually, the button is very useful: the user of the application can look at the visual changes and tune their settings further. Let us go ahead and start creating the sample. 3. How do we Create the Property Page Dialog? The below skeleton diagram explains how we create the property page dialog. First, we should create the property pages. Then these property pages are attached to the property sheet, which provides the buttons required for the property page dialog. OK and Cancel buttons are common for a dialog, and the Apply button is specially provided for property page dialogs. Creating the property pages is almost the same as creating dialog boxes. In the resource, you can ask for a property page and you will get a borderless dialog. In this dialog, you should drop the controls that you want for your property page. In the above skeleton picture, first we will create property page1 and page2. Then the required controls are dropped into page1 and page2. Finally, through the source code, we will add these pages to the property sheet created at runtime. 4. Create Property Pages How do you create a dialog? A property page is created in a similar way. The below video shows creating the first page of the property dialog.
Steps 1) From the Resource file, add the Property Page 2) Then provide a meaningful ID name for it 3) Open the Property page in the visual studio editor 4) From the Toolbox, 3 radio buttons are added to it. So that's all we do for creating the pages of the property sheet: create a page template and drop the controls on it. Repeat the same process for all the pages. Once the pages are ready, you should create an associated class for each of them. The video provided below shows how we create a class for the Property page added in the previous video: Steps 1) The Property page template is opened in visual studio 2) The Add Class menu option is invoked from the context menu of the Property page template (by right click) 3) In the class dialog, a class name is chosen, and the base class is set to CPropertyPage 4) The created class is shown in the class view The second page of the sample is created the same way as property page 1, as shown in video 1 and video 2. Now page1 and page2 for the property dialog are ready. The design of the second property page is shown below: 5. Add Control Variables Now the Color and Font property page templates are ready. Next we will associate a variable with the controls in these property page templates. First, a variable is associated with the radio buttons. For all the three radio buttons, only one variable is associated, and we treat these radio buttons as a single group. First we should make sure that the tab order (Format->Tab Order, or Ctrl+D when the dialog is opened in the editor) for all the radio buttons runs consecutively. Then, for the first radio button in the tab order, set the Group property to true. The below video shows adding a control variable for the radio buttons: Steps 1) From the resource view, the Property page for the font is opened 2) Make sure the Group property is set to true.
If not, set it to true 3) The Add Variable dialog is opened for the first radio button 4) The variable category is changed from control to variable 5) A variable of type BOOL is added (later we will change this to int through the code) Likewise, we add three more value-type variables, one for each text box control in property page two. The below screen shot shows an int value variable m_edit_val_Red added for the first edit box. Variables for blue and green are also added, as shown in the below screen shot. 6. OnApply Message Map To follow the code explanation with me, search for the comment //Sample in the solution and, in the search results, follow the order 01, 02, 03, etc. ON_MESSAGE_VOID is a nice handler for dealing with custom messages that do not require passing any arguments. In our sample we are going to use this handler for dealing with the WM_APPLY user-defined message. Below is the code change required for the dialog-based project. 1) First, a required header is included in the dialog class header file //Sample 01: Include the header required for OnMessageVoid #include <afxpriv.h> 2) In the same header file, the declaration for the void message handler is given. //Sample 02: Declare the Message Handler function afx_msg void OnApply(); 3) Next, in the CPP file, the ON_MESSAGE_VOID macro is added between BEGIN_MESSAGE_MAP and END_MESSAGE_MAP. OnApply is not yet defined, so you will get a compiler error if you compile the program at this point. To avoid this, provide a dummy implementation for OnApply like void CPropPageSampleDlg::OnApply() {} //Sample 03: Provide Message map entry for the Apply button click ON_MESSAGE_VOID(WM_APPLY, OnApply) 4) WM_APPLY is not yet defined, so declare that user-defined message in stdAfx.h. The WM_USER macro is useful for defining a user-defined message in a safe way; that is, WM_APPLY does not clash with any existing user-defined message because we define it safely as WM_USER+1 //Sample 04: Define the user defined message #define WM_APPLY WM_USER + 1 7.
Change Radio Button Variable In video 3, we added a Boolean-type variable for the radio button group. It will be very useful to change this variable type from BOOL to an integer type. When the user makes a radio button selection, the data exchange mechanism will automatically set the variable to denote the currently selected radio button. You will get more clarity when we write the code for the radio check state later. For now we will just change the Boolean variable type to integer. 1) In the PropPageFont.h file, the variable type is changed from Boolean to Integer //Sample 05: Change the variable type to Int int m_ctrl_val_radio_font; 2) Next, in the constructor of CPropPageFont, the variable is initialized to -1. This value denotes that none of the radio buttons is initially selected. The class CPropPageSampleDlg is created by the application wizard. Moreover, we are going to launch the property page dialog from this dialog as a child dialog. The CPropPageSampleDlg will take the settings from the property pages and cache them. When the property page is opened the next time, the settings cached by this parent dialog are supplied back to the property pages. 1) First, the variables required to cache the settings are declared in the class declaration, which is in the header file //Sample 07: Add Member variables to keep track of settings private: int m_selected_font; int m_blue_val; int m_red_val; int m_green_val; 2) Next, in OnInitDialog, these variables are initialized based on what the property page should show on its very first display. //Sample 08: Initialize the member variables m_selected_font = -1; m_red_val = 0; m_green_val = 0; m_blue_val = 0; 9. Create Property Dialog and Display it From the dialog class, the property page dialog is created and displayed as a modal dialog. Once this property page dialog is closed by the user, the settings set by them are read back and cached inside the parent dialog.
1) In the button click handler, first we create a property sheet); 2) Next we create the property pages in the heap. First we declare the variables in the header file of the dialog class, then we declare the required variables in the class with private scope //Sample 9.2: Include Property pages #include "PropPageFont.h" #include "PropPageColor.h" //Sample 07: Add Member variables to keep track of settings private: int m_selected_font; int m_blue_val; int m_red_val; int m_green_val; CPropPageFont* m_page1_font; CPropPageColor* m_page2_color; 3) In the implementation file (Look at step 1), after creating the property sheet with title settings, we create both the property pages (i.e.) Font and Color pages. //Sample 9.3: Create Property Pages m_page1_font = new CPropPageFont(); m_page2_color = new CPropPageColor(); 4);); 6) When the property dialog is closed, we check the return value and cache (Copy) the settings provided in the pages to the calling dialog’s member variables. These variables are used to initialize the property page dialog when it is opened for next time. Note that during the button click, we create the pages on heap, copy the dialog members to the pages, add the pages to sheet and display it as modal dialog and when it closed before deleting the pages from heap we copy the settings into the local members. / enables when the UI elements in the pages are changed. Say for example typing the new red value in the text box will enable the apply button. Once you click the apply button, the changes are informed to the parent. In our case we send the data entered or changed by the user so for, to the parent dialog that launched this property page. In real world, the apply button will immediately apply the settings to the application. So before clicking OK, user can observe the effect of the changed settings just by clicking the apply button. So now,. 
The below video shows providing the handler for the Radio button click: Steps 1) FONT property page is opened 2) First Radio button in the group is clicked 3) In the properties pane, navigation moved to control events 4) BN_CLICKED event is double clicked (You will enter the code editor) 5) The process is repeated for other two radio buttons. The same way the EN_CHANGED event for all the three text boxes is provided. The screen shot below shows the request for the event handler for the control event EN_CHANGED:: //Sample 11: we will implement that now. The property page will send the notification to this dialog when the user clicks the apply button of the property page. Have a look at the implementation below: //Sample 12: a new instances of property pages are created when we display it. Now refer the code at section 9.4, you will get an idea of how the data flow of the settings will happen. 1) When the Parent about to display the property page it copies the cached data to the property pages 2) When user clicks the OK button, this OnApply is called. Refer section 9.6 3) When user clicks the Apply button, WM_APPLY user message is sent to the CPropPageSampleDlg The below code will send the WM_APPLY message to the parent dialog: BOOL CPropPageFont::OnApply() { //Sample 13: Set the Modified flag to false, and send message to dialog class user clicks the apply button. As we are just going to send the message to the parent dialog of the property page when Apply button is clicked by the user, providing the overridden version of function in either Font or Color page is sufficient. The below video shows adding the OnApply override: Steps 1) Property page for CPropPageFont is opened 2) In the Property Page Overrides toolbar icon is selected 3) Then, OnApply Override is added to the source code. The above video shows the sample in Action.
http://cppandmfc.blogspot.com/2012/08/mfc-creating-and-using-property-page.html
I am happy to announce the first beta release of Jython 2.1. Jython is an implementation of the Python language for the Java platform. Jython can also be used interactively: just type some Jython code at the prompt and see the results immediately. The list of bugs fixed since the previous release includes: [...]. A complete list of changes and differences is available here: Bugs can be reported to the bug manager on SourceForge: Cheers, the jython-developers

Robert, You can just use the Python httplib module. It works great. The documentation says:

import httplib, urllib
params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
h = httplib.HTTP("")
h.putrequest("POST", "/cgi-bin/query")
h.putheader("Content-type", "application/x-www-form-urlencoded")
h.putheader("Content-length", "%d" % len(params))
h.putheader('Accept', 'text/plain')
h.putheader('Host', '')
h.endheaders()
h.send(params)
reply, msg, hdrs = h.getreply()
print reply # should be 200
data = h.getfile().read() # get the raw HTML

I've modified this script to do exactly what you're trying to do. The two main gotchas are:

- setting the content type to what your servlet is expecting.
- setting the content length correctly.

If you're using XML-RPC, set the content type to "text/xml". Hope this helps, Laurent Fontanel

In the FAQ:

> -----Original Message-----
> From: Ross McDonald [mailto:rossm@...]
> Sent: Monday, December 03, 2001 9:56 PM
> To: jython-users@...
> Subject: [Jython-users] Change to directory in Jython
>
> I am having trouble figuring out how to change to another
> directory with Jython, Python's os.chdir() is missing, can anyone help?
>
> Thanks Ross.
>
> _______________________________________________
> Jython-users mailing list
> Jython-users@...
>

I am having trouble figuring out how to change to another directory with Jython, Python's os.chdir() is missing, can anyone help? Thanks Ross.
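The httplib snippet above is Python 2. On Python 3 (and recent Jython), the same POST is built with urllib.parse and http.client; the host and path in the commented-out part are placeholders, and the request itself is only sketched so that nothing is actually sent here:

```python
from urllib.parse import urlencode

# Build the form body exactly as the httplib example does.
params = urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
headers = {
    'Content-type': 'application/x-www-form-urlencoded',
    'Content-length': str(len(params)),
    'Accept': 'text/plain',
}

print(params)  # e.g. spam=1&eggs=2&bacon=0

# To actually send it (requires a reachable host):
# import http.client
# conn = http.client.HTTPConnection('example.com')
# conn.request('POST', '/cgi-bin/query', body=params, headers=headers)
# resp = conn.getresponse()
# print(resp.status)   # should be 200
# data = resp.read()   # the raw HTML
```

The same two gotchas from the reply still apply: the content type must match what the servlet expects, and the content length must match the encoded body.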
http://sourceforge.net/mailarchive/forum.php?forum_name=jython-users&max_rows=25&style=nested&viewmonth=200112&viewday=3
The new WordPress editor (codenamed Gutenberg) is due for release in version 5.0. Now is the perfect time to get to grips with it before it lands in WordPress core. In this series, I'm showing you how to work with the Block API and create your very own content blocks which you can use to build out your posts and pages. In the previous post, we saw how to use the create-guten-block toolkit to create a plugin containing a sample block ready for us to work with. We can easily extend this as required, but it's a good idea to know how to create a block from scratch to fully understand all aspects of Gutenberg block development. In this post we'll create a second block in our my-custom-block plugin to display a random image from the PlaceIMG web service. You'll also be able to customize the block by selecting the image category from a drop-down select control. But first we'll learn how block scripts and styles are enqueued before moving on to the all-important registerBlockType() function, which is fundamental to creating blocks in Gutenberg. Enqueueing Block Scripts and Styles To add the JavaScript and CSS required by our blocks, we can use two new WordPress hooks provided by Gutenberg: enqueue_block_editor_assets enqueue_block_assets These are only available if the Gutenberg plugin is active, and they work in a similar way to standard WordPress hooks for enqueueing scripts. However, they are intended specifically for working with Gutenberg blocks. The enqueue_block_editor_assets hook adds scripts and styles to the admin editor only. This makes it ideal for enqueueing JavaScript to register blocks and CSS to style user interface elements for block settings. For block output, though, you'll want your blocks to render the same in the editor as they do on the front end most of the time. Using enqueue_block_assets makes this easy as styles are automatically added to both the editor and front end. You can also use this hook to load JavaScript if required. 
But you might be wondering how to enqueue scripts and styles only on the front end. There isn't a WordPress hook to allow you to do this directly, but you can get around this by adding a conditional statement inside the enqueue_block_assets hook callback function.

```php
add_action( 'enqueue_block_assets', 'load_front_end_scripts' );

function load_front_end_scripts() {
    if ( ! is_admin() ) {
        // Enqueue front-end-only scripts and styles here.
    }
}
```

To actually enqueue scripts and styles using these two new hooks, you can use the standard wp_enqueue_style() and wp_enqueue_script() functions as you normally would. However, you need to make sure that you're using the correct script dependencies. For enqueueing scripts on the editor, you can use the following dependencies:

- scripts: array( 'wp-blocks', 'wp-i18n', 'wp-element', 'wp-components' )
- styles: array( 'wp-edit-blocks' )

And when enqueueing styles for both the editor and front end, you can use this dependency:

- array( 'wp-blocks' )

One thing worth mentioning here is that the actual files enqueued by our my-custom-block plugin are the compiled versions of the JavaScript and CSS, and not the files containing the JSX and Sass source code. This is just something to bear in mind when manually enqueueing files. If you try to enqueue raw source code that includes React, ES6+, or Sass, then you'll encounter numerous errors. This is why it's useful to use a toolkit such as create-guten-block, as it processes and enqueues scripts for you automatically!

Registering Gutenberg Blocks

To create a new block, we need to register it with Gutenberg by calling registerBlockType() and passing in the block name plus a configuration object. This object has quite a few properties that require proper explanation. Firstly, though, let's take a look at the block name. This is the first parameter and is a string made up of two parts, a namespace and the block name itself, separated by a forward slash character.
For example:

```jsx
registerBlockType( 'block-namespace/block-name', {
    // configuration object
} );
```

If you're registering several blocks in a plugin then you can use the same namespace to organize all your blocks. The namespace must be unique to your plugin, though, which helps prevent naming conflicts. This can happen if a block with the same name has already been registered via another plugin. The second registerBlockType() parameter is a settings object and requires a number of properties to be specified:

- title (string): displayed in the editor as the searchable block label
- description (optional string): describes the purpose of a block
- icon (optional Dashicon or JSX element): icon associated with a block
- category (string): where the block appears in the Add blocks dialog
- keywords (optional array): up to three keywords used in block searches
- attributes (optional object): handles the dynamic block data
- edit (function): a function that returns markup to be rendered in the editor
- save (function): a function that returns markup to be rendered on the front end
- useOnce (optional boolean): restricts the block from being added more than once
- supports (optional object): defines block-supported features

Assuming we're using JSX for block development, here's what an example registerBlockType() call could look like for a very simple block:

```jsx
registerBlockType( 'my-unique-namespace/ultimate-block', {
    title: __( 'The Best Block Ever', 'domain' ),
    icon: 'wordpress',
    category: 'common',
    keywords: [
        __( 'sample', 'domain' ),
        __( 'Gutenberg', 'domain' ),
        __( 'block', 'domain' )
    ],
    edit: () => <h2>Welcome to the Gutenberg Editor!</h2>,
    save: () => <h2>How am I looking on the front end?</h2>
} );
```

Notice how we didn't include an entry for the description, attributes, useOnce, and supports properties? Any fields that are optional (and not relevant to your block) can be safely omitted. For example, as this block doesn't involve any dynamic data, we don't need to worry about specifying any attributes.
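Only the shape of the block name matters here. As a rough sketch of that naming rule (the exact validation pattern below is my assumption for illustration, not Gutenberg's actual validator), a name can be checked for the namespace/name shape like this:

```javascript
// Hypothetical helper (not part of the Gutenberg API): checks that a
// block name looks like "namespace/name" — lowercase letters, digits,
// and dashes, separated by exactly one forward slash.
function looksLikeBlockName(name) {
  return /^[a-z][a-z0-9-]*\/[a-z][a-z0-9-]*$/.test(name);
}

console.log(looksLikeBlockName('my-unique-namespace/ultimate-block')); // true
console.log(looksLikeBlockName('no-namespace'));                      // false
```

Picking a distinctive namespace up front is worth a moment's thought, since every block you ship will carry it.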
Let's now cover the registerBlockType() configuration object properties in more detail, one by one.

Title and Description

When inserting or searching for a block in the editor, the title will be displayed in the list of available blocks. It's also displayed in the block inspector window, along with the block description, if defined. If the block inspector isn't currently visible then you can use the gear icon in the top right corner of the Gutenberg editor to toggle visibility. Both the title and description fields should ideally be wrapped in i18n functions to allow translation into other languages.

Category

There are five block categories currently available:

- common
- formatting
- layout
- widgets
- embed

These determine the category section where your block is listed inside the Add block window. The Blocks tab contains the Common Blocks, Formatting, Layout Elements, and Widgets categories, while the Embeds category has its own tab. Categories are currently fixed in Gutenberg, but this could be subject to change in the future. Support for custom categories would be useful, for instance, if you were developing multiple blocks in a single plugin and you wanted them all to be listed under a common category so users could locate them more easily.

Icon

By default, your block is assigned the shield Dashicon, but you can override this by specifying a custom Dashicon in the icon field. Each Dashicon is prefixed with dashicons- followed by a unique string. For example, the gear icon has the name dashicons-admin-generic. To use this as a block icon, just remove the dashicons- prefix for it to be recognised, e.g. icon: 'admin-generic'. However, you aren't just limited to using Dashicons as the icon property. The function also accepts a JSX element, which means you can use any image or SVG element as your block icon. Just make sure to keep the icon size consistent with other block icons, which is 20 x 20 pixels by default.
Keywords

Choose up to three translatable keywords to help make your block stand out when users search for a block. It's best to choose keywords that closely relate to the functionality of your block for best results.

```jsx
keywords: [
    __( 'search', 'domain' ),
    __( 'for', 'domain' ),
    __( 'me', 'domain' ),
],
```

You could even declare your company and/or plugin name as keywords so that if you have multiple blocks, users can start typing your company name and all your blocks will appear in search results. Although adding keywords is entirely optional, it's a great way to make it easier for users to find your blocks.

Attributes

Attributes help with managing dynamic data in a block. This property is optional, as you don't need it for very simple blocks that output static content. However, if your block deals with data that could change (such as the contents of an editable text area) or you need to store block settings, then using attributes is the way to go. Attributes work by storing dynamic block data either in the post content saved to the database or in post meta. In this tutorial we'll only be using attributes that store data along with the post content. To retrieve attribute data stored in post content, we need to specify where it's located in the markup. We can do this by specifying a CSS selector that points to the attribute data. For example, if our block contained an anchor tag, we could use its title field as our attribute as follows:

```jsx
attributes: {
    linktitle: {
        source: 'attribute',
        selector: 'a',
        attribute: 'title'
    }
}
```

Here, the attribute name is linktitle, which is an arbitrary string and can be anything you like. For example, we could use this attribute to store the link title <a title="some title"> that's been entered via a textbox in block settings. Doing so suddenly makes the title field dynamic and allows you to change its value in the editor. When the post is saved, the attribute value is inserted into the link's title field.
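To make the 'attribute' source concrete, here is a small stand-in for what it does conceptually: pull a named attribute out of the saved markup via its selector. This regex-based sketch is purely illustrative; Gutenberg's real implementation parses the DOM properly rather than using a regular expression.

```javascript
// Illustrative only: extract the value of `attribute` from the first
// `selector` element in a snippet of saved markup. Real Gutenberg uses
// a DOM parser, not a regex, and handles far more cases.
function extractAttribute(markup, selector, attribute) {
  const re = new RegExp('<' + selector + '[^>]*\\s' + attribute + '="([^"]*)"');
  const match = markup.match(re);
  return match ? match[1] : undefined;
}

const saved = '<p>Visit <a title="some title" href="#">this link</a></p>';
console.log(extractAttribute(saved, 'a', 'title')); // "some title"
```

The key idea is that the attribute's value lives in the saved post markup itself, and the source/selector pair tells the editor where to find it again on load.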
Similarly, when the editor is loaded, the value of the title tag is retrieved from the content and inserted into the linktitle attribute. There are more ways you can use source to determine how block content is stored or retrieved via attributes. For instance, you can use the 'text' source to extract the inner text from a paragraph element.

```jsx
attributes: {
    paragraph: {
        source: 'text',
        selector: 'p'
    }
}
```

You can also use the children source to extract all child nodes of an element into an array and store it in an attribute.

```jsx
attributes: {
    editablecontent: {
        source: 'children',
        selector: '.parent'
    }
}
```

This selects the element with class .parent and stores all child nodes in the editablecontent attribute. If you don't specify a source then the attribute value is saved in HTML comments as part of the post markup when saved to the database. These comments are stripped out before the post is rendered on the front end. We'll be seeing a specific example of this type of attribute when we get into implementing our random image block later in this tutorial. Attributes can take a little getting used to, so don't worry if you don't fully understand them the first time around. I'd recommend taking a look at the attributes section of the Gutenberg handbook for more details and examples.

Edit

The edit function controls how your block appears inside the editor interface. Dynamic data is passed to each block as props, including any custom attributes that have been defined. It's common practice to add className={ props.className } to the main block element to output a CSS class specific to your block. WordPress doesn't add this for you inside the editor, so it has to be added manually for each block if you want to include it. Using props.className is standard practice and is recommended, as it provides a way to tailor CSS styles for each individual block. The format of the generated CSS class is .wp-block-{namespace}-{name}.
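That generated class can be derived mechanically from the block name. As a small sketch (illustrative, not the actual Gutenberg implementation):

```javascript
// Derives the editor-generated CSS class from a "namespace/name" block
// name, following the .wp-block-{namespace}-{name} format described above.
function blockClassName(blockName) {
  return 'wp-block-' + blockName.replace('/', '-');
}

console.log(blockClassName('cgb/block-random-image')); // "wp-block-cgb-block-random-image"
```

This matches the class we'll target in the style sheets for the random image block later in the tutorial.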
As you can see, this is based on the block namespace/name, which makes it ideal to be used as a unique block identifier. The edit function requires you to return valid markup via the render() method, which is then used to render your block inside the editor.

Save

Similar to the edit function, save displays a visual representation of your block, but on the front end. It also receives dynamic block data (if defined) via props. One main difference is that props.className isn't available inside save, but this isn't a problem because it's inserted automatically by Gutenberg.

Supports

The supports property accepts an object of boolean values to determine whether your block supports certain core features. The available object properties you can set are as follows:

- anchor (default false): allows you to link directly to a specific block
- customClassName (true): adds a field in the block inspector to define a custom className for the block
- className (true): adds a className to the block root element based on the block name
- html (true): allows the block markup to be edited directly

The useOnce Property

This is a useful property that allows you to restrict a block from being added more than once to a page. An example of this is the core More block, which can't be added to a page if already present. As you can see, once the More block has been added, the block icon is grayed out and can't be selected. The useOnce property is set to false by default.

Let's Get Creative!

It's time now to use the knowledge we've gained so far to implement a solid working example of a block that does something more interesting than simply output static content. We'll be building a block to output a random image obtained via an external request to the PlaceIMG website, which returns a random image. Furthermore, you'll be able to select the category of image returned via a select drop-down control.
For convenience, we'll add our new block to the same my-custom-block plugin, rather than creating a brand new plugin. The code for the default block is located inside the /src/block folder, so add another folder inside /src called random-image and add three new files:

- index.js: registers our new block
- editor.scss: for editor styles
- style.scss: styles for the editor and front end

Alternatively, you could copy the /src/block folder and rename it if you don't want to type everything out by hand. Make sure, though, to rename block.js to index.js inside the new block folder. We're using index.js for the main block filename as this allows us to simplify the import statement inside blocks.js. Instead of having to include the path and full filename of the block, we can just specify the path to the block folder, and import will automatically look for an index.js file. Open up /src/blocks.js and add a reference to our new block at the top of the file, directly underneath the existing import statement.

```js
import './random-image';
```

Inside /src/random-image/index.js, add the following code to kick-start our new block.

```jsx
// Import CSS.
import './style.scss';
import './editor.scss';

const { __ } = wp.i18n;
const { registerBlockType, query } = wp.blocks;

registerBlockType( 'cgb/block-random-image', {
    title: __( 'Random Image' ),
    icon: 'format-image',
    category: 'common',
    keywords: [
        __( 'random' ),
        __( 'image' )
    ],
    edit: function( props ) {
        return (
            <div className={ props.className }>
                <h3>Random image block (editor)</h3>
            </div>
        );
    },
    save: function( props ) {
        return (
            <div>
                <h3>Random image block (front end)</h3>
            </div>
        );
    }
} );
```

This sets up the framework of our block and is pretty similar to the boilerplate block code generated by the create-guten-block toolkit. We start by importing the editor and front-end style sheets, and then we extract some commonly used functions from wp.i18n and wp.blocks into local variables.
Inside registerBlockType(), values have been entered for the title, icon, category, and keywords properties, while the edit and save functions currently just output static content. Add the random image block to a new page to see the output generated so far. You might be wondering why we didn't have to add any PHP code to enqueue additional block scripts. The block scripts for the my-custom-block block are enqueued via init.php, but we don't need to enqueue scripts specifically for our new block, as this is taken care of for us by create-guten-block. As long as we import our main block file into src/blocks.js (as we did above), we don't need to do any additional work. All JSX, ES6+, and Sass code will automatically be compiled into the following files:

- dist/blocks.style.build.css: styles for editor and front end
- dist/blocks.build.js: JavaScript for editor only
- dist/blocks.editor.build.css: styles for editor only

These files contain the JavaScript and CSS for all blocks, so only one set of files needs to be enqueued, regardless of the number of blocks created! We're now ready to add a bit of interactivity to our block, and we can do this by first adding an attribute to store the image category.

```jsx
attributes: {
    category: {
        type: 'string',
        default: 'nature'
    }
}
```

This creates an attribute called category, which stores a string with a default value of 'nature'. We didn't specify a location in the markup to store and retrieve the attribute value, so special HTML comments will be used instead. This is the default behavior if you omit an attribute source. We need some way of changing the image category, which we can do via a select drop-down control.
Update the edit function to the following:

```jsx
edit: function( props ) {
    const { attributes: { category }, setAttributes } = props;

    function setCategory( event ) {
        const selected = event.target.querySelector( 'option:checked' );
        setAttributes( { category: selected.value } );
        event.preventDefault();
    }

    return (
        <div className={ props.className }>
            Current category is: { category }
            <form onSubmit={ setCategory }>
                <select value={ category } onChange={ setCategory }>
                    <option value="animals">Animals</option>
                    <option value="arch">Architecture</option>
                    <option value="nature">Nature</option>
                    <option value="people">People</option>
                    <option value="tech">Tech</option>
                </select>
            </form>
        </div>
    );
}
```

Here is what it will look like: This is quite different from the previous static edit function, so let's run through the changes. First, we've added a select drop-down control with several choices matching the image categories available on the PlaceIMG site. When the drop-down value changes, the setCategory function is called, which finds the currently selected category and then in turn calls the setAttributes function. This is a core Gutenberg function that updates an attribute value correctly. It's recommended that you only update an attribute using this function. Now, whenever a new category is selected, the attribute value updates and is passed back into the edit function, which updates the output message. We have to complete one last step to get the random image to display. We need to create a simple component that will take in the current category and output an <img> tag with a random image obtained from the PlaceIMG site. The request we need to make to PlaceIMG is of the form: [width]/[height]/[category]. We'll keep the width and height fixed for now, so we only have to add the current category onto the end of the request URL. For example, if the category was 'nature' then the full request URL would end with /nature.
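Assembling that request URL is plain string concatenation. As a sketch (the placeimg.com host is inferred from the service's name, and the 320x220 size is an arbitrary choice, both assumptions rather than values from this tutorial's source):

```javascript
// Builds a PlaceIMG request URL of the [width]/[height]/[category] form
// described above. Host and dimensions here are assumptions.
function buildPlaceImgUrl(width, height, category) {
  return 'https://placeimg.com/' + width + '/' + height + '/' + category;
}

console.log(buildPlaceImgUrl(320, 220, 'nature')); // "https://placeimg.com/320/220/nature"
```

Keeping this in one helper means the component below only has to worry about the category.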
Add a RandomImage component above registerBlockType():

```jsx
function RandomImage( { category } ) {
    // PlaceIMG endpoint with a fixed image size (the exact dimensions are arbitrary).
    const src = 'https://placeimg.com/320/220/' + category;
    return <img src={ src } alt={ category } />;
}
```

To use it, just add it inside the edit function and remove the static output message. While we're at it, do the same for the save function. The full index.js file should now look like this:

```jsx
// Import CSS.
import './style.scss';
import './editor.scss';

const { __ } = wp.i18n;
const { registerBlockType, query } = wp.blocks;

function RandomImage( { category } ) {
    const src = 'https://placeimg.com/320/220/' + category;
    return <img src={ src } alt={ category } />;
}

registerBlockType( 'cgb/block-random-image', {
    title: __( 'Random Image' ),
    icon: 'format-image',
    category: 'common',
    keywords: [
        __( 'random' ),
        __( 'image' )
    ],
    attributes: {
        category: {
            type: 'string',
            default: 'nature'
        }
    },
    edit: function( props ) {
        const { attributes: { category }, setAttributes } = props;

        function setCategory( event ) {
            const selected = event.target.querySelector( 'option:checked' );
            setAttributes( { category: selected.value } );
            event.preventDefault();
        }

        return (
            <div className={ props.className }>
                <RandomImage category={ category } />
                <form onSubmit={ setCategory }>
                    <select value={ category } onChange={ setCategory }>
                        <option value="animals">Animals</option>
                        <option value="arch">Architecture</option>
                        <option value="nature">Nature</option>
                        <option value="people">People</option>
                        <option value="tech">Tech</option>
                    </select>
                </form>
            </div>
        );
    },
    save: function( props ) {
        const { attributes: { category } } = props;
        return (
            <div>
                <RandomImage category={ category } />
            </div>
        );
    }
} );
```

Finally (for now), add the following styles to editor.scss to tidy up the select control inside the editor:

```scss
.wp-block-cgb-block-random-image {
    select {
        padding: 2px;
        margin: 7px 0 5px 2px;
        border-radius: 3px;
    }
}
```

You'll also need some styles in style.scss.
```scss
.wp-block-cgb-block-random-image {
    background: #f3e88e;
    color: #fff;
    border: 2px solid #ead9a6;
    border-radius: 10px;
    padding: 5px;
    width: -webkit-fit-content;
    width: -moz-fit-content;
    width: fit-content;

    img {
        border-radius: inherit;
        border: 1px dotted #caac69;
        display: grid;
    }
}
```

Whenever the page is refreshed in the editor or on the front end, a new random image will be displayed. If you're seeing the same image displayed over and over, you can do a hard refresh to prevent the image being served from your browser cache.

Conclusion

In this tutorial we've covered quite a lot of ground. If you've made it all the way through then give yourself a well-deserved pat on the back! Gutenberg block development can be a lot of fun, but it's definitely challenging to grasp every concept on first exposure. Along the way, we've learned how to enqueue block scripts and styles and covered the registerBlockType() function in depth. We followed this up by putting theory into practice and creating a custom block from scratch to display a random image from a specific category using the PlaceIMG web service. In the next and last part of this tutorial series, we'll add more features to our random image block via the settings panel in the block inspector. If you've been following along with this tutorial and just want to experiment with the code without typing everything in, you'll be able to download the final my-custom-block plugin in the next tutorial.
https://code.tutsplus.com/tutorials/wordpress-gutenberg-block-api-creating-custom-blocks--cms-31168
Media Section Index | Page 4

What audio formats do Java Sound and/or the Java Media Framework support?
Supported Java Sound formats are outlined in the Java Sound FAQ. The Java Media Framework (JMF) builds on top of the Java Sound engine and APIs. JMF provides support for quite a few additional …

Will Java 3D replace VRML?
Java 3D will not replace VRML. In fact, the two are largely complementary. VRML is predominantly a standard file format for 3D data for the Web, while Java 3D is predominantly a 3D gra…

Can I capture video input from a video camera using Java?
Yes, you can use the capture functionality in the Java Media Framework (JMF) to capture a video stream from an attached video camera. For more information, please refer to the JMF spec and docu…

How can I save a graphics object into a bitmap or vector file format (such as a GIF or WMF) using Java 2D?
Some support for JPEG streams is provided in Java 2 via Java 2D, but not as a core platform API. JPEG is supported via the com.sun.image.codec.jpeg package. JPEG streams can in turn be …

How can I take snapshots of Java components and save them into image files?
Java 2D's BufferedImages and Sun's JPEG encoder package make short work of this. In order to save snapshots from java.awt.Components into JPEG files, you simply: Create a BufferedImage …

Are there any MPEG encoders written in Java?
The specifications for MPEG codecs are asymmetric. That is to say, it takes much more work to encode an MPEG bitstream than it does to decode it. This is by design, so that decoders can…

Where can I find more information about Java image manipulation and processing?
Check out Jonathan Knudsen's columns on java.oreilly.com plus the archive of my Media Programming JavaWorld columns.

Is it possible to reduce the size of an image with Java? (I want to create thumbnails of an image.)
Yes. Use a BufferedImage to retrieve the image and an AffineTransform to scale it.
Here is some example code:

```java
public class Thumbnail extends ImageIcon {
    public Thumbnail(ImageIcon original…
```

How can I produce a fade-out or fade-in image effect?
You can fade an image using a Java 2D convolve operation, akin to the edge detection discussed in my JavaWorld column, "Image processing with Java 2D".

How can I create a grayscale image from a color image?
BufferedImage is a powerful new capability provided by Java 2. You can use the BufferedImage to access the pixel-by-pixel RGB information to decide if the pixels fall within some arbitr…

Is it possible to save a modified Java 3D scene graph back into a VRML .wrl file?
I am not aware of a Java 3D scene graph-to-VRML file exporter, though something should conceivably be possible to write (I think... not having done it yet myself). You should be able to …

Where can I get the javax.vecmath package?
The vector math optional package is currently only included with Java 3D implementations, although vector math support is certainly useful for other applications besides 3D apps. Download Java…

Are there any books about the Java Media Framework?
Yes: Core Java Media Framework; Essential JMF: Java Media Framework; Programming with the Java Media Framework.

Where can I find class hierarchy diagrams of the Java 3D classes?
There is a rather large one of the javax.media.j3d package from Java 3D Land.
http://www.jguru.com/faq/client-side-development/media?page=4
Are static typing and refactoring really connected?

One of the main problems brought out when comparing dynamic languages to static ones is the lack of proper refactoring support. It is usually implied that dynamic languages are not conceptually refactorable, which speeds up code rotting. Although there is plenty of evidence that dynamic languages do support refactoring, I'd like to concentrate on the other claim -- that statically typed languages are refactorable. Challenging this claim may seem laughable, as there is no lack of refactoring tools for Java or C#. But let's examine a more advanced language that is touted as a Java successor -- Scala. Scala supports structural types, which allow treating classes as records of methods that can be subtyped by the presence of appropriate methods. This example was given in the Scala 2.6 release notes:

```scala
class File(name: String) {
  def getName(): String = name
  def open() { /*..*/ }
  def close() { println("close file") }
}

def test(f: { def getName(): String }) {
  println(f.getName)
}

test(new File("test.txt"))
test(new java.io.File("test.txt"))
```

In this code the type { def getName(): String } refers to any class with the method getName(): String in it. Now what happens if we try to rename the method in the structural type?

- We can rename all the instances of the getName() methods found in all classes anywhere.
- We can just rename the method in the structural type and update everything else manually.

Both of these approaches are useless. The first one is basically a search and replace done on all code and may rename methods that we never intended to rename (e.g. getName() in a Person class). The second one doesn't really do anything for us. The truth of the matter is that structural types miss an inherent scope associated with nominative (i.e. usual) types.
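For comparison, TypeScript's type system is structural as well, and the same rename problem can be reproduced there. In this sketch (all names are invented for illustration), nothing ties the two classes to the structural type except their shape, so a rename on the type alone has no single originating declaration to scope it:

```typescript
// A structural type: any object with getName(): string qualifies.
type Named = { getName(): string };

class TextFile {
  constructor(private name: string) {}
  getName(): string { return this.name; }
}

class Person {
  constructor(private name: string) {}
  getName(): string { return this.name; }
}

function getLabel(f: Named): string {
  return f.getName();
}

// Both classes satisfy Named purely by shape — there is no declared
// relationship, so renaming getName in Named alone would silently
// exclude both classes from the type.
console.log(getLabel(new TextFile('test.txt'))); // "test.txt"
console.log(getLabel(new Person('Alice')));      // "Alice"
```

The example shows exactly the dilemma described above: a tool renaming Named.getName must either touch every same-named method everywhere or break all current call sites.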
Since every method signature in a nominative type originates from a single type, it gives refactoring a natural scope: all the subtypes of the originating type. Without that scope, many refactoring techniques are essentially useless. What is worse is that the presence of structural types also breaks refactoring in usual classes. For example, if we try renaming getName() in the File type, we are also presented with a decision whether or not to rename the method in the structural type. And if we do rename it, we will break the code that accesses java.io.File the same way. Therefore, if we want to refactor working code to working code, we can again only rename everything or nothing at all. Luckily, it looks like the main refactorings broken by structural types are renaming methods and changing their signatures. Unluckily, these are the most common refactorings, and having a same-named method in any of the structural types breaks refactoring also in the usual classes. At the moment this mainly affects Scala and some other functional languages, but if structural types become more widespread it may come to a language near you :) Interestingly, Cedric Beust brought out that you can refactor structural types as opposed to duck types. Since I obviously think differently, it would be interesting to hear his (and your) comments on the matter. Perhaps I'm missing something obvious?

Jevgeni Kabanov is the Tech Lead of ZeroTurnaround, makers of JavaRebel. You can find more of his writing at the blog.

Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/are-static-typing-and-refactor
Today, I'll walk you through building a simple mobile application's UI completely in code, without the use of Storyboards or NIBs. I will not go into the debate of which approach is better because, simply, they all have their pros and cons, so I'll leave that discussion for another time.

Overview

This tutorial was written using Xcode 9 and Swift 4. I also assume that you're familiar with Xcode, Swift, and CocoaPods. Without further delay, let's start building our project: a simple Contact Card application. This tutorial aims to teach you how to build your application's UI in code, and as such, it will not contain any logic about the application's functionality unless it serves this tutorial's purpose.

Setting up the project

Start by firing up Xcode -> "Create a New Xcode Project". Select "Single View App" and press "Next". Name the project anything you'd like; I chose to call it "ContactCard", for no obvious reasons. Untick all three options below and, of course, choose Swift to be the programming language, then press "Next". Choose a location on your computer to save the project. Uncheck "Create Git Repository on my Mac" and press "Create". Since we won't be using Storyboards, or NIBs for that matter, go ahead and delete "Main.storyboard", which can be found in the Project Navigator. After that, click on the project in the Project Navigator and, under the "General" tab, find the section that says "Deployment Info" and delete whatever's written next to "Main Interface"; usually it is the word "Main". This is what tells Xcode which Storyboard file to load on application startup, but since we just deleted "Main.storyboard", leaving this value would crash the app, as Xcode would not find the file. So go ahead and delete the word "Main".
Creating the ViewController

At this point, if you run the application, a black screen will appear, as the application now does not have any source of UI to present to the user, so the next part is where we will provide it with one. Open "AppDelegate.swift" and inside application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?), insert this snippet of code:

```swift
self.window = UIWindow(frame: UIScreen.main.bounds)
let viewController = ViewController()
self.window?.rootViewController = viewController
self.window?.makeKeyAndVisible()
```

What this does is basically provide a window for the user's interaction with the application. This window's view controller is the one provided with the project on creation, which can be found in "ViewController.swift". Just as a quick test that everything's working, head to "ViewController.swift" and inside the viewDidLoad() method, insert the following line:

```swift
self.view.backgroundColor = .blue
```

Now run the application on your preferred simulator device. A useful shortcut to navigate between files in Xcode is "⇧⌘O": type the file's name, or even a piece of code that you're looking for, and a list of matching files will come up on the screen from which you can choose. After running the application, this should be the result on your simulator's screen: Of course we won't use that hideous blue, so just turn the background back to white by replacing .blue with .white inside viewDidLoad().

Laying out the UI

For laying out our UI, we'll be using a very helpful library that will make our lives so much easier: PureLayout. Its repo can be found on GitHub. To install PureLayout, you should first open up your terminal and "cd" into the project's directory. You can do this by typing cd, then a space, then dragging and dropping your project's folder into the terminal and pressing "Enter".
Now run the following commands inside the terminal:

```shell
pod init
pod install
```

This should be the output of your terminal after running the second command: After that, close Xcode, open the project's folder in Finder, and you should find something called "<Your Project Name>.xcworkspace". This is what we'll be opening to access our application whenever we need to use CocoaPods. Now locate the file called "Podfile" and write the following line under the phrase use_frameworks!:

```ruby
pod 'PureLayout'
```

Run pod install in your terminal again and then build your project by pressing "Command + B".

Coffee break

Now that everything is set up, let's start with the real work. Head over to "ViewController.swift" and grab a cup of coffee, because here's what the final result will look like:
This means that lazy variables don’t get initialized when the view controller is initialized, but rather they wait until a later point when they are actually needed, which saves processing power and memory space for other processes. These are especially useful in the case of initializing UI components. PureLayout in action As you can see inside the initialization, this line imageView.autoSetDimensions(to: CGSize(width: 128.0, height: 128.0)) is PureLayout in action. With just a single line, we set a constraint for both the height and width of the UIImageView and all the necessary NSLayoutConstraints are created without the hassle of dealing with their huge function calls. If you’ve dealt with creating constraints programmatically, then you must have fallen in love by now with this brilliant library. To make this image view round, we set its corner radius to half its width, or height, which is 64.0 points. Also, for the image itself to respect the roundness of the image view, we set the clipsToBounds property to true, which tells the image that it should clip anything outside the radius that we just set. We then move to creating a UIView that will serve as the upper part of the view behind the avatar which is colored in gray. The following lazy var is a declaration for that view: lazy var upperView: UIView = { let view = UIView() view.autoSetDimension(.height, toSize: 128) view.backgroundColor = .gray return view }() Adding subviews Before we forget, let’s create a function called func addSubviews() that adds the views we just created (and all the other views that we’re going to create) as subviews to the view controller’s view: func addSubviews() { self.view.addSubview(avatar) self.view.addSubview(upperView) } And now add the following line to viewDidLoad(): self.addSubviews() Setting up constraints Just to get a picture of how far we are, let’s setup the constraints for these two views. 
Create another function called func setupConstraints() and insert the following constraints: func setupConstraints() { avatar.autoAlignAxis(toSuperviewAxis: .vertical) avatar.autoPinEdge(toSuperviewEdge: .top, withInset: 64.0) upperView.autoPinEdgesToSuperviewEdges(with: .zero, excludingEdge: .bottom) } Note that autoPinEdgesToSuperviewEdges(with:excludingEdge:) pins the top, left, and right edges of upperView in a single call, so separate left and right pins would be redundant. Now inside viewDidLoad() call setupConstraints() by adding its function call as follows: self.setupConstraints(). Add this AFTER the call to addSubviews(). This should be the final output: Bring to front Oops, that doesn’t seem right. As you can see, our upperView lies on top of the avatar. This is because we added avatar as a subview before upperView, and since subviews are arranged in a stack of sorts, it is only natural to see this result. To fix this, we could simply swap those two lines, but there is also another trick that I want to show you: self.view.bringSubview(toFront: avatar). This method will bring the avatar all the way from the bottom of the stack right back to the top, no matter how many views were above it. So choose whichever method you’d like. Of course, for readability it’s better to add the subviews in the order they should appear above one another, if they ever happen to intersect, keeping in mind that the first added subview will be at the bottom of the stack, so any other intersecting view will appear on top of it. And this is how it should really look:
Just do the following: lazy var segmentedControl: UISegmentedControl = { let control = UISegmentedControl(items: ["Personal", "Social", "Resumè"]) control.autoSetDimension(.height, toSize: 32.0) control.selectedSegmentIndex = 0 control.layer.borderColor = UIColor.gray.cgColor control.tintColor = .gray return control }() I believe everything is clear; the only new thing is that upon initialization we provide it with an array of strings, each representing one of our desired section titles. We also set selectedSegmentIndex to 0, which tells the segmented control to highlight/choose the first segment upon initialization. The rest is just styling, which you can play around with. Now let’s go ahead and add it as a subview by inserting the following line at the end of func addSubviews(): self.view.addSubview(segmentedControl) and its constraints will be: segmentedControl.autoPinEdge(toSuperviewEdge: .left, withInset: 8.0) segmentedControl.autoPinEdge(toSuperviewEdge: .right, withInset: 8.0) segmentedControl.autoPinEdge(.top, to: .bottom, of: avatar, withOffset: 16.0) Take a moment to wrap your head around these. We tell the segmented control that we want to pin it to the left side of its superview, but with a little bit of spacing instead of sticking it directly to the edge of the screen. If you notice, I’m using what is called an eight-point grid, where all spacings and sizes are a multiple of eight. I do the same on the right side of the segmented control. As for the last constraint, it simply pins its top to the bottom of avatar with a spacing of 16 points. After adding the constraints above to func setupConstraints(), run the code and make sure that it looks like the following: Adding a button Now comes the last piece of UI for this small tutorial, which is the “Edit” button.
Add the following lazy variable: lazy var editButton: UIButton = { let button = UIButton() button.setTitle("Edit", for: .normal) button.setTitleColor(.gray, for: .normal) button.layer.cornerRadius = 4.0 button.layer.borderColor = UIColor.gray.cgColor button.layer.borderWidth = 1.0 button.tintColor = .gray button.backgroundColor = .clear button.autoSetDimension(.width, toSize: 96.0) button.autoSetDimension(.height, toSize: 32.0) return button }() Don’t be alarmed by how big the initialization is, but pay attention to how I set the title and its color by calling button.setTitle and button.setTitleColor. We cannot set a button’s title by assigning to its titleLabel directly because a button has several states (normal, highlighted, disabled, and so on), and many people find it convenient to have different titles/colors for different states. Now add the button as a subview like the rest of the components, and add the following constraints to have it appear where it is supposed to be: editButton.autoPinEdge(.top, to: .bottom, of: upperView, withOffset: 16.0) editButton.autoPinEdge(toSuperviewEdge: .right, withInset: 8.0) Here we only set the right and top constraints for the button; since we already gave it a fixed size, it won’t stretch and needs nothing else. Now go ahead and run the project to see the final result: Some final notes Play around and add as many UI elements as you want. Try to re-create any application’s views that you find challenging. Start simple and build up from there. Try to draw the UI components on a piece of paper so you can imagine how they fit together. Hopefully, I’ll extend this tutorial to include a Table View, Navigation Bar and a Tab Bar in the future. If you have any questions or comments, please don’t hesitate to contact me! Learn how we implemented View Hierarchy at Instabug.
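As a closing aside on the lazy variables used throughout this tutorial, here is a tiny standalone sketch showing that a lazy property's initializer closure runs only on first access, not when the owning object is created. The class and property names here are made up purely for illustration:

```swift
class ProfileController {
    // This closure does NOT run when ProfileController is initialized;
    // it runs the first time `greeting` is accessed, and only once.
    lazy var greeting: String = {
        print("building greeting now")
        return "Hello"
    }()

    init() {
        print("controller initialized")
    }
}

let controller = ProfileController()   // prints "controller initialized"
let text = controller.greeting         // now the closure runs
print(text)                            // prints "Hello"
```

This is exactly why lazy is a good fit for UI components like our avatar and editButton: the view controller can be created cheaply, and each view is only built when it is actually added to the hierarchy.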
Achieve higher C/C++ code quality with CppDepend CppDepend is a tool based on Clang that simplifies managing a complex C/C++ code base. Architects and developers can: - Analyze code structure, - Specify design rules, - Do effective code reviews, - Master evolution by comparing different versions of the code. How Can CppDepend Help You Improve Your Code Base Quality CppDepend simplifies managing a complex C/C++ code base. You can analyze code structure, specify design rules, do effective code reviews and master evolution by comparing different versions of the code. Automate your C/C++ code review The CQLinq code query language gives you the flexibility to create custom queries and get a deep view of your code base. With CQLinq you can automate your code review and integrate it into your build. Assist your refactoring and migration Understanding the existing code base is essential before any refactoring or migration. CppDepend can be very useful for auditing the code base before refactoring, and it also helps in your migration process. Key Features CppDepend manages complex code bases and helps achieve high code quality. With CppDepend, software quality can be measured using code metrics, visualized using graphs and treemaps, and enforced using standard and custom rules. - Code Query LINQ (CQLinq): Around 120 default queries and rules are provided when you create a new CppDepend project. They are easy to read and easy to adapt to your needs. - Compare Builds: CppDepend can tell you what has changed between two builds, and it does more than simple text comparison: it can distinguish between a comment change and a code change. - 82 Code Metrics: CppDepend comes with 82 code metrics. Some of them relate to your code organization (the number of classes or namespaces, the number of methods declared in a class, ...)
- View all CppDepend features They use CppDepend "Using CppDepend is like climbing an observation tower and looking at the work you have done in the excavation of your project: you will find astonishing details you never thought about, just by changing your point of view."
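To give a flavor of the CQLinq rules mentioned above, here is a sketch of what a custom rule flagging overly long methods typically looks like. The metric name NbLinesOfCode and the 30-line threshold are assumptions based on common CQLinq examples; check your CppDepend metric reference for the exact property names available in your version:

```csharp
// Hypothetical CQLinq rule: warn when any method exceeds 30 lines of code.
warnif count > 0
from m in Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }
```

Rules like this can run on every build, so a violation surfaces as a warning in the build output instead of waiting for a manual review.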
Re: why not?what is there too lose? - From: "Jerry Okamura" <okamuraj005@xxxxxxxxxxxxx> - Date: Sat, 1 Dec 2007 15:49:55 -1000 Here is my prediction. Oil is a finite resource. Someday, don't know when, demand for oil WILL start to outstrip supply. Is that going to result in some sort of catastrophe? Most likely not. What will happen will most likely happen gradually, assuming of course governments do not stick their noses in the process, which is also not likely...politicians ALWAYS think they are smarter than anyone else. But for the sake of discussion, let us say that they stay the heck out of trying to "manipulate the market". What will happen? Will we have an economic collapse? Not very likely. What is more likely is that as demand for oil outstrips the supply of oil, the price of oil will start to rise (supply and demand at work). As the price of oil rises, alternatives which are now barely cost effective will become the cost-effective alternatives. Wind, solar, thermal, nuclear power will become the energy sources of choice for electricity. As gasoline prices rise, alternatives to gasoline powered cars will become the automobiles of choice. Technologies which are now not cost effective, like getting oil from coal tar and oil shale, will become more cost effective. Drilling for oil in deeper and deeper waters will become cost effective as the price of oil continues to rise. Known oil reserves which are now off limits, like ANWR and off our coast line, will be exploited, and anyone who gets in the way of such exploration will be brushed aside. And as the price of oil continues to rise (demand still outstripping supply), the price will continue to rise to the point that even a person in denial cannot avoid what is happening. And what happens "if" technology cannot fill the void? At that point, governments all over the world will pour R&D money into developing new technologies in order to try to avoid an economic catastrophe.
In time, rising prices and new alternatives will solve the problem.... It requires very little government interference, and government interference may actually make the solution even harder to achieve. "V" <vfr44@xxxxxxx> wrote in message news:70cc9e8a-043a-4ca9-a1eb-3f7f1dcd9e8c@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx On Nov 27, 2:39 pm, emuamericaneagle <davse...@xxxxxxx> wrote: With the rising costs of health care, why not try out a universal system? I can see the fear in trying a universal system if we were the first to try it, but there are several countries functioning with a universal system. At least with a universal system everybody will have some type of coverage and access. Health care premiums alone right now can break your bank, and on top of that people have co-pays, deductibles and coinsurances! Then your insurance can turn around and deny something simply because they say it was a pre-existing condition or not medically necessary. It is horrible that people who become diagnosed with something like cancer might think "man, how am I going to pay for this" instead of "what are my chances of beating this". We should only have to worry about keeping ourselves healthy or getting healthy, and not the cost of it all. Hopefully the future holds some type of universal system, because if the cost of health insurance continues to rise then there will be even more of us uninsured. Healthcare???? You think it is political biz as usual in the US? If the brainiacs cannot 'kinda' replace crude with a sustainable alternative we are headed for disaster. So Dem or Rep...any politician in charge had better come to terms with how things really are and not live in dream land...we are running out of time. It seems everyone wishes to hide their heads in the sand. And in the big picture, we can't fix the problem, we can only postpone the inevitable. But buying a little more time would make things much more livable in the not so distant future than the current path we are headed down.
Ten billion people can't burn the trees! (Ten billion people is a conservative estimate of world population in the not so distant future. We are at 7 billion people now.) The World Coal Institute estimates world energy reserves as follows: "At current production levels coal will be available for at least the next 155 years compared to 41 years for oil and 65 years for gas." Even though this was written a few years ago and it is based on 'current production and consumption', it gives the same haunting message to the generations to come. We may not exactly see the end of our free flowing energy as we know it - but some of our descendants will in the not so distant future. This is the legacy they will inherit from us. But before the energy dries up completely, massive changes in our world will have taken place. You see, no other animal destroys its environment except mankind. We are the only ones that do not accept and live within our comfortable means. We not only go into debt with our finances, we go into debt with our environment. What we are borrowing in terms of petroleum, coal and natural gas takes millions of years for nature to make. Yet we are using it all up in just a few hundred years...we can never pay it back. I think our country's future will be...'America...a Democratic, Communist Nation Under God.' That would 'hopefully' separate us from the atheist based communists that have been run as dictatorships. Without energy our country is open for takeover ... no jets...no tanks...no transport on the ground or in the air. Luckily we will still have nuclear powered submarines and aircraft carriers as long as the uranium holds out. But the jets on the flattop all use jet fuel. All the supplies for those subs and carriers are petroleum dependent. So long before the crude dries up, the government must 'secure a supply' of crude for its own needs. Other countries such as Russia have a good supply of their own...that's why we elect politicians to deal with these troubles.
When it comes to the future, I see people living in miniature houses (the lucky ones that survive, that is; after all, most of the population died off long ago from starvation, freezing to death or from the riots) with roofs shingled completely with solar material. They drive up to their house on an electric scooter that is recharged from their solar roof. If they are higher up the totem pole they may have a solar golf cart. But in either case, luck must still be on their side, for without the sun shining to charge it, their transportation sits idle. (Not much lead left to build big batteries...China gobbled it all up, so we have to make do with very small storage cells.) They work for the government and in exchange the government feeds and clothes them from their warehouses. You see, we have become a sort of 'Communist Democracy', for without that bold leap and a desire 'to put our country first', Russia or China would have stepped in to acquire some new real estate. The warehouses are fed from government owned coal fired steam locomotives. Diesel dried up long ago, so it was either wood or coal to fuel the trains. It did not take our government long to realize this. The electric plants only had to shut down sporadically for 8 months or so until they could build the first of a large fleet of steam locomotives. This was a 'slight' government oversight. They never figured that the coal fired power plants were fed by 'diesel powered' locomotives. They stayed focused on the prediction that we had hundreds of years of coal left, but were oblivious as to how that coal is delivered to the power plant. But all these changes have some bright spots in them. The coal producers were able to hire many more workers to manually mine coal, as the diesel powered mining equipment sits idle from lack of diesel fuel. Now, some of the states or bigger cities had the foresight to build one or two electric rail trolleys for public transport.
Your only problem is getting to the main road to catch the trolley, and then it is a straight ride to the government warehouse. What happened to private industry and money? Money is nothing more than stored energy. But since the crude dried up, the 'real energy' behind the money has vanished...and so did private industry. What about the coal mines? All government owned. If you want to eat, you work...it is that simple. So, what is money good for nowadays...to wipe your ass? Not really, the government supplied toilet paper works better than that. The Martha Stewart syndrome died out long ago; now people are happy to eat rice and beans and get a clean glass of water to drink. After all, the government can't afford to fool around decorating everyone's house; they can hardly produce enough food to keep a fraction of the population alive. Yes, tractors, reapers and farming are very crude intensive...but no one bothered to think about that as they continued to squander the world's petroleum resources. On a positive note, since most of the population died off from 'natural causes', the government does not have to worry about passing 'population control' any longer. They tried to get that universally opposed program passed for many years, but the public just would not go for it...too un-American...goes against our religious upbringings...too controversial and all the rest. We can still hear the cries now...Communist!...Atheist!...Baby Killer...Hitler...Impeach the President!!!! Such objections are only subjective and prejudicial states of mind. As such, all problems related to 'controversial subjects' such as this are problems created in the mind...the mind of ego based, prejudicial man. If you find yourself being distracted with such thoughts as 'too controversial', just ask yourself if the proposed controversy is true, false, or 'I don't know'. This introspective method may help you become truth based and not ego based.
You will have made a 'choice divorced of need'...you won't 'need your ego' to support the truth...the truth will be able to stand on its own. But nature helped us humans out with that hard decision - for nature does not discriminate, nor find the truth too controversial or provocative or opinionated to be true. And in the end, nature settled the dispute of population control with the even handed justice of 75% of our population dying off, ever reminding us all that nature does not bow to man...it is always man that bows to nature. But people hold no grudges against nature and are more in harmony with nature, and enjoy a simpler life nowadays. People pick pine needles from trees to make their tea, since there is no jet fuel to import any Darjeeling tea or coffee. Once in a while people are able to kill a bird, a rat or a cat to supplement their diet - so we can still find a place of gratitude in our life for such gifts. Of course one problem still haunts the world. The last remaining buckets of crude will soon be gone, and they have still not found out how to make the tires for the solar powered golf carts and scooters without that critical ingredient of crude oil. Add it all together and you have 'America...a Democratic, Communist Nation Under God.' as the 'best fit' equation. And for dessert add 'politics as usual' and we can see nothing substantive will be done in the US to fix our energy woes until it is too late. (Really it can't be fixed; we can only slow down the inevitable at this point.) See: BTW, do I like communism? No, I like things EXACTLY as they are. But what I like doesn't matter...neither does what you like matter. That's the point. The atheists can still be atheists and the Christians, Muslims and Jews can still worship as they like...that is why we would be a free democracy...of sorts.
But the difference is, instead of the ego based decisions that politicians and the titans of business get sucked into, they will put long term viability as the top priority over personal profit. We must all pull together and stop pulling in counterproductive directions. The gov needs to cut the fat and stop all this foolish sickness that they are addicted to in Washington. Hire yourself some truth based philosophers and futurists, as Socrates suggested in the Republic, as an oversight committee to keep you guys on track. One important thing would be to add an addendum to the constitution or bill of rights or whatever other documents that outlines what we are 'now' all about...something that is clear advice that we can all look to, and not the 1000 page BS that politicians use to hide their sickness. And yes...hiding behavior is a signpost of die-ease. And put it right upfront in the addendum as to why things changed...we were energy whores and had no other choice. But realize this: throughout history, many great nations that once were are not around any longer. Hopefully the US will understand this and start accepting the truth that something has to give and it can't be business as usual...it doesn't matter what you like...it doesn't matter what you hope for...all that really matters is what is. See: Take care, V (Male) Agnostic Freethinker Practical Philosopher Futurist
- Cann't click the radio for extjs - If condition in groupTextTpl - I Am Going Crazy With This Column Renderer ?? - Tab - Reload content - problem in using Ext.Template and Ext.DataView - StartMenu in Preference - Problems with EXTJS 2.2. porting - New Event handling - Tab Panel add tool icon in top - HTTP status error 411 with ajax request on Mozilla - UploadPanel by Saki - Upload Button Click ? - Collision between ExtJs and prototype.js - add a custom style to the south and north region in border layout - Action.Submit() returning failure - iconCls, cls and CakePHP - confusion with some datagrid events - Trying to create a row above column headers as filters - Tree Nodes - how can I pass parameter to Ext.window - Auto grid - RowExpander issue - editorgrid panel in the form posts the last inserted value - Trying to call the parent's element - JSON file retreival - automating functional tests - aligning icon + text in Grid column value - Show full rows. - tree event isn't working - [SOLVED] Copy, clone or duplicate a JsonStore - Problem with Re-rendering - fileupload and extension file - Dynamically change content in TabPanel - QuickTips and Internet Explorer - A simple question about label - Custom percentage renderer - Issues with IE - Original text in waitMsg window - GroupingStore Collapsed With Pagging - Problem with IE6 and ScriptTagProxy - ComboBox with HTML in the display values - deferring function calls in ext - Changing fieldItems of reader in store - [2.2] RadioGroup Change event - Custom Tree Re-ordering - doLayout() - bbar button iconCls? - [SOLVED] Can't remove items directly from a FormPanel - opening a new window when select a row in a grid - MixedCollection Destruction - Interesting phenomenon. 
Combobox once loads once not (randomly) - accordion layout render problem - Pages inside a panel - How to fill DataView with Ext.data.JsonReader - Label => textfield => checkbox INLINE - Cant apply font styling to Ext.Editor - [Solved]Menu north not displaying - Dropdown in Toolbar - How to custom call member functions of Ext.Window - Change handler - Textfield validation mask problem - Border collapse not as expected - update form components from grid - Function undefined - Call from other file - Problem when loading viewport with tabpanel at first time - button location FormPanel changes on frame:true - ExtJs working in Tomcat - [SOLVED]Internet Explorer cannon open the Internet site - Items dinamically to accordion - Private Methods - gridpanel horizontal scrolling not showing - beforeresize - Problem with building valid JSON with PHP/MySQL - Problem with the autocomplete search.... - How to add Buttons in Horizontal Layout - ComboBox is null - how to write name of column in two rows - node text in tree panel - How to load custom JSON without page refresh - Problem in adding plugins - disabledDates doesn't work - [SOLVED] Combobox allowBlank:false does not validate ? - Ext tree in framset - Problem with Ext.Resizable. - how to display image in form.....? - ownerCt is null or is not object - radio field in editorgridpanel to select ONE - How to reference parent object? - can I do redirect when using autoload? - Open window with jsp content - EditorGrid With Slider Editor ?? - problem when trying to destroy and rebuild store - Problem with combo box - Rendering error in FF with a DateField box - [SOLVED] Add an icon for title of ext.Window - Custom fields in EditorGrid - Panel title text at bottom - How to open window in parent - How to get the node value dynamically to provide qtip - Ext JS Portal - [SOLVED] How to add custom html to drag'n'drop tree nodes? 
- Issue with form to FormPanel - [GridPanel] Highlight row and column on hover - How to do a grid with add new row and delete row ? - how to set the configuration by backend value? - Async tree no node param - [v3] ArrayStore, SimpleStore, Element.legacy - [Tooltip]Tooltip that appear on focus and disappear on blur - Minimize Window in Ext.StatusBar - problem with layout-browser - layout broken - MY Grid dont ready php - Display loading in reconfigure - date-picker rendering problem - [Give Up] File upload example don't take full path with FF - Long Object declarations - [SOLVED] Form Load() - mapping has problem? - Pass a single integer from cakephp to ext js - why me?! - Trying to understand Json - Learn Ext-js - colour tweak - background - Centered layout with fixed width - How create search autocomplete with textbox and PHp - Date filter in Grid - How to create some really excellent frame functionality - 'associatedDiv.style' is null or not an object - Combobox Not loading Data from store - Grid Filtering date "ON" doesn't work - JsonStore Time Out. - DomQuery question - How to create a row number column with left alignment in grid - How to set radio stat of radioGroup - Image Croping - ComboBox not working - ComBox Problem (Very urgent please) - Solved: Best way to get a data field from the selected grid row - Paging Problem (very urgent please) - Ext.Button fire more than once when continually click - [ASK] Scrolling TabPanels - How can I filter tree? - stateful - 3rd level menu in accordion control - form.load failed - Using Combobox in a Form - Ultilize which component to create menu buttons? 
- How to return data from the server that is not XML or JSON - Charset encoding in form and grid - consulting cost for developing custom components - FormPanel.setValues on a combobox using a local store - Internal Window Images refresh - How to make on particular cell in editor grid as noneditable textfield - Vtype within a namespace not working - How hide a rows in grid - how to let tree can be click - Combobox doesn't "drop down" - [SOLVED]Problem with zIndex (i think) - Windows overlapping - html in FormPanel - Problem getting field name on checkColumn - Learning Ext JS book released today - Combo BeforeQuery and hidden list - application settings - bottom scrollbar in gridPanel - Anchor on variable width form - Auto arranging windows - Data added to FormPanel - Big stores - [SOLVED] Problems with an iframe inside a Panel. - [SOLVED] Fast way to destroy all tabs inside a TabPanel?? - border layout bind to foo - a Tab + dataGridView - combobox in grid - Constructors and Scope: Help to understand... - Open FORM in front grid - treepanel bug? - Ext2.2: panel sizing and visibility problems under FF; is viewport causing it? - TreePanel inside Accordion - Problems with Ext.Tip max width - ajax request cache - find parent of a menu - FormPanel Help!!! - How do I add an icon to a grid - How to prevent total collapse (of a Panel) - Slow form load - Grid : Adapt all columns of their content ? - [SOLVED]PropertyGrid Current Edited Field - Problem to reload dataUrl for a fusion chart on submit Form - ColdExt jsonGrid Example Not Working - bodyscroll and mouseover/mouseout in GridPanel - Alert a Message When Click a Grid Cell - reset form beforeHide and setValue dont work. - how to define function variable that appropriate to events? 
- Datefield - radio field naming convention eg : name: 'a[b]' - Layout-Browser : Adding a grid populate by query.php - mod_gzip: hostings hints and workarounds - a problem of the combobox's selectedIndex after form.load() - [Newbie] Ext.ux.ValidationStatus is not a constructor - How to use field Time and configurate period of time in Extjs ? - [Desktop Example] Creating Window - Unable to display/print the data from DB in tabpanel - treepanel node specific context menu - Ext CDN - How to Center align button in windows - How to open .html with grid in DeskTop - Ext.Window and div scroll - EASY answer, I hope - Ext.onReady firing twice - Datagrid: How to hide the column header row? - Ext.MessageBox and cls class - Firefox slow after some time with a lot of grids - Need Help with Message.box - Grid data not load :( - Grid doesn't refresh it's content, after changing data in it's store - Help in changing the style of a disabled button? - Border layout renders false in IE in a complex layout - radio listeners not firing events consistently - javascript decodeURI() - need help with EXT viewport - [Solved] I keep getting redirected to index.html on my simple prototype - Combine treeEditor with contextmenu event - Help needed IE (again) - Adding event to nodes of Tree - Form Fields Help - JSON Tree renders in FF ... but not in IE 7 ... help! - ScriptTagProxy and max GET URL length - problem with server json response on form - Nested grid within RowExpander grid - Window dragging problem - error: grid is not defined in extjs when i use DeskTOp - How to add a datepicker to the toolbar - Can - Dataview with Pagination - Using XMLDataReader In Toolbar. - How to get the original height of a viewport? - Page Loading using Ajax.Request - Rendering error - Calendar control in extjs - listener for combobox's innerList 'mouseover', 'mousewheel' etc? - WEBLOGIC Grid data did not display - How to create a button of lookup to the side of the field textfield? - Copy data in Grid. 
- How to create generic grid - Disable Field Auto Validation failed - login form with c# - getCmp from Toolbar issue - editable grid can't save - getCmp from Toolbar issue - How to disable a check box of a tree node - how to disable menubar button - Calling function from hyperlink in grid column - Grid : define the sorted column
https://www.sencha.com/forum/archive/index.php/f-9-p-98.html?s=690bf45acb4dceb1fc6a6d68b62133cd
CC-MAIN-2020-10
refinedweb
1,456
50.36
I am a total beginner at programming languages, including Python, and this problem is kind of difficult for me. I'd appreciate it if you guys could help me. I have these two lists of lists:

S = [[D, 0.67, 0.05], [A, 0.68, 0.06], [C, 2.00, 0.13], [B, 0.68, 0.39], [E, 1.28, 0.97], [F, 0.72, 1.05], [I, 0.58, 1.05], [G, 1.25, 2.03], [H, 1.10, 3.59], [J, 0.98, 4.14]]
R = [[D, 0.67, 0.05], [A, 0.68, 0.06], [C, 2.00, 0.13]]

Each entry is [point name, x value, y value]. For each point in S I want to compute:

(1 / y value of the S point) over (1 / closest smaller-or-equal y value in R)

For example, for [B, 0.68, 0.39] the closest R point is [C, 2.00, 0.13], so the score is (1/0.39)/(1/0.13). The result I want is:

S_score = [[D,1],[A,1],[C,1],[B,0.33],[E,0.13],[F,0.12],[I,0.12],[G,0.06],[H,0.04],[J,0.03]]

This is my attempt:

S_score = []
for i in xrange(len(S)):
    if S[i][2] >= R[0][2] and S[i][2] <= R[1][2]:
        value = (1/S[i][2]) / (1/R[0][2])
        S_score.append(value)
    else:
        if S[i][2] >= R[1][2] and S[i][2] <= R[2][2]:
            value = (1 / S[i][2]) / (1 / R[1][2])
            S_score.append(value)
        if S[i][2] >= R[2][2]:
            value = (1 / S[i][2]) / (1 / R[2][2])
            S_score.append(value)
print "Score: ", S_score

This works:

def maximum(arr):
    x = arr[0]
    for x1 in arr:
        x = x1 if x1[2] > x[2] else x
    return x

def foo(x, arr):
    x1 = maximum(filter(lambda x2: x2[2] <= x[2], arr))
    return (1/x[2])/(1/x1[2])

result = [[x[0], foo(x, R)] for x in S]

The idea here is that the foo function will send to the maximum function only those values of R that have y lower than or equal to the current y, and that one will return the one with the largest y. After that it's just the calculation you provided. The code with the result is a simple list comprehension. (Note that this is Python 2 code; in Python 3, filter returns an iterator, so wrap the filter call in list() before indexing it.)
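A Python 3 sketch of the same approach, using the built-in max() with a key function instead of a hand-written maximum(). The point names are written as strings here (an assumption; bare names like D would otherwise be undefined):

```python
# Sketch only: point names assumed to be strings.
S = [["D", 0.67, 0.05], ["A", 0.68, 0.06], ["C", 2.00, 0.13],
     ["B", 0.68, 0.39], ["E", 1.28, 0.97], ["F", 0.72, 1.05],
     ["I", 0.58, 1.05], ["G", 1.25, 2.03], ["H", 1.10, 3.59],
     ["J", 0.98, 4.14]]
R = [["D", 0.67, 0.05], ["A", 0.68, 0.06], ["C", 2.00, 0.13]]

def score(point, reference):
    # Reference point with the largest y that is still <= this point's y.
    # (Assumes every point in S has y >= the smallest y in R.)
    closest = max((r for r in reference if r[2] <= point[2]),
                  key=lambda r: r[2])
    return (1 / point[2]) / (1 / closest[2])

S_score = [[p[0], round(score(p, R), 2)] for p in S]
print(S_score)
```

Rounding to two decimal places reproduces the values given in the question.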
https://codedump.io/share/h1rWmTvAKTLx/1/assign-value-by-comparison-of-different-lists-with-different-numbers-of-member
CC-MAIN-2016-44
refinedweb
361
86.71
Fun With React: A Quick Overview

React has become arguably the most popular JavaScript framework currently in use. What are the key elements of React that help make it so popular? Let's dive in.

React in the Real World

Created by Facebook, React was initially released in 2013. React continued to gain momentum until it looked like it was going to hit a snag in 2017 over licensing. The BSD+Patents license that Facebook was insisting on created potential intellectual-property issues for developers. Fortunately, in September of 2017 Facebook backed down and re-licensed React under the more acceptable MIT license. The current release is 16.2.

Like the other popular frameworks, React is a free, unlicensed library, so there are no perfect usage statistics, but there are several places we can look to for a good idea of overall adoption. It has over 88K stars on GitHub and over 7 million npm downloads per month. Some of this traffic might, of course, be from development machines or mirrors, but these are good quick stats to get an idea of just how popular the library is.

React Statistics
- Over 88K stars on GitHub
- Over 7M npm downloads per month

React broke one million downloads per month in January of 2016 and has been climbing steadily since then, topping 7 million by the end of 2017. While it dipped in December 2017, in January of 2018 it was back up over 7.5 million.

Download statistics for package "react" 2016-2017 - data by npm-stat.com

Core Concepts

React has some unique core concepts. It has a virtual DOM, JSX, components, input properties, and props. Also, each React component has a state and a lifecycle, which we will go into.
React Core Concepts
- Virtual DOM
- JSX
- Components
- Props
- State
- Lifecycle

The Virtual DOM

The virtual DOM is a node tree, just like the DOM. If you're familiar with how the DOM works within a web browser, then it will be easy to understand the virtual DOM. It's very similar, but it's all virtual. There's a list of elements, attributes, and content that exists as JavaScript objects with properties. When we call a render function - and each React component has a render function - it actually creates a node tree from that React component, whether it's just one single component or whether we're rendering child components as well. It lists out the whole tree. It also updates that same tree whenever data models are changed - whenever we update state or change anything within the component.

React updates the real DOM in three steps. Whenever something has changed, the virtual DOM will re-render. Then the difference between the old virtual DOM and the new virtual DOM will be calculated. Then, from there, the real DOM will be updated based on these changes. Instead of constantly having to work with the real DOM, which is very expensive, everything is handled virtually until we absolutely need to update the DOM. At that point, we'll go ahead and make that expensive call.

JSX

JSX is officially an XML-like syntax that is close to HTML, but not quite HTML. It is actually JavaScript with HTML sprinkled in. It's really just syntactical sugar for something like this:

React.createElement(component, props, ...children)

Instead of working with the format in the example above, we'll use a much simpler format shown in the example below using the tag MyButton.

<MyButton color="blue" shadowSize={2}>
  Click Me
</MyButton>

Becomes

React.createElement(
  MyButton,
  { color: 'blue', shadowSize: 2 },
  'Click Me'
)

It all stems from React.createElement. Instead of having to create an element by hand, we have the component MyButton, which has several different attributes that we pass into it.
We don't have to deal with creating the element, and then defining the tag, and then passing in all the attributes and everything like that.

Components

React lets us split the UI into independent, reusable pieces. Components are like JavaScript functions. We have an arbitrary amount of input, set the props, and then we return the React elements. We're always returning a render function that has the elements that we want it to display. It's very simple to begin with, but we can quickly get advanced with this. The render function is really key here, because every component will have a render function. We'll see here we have the function Welcome(props), for example.

function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}

We can also write it as a class Welcome which extends React.Component, in the ES6 way, if we want to work a little bit more with classes.

class Welcome extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}</h1>;
  }
}

In the first example, we return the simple HTML element. In the ES6 example, we have the same thing but with render, and this is just a little bit more syntax thrown in for passing back an HTML element.

Props

Props are what give our components their attributes and overall properties. This is how we pass in data. This is how we deal with various different attributes. As we see here, in this example, we have the shopping list component; we pass in a name here, and we'll be able to use this.props.name while rendering this particular component. This is an easy way to pass data in and out.

class ShoppingList extends React.Component {
  render() {
    return (
      <div className="shopping-list">
        <h1>Shopping List for {this.props.name}</h1>
        <ul>
          <li>Bananas</li>
          <li>Cereal</li>
          <li>Cabbage</li>
        </ul>
      </div>
    );
  }
}

State

Each component has a state, and it actually manages its own state as well. This can be extracted and set in our code. As developers, we're actually responsible for updating and dealing with state.
In the example below, we see that in the beginning, when we create this Clock component, in the constructor we have this.state. We pass in a new date, and then we can actually output that in the render function. We can use state easily to perform common tasks like setting the state and extracting the state.

class Clock extends React.Component {
  constructor(props) {
    super(props);
    this.state = {date: new Date()};
  }

  render() {
    return (
      <div>
        <h1>Hello, world!</h1>
        <h2>It is {this.state.date.toLocaleTimeString()}.</h2>
      </div>
    );
  }
}

Lifecycle

Each component has a specific lifecycle that we have control over. We have mounting, updating, and unmounting functions. We have a full list of different lifecycle hooks that we can subscribe to. The constructor, for example, can help us set up the initial state. Then, from there, we have other events that we can hook into.

Mounting
- constructor()
- componentWillMount()
- render()
- componentDidMount()

Updating
- componentWillReceiveProps()
- shouldComponentUpdate()
- componentWillUpdate()
- render()
- componentDidUpdate()

Unmounting
- componentWillUnmount()

Getting Started

The easiest way to get started with React is through the create-react-app CLI. That's the official React CLI. Then we can create a new app, and that bootstraps the entire application. We simply use create-react-app my-app. Then we go ahead and kick things off with npm start. This command just runs through a custom npm script to kick off the app and set up a server for us, so we can start working on the app itself.

# Install create-react-app - React's CLI
$ npm install -g create-react-app

# Create our app
$ create-react-app my-app

# Start our app
$ cd my-app
$ npm start

What's Next?

We covered a lot of content quickly to present a 'taste' of React, and we have not done more than scratch the surface. However, this should be enough to give everybody a high-level look at what we have available within React. Now that we have had a quick look at React, is React the right choice for you?
There are other frameworks that are very popular - Angular and Vue in particular. While Vue and React share some similarities, Angular is very different. Whether or not React is the right choice for you depends on a number of factors. Take a look at an Indigo.Design sample application to learn more about how apps are created with design-to-code software.

Published at DZone with permission of John Willoughby, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
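To make the three-step virtual-DOM update described earlier more concrete, here is a toy sketch of the "diff" step. This is not React's actual reconciliation algorithm, just a minimal illustration: walk two virtual trees in parallel and record which paths would need touching in the real DOM.

```javascript
// Toy sketch only (NOT React's implementation): compare an old and a new
// virtual tree and collect the paths that changed.
function diff(oldNode, newNode, path = "root", patches = []) {
  if (!oldNode || !newNode || oldNode.tag !== newNode.tag) {
    patches.push({ path, type: "replace" }); // node added, removed, or retagged
    return patches;
  }
  if (oldNode.text !== newNode.text) {
    patches.push({ path, type: "text" });    // only the text content changed
  }
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  const len = Math.max(oldKids.length, newKids.length);
  for (let i = 0; i < len; i++) {
    diff(oldKids[i], newKids[i], `${path}/${i}`, patches);
  }
  return patches;
}

// "Re-rendering" produced a new tree where only the <h1> text changed:
const oldTree = { tag: "div", children: [{ tag: "h1", text: "Hello" }, { tag: "p", text: "hi" }] };
const newTree = { tag: "div", children: [{ tag: "h1", text: "World" }, { tag: "p", text: "hi" }] };
const patches = diff(oldTree, newTree);
console.log(patches); // only the h1 at root/0 needs updating, not the whole tree
```

The payoff is the third step: instead of rewriting the whole document, only the recorded patch locations are applied to the expensive real DOM.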
https://dzone.com/articles/fun-with-react-a-quick-overview
CC-MAIN-2019-04
refinedweb
1,443
65.73
Super Simple Python is a series dedicated to creating super simple Python projects for beginners to be able to do in under 15 minutes. In this episode, we'll be covering how to build a high low guessing game in under 15 lines of code! For a video version see:

Like many of the projects we've built in the Super Simple Python series, such as the Random Number Generator, the Dice Roll Simulator, and Rock Paper Scissors, the High Low guessing game is built around random number generation. In this game, the computer will generate a number and ask us for guesses. For each turn of the game, we'll guess a number. The computer will tell us if our number is too high or too low until we guess the correct number.

Picking a Random Number to Guess

Since we're working with a random number, we'll need to start off by importing our random library. After that, we'll also print out a sentence telling the user what range the random number will be in, and then set a random number. For this high low guessing game, we'll pick a random number between 1 and 100 to guess.

import random

print("This program picks a random number between 1 and 100 and tells you if your guess is high or low until you guess it")
num = random.randint(1, 100)

Playing the High Low Guessing Game

Alright, now that we've picked a number between 1 and 100 to guess, it's time to play the guessing game. We'll start by declaring a variable, guessing, and setting it to True. This tells the program whether or not we're still trying to guess the number. While guessing is True, we'll ask the user for their guess. If the guess is greater than the number, we'll tell the user their guess was too high. If the guess is less than the number, we'll tell the user their guess was too low. Otherwise, they've guessed the number, so we should tell them that they got it and set guessing to False so the while loop doesn't repeat.

guessing = True
while guessing:
    guess = int(input("What is your guess? 
"))
    if guess > num:
        print("Too high!")
    elif guess < num:
        print("Too low!")
    else:
        print("Nice, you got it!")
        guessing = False

Once we run our program, we should get an output like the image below. To play around with this code, try picking a different range of numbers to guess from. Other augmentations to the high low guessing game could include telling the user they're "way too high" or "way too low" when their guess is more than some number x away from the target, or telling the user if they're getting warmer or colder compared to their last guess. Have fun!

Further Reading
- Build Your Own AI Text Summarizer
- How to Build the Tools for Engineering
- What is Text Polarity or Sentiment?
- Prim's Algorithm
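The "warmer/colder" augmentation suggested above could be sketched like this. The function name and the idea of threading the previous guess through the loop are illustrative choices, not part of the original tutorial:

```python
# Sketch of the warmer/colder feedback: compare how far this guess is from
# the target versus how far the previous guess was.
def feedback(guess, last_guess, num):
    """Say whether this guess is closer to num than the previous one."""
    if last_guess is None:
        return "first guess"
    if abs(guess - num) < abs(last_guess - num):
        return "warmer"
    if abs(guess - num) > abs(last_guess - num):
        return "colder"
    return "same distance"

# Inside the game loop you would keep a last_guess variable and print
# feedback(guess, last_guess, num) next to the "Too high!"/"Too low!" message.
```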
https://pythonalgos.com/super-simple-python-high-low-guessing-game/
CC-MAIN-2022-27
refinedweb
508
78.08
CGTalk > Software Specific Forums > Maxon Cinema 4D > Cinema 4D XPRESSO/Scripting/Coding > Camera animation export

PDA
View Full Version : Camera animation export

Muzikaaa
09-24-2012, 06:57 PM
Hi all,
I am completely new to both C4D and the forum, and I am hoping someone could help me with this :)
I need to export the animation data of a camera from C4D to a txt file, i.e. the transformation data of each frame and the FOV. The camera should be one with a target, not a free camera, and the exported txt file should look like this:

xSource#ySource#zSource#xTarget#yTarget#zTarget#xUp#yUp#zUp#FOV

with a new line for each frame. I have this script in MAXScript and I am including it here for reference:

====================================
(
output_name = getSaveFileName caption:"CameraData File" types:"CameraData (*.txt)|*.txt|All Files (*.*)|*.*|"
if output_name != undefined then
(
    output_file = createfile output_name
    local newscale="1.000000"
    fn rescaleScene=
    (
        case units.SystemType of
        (
            #Inches: (newscale= 0.0254)
            #Feet: (newscale= 0.3048)
            #Miles: (newscale= 1609.344)
            #Millimeters: (newscale= 0.001)
            #Centimeters: (newscale= 0.01)
            #Meters: (newscale= 1)
            #Kilometers: (newscale= 1000)
        )
        try(setinisetting (objExp.getIniName()) "Geometry" "ObjScale" (newscale as string))catch()
    )
    rescaleScene()
    for t = animationrange.start to animationrange.end do
    (
        at time t current_pos = $Camera1.pos *newscale
        at time t current_target = $Camera1.Target.pos *newscale
        at time t theFOV = $Camera1.FOV
        at time t thecam_dir=
        (
            local posdummy=dummy transform:$Camera1.transform
            in coordsys local move posdummy [0,100,0]
            local yvec_pos=posdummy.pos
            delete posdummy
            camup_pos=normalize($Camera1.pos-yvec_pos)
        )
        theFlags=#(current_pos,current_target,theFOV,thecam_dir)
        str = stringStream ""
        format "%" theFlags[1][1] to:str
        format "#%" theFlags[1][3] to:str
        format "#%" -theFlags[1][2] to:str
        format "#%" theFlags[2][1] to:str
        format "#%" theFlags[2][3] to:str
        format "#%" -theFlags[2][2] to:str
        format "#%" theFlags[3] to:str
        format "#%" -theFlags[4][1] to:str
        format "#%" -theFlags[4][3] to:str
        format "#%" theFlags[4][2] to:str
        mystring = str as string
        format "%\n"(mystring) to:output_file
    )
    close output_file
    edit output_name
)
)
==============================

If someone can help me with this, or point me to a C4D script that does the same thing or something similar that I can modify to achieve my goal, I would very much appreciate it :)

Muzikaaa
10-09-2012, 03:17 AM
Gentlemen, would you give me a hand with the above script? :) It will be very much appreciated.

Scott Ayers
10-09-2012, 06:09 PM
Sure. I'll throw you a bone. ;) I won't write the entire thing for you, but I'll give you enough of the basic code framework so that you should be able to take it from there and customize it to suit your needs.
This is a python script that runs in the script manager:

#This script saves the active camera's positions and FOV to a text file
#The text file is called "myfile.txt" and is located on the Desktop
#Change the code as needed
import c4d
from c4d import storage

def main():
    fps = doc.GetFps()
    curTime = doc.GetTime() #Store current time so we can get back to it
    start = doc.GetLoopMinTime().GetFrame(fps) #Get min loop time, start of writing
    end = doc.GetLoopMaxTime().GetFrame(fps) #Get max loop time, end of writing
    camera = doc.GetActiveBaseDraw().GetSceneCamera(doc) #Get the active camera
    #path = storage.SaveDialog(c4d.FILESELECTTYPE_ANYTHING,'Save Camera Info','txt') #Use this if you want to use a save dialog
    filepath = c4d.storage.GeGetC4DPath(c4d.C4D_PATH_DESKTOP) + "/" + "myfile.txt"
    camPos = camera.GetRelPos()
    FOV = camera[c4d.CAMERAOBJECT_FOV] #The camera's Field of View(horizontal) attribute
    file = open(filepath,"w") #Open the file in writable mode
    for frame in xrange(start, end+1):
        c4d.StatusSetBar(end*(frame-start)/(end-start)) #Animate the status bar value
        doc.SetTime(c4d.BaseTime(frame, fps))
        c4d.DrawViews(c4d.DRAWFLAGS_ONLY_ACTIVE_VIEW|c4d.DRAWFLAGS_NO_THREAD|c4d.DRAWFLAGS_NO_REDUCTION|c4d.DRAWFLAGS_STATICBREAK)
        file.write('#CamerPositionX' + '%.2f' % (camPos.x) + " ")
        file.write('#CamerPositionY' + '%.2f' % (camPos.y) + " ")
        file.write('#CamerPositionZ' + '%.2f' % (camPos.z) + " ")
        file.write('#FOV' + '%.2f' % (FOV) + '\n')
    c4d.GeSyncMessage(c4d.EVMSG_TIMECHANGED) #Update timeline
    doc.SetTime(curTime) #Set the scrubber back to where it was
    c4d.EventAdd(c4d.EVENT_ANIMATE)
    c4d.StatusClear() #Re-set the status bar to blank
    file.close() #close the file so it can be accessed again

if __name__=='__main__':
    main()

Be aware that this forum often screws up the indentations in python code, and also inserts spaces where they shouldn't be. So you might need to fix this code so it doesn't throw errors.
If you have more Python & C4D questions, you'll probably find more coders at
You can learn a lot about the basics just by going through the python forum posts there.
-ScottA

Muzikaaa
10-10-2012, 04:44 AM
"Sure. I'll throw you a bone. ;) I won't write the entire thing for you, but I'll give you enough of the basic code framework so that you should be able to take it from there and customize it to suit your needs."
Great, this is what I need: a jump start. I will try the script today. Thank you Scott :)

Muzikaaa
10-11-2012, 07:25 AM
"This is a python script that runs in the script manager. -ScottA"
Hey Scott, I built on your script and got what I want for now. I ran into some other posts of yours on some other forums that gave me some good info as well. Thank you :applause:
As a gesture of appreciation for your valuable help, I would like to give you something back in return. Please PM me :)

CGTalk Moderation
10-11-2012,.
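For reference, the "#"-separated line layout requested in the first post could be produced with a small helper like the following. This is a generic sketch in plain Python with no C4D dependency; the function name and argument layout are illustrative assumptions, not part of any C4D API:

```python
# Sketch: build one 'xSource#ySource#zSource#xTarget#...#FOV' line per frame,
# mirroring the format the MAXScript version writes with its format calls.
def format_frame(source, target, up, fov):
    values = list(source) + list(target) + list(up) + [fov]
    return "#".join("%f" % v for v in values)

# Example: one frame's worth of data
print(format_frame((1, 2, 3), (4, 5, 6), (0, 1, 0), 45.0))
```

Inside a per-frame loop like Scott's, each returned string (plus a newline) would be written to the output file.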
http://forums.cgsociety.org/archive/index.php/t-1072054.html
CC-MAIN-2014-15
refinedweb
917
59.4
Following up on a couple of recent threads, I decided to add caching to the X color lookup code in emacs. The way I ended up doing it was creating a general-purpose cache linked-list, and wrapping each of XAllocColor, XQueryColor, and XQueryColors in a function that first looks for the requested data in the linked list before issuing the X call. Results are quite good. Some arbitrary timings follow. All times are over an ssh -XC link with lbxproxy running on the remote end.

The "stock" emacs has a version string of:
GNU Emacs 21.1.1 (i686-pc-linux-gnu, X toolkit, Xaw3d scroll bars) of 2001-10-22 on zion.

Times are presented for this stock emacs and for a current CVS emacs configured --without-xim and with color caching, and for each of those, with and without my ~/.emacs file (which loads a whole bunch of libraries).

without ~/.emacs (-q):
  Std emacs 21.1:
    0.24user 0.05system 0:45.31elapsed
  emacs-cvs --without-xim, with color caching:
    0.25user 0.03system 0:29.80elapsed

with ~/.emacs:
  emacs-21.1:
    0.96user 0.22system 1:04.65elapsed
  emacs-cvs --without-xim, with color caching:
    0.94user 0.15system 0:32.40elapsed

As you can see, the big improvement is in the loading of libraries -- where the stock emacs goes from 45s to 65s, the caching emacs goes up less than 3 seconds.

Another obvious thing to cache would be the font data that emacs can request, although it is much less clear to me how to cache the XExtData list that is potentially contained in an XFontStruct (namely, how do you know how big the private data is? If anyone has any suggestions here, I'd like to hear them).
This new functionality is provided via two new files (xcache.[ch]), adding xcache.o to the XOBJ variable in src/Makefile, adding -DUSE_XCACHE to ALL_CFLAGS in the Makefile, and the following small patch to xterm.h:

Index: xterm.h
===================================================================
RCS file: /cvsroot/emacs/emacs/src/xterm.h,v
retrieving revision 1.136
diff -r1.136 xterm.h
36c36,39
< #endif
---
> #ifdef USE_XCACHE
> #include "xcache.h"
> #endif /* USE_XCACHE */
> #endif /* USE_X_TOOLKIT */

xcache.[ch] are attached. I'd be interested to hear any feedback you might have on this.

--
Ami Fischman
address@hidden

xcache.c
Description: Binary data

xcache.h
Description: Binary data
https://lists.gnu.org/archive/html/emacs-devel/2002-10/msg00147.html
CC-MAIN-2016-26
refinedweb
395
76.42
Hi Alex. Thank you very much for the fantastic step-by-step tutorial. A silly thought just crossed my mind when I read the remove() member function code:

1. Isn't it easier and faster if we just move all the elements after index up 1 position instead of allocating a new array and copying the required elements to it?
2. The last element will be destroyed along with the original array when it (the array) goes out of scope, when erase() is called, etc... will it not? That way, the value at that one address will not make a great impact on the program, right?

I understand if you want to make a good habit out of this when dealing with dynamic memory management. Thank you for your time.

Yes, you could definitely do that. You'd just reduce the length of the array by 1 and not use the last element.

I have a question about remove(): instead of creating a new array and copying the old array over, can we reuse the old array, copy only the values after the removed element one index back, and delete only the memory pointed to by the last element?

…
// Copy all of the values after the removed element
for (int after=index+1; after < m_length; ++after)
    m_data[after-1] = m_data[after];

// Finally, delete the memory pointed to by the last element
delete m_data[m_length-1];
--m_length;
}

Thank you!

It depends. Generally speaking, no. Arrays must be allocated as contiguous blocks of memory, and can't be shortened or lengthened (at least not easily). Your delete statement won't shorten the array -- instead, it says, "delete any dynamically allocated memory being pointed to by the last element of the array". Now, if you have an array of pointers to objects, this can be a valid approach -- rather than actually making the array shorter, you can just change m_length so the class _thinks_ the array is shorter (even though it isn't). If you have an array of non-pointers, that delete line won't compile.

I have a question.
In your resize function, at the end, you set the member array "m_data" equal to the new array "data". I understand that part, but how come you don't delete "data" (the new array)? Doesn't that mean there will always be 2 dynamically allocated arrays which both point to 1 address? What happens when you call the resize function again; does it create a third array (which gets set to m_data)? I'm just confused about what happens with the new array you declared in the function.

The resize() function does a little bit of shuffling of pointers that can be confusing if you've never seen this done before. At a high level, this function does four things:

* Allocate a new array (data)
* Copy the old array into the new array
* Deallocate the old array (m_data)
* Set the member pointer (m_data) to point at the new array (this is just a pointer assignment; no new array is created here)

By the time the function is done, m_data is pointing at the new array, and we've done one allocation and one deletion. The data pointer (which points to the same array as m_data) is no longer needed, so we let it go out of scope. If we then deleted data, m_data would be left as a dangling pointer (pointing at deallocated memory).

I see, I think I understand. So pointers declared in local scope will be destroyed when they leave the scope? And since we assigned its memory to something else, there won't be memory leaks when *data is gone?
Hi Alex, In resize methods we can add a conditional check to see if newSize is same as m_length and can return without erasing and creating array again. resize() { if (newLength == m_length) return; // If we are resizing to an empty array, do that and return if (newLength <= 0) { erase(); return; } } Good point. Added that to the example. In your "remove()" function, you wrote a comment after the assert statement: "// If this is the last element in the erase, set the array to empty and bail out". I think "erase" is a mistake. Indeed. Should be “array”. Thanks for noticing. What is the difference between including <assert.h> or <cassert>? In the general case, best practice in C++ is to include <cXXX> rather than <XXX.h>, as the <cXXX> headers tend to put less in the global namespace (they make better use of namespace std). In the specific case of cassert vs assert.h, since macros are namespace agnostic, there’s no difference. Nevertheless, I’ve updated the lesson to use cassert, since that better follows the best practice. hi alex how to make a class container ??? Similar to the example shown in the lesson, but change the data type of m_data from int to whatever your class name is. Hello Alex, I have 2 questions: 1. In the void Resize() member function: However, in Lesson 6.9 you gave an example: Because pointers follow all of the same rules as normal variables, when the function ends, ptr will go out of scope. I’m confused why m_pnData doesn’t become a dangling pointer? Why don’t I get a compiler error that is already defined if it never went out of scope? Should we make it static? 2. Also in the void Resize() member function, When this function is called a consecutive time, shouldn’t the pnData now point to a newly allocated handful of memory, without releasing the previous allocated memory, whereby leading to memory leak as the memory reserved for the previous pnData is now inaccessible? Thanks again for helping me struggle through this. 
1) When pointers go out of scope, they are destroyed. However, the contents that they point to are not (they are not implicitly deleted). So when pnData goes out of scope, pnData gets destroyed, but the memory allocated is still there. Normally this would result in a dangling pointer, except that we’ve already set m_pnData to point to that same value, so we still have a way to access it. 2) You removed the line that prevents memory leaks! That’s what this line is for: In English: 1) Allocate some memory and assign it to a temporary pointer 2) Delete any old memory we’re holding onto 3) Make our class point at the new memory we just allocated We can do those above 3 steps as many times as we want and each time we’ll swap newly allocated memory in and delete the old memory. Make sense? I think so. So it isn’t a pointer, pointing to a pointer, pointing to allocated memory. but rather re-initializing the address held by m_pnData. I was under the impression that m_pnData would be pointing to nothing (link lost between it and allocated memory) when pnData went out of scope. Not re-initializing, but rather, re-assigning. m_pnData will retain whatever value it’s pointing at for as long as the object it is part of still exists. pnData going out of scope has no impact on this. Thanks, Alex. dear Alex . how can we return m_pnData .which is a pointer,how it works ,please explain ,thanks int& operator[](int nIndex) { assert(nIndex >= 0 && nIndex < m_nLength); return m_pnData[nIndex]; } Now all is clear. Thanks Alex! Hi, in Reallocate(), why do we need to delete[] m_pnData before we set it to pnData? Consider what would happen if m_pnData was already pointing to allocated memory. If we then overwrote m_pnData with pnData, then we’d lose our only pointer to that allocated memory, which means we’d have a memory leak. Deleting m_pnData ensures that any previously allocated data is properly cleaned up before we point m_pnData at something else. 
So we wouldn’t have to do it if m_pnData was an array (or anything except for a pointer)? Thank you, Alex! Yeah, if m_pnData was an array, a vector, or some other type of class, the overloaded operator= would handle the memory allocation for you, so you’d only need to do an assignment. This is part of the reason why std::array, std::vector, and other classes are so great. Less code, less chances of mistakes, less things to worry about. In the test section you are putting 20 before the sixth element and removing the fourth element. I think the comments are incorrect; unless we refer to a zeroth element I’ve updated the comments to make them more accurate. Thanks for noticing this. Hi Alex, It seems there’s an error in one of your sentences:(paragraph 2, line 5) …array container classes generally provide "dynamically resizing" (when elements are added or removed) and do bounds-checking. I think it should be …provide dynamic resizing". I really like your tutorials. It’s been helpful. Thanks. Thanks, fixed! Quick question. Why is the prefix of the member variables in classes started off with an ‘m_’? I have seen other C++ programmers do this too. It’s just a convenient way to differentiate your member variables from non-member variables, to help avoid stupid mistakes. Why do we need Reallocate() if we don’t use this function anywhere else? When building classes, it’s often good to consider what tools you might want to have handy at a later date, and add them while you’ve got a good understanding of the code. Coming back later and trying to add a new function will generally be more difficult. which websites do all of you recommend me to learn for computer graphics and openGL? Thank you very much. I learned a lot about how to do openGL from Nehe. Google “Nehe OpenGL” and I’m sure you’ll find it. Understanding C++ is a prerequisite. Thank you so much Alex. I already learnt a lot from your websites. Now I reached to the 11.5 which is the portion of inheritance. 
I asked especially for my project. There’s a good one here It uses more a modern OpenGL version than NeHe. God, after reading through this lesson i feel like such a dimwit. I don’t understand it at all! 🙁 I don’t understand: what is IntArray? Is it an Array? You said: "Our IntArray is going to need to keep track of two values: the data itself, and the size of the array." But an array has more then two values, right? It has as many values as the size of the array (plus the size). You said: "Now we need to add some constructors that will allow us to create IntArrays." I also don’t understand this! If I want to create an IntArray, i just code: Then i have an InArray, right? So as you can see, i really don’t understand what this IntArray class is and what it’s doing! Any chance you could just try one more time to explain it in simple terms? IntArray is just a custom-defined class. In the lesson on dynamically allocated arrays, we learned that we can have a pointer to an array and use it to access all the elements in the array, right? So our IntArray class contains a member variable (m_pnData) that holds this pointer to the array. We can use it to access all of the elements of the array. It also has another member variable (m_nLength) that stores the array length (since the array pointer doesn’t know how big the array is). If we had no constructors, we could create arrays as you suggest (IntArray MyArray), but what values would m_pnData or m_nLength be set to? They would be garbage. We use the constructors to ensure they get initialized with meaningful values, as well as allowing us to create an IntArray with a preset length, like this: Really, all we’ve done here is put a dynamic array inside a class. Does that help? How can I clarify for you further? Thanks, i get it now! (i was thinking it was more complicated then it actually is!) Hi Alex, I’m a big fan of you! 
What a great tutorial!

+ Why do you have to create a constructor that constructs an empty array? I deleted it and the code still runs.
+ What do you mean when you say Reallocate operates quickly and Resize operates slowly?
+ I failed to understand your idea about the beginning snippet of Reallocate and Resize: why do we have to erase, what’s wrong with the if condition, and what exactly is the "thing" we return?
+ I need a refresher on this: int nElementsToCopy = (nNewLength > m_nLength) ? m_nLength : nNewLength; but I couldn’t remember which lesson. Can you help me?
+ One silly question: do you have a plan to write all of this, or do you just sit down and the code continuously appears in your head?

The other parts of the lesson are perfect! Thank you so much for your time Alex!

> Why do you have to create a constructor that constructs an empty array? I deleted it and the code still runs.

When creating classes, it’s always good to consider how they _could_ be used, not just how you want to use them right now. We create a default constructor so that if we wanted to do something like this: array would have its member variables initialized correctly.

> What do you mean when you say Reallocate operates quickly and Resize operates slowly?

I mean Reallocate() executes quickly and Resize() executes slowly (because it has to copy over elements from the old array to the new array, which takes time).

> I failed to understand your idea about the beginning snippet of Reallocate and Resize: why do we have to erase, what’s wrong with the if condition, and what exactly is the "thing" we return?

Erase() deallocates the currently allocated array (if it has been allocated), so we can either allocate a new array, or leave the array unallocated (if nNewLength is 0). The if condition checks if we’re requesting an array of length 0, because in that case, we don’t need to allocate a new array. Nothing is returned; these functions return void.
They simply modify the values of the class’s member variables.

> I need a refresher on this: int nElementsToCopy = (nNewLength > m_nLength) ? m_nLength : nNewLength; but I couldn’t remember which lesson. Can you help me?

Lesson 3.4 -- Sizeof, comma, and conditional operators. Basically, this sets nElementsToCopy to the smaller of nNewLength and m_nLength.

> One silly question: do you have a plan to write all of this, or do you just sit down and the code continuously appears in your head?

I usually have a plan. How much of a plan depends on how complicated what I am doing is. If it’s something simple, then I just write it. For something of some complexity (such as this IntArray container class), I usually put more thought into it first. First I make sure I know what I want (in this case, a class to manage an array of integers). Then I decide what members I need (an int pointer and a length). Then I work on the primary functions (constructor, destructor, access functions). And finally, supporting functions (such as Resize() and Reallocate()) and operators. It’s sometimes useful to use comments to indicate what the function needs to do (handle the zero case, allocate the new array, copy the elements, deallocate the old array) before writing the code to do it. I also often discover new useful functions as I go (for example, Erase()). Then I test the code, and usually find things that are broken, or cases that I missed. That requires modifications to the code to address.

if (nNewLength <= 0) return;

Are we talking about the same return? Please tell me if I’m wrong.

Hello Jason, Reallocate() is written to "reset" our array. Suppose we have an IntArray named myarray that has 10 elements in it. If we do this: myarray.Reallocate(5), all 10 elements are destroyed and the new size for our myarray is set to 5. Now let’s talk about this line, inside Reallocate. Say we have accidentally passed -2 as the argument for nNewLength.
Now take a look at the rest of the code inside Reallocate… Now m_pnData points to an array of size -2. Does that make sense? The condition ensures that reallocation will only take place if a valid array size is passed in as the parameter; otherwise Reallocate exits and control returns to the caller. If we pass in the value of 0 (or any number less than 0), that means we now want our array to be empty. Reallocate calls Erase() in its first statement, so if there is an invalid value (or 0) passed in, our array will be destroyed (elements would be deleted), m_pnData set to null, and the array’s new size will be 0. Let me know if it’s still not clear to you. Thanks…

Dear Devashish, I get it now. Thank you so much!

Hi Alex, I thought int nElementsToCopy = (nNewLength > m_nLength) ? m_nLength : nNewLength; sets nElementsToCopy to the BIGGER of nNewLength and m_nLength? By the way, you made a copy-paste typo in void Remove(): inserted -> removed. Great tutorial!!

Is the same as: If we’re making the array larger, we only need to copy however many elements were in the older (smaller) array. If we’re making the array smaller, we only need to copy however many elements are in the new (smaller) array.

Alex, please correct me if I am wrong. In the integer array class:
1. All the parameters could be marked as const.
2. If I do this: IntArray my_array; with no parameters (or IntArray my_array (0);), the constructor sets m_pnData and m_nLength to 0. Now if I wish to use Reallocate(), what would happen? The first line in Reallocate() calls Erase(), which deallocates the memory allocated in m_pnData. Deallocating a null pointer, does that make sense? We can put a condition in Erase() that makes it deallocate the memory in m_pnData only if it was previously allocated. Same problem with Resize().
3. I failed to understand why you put this check inside Resize():
4. Shouldn’t the destructor set m_pnData to null? Isn’t it necessary?
1) It wouldn’t hurt to make the parameters const, if for no other reason than to prevent the function from accidentally changing the value of a parameter.
2) Deleting a null pointer doesn’t do anything -- it’s a no-op. There’s no need to explicitly guard against it. It’s only dereferencing a null pointer that causes problems.
3) It’s technically not needed, but it makes the logic of the function easier to follow. With it, you know that the following block only executes if the current array has elements. Otherwise, you have to see that nElementsToCopy gets set to 0 and the for loop executes 0 times. That’s not immediately obvious. I’ll favor understandability over efficiency in most cases, and this is one of them.
4) No, it isn’t necessary. m_pnData will go out of scope at the end of the destructor, so setting it to null beforehand isn’t needed. Doing so will make your code slower for no real benefit.

Thanks a lot sir. I wish you the best. Thanks again for your good website.

Hi, thank you for this fantastic tutorial. In main(), if I call cArray.Resize with a value greater than the previous array size (ex.), I get the un-set elements of the array set to zero, i.e.: 40 1 2 3 5 20 6 7 8 9 10 0 0 0 0 0 0 0 0 30. I expected to have garbage instead of those zeros. Are they set to zero when I dynamically make a new array, or is it just garbage?

Garbage. In this case, it looks like you got a lot of garbage zeros.

Got the logic of how the container works. It’s more about how we implement the algorithm. People commenting here are mostly discussing how the containers get implemented, but I just want to know how this integer container relates to composition. As far as I understand:
1. Here we don’t have any sub-classes (anyway, this is a simple IntArray, that’s why we don’t have a separate sub-class).
2. We are using pointers to hold data (anyway, allocated when the main class is created and deallocated when the main class is destroyed).
Please correct me if my understanding is wrong.
m_data and m_length are part-of IntArray, and don’t have any value/meaning outside the IntArray. IntArray manages both of them entirely, including initializing and destroying them. The fact that m_data is a pointer instead of a normal value is irrelevant. That’s pretty much the definition of a composition.

Thank you very much for the detailed explanation, BUT what I see is that this is an array of INTEGERS and not a container CLASS. You could make an example where you work with classes. Thank you very much.

Is this a valid expression: int *pnData = new int[0]? If it is, what is the point of all the special cases (when m_nLength equals 0)? If it isn’t, there is a bug in the Remove function.

No, it’s not valid. We need to special-case the situation where we’re removing the last element. I’ve updated the example to handle this. This is a good reminder as to why dealing with your own memory management should be avoided if possible -- it’s really easy to miss edge cases!

For the excerpt below, why doesn’t ~IntArray() include m_pnData = 0; m_nLength = 0; the way Erase() does?

We’ll also need some functions to help us clean up IntArrays. First, we’ll write a destructor, which simply deallocates any dynamically allocated data. Second, we’ll write a function called Erase(), which will erase the array and set the length to 0.

    ~IntArray()
    {
        delete[] m_pnData;
    }

    void Erase()
    {
        delete[] m_pnData;
        // We need to make sure we set m_pnData to 0 here, otherwise it will
        // be left pointing at deallocated memory!
        m_pnData = 0;
        m_nLength = 0;
    }

The destructor function is only called whenever a variable of that type is destroyed. So, if you set m_pnData = 0; m_nLength = 0; they wouldn’t actually do anything, because those variables are a piece of the class we’re destroying. It would be the equivalent of setting a function’s local variables to 0 at the end of it; they’re going to be destroyed when they go out of scope, so there is really no point.

Exactly this.
After calling Erase(), the object will still be in a valid state, so we need to make sure all member variables have expected, valid values. After the destructor is called, the object will be destroyed, so there’s no need to zero out values or set pointers to null, since all those values will be destroyed anyway.

Hey, really enjoying the tutorial, but I was wondering: in "void Resize(int nNewLength)", whatever happens to pnData? It never gets deleted, so does it just remain filling up memory? Or does it get deleted at the end of the function?

void Resize(int nNewLength)

pnData doesn’t get deleted, because m_pnData was made to point to that address after deleting the original m_pnData.

    delete[] m_pnData;
    m_pnData = pnData;

Yup, the memory allocated to pnData was assigned to m_pnData, so that memory can be used by the other class functions. It will eventually get deleted by the destructor (or another function that reallocates the memory).

Why did we need a Reallocate()? I don’t see it being used anywhere.

Just to show you how it can be done.

In the Resize function, why do you put: and not: ? This would, in my opinion, be more intuitively understandable (if m_nLength is < 0, shouldn’t this cause an error?). Or did I misunderstand something here?

Error trapping, without aborting the function. If by some mistake a negative number is passed, the code will just empty the array. Example: if main() has a way to knock down a size variable nArraySize to -1 when the programmer really meant 0, the function will treat nArraySize as if it were zero. This allows the function to adjust for the programmer’s intention with respect to emptying arrays. Plus, it’s simpler to put in than a separate error-checking line would be. Had that been used, the function would have to guard against negative values with an extra line. Alex’s choice collapses two lines into one, in a way that doesn’t halt the program if a negative number gets passed.
Since Resize doesn’t declare an array, assert() isn’t really needed.

As an intermediate programmer using this site to brush up on my C++, I would definitely recommend at least adding a reference to the STL container classes, which would largely obviate the need to write your own containers. If I didn’t already know about them, seeing the need to write these myself would make me think twice about using C++ for my coding needs.

Showing how to write these container classes may give you insight into how to write your own classes that may or may not be containers. But, yes, you should definitely have a good reason as to why you are writing your own container class instead of using those provided in the STL.

Yup, I’ve already added introduction lessons about std::array and std::vector, which you should have encountered if you’ve been reading sequentially. I intend to cover the standard library containers in more depth in a future chapter, but there are other topics that we need to talk about first to maximize their value (things like iterators and big-O notation). Even though you should use standard containers instead of writing your own, going through the logic of how these things are built gives you insight into how they work and what their limitations are.

In the Resize function, if the new length is bigger than the initial one, we will be left with some empty spots at the end. Is that fine, or should we fill those places by default with, say, zeros?

This is a good question with no correct answer. I think I’d personally leave those extra elements uninitialized. If you’re accessing those values before you set them, you’ve got a logic problem with your code that needs to be fixed anyway. And if you do access them before setting them, you’re more likely to notice when your array value comes out as some strange number (e.g. -26432593). Also, filling with 0’s takes extra time that may not be needed if you’re just going to overwrite the values anyway.
However, I can see cases where it would be useful to default those elements to 0. For example, if I was using the array to hold the counts of different kinds of things, I’d want to start counting from 0. So I think the ideal solution would be to add an optional boolean parameter on the Resize function that gives the user the choice to fill extra elements with 0 or not. That way, the user can decide on a case-by-case basis whether or not they want/need to do that. I’d also add that optional boolean parameter to the constructors so you could allocate an array filled with 0 in the first place if you wanted.
http://www.learncpp.com/cpp-tutorial/106-container-classes/
Measure text accurately before laying it out and get font information from your App (Android and iOS).

There are two main functions: flatHeights, to obtain the height of different blocks of text simultaneously, optimized for components such as <FlatList> or <RecyclerListView>; and measure, which gets detailed information about one block of text. The width and height are practically the same as those received from the onLayout event of a <Text> component with the same properties.

In both functions, the text to be measured is required, but the rest of the parameters are optional and work in the same way as with React Native: fontFamily, fontSize, fontWeight, fontStyle, fontVariant (iOS), includeFontPadding (Android), textBreakStrategy (Android), letterSpacing, allowFontScaling, and width: a constraint for automatic line-break based on the text-break strategy.

In addition, the library includes functions to obtain information about the fonts visible to the App.

If it has helped you, please support my work with a star ⭐️ or ko-fi.

Mostly automatic installation from npm:

    yarn add react-native-text-size
    react-native link react-native-text-size

Change the compile directive to implementation in the dependencies block of the android/app/build.gradle file.

Requirements: For versions prior to 0.56 of React Native, please use react-native-text-size v2.1.1. See Manual Installation on the Wiki as an alternative if you have problems with automatic installation.

measure(options: TSMeasureParams): Promise<TSMeasureResult>

This function measures the text as RN does and its result is consistent* with that of Text's onLayout event. It takes a subset of the properties used by <Text> to describe the font and other options to use. If you provide width, the measurement will apply automatic wrapping in addition to the explicit line breaks.

* There may be some inconsistencies in iOS, see this Known Issue to know more.
Note: Although this function is accurate and provides complete information, it can be heavy if there is a lot of text, like the text that can be displayed in a FlatList. For those cases, it is better to use flatHeights, which is optimized for batch processing.

Plain JS object with these properties (only text is required): The sample App shows interactively the effect of these parameters on the screen.

measure returns a Promise that resolves to a JS object with these properties: If the value of lineInfoForLine is greater than or equal to lineCount, this info is for the last line (i.e. lineCount - 1). In case of error, the promise is rejected with an extended Error object with one of the following error codes, as a literal string:

    //...
    import rnTextSize, { TSFontSpecs } from 'react-native-text-size'

    type Props = {}
    type State = { width: number, height: number }

    // On iOS 9+ will show 'San Francisco' and 'Roboto' on Android
    const fontSpecs: TSFontSpecs = {
      fontFamily: undefined,
      fontSize: 24,
      fontStyle: 'italic',
      fontWeight: 'bold',
    }
    const text = 'I ❤️ rnTextSize'

    class Test extends Component<Props, State> {
      state = {
        width: 0,
        height: 0,
      }

      async componentDidMount() {
        const width = Dimensions.get('window').width * 0.8
        const size = await rnTextSize.measure({
          text,          // text to measure, can include symbols
          width,         // max-width of the "virtual" container
          ...fontSpecs,  // RN font specification
        })
        this.setState({ width: size.width, height: size.height })
      }

      // The result is reversible
      render() {
        const { width, height } = this.state
        return (
          <View style={{ padding: 12 }}>
            <Text style={{ width, height, ...fontSpecs }}>
              {text}
            </Text>
          </View>
        )
      }
    }

flatHeights(options: TSHeightsParams): Promise<number[]>

Calculate the height of each of the strings in an array.
This is an alternative to measure designed for cases in which you have to calculate the height of numerous text blocks with common characteristics (width, font, etc), a typical use case with <FlatList> or <RecyclerListView> components.

The measurement uses the same algorithm as measure but it returns only the height of each block and, by avoiding multiple steps through the bridge, it is faster... much faster on Android! I did tests on 5,000 random text blocks and these were the results (ms):

In the future I will prepare an example of its use with FlatList and multiple styles on the same card.

This is an object similar to the one you pass to measure, but the text option is an array of strings and the usePreciseWidth and lineInfoForLine options are ignored.

The result is a Promise that resolves to an array with the height of each block (in SP), in the same order in which the blocks were received. Unlike measure, null elements return 0 without generating an error, and empty strings return the same height that RN assigns to empty <Text> components (the difference in result between null and empty is intentional).
    //...
    import rnTextSize, { TSFontSpecs } from 'react-native-text-size'

    type Props = { texts: string[] }
    type State = { heights: number[] }

    // On iOS 9+ will show 'San Francisco' and 'Roboto' on Android
    const fontSpecs: TSFontSpecs = {
      fontFamily: undefined,
      fontSize: 24,
      fontStyle: 'italic',
      fontWeight: 'bold',
    }
    const texts = ['I ❤️ rnTextSize', 'I ❤️ rnTextSize using flatHeights', 'Thx for flatHeights']

    class Test extends Component<Props, State> {
      state = {
        heights: [],
      }

      async componentDidMount() {
        const { texts } = this.props
        const width = Dimensions.get('window').width * 0.8
        const heights = await rnTextSize.flatHeights({
          text: texts,   // array of texts to measure, can include symbols
          width,         // max-width of the "virtual" container
          ...fontSpecs,  // RN font specification
        })
        this.setState({ heights })
      }

      render() {
        const { texts } = this.props
        const { heights } = this.state
        return (
          <View style={{ padding: 12 }}>
            {texts.map((text, index) => (
              <Text key={index} style={{ height: heights[index], ...fontSpecs }}>
                {text}
              </Text>
            ))}
          </View>
        )
      }
    }

specsForTextStyles(): Promise<{ [key: string]: TSFontForStyle }>

Get system font information for the running OS. This is a wrapper for the iOS UIFont.preferredFontForTextStyle method and the current Android Material Design Type Scale styles.

The result is a Promise that resolves to a JS object whose keys depend on the OS, but whose values are in turn objects fully compatible with those used in RN styles, so it can be used to style <Text> or <TextInput> components. To know the key names, please see Keys from specsForTextStyles in the Wiki. I have not tried to normalize the keys of the result because, with the exception of two or three, they have a different interpretation in each OS, but you can use them to create custom styles according to your needs.

fontFromSpecs(specs: TSFontSpecs): Promise<TSFontInfo>

Returns the characteristics of the font obtained from the given specifications.
This parameter is a subset of TSMeasureParams, so the details are omitted here. fontFromSpecs uses an implicit allowsFontScaling: true and, since this is not a measuring function, includeFontPadding has no meaning.

The result is a Promise that resolves to a JS object with info for the given font and size, units in SP on Android or points on iOS, using floating point numbers where applicable*.

* Using floats is more accurate than integers and allows you to use your preferred rounding method, but consider no more than 5 digits of precision in these values. Also, remember RN doesn't work with subpixels on Android and will truncate these values.

See more in: Understanding typography at the Google Material Design site, and About Text Handling in iOS for iOS.

fontFamilyNames(): Promise<string[]>

Returns a Promise for an array of font family names available on the system. On iOS, this uses the UIFont.familyNames method of UIKit. On Android, the result is hard-coded for the system fonts and complemented dynamically with the fonts installed by your app, if any. See About Android Fonts and Custom Fonts in the Wiki to know more about this list.

fontNamesForFamilyName(fontFamily: string): Promise<string[]>

Wrapper for the UIFont.fontNamesForFamilyName method of UIKit; returns an array of font names available in a particular font family. You can use rnTextSize's fontFamilyNames function to get an array of the available font family names on the system. This is an iOS-only function; on Android it always resolves to null.

On iOS, the resulting width of both measure and flatHeights includes leading whitespace, while on Android it is discarded. On iOS, RN takes into account the absolute position on the screen to calculate the dimensions. rnTextSize can't do that, so both width and height can have a difference of up to 1 pixel (not point).
RN does not support Dynamic Type Sizes, but does an excellent job imitating this feature through allowFontScaling ...except for letterSpacing, which is not scaled. I hope a future version of RN solves this issue.

Although rnTextSize provides the resulting lineHeight in some functions, it does not support it as a parameter, because RN uses a non-standard algorithm to set it. I recommend you do not use lineHeight unless it is strictly necessary, but if you use it, try to make it at least 30% larger than the font size, or use the rnTextSize fontFromSpecs method if you want more precision.

Nested <Text> components (or ones with images inside) can be rasterized with dimensions different from those calculated; rnTextSize does not accept multiple sizes.

I'm a full-stack developer with more than 20 years of experience and I try to share most of my work for free and help others, but this takes a significant amount of time and effort, so if you like my work, please consider...

Of course, feedback, PRs, and stars are also welcome 🙃 Thanks for your support!

The BSD 2-Clause "Simplified" License.
https://awesomeopensource.com/project/aMarCruz/react-native-text-size
/Calendar/Lunch with Tug. Another way to say this is that the code that creates the new appointment chooses to give it the binding name Lunch with Tug in the /Calendar collection. After the item is created, this parcel or any other parcel can retrieve the same item via this exportable address.

/Archives for ISP account/personal/lists/knitting-group (or it could be flatter). The point of this exercise is that a user can directly share their IMAP boxes from Chandler, for use by other devices or other users.

/Calendar/, and more importantly, external references to this data (e.g. an HTTP address) do not change. This is an important use case because it shows that the exportable path shouldn't contain the parcel name. There needs to be another way to find out what parcel "owns" what content besides looking at the exportable address.

/Archives for ISP account hierarchy, because the user doesn't want those emails to show up in searches of the Mail hierarchy or be synchronized to the IMAP server. Instead, those emails are placed into a collection called "/Old-job-stuff/Mail". Another side effect is that this user can publish (or synchronize) all their mail except the "old-job-stuff" mail. This use case is important because it shows that email content items won't necessarily all appear in the exportable address hierarchy where they were originally created. Note that internally this function is a feature of collections, which is why for external use we feel addresses ought to have some structural relationship to collections.

/Inbox%20for%20ISP%20Account/ is the default place to store email arriving in this account. The parcel also needs to pick a unique name within Inbox, and it chooses a name based on the subject, but with a uniquifying number; thus /Inbox for ISP account/Re:shawl kit order[3] is the final exportable address.

/Inbox for ISP account and the new mail will be returned.

knitting-group collection, which may be shared to other knitters.
A new binding is created for the same email, probably /knitting-group/Re:shawl kit order[3]. This does not result in "not found" responses for the old address because the old address is also maintained as a binding. Both addresses work for the same email (remote Chandler repositories can tell if they are actually the same resource by checking some ID).

/Unread mail/Hi Lisa and /from katie/Hi Lisa (two bindings to the exact same mail).

Note that not all exportable path segments would have to be views. For example, /core/schemas/parcels/mailParcel wouldn't have to be a view. Or /Inbox for ISP Account/ need not be a view, it might simply be a collection.

/__uuids/DA259054-D93B-498C-8C10-DEBD83EF1357 then a GET request to that specific address could certainly return the exact item uniquely and permanently identified with that UUID (provided it's available and readable on that repository). However, a PROPFIND request to the /__uuids collection would not return all the children of that collection, so this namespace isn't useful for browsing.

    /__uuids
    /Archive for ISP account
    /Calendar
    /Contacts
    /Inbox for ISP account
    /Named Views    #also contains ad-hoc views -- any persisted views
    /Notes
    /Sent items for ISP account
    /Shared views
    /Tasks

This skeleton assumes a single email account -- more top-level collections would be added if the user has multiple email accounts. This skeleton has no relationship to the sidebar -- it doesn't dictate what does or does not appear in the sidebar. The calendar, contacts, notes and tasks collections contain all Content Items of a certain Kind.
For example, the calendar contains all Content Items that are events:

    /Calendar/Lunch with Tug.ics
    /Calendar/Symphony concert.ics
    /Calendar/Trip to Whidbey Island.ics

If a Calendar event is also an email (let's say the Symphony concert started out as an email, and was stamped as an event) then the same item appears in two places:

    /Calendar/Symphony concert.ics
    /Archive for ISP Account/Symphony concert

The Shared Views collection contains any other collection that the user decides to share. For example the user could have an ad-hoc collection for all home events, and a separate collection for all work events (based on the value of the "@context" attribute). In addition, the user could create ad-hoc collections of mixed types, such as a collection of emails and tasks relating to a project. These would all be "created" under the /Shared Views collection:

    /Shared views/Home Calendar
    /Shared views/Work Calendar
    /Shared views/Schema Project

Shared views could also be created as top-level collections, of course. New Content Items, when developed by 3rd party parcel developers or invented by the user, should be encouraged to create new top-level folders to hold their new Content Items. For example, the ZaoBao feed items could be created in /ZaoBao or sub-folders:

    /Zaobao/Feeds
    /Zaobao/Entries

Content Items are typically linked from views inside the Named Views and Shared Views folders, but those are more likely to have mixed content and be slower to synchronize if that's the only place that user data is found. To improve the ability to synchronize and have outside tools work with a set of Content Items, make sure they appear in their own collection roughly sorted by Kind.

    /Shared Views/Calendrier personnel
    /Shared Views/Projet Bagatelle

Since URLs are not intended to be displayed in the Chandler UI, we hope this is sufficient.
Note that individual content items are shared with paths like /__uuids/DA259054-D93B-498C-8C10-DEBD83EF1357, which is even less friendly to a French user, or any other user, than a path like "Shared Views". Clearly, paths are not meant for display, wherever it's possible to use something more friendly.

/Shared Views/Schema Project by searching "in" that namespace. Searches can be depth-infinity or depth 1. I can search every item to a depth of infinity in /Shared Views by providing that address. Note that queries can also limit scope by type -- e.g. "every item of type Mail" -- but that's a different kind of search.

Does the repository expose a way to pick a unique name for a new item inside an existing namespace, or is the application providing the new item responsible for suggesting that name? If the application suggests the name then the name could have some relationship to the semantics of the item. For example, an email parcel could create new emails using a sequential ID ("Msg336") or by using the Subject of the mail with a uniquifying number (e.g. Re: IRC and wing[2]).

/Shared Views/Schema Project/Recap meeting Agenda[2], it shouldn't matter if the two repositories have the same UID or internal storage path, and that could be a good thing as it's doing more to keep the two repositories at arm's length, independent of each other's versions/quirks. If the item that address refers to is changed, that's fine -- that's a content update and the remote repository downloads the new content. A sharing repository would make its items' UUIDs readable, but those would be used for reference and comparison by the remote Chandler, not as the remote repository's native UUID.

Do schema items/kinds have internal addresses, whereas Content items have external addresses? Does every item have an exportable address?
http://chandlerproject.org/Journal/ExportableAddressesJun2004
TERMIOS(3V)                                                        TERMIOS(3V)

NAME
       termios, tcgetattr, tcsetattr, tcsendbreak, tcdrain, tcflush, tcflow,
       cfgetospeed, cfgetispeed, cfsetispeed, cfsetospeed - get and set
       terminal attributes, line control, get and set baud rate, get and set
       terminal foreground process group ID

SYNOPSIS
       #include <termios.h>
       #include <unistd.h>

       int tcgetattr(fd, termios_p)
       int fd;
       struct termios *termios_p;

       int tcsetattr(fd, optional_actions, termios_p)
       int fd;
       int optional_actions;
       struct termios *termios_p;

       int tcsendbreak(fd, duration)
       int fd;
       int duration;

       int tcdrain(fd)
       int fd;

       int tcflush(fd, queue_selector)
       int fd;
       int queue_selector;

       int tcflow(fd, action)
       int fd;
       int action;

       speed_t cfgetospeed(termios_p)
       struct termios *termios_p;

       int cfsetospeed(termios_p, speed)
       struct termios *termios_p;
       speed_t speed;

       speed_t cfgetispeed(termios_p)
       struct termios *termios_p;

       int cfsetispeed(termios_p, speed)
       struct termios *termios_p;
       speed_t speed;

       #include <sys/types.h>
       #include <termios.h>

DESCRIPTION
       The termios functions describe a general terminal interface that is
       provided to control asynchronous communications ports. A more detailed
       overview of the terminal interface can be found in termio(4). That
       section also describes an ioctl() interface that can be used to access
       the same functionality. However, the function interface described here
       is the preferred user interface.

       Many of the functions described here have a termios_p argument that is
       a pointer to a termios structure. This structure contains the
       following members:

              tcflag_t c_iflag;    /* input modes */
              tcflag_t c_oflag;    /* output modes */
              tcflag_t c_cflag;    /* control modes */
              tcflag_t c_lflag;    /* local modes */
              cc_t     c_cc[NCCS]; /* control chars */

       These structure members are described in detail in termio(4).

       tcgetattr() gets the parameters associated with the terminal referred
       to by fd and stores them in the termios structure referenced by
       termios_p. tcsetattr() sets the parameters associated with the
       terminal from the termios structure referred to by termios_p;
       optional_actions determines when the change takes effect, as follows:

       o If optional_actions is TCSANOW, the change occurs immediately.

       o If optional_actions is TCSADRAIN, the change occurs after all
         output written to fd has been transmitted.
This function should be used when changing parameters that affect output. o If optional_actions is TCSAFLUSH, the change occurs after all output written to the object referred by fd has been trans- mitted, and all input that has been received but not read will be discarded before the change is made. The symbolic constants for the values of optional_actions are defined in <<sys/termios.h>>. If the terminal is using asynchronous serial data transmission, tcsend- break() transmits a continuous stream of zero-valued bits for a spe- cific duration.: o If queue_selector is TCIFLUSH, it flushes data received but not read. o If queue_selector is TCOFLUSH, it flushes data written but not transmitted. o If queue_selector is TCIOFLUSH, it flushes both data received but not read, and data written but not transmitted. The symbolic constants for the values of queue_selector and action are defined in termios.h. The default on open of a terminal file is that neither its input nor its output is suspended. tcflow() suspends transmission or reception of data on the object referred to by fd, depending on the value of actions: o If action is TCOOFF, it suspends output. o If action is TCOON, it restarts suspended output. o If action is TCIOFF, the system transmits a STOP character, which stops the terminal device from transmitting data to the system. (See termio(4).) o If action is TCION, the system transmits a START character, which starts the terminal device transmitting data to the system. (See termio(4).) The baud rate functions are provided for getting and setting the values of the input and output baud rates in the termios structure. The effects on the terminal device described below do not become effective until tcsetattr() is successfully called. The input and output baud rates are stored in the termios structure. The values shown in the table are supported. The names in this table are defined in termios.h center, tab(:) ; cb cb cb cb cb lfB r r lfB r . 
Name:Descrip- tion::Name:Description B0:Hang up::B600:600 baud B50:50 baud::B1200:1200 baud B75:75 baud::B1800:1800 baud B110:110 baud::B2400:2400 baud B134:134.5 baud::B4800:4800 baud B150:150 baud::B9600:9600 baud B200:200 baud::B19200:19200 baud B300:300 baud::B38400:38400 baud cfgetospeed() returns the output baud rate stored in the termios struc- ture pointed to by termios_p. cfsetospeed() sets the output baud rate stored in the termios structure pointed to by termios_p to speed. The zero baud rate, B0, is used to terminate the connection. If B0 is specified, the modem control lines shall no longer be asserted. Normally, this will disconnect the line. If the input baud rate is set to zero, the input baud rate will be specified by the value of the output baud rate. cfgetispeed() returns the input baud rate stored in the termios struc- ture. cfsetispeed() sets the input baud rate stored in the termios structure to speed. RETURN VALUES cfgetispeed() returns the input baud rate stored in the termios struc- ture. cfgetospeed() returns the output baud rate stored in the termios struc- ture. cfsetispeed() and cfsetospeed() return: 0 on success. -1 on failure and sets errno to indicate the error. All other functions return: 0 on success. -1 on failure and set errno to indicate the error. ERRORS EBADF The fd argument is not a valid file descriptor. ENOTTY The file associated with fd is not a terminal. tcsetattr() may set errno to: EINVAL The optional_actions argument is not a proper value. An attempt was made to change an attribute represented in the termios structure to an unsupported value. tcsendbreak() may set errno to: EINVAL The device does not support tcsendbreak(). tcdrain() may set errno to: EINTR A signal interrupted tcdrain(). EINVAL The device does not support tcdrain(). tcflush() may set errno to: EINVAL The device does not support tcflush(). The queue_selector argument is not a proper value. 
tcflow() may set errno to: EINVAL The device does not support tcflow(). The action argument is not a proper value. tcsetattr() may set errno to: EAGAIN There is insufficient memory available to copy in the arguments. EBADF fd is not a valid descriptor. EFAULT Some part of the structure pointed to by termios_p is outside the process's allocated address space. EINVAL optional_actions is not valid. EIO The calling process is a background process. ENOTTY fd does not refer to a terminal device. ENXIO The terminal referred to by fd is hung up. cfsetispeed() and cfsetospeed() may set errno to: EINVAL speed is greater than B38400 or less than 0. SEE ALSO setpgid(2V), setsid(2V), termio(4) 21 January 1990 TERMIOS(3V)
http://modman.unixdev.net/?sektion=3&page=cfsetispeed&manpath=SunOS-4.1.3
Getting Started with Flutter Instantly

There is no better time to deep dive into Flutter than now. Google Pay just announced that it's picking up Flutter to drive its global product development.

Introduction:

- A Flutter app is considerably cheaper to build than two separate native apps.
- The development team is relatively smaller, with linear processes.
- You get more time for working on the app's main features.
- Maintenance is also pruned down because of the single code base.

No wonder Flutter is more popular as a framework than React Native on both GitHub and Stack Overflow. Look at the survey below!

Start with Dart

Start using Dart to develop web-only apps. If you want to write a multi-platform app, then use Flutter. The Dart programming language shares many features with other languages, such as Kotlin and Swift, and can be trans-compiled into JavaScript code. Also, if you look at the list of most loved languages, Dart is just behind JavaScript.

Flutter is different from other frameworks because it uses neither WebView nor the OEM widgets that ship with the device. Instead, it uses its own high-performance rendering engine to draw widgets. It also implements most of its systems, such as animation, gestures, and widgets, in the Dart programming language, which allows developers to read, change, replace, or remove things easily. This gives developers excellent control over the system.

Flutter apps do not run in the browser; only the Android and iOS platforms are supported.

Open-source: Flutter is a free and open-source framework for developing mobile applications.

Cross-platform: This feature allows you to write the code once, maintain it once, and run it on different platforms.

Hot reload: Changes are immediately visible in the app itself.

Accessible native features and SDKs: We can easily access the SDKs on both platforms.

Minimal code: A Flutter app is developed in Dart, which uses JIT and AOT compilation to improve the overall start-up time and functioning and to accelerate performance.

Widgets: Flutter has two sets of widgets, Material Design and Cupertino, that help provide a glitch-free experience on all platforms.

Where to Start

Start using Dart to develop web-only apps. If you want to write a multi-platform app, then use Flutter. With Flutter 1.21, the Flutter SDK includes the Dart SDK, so you may not need to download the Dart SDK explicitly.

If you are a beginner, it is highly recommended to start with Dart only. It requires less disk space and is an easy way to get started. Once you have built competency in Dart as a programming language, you can move on to Flutter. Dart alone is also enough if you just want a continuous integration (CI) setup.

Installing Dart

The installation steps differ based on your OS platform. Use the one which suits your need.

For Mac

brew tap dart-lang/dart
brew install dart
brew info dart      # check info
dart --version      # check version

For Windows

You can install the Dart SDK using Chocolatey.

C:\> choco install dart-sdk
C:\> choco upgrade dart-sdk

You might need to add these paths to your environment variables:

C:\tools\dart-sdk\bin
C:\Users\%USERPROFILE%\AppData\Local\Pub\Cache\bin

The Dart SDK has a number of very useful command-line tools:

- dart - The Dart Virtual Machine (VM)
- dartdoc - An API documentation generator
- dart2js - A Dart-to-JavaScript compiler, which targets the web with Dart
- dartfmt - A code formatter for Dart
- dartanalyzer - A static code analyzer for Dart
- pub - The Dart package manager
- dartdevc - A quick Dart compiler for development
- dart2native - A tool that AOT-compiles Dart code to native x64 machine code
- dartaotruntime - A Dart runtime for AOT-compiled snapshots
Building a Web App with Dart

The Dart package manager, pub, is used to add and manage Dart packages. We'll use pub to install the command-line interface tools:

pub global activate webdev
pub global activate stagehand
mkdir demo
cd demo
stagehand web-simple

Stagehand is a Dart scaffolding tool. It makes setting up a new Dart project with a predetermined folder layout very easy. Once the project scaffold has been created, you can run pub get to update the project dependencies. To get all the dependencies listed in the pubspec.yaml file in the current working directory, plus their transitive dependencies, run:

pub get

If the system cache is ever flushed, you have to run the above command again, because pub downloads dependencies into the system cache and maps them in the .packages file.

Run the following command to launch a development server, which serves your app and watches for source code changes:

webdev serve

Then point your browser at the address webdev reports. If you want to run your application in debug mode, run the following command instead:

webdev serve --debug

You might need to set the path locally:

export PATH="$PATH":"$HOME/.pub-cache/bin"

Congratulations! You have just completed your first web app in Dart.

Gradually move to building a Flutter App

If you know object-oriented programming and concepts such as variables, loops, and conditionals, then you don't need any previous experience with Dart, mobile, or web programming.

Software Requirements

- Flutter SDK
- Chrome browser
- Text editor or IDE

For web-only development, you can use either IntelliJ IDEA or VS Code. Android Studio and Xcode are not required. While developing, run your web app in Chrome so you can debug with Dart DevTools.

Enable web development

At the command line, run the following commands to make sure that you have the latest web support enabled. You only need to run flutter config once to enable Flutter support for the web. If it throws "flutter: command not found", then ensure that you have installed the Flutter SDK and that it's in your path.
flutter channel beta
flutter upgrade
flutter config --enable-web

Run flutter doctor. The flutter doctor command reports the status of the installation:

flutter doctor

List the devices. To ensure that web is installed, list the devices available:

flutter devices

The Chrome device automatically starts Chrome. The Web Server also starts a server that hosts the app so that you can load it from any browser. Use the Chrome device during development so that you can use DevTools, and use the web server to test on other browsers.

The app will be displayed much as it would be in DartPad. Below is a short code skeleton:

import 'package:flutter/material.dart';

void main() => runApp(SignUpApp());

class SignUpApp extends StatelessWidget {}

class SignUpScreen extends StatelessWidget {}

class SignUpForm extends StatefulWidget {
  @override
}

class _SignUpFormState extends State<SignUpForm> {
}

Run the example

Click the Run button in the DartPad to run the example.

Create a new Flutter project

From your IDE, editor, or the command line, create a new Flutter project and name it anything you like. Replace the contents of lib/main.dart; the entire code for this example lives in that file.

All of the app's UI is created with Dart code. The app's UI adheres to Material Design. Flutter also has the Cupertino widget library, which implements the current iOS design language. You can also create your own custom widget library. In Flutter, almost everything is a Widget, including the app itself. The app's UI can be described as a widget tree.

Dart DevTools

Do not confuse Dart DevTools with Chrome DevTools. The following instructions for launching DevTools apply to any workflow:

- Run the app. Select the Chrome device from the pull-down and launch it from the IDE or, from the command line, use flutter run -d chrome.
- Get the web socket info for DevTools.
At the command line, or in the IDE, you should see a message like the following:

Debug service listening on ws://127.0.0.1:54998/pJqWWxNv92s=

Now launch DevTools:

- Ensure that DevTools is installed. Make sure you have the Flutter and Dart plugins set up. If you are working at the command line, launch the DevTools server.
- Connect to DevTools. When DevTools launches, you should see a message like "Serving DevTools at ..." followed by a local URL. Go to this URL in a Chrome browser and you should see the DevTools launch screen.
- Connect to the running app. Under "Connect to a running site", paste the ws location shown above, and click Connect.

You should now see Dart DevTools running successfully in your Chrome browser. Debug the app in Dart DevTools using the following steps:

- Set a breakpoint.
- Trigger the breakpoint.
- Resume the app.
- Delete the breakpoint.

Finally

If everything goes smoothly, you should be able to see a nice UI on the pad. Awesome! You have just created your first web app using Flutter!
https://gpsein.medium.com/getting-started-with-flutter-instantly-6a1649599706?source=post_internal_links---------7----------------------------
25 April 2007 15:17 [Source: ICIS news]

LONDON (ICIS news)--INEOS Polyolefins has announced a €40/tonne ($55/tonne) increase for May polypropylene (PP) business, a company source said on Wednesday.

"Product is tight and we are concerned about the cracker turnaround situation planned during the second half of the year," he added. "Margins will be a priority over volume in May."

PP availability has tightened considerably over the past month. Another European producer confirmed INEOS's statement. "I have never experienced such a tight stock position. We have to allocate volumes this month and next, and the customers are not happy," he said.

Buyers said it was too early to say where May prices would land, but a couple admitted that availability was more limited than they had expected. PP homopolymer prices rose by €25/tonne in April and were close to €1,200/tonne FD (free delivered) NWE (northwest Europe).
http://www.icis.com/Articles/2007/04/25/9023868/ineos-targets-40t-increase-on-europe-may-pp.html
How to use cookies for persisting state/users in NextJS

October 29, 2020

There are a number of ways to persist users in a React or Single Page Application. A lot of the time, devs use localStorage to store user data and load the data from there when required. While this approach works, it's not the most effective way, as it leaves users vulnerable to attacks. Using cookies is a little safer, although it's still not the safest option. Personally, I prefer a mixture of cookies and JWTs (JSON Web Tokens) with expiry to persist a user session and to force a user to re-login when their session expires. Using JWTs is out of the scope of this article.

As localStorage is undefined on the server side (localStorage does not exist on the server), it's impossible to access it before rendering a route. As such, our best bet is to check if a user's cookie is valid on the server side before rendering a route.

Getting started using cookies in React/NextJS

To use cookies in NextJS, we need to install 2 packages. For this tutorial, we'll be using cookie and react-cookie. react-cookie allows us to set the cookie from the client side, while the cookie package lets us access the set cookie from the server side. Install both packages by running:

npm install react-cookie cookie

Cookie-cutter is a tiny package that does the same thing as react-cookie.

Setting a cookie

With both packages installed, it's time to set a cookie. Usually, we set a cookie for a user once they've successfully signed in or signed up to our application. To set a cookie on sign-in, follow the example below.

// pages/login.js
import { useCookies } from "react-cookie"

const Login = () => {
  const [cookies, setCookie] = useCookies(["user"])

  const handleSignIn = async () => {
    try {
      // handle the API call to sign in here
      const response = await yourLoginFunction(username)
      const data = response.data
      setCookie("user", JSON.stringify(data), {
        path: "/",
        maxAge: 3600, // Expires after 1hr
        sameSite: true,
      })
    } catch (err) {
      console.log(err)
    }
  }

  return (
    <>
      <label htmlFor="username">
        <input type="text" placeholder="enter username" />
      </label>
    </>
  )
}

export default Login

In the snippet above, we call the useCookies hook from react-cookie, which gives us a setCookie function, and set the cookie to a default name — in our case, user. We then make a request to sign the user in by calling a login function, take the response from that API call, stringify the data (cookies are stored as text) and store that data in a cookie. We also pass some additional options to the cookie: path, which makes sure your cookie is accessible in all routes; maxAge, how long from the time the cookie is set until it expires; and sameSite. SameSite indicates that this cookie can only be used on the site it originated from; it is important to set this to true to avoid errors within Firefox logs.

Giving your app access to the cookie

To ensure that every route in our application has access to the cookie, we need to wrap our App component in a cookie provider. Inside _app.js, add the following bit of code.

// pages/_app.js
import { CookiesProvider } from "react-cookie"

export default function MyApp({ Component, pageProps }) {
  return (
    <CookiesProvider>
      <Component {...pageProps} />
    </CookiesProvider>
  )
}

Setting up the function to parse the cookie

Next, we need to set up a function that will check if the cookie exists on the server, parse it and return it. Create a new folder called helpers and within that add an index.js file. Inside this file, add the following piece of code.

// helpers/index.js
import cookie from "cookie"

export function parseCookies(req) {
  return cookie.parse(req ? req.headers.cookie || "" : document.cookie)
}

The function above accepts a request object and checks the request headers to find the stored cookie.
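The cookie header is just a semicolon-separated string, so the parsing step is easy to see in isolation. Below is a minimal stand-alone sketch of what cookie.parse does for the simple cases used here; the real package handles more encoding edge cases, and the helper name parseCookieString is invented for illustration.

```javascript
// Minimal illustration of cookie-header parsing, similar in spirit to
// cookie.parse from the "cookie" package. Assumes simple name=value pairs.
function parseCookieString(str) {
  const out = {};
  if (!str) return out;
  for (const pair of str.split(";")) {
    const idx = pair.indexOf("=");
    if (idx < 0) continue; // skip malformed fragments
    const name = pair.slice(0, idx).trim();
    // Values may be URL-encoded, e.g. JSON.stringify-ed user data
    const value = decodeURIComponent(pair.slice(idx + 1).trim());
    if (!(name in out)) out[name] = value; // first occurrence wins
  }
  return out;
}

// Example: a header a browser might send after setCookie("user", ...)
const header = 'user=%7B%22name%22%3A%22ada%22%7D; theme=dark';
const parsed = parseCookieString(header);
console.log(parsed.theme); // "dark"
console.log(JSON.parse(parsed.user).name); // "ada"
```

This also makes clear why the article stringifies the user object before storing it: the cookie jar only holds flat strings, so structured data has to round-trip through JSON.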
Accessing the cookie within your component

Finally, we will use getInitialProps in our component to check if the user already has a valid cookie on the server side before rendering the requested route. An alternative to this approach is using getServerSideProps.

import { parseCookies } from "../helpers/"

export default function HomePage({ data }) {
  return (
    <>
      <div>
        <h1>Homepage</h1>
        <p>Data from cookie: {data.user}</p>
      </div>
    </>
  )
}

HomePage.getInitialProps = async ({ req, res }) => {
  const data = parseCookies(req)

  // check if there's a valid cookie on the server side.
  // If there isn't, redirect the user to index
  if (res) {
    if (Object.keys(data).length === 0 && data.constructor === Object) {
      res.writeHead(301, { Location: "/" })
      res.end()
    }
  }

  return {
    data: data && data,
  }
}

Within getInitialProps, we pass the request object (req), which is available to us on the server side in NextJS, to the parseCookies function. This function returns the cookie, which we can then send back to the client as props. We also check whether the response object is available; the res object is only available on the server. If a user hits the HomePage route using next/link or next/router, the request is made from the client side and the res object will not be available.

Using the res object, we check whether there are cookies and whether they are still valid. If the data object is empty, the cookie isn't valid, and we redirect the user back to the index page rather than showing a flash of the HomePage before redirecting.

Note that subsequent requests to pages containing getInitialProps made via next/link or next/router will be done from the client side, i.e. the cookie will be extracted on the client rather than the server for routes accessed that way.

And with that, you can now store cookies for users in your application, expire those cookies, and secure your app to a good extent.
https://blog.adebola.dev/how-to-use-cookies-for-authentication-in-nextjs/
Answered by: When to create a new DataContext

Hello,

In my app I create a new instance of my DataContext object (created using LINQ to SQL) and then do some querying etc. using it. Whilst doing this, I call a custom function that I wrote for one of the tables, also created using LINQ to SQL. Inside this function, I need a reference to a DataContext. Do I:

1. Create a new DataContext within the function?
2. Pass in the existing DataContext as a parameter?

If you create a new DataContext when one already exists, what happens? I want to reuse the existing DataContext, but would rather avoid passing around references to it for the sake of readability and tidiness.

Many thanks,
Ben S

Example:

// My command line application
class Program
{
    static void Main()
    {
        MyDataContext db = new MyDataContext();
        db.MyTables.MyCustomFunction();
        // Maybe this should be:
        // db.MyTables.MyCustomFunction(db);
    }
}

////////////////////////// MyDataClasses.cs //////////////////////////

// Created by LINQ to SQL
public partial class MyDataContext : DataContext
{
    public System.Data.Linq.Table<MyTable> MyTables
    {
        get { return this.GetTable<MyTable>(); }
    }
}

// Created by LINQ to SQL
[Table(Name = "dbo.MyTable")]
public partial class MyTable
{
    // A custom function that I wrote
    public void MyCustomFunction()
    {
        // Do something that needs a reference to a DataContext here.
        // Do I create a new DataContext here, or do I pass in the existing one
        // that I created in the main program?
    }
}

Friday, November 30, 2007 12:08 PM

Hi Ben,

The advice I have seen (and have been following for the most part) is to keep the lifetime of the data context relatively short and focused on the unit of work you are doing. So you wouldn't want to keep a live data context around for the duration of your app and use that for all subsequent LINQ calls. I just responded to a thread of yours and mentioned a business object that I created that wrapped some LINQ to SQL off an established data context.
I'll expand on the pattern I've been following and let you decide if you wish to follow it (and possibly some of the hardcore LINQ guys will respond with comments and/or other (better) approaches).

I have mainly avoided extending my LINQ to SQL generated classes with custom functions, aside from simple things like calculated values (e.g. a FullName property based on FirstName and LastName properties). I view the LINQ to SQL generated classes as the data access layer (DAL) itself, and any calculated properties would extend this and be part of the DAL.

When I want to query the DAL, I will usually encapsulate the LINQ to SQL code in a business object layer made up of business objects that encapsulate a data context. So I'll typically have a parameterless constructor for the business object that instantiates its own data context, and a constructor that takes a data context as a parameter, but regardless of how the BO is constructed, all LINQ to SQL wrapped in its methods will use the data context instance variable. This allows me to pretty much forget about the data context and not have to worry about passing it around all the time (except when I feel that I need to and will pass an existing data context as a parameter to the constructor).

public class EnterpriseBo
{
    public EnterpriseBo()
    {
        // Calls a static method that instantiates a new data context
        // based on a connection string already read from app.config
        // (maintained by a custom connection string manager class)
        db = EnterpriseDataContext.NewContext();
    }

    public EnterpriseBo(EnterpriseDataContext DataContext)
    {
        db = DataContext;
    }

    protected EnterpriseDataContext db;
}

public class MachineBo : EnterpriseBo
{
    public MachineBo() : base() { }

    public MachineBo(EnterpriseDataContext DataContext) : base(DataContext) { }

    {
        return machine;
    }

    // Other methods that usually join against multiple tables, return custom
    // defined types (that can be serialized across WCF), etc.
}

Hope this helps.
-Larry

Friday, November 30, 2007 1:55 PM

First of all, there is a mistake in your code. You've added a method MyCustomFunction to your entity class MyTable and expect to have this method available on your LINQ to SQL table MyTables, which is of type Table<MyTable>. This is wrong and your code won't compile.

Regarding your question, it's better not to have more than one instance of the same type of DataContext pointing to the same database in your application, as it may cause several mistakes and bugs in your code. For example, if you load a record from one DataContext, trying to save its changes on another one won't work. The only safe way of having several DataContexts in your application at the same time is turning off change tracking (via the ObjectTrackingEnabled property) and deferred loading (via the DeferredLoadingEnabled property) on all your DataContexts, which of course disables important parts of their functionality. So, it's better not to maintain more than one instance of DataContext in your application unless you have strong reasons to do so.

Regarding your problem keeping your code neat when sharing the same instance of DataContext in various parts of the application: make it static or share it via a singleton class. For example, you can write:

public static class SharedElements
{
    public static MyDataContext DB { get; private set; }

    static SharedElements()
    {
        DB = new MyDataContext();
    }
}

and use SharedElements.DB wherever in your code you need a reference to the DataContext.

Friday, November 30, 2007 2:44 PM

Thanks for your responses - very helpful. (Yes, I must have been asleep when I wrote the example!)

I will be using a DataContext in the context of an ASP.NET page, and therefore it will only be used briefly whilst the page is being rendered and then disposed of.

Thanks again,
Ben

Friday, November 30, 2007 5:08 PM

It is inevitable that there will be cases where multiple contexts will be looking at the same database. ASP.NET, as mentioned, is one.
You're not going to get away from this fact. This is why we have optimistic concurrency support in LINQ to SQL.

So don't be afraid of having multiple data contexts -- just be aware that multiple contexts may exist, and in general treat them appropriately. In particular:

- Realize that the data is stale the moment it's queried. (This is why long-lived contexts are problematic.)
- Never assume that two contexts agree about the state of any particular row in the database.

Failure to do so is what would lead to problems as alluded to in CompuBoy's response.

Friday, November 30, 2007 6:58 PM

Hey Matt,

How are you envisioning the DataContext being used? As a business facade that wraps around and hides the context, or as a source that can be used by the presentation layer? The LinqDataSource seems to imply the presentation layer.

Paul

Monday, January 07, 2008 11:10 PM
https://social.msdn.microsoft.com/Forums/en-US/ea6fe19c-cd21-40d1-b005-750af6632bfe/when-to-create-a-new-datacontext?forum=linqprojectgeneral
URL Routing

When it comes to combining multiple controller or view functions (however you want to call them), you need a dispatcher. A simple way would be applying regular expression tests on PATH_INFO and calling registered callback functions that return the value. Werkzeug provides a much more powerful system, similar to Routes. All the objects mentioned on this page must be imported from werkzeug.routing, not from werkzeug!

Quickstart

Here is a simple example which could be the URL definition for a blog:

    from werkzeug.routing import Map, Rule, NotFound, RequestRedirect
    from werkzeug.exceptions import HTTPException

    url_map = Map([
        Rule('/', endpoint='blog/index'),
        Rule('/<int:year>/', endpoint='blog/archive'),
        Rule('/<int:year>/<int:month>/', endpoint='blog/archive'),
        Rule('/<int:year>/<int:month>/<int:day>/', endpoint='blog/archive'),
        Rule('/<int:year>/<int:month>/<int:day>/<slug>',
             endpoint='blog/show_post'),
        Rule('/about', endpoint='blog/about_me'),
        Rule('/feeds/', endpoint='blog/feeds'),
        Rule('/feeds/<feed_name>.rss', endpoint='blog/show_feed')
    ])

    def application(environ, start_response):
        urls = url_map.bind_to_environ(environ)
        try:
            endpoint, args = urls.match()
        except HTTPException, e:
            return e(environ, start_response)
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Rule points to %r with arguments %r' % (endpoint, args)]

So what does that do? First of all we create a new Map which stores a bunch of URL rules. Then we pass it a list of Rule objects.

Each Rule object is instantiated with a string that represents a rule and an endpoint which will be the alias for what view the rule represents. Multiple rules can have the same endpoint, but should have different arguments to allow URL construction.

The format for the URL rules is straightforward, but explained in detail below.

Inside the WSGI application we bind the url_map to the current request which will return a new MapAdapter. This url_map adapter can then be used to match or build domains for the current request.
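To make the dispatch idea concrete, here is a stripped-down, stdlib-only sketch of what a Map/Rule-style matcher does under the hood: rule strings with <int:...> and plain <...> placeholders are compiled to regular expressions and matched against a path to produce an (endpoint, args) pair. This illustrates the concept only; it is not Werkzeug's actual implementation, and the function names are invented.

```python
import re

def compile_rule(rule):
    """Translate a rule like '/<int:year>/<slug>' into a compiled regex."""
    pattern = ''
    for part in re.split(r'(<[^>]+>)', rule):
        if part.startswith('<') and part.endswith('>'):
            spec = part[1:-1]
            if ':' in spec:
                converter, name = spec.split(':', 1)
            else:
                converter, name = 'string', spec
            if converter == 'int':
                pattern += '(?P<%s>\\d+)' % name
            else:  # default 'string' converter: one path segment, no slash
                pattern += '(?P<%s>[^/]+)' % name
        else:
            pattern += re.escape(part)
    return re.compile('^%s$' % pattern)

def match(rules, path):
    """Return (endpoint, args) for the first matching rule, else None."""
    for rule, endpoint in rules:
        m = compile_rule(rule).match(path)
        if m is not None:
            args = m.groupdict()
            # apply the int converter to values captured by <int:...>
            for name in re.findall(r'<int:([^>]+)>', rule):
                args[name] = int(args[name])
            return endpoint, args
    return None

rules = [
    ('/', 'blog/index'),
    ('/<int:year>/', 'blog/archive'),
    ('/<int:year>/<int:month>/<int:day>/<slug>', 'blog/show_post'),
]
print(match(rules, '/2007/'))            # ('blog/archive', {'year': 2007})
print(match(rules, '/2007/4/25/hello'))  # blog/show_post with four args
```

The real system adds much more on top of this (redirects for missing trailing slashes, per-method matching, URL building), but the core of matching is exactly this kind of pattern compilation.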
The MapAdapter.match() method can then either return a tuple in the form (endpoint, args) or raise one of the three exceptions NotFound, MethodNotAllowed, or RequestRedirect. For more details about those exceptions have a look at the documentation of the MapAdapter.match() method. Rule Format¶ Rule strings basically are just normal URL paths with placeholders in the format <converter(arguments):name>, where visited without a trailing slash will trigger a redirect to the same URL with that slash appended. The list of converters can be extended, the default converters are explained below. Builtin Converters¶ Here a list of converters that come with Werkzeug: - class werkzeug.routing. UnicodeConverter(map, minlength=1, maxlength=None, length=None)¶ This converter is the default converter and accepts any string but only one path segment. Thus the string can not include a slash. This is the default validator. Example: Rule('/pages/<page>'), Rule('/<string(length=2):lang_code>') - class werkzeug.routing. PathConverter(map)¶ Like the default UnicodeConverter, but it also matches slashes. This is useful for wikis and similar applications: Rule('/<path:wikipage>') Rule('/<path:wikipage>/edit') - class werkzeug.routing. AnyConverter(map, *items)¶ Matches one of the items provided. Items can either be Python identifiers or strings: Rule('/<any(about, help, imprint, class, "foo,bar"):page_name>') - class werkzeug.routing. IntegerConverter(map, fixed_digits=0, min=None, max=None)¶ This converter only accepts integer values: Rule('/page/<int:page>') This converter does not support negative values. - class werkzeug.routing. FloatConverter(map, min=None, max=None)¶ This converter only accepts floating point values: Rule('/probability/<float:probability>') This converter does not support negative values. Maps, Rules and Adapters¶ - class werkzeug.routing. 
Map(rules=None, default_subdomain='', charset='utf-8', strict_slashes=True, redirect_defaults=True, converters=None, sort_parameters=False, sort_key=None, encoding_errors='replace', host_matching=False)¶ The map class stores all the URL rules and some configuration parameters. Some of the configuration values are only stored on the Map instance since those affect all rules, others are just defaults and can be overridden for each rule. Note that you have to specify all arguments besides the rules as keyword arguments! New in version 0.5: sort_parameters and sort_key was added. New in version 0.7: encoding_errors and host_matching was added. converters¶ The dictionary of converters. This can be modified after the class was created, but will only affect rules added after the modification. If the rules are defined with the list passed to the class, the converters parameter to the constructor has to be used instead. add(rulefactory)¶ Add a new rule or factory to the map and bind it. Requires that the rule is not bound to another map. bind(server_name, script_name=None, subdomain=None, url_scheme='http', default_method='GET', path_info=None, query_args=None)¶ Return a new MapAdapterwith the details specified to the call. Note that script_name will default to '/'if not further specified or None. The server_name at least is a requirement because the HTTP RFC requires absolute URLs for redirects and so all redirect exceptions raised by Werkzeug will contain the full canonical URL. If no path_info is passed to match()it will use the default path info passed to bind. While this doesn’t really make sense for manual bind calls, it’s useful if you bind a map to a WSGI environment which already contains the path info. subdomain will default to the default_subdomain for this map if no defined. If there is no default_subdomain you cannot use the subdomain feature. New in version 0.7: query_args added New in version 0.8: query_args can now also be a string. 
bind_to_environ(environ, server_name=None, subdomain=None)¶ Like bind(), but you can pass it a WSGI environment and it will fetch the information from that dictionary. Note that because of limitations in the protocol there is no way to get the current subdomain and real server_name from the environment. If you don't provide it, Werkzeug will use SERVER_NAME and SERVER_PORT (or HTTP_HOST if provided) as the server_name, with the subdomain feature disabled. If subdomain is None but an environment and a server name are provided, it will calculate the current subdomain automatically. Example: server_name is 'example.com' and the SERVER_NAME in the WSGI environ is 'staging.dev.example.com'; the calculated subdomain will be 'staging.dev'. If the object passed as environ has an environ attribute, the value of this attribute is used instead. This allows you to pass request objects. Additionally, PATH_INFO is used as the default path_info of the MapAdapter, so that you don't have to pass the path info to the match method. Changed in version 0.5: previously this method accepted a bogus calculate_subdomain parameter that did not have any effect. It was removed because of that. Changed in version 0.8: this will no longer raise a ValueError when an unexpected server name was passed.

default_converters = ImmutableDict({'int': <class 'werkzeug.routing.IntegerConverter'>, 'string': <class 'werkzeug.routing.UnicodeConverter'>, 'default': <class 'werkzeug.routing.UnicodeConverter'>, 'path': <class 'werkzeug.routing.PathConverter'>, 'float': <class 'werkzeug.routing.FloatConverter'>, 'any': <class 'werkzeug.routing.AnyConverter'>, 'uuid': <class 'werkzeug.routing.UUIDConverter'>})¶ New in version 0.6: a dict of default converters to be used.

is_endpoint_expecting(endpoint, *arguments)¶ Iterate over all rules and check if the endpoint expects the arguments provided.
This is for example useful if you have some URLs that expect a language code and others that do not, and you want to wrap the builder a bit so that the current language code is automatically added if not provided but endpoints expect it.

- class werkzeug.routing.MapAdapter(map, server_name, script_name, subdomain, url_scheme, path_info, default_method, query_args=None)¶ Returned by Map.bind() or Map.bind_to_environ() and does the URL matching and building based on runtime information.

allowed_methods(path_info=None)¶ Returns the valid methods that match for a given path. New in version 0.7.

build(endpoint, values=None, method=None, force_external=False, append_unknown=True)¶ Building URLs works pretty much the other way round. Instead of match you call build and pass it the endpoint and a dict of arguments for the placeholders. The build function also accepts an argument called force_external which, if you set it to True, will force external URLs. Per default, external URLs (which include the server name) will only be used if the target URL is on a different subdomain.

>>> m = Map([
...     Rule('/', endpoint='index'),
...     Rule('/downloads/', endpoint='downloads/index'),
...     Rule('/downloads/<int:id>', endpoint='downloads/show')
... ])
>>> urls = m.bind("example.com", "/")
>>> urls.build("downloads/show", {'id': 42})
'/downloads/42'
>>> urls.build("downloads/show", {'id': 42}, force_external=True)
'http://example.com/downloads/42'

Because URLs cannot contain non-ASCII data you will always get bytestrings back. Non-ASCII characters are urlencoded with the charset defined on the map instance. Additional values are converted to unicode and appended to the URL as URL querystring parameters:

>>> urls.build("index", {'q': 'My Searchstring'})
'/?q=My+Searchstring'

When processing those additional values, lists are furthermore interpreted as multiple values (as per werkzeug.datastructures.MultiDict):

>>> urls.build("index", {'q': ['a', 'b', 'c']})
'/?q=a&q=b&q=c'

If a rule does not exist when building, a BuildError exception is raised.
The build method accepts an argument called method which allows you to specify the method you want to have a URL built for, if you have different methods specified for the same endpoint. New in version 0.6: the append_unknown parameter was added.

dispatch(view_func, path_info=None, method=None, catch_http_exceptions=False)¶ Does the complete dispatching process. view_func is called with the endpoint and a dict with the values for the view. It should look up the view function, call it, and return a response object or WSGI application. HTTP exceptions are not caught by default, so that applications can display nicer error messages by just catching them by hand. If you want to stick with the default error messages, you can pass it catch_http_exceptions=True and it will catch the HTTP exceptions. Here is a small example for the dispatch usage:

from werkzeug.wrappers import Request, Response
from werkzeug.wsgi import responder
from werkzeug.routing import Map, Rule

def on_index(request):
    return Response('Hello from the index')

url_map = Map([Rule('/', endpoint='index')])
views = {'index': on_index}

@responder
def application(environ, start_response):
    request = Request(environ)
    urls = url_map.bind_to_environ(environ)
    return urls.dispatch(lambda e, v: views[e](request, **v),
                         catch_http_exceptions=True)

Keep in mind that this method might return exception objects, too, so use Response.force_type to get a response object.

get_default_redirect(rule, method, values, query_args)¶ A helper that returns the URL to redirect to if it finds one. This is used for default redirecting only.

get_host(domain_part)¶ Figures out the full host name for the given domain part. The domain part is a subdomain if host matching is disabled, or a full host name if it is enabled.

make_alias_redirect_url(path, endpoint, values, method, query_args)¶ Internally called to make an alias redirect URL.
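The get_host() behavior described above can be sketched in a few lines. This is an illustrative reimplementation under stated assumptions (an empty domain part means the default subdomain), not Werkzeug's actual code:

```python
def get_host(server_name, domain_part, host_matching=False):
    # With host matching enabled, the domain part already is the full host.
    if host_matching:
        return domain_part
    # Otherwise the domain part is a subdomain prefixed to the server name;
    # an empty subdomain yields the bare server name.
    if not domain_part:
        return server_name
    return domain_part + '.' + server_name

print(get_host('example.com', 'staging.dev'))  # staging.dev.example.com
```

This mirrors the bind_to_environ() example above, where the subdomain 'staging.dev' combined with server_name 'example.com' corresponds to the host 'staging.dev.example.com'.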
match(path_info=None, method=None, return_rule=False, query_args=None)¶ The usage is simple: you just pass the match method the current path info as well as the method (which defaults to GET). The following things can then happen:

- you receive a NotFound exception that indicates that no URL is matching. A NotFound exception is also a WSGI application you can call to get a default page-not-found page (it happens to be the same object as werkzeug.exceptions.NotFound)
- you receive a MethodNotAllowed exception that indicates that there is a match for this URL but not for the current request method. This is useful for RESTful applications.
- you receive a RequestRedirect exception with a new_url attribute. This exception is used to notify you about a redirect that Werkzeug requests from your WSGI application. This is for example the case if you request /foo although the correct URL is /foo/. You can use the RequestRedirect instance as a response-like object, similar to all other subclasses of HTTPException.
- you get a tuple in the form (endpoint, arguments) if there is a match (unless return_rule is True, in which case you get a tuple in the form (rule, arguments))

If the path info is not passed to the match method, the default path info of the map is used (which defaults to the root URL if not defined explicitly). All of the exceptions raised are subclasses of HTTPException, so they can be used as WSGI responses. They will all render generic error or redirect pages. Here is a small example for matching:

>>> m = Map([
...     Rule('/', endpoint='index'),
...     Rule('/downloads/', endpoint='downloads/index'),
...     Rule('/downloads/<int:id>', endpoint='downloads/show')
... ])
>>> urls = m.bind("example.com", "/")
>>> urls.match("/", "GET")
('index', {})
>>> urls.match("/downloads/42")
('downloads/show', {'id': 42})

And here is what happens on redirect and missing URLs:

>>> urls.match("/downloads")
Traceback (most recent call last):
...
RequestRedirect: http://example.com/downloads/
>>> urls.match("/missing")
Traceback (most recent call last):
...
NotFound: 404 Not Found

New in version 0.6: return_rule was added. New in version 0.7: query_args was added. Changed in version 0.8: query_args can now also be a string.

- class werkzeug.routing.Rule(string, defaults=None, subdomain=None, methods=None, build_only=False, endpoint=None, strict_slashes=None, redirect_to=None, alias=False, host=None)¶ A Rule represents one URL pattern. There are some options for Rule that change the way it behaves; they are passed to the Rule constructor. Note that besides the rule string all arguments must be keyword arguments, in order not to break the application on Werkzeug upgrades.

- string: Rule strings are basically just normal URL paths with placeholders in the format <converter(arguments):name>, where the converter and the arguments are optional. A branch URL (one ending with a slash) that is matched without the trailing slash will trigger a redirect to the same URL with the missing slash appended. The converters are defined on the Map.
- endpoint: The endpoint for this rule. This can be anything: a reference to a function, a string, a number, etc. The preferred way is using a string, because the endpoint is used for URL generation.
- defaults: An optional dict with defaults for other rules with the same endpoint. This is a bit tricky but useful if you want to have unique URLs:

url_map = Map([
    Rule('/all/', defaults={'page': 1}, endpoint='all_entries'),
    Rule('/all/page/<int:page>', endpoint='all_entries')
])

If a user now visits /all/page/1, they will be redirected to /all/. If redirect_defaults is disabled on the Map instance, this will only affect the URL generation.
- subdomain: The subdomain rule string for this rule. If not specified, the rule only matches for the default_subdomain of the map. If the map is not bound to a subdomain, this feature is disabled.
This can be useful if you want to have user profiles on different subdomains and all subdomains are forwarded to your application:

url_map = Map([
    Rule('/', subdomain='<username>', endpoint='user/homepage'),
    Rule('/stats', subdomain='<username>', endpoint='user/stats')
])

- methods: A sequence of HTTP methods this rule applies to. If not specified, all methods are allowed. For example, this can be useful if you want different endpoints for POST and GET. If methods are defined and the path matches, but the method matched against is not in this list or in the list of another rule for that path, the error raised is of the type MethodNotAllowed rather than NotFound. If GET is present in the list of methods and HEAD is not, HEAD is added automatically. Changed in version 0.6.1: HEAD is now automatically added to the methods if GET is present. The reason for this is that existing code often did not work properly in servers not rewriting HEAD to GET automatically, and it was not documented how HEAD should be treated. This was considered a bug in Werkzeug because of that.
- strict_slashes: Override the Map setting for strict_slashes only for this rule. If not specified, the Map setting is used.
- build_only: Set this to True and the rule will never match, but will create a URL that can be built. This is useful if you have resources on a subdomain or folder that are not handled by the WSGI application (like static data).
- redirect_to: If given, this must be either a string or a callable. In the case of a callable, it is called with the URL adapter that triggered the match and the values of the URL as keyword arguments, and has to return the target for the redirect; otherwise it has to be a string with placeholders in rule syntax:

def foo_with_slug(adapter, id):
    # ask the database for the slug for the old id. this of
    # course has nothing to do with werkzeug.
    return 'foo/' + Foo.get_slug_for_id(id)

url_map = Map([
    Rule('/foo/<slug>', endpoint='foo'),
    Rule('/some/old/url/<slug>', redirect_to='foo/<slug>'),
    Rule('/other/old/url/<int:id>', redirect_to=foo_with_slug)
])

When the rule is matched, the routing system will raise a RequestRedirect exception with the target for the redirect. Keep in mind that the URL will be joined against the URL root of the script, so don't use a leading slash on the target URL unless you really mean the root of that domain.
- alias: If enabled, this rule serves as an alias for another rule with the same endpoint and arguments.
- host: If provided, and the URL map has host matching enabled, this can be used to provide a match rule for the whole host. This also means that the subdomain feature is disabled. New in version 0.7: The alias and host parameters were added.

Rule Factories¶

- class werkzeug.routing.RuleFactory¶ As soon as you have more complex URL setups, it's a good idea to use rule factories to avoid repetitive tasks. Some of them are builtin; others can be added by subclassing RuleFactory and overriding get_rules.
- class werkzeug.routing.Subdomain(subdomain, rules)¶ All URLs provided by this factory have the subdomain set to a specific domain. For example, if you want to use the subdomain for the current language, this can be a good setup:

url_map = Map([
    Rule('/', endpoint='#select_language'),
    Subdomain('<string(length=2):lang_code>', [
        Rule('/', endpoint='index'),
        Rule('/about', endpoint='about'),
        Rule('/help', endpoint='help')
    ])
])

All the rules except for the '#select_language' endpoint will now listen on a two-letter subdomain that holds the language code for the current request.
- class werkzeug.routing.
Submount(path, rules)¶ Like Subdomain, but prefixes the URL rule with a given string:

url_map = Map([
    Rule('/', endpoint='index'),
    Submount('/blog', [
        Rule('/', endpoint='blog/index'),
        Rule('/entry/<entry_slug>', endpoint='blog/show')
    ])
])

Now the rule 'blog/show' matches /blog/entry/<entry_slug>.

- class werkzeug.routing.EndpointPrefix(prefix, rules)¶ Prefixes all endpoints (which must be strings for this factory) with another string. This can be useful for sub-applications:

url_map = Map([
    Rule('/', endpoint='index'),
    EndpointPrefix('blog/', [Submount('/blog', [
        Rule('/', endpoint='index'),
        Rule('/entry/<entry_slug>', endpoint='show')
    ])])
])

Rule Templates¶

- class werkzeug.routing.RuleTemplate(rules)¶ Returns copies of the rules wrapped, and expands string templates in the endpoint, rule, defaults or subdomain sections. Here is a small example of such a rule template:

from werkzeug.routing import Map, Rule, RuleTemplate

resource = RuleTemplate([
    Rule('/$name/', endpoint='$name.list'),
    Rule('/$name/<int:id>', endpoint='$name.show')
])

url_map = Map([resource(name='user'), resource(name='page')])

When a rule template is called, the keyword arguments are used to replace the placeholders in all the string parameters.

Custom Converters¶

You can easily add custom converters. The only thing you have to do is to subclass BaseConverter and pass that new converter to the url_map. A converter has to provide two public methods, to_python and to_url, as well as a member that represents a regular expression.
Here is a small example:

from random import randrange
from werkzeug.routing import Rule, Map, BaseConverter, ValidationError

class BooleanConverter(BaseConverter):
    def __init__(self, url_map, randomify=False):
        super(BooleanConverter, self).__init__(url_map)
        self.randomify = randomify
        self.regex = '(?:yes|no|maybe)'

    def to_python(self, value):
        if value == 'maybe':
            if self.randomify:
                return not randrange(2)
            raise ValidationError()
        return value == 'yes'

    def to_url(self, value):
        return value and 'yes' or 'no'

url_map = Map([
    Rule('/vote/<bool:werkzeug_rocks>', endpoint='vote'),
    Rule('/vote/<bool(randomify=True):foo>', endpoint='foo')
], converters={'bool': BooleanConverter})

If you want that converter to be the default converter, name it 'default'.

Host Matching¶

New in version 0.7. Starting with Werkzeug 0.7 it's also possible to do matching on whole host names instead of just the subdomain. To enable this feature you need to pass host_matching=True to the Map constructor and provide the host argument to all routes:

url_map = Map([
    Rule('/', endpoint='www_index', host='www.example.com'),
    Rule('/', endpoint='help_index', host='help.example.com')
], host_matching=True)

Variable parts are of course also possible in the host section:

url_map = Map([
    Rule('/', endpoint='www_index', host='www.example.com'),
    Rule('/', endpoint='user_index', host='<user>.example.com')
], host_matching=True)
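Under the hood, converters like the ones above boil down to regular expressions with named groups. The following stdlib-only sketch (the regex table and helper are illustrative assumptions, not Werkzeug's actual implementation, and converter arguments are not handled) shows how a placeholder such as <int:page> could be compiled and matched:

```python
import re

# Illustrative regex fragments for a few converter names; a real router
# would take these from the converter classes themselves.
CONVERTER_REGEX = {
    'string': r'[^/]+',   # default: any text, but no slash
    'int': r'\d+',        # digits only (no negative values)
    'path': r'[^/].*?',   # may contain slashes
}

def compile_rule(rule):
    """Turn a rule like '/page/<int:page>' into a compiled, anchored regex."""
    def repl(match):
        converter = match.group(1) or 'string'
        name = match.group(2)
        return '(?P<%s>%s)' % (name, CONVERTER_REGEX[converter])
    pattern = re.sub(r'<(?:(\w+):)?(\w+)>', repl, rule)
    return re.compile('^%s$' % pattern)

m = compile_rule('/page/<int:page>').match('/page/42')
print(m.groupdict())  # {'page': '42'} -- to_python() would then convert it
```

A real converter's to_python() then turns the captured string into a Python value, and to_url() does the reverse when building URLs.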
http://werkzeug.pocoo.org/docs/0.10/routing/
Calculating the Sum of a List of Numbers

We will begin our investigation with a simple problem that you already know how to solve without using recursion. Suppose that you want to calculate the sum of a list of numbers such as [1, 3, 5, 7, 9]. An iterative function that computes the sum is shown below. The function uses an accumulator variable (total) to compute a running total of all the numbers in the list, starting with 0 and adding each number in the list.

def iterative_sum(numbers):
    total = 0
    for n in numbers:
        total = total + n
    return total

iterative_sum([1, 3, 5, 7, 9])  # => 25

How would we solve this recursively? The key insight is that the sum of the list numbers is the sum of the first element of the list (numbers[0]) and the sum of the numbers in the rest of the list (numbers[1:]). To state it in a functional form:

def sum_of(numbers):
    if len(numbers) == 0:
        return 0

    return numbers[0] + sum_of(numbers[1:])

sum_of([1, 3, 5, 7, 9])  # => 25

There are a few key ideas in this code sample to look at. First, on line 2 we are checking to see if the list is empty. This check is crucial and is our escape clause from the function. The sum of a list of length 0 is trivial; it is just zero. Second, on line 5 our function calls itself! This is the reason that we call the sum_of algorithm recursive. A recursive function is a function that calls itself. The diagram below shows the series of recursive calls that are needed to sum the list [1, 3, 5, 7, 9]. The diagram below shows the additions that are performed as sum_of works its way backward through the series of calls. When sum_of returns from the topmost problem, we have the solution to the whole problem.

![Series of recursive returns from adding a list of numbers](figures/sum-list-out.png)
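To make the series of calls and returns described above visible without the diagrams, here is a traced variant of sum_of; the depth parameter and the printing are additions for illustration only:

```python
def sum_of(numbers, depth=0):
    indent = '  ' * depth
    print(indent + 'sum_of(%r)' % (numbers,))   # show the call going in
    if len(numbers) == 0:
        result = 0                              # escape clause: empty list
    else:
        result = numbers[0] + sum_of(numbers[1:], depth + 1)
    print(indent + '=> %d' % result)            # show the return coming back
    return result

total = sum_of([1, 3, 5, 7, 9])
print(total)  # 25
```

Running it prints one line per recursive call, indented by depth, followed by the additions performed as the calls unwind, exactly the two phases the diagrams depict.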
https://bradfieldcs.com/algos/recursion/calculating-the-sum-of-a-list-of-numbers/
I may have jumped the gun on criticizing Dart's part operator yesterday. Some of the behavior caught me a bit by surprise, after which it was only natural to jump to the conclusion that its sole use would be for nefarious coding. It's also possible that I might still need to catch up on sleep after a bout with a stomach bug. Ahem. Anyhow, after some helpful comments in yesterday's post (and some G+ discussion), I think I have a better handle on parts in Dart. I also think I might have an opportunity to use them myself. First, let's make sure that I understand them. In my main.dart file, I import the Greeter class from a library:

import 'greeter.dart';

main() {
  var standard = new StandardGreeter();
  print(standard.greet("Bob"));
}

I use it to create a standard greeting and print out the result:

➜ code dart part/main.dart
Hello, Bob

My greeter library is so successful with a single class that I decide to add other kinds of greeters: the "howdy" greeter and the more subdued "hi" greeter. Clearly, it is going to be a maintenance nightmare to define all three classes in one file. This is the point of parts. The original greeter library with the StandardGreeter class was defined in greeter.dart as:

library greeter;

class StandardGreeter {
  greet(name) => "Hello, ${name}";
}

It's a library. It has a class. The class has a method. Easy-peasy. Now that I have a bunch of classes, I move them all into their own files: standard_greeter.dart, hi_greeter.dart, and howdy_greeter.dart. The greeter.dart library file now needs to pull in each file, which it does with part:

library greeter;

part 'standard_greeter.dart';
part 'hi_greeter.dart';
part 'howdy_greeter.dart';

Last, but not least, each part has to declare itself the property of this library. These are not reusable code chunks; they belong exclusively to the library. This is where the part of directive comes into play.
In standard_greeter.dart:

part of greeter;

class StandardGreeter {
  greet(name) => "Hello, ${name}";
}

In hi_greeter.dart:

part of greeter;

class HiGreeter {
  greet(name) => "Hi, ${name}";
}

In howdy_greeter.dart:

part of greeter;

class HowdyGreeter {
  greet(name) => "Howdy, ${name}";
}

Back in main.dart, the import statement is unchanged: I am still pulling in the greeter library. But now, thanks to the multi-faceted nature of that library, I have access to three greeter classes that can be used to greet Bob:

import 'greeter.dart';

main() {
  var standard = new StandardGreeter();
  var hi = new HiGreeter();
  var howdy = new HowdyGreeter();

  print(standard.greet("Bob"));
  print(hi.greet("Bob"));
  print(howdy.greet("Bob"));
}

No output from the dart_analyzer means that I must be doing something right (or at least not horribly wrong). And, with that, I have my three different greetings:

➜ code dart_analyzer part/main.dart
➜ code dart part/main.dart
Hello, Bob
Hi, Bob
Howdy, Bob

I can very much see the utility of doing something like this. In particular, I can already see that this would help Hipster MVC. In the collection library, for instance, I am currently defining my collection class, event list class, event class, and more in a single file. This was tedious at times. I definitely plan on making use of parts to split those things out into smaller, more manageable files. It still concerns me a bit that it is possible to extract a bunch of functions into a part as I did last night. A bunch of functions is the opposite of organization. It is the developer equivalent of the junk drawer. But I am happy to avoid the practice, especially if I have yet another way to make the rest of my code stronger. Day #592
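For comparison outside Dart: Python has no direct part/part of equivalent, but the closest analogue is a package whose __init__.py re-exports classes from submodules, so a single import exposes every piece. This runnable sketch builds such a package in a temporary directory; the file names and classes are illustrative, mirroring the greeter example above:

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, 'greeter')
os.mkdir(pkg)

# Each "part" lives in its own file; __init__.py stitches them together.
files = {
    '__init__.py': "from .standard import StandardGreeter\n"
                   "from .hi import HiGreeter\n",
    'standard.py': "class StandardGreeter:\n"
                   "    def greet(self, name): return 'Hello, ' + name\n",
    'hi.py':       "class HiGreeter:\n"
                   "    def greet(self, name): return 'Hi, ' + name\n",
}
for name, src in files.items():
    with open(os.path.join(pkg, name), 'w') as f:
        f.write(src)

sys.path.insert(0, root)
import greeter  # one import exposes every "part"

print(greeter.StandardGreeter().greet('Bob'))  # Hello, Bob
print(greeter.HiGreeter().greet('Bob'))        # Hi, Bob
```

The key difference from Dart is that each Python submodule is still an independently importable module, whereas a Dart part belongs exclusively to its library.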
https://japhr.blogspot.com/2012/12/parts-for-dart-classes.html
Issues

ZF-1982: Zend_Session_Namespace should have a getter for the $_namespace property

Description

Currently there is no way to obtain the namespace string (protected $_namespace property) of a Zend_Session_Namespace instance. There should be a getNamespace() or similar method for obtaining this property after instantiation.

Posted by Wil Sinclair (wil) on 2008-03-21T17:05:31

Posted by Ralph Schindler (ralph) on 2008-04-22T11:28:41.000+0000

Updating project management info

Posted by Ralph Schindler (ralph) on 2009-01-10T09:30:24.000+0000

Will evaluate within 2 weeks.

Posted by Robert Basic (robertbasic) on 2009-11-19T07:58:16.000+0000

Could the opener provide a case where this would be useful? ATM, the only thing I can come up with is: but this can be done with: If there's no other useful case for this, then it's just duplication.

Posted by Marco Kaiser (bate) on 2009-11-20T04:10:11.000+0000

fixed in r19080
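The requested accessor is PHP, but the shape of the fix is language-agnostic. A minimal Python sketch of the same getter pattern (the names are illustrative, not Zend's API):

```python
class SessionNamespace:
    """Stores session values under a named namespace; the name is protected."""
    def __init__(self, namespace='Default'):
        self._namespace = namespace

    def get_namespace(self):
        # The getter requested in this issue: expose the protected name
        # without allowing it to be changed after instantiation.
        return self._namespace

ns = SessionNamespace('user_prefs')
print(ns.get_namespace())  # user_prefs
```

The point of the feature request is exactly this: without such a getter, code that receives an already constructed namespace object has no way to recover the name it was created with.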
http://framework.zend.com/issues/browse/ZF-1982?focusedCommentId=27954&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
History¶

v14.0.0¶ 04 Feb 2018

- #1688: Officially deprecated the basic_auth and digest_auth tools and the httpauth module, triggering DeprecationWarnings if they're used. Applications should instead adapt to use the more recent auth_basic and auth_digest tools. This deprecated functionality will be removed in a subsequent release soon.
- Removed DeprecatedTool and the long-deprecated and disabled tidy and nsgmls tools. See the rationale for this change.

v13.1.0¶ 17 Dec 2017

- #1231 via PR #1654: CaseInsensitiveDict now re-uses the generalized functionality from jaraco.collections to provide a more complete interface for CaseInsensitiveDict and HeaderMap. Users are encouraged to use the implementation from jaraco.collections except when dealing with headers in CherryPy.

v13.0.1¶ 17 Dec 2017

v12.0.2¶ 03 Dec 2017

v12.0.1¶ 20 Nov 2017

- Fixed issues importing cherrypy.test.webtest (by creating a module and importing classes from cheroot) and added a corresponding DeprecationWarning.

v12.0.0¶ 17 Nov 2017

- Drop support for Python 3.1 and 3.2.
- #1625: Removed the response timeout and timeout monitor and related exceptions, as it is not possible to interrupt a request. Servers that wish to exit a request prematurely are recommended to monitor response.time and raise an exception or otherwise act accordingly. Servers that previously disabled timeouts by invoking cherrypy.engine.timeout_monitor.unsubscribe() will now crash. For forward-compatibility with this release on older versions of CherryPy, disable timeouts using the config option:

  'engine.timeout_monitor.on': False,

  Or test for the presence of the timeout_monitor attribute:

  with contextlib2.suppress(AttributeError):
      cherrypy.engine.timeout_monitor.unsubscribe()

  Additionally, the TimeoutError exception has been removed, as it's no longer called anywhere. If your application benefits from this Exception, please comment in the linked ticket describing the use case, and we'll help devise a solution or bring the exception back.
v11.3.0¶

- Bump to cheroot 5.9.0.
- The cherrypy.test.webtest module is now merged with the cheroot.test.webtest module. The CherryPy name is retained for now for compatibility and will be removed eventually.

v11.2.0¶ 13 Nov 2017

v11.1.0¶ 28 Oct 2017

- PR #1611: Expose default status logic for a redirect as HTTPRedirect.default_status.
- PR #1615: HTTPRedirect.status is now an instance property and derived from the value in args. Although it was previously possible to set the property on an instance, and this change prevents that possibility, CherryPy never relied on that behavior and we presume no applications depend on that interface.
- #1627: Fixed issue in the proxy tool where more than one port would appear in request.base and thus in cherrypy.url.
- PR #1645: Added new log format markers:
  - i holds a per-request UUID4
  - z outputs UTC time in the format of RFC 3339
  - cherrypy._cprequest.Request.unique_id.uuid4 now has a lazily invocable UUID4
- #1646: Improve HTTP status conversion helper.
- PR #1638: Always use backslash for the path separator when processing paths in staticdir.
- #1190: Fix gzip, caching, and staticdir tools integration. Makes the cache of gzipped content valid.
- Requires cheroot 5.8.3 or later.
- Also, many improvements around continuous integration and code quality checks.
- This release contained an unintentional regression in environments that are hostile to namespace packages, such as Pex, Celery, and py2exe. See PR #1671 for details.

v10.2.0¶ 12 Mar 2017

- PR #1580: CPWSGIServer.version is now reported as CherryPy/x.y.z Cheroot/x.y.z. Bump to cheroot 5.2.0.
- The codebase is now PEP 8 compliant; the flake8 linter is enabled in Travis CI by default.
- The max line restriction is now set to 120 for the flake8 linter.
- The PEP 257 linter runs as a separate allowed-failure job in Travis CI.
- A few bugs related to undeclared variables have been fixed.
- pre-commit testing goes faster due to enabled caching.
v10.1.0¶ 07 Feb 2017

v10.0.0¶ 20 Jan 2017

- #1332: CherryPy now uses portend for checking and waiting on ports for startup and teardown checks. The following names are no longer present:
  - cherrypy._cpserver.client_host
  - cherrypy._cpserver.check_port
  - cherrypy._cpserver.wait_for_free_port
  - cherrypy._cpserver.wait_for_occupied_port
  - cherrypy.process.servers.check_port
  - cherrypy.process.servers.wait_for_free_port
  - cherrypy.process.servers.wait_for_occupied_port
  Use this functionality from the portend package directly.

v9.0.0¶ 19 Jan 2017

- #1481: Move functionality from cherrypy.wsgiserver to the cheroot 5.0 project.

v8.9.0¶ 13 Jan 2017

v8.6.0¶ 27 Dec 2016

- #1538 and #1090: Removed cruft from the setup script and instead rely on include_package_data to ensure the relevant files are included in the package. Note, this change does cause LICENSE.md no longer to be included in the installed package.

v8.5.0¶ 26 Dec 2016

v8.3.1¶ 25 Dec 2016

v8.3.0¶ 24 Dec 2016

- Consolidated some documentation and include the more concise readme in the package long description, as found on PyPI.

v8.1.1¶ 27 Sep 2016

v8.1.0¶ 04 Sep 2016

- #1473: HTTPError now also works as a context manager.
- #1487: The sessions tool now accepts a storage_class parameter, which supersedes the now-deprecated storage_type parameter. The storage_class should be the actual Session subclass to be used.
- Releases now use setuptools_scm to track the release versions. Therefore, releases can be cut by simply tagging a commit in the repo. Version numbers are now stored in exactly one place.

v8.0.1¶ 03 Sep 2016

v8.0.0¶ 02 Sep 2016

- #1483: Removed deprecated constructs: the cherrypy.lib.http module; unrepr, modules, and attributes in cherrypy.lib.
- PR #1476: Drop support for python-memcached<1.58
- #1401: Handle NoSSLErrors.
- #1489: In wsgiserver.WSGIGateway.respond, the application must now yield bytes and not text, as the spec requires.
If text is received, it will now raise a ValueError instead of silently encoding using ISO-8859-1.
- Removed the unicode filename from the package, working around pypa/pip#3894 and pypa/setuptools#704.

v7.1.0¶ 25 Jul 2016

v7.0.0¶ 24 Jul 2016

- Removed the long-deprecated backward compatibility for legacy config keys in the engine. Use the config for the namespaced plugins instead:
  - autoreload_on -> autoreload.on
  - autoreload_frequency -> autoreload.frequency
  - autoreload_match -> autoreload.match
  - reload_files -> autoreload.files
  - deadlock_poll_frequency -> timeout_monitor.frequency

v6.2.0¶ 18 Jul 2016

v6.1.1¶ 16 Jul 2016

v6.1.0¶ 14 Jul 2016

- Combined the wsgiserver2 and wsgiserver3 modules into a single module, cherrypy.wsgiserver.

v6.0.0¶ 05 Jun 2016

- Setuptools is now required to build CherryPy. Pure distutils installs are no longer supported. This change allows CherryPy to depend on other packages and re-use code from them. It's still possible to install pre-built CherryPy packages (wheels) using pip without Setuptools.
- six is now a requirement, and subsequent requirements will be declared in the project metadata.
- #1440: Back out changes from PR #1432 attempting to fix redirects with Unicode URLs, as it also had the unintended consequence of causing the 'Location' to be bytes on Python 3.
- cherrypy.expose now works on classes.
- The cherrypy.config decorator is now used throughout the code internally.

v5.6.0¶ 05 Jun 2016

- @cherrypy.expose now will also set the exposed attribute on a class.
- Rewrote all tutorials and internal usage to prefer the decorator usage of expose rather than setting the attribute explicitly.
- Removed test-specific code from tutorials.

v5.5.0¶ 05 Jun 2016

- #1397: Fix for filenames with semicolons and quote characters in filenames found in headers.
- #1311: Added decorator for registering tools.
- #1194: Use simpler encoding rules for the SCRIPT_NAME and PATH_INFO environment variables in CherryPy Tree, allowing non-latin characters to pass even when wsgi.version is not u.0.
- #1352: Ensure that multipart fields are decoded even when cached in a file.

v5.4.0¶ 10 May 2016

- cherrypy.test.webtest.WebCase now honors a 'WEBTEST_INTERACTIVE' environment variable to disable interactive tests (still enabled by default). Set it to '0', 'false', or 'False' to disable interactive tests.
- #1408: Fix AttributeError when listiterator was accessed using the next attribute.
- #748: Removed cherrypy.lib.sessions.PostgresqlSession.
- PR #1432: Fix errors with redirects to Unicode URLs.

v5.3.0¶ 30 Apr 2016

v5.2.0¶ 30 Apr 2016

- #1410: Moved hosting to GitHub (cherrypy/cherrypy).

v5.1.0¶

- Bugfix for issue #1315 for the test_HTTP11_pipelining test in Python 3.5.
- Bugfix for issue #1382 regarding keyword-argument support for Python 3 in the config file.
- Bugfix for issue #1406 for the test_2_KeyboardInterrupt test in Python 3.5, by monkey-patching the HTTPRequest, given a bug in CPython that is affecting the test suite.
- Add an additional parameter raise_subcls to the test helpers openURL and CPWebCase.getPage to have finer control over which exceptions can be raised.
- Add support for direct keywords on the calls (e.g. foo=bar) in the config file under Python 3.
- Add additional validation to determine if the process is running as a daemon in cherrypy.process.plugins.SignalHandler, to allow the execution of the test suite under CI tools.

v5.0.0¶

- Removed deprecated support for the ssl_certificate and ssl_private_key attributes and implicit construction of an SSL adapter on Python 2 WSGI servers.
- The default SSL adapter on Python 2 is the builtin SSL adapter, matching Python 3 behavior.
- Pull request #94: In the proxy tool, defer to the Host header for resolving the base if no base is supplied.

v3.8.2¶

v3.8.0¶

v3.7.0¶

v3.6.0¶

- Fixed HTTP range headers for a negative length larger than the content size.
- Disabled universal wheel generation as wsgiserver has Python duality. - Pull Request #42: Correct TypeError in check_auth when encrypt is used. - Pull Request #59: Correct signature of HandlerWrapperTool. - Pull Request #60: Fix error in SessionAuth where login_screen was incorrectly used. - Issue #1077: Support keyword-only arguments in dispatchers (Python 3). - Issue #1019: Allow logging host name in the access log. - Pull Request #50: Fixed race condition in session cleanup.
http://cherrypy.readthedocs.io/en/latest/history.html
This article is a Python rewrite of Calculate Cumulative Distribution Function by Sorting, originally written in Ruby. When you want to know the probability density function (PDF) of a random variable, the naive approach is a histogram, but choosing how to cut the bins takes trial and error, and getting a clean graph takes a considerable number of measurements, which is a nuisance. In such cases it is cleaner to look at the cumulative distribution function (CDF) instead of the probability density function, and it can be obtained with a single sort. Below, we compare how random numbers that follow a normal distribution look when viewed as a PDF and when viewed as a CDF. The code was checked on Google Colab. First, let's see how to obtain the probability density function from a histogram. Import all the libraries you will need later. import random import matplotlib.pyplot as plt import numpy as np from math import pi, exp, sqrt from scipy.optimize import curve_fit from scipy.special import erf Generate 1000 random numbers that follow a Gaussian distribution with mean 1 and variance 1. N = 1000 d = [] for _ in range(N): d.append(random.gauss(1, 1)) When I plot it, it looks like this. plt.plot(d) plt.show() It looks like it's swaying around 1. Let's make a histogram and find the probability density function. You can also find it with matplotlib.pyplot.hist, but I use numpy.histogram because I prefer the way it returns its values. hy, bins = np.histogram(d) hx = bins[:-1] + np.diff(bins)/2 hy = hy / N plt.plot(hx,hy) plt.show() It looks like a Gaussian distribution, but it's pretty noisy. Now, let's assume that this histogram follows a Gaussian distribution and find the mean and standard deviation. Use scipy.optimize.curve_fit. First, define the function used for fitting. def mygauss(x, m, s): return 1.0/sqrt(2.0*pi*s**2) * np.exp(-(x-m)**2/(2.0*s**2)) Note that you must use np.exp instead of exp, since a NumPy array is passed as x.
If you pass this function and the data to scipy.optimize.curve_fit, it will return an array of estimates and a covariance matrix, so let's display them. v, s = curve_fit(mygauss, hx, hy) print(f"mu = {v[0]} +- {sqrt(s[0][0])}") print(f"sigma = {v[1]} +- {sqrt(s[1][1])}") Since the diagonal components of the covariance matrix are variances, their square roots are displayed as errors. The result is different every time, but it looks like this, for example. mu = 0.9778044193329654 +- 0.16595607115412642 sigma = 1.259695311989267 +- 0.13571713273726863 While the true values are both 1, the estimated mean is 0.98 ± 0.17 and the estimated standard deviation is 1.3 ± 0.1, which is not wildly off, but not good either. The cumulative distribution function $F(x)$ is the probability that the value of a random variable $X$ is smaller than $x$, that is, $F(x) = P(X < x)$. Now, suppose that when $N$ independent data points are obtained, the $k$-th value when they are arranged in ascending order is $x$. Then the probability that the random variable $X$ is smaller than $x$ can be estimated as $k/N$. From the above, the cumulative distribution function can be obtained by sorting the array of $N$ random variables and plotting the $k$-th data point on the x-axis and $k/N$ on the y-axis. Let's see. sx = sorted(d) sy = [i/N for i in range(N)] plt.plot(sx, sy) plt.show() A relatively clean error function was obtained. As before, treat this as an error function and find the mean and variance by fitting. First, prepare the error function for fitting. Note that conventions for defining the error function vary, so you have to add 1 and divide by 2, and divide the argument by √2. def myerf(x, m, s): return (erf((x-m)/(sqrt(2.0)*s))+1.0)*0.5 Let's fit it. v, s = curve_fit(myerf, sx, sy) print(f"mu = {v[0]} +- {sqrt(s[0][0])}") print(f"sigma = {v[1]} +- {sqrt(s[1][1])}") The result looks like this.
mu = 1.00378752698032 +- 0.0018097681998120645 sigma = 0.975197323266848 +- 0.0031393908850607445 The mean is 1.004 ± 0.002 and the standard deviation is 0.975 ± 0.003, a considerable improvement despite using exactly the same data. To see the distribution of a random variable, I introduced how to get the probability density function using a histogram and how to see the cumulative distribution function with a sort. Histograms require a lot of trial and error on how to cut the bins, whereas the cumulative distribution function requires no parameters at all, just a sort. Even if you ultimately want a probability density function, you can get cleaner data by first finding the cumulative distribution function, then taking a moving average and differentiating it numerically. The cumulative distribution function also estimates the parameters of the original distribution more accurately. Intuitively, this is because in the region of interest (for example, near the mean) a histogram can only use the data points that fall into a single bin, while the cumulative distribution function can use on the order of $N/2$ points. I'm not so confident about that argument, though, so ask a nearby expert.
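The "sort, smooth, differentiate" route from CDF to PDF mentioned above can be sketched as follows. The seed, window width, and variable names here are my own choices, not from the article:

```python
import numpy as np

rng = np.random.default_rng(42)
d = rng.normal(1.0, 1.0, 1000)  # same setup as the article: mean 1, sigma 1

# Empirical CDF: the k-th smallest value is plotted against k/N
sx = np.sort(d)
sy = np.arange(len(sx)) / len(sx)

# Smooth both axes with a simple moving average (window width is a free choice)
w = 25
kernel = np.ones(w) / w
sx_s = np.convolve(sx, kernel, mode="valid")
sy_s = np.convolve(sy, kernel, mode="valid")

# Numerical derivative of the smoothed CDF gives a PDF estimate
pdf = np.gradient(sy_s, sx_s)
```

The estimate's peak should sit near 0.4, the height of a unit-variance Gaussian density, with no bin-width tuning at all.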
https://memotut.com/en/09e44c0435bd5a972272/
David J. Anderson Delivers "State of Kanban-Land" Address at the 2013 Lean Kanban Conference. At the Lean Kanban North America conference in Chicago, Kanban pioneer David J. Anderson discussed the “State of Kanban-Land”. InfoQ captured some highlights of his address. The address began with some statistics: The Kanbandev Yahoo! group now has 2426 members, up about 20% from last year, and there were 17,000 posts. There are 15 more Yahoo! groups about Lean or Kanban, but most of the traffic seems to be on the Yahoo! group Kanbandev. There are 64 groups on LinkedIn.com related to Kanban or Lean. Meetup.com lists 89 groups around the world that discuss Kanban. The Limited WIP Society website lists 40 local groups of Kanban aficionados. Anderson's classic Kanban: Successful Evolutionary Change for Your Technology Business, which is considered by many to be the Kanban bible, has sold over 30,000 copies across all formats and translations, and in addition sales are rising and higher now than 2 years ago. There are also many new books including an MEAP from Manning "Kanban in Action" and other provocative titles such as "Kanban for Skeptics". "Lean Kanban" is a return to the original name (2009) of the conference (known in the last several years as the “Lean Software and Systems Conference”) and the golden triangular logo will brand all of the Lean Kanban conference activities. In 2012 global attendance at all Kanban and Lean related conferences was 900; this year that number is expected to reach 1500. The specific conference websites are linked at. These numbers are expected to continue growing, and Anderson asked that people who are interested in creating a Kanban meet-up or conference should reach out to David J. Anderson Associates (or contact InfoQ for more information). Anderson went on to discuss Lean Kanban University. This is a trade association and standards body for offering Kanban training classes globally.
In order to offer accredited training, trainers must be qualified to a certain standard. Anderson noted that it is not a trivial thing to achieve that recognition. Lean Kanban University (LKU) has a standard curriculum for Kanban training. If you have a training company and you want your trainees to receive a certificate from LKU using your own training materials, then your trainers must be accredited and the training materials must be consistent with the curriculum and approved against the standard, and your company needs to be a member of LKU. In addition to the regular two-day training class and the "Accredited Kanban Trainer" designation, a “Kanban Coaching Professional” credential was introduced for training certified Kanban coaches. There are currently 42 individuals who are Accredited Kanban Trainers globally, in addition to the companies that are members, and there are 76 Kanban Coaching Professionals. These are recruiting new membership, and there is more information on the website. Kanban is seeing significant growth for example in Eastern Europe and South America. But while there's momentum, there isn't a huge amount of Kanban training happening globally. The number of public classes globally in the past eight months was less than 100. That indicates about 500 additional private classes, which Anderson asserts “is a very small number”. In North America there is a relatively low demand for Kanban training, which he labels “curious”. He says “There are so many companies around the world who have been doing Kanban for five years, you would think that it would have caught on to a much greater degree.” Anderson pointed out that five years ago Jim Shore, a popular figure in the Agile community, predicted that by today there would be many Kanban installations that will have done it wrong and there would be failures. He says that “what's interesting is that we are not seeing that at all. 
We are seeing perhaps shallow Kanban implementations, where a few companies are doing Kanban boards and such, but there is so much more to Kanban than that. We see a very rudimentary understanding of risk analysis and risk management, we see very few companies doing operations reviews, and very few implementations where it has scaled across many services in the organization and connected those dots. These are things covered in the LKU standard training. “There are a number of companies that teach Kanban training for a living. The largest in America is Imaginet. Their management would love hearing from you about what it would take to grow this Kanban training market, and what do you need to see in order to make it possible to buy Kanban training. We would like to understand why there is a lot more training in Europe than there is in North America.” Anderson said "We now know that Kanban works, there are companies that have been using it successfully for years, it is entirely institutionalized. Where in the past we have seen a history of change initiatives that tend to wear out, and companies drop some new idea and move on to something else, we have institutionalized Kanban adoption and this tells us that companies like it, and it is helping them, and now that is how they do business." Anderson’s announcement Anderson went on to note that “Many people know where it has been used outside of software, e.g. Reuters uses it in HR, as do many law firms, architecture firms, and so on. So we feel it is time we moved this out of the tech sector to knowledge workers in general. With that, I am announcing that next year's conference will be rebranded as the “Modern Management Methods Conference”. Because what we are doing here is not a process, it is a management method. So we are taking this out of the tech sector. The venue will be the Hyatt Regency San Francisco May 4 to 8, 2014."
http://www.infoq.com/news/2013/05/State_of_KanbanLand_Address/
#include <ggi/ovl.h> int ggiOvlSetPos(ggiOvl_t ovl, int x, int y); int ggiOvlMove(ggiOvl_t ovl, int x, int y); int ggiOvlGetPos(ggiOvl_t ovl, int *x, int *y); int ggiOvlSetSize(ggiOvl_t ovl, int w, int h); int ggiOvlGetSize(ggiOvl_t ovl, int *w, int *h); int ggiOvlSetZoom(ggiOvl_t ovl, int x, int y, int w, int h); int ggiOvlGetZoom(ggiOvl_t ovl, int *x, int *y, int *w, int *h); int ggiOvlSetVals(ggiOvl_t ovl, int x, int y, int w, int h, int zx, int zy, int zw, int zh); Important Note: all the coordinates and sizes are given in the units which were last set with the function ggiOvlSetCoordBase. In addition, where noted below, coordinates are offset by the values last passed to ggiOvlSetCoordBase. New resources, upon which ggiOvlSetCoordBase has never been used, start with their point of origin at the top left corner of the visual, and use the same units that were used to negotiate them via the ggiOvlAdd* or ggiOvlCreate* functions. If you didn't specifically ask for certain units, then you should call ggiOvlSetCoordBase to set the desired units before attempting to use these functions. ggiOvlSetPos positions the graphics overlay with the top left corner at the given coordinates x and y. The actual coordinates used may be altered by the target subsystem if there are restrictions on the placement of the resource. ggiOvlGetPos will read the actual position of the resource into the offered parameters x and y. ggiOvlMove is just another name for ggiOvlSetPos. ggiOvlSetSize alters the visible size of a graphics overlay. The actual sizes used may be altered by the target subsystem if there are restrictions on the placement of the resource. ggiOvlGetSize will read the actual size of the resource into the offered parameters w and h. The zoom parameters (see below) are interpolated to the new size of the resource. ggiOvlSetZoom alters the contents displayed inside the overlay on resources which support zooming and panning.
The parameters are given in the same units as those of the above five functions, and if the contents are not panned or zoomed, x, y will have the same values as last supplied to ggiOvlSetPos and w, h will have the same values as last supplied to ggiOvlSetSize. Their usage is best described by example, see below. ggiOvlSetVals is used to set all the positioning and zooming parameters at the same time. It is recommended to use this function, rather than the above, when you need to perform a combination of positioning, sizing, or zooming operations on the overlay, for two reasons: Firstly, if you are setting these parameters on an overlay which is not hidden, this will likely induce fewer user-visible artifacts; Secondly, some resources may not support the intermediate states you traverse by using combinations of the other functions above, and you may find they fail to actually implement the requested changes. Example 1. Move, zoom, and pan a video overlay. ggi_mode mode; int sizex, sizey, posx, posy, err, i; /* Assume mode has already been filled in, e.g. via ggiGetMode(). */ /* Set the units to pixels */ ggiGASetCoordBase(GA_COORBASE_PIXELS,0,0,0,0); /* Get the current size of the overlay */ ggiOvlGetSize(ovl, &sizex, &sizey); err = ggiOvlMove(ovl, (mode.visible.x - sizex)/2, (mode.visible.y - sizey)/2); if (err) { fprintf(stderr, "Could not center the video display\n"); } /* Get the current position of the overlay, which may be slightly off from the above. */ ggiOvlGetPos(ovl, &posx, &posy); /* Zoom the contents of the overlay to twice their normal size. */ ggiOvlSetZoom(ovl, posx, posy, sizex * 2, sizey * 2); /* Slowly pan over from the top left quadrant of the video image, where we are now, to the lower right quadrant */ for (i = 100; i > 0; i--) { ggiOvlSetZoom(ovl, posx + sizex / i, posy + sizey / i, sizex * 2, sizey * 2); usleep(10000); }
http://www.ggi-project.org/docs/ggiovlsetvals.html
Welcome to the last walkthrough (for now) of the new WIF tools for Visual Studio 11 Beta! This is my absolute favorite, where we show you how to take advantage of ACS2 from your application with just a few clicks. The complete series include Using the Local Development STS, manipulating common config settings, connecting with a business STS, get an F5 experience with ACS2. Let’s say that you downloaded the new WIF tools (well done!) and you at least checked out the first walkthrough. That test stuff is all fine and dandy, but now you want to get to something a bit more involved: you want to integrate with multiple identity providers, even if they come in multiple flavors. Open the WIF tools dialog, and from the Provider tab pick the “Use the Windows Azure Access Control Service” option. You’ll get to the UI shown below. There’s not much, right? That’s because in order to use ACS2 from the tools you first need to specify which namespace you want to use. Click on the “Configure…” link. You get a dialog which asks you for your namespace and for the management key. Why do we ask you for those? Well, the namespace is your development namespace: that’s where you will save the trust settings for your applications. Depending on the size of your company, you might not be the one that manages that namespace; you might not even have access to the portal, and the info about namespaces could be sent to you by one of your administrators. Why do we ask for the management key? As part of the workflow followed by the tool, we must query the namespace itself for info and we must save back your options in it. In order to do that, we need to use the namespace key. As you can see, the tool offers the option of saving the management key: that means that if you always use the same development namespace, you’ll need to go through this experience only once.
As mentioned above, the namespace name and management key could be provided to you by your admin; however let’s assume that your operation is not enormous, and you wear both the dev and the admin hats. Here there’s how to get the management key value from the ACS2 portal. Navigate to, sign in with your Windows Azure credentials, 1) pick the Service Bus, Access Control and Cache area, 2) select access control 3) pick the namespace you want to use for dev purposes and 4) hit the Access Control Service button for getting into the management portal. Once here, pick the management service entry on the left navigation menu; choose “management client”, then “symmetric key”. Once here, copy the key text in the clipboard (beware not to pick up extra blanks!). Now get back to the tool, paste in the values and you’re good to go! As soon as you hit OK, the tool downloads the list of all the identity providers that are configured in your namespace. In the screenshot below you can see that I have all the default ones, plus Facebook which I added in my namespace. If I would have had other identity providers configured (local ADFS2 instances, OpenID providers, etc) I would see them as checkboxes as well. Let’s pick Google and Facebook, then click OK. Depending on the speed of your connection, you’ll see the little donut pictured below informing you that the tools are updating the various settings. As soon as the tool closes, you are done! Hit F5. Surprise surprise, you get straight to the ACS home realm discovery page. Let’s pick Facebook. Here there’s the familiar FB login… …and we are in! What just happened? Leaving the key acquisition out for a minute, let me summarize.
- you went to the tools and picked ACS as provider - You got a list of checkboxes, one for each of the available providers, and you selected the ones you wanted - you hit F5, and your app showed that it is now configured to work with your providers of choice Now, I am biased: however to me this seems very, very simple; definitely simpler than the flow that you had to follow until now. Of course this is just a basic flow: if you need to manage the namespace or do more sophisticated operations the portal or the management API are still the way to go. However now if you just want to take advantage of those features you are no longer forced to learn how to go through the portal. In fact, now dev managers can just give the namespace credentials without necessarily giving access to the Windows Azure portal for all the dev staff. What do you think? We are eager to hear your feedback! Don’t forget to check out the other walkthroughs: the complete series include Using the Local Development STS, manipulating common config settings, connecting with a business STS, get an F5 experience with ACS2.
https://blogs.msdn.microsoft.com/vbertocci/2012/03/15/windows-identity-foundation-tools-for-visual-studio-11-part-iv-get-an-f5-experience-with-acs2/
Gosh it feels like a long time since I’ve blogged – particularly since I’ve blogged anything really C#-language-related. At some point I want to blog about my two CodeMash 2013 sessions (making the C# compiler/team cry, and learning lessons about API design from the Spice Girls) but those will take significant time – so here’s a quick post about object and collection initializers instead. Two interesting little oddities… Is it an object initializer? Is it a collection initializer? No, it’s a syntax error! The first part came out of a real life situation – FakeDateTimeZoneSource, if you want to look at the complete context. Basically, I have a class designed to help test time zone-sensitive code. As ever, I like to create immutable objects, so I have a builder class. That builder class has various properties which we’d like to be able to set, and we’d also like to be able to provide it with the time zones it supports, as simply as possible. For the zones-only use case (where the other properties can just be defaulted) I want to support code like this: new FakeDateTimeZoneSource.Builder { CreateZone("x"), CreateZone("y"), CreateZone("a"), CreateZone("b") }.Build(); (CreateZone is just a method to create an arbitrary time zone with the given name.) To achieve this, I made the Builder implement IEnumerable<DateTimeZone>, and created an Add method. (In this case the IEnumerable<> implementation actually works; in another case I’ve used explicit interface implementation and made the GetEnumerator() method throw NotSupportedException, as it’s really not meant to be called in either case.) So far, so good. The collection initializer worked perfectly as normal. But what about when we want to set some other properties? Without any time zones, that’s fine: new FakeDateTimeZoneSource.Builder { VersionId = "foo" }.Build(); But how could we set VersionId and add some zones?
This doesn’t work: new FakeDateTimeZoneSource.Builder { VersionId = "foo", CreateZone("x"), CreateZone("y") }.Build(); That’s neither a valid object initializer (the second part doesn’t specify a field or property) nor a valid collection initializer (the first part does set a property). In the end, I had to expose an IList<DateTimeZone> property: new FakeDateTimeZoneSource.Builder { VersionId = "foo", Zones = { CreateZone("x"), CreateZone("y") } }.Build(); An alternative would have been to expose a property of type Builder which just returned itself – the same code would have been valid, but it would have been distinctly odd, and allowed some really spurious code. I’m happy with the result in terms of the flexibility for clients – but the class design feels a bit messy, and I wouldn’t have wanted to expose this for the "production" assembly of Noda Time. Describing all of this to a colleague gave rise to the following rather sillier observation… Is it an object initializer? Is it a collection initializer? (Parenthetically speaking…) In a lot of C# code, an assignment expression is just a normal expression. That means there’s potentially room for ambiguity, in exactly the same kind of situation as above – when sometimes we want a collection initializer, and sometimes we want an object initializer. Consider this sample class: using System; using System.Collections; class Weird : IEnumerable { public string Foo { get; set; } private int count; public int Count { get { return count; } } public void Add(string x) { count++; } IEnumerator IEnumerable.GetEnumerator() { throw new NotSupportedException(); } } As you can see, it doesn’t actually remember anything passed to the Add method, but it does remember how many times we’ve called it. Now let’s try using Weird in two ways which only differ in terms of parentheses. First up, no parentheses: string Foo = "x"; Weird weird = new Weird { Foo = "y" }; Console.WriteLine(Foo); // x Console.WriteLine(weird.Foo); // y Console.WriteLine(weird.Count); // 0 Okay, so it’s odd having a local variable called Foo, but we’re basically fine. This is an object initializer, and it’s setting the Foo property within the new Weird instance. Now let’s add a pair of parentheses:
First up, no parentheses: Weird weird = new Weird { Foo = "y" }; Console.WriteLine(Foo); // x Console.WriteLine(weird.Foo); // y Console.WriteLine(weird.Count); // 0 Okay, so it’s odd having a local variable called Foo, but we’re basically fine. This is an object initializer, and it’s setting the Foo property within the new Weird instance. Now let’s add a pair of parentheses: Weird weird = new Weird { (Foo = "y") }; Console.WriteLine(Foo); // y Console.WriteLine(weird.Foo); // Nothing (null) Console.WriteLine(weird.Count); // 1 Just adding those parenthese turn the object initializer into a collection initializer, whose sole item is the result of the assignment operator – which is the value which has now been assigned to Foo. Needless to say, I don’t recommend using this approach in real code… 12 thoughts on “Fun with Object and Collection Initializers” Fun With? (I think you need to go out for Valentines day Jon!) Though I have to admit, interesting contortions with initializers. Seems like one possible solution would be something like this: var invalid = new FakeDateTimeZoneSource.Builder ( versionId: “foo” ) { CreateZone(“x”), CreateZone(“y”) }.Build(); Where you could just support the properties as optional parameters to the constructor. Or you can add items as parameters of the constructor : … var pipo = new Pipo(1, 2, 3) {Poil = “pouet”}; Wow, that’s scary. Thanks for sharing. Wow!!! Never thought of that! @Ben: Yes, that would have been one option, I guess. I’m trying to stay away from C# 4 features for the moment, although admittedly using optional parameters wouldn’t prevent anyone from using it with .NET 3.5… it would just make it a pain for them to set the properties. One way of abusing initializers is this: FakeDateTimeZoneSource.Builder has method Add(DateTimeZone), and also method Add(ISetting). 
Then, you implement string-holder objects such as class VersionId : ISetting { public readonly string value; public VersionId(string value); }, and you can call it as such: var odd = new FakeDateTimeZoneSource.Builder { VersionId(“foo”), CreateZone(“x”), CreateZone(“y”) }.Build(); It doesn’t have to have an ISetting interface – it can be a specific overload for every setting if that’s easier to write (e.g. if there’s only one or two possible settings other than the zone list). Can we apply same technique in Java? @Vidhyut: No – Java doesn’t have object initializers, collection initializers or properties, so none of this applies. Interesting. I thought that you were saying weird.Foo was somehow being set to null in your last example (the ‘result’ of assigning a value to the local variable). But when I gave Weird.Foo a default value of “a” it remains as “a”, which is what you were saying all along! That first example of “incidental” Property initialization by use of the assignment operator, while creating an instance of an IEnumerable was surprising to me: Weird weird = new Weird { Foo = “y” }; It wasn’t surprising I couldn’t access the internal collection of ‘weird, since there’s no “real” GetEnumerator implemented. So, with the second call you have created a collection with 1 member, but you can’t access it ! I am not sure what I am supposed to learn from that :) What would seem to make sense to me would be that anytime I create a Class meant to be an Enumerator, that maintained some Public state Properties, which might be initialized as instances of the Class are created, that I would put a constructor in the Class to handle initialization of those Properties. 
public Weird(string foo){Foo = foo;} Then I can easily create a new instance in which the initialization of state variables, and the creation of internal Enumerable elements, is semantically distinct: Weird weird = new Weird(foo:”some string”){“1″,”2″}; Or, create an instance with state variable set, but internal collection empty: Weird weird = new Weird(foo:”some string”); That does “lock me into” having to provide an argument(s) to create an instance of ‘Weird, unless I throw in an optional parameterless calling form: public Weird(){} thanks, Bill I realize this is an old post, but I recently stumbled onto the way collection initializers work and found a neat use for them. Consider the following code: string whereText = new WhereBuilder(WhereBehavior.OmitNull | WhereBehavior.OmitEmpty | WhereBehavior.JoinAnd) { { “Field1”, field1Value }, { “Field2”, field2Value }, { “Field3″, field3Value }, $”Field4 BETWEEN {SqlValue(field4StartValue)} AND {SqlValue(field4EndValue)}”, otherTermsCollection }; A codebase I inherited is big on SQL generation, and this WhereBuilder class has made the code so much easier to navigate. No more of this craziness: List terms = new List(); if(!String.IsNullOrEmpty(field1Value)) terms.Add($”Field1 = {SqlValue(field1Value)}”); if(field2Value != null) terms.Add($”Field2 = {SqlValue(field2Value)}”); terms.Add($”Field4 BETWEEN {SqlValue(field4StartValue)} AND {SqlValue(field4EndValue)}”); terms.AddRange(otherTermsCollection); string whereText = terms.Length == 0 ? “” : $”WHERE {string.Join(” AND “, terms.Select(term => $”({term})”))}”; Some may consider it abuse (especially with the implicit conversion to string), but to me it makes the code far more readable. This is as close as anything I’ve seen to computation expressions in C#. I’ve already gone ahead and created an ArrayBuilder and ListBuilder to avoid screwing around with AddRange, Union, SelectMany, ToArray, and whatnot. 
Total control of order, and as efficient as if I’d hand-written the code bringing the elements together in a single collection/array.
https://codeblog.jonskeet.uk/2013/02/14/fun-with-object-and-collection-initializers/?like_comment=13474&_wpnonce=0faddb8395
Today we are excited to announce several updates and enhancements made to the Windows Azure AppFabric Management Portal and the Access Control service. Management Portal Localization As announced in the blog post on the Windows Azure team blog, Windows Azure Platform Management Portal Updates Now Available, the Windows Azure Platform Management Portal now supports localization in 11 languages. You can read about additional enhancements in the blog post, Windows Azure Platform Management Portal Updates Now Available. Co-admin support Customers can grant access to additional users (Co-Administrators) on the Windows Azure Management Portal as documented here: How to Setup Multiple Administrator Accounts. These Co-administrators will now have access to the AppFabric section of the portal. For any questions or feedback regarding the Management Portal please visit the Managing Services on the Windows Azure Platform forum. Access Control The following updates have been made to all ACS 2.0 namespaces. Rules now support up to two input claims The ACS 2.0 rules engine now supports a new type of rule that allows up to two input claims to be configured, instead of only one input claim. Rules with two input claims can be used to reduce the overall number of rules required to perform complex user authorization functions. For more information on rules with two input claims, see. Encoding is now UTF-8 for all OAuth 2.0 responses In the initial release of ACS 2.0, the character encoding set for all HTTP responses from the OAuth 2.0 endpoint was US-ASCII. In the July 2011 release, the character encoding of HTTP responses is now set to UTF-8 to support extended character sets. Quotas Removed The previous quotas on configuration data have been removed in this release.
This includes removal of all limitations on the number of identity providers, relying party applications, rule groups, rules, service identities, claim types, delegation records, issuers, keys, and addresses that can be created in a given ACS namespace. Please use the following resources to learn more about this release: For any questions or feedback regarding the Access Control service please visit the Security for the Windows Azure Platform forum. If you have not signed up for Windows Azure AppFabric and would like to start using these new capabilities, be sure to take advantage of our free trial offer. Just click here and get started today!
https://azure.microsoft.com/es-es/blog/announcing-the-windows-azure-appfabric-july-release/
import "golang.org/x/tools/internal/span"

Package span contains support for representing positions and ranges in text files.

parse.go span.go token.go token112.go uri.go utf16.go

Invalid is a span that reports false from IsValid.

ToUTF16Column calculates the UTF-16 column expressed by the point, given the supplied file contents. This is used to convert from the native (always in bytes) column representation to the UTF-16 counts used by some editors.

type Converter interface {
    // ToPosition converts from an offset to a line:column pair.
    ToPosition(offset int) (int, int, error)
    // ToOffset converts from a line:column pair to an offset.
    ToOffset(line, col int) (int, error)
}

Converter is the interface to an object that can convert between line:column and offset forms for a single file.

Point represents a single point within a file. In general this should only be used as part of a Span, as on its own it does not carry enough information.

FromUTF16Column advances the point by the UTF-16 character offset, given the supplied line contents. This is used to convert from the UTF-16 counts used by some editors to the native (always in bytes) column representation.

Range represents a source code range in token.Pos form. It also carries the FileSet that produced the positions, so that it is self-contained.

NewRange creates a new Range from a FileSet and two positions. To represent a point, pass 0 as the end position.

IsPoint returns true if the range represents a single point.

Span converts a Range to a Span that represents the Range. It will fill in all the members of the Span, calculating the line and column information.

Span represents a source code range in standardized form.

Parse returns the location represented by the input. Only file paths are accepted, not URIs. The returned span will be normalized, and thus if printed may produce a different string.

Format implements fmt.Formatter to print the Location in a standard form. The format produced is one that can be read back in using Parse.

func (s Span) Range(converter *TokenConverter) (Range, error)

Range converts a Span to a Range that represents the Span for the supplied File.

TokenConverter is a Converter backed by a token file set and file. It uses the file set methods to work out the conversions, which makes it fast and does not require the file contents.

func NewContentConverter(filename string, content []byte) *TokenConverter

NewContentConverter returns an implementation of Converter for the given file content.

NewTokenConverter returns an implementation of Converter backed by a token.File.

func (l *TokenConverter) ToOffset(line, col int) (int, error)

URI represents the full URI for a file.

URIFromPath returns a span URI for the supplied file path. It will always have the file scheme.

Filename returns the file path for the given URI. It is an error to call this on a URI that is not a valid filename.

Package span imports 13 packages and is imported by 38 packages.
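Since the package is internal and cannot be imported directly, here is a minimal self-contained sketch of the byte-offset-to-UTF-16-column conversion that ToUTF16Column performs. The function name and details are illustrative, not the package's actual implementation:

```go
package main

import (
	"fmt"
	"unicode/utf16"
	"unicode/utf8"
)

// utf16Column returns the 1-based UTF-16 column for a 0-based byte
// offset within a single line of text. Each rune before the offset
// contributes 1 or 2 UTF-16 code units (2 for runes outside the BMP).
func utf16Column(line []byte, byteOffset int) int {
	col := 1
	for i := 0; i < byteOffset && i < len(line); {
		r, size := utf8.DecodeRune(line[i:])
		col += len(utf16.Encode([]rune{r}))
		i += size
	}
	return col
}

func main() {
	// "héllo" — 'é' is 2 bytes in UTF-8 but only 1 UTF-16 code unit.
	line := []byte("héllo")
	fmt.Println(utf16Column(line, 0)) // column of 'h' -> 1
	fmt.Println(utf16Column(line, 3)) // column of first 'l' -> 3
	// "𝔘x" — '𝔘' is 4 bytes in UTF-8 and 2 UTF-16 code units.
	fmt.Println(utf16Column([]byte("𝔘x"), 4)) // column of 'x' -> 3
}
```

This also shows why the two counts diverge: editors that speak UTF-16 (such as those using LSP's default position encoding) count a character like '𝔘' as two columns, while the byte representation counts four.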
https://godoc.org/golang.org/x/tools/internal/span
So, I just finished up the last touches, and am ready to release the beta. Screenshots of the examples: collegro_types.png, collegro_ani.png

So far, collegro supports testing between bounding boxes, bounding circles, and bit masks. You can grab a lot of the data from allegro BITMAPs, saving programmers a little bit of time. You can also draw the various collision objects to an allegro BITMAP, including bit masks. It supports custom bit masks as well. I added some quickie functions for testing if two allegro BITMAPs collide, for example... I'm sure this function will be well liked:

int clgo_collide_bitmaps (BITMAP *bitmap_a, int blit_x_a, int blit_y_a, BITMAP *bitmap_b, int blit_x_b, int blit_y_b)

The library is C compatible, fully documented, and written inside a single file so you can add it to your projects really easily. You can read the documentation at: And you can download collegro at: Pre-compiled Windows binaries are available.

Before releasing 1.0, I want to have collegro tested a bit more, so please download it and use it. I also want to write more example code and how-tos for the documentation based on questions I receive. I also want to include a pre-compiled MSVC library and a linux makefile. But for right now, it should be in a usable state, so have fun. =)

On a side note: I should probably add this to the depot. Should I submit it under the utilities section, or will Matthew add it to the libraries section?

EDIT: Post suggestions / bugs / comments below or on my website's message board.
The question that most people will ask: How fast is it?

42

How fast is it? On a scale of 1 - 10, 10 being totally optimised, I'd say, 8ish. So it's pretty fast, but not so fast as to make the code unreadable.

EDIT: To clarify... For example, it only tests the bits inside an overlapping rectangle of a bit mask and the other collision object, so it won't test any bits in a bit mask that can't possibly collide. I also read allegro BITMAPs directly to grab the data for the bit masks. And obviously, bounding boxes and bounding circles are very fast. However, on the slower side of things, I added some safety code to help make sure user errors won't cause the program to crash, and I store one bit of a bit mask inside each unsigned char, as opposed to packing a bunch of them into an 8 bit variable. This way, the bit masks are much easier to work with, and the bit masks can store more than 2 values if the user needs them to.

Well, that seems fast enough to me. When I need something more than bounding boxes, I'll be sure to use your library. Congratulations.

You still might want to use collegro if you just want to use bounding boxes at first, because it'll probably be faster to set up, comes with debugging tools (you can draw the bounding box to the screen), and can test between bounding circles and bit masks later on, if you need it to. =P Besides, I need people to beta test it.

EDIT: Found one bug involving unsigned ints trying to be ints. Already fixed; the fix will be included in the next release. Whenever there's...

int overlapping_box_width;
int overlapping_box_height;

It should be...

unsigned int overlapping_box_width;
unsigned int overlapping_box_height;

That's why it's beta.

Congratulations! When I start being productive again I'll most definitely check it out (source included).
The only collision detection library I've used so far is PPCol. It seemed pretty good, but I never did anything serious with it. Perhaps I'll use Collegro instead now (assuming it's better/faster).

Nice work! One suggestion: for checking between two rectangles you use: but wouldn't this be a little faster and more compact? Not sure, just asking... (untested code) Also, you could declare all local variables that are calculated only once and never changed as constants. It might help the compiler to better optimize the code.

Thanks! =) I am in the middle of finals, but I am totally going to go through the code and make a bunch of minor optimizations. I just wanted to get the beta out, then optimize for the next release. =P Actually, another minor optimization I'm going to pull off is changing the array code so that I increment a pointer instead of using array subscripts all the time. Example: peanut[5] VS. *(peanut + size_peanut * 5). Of course, for traversing the array, I'd just be incrementing "peanut," so the code would look more like "peanut = peanut + size_peanut," then de-reference that pointer a bunch of times. It's a beta; there are a bunch of minor optimizations left to do, and I appreciate your suggestions. Still, the big O of the algorithms shouldn't change.

EDIT: FMC, your code is faster, thank you. =)

EDIT 2: Also, do you think I should disable NULL testing for pointers? The biggest desire I think people have is speed. It would shave off a comparison in each function, but could possibly cause crashes if the user passes a NULL pointer (which they shouldn't do in the first place?)

EDIT 3: Another optimization idea: I use unsigned chars to store 1s and 0s in a bit mask. If I use ints, it'll use 4 - 8 times as much space, but could be quicker. Think it's worth it?

peanut[5] VS. *(peanut + size_peanut * 5).
Of course, for traversing the array, I'd just be incrementing "peanut," so the code would look more like "peanut = peanut + size_peanut," then de-reference that pointer a bunch of times.

Depending on how you do it, the compiler may do that automatically for you. But be careful:

Foo *bar;
++bar; // <- increases the pointer by sizeof(*bar), not 1!

I store one bit of a bit mask inside each unsigned char

A bit field is faster and uses eight times less space than storing it in an unsigned char. I accept your opinion about multiple values, but I would also implement a bit field. And another note: what about polygon collisions?

Whoah, that must be the hugest function I've seen to perform a bounding box collision... You can do it in 2 lines (4 tests and 4 sums) as I did in the other collision thread.

Flad: That's the second thing. I couldn't sleep very well last night, but that gave me some time to think about how to optimize collegro. Currently, the only collision tests that do not run in constant time are, of course, the bit mask tests. To answer your question of why not bit-pack all the bits into an unsigned char, at least for right now...

1 - Really complicated code. I may be able to test 8 bits at once, but I may need to add several more operations and tests to align the bits, making the increase in speed on certain tests marginal. However, the code would come close to unreadable.

2 - There are a lot of functions that need to grab one bit at a time. For those functions, storing each bit separately is actually faster.

3 - The horizontal / vertical flipping flags would be difficult and slower to implement using bit fields.
I would have to rebuild and store horizontally, vertically, and horizontally/vertically flipped bit masks in memory all at the same time, at the very least.

4 - Optimizations like that come last, after you get the code working. =P

But here are some ideas that could make some significant speed increases...

1) Using MMX, SSE, SSE2, etc. to test multiple sets of data at once. My only question would be how portable the code can be. I don't want to force the user to set flags and stuff for different systems.

2) Instead of packing into unsigned chars, pack into unsigned ints. Why test 8 bits at once, when we can test 32 / 64 at once, using the data size the processor likes? The only drawback... very complicated code.

Whoah, that must be the hugest function I've seen to perform a bounding box collision... You can do it in 2 lines (4 tests and 4 sums) as I did in the other collision thread

It's long, but it's not slow, just a lot of if/else statements. =P I was mainly concerned with writing very understandable code at the time I wrote it. I already planned on compressing it into a shorter amount of code for the next release anyways. =)

2) Instead of packing into unsigned chars, pack into unsigned ints. Why test 8 bits at once, when we can test 32 / 64 at once, using the data size the processor likes?

That's what PPCol and other libraries like that do. Personally I like collision polygons best; you can even generate those from images, so there's no extra work for the developer (see my thread about OL's new features).

You can access bits like this:

char *bit_field;
int n; // how many bits you want
bit_field = (char *)malloc((n/8)*sizeof(char) + 1); // divide by 8 to get the number of chars, and add one so the buffer is big enough when n is not a multiple of 8
// then a single bit can be accessed like this
bit_field[i>>3] /* and some bit operation - reading what's there, setting a bit to 0/1 */

collision polygons

Funny you should mention that.
I was going to add that later on, after I finish up with the current tests. =P

EDIT: Thanks, OICW. That was certainly random, but informative, I think.

EDIT 2: What I decided to do for the release after next (the next release will be just code cleanup and a few bug fixes) is keep the unsigned char bit mask for quick single pixel access, but add four integer bit masks for collision testing.

EDIT 3: I just want to remind everyone that I know what can be fixed up to go faster. Remember, collegro is UNFINISHED. The whole idea of releasing this beta was to get user feedback on features / suggestions. So, I appreciate suggestions and feedback regarding the api / features / etc, but criticizing unfinished code for not being optimized is just telling me something I already know.

EDIT 4: Since people have been complaining about it, I just wanted to post the final collision box code for now. It should be pretty close to what FMC posted earlier.

EDIT 5: Phew! Just got through optimizing the code. I also fixed some bugs regarding 15 bit image masks. There are a couple of functions I want to make inline now, but I have no idea how to do that in standard C. (And no, (static / extern / nothing) inline void func(void) { ... } doesn't seem to work.) Anyone have any ideas other than using a macro? I'm going to add some distancing functions to collegro, write an example of how to find the distance between a bounding box and/or a bounding circle, and upload version 0.81 by Friday.

XP uses Collegro.

[edit] Found a bug while using collegro too. Not a major one, but your code doesn't play nice with C++. Adding extern "C" around the #include of collegro.h helped. This is necessary, as collegro.c doesn't compile as C++. You may want to add the #ifdef __cplusplus extern "C" { etc. code to collegro.h.

EDIT 4: Since people have been complaining about it, I just wanted to post the final collision box code for now. It should be pretty close to what FMC posted earlier.
You could still do that all with 4 comparisons and 4 sums...

PS. You need a collision library for pong?!

I didn't need a collision detection library... but I decided, due to the weird shape of the paddles (candy canes), that it could benefit from pixel perfect collision detection. Then I remembered Collegro, so I decided to try that out, and it worked well, so I kept it.

XP uses Collegro.

Excellent. Already added and ready for the next release (which will be shortly after I finish my essay.)

if(())) return 1; return 0;

You know, if you just assume that the function returns true (non-zero) on collision, you can remove an entire branch by doing this instead:

return ());

I have to say though, it looks pretty good (haven't downloaded it yet myself though...)

True true true! Thanks cammus. Committed.

1. Speed isn't really important in pair-based collision detection. Higher-level factors tend to dominate in the real world, like culling possible collision pairs.

2. Here's the bounding-box collision detection from PPCol:

#define check_bb_collision_general(x1,y1,w1,h1,x2,y2,w2,h2) \
    (!( ((x1)>=(x2)+(w2)) || ((x2)>=(x1)+(w1)) || \
        ((y1)>=(y2)+(h2)) || ((y2)>=(y1)+(h1)) ))

Here's the same from PMASK:

#define check_bb_collision_general(x1,y1,w1,h1,x2,y2,w2,h2) (!( ((x1)>=(x2)+(w2)) || ((x2)>=(x1)+(w1)) || ((y1)>=(y2)+(h2)) || ((y2)>=(y1)+(h1)) ))

edit: the forum code seems to be truncating that line, but it's basically identical to PPCol's, except single-line instead of a multiline macro, probably because multiline macros sometimes get bugged when moving code between platforms with different carriage returns (i.e. dos2unix etc.). (Possibly I stole that one from PPCol?) Anyway, both have 4 comparisons, 4 adds, and 3 logical boolean operators.
3. For pixel-perfect, I expect bitwise is significantly faster than per-pixel logic, and I suspect that benchmarks under-estimate the real-world performance difference, since bitwise-based methods use less memory and thus should play nicer when they have to share resources with AI and game logic etc.

4. I have some code lying around that tests a bunch of pixel-perfect collision libraries against each other for correctness and speed. If you want, I could clean the code slightly and post it here. I've added Collegro to it. It now tests PMASK, PPCol, Ebox, Ebox2, Molly, Bitmask, and Collegro. PMASK, PPCol, and Bitmask are based upon packed bits and bitwise operations. Molly is based upon Minkowski sums, and Ebox/Ebox2 are based upon a hierarchical data structure.

Collegro passes the correctness test, along with PMASK, PPCol, and Bitmask. The others fail. PMASK, PPCol, and Bitmask tend to have very similar performance, since they are all based upon bitwise operations. Collegro outperforms the bitwise-based libraries by up to a factor of 2 or so when the bitmaps are very small (2x2). At 60x60, the bitmask-based libs tend to outperform Collegro by a factor of 3 or 4. At 100x100 the difference rises to a factor of 15 or so. The sprite size at which the bitwise-based libs and Collegro are equal in performance varies greatly based upon the exact settings used (in terms of sprite shapes, what percentage of the collision checks are overlapping and by how much, etc.), but is generally between 15x15 and 40x40. Even the slowest of the libraries is still plenty fast enough for most any sane purpose - as I said earlier, performance is not as important as people think it is for pair-based collision functions.

You know, if you just assume that the function returns true (non-zero) on collision, you can remove an entire branch by doing this instead:

Code readability? I mean, you could even just use bb_a->x instead of bb_a_x, and put the whole thing only in the return; no need to declare variables.
https://www.allegro.cc/forums/thread/550371/0
In today’s Programming Praxis exercise, our goal is to calculate and show the gas mileage given an input file containing the total amount of miles and the amount of fuel bought at each fuel stop. Let’s get started, shall we?

Some imports:

import Text.Printf
import Text.XFormat.Read

Normally I’d use Parsec for something like this (and indeed I did in my first attempt), but the fact that parsing numbers requires token parsers makes the program less elegant. Instead, I used the XFormat package, which in this case produces cleaner code.

line :: String -> (Float, Float)
line s = (m,g) where Just (m,_,g) = readf (Float, Space, Float) s

Once we’ve read in the numbers, calculating the mileage is a simple matter of taking the difference between each pair of mile totals and dividing it by the fuel amount. We use printf to format everything.

showLog :: [(Float, Float)] -> [String]
showLog es = "Miles Gals Avg" :
             "------ ---- ----" :
             zipWith (\(m2,g) (m1,_) -> printf "%.0f %.1f %.1f" m2 g ((m2 - m1) / g))
                     (tail es) es

All that’s left to do is route the input through the functions above and output the result.

main :: IO ()
main = mapM_ putStrLn . showLog . map line . lines =<< readFile "input.txt"

Tags: bonsai, code, Haskell, kata, mileage, praxis, programming
https://bonsaicode.wordpress.com/2012/11/02/programming-praxis-gasoline-mileage-log/
import "github.com/pace/bricks/maintenance/health/servicehealthcheck"

connection_state.go health_handler.go healthchecker.go readable_health_handler.go

HealthHandler returns the health endpoint for transactional processing. This handler only checks the required health checks and returns ERR and 503, or OK and 200.

ReadableHealthHandler returns the health endpoint with all details about service health. This handler checks all health checks. The response body contains two tables (for required and optional health checks) with the detailed results of the health checks.

func RegisterHealthCheck(name string, hc HealthCheck)

RegisterHealthCheck registers a required HealthCheck. The name must be unique. If the health check satisfies the Initializable interface, it is initialized before it is added. It is not possible to add a health check with the same name twice, even if one is required and one is optional.

func RegisterHealthCheckFunc(name string, f HealthCheckFunc)

RegisterHealthCheckFunc registers a required HealthCheck. The name must be unique. It is not possible to add a health check with the same name twice, even if one is required and one is optional.

func RegisterOptionalHealthCheck(hc HealthCheck, name string)

RegisterOptionalHealthCheck registers a HealthCheck like RegisterHealthCheck(hc HealthCheck, name string), but the health check is only checked for /health/check and not for /health/.

ConnectionState caches the result of health checks. It is concurrency-safe.

func (cs *ConnectionState) GetState() HealthCheckResult

GetState returns the current state, that is, whether the check is healthy or an error occurred.

func (cs *ConnectionState) LastChecked() time.Time

LastChecked returns the time that the state was last updated or confirmed.

func (cs *ConnectionState) SetErrorState(err error)

SetErrorState sets the state to not healthy.

func (cs *ConnectionState) SetHealthy()

SetHealthy sets the state to healthy.
type HealthCheck interface {
    HealthCheck(ctx context.Context) HealthCheckResult
}

HealthCheck is a health check that is registered once and that is performed periodically and/or spontaneously.

type HealthCheckFunc func(ctx context.Context) HealthCheckResult

func (hcf HealthCheckFunc) HealthCheck(ctx context.Context) HealthCheckResult

type HealthCheckResult struct {
    State HealthState
    Msg   string
}

HealthCheckResult describes the result of a health check. It contains the state of a service and a message that describes the state. If the state is Ok, the description can be empty. The description should contain the error message if any error or warning occurred during the health check.

HealthState describes whether any error or warning occurred during the health check of a service.

const (
    // Err State of a service, if an error occurred during the health check of the service
    Err HealthState = "ERR"
    // Warn State of a service, if a warning occurred during the health check of the service
    Warn HealthState = "WARN"
    // Ok State of a service, if no warning or error occurred during the health check of the service
    Ok HealthState = "OK"
)

Initializable is used to mark that a health check needs to be initialized.

Package servicehealthcheck imports 10 packages and is imported by 5 packages.
https://godoc.org/github.com/pace/bricks/maintenance/health/servicehealthcheck
Django miscellany

My colleagues got hit by this issue the other day. It's a definite annoyance, and just for the record here's my version, just on the off chance that os.curdir is different.

import os

this = os.path.dirname(os.path.abspath(__file__))

TEMPLATE_DIRS = (
    "%s/jobs/templates" % this,
)

On other things, I integrated django-openid into a Django site the other day, and it's really nice. It worked really well, although I do have to find a way of altering the templates to fit nicely into my Page Templated site. I do have to put some work into the login screen. I have to explain the OpenID login, without making it too daunting, and provide an easy way to create an account.

Finally, I had a quick play with Google Maps. Want to show a map based on a UK postcode? How about:

var postcode = "L7 9NJ";
if (postcode != "") {
    if (GBrowserIsCompatible()) {
        var map = new GMap2(mapnode);
        var lookup = new GClientGeocoder();
        lookup.setBaseCountryCode('uk');
        map.addControl(new GSmallMapControl());
        lookup.getLatLng(postcode, function(point){
            map.setCenter(point, 12);
        });
    }
}

Yay. Note that the country code has to be set with setBaseCountryCode as 'uk', not '.co.uk' or 'gb'. Otherwise it centres on Germany, not Liverpool.
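As a side note on the settings snippet above: the same directory can be built with os.path.join, which avoids hard-coding the "/" separator. This is an alternative sketch, not the post's code; in a real Django settings module `__file__` is defined automatically, and here the script demonstrates with its own path.

```python
import os

# Directory containing this file, independent of the current working
# directory (the same trick the post uses).
this = os.path.dirname(os.path.abspath(__file__))

# os.path.join builds the path portably instead of string interpolation.
TEMPLATE_DIRS = (
    os.path.join(this, "jobs", "templates"),
)

print(TEMPLATE_DIRS[0])
```

Either spelling fixes the underlying annoyance: relative paths in TEMPLATE_DIRS break as soon as the server is started from a different working directory.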
http://agmweb.ca/2008-05-02-django-miscellany/
packet - packet interface on device level

Synopsis

#include <sys/socket.h>
#include <netpacket/packet.h>
#include <net/ethernet.h> /* the L2 protocols */

packet_socket = socket(AF_PACKET, int socket_type, int protocol);

Address Types

The sockaddr_ll is a device-independent physical-layer address:

struct sockaddr_ll {
    unsigned short sll_family;   /* Always AF_PACKET */
    unsigned short sll_protocol; /* Physical-layer protocol */
    int            sll_ifindex;  /* Interface number */
    unsigned short sll_hatype;   /* ARP hardware type */
    unsigned char  sll_pkttype;  /* Packet type */
    unsigned char  sll_halen;    /* Length of address */
    unsigned char  sll_addr[8];  /* Physical-layer address */
};

sll_protocol is the standard ethernet protocol type in network byte order as defined in the <linux/if_ether.h> include file. It defaults to the socket's protocol.

Socket Options

Packet sockets can be used to configure physical-layer multicasting and promiscuous mode. This works by calling setsockopt(2) on a packet socket.

Ioctls

SIOCGSTAMP can be used to receive the timestamp of the last received packet. The argument is a struct timeval. In addition, all standard ioctls defined in netdevice(7) and socket(7) are valid on packet sockets.

Error Handling

Packet sockets do no error handling other than for errors that occurred while passing the packet to the device driver. They don't have the concept of a pending error.

Versions

AF_PACKET is a new feature in Linux 2.2. Earlier Linux versions supported only the SOCK_PACKET interface, which doesn't provide physical-layer independence.

Notes

glibc 2.1 does not have a define for SOL_PACKET. The suggested workaround is to use:

#ifndef SOL_PACKET
#define SOL_PACKET 263
#endif

This is fixed in later glibc versions and also does not occur on libc5 systems.

See Also

socket(2), pcap(3), capabilities(7), ip(7), raw(7), socket(7)

RFC 894 for the standard IP Ethernet encapsulation. RFC 1700 for the IEEE 802.3 IP encapsulation. The <linux/if_ether.h> include file for physical layer protocols.

Colophon

This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.sgvulcan.com/packet.7.php
I was looking into the BitSet class for Java, which was suggested to me in an earlier post; I thank that person for such a helpful suggestion. This seems to be what I may be interested in. I found at least three helpful links that showed me how to create a BitSet, which also showed how to look at each bit by using get() to see if it is true for a 1 bit or false for a 0 bit. This seems to be what I am interested in, but it's hard to find information on the BitSet class. What I wanted to find was how to open any file as a BitSet and read the bits as booleans, such as opening a text file, JPEG file, etc.

Why? Very little is done bit by bit. 99.9999% of processing is at the byte level.

Well, I am interested in that .001%, and if someone could answer my question that would be very helpful. Or if someone could recommend a link or book that will answer my question, that would too. Thank you. Perhaps nspills has a suggestion, since he recommended the BitSet class.

I mean .0001%.

Do you need an existing class, or could you create your own? If you read a file as bytes, you can then use the various operators to access individual bits. For example, you can AND a byte with a mask to test if a bit is on:

byte b = ???
if ((b & 0x80) != 0) // test if high order bit is set
(b & 0x40) != 0 // next lower one
...
(b & 0x01) != 0 // low order bit

nspills suggested the BitSet class, which will return whatever bit you are referring to in a BitSet as a boolean: false for 0 and true for 1. I think this is how it works: you use get(int i), where i is the nth bit in the set. However, if you could explain how I could read every bit in a file one by one, that would be helpful. I have seen what you are talking about on other sites, but I don't understand the high order or the low order. I am not sure what that means exactly. However, what I want to do is read every bit sequentially.
Anywho, if you want to know what I am talking about with the Java BitSet class, look at this link: Just scroll down until you get to the bold heading called The BitSet Class.

If you read a file as bytes, you will be reading the bits 8 at a time. If the bits in the file are to be numbered from 0 (the first one) to n (the last one), then you could use some code like the following to return the nth bit. This is off the top of my head, so it probably won't compile, but it should give you an idea.

Assign numbers to the bits of a byte: 01234567

Create masks to test each of these bits:

final byte Byte0 = (byte)0x80;
...
final byte Byte7 = (byte)0x01;

Put these in an array of masks:

final byte[] Masks = new byte[]{Byte0, ... Byte7};

This will be indexed to get the mask to be used below.

Read all the bytes of the file into an array of bytes:

byte[] allBytes;

The first eight bits of the file will be in the first byte (0th), the next 8 in the second byte, and so on. To test a specific bit, say:

int bitNo = 9;

First find which byte to test by dividing the bit number by 8. For example, the 9th bit would be in the second byte. Remember, it's 0 based.

int byteNbr = bitNo / 8; // = 1

Find which bit to test by using modulo 8: the 9th bit would be bit number 1.

int bitNbr = bitNo % 8; // = 1

Now test the bit:

(allBytes[byteNbr] & Masks[bitNbr]) != 0

I found this code on the net. Supposedly it allows you to read a byte bit by bit, but I don't understand how to read it.
Would it allow me to read a file as a sequence of bits all the way up to the end of the file?

public class Bits {
    public static void main(String[] args) {
        byte b;
        String bs;
        b = Byte.parseByte(args[0]);
        bs = Integer.toBinaryString(b);
        if (bs.length() < 8) {
            bs = "0000000" + bs;
        }
        bs = bs.substring(bs.length() - 8);
        System.out.println(bs);
        System.out.print(b + " = ");
        for (int i = 0; i < 8; ++i) {
            if ((b & 0x80) != 0) {
                System.out.print("1 ");
            } else {
                System.out.print("0 ");
            }
            b <<= 1;
        }
        System.out.println();
    } // end main
}

Oh wait, I just saw Norm's post.

Oh, I completely get what you are saying. I just looked at hexadecimal again; you AND the first bit with 0x80, the second with 0x40, the third with 0x20, the fourth with 0x10, the fifth with 0x08, and so on. You load up each byte and determine the sequence. I completely understand, but there's just one thing, which is that I am new to programming and don't know how to write such code. Is there a book with example code and detailed explanations, or a link with such information? You see, it is really hard to find information about this stuff on the net.

Oh yeah, one more thing: couldn't you read by four bits? Can't you create hexadecimal numbers with four bits? I also remember coming across masking four bits on the net if you declare what you are masking as an integer. Anywho, thank you Norm, that was real helpful; the lightbulb really went on.
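Putting the mask approach from the replies together, here is a self-contained sketch that reads a file's bytes and extracts bits one at a time. The temp file and sample byte are invented for the demo:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileBits {
    // Masks for bits 0..7, high-order bit first (0x80 down to 0x01).
    static final int[] MASKS = {0x80, 0x40, 0x20, 0x10, 0x08, 0x04, 0x02, 0x01};

    // Returns the nth bit of the byte array, counting from bit 0
    // (the high-order bit of the first byte), as 0 or 1.
    static int bitAt(byte[] data, int bitNo) {
        int byteNbr = bitNo / 8;  // which byte holds the bit
        int bitNbr  = bitNo % 8;  // which bit inside that byte
        return (data[byteNbr] & MASKS[bitNbr]) != 0 ? 1 : 0;
    }

    public static void main(String[] args) throws IOException {
        // 'A' is 0x41 = 01000001 in binary.
        Path tmp = Files.createTempFile("bits", ".bin");
        Files.write(tmp, new byte[]{0x41});
        byte[] data = Files.readAllBytes(tmp);

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < data.length * 8; i++) {
            sb.append(bitAt(data, i));
        }
        System.out.println(sb); // 01000001
        Files.delete(tmp);
    }
}
```

The loop bound data.length * 8 is the answer to the original question: reading "every bit sequentially" is just walking bit numbers 0 through 8n-1 over the n bytes of the file.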
http://forums.devx.com/showthread.php?147412-help-with-bitset-class&p=438865
So how can we locate controls in Windows Forms applications? First of all, you need to search for controls by using the search control classes specific to Windows Forms applications. You will find these in the Microsoft.VisualStudio.TestTools.UITesting.WinControls namespace. You use the search properties on the control to define how Coded UI needs to search for the control.

So let's have a look at how we can locate different UI controls, based on the following example application. Let's start with a search for the button shown in the form below. First of all, we need to know what the properties of this button are. If you look at how we develop Windows Forms applications, it is very common that the developer provides a name for the control in code, because they need to access it in code. You also see this button has the title "Click me!" The properties for this button look as follows:

If you want to search for this control in a maintainable way, you always try to avoid taking any dependencies on the look and feel of the control you search for. So taking a dependency on the text on the button is generally not a smart thing to do. This would also cause issues when you have e.g. multi-language versions of your application. So we want to find the control, preferably by the name the developer gave the control, since that name will probably not change. The reason is that this name is also used in code to refer to the control. So we want to find the control based on the programmatic name. The question is: is this possible?

To identify how to search for a control, you can use the test recorder tool to inspect the properties of the control when we run the application. The inspection of the control looks as shown in this screenshot on the side. Here you can see that we can search on the programmatic name of the control by using the search property Name of the control.
So based on this information you would expect to be able to find the control with the following code. But when you run this test you will see the test fails and is unable to locate the control.

It took me a while to figure out, but apparently every control can be found by first searching for a WinWindow control that has the name of the control, and then searching for the control within that window. Normally I use a tool called Spy++ to find controls in the UI. This tool has been in the C++ SDK for years and is something I have always used. But when you look with Spy++ you will see the control hierarchy is the same as you would expect from the UI perspective.

I got some help from Abhitej John, a software engineer at Microsoft who works on the Coded UI tools. When I told him that I did not understand the hierarchy of the controls, he told me the following:

"Coded UI for Windows Forms works on IAccessible objects. The hierarchy that is defined by this native element (in Coded UI terms) is what we use to search for controls. The tool that you would use in comparison to look at the hierarchy is "inspect" in MSAA mode. Spy++ is a UIA based tool that identifies WPF controls. Hence you see that difference in hierarchy."

So I used the tool he suggested, which I could find on my machine at the following location:

<install drive>:\Program Files (x86)\Windows Kits\8.1\bin\x86\Inspect.exe

When looking at the form using this tool you can see that, when you set it in MSAA mode, the hierarchy is different, and that is the reason we need to use a slightly different approach to find our controls. So that looks as follows for e.g. the button:

This works for all the controls in the same way. You always search for a WinWindow that has the name of the control; then, inside that window, you search for the control of the type you expect.
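As a rough sketch of the WinWindow-first approach described above (the control name "btnClickMe" and the launch path are assumptions for illustration, not the article's actual sample):

```csharp
// Hedged sketch: find a WinForms button via its containing WinWindow.
// "btnClickMe" and the exe path are hypothetical names.
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;

var app = ApplicationUnderTest.Launch(@"C:\Demos\WinFormsApp.exe");

// Step 1: search for a WinWindow carrying the control's programmatic name.
var buttonWindow = new WinWindow(app);
buttonWindow.SearchProperties.Add(WinWindow.PropertyNames.ControlName, "btnClickMe");

// Step 2: inside that window, search for the control of the expected type.
var button = new WinButton(buttonWindow);
Mouse.Click(button);
```

Note that Coded UI resolves the search lazily, so the actual lookup only happens when Mouse.Click (or another interaction) is performed.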
Now don't make the mistake I did of searching inside the WinWindow for the control including the search properties, because that will result in the dreadful message: "Another control is blocking the control. Please make the blocked control visible and retry the action."

Alternatives with AccessibleName

An alternative way that also works is setting the AccessibleName of the control. If you set this value, then you don't need the containing WinWindow control and you can search straight for the control based on the name and the type you expect. But this has a nasty side effect that is probably not visible immediately. AccessibleName is used by accessibility tools; it provides the name of the control. You can check this by e.g. starting Windows Narrator on your machine: the moment you click a control it will tell you which control you are accessing on that particular form. When we change the accessible name, we can introduce issues with our application in production for people who rely on these names to represent something they can understand. Therefore it is best to search based on the control name, without tweaking the AccessibleName property.

How can we make the test more maintainable?

As I have discussed in all my courses, the best way to ensure you create a maintainable set of UI automation functions is by adopting the concept of writing DAMP tests. DAMP stands for Descriptive And Meaningful Phrases, and you use this to ensure you can read the scenario of your tests straight from the code. You do this by abstracting the UI using a Page Object abstraction. For each functional part of your UI you create a class that abstracts the actions of the UI and provides methods you can call. Each method returns a Page Object that contains the next actions available in the UI. This then results in a nice fluent API that you can use to write your test scenario.
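A minimal sketch of such a page object with a fluent API might look like this (the class and method names are hypothetical, not the article's downloadable sample):

```csharp
// Illustrative page object; "MainForm" and "btnClickMe" are made-up names.
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;

public class MainForm
{
    private readonly UITestControl _window;

    public MainForm(UITestControl window)
    {
        _window = window;
    }

    public MainForm ClickMeButton()
    {
        // The WinWindow-first search is hidden inside the page object.
        var buttonWindow = new WinWindow(_window);
        buttonWindow.SearchProperties.Add(
            WinWindow.PropertyNames.ControlName, "btnClickMe");
        Mouse.Click(new WinButton(buttonWindow));
        return this; // returning the page object enables fluent chaining
    }
}

// The test scenario then reads as descriptive phrases:
// new MainForm(app).ClickMeButton().ClickMeButton();
```

Because each action method returns the next page object, the test body becomes a readable chain of steps rather than a pile of search-property plumbing.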
So for clicking all controls in the UI the test scenario would look like this. I hope you agree that this is way more readable than the test I showed before, where I only clicked one button. Imagine the clutter you get when you click multiple elements in the test.

Page object pattern

The way you write the methods in the Page Object abstraction is by creating a class MainForm and adding action methods to that class that interact with the UI. The actual search is now abstracted in the MainForm page class, and my test delegates the actual interaction with the controls to an instance of that class. By returning the MainForm instance as the result of each method, you get the possibility to call the next method, resulting in the fluent API.

Conclusion

We have looked at how to find controls in a Windows Forms application and found that we need to search for a WinWindow control first before we can find the actual control we want to interact with. Secondly, I showed how you can abstract the interaction using a pattern called the Page Object pattern, which enables us to write more maintainable code.

You can download the full sample I have from this location: Just unzip it and run it in your Visual Studio IDE. The only thing you need to change is the launch path for the application, since that depends on where you unzipped the solution.

Hope this helps

20 thoughts on "Maintainable Test automation for Winforms using CodedUI"

Hi Marcel. Thanks for your great (as usual) job. I have tried your example. It works for me on Win 8.1 with VS 2013, but it does not on W10 with VS 2015. I think something changed in the way the controls hierarchy is built on W10 and, in addition, everything seems shifted on the screen, so when you ask Mouse.Click to click a specific control, it clicks in a different place. Do you know if there are known issues with W10? Thanks, Giorgio.

I will look into it and post back when I know more.

Hi Marcel, Excellent post.
I am facing the same issue as Giorgio with Coded UI on W10 with VS 2015. Did you ever get to look into this? I found that if I switch Inspect.exe to UI Automation, the shifting is gone and everything seems to be aligned properly. I am not sure how I can get Coded UI to use UI Automation for a WinForms based application as opposed to the default MSAA. Any help would be greatly appreciated. Thanks, GK

Unfortunately I have not looked into this. The problem I face is that I can't reproduce the issue on my machines, and I tried multiple. Are you able to record a short screen capture of what is going on, and also provide me with info on how the screen resolution of your system is set up? Send it to me by mail to vriesmarcel@hotmail.com and then I will try to reproduce it again and see if I can find a solution.

You can tell Coded UI to explicitly search using UI Automation by doing the following:

    WinButton myButton = new WinButton(myContainer);
    myButton.TechnologyName = "UIA";
    Mouse.Click(myButton);

Coded UI doesn't search for the control until we perform an action on it, so just be sure to set the TechnologyName property before interacting with the control.

Hi Marcel, I'm trying to create a UI application that will select the specific tests to run. When calling the test from a "button_Click" method, I always get an error that the control was not found. What is the best way to do it? I tried the following: 1. Create the application in a different solution and add as a reference the Coded UI library project with all the controls. 2. Create the application in the same project where the tests and controls are placed. I have used hand-coded UI for the controls and for creating the tests. Thanks in advance, TB

I am not 100% sure what you are trying to achieve, but I have a feeling this post may be of help for you: here I show how you can use Coded UI from within your application to click in the same application.

Marcel, I have studied your CodedUI course on Pluralsight.
I am building against a WPF application. It is complex and runs in an enterprise environment. We will be using TeamCity once it is ready. I want to hand-code this completely. As we have multiple WPF applications on the production floor, I want to build my first project using the best of SOLID practices from the start (including extension methods and base classes to abstract out eventually to a NuGet package, as well as a fluent methodology). I am comfortable with all of that. However, I am still stumbling over design. Do you have a fully matured WPF project that would include all the best practices, and the file and project structure that you would use? I keep finding resources from others that are web or window based, which has only mixed me up. If you have something out on GitHub that I could use as a model, I would deeply appreciate looking at it. Thank you. The demo with Page Objects Code First is helpful, but I am hoping I could look at something in WPF instead of web. Thanks

I am sorry, I don't have a demo for that lying around. But the basic principles are the same: define the functional areas of your application and build a page object for each part. Make them nice and small and compose the flow out of the chaining of the objects. Unfortunately I don't have a good sample for this at the moment. I would love to help out, but this then needs to be in a consulting engagement, since I don't have any representative WPF application that I can base the help on.

Marcel, Running test ClickButton1 runs fine (passed test). Running test TestFormWithPageObject fails with the error you mentioned earlier, "Another control is blocking the control. Please make the blocked control visible and retry the action." I assumed both tests should pass as written. What do I need to do to make TestFormWithPageObject pass?
Copy of error:

    Test Name: TestFormWithPageObject
    Test FullName: WinformsCodedUI.UITests.CodedUITest1.TestFormWithPageObject
    Test Source: C:\repos\Demos\WinformsCodedUI\WinformsCodedUI.UITests\CodedUITest1.cs : line 76
    Test Outcome: Failed
    Test Duration: 0:00:07.2404499

    Test method WinformsCodedUI.UITests.CodedUITest1.TestFormWithPageObject threw exception:
    Microsoft.VisualStudio.TestTools.UITest.Extension.FailedToPerformActionOnBlockedControlException:
    Another control is blocking the control. Please make the blocked control visible and retry the action.
    Additional Details:
    TechnologyName: 'MSAA'
    Name: 'Click me!'
    ClassName: 'WindowsForms10.BUTTON'
    ControlType: 'Window'
    ---> System.Runtime.InteropServices.COMException: Exception from HRESULT: 0xF004F003

Hi Marcel, I am impressed with your web site contents and examples. This specific Coded UI on WinForms example made me more interested to work on it. I have a WinForms form with an Infragistics combobox on it, and the combobox datasource is filled with a datatable with 2 columns (ID and Desc). I have tried a lot to create a test method where the combobox should pick one of the items from the dropdown, but I failed in making this work through the Coded UI C# script. Can you help me with how to achieve it?
    var custtype = new WinWindow(dfcol[3]);
    custtype.SearchProperties.Add(WinWindow.PropertyNames.ControlName, "ucCusType");
    var custtypecol = custtype.GetChildren();

    var custtypecb = new WinComboBox(winform);
    WinControl wctrl = new WinControl(custtypecb);
    wctrl.SearchProperties[UITestControl.PropertyNames.Name] = "Open";
    wctrl.SearchProperties[UITestControl.PropertyNames.ControlType] = "DropDownButton";
    wctrl.WindowTitles.Add("XXXXXXX");
    //wctrl.SearchConfigurations.Add(SearchConfiguration.ExpandWhileSearching);

    string lname = "RED";
    int k = 0;
    foreach (WinListItem wlt in custtypecb.Items)
    {
        if (wlt.DisplayText == lname)
        {
            itm = wlt;
            break;
        }
        k++;
    }

    if (itm != null)
    {
        custtypecb.Expanded = true;
        itm.Select();
        //Mouse.Click(itm);
        //wctrl.WaitForControlEnabled();
        //wctrl.SetFocus();
        //Mouse.Click(wctrl);
        //Mouse.Click(itm);
    }

All the time I get the PlaybackFailureException: cannot set property with value RED.

Hi, I am using Visual Studio 2017 Enterprise and I am not able to find the ApplicationUnderTest class. Would you know if this has been renamed? A Google search did not yield any results. It would be great if you could let me know.

You probably are missing a using statement, since it is just there in my 2017 environment. Did you create a Coded UI test project from the template?

Thanks for the reply. I remember checking the using statement; will check again. I did remember the auto-complete was showing an error. Will try and let you know in a couple of days. Interestingly, the same program (copy-paste) worked in VS 2015. Probably one of those things.
Maybe if nothing else works I will try uninstall and reinstall 🙂

These are the using statements that are generated:

    using System;
    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Microsoft.VisualStudio.TestTools.UITest.Input;
    using Microsoft.VisualStudio.TestTools.UITest.Extension;
    using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
    using Microsoft.VisualStudio.TestTools.UITesting.DirectUIControls;
    using Keyboard = Microsoft.VisualStudio.TestTools.UITesting.Keyboard;

I still get the same error.

Hello Marcel, This post is really very helpful. The sample which you have provided is not there anymore; it shows a "page not found" error. Kindly re-upload it; it will be really helpful for me. Thank you. Seema

I updated the link and the sample can be downloaded again.
https://fluentbytes.com/?p=7381
c:\>java -version
java version "1.3.0"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.0-C)
Java HotSpot(TM) Client VM (build 1.3.0-C, mixed mode)

This bug seems related to Bug #4218471, which was supposedly fixed in Kestrel (v1.3). The problem occurs when the print dialog is displayed. If the user moves the print dialog around, the Java application behind it does not repaint (while other application windows do repaint). This is an inconsistency.

If the desktop option "Show window contents while dragging" is turned on, then the problem occurs the first time that the movement is done. If this option is turned off (thus only showing an outline), then the first move seems to be fine; however, subsequent moves cause the same problem.

The following is the complete code that I used to demonstrate this:

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.awt.print.*;

public class PrintDialogBug extends JFrame implements Printable {
    private myPanel m_myPanel = null;
    private boolean _printing = false; // boolean indicating printing

    public class myPanel extends JPanel {
        private Dimension sz;
        private String str = "Print Test: myPanel";

        public myPanel() {
            super();
            setPreferredSize(new Dimension(700, 500));
            sz = new Dimension();
            this.setOpaque(true);
        }

        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            sz = getSize(sz);
            g.setClip(0, 0, sz.width, sz.height);
            g.setColor((_printing) ? Color.white : Color.black);
            g.fillRect(0, 0, sz.width, sz.height);
            g.setColor((_printing) ? Color.black : Color.yellow);
            g.drawRect(10, 10, sz.width - 20, sz.height - 20);
            FontMetrics fm = g.getFontMetrics();
            int sw = fm.stringWidth(str);
            int sh = fm.getHeight();
            g.drawString(str, (10 + ((sz.width - 20) / 2) - (sw / 2)),
                         (10 + ((sz.height - 20) / 2) + (sh / 2)));
        }
    }

    public PrintDialogBug(String t) {
        super(t);
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        JMenuBar mb = new JMenuBar();
        JMenu m = new JMenu("File");
        m.add(new AbstractAction("Print...") {
            public void actionPerformed(ActionEvent e) { doPrint(); }
        });
        m.addSeparator();
        m.add(new AbstractAction("Exit") {
            public void actionPerformed(ActionEvent e) { System.exit(0); }
        });
        mb.add(m);
        setJMenuBar(mb);
        m_myPanel = new myPanel();
        getContentPane().add(m_myPanel);
    }

    public void doPrint() {
        PrinterJob pj = PrinterJob.getPrinterJob();
        pj.setPrintable(this);
        if (pj.printDialog()) {
            try {
                pj.print();
            } catch (PrinterException e) {
                e.printStackTrace();
            }
        }
    }

    public int print(Graphics g, PageFormat pf, int idx) {
        // since we are currently sizing to fit on one page, we are only
        // printing one page.
        if (idx >= 1) {
            _printing = false; // turn off printing
            return NO_SUCH_PAGE;
        }
        _printing = true; // flag to notify paint routines that we are printing

        // get the total width and height of the current panels
        int pWidth = m_myPanel.getWidth();
        int pHeight = m_myPanel.getHeight();

        // calculate the scaling ratio
        double w_perChange = percentChange(pf.getImageableWidth(), pWidth);
        double h_perChange = percentChange(pf.getImageableHeight(), pHeight);
        double perChange = Math.min(w_perChange, h_perChange);

        // translate the printer graphic and set scale to scaling ratio
        g.translate((int) pf.getImageableX(), (int) pf.getImageableY());
        ((Graphics2D) g).scale(perChange, perChange);
        m_myPanel.print(g);
        _printing = false; // notify the paint routines that we are done printing
        return PAGE_EXISTS;
    }

    // simple method which checks for zero before dividing
    private double percentChange(double to, double from) {
        if (from == 0) return 1.0;
        return to / from;
    }

    /////////////////////////////////////////////////////////////////////////
    // main of demo
    public static void main(String[] args) {
        PrintDialogBug vs = new PrintDialogBug("Print Dialog Bug");
        vs.setLocation(10, 30);
        vs.pack();
        vs.setVisible(true);
    }
}

(Review ID: 113018)
======================================================================

N/A

Fixed using peer.
xxxxx@xxxxx 2003-03-19

Has anyone discovered a workaround to this bug?
I am part of a development team that is about to release an application which utilises JDK v1.3, and this problem still persists! This should be considered a major flaw, as many Java based applications use this feature. I would strongly recommend this be fixed.

We are also facing a similar problem; can this be handled?

It is still not fixed, even in JDK 1.4b2!! It's incredible! It's the same as: 85.html

See the workaround at (site may be down at times as it's a home PC).

The root of this problem seems to be the print dialog (and page dialog too), which are provided by the OS (not Java), blocking the current thread, which happens to be the event dispatch thread. Blocking that thread effectively prevents all repainting (as paint is a response to an event). The first "solution" is to make all calls to printDialog/pageDialog in a separate worker thread. That does solve the problem with pageDialog, but not with printDialog; printDialog apparently still blocks the event thread somehow. The real solution is to start off a new event pump in a worker thread, while blocking the event dispatch thread yourself. Fortunately, this doesn't lead to a deadlock with printDialog. I realize it's a dangerous and hack-like technique, but why not (if it works)?

This bug still exists in JDK 1.4.1-b21, even though the list of fixed bugs for v1.4 includes this item:

    Bug ID: 4273333 RFE: Let PrinterJob.printDialog dialog box be optionally modal
    State: Closed, fixed

Note the ORIGINAL bug was reported in Sept 1999. As I type this today, 3 full years have passed, and this bug remains. Not only that, but you have marked it as FIXED, and included it here: But it's not fixed yet.

Still broken in JDK 1.4.1_01. Tested under Win2000 and WinXP and the problem still persists. Come on Sun, you need to give us a solution to this. The Microsoft developers are laughing at us (you) as we are faced with releasing Java app products with this type of repaint problem.
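A minimal sketch of the first workaround discussed in the thread above, written as a drop-in replacement for doPrint() in the PrintDialogBug class from the report (illustrative only; as the commenter notes, this fully helps pageDialog but may not completely unblock printDialog on every JDK build, and the secondary-event-pump variant relies on internals and is omitted here):

```java
public void doPrint() {
    final PrinterJob pj = PrinterJob.getPrinterJob();
    pj.setPrintable(this);
    // Show the native dialog from a worker thread so the AWT event
    // dispatch thread stays free to service repaint events.
    new Thread(new Runnable() {
        public void run() {
            if (pj.printDialog()) {   // blocks only this worker thread
                try {
                    pj.print();
                } catch (PrinterException e) {
                    e.printStackTrace();
                }
            }
        }
    }).start();
}
```

Note the trade-off: with the dialog on a worker thread the frame repaints, but it is no longer modal with respect to the application window, which is exactly what RFE 4273333 asked to make configurable.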
java version "1.4.0_00"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0_00-b05)
Java HotSpot(TM) Client VM (build 1.4.0_00-b05, mixed mode)

The print dialog still does not repaint as of today, Dec 04 2002.

This still happens in 1.4.2-b28 (regression?)

It's fixed in 1.5 (not yet released). The bug database used to say something like "fixed in a future release" for cases like this, but it doesn't any more. I will try (again!) to get that issue resolved.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4394889
On Wed, Mar 01, 2006 at 12:18:58PM +1100, Daniel Carosone wrote:
> On Sun, Feb 26, 2006 at 04:15:53AM -0800, Nathaniel Smith wrote:
> > -- Having a single, standard abbrev is very useful. If, for
> >    instance, we switch the command name to "mtn", we should also (I
> >    suggest):
> >    -- make the bookkeeping directory MTN (instead of MT)
>
> If this is to change, let's please use .MTN (or equivalent for the
> eventual name), and make it selectable by a hook in
> ~/monotone/monotonerc or the environment.
>
> > -- I'm kind of fond of "m" (take _that_, you upstart 2-letter
> >    systems like hg!), but it got shouted down the last time I
> >    suggested it :-).
> > -- "mmm" -- less boring than other suggestions, has appropriate
> >    associations ;-), and is, in fact, a mono-tone... but just
> >    perhaps a bit too cute. Also, annoying to type.
>
> I don't really want to get into the bikeshed about what name to use,
> but since you're explicitly polling for opinions:
>
> m, mm, or mmm are good/clever/cute. I don't mind cute, in this
> sense, if it results in a good name people will remember. hg is the
> classic example of a cute name that works. mm is probably the best
> of these, and I'd be happy with that.
>
> monotone is just fine by me as is, frankly.
>
> the others are all just not compelling or interesting enough to be
> worthwhile, somehow. even mtn, which seems to be popular, has the
> feeling of a least-common-denominator or least-bad compromise, to me.
>
> If/whichever we choose, I agree that aligning the various
> abbreviations in dot files and namespaces to the one thing is
> worthwhile.
>
> --
> Dan.

Should we want to differentiate ourselves in Google searches, perhaps
trying a few names on Google now might be useful.

mmm      --     8,490,000 Google hits
mm       --   109,000,000
m        -- 1,340,000,000
mtn      --     6,410,000
mon      --   206,000,000
monotone --     2,950,000

and finally,

gienhu   -- no hits

So gienhu is the clear winner, with monotone a distant second.

-- hendrik
http://lists.gnu.org/archive/html/monotone-devel/2006-03/msg00013.html
I am trying to take an ActionScript 3 UI library and use MXML to describe its layout. More specifically, I wanted to use the PlayBook's native OS UI component library from the BlackBerry Tablet OS SDK, called the QNX UI components. Let's take a look at some code to show you what I am talking about. Here is an MXML based app using a new class called QApplication that extends the QNX Container, which then contains a QNX UI Button:

<?xml version="1.0" encoding="utf-8"?>
<r:QApplication xmlns:r="" xmlns:fx="" xmlns:>
	<buttons:Button />
</r:QApplication>

Doing this in ActionScript would look like this:

package
{
	import flash.display.Sprite;
	import flash.display.StageAlign;
	import flash.display.StageScaleMode;
	import qnx.ui.buttons.Button;
	import qnx.ui.core.Container;

	public class DisplayButtonAS3 extends Sprite
	{
		public function DisplayButtonAS3()
		{
			stage.scaleMode = StageScaleMode.NO_SCALE;
			stage.align = StageAlign.TOP_LEFT;

			var container:Container = new Container();
			var but:Button = new Button();
			container.addChild(but);
			addChild(container);
			container.setSize(stage.stageWidth, stage.stageHeight);
		}
	}
}

All to create the same application seen here:

This means you can create a PlayBook application all in MXML with no Flex. It also means you can use Flex 4 (including Flex Hero, the latest SDK with mobile support) and mix in some QNX UI components. All you need is to add the QMXML.swc (compiled against Tablet OS SDK 0.9.2) and the namespace xmlns:r="" to your project/application.

Now that sounds like the bomb! Well, it's still another UI library, so it's not like the QNX UI component library all of a sudden works just like the Flex Spark architecture. No, the QNX UI lib walks its own path and you will have to remember that when using it. That doesn't mean it's bad; just be aware it's not the same as Flex.
If you want to continue and play with the examples, make sure to get the BlackBerry Tablet OS SDK from the BlackBerry developer site. NOTE: I am using BlackBerry Tablet OS SDK 0.9.1 as of writing this.

First! All the code is available on GitHub, go get it. The repository has a few Flash Builder projects in it. The main library project is called QMXML. All you need to create your own project is the QMXML.swc (compiled against Tablet OS SDK 0.9.2) found in the bin folder, or you can download it from here.

The other projects are sample projects called NonFlexMXMLUsingQNX and FlexAndQNX. The NonFlexMXMLUsingQNX project contains the DisplayButton applications shown above, as well as another project showing a few more QNX UI classes nested in containers using QNX flow/containment properties.

The FlexAndQNX application is a standard Flex 4 Spark (non-mobile) application that uses a QNX Button and List alongside a Flex LineSeries data chart, which made some sense as the QNX UI library lacks charting at this point. Here is a view of the application running in the simulator:

Here is the source code for the FlexAndQNX application, which allows you to mix QNX UI components directly in Flex.
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="" xmlns:r="" xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx" xmlns:>
	<s:layout>
		<s:HorizontalLayout />
	</s:layout>
	<fx:Script>
		<![CDATA[
			import mx.collections.ArrayCollection;
			import qnx.ui.core.ContainerFlow;
			import qnx.ui.core.SizeUnit;
			import qnx.ui.data.DataProvider;
			import qnx.ui.events.ListEvent;

			private var data:Array = [];

			[Bindable]
			private var chartData:ArrayCollection;

			private function createData():void
			{
				data = [];
				for (var i:int = 0; i < 50; i++)
					data.push({label: "Point " + i,
					           data: i + (Math.random() * 30),
					           data2: i + 60 + (Math.random() * 60),
					           data3: i + 80 + (Math.random() * 120)});
				lstData.dataProvider = new DataProvider(data);
				chartData = new ArrayCollection(data);
			}

			protected function lstData_listItemClickedHandler(event:ListEvent):void
			{
				trace(event.data.data + " - " + event.row);
			}
		]]>
	</fx:Script>
	<fx:Declarations>
		<!-- Define custom colors for use as fills in the AreaChart control. -->
		<mx:SolidColor
		<mx:SolidColor
		<mx:SolidColor
		<!-- Define custom Strokes. -->
		<mx:SolidColorStroke
		<mx:SolidColorStroke
		<mx:SolidColorStroke
		<mx:SeriesInterpolate
	</fx:Declarations>
	<s:Group
		<r:QContainer
			<buttons:LabelButton
			<listClasses:List
		</r:QContainer>
	</s:Group>
	<s:HGroup
		<mx:AreaChart
			<mx:horizontalAxis>
				<mx:CategoryAxis
			</mx:horizontalAxis>
			<mx:series>
				<mx:AreaSeries
				<mx:AreaSeries
				<mx:AreaSeries
			</mx:series>
		</mx:AreaChart>
		<mx:Legend
	</s:HGroup>
</s:Application>

There is probably more integration on the QContainer class that I could do to make it work with Flex better, but for now follow these loose rules. When mixing QNX UI components in a Flex app, always start with a QContainer; think of that as your little mini QNX app space.
You will want to wrap this in a Flex Spark Group or SkinnableContainer, with the notion of the QContainer filling up all of its parent's width and height. The QContainer listens for its parent's RESIZE event and then invalidates its layout; based on the new width and height of its parent, it remeasures itself and its children. There is some leeway for setting different QNX properties like size and sizeUnit, but for the safest bet start with:

<s:Group
	<r:QContainer />
</s:Group>

Any changes to the dimensions of the Group will automatically be detected by the QContainer, and it will fill up the whole space of its parent.

NOTE: The BlackBerry Tablet OS SDK does not provide a manifest file and namespace for their classes, which means your apps will have a lot of namespaces that look like qnx.ui.buttons.*. Also, some QNX classes might cause problems with Flex's SystemManager controlling the stage; let me know if you come across something.

There are various project types across Flash Builder 4 and Flash Builder Burrito. Also, the BlackBerry Tablet OS SDK installs plugins for both Flash Builder versions. Typically all you will need is to create an AIR project and modify your application descriptor file for mobile devices (meaning typically non-WindowedApplication, and set visible=true). The QMXML classes will work in Flex 4 Spark desktop apps as well as the newer Flex Hero mobile app and theme. This assumes your projects are created and point to the BlackBerry Tablet OS SDK 0.9.1. Be aware the classes have had limited testing; I would love you to take them for a ride. Have fun!

With MXML you get Binding, which is one of the great benefits of MXML. But this also means I had to take a UI framework that didn't take that into consideration and tweak it to work. Some of the QNX UI framework choices made this not so straightforward.
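The resize-following behavior described above can be sketched roughly like this (illustrative only; the handler names are made up and the actual QContainer internals in QMXML may differ):

```actionscript
// Rough sketch of the idea: when the QNX container lands inside a Flex
// parent, watch the parent for RESIZE and re-measure to fill its bounds.
private function onAddedToParent(event:Event):void
{
	parent.addEventListener(Event.RESIZE, onParentResize);
	onParentResize(null); // size once immediately
}

private function onParentResize(event:Event):void
{
	// setSize() (from qnx.ui.core.Container, as used in the
	// DisplayButtonAS3 sample above) re-measures the container
	// and lays out its children in the new bounds.
	setSize(parent.width, parent.height);
}
```

The design point is that the Flex side stays in charge of layout, while the QNX container simply mirrors whatever space its Spark parent ends up with.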
I had to work in a bunch of hooks to properties that might change the layout, and create my own layout invalidation/refresh. Luckily the Container class is the only container in the QNX UI library, and it only had a half dozen layout related properties. That made it easy to make this a viable solution. There are still probably some issues with using width/height instead of QNX's size, sizeUnit, and sizeMode for flow layouts, but if you are using explicit width/height you probably are not invalidating the layout much. Also, there were a few properties that didn't implement getters/setters, so I had to watch for changes on them differently to invalidate the layout. Overall it wasn't too complicated, but it could be improved.

Having said all that, I have to put in a disclaimer that the BlackBerry Tablet OS SDK is in beta and will improve, as will my understanding of how it works over time.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/articles/182422/using-mxml-with-qnx-ui-components-for-the-playbook
On Mon, Mar 7, 2011 at 7:41 PM, Mark Brown <broonie@opensource.wolfsonmicro.com> wrote:
> ... the register is referenced. This doesn't seem ideal - it'd be much
> nicer to have the register I/O functions work this out without the
> callers needing to.

I'm afraid it's not easy to do so.

> ... ?

No, it makes the processor work in master mode and outputs bit clock and
frame clock.

> ... completely.

This SigmaDSP doesn't support duplex operation; it can choose either the
ADCs or the serial port as input source.

> ... elsewhere mute
> function. This looks buggy.

The processor has no switches to mute or unmute the ADCs/DACs; the only
thing we can do is turn them off or on.

> + ... - what
> exactly?

These configurations above loading firmware are mainly used to avoid
pops/clicks and clean up some registers in the DSP core.

> + ... handled in either the bias management functions or ideally DAPM. It
> also appears that the oscillator is an optional feature so it should be
> used conditionally.

The processor can receive MCLK either from an external clock source or a
crystal oscillator. Currently we use the on-board crystal, and it can't be
turned off, otherwise the whole chip will be in an unpredictable status;
only output clocks can be disabled.

> + struct adau1701_priv *adau1701 = snd_soc_codec_get_drvdata(codec);
> +
> + adau1701->codec = codec;
>
> You don't actually ever seem to reference the codec pointer you're
> storing ... be ... namespaced.

_______________________________________________
Alsa-devel mailing list
Alsa-devel@alsa-project.org

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at ... Please read the FAQ at ...
https://lkml.org/lkml/2011/3/9/22
Interface representing a location where extensions, themes etc. are installed.

import "nsIExtensionManager.idl";

Detailed Description

Interface representing a location where extensions, themes etc. are installed.

Definition at line 56 of file nsIExtensionManager.idl.

Whether or not the user can write to the Install Location with the current access privileges. This is different from restricted because it's not whether or not the location might be restricted, it's whether or not it actually is restricted right now.

Definition at line 95 of file nsIExtensionManager.idl.

An enumeration of nsIFiles for:

Definition at line 70 of file nsIExtensionManager.idl.

The file system location where items live. Items can be dropped in at this location. Can be null for Install Locations that don't have a file system presence.

Note: This is a clone of the actual location, which the caller can modify freely.

Definition at line 79 of file nsIExtensionManager.idl.

The string identifier of this Install Location.

Definition at line 61 of file nsIExtensionManager.idl.

The priority level of this Install Location in loading.

Definition at line 116 of file nsIExtensionManager.idl.

Definition at line 107 of file nsIExtensionManager.idl.
Definition at line 110 of file nsIExtensionManager.idl.
Definition at line 108 of file nsIExtensionManager.idl.
Definition at line 111 of file nsIExtensionManager.idl.
Definition at line 109 of file nsIExtensionManager.idl.

Whether or not this Install Location is on an area of the file system that could be restricted on a restricted-access account, regardless of whether or not the location is restricted with the current user privileges.

Definition at line 87 of file nsIExtensionManager.idl.
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_install_location.html
Asked by: Issue adding external alias - VS 2010 bug

I am having a tough time setting up an external alias. I added references to these two items:

- Telerik.Windows.Controls.Scheduler
- Telerik.Windows.Controls.ScheduleView

Telerik is a set of components (in this case for WPF). When using both of these, they require that one be added as an external alias, as they contain duplicate definitions of some items. I made the following changes:

- Changed the Scheduler reference alias to 'sched' (left ScheduleView as 'global').
- Added the following class to my program:

  extern alias sched;

  namespace Work_Planner
  {
      public class RadScheduler : sched::Telerik.Windows.Controls.RadScheduler { }
  }

  NOTE: I've used BOTH my application's own namespace here as well as a new one.

- Added an xmlns to my XAML (again, I tried my MAIN app namespace as well as a new one):

  xmlns:local="clr-namespace:Work_Planner"
  xmlns:sch="clr-namespace:SchedNamespace"

- Added the component in my XAML (again, both namespaces have been tried):

  <local:RadScheduler>
  <sch:RadScheduler>

- And finally added the extern declarations and modified any using clauses in code-behind files:

  extern alias sched; // first line - before usings
  using ...........;
  using sched::Telerik.Windows.Controls.Scheduler;

After doing this, it still did not work. I found a forum topic on it as well as a Microsoft bug report. I followed the steps in the workaround listed in the bug report.
Just before the end of the csproj file (just before the </Project> line) I added the following:

<Target Name="solveAliasProblem">
  <ItemGroup>
    <ReferencePath Remove="C:\Program Files (x86)\Telerik\RadControls for WPF Q3 2010\Binaries\WPF\Telerik.Windows.Controls.Scheduler.dll"/>
    <ReferencePath Include="C:\Program Files (x86)\Telerik\RadControls for WPF Q3 2010\Binaries\WPF\Telerik.Windows.Controls.Scheduler.dll">
      <Aliases>sched</Aliases>
    </ReferencePath>
  </ItemGroup>
</Target>
<PropertyGroup>
  <CoreCompileDependsOn>solveAliasProblem;$(PrepareResourcesDependsOn)</CoreCompileDependsOn>
</PropertyGroup>

It still does not work. Now I get an error when building, on my main window, in the generated code file. The error is as follows:

Error 1 The type or namespace name 'Scheduler' does not exist in the namespace 'Telerik.Windows.Controls' (are you missing an assembly reference?) C:\Source Code\PN.Net\Work Planner\Work Planner\obj\x86\Debug\MainWindow.g.cs 41 32 Work Planner

Looking at the g.cs file, it does reference Telerik and does not contain the 'extern alias' at the top. What can I do? I can't modify this file. The bug doesn't even look to be addressed in SP1. Any thoughts or ideas would be appreciated!

Paul
Wednesday, February 23, 2011 10:38 PM

Question

All replies

Hi Paul,

Thanks for your post. I'm not familiar with Telerik but I suspect you have to install something else. It seems the Scheduler type is in another assembly but uses the same namespace "Telerik.Windows.Controls". Also you can post another thread at: I need some time to do further research on this issue and hope you can get more information there.

Ziwei Chen [MSFT] MSDN Community Support | Feedback to us
Get or Request Code Sample from Microsoft
Please remember to mark the replies as answers if they help and unmark them if they provide no help.

Friday, February 25, 2011 6:32 AM

Thank you for the reply. There's nothing missing from the installations.
I can use either reference and its corresponding component separately, just not together. I have already posted on Telerik's forums, and the resulting response was to use an external alias on the Scheduler and leave ScheduleView as global, which I have done. The 615953 issue in Visual Studio on the MS Connect site is exactly my problem, but I cannot get the workaround to work. I'm hoping I'm doing something wrong that can be fixed. I did create a sample project without referencing my application's namespace in my XAML and it did work. Removing all references in my large application will be difficult.

Friday, February 25, 2011 2:18 PM

Hello,

My apologies for posting again, but I'm wondering if the statement "I need some time to do further research" meant that this thread would be updated again. If not, I'll need to go another route; if so, I will wait to hear back before continuing development on this application.

Paul
Thursday, March 03, 2011 5:59 PM

Hi Paul,

Thanks for your feedback and sorry for the delay. I did some research but cannot reproduce your issue. I think the best approach is to look for some workarounds for your project if it is urgent. Thanks for understanding and wish you a good weekend.

Ziwei Chen [MSFT] MSDN Community Support | Feedback to us
Get or Request Code Sample from Microsoft
Please remember to mark the replies as answers if they help and unmark them if they provide no help.

Friday, March 04, 2011 9:14 AM

I worked around this issue in XAML using two aliases for one assembly (comma delimited): "global, someAliasName". In code where I have a class conflict I use someAliasName::someNamespace.someClass. In XAML:

xmlns:a="clr-namespace:someNamespace;assembly=someAssembly"

Hope this helps somebody who has a problem like this.

Tuesday, August 23, 2011 12:37 PM
http://social.msdn.microsoft.com/Forums/vstudio/en-US/87f0caa0-c57a-4146-a999-c794947ae28e/issue-adding-external-alias-vs-2010-bug?forum=vseditor
Jim Meyering 2017-10-03 01:24:09 UTC

Mainly just a heads up, since this certainly isn't blocking me. When trying to build coreutils using gcc built from very recent sources (with some change committed since Sep 26), I see this new warning/error:

In file included from src/system.h:140:0,
                 from src/ls.c:84:
src/ls.c: In function 'print_long_format':
./lib/timespec.h:85:11: error: assuming signed overflow does not occur when simplifying conditional to constant [-Werror=strict-overflow]
   return (a.tv_sec < b.tv_sec ? -1
           ~~~~~~~~~~~~~~~~~~~~~~~~~
           : a.tv_sec > b.tv_sec ? 1
           ^~~~~~~~~~~~~~~~~~~~~~~~~
           : (int) (a.tv_nsec - b.tv_nsec));
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When compiling with gcc built from the latest sources on September 26, there is no such problem. Given all of the comments on that function, I'd be tempted to suppress this warning in that function.
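The expression gcc is objecting to is the subtraction (int) (a.tv_nsec - b.tv_nsec) in the final branch of the comparison. As an illustrative sketch (the function name is mine, and this is not necessarily the fix gnulib adopted), a field-by-field comparison leaves no signed arithmetic for -Wstrict-overflow to reason about:

```c
#include <time.h>

/* Sketch of a subtraction-free timespec comparison.  Comparing field by
 * field avoids the signed arithmetic on tv_nsec that triggers the
 * strict-overflow assumption.  Returns -1, 0 or 1 like the original. */
int timespec_cmp_nosub(struct timespec a, struct timespec b)
{
    if (a.tv_sec < b.tv_sec)
        return -1;
    if (a.tv_sec > b.tv_sec)
        return 1;
    if (a.tv_nsec < b.tv_nsec)
        return -1;
    if (a.tv_nsec > b.tv_nsec)
        return 1;
    return 0;
}
```

The trade-off is a couple of extra branches in exchange for no overflow-sensitive subtraction.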
http://bug-gnulib.gnu.narkive.com/WfzuGmHP/latest-gcc-vs-lib-timespec-h-85
Having people from both desktops in one place is seen as valuable by most participants. Some of the meetings taking place have been about topics important to both desktops, like common infrastructure. A good example of this is the Freedesktop.org meeting, where the current situation and the issues surrounding the process of collaboration were discussed. Unfortunately, due to the closure of the room they were in, the first meeting did not last long, but it was later continued. A summary has been posted to the XDG mailing list, and the resulting discussion can be read. In short, it seems at least the participants in the meeting agreed that a bit more structure is needed: no taking of Freedesktop.org namespaces unless at least KDE and GNOME agree, and a list of specs which have been and have not been blessed. This certainly does not mean complicated procedures or bureaucracy; just an email to the XDG mailing list to ask for a 'go' would generally work to get the process started. If the person or team proposing the standard is open to suggestions and collaboration, there is no reason it would have to take long to get something accepted. Yet agreeing on some rules will make sure there will not be any more 'standards' unsuitable for both major desktops, and more work can be shared.

Sharing infrastructure is clearly on the rise. As Maemo will move to Qt as its main toolkit, the other components of its infrastructure will have to work more closely with Qt. Many of those components have been cross-desktop for a while now - like D-Bus and Avahi. Others, like Tracker, are in the process of removing their dependencies on GTK technology and becoming part of the Free Desktop platform layer. Since several Tracker developers are here, a meeting was set up for people working on desktop search, indexing and monitoring. People from Nepomuk (contextual linking), Tracker (indexing), Strigi (indexing) and Zeitgeist (event logging) spoke about further collaboration.
Zeitgeist will use Tracker for its data storage and Tracker will start using the Strigi analyzer infrastructure. The ontologies (definitions for data storage) developed for Nepomuk are already being adopted by Tracker. A new ontology will be developed for use with Zeitgeist in cooperation with the Tracker and Nepomuk developers. These developers are very enthusiastic about these collaborative developments, and rightly so!

Developers from the open source consultancy Collabora held a session with KDE developers to discuss their work in bringing the Telepathy communication mechanism to KDE. Work has restarted on KCall, and more applications are expected to integrate it in future.

Cooperation was also started in the area of bug fighting. Members of the GNOME and KDE Bugsquads got together to share their secret experiences and ideas on killing these ugly animals. Discussed were things like the formal and informal policies of both teams, and their experiences in various areas. The KDE bugsquad is very good at recruitment and organisation, while the GNOME team has good quality control and a decent process of teaching new members the ropes. Both bugsquads suffer from the same curse-and-blessing: there is a very high conversion rate of people joining the bugsquads toward development. In the KDE team, some quick surveys have shown that most people joining the bugsquad were new KDE contributors. After being welcomed with open arms into the KDE community through the bugsquad, they quickly moved on towards SVN accounts, starting with simple bugfixing and then moving on to maintaining full applications. But everyone moving on to KDE development, however much appreciated, leaves a gap in the bugsquad ranks. This creates a challenge for the remaining members - they quickly have to find new members, and get them to help organise the bugdays to train new recruits in the ways of bughunting.
Despite the downside to this, it also creates a very interesting and fast-paced environment where people turn up, contribute a great deal (don't underestimate the incredible value of the bug-reporting work!) and move on to become highly appreciated developers in our great Free Software community.

Steve Alexander gave a talk on CouchDB, the database being used for Ubuntu One to share everything from addresses to bookmarks between computers. He announced that Till Adam had created an Akonadi resource for CouchDB, and demonstrated the power by entering an address in Evolution and then reading it in Akonadi. Personal data can now be shared across computers and across desktops.

Smaller get-togethers which still have to bear fruit would be the talks between Maemo, KOffice and OpenOffice developers about sharing more infrastructure, and some initial talks among multimedia developers. There have been cooperative efforts between the GNOME and KDE marketing teams, while the KDE e.V. and GNOME Foundation boards had a meeting to talk about this conference, what would happen next year, and other possibilities of working together.

It is not only cross-desktop cooperation which is happening here. Plasma and KWin developers came together, remotely joined by Aaron Seigo over a Skype connection, to discuss further integration. The remote connection could have been better, and this is definitely something we need for the next Akademy so those still at home can join us. Nonetheless, ideas were discussed such as moving Plasma-based animations into KWin and improving the API so a tighter connection between KWin and Plasma is possible.

There also was a Kernel BoF, where kernel developer Matthew Garrett gathered some input from GNOME and KDE developers. Several developers asked for improvements to inotify, a kernel component which notifies applications of changes in files.
This is useful, for example, to music players (which can then automatically add music to their collection if the user drops it in the music folder) or document editors (which can warn users if somebody else has made changes to a file while they are editing it). Inotify has several issues right now, one of them being performance related. Currently, you have to configure inotify for each file you want to watch, which can become a problem if you have a large collection of files to monitor. If inotify could work recursively this problem would be solved. Related to this is the long time it takes to check if any indexed files have changed when the indexer has not been running for a while. The indexer has to crawl the whole system, checking every single file. A possible solution would be a special attribute for directories. It could indicate that something in a directory tree has changed, making checking for changes much faster. This new attribute would be possible as part of the extended attribute framework in the file systems and has a chance of making it into Btrfs. Unfortunately that means end users will still have to wait several years before they reap the benefits of this improvement; changes 'living higher on the stack' can take a while to trickle through.

Another interesting request which came up was related to some recent fuss about a change in the way file systems worked on Linux. It increased the chances of losing file data for the end user if the system crashed. There is a temporary fix in place, but the developers are far from happy with that solution. A better way of solving the issue would be a new flag to be used when opening certain files, o_rewrite. It would tell the kernel to only write the changes to the file when the application closes it. Making sure the operation is atomic would also preserve extended file attributes, giving software developers an incentive to start using this new technique instead of their current workaround.
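The per-directory limitation mentioned above is visible in the inotify API itself: inotify_add_watch() takes a single path and watches only that directory, not its children, so monitoring a whole tree means walking it and registering every subdirectory yourself. A minimal Linux-only sketch (function name is mine; error handling omitted):

```c
#include <sys/inotify.h>

/* Watch a single directory for creation/modification/deletion events.
 * inotify offers no recursion flag, so a caller monitoring a tree has
 * to call this once for every subdirectory it cares about. */
int watch_one_dir(int inotify_fd, const char *path)
{
    return inotify_add_watch(inotify_fd, path,
                             IN_CREATE | IN_MODIFY | IN_DELETE);
}
```

A real indexer would combine this with a directory walk and re-register watches as subdirectories come and go, which is exactly the overhead the developers were asking the kernel to remove.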
The third important topic which came up at the Kernel BoF was the wish for some degree of integration between D-Bus and Linux. Currently, inter-application communication involves a lot of copying of data and context switches while the applications are negotiating the exchange of knowledge and commands. Having part of this process in the kernel could have a big impact on performance, making data exchange cheaper in terms of system resources.

As usual, the above is not much more than a small selection of what is going on at the joint Akademy/GUADEC meeting here at Gran Canaria. Following every interesting BoF and talk would require some highly illegal cloning of the marketing team. But the above impression should convey at least some of the exciting things happening here to those not at the Desktop Summit, and probably even inspire our great communities to work together more closely in the future!

I hope it will get used by KDE and Gnome, that would be really, well, awesome =)

All the talking about FOSS solutions and then using Skype? Does QuteCom (=OpenWengo) no longer work? Despite that: great article and a great event. Be sure to spread the word about cross-desktop collaboration:...

I wish I wasn't seeing a laptop with VISTA installed on one of the developers' machines (in the photo) ;)

Well, KDE4 runs on all major operating systems, including Windows. So of course there are developer machines with Windows (and hopefully KDE4). Alex

Jos, I love you, but can you please stop misspelling my name every time? :) Adam, no s. (I guess Steve might have gotten it wrong as well.) Thanks, Till

whoops, did it again, sorry... I pronounce it properly but still write it the wrong way. Unfortunately I can't fix the article, the dot thinks I'm a big spammer.... :(
https://dot.kde.org/comment/106711
Hello, I'm curious how you all handle namespaces in your production workflow. I'm wondering if there is a cleaner way than how we do it. The flow looks like this:

- First we reference all models into the rig scene. All models get one namespace.
- Then everything gets published.
- In the anim scene we reference the rig. Everything gets the same namespace, except the models, which get a double namespace (both the model and rig namespaces).

The models inside the rig look like this once everything is referenced in the animation scene:

car_rig_v01:carInterior_model_v01:main_grp
car_rig_v01:carExterior_model_v01:main_grp

A cleaner namespace in the anim scene would of course be car_v01:main_grp. But I'm not sure how to do this, since I need namespacing in the rig file to avoid name clashing. Thankful for help on how to set up a working namespace workflow!

Cheers
Olle

I'm not sure there is any perfect way to handle namespaces. Many have pros and cons. An approach I've seen is to "merge" them: everything is put under a root node and namespaces are merged, so car:int:main_grp becomes car:int__main_grp. This is done automatically at rigging release (implying the rigging release is scripted).

root
  car:int:main_grp
  car:int__main_grp

Thank you for your reply! Interesting to merge the namespaces, for sure. Makes it a bit cleaner, but maybe harder to script? I'm trying to decide between the following:

- ALT_1 (asset description inside namespace: "rig", "model" etc.)
- ALT_2 (merged namespace) (no description of asset)
- ALT_3 (not merged) (no description of asset)
- ALT_4 (asset description outside of namespaces)

ALT_4 seems kind of clean. But maybe it's a bad idea to have different naming of the top group of each asset (model_grp, rig_grp and so on)? If they are always called "root_grp" (no matter what type of asset) I guess it would be easier to script.

Best regards
Olle
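The merge rule described in the reply above can be sketched as plain string manipulation (the function name is mine; in an actual pipeline the resulting names would be applied inside Maya, e.g. via cmds.rename, at release time):

```python
def merge_namespaces(node_path):
    """Collapse nested namespaces into a single one by joining the inner
    parts with '__', e.g. 'car:int:main_grp' -> 'car:int__main_grp'."""
    parts = node_path.split(':')
    if len(parts) <= 2:          # zero or one namespace: nothing to merge
        return node_path
    return parts[0] + ':' + '__'.join(parts[1:])
```

Running this over every node at rig release keeps exactly one Maya namespace per asset while preserving the model/rig distinction in the flattened name.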
http://tech-artists.org/t/namespace-in-production/9393
To create a variable number of variables in Python, either use a dictionary or a namedtuple.

Problem: How can you store a variable number of variables in your Python code?

Example: Consider that there is a requirement wherein you have to store the specifications of a product in a single variable, which includes a variable number of variables used to store different parameters of the product. For example, we want to store parameters like brand name, model, and year of manufacture inside another variable named Product. The diagram below shall explain the problem in detail.

This is just an example scenario that shall be used to demonstrate the solutions to our problem statement.

Method 1: Using Python Dictionaries

The easiest way to accomplish our task is to make use of Python dictionaries. A dictionary is a collection that lets you store data in the form of key-value pairs, where the keys represent the variables and the values represent the values to be stored in those variables. To get a better grip on the concept of dictionaries, I highly recommend you have a look at this article.

The following code will help you understand the usage of dictionaries to store a variable number of variables (please read the comments to understand the code better):

product = {
    "brand": 'Apple',
    "model": 'iPhone 11 Pro',
    "year": '2019'
}
print(product)

# If you want to access a particular variable, e.g. the brand name, then use:
print("Product Brand is : ", product["brand"])

# If you want to loop through the variables and their corresponding values then use:
for x, y in product.items():
    print(x, ":", y)

Output:

{'brand': 'Apple', 'model': 'iPhone 11 Pro', 'year': '2019'}
Product Brand is : Apple
brand : Apple
model : iPhone 11 Pro
year : 2019
Take a look at the diagram below to visualize the scenario we shall be discussing next. This might look like a complex scenario; however, with the use of Python dictionaries even such situations can be handled with ease. The program given below demonstrates how you can deal with such situations:

products = {
    "Apple": {
        "Brand": "Apple",
        "Model": "iPhone 11 Pro",
        "Year": "2019"
    },
    "Samsung": {
        "Brand": "Samsung",
        "Model": "Galaxy Note 20 Ultra 5G",
        "Year": "2020"
    },
    "Nokia": {
        "Brand": "Nokia",
        "Model": "Nokia C3",
        "Year": "2020"
    }
}

for x, y in products.items():
    print(x, ":", y)

Output:

Apple : {'Brand': 'Apple', 'Model': 'iPhone 11 Pro', 'Year': '2019'}
Samsung : {'Brand': 'Samsung', 'Model': 'Galaxy Note 20 Ultra 5G', 'Year': '2020'}
Nokia : {'Brand': 'Nokia', 'Model': 'Nokia C3', 'Year': '2020'}

Try it yourself in the interactive code shell:

Exercise: Run the code. Does the output match?

Method 2: Using getattr()

Another way of storing a variable number of variables can be accomplished using a built-in function called getattr(). The getattr() function returns the value of a specified attribute of an object. Therefore you can store the values in variables, encapsulate them in a class, and then retrieve the desired value using the getattr() function. The following code demonstrates the above concept:

class Product:
    brand = "Apple"
    model = "iPhone 11 Pro"
    year = 2020

x = getattr(Product, 'model')
print("Model:", x)

Output:

Model: iPhone 11 Pro

Method 3: Using namedtuple

"namedtuple()" is a type of container in Python, just like dictionaries. It is present within the collections module. Just like dictionaries, namedtuples also consist of key-value pairs; however, with namedtuple() you can access values both by key and by position, which is not the case with dictionaries, where you must access a value through its key. Unlike dictionaries, namedtuple() is an ordered container, and you can access its values using index numbers.
Let us have a look at the following code to understand how the namedtuple() container can be used to store a variable number of variables:

import collections

# declaring the namedtuple()
Product = collections.namedtuple('Apple', ['Brand', 'Model', 'Year'])

# Passing the values
prod = Product('Apple', 'iPhone 11 Pro', '2020')

# Accessing the values using different ways
print("Brand: ", prod[0])
print("Model: ", prod.Model)
print("Year: ", getattr(prod, 'Year'))

Output:

Brand: Apple
Model: iPhone 11 Pro
Year: 2020

Before wrapping up this article, there is a little concept I would like to touch upon which might be instrumental in dealing with situations of a variable number of variables.

Method 4: Using Arbitrary Arguments (*args)

In Python, we can pass a variable number of arguments to a function using the special symbol *args. Therefore we can make use of the * symbol before a parameter name in the function definition to pass a variable number of arguments to the function. Let us have a look at the following code to understand the usage of *args (the parameter is named k here):

def my_function(*k):
    print("Brand: ", k[0])
    print("Model: ", k[1])
    print("Year: ", k[2])

my_function("Apple", "iPhone 11 Pro", "2020")

Output:

Brand: Apple
Model: iPhone 11 Pro
Year: 2020

Conclusion

Thus, from the solutions discussed above, we can safely say that Python offers a plethora of options to create and store values in a variable number of variables. I hope you found this article helpful and that it helps you create a variable number of variables in Python with ease. Stay tuned for more interesting stuff!
https://blog.finxter.com/how-to-create-a-variable-number-of-variables/
about postfix operator and sequence point

void func(int a) {
    printf("%d", a);
}

int main() {
    int a = 0;
    printf("%d", a);
    func(a++);
}

This is my code, BUT I can't understand why the result is 0. I think the result has to be 1, because:

"The side effect of updating the stored value of the operand shall occur between the previous and the next sequence point." - a has to be increased before the next sequence point.

"All side effects of argument expression evaluations are sequenced before the function is entered" - there's a sequence point before the function is called.

So isn't the variable a supposed to be incremented before func is called? Can you tell me what I am understanding wrong? THANK YOU

4 answers

- answered 2021-04-21 16:28 dbush

The postfix ++ operator evaluates to the current value of its operand, with the increment being a side effect. So since a is 0 to start, that's the value that is passed to the function.

- answered 2021-04-21 16:31 Eric Postpischil

func(a++) does not pass a to the function. It passes the value of the expression a++. The value of that expression is defined to be the value of a before the increment occurs. It is entirely irrelevant when the increment occurs. The value of a++ is the value of a prior to the increment.

- answered 2021-04-21 16:34 Zoso

Quoting sequencing rules (emphasis mine):

- When calling a function (whether or not the function is inline and whether or not function call syntax was used), there is a sequence point after the evaluation of all function arguments (if any) which takes place before execution of any expressions or statements in the function body.

So, there's a sequence point after the evaluation of the function arguments. Now, this read with:

- Between the previous and next sequence point a scalar object must have its stored value modified at most once by the evaluation of an expression, otherwise the behavior is undefined.

will explain what is happening. The evaluation of the argument will happen and then there's a sequence point.
The previous sequence point was the previous statement. So, evaluation of a++ will yield 0, since it's the postfix operator. This is what is passed to the function, and hence the value 0.

Here's confirmation of what's happening via the assembly for the code:

void foo(int a) {}

int main() {
    int a = 0;
    foo(a++);
    return 0;
}

which yields the assembly:

foo:
        push rbp
        mov rbp, rsp
        mov DWORD PTR [rbp-4], edi
        nop
        pop rbp
        ret
main:
        push rbp
        mov rbp, rsp
        sub rsp, 16
        mov DWORD PTR [rbp-4], 0
        mov eax, DWORD PTR [rbp-4]
        lea edx, [rax+1]
        mov DWORD PTR [rbp-4], edx
        mov edi, eax
        call foo

0 is moved to the memory location at rbp-4. Then that value is moved to the register eax. "mov edi, eax" moves this value to the edi register, which is actually used by the foo function, called via "call foo".

lea edx, [rax+1]
mov DWORD PTR [rbp-4], edx

These 2 instructions increment and store the value of a.

- answered 2021-04-21 17:20 Ian Abbott

Consider the following program:

#include <stdio.h>

int a; /* global */

void func(int arg)
{
    printf("func: arg=%d, (global) a=%d\n", arg, a);
}

int main(void)
{
    a = 0;
    printf("main: (global) a=%d\n", a);
    func(a++);
    printf("main: (global) a=%d\n", a);
}

That program uses a global variable a. In main, the expression statement func(a++); is evaluated in the following order:

- The arguments of func are evaluated in no particular order:
  - For func's arg parameter, the argument expression a++ is evaluated, yielding the old value of a (the value 0), with the side effect of a being incremented before the next sequence point.
- Any remaining side effects of argument evaluation are completed (so a will be incremented to 1). (This is the sequence point after the evaluation of the function designator and function arguments but before the actual call.)
- func is actually called.

(I have omitted some details, such as evaluation of the function designator func.)
The result of the above is that the old value of a (value 0) is passed to the arg parameter of func, but a will have been incremented (to value 1) by the time the body of function func is executed. The above behavior can be seen in the program's output:

main: (global) a=0
func: arg=0, (global) a=1
main: (global) a=1
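The ordering described in the answers can be condensed into a tiny observable experiment (the variable names below are mine): record both the argument value and the global inside the callee, and you see the old value arrive while the increment has already taken effect.

```c
#include <stdio.h>

int a;          /* global, so the callee can observe the side effect */
int seen_arg;   /* what func received */
int seen_a;     /* what the global a was inside func */

void func(int arg)
{
    seen_arg = arg;   /* the value of the expression a++: the old value */
    seen_a = a;       /* the increment is sequenced before the call */
}

void demo(void)
{
    a = 0;
    func(a++);        /* func receives 0, but a is already 1 inside func */
}
```

This matches the question's observation: the increment does happen before func runs; it just has no effect on the value that was passed.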
https://quabr.com/67199747/about-postfix-operator-and-sequence-point
Add number in each word by Notepad++

- Ekopalypse last edited by

@Cohenmichael said in Add number in each word by Notepad++:

> automatically

You need to explain what "automatically" means in your case. There must be a trigger to run the script, but which trigger are you looking for? Could you explain what you want to do? Like, if I start npp and open file x I want to have result y, or if I start the pc I want to have my coffee machine making me coffee … Do you get what I'm looking for?

Sorry if I was not clear. I want to make a modification to all the words "Hello" existing on all pages: name1.txt, name2.txt, … 100+ document.txt files. When I use Notepad++, I open the folder containing all the documents (.txt) via Folder as Workspace. At this step, I don't know what to do to change the word "Hello" on all pages automatically with this script.

- Ekopalypse last edited by

Ok I see - not that easy - let me think about it. As I'm leaving the office now I'm following up on this later today, unless someone else jumps in and provides a solution which fits your need.

Thanks a lot!

Hi, I did a search on Google, and I made a small modification to the script. Notepad++ opens all the pages of the folder, but the change is made only to the last document.txt and the rest is intact. Thank you.

-----------The script------------

import os
os.chdir('C:\Users\yrkj\Desktop\allfiles')
list_of_files = [x for x in os.listdir('.') if x.endswith('.txt')]
for file in list_of_files:
    notepad.open(file)

from Npp import editor

def add_number(line_content, line_number, total_num_lines):
    words = line_content.split(' ')
    for i, word in enumerate(words):
        if word == 'hello':
            words[i] = '{}.{}'.format(i, word)
    editor.replaceWholeLine(line_number, ' '.join(words))

notepad.getFiles()
editor.forEachLine(add_number)
notepad.save()
notepad.close()

- Alan Kilborn last edited by

Put your script in a code block using the </> "toolbar button" when composing.
Python relies on indentation when defining blocks, so it is impossible to tell the true logic of your script the way you posted it.

I don't know how to do that! I don't have any knowledge about programming scripts for Python; I just tried to see if it would work, but it didn't. The published script is targeted only at Ekopalypse; maybe he has ideas to make it work.

- Alan Kilborn last edited by

@Cohenmichael said in Add number in each word by Notepad++:

> I don't know how to do that !

You don't know how to select some text in a post you are composing and press this button??:

Ye Gods, Man!

Sorry, I misunderstood your message.

from Npp import editor

def add_number(line_content, line_number, total_num_lines):
    words = line_content.split(' ')
    for i, word in enumerate(words):
        if word == 'nounous':
            words[i] = '{}.{}'.format(i, word)
    editor.replaceWholeLine(line_number, ' '.join(words))

editor.forEachLine(add_number)
notepad.getFiles()

- Ekopalypse last edited by Ekopalypse

Do you see a need to open each file in Notepad++, using the editor instance to manipulate it and then using the notepad instance to save and close it? It could be done with plain file read/write as well.

Test the script extensively before using it in production.

Assumptions made by the following script:
- Folder as Workspace is visible
- The word being looked for is exactly Hello, not hello or HELLO …
- Only the root directory is scanned for files having the .txt extension; no recursion, aka subdirectories are not searched.
```python
import os
import ctypes
from ctypes import wintypes

user32 = ctypes.WinDLL('user32', use_last_error=True)
PINT = ctypes.POINTER(wintypes.INT)
EnumWindowsProc = ctypes.WINFUNCTYPE(wintypes.BOOL, PINT, PINT)
SendMessage = user32.SendMessageW
SendMessage.argtypes = [wintypes.HWND, wintypes.UINT, wintypes.WPARAM, wintypes.LPARAM]
SendMessage.restype = wintypes.LPARAM

class TVITEM(ctypes.Structure):
    _fields_ = [("mask", wintypes.UINT),
                ("hitem", wintypes.HANDLE),
                ("state", wintypes.UINT),
                ("stateMask", wintypes.UINT),
                ("pszText", wintypes.LPCWSTR),
                ("cchTextMax", wintypes.INT),
                ("iImage", wintypes.INT),
                ("iSelectedImage", wintypes.INT),
                ("cChildren", wintypes.INT),
                ("lparam", ctypes.POINTER(wintypes.LPARAM))]

TV_FIRST = 0x1100
TVM_GETNEXTITEM, TVM_GETITEMW = TV_FIRST + 10, TV_FIRST + 62
TVGN_ROOT, TVGN_NEXT = 0, 1
TVIF_PARAM = 4
window_handles = dict()

def add_number(path):
    '''
    Generates a list of files which end with .txt for the given directory
    (no recursion). For each file, read the content and add an increasing
    number, per line, for each occurrence of the word Hello.
    '''
    list_of_files = [x for x in os.listdir(path) if x.endswith('.txt')]
    for file in list_of_files:
        print(file)
        new_content = ''
        file = os.path.join(path, file)
        with open(file, 'r') as f:
            for line in f.readlines():
                words = line.split(' ')
                j = 0
                for i, word in enumerate(words):
                    if word.strip() == 'Hello':
                        j += 1
                        words[i] = '{}.{}'.format(j, word)
                new_content += ' '.join(words)
        with open(file, 'w') as f:
            f.write(new_content)

def enumerate_root_nodes(h_systreeview32):
    ''' Returns a list of all root nodes from a given treeview handle '''
    root_nodes = []
    hNode = user32.SendMessageW(h_systreeview32, TVM_GETNEXTITEM, TVGN_ROOT, 0)
    while hNode:
        root_nodes.append(hNode)
        hNode = user32.SendMessageW(h_systreeview32, TVM_GETNEXTITEM, TVGN_NEXT, hNode)
    return root_nodes

def foreach_window(hwnd, lParam):
    ''' Look for the Workspace as folder dialog and find the treeview handle '''
    if user32.IsWindowVisible(hwnd):
        length = user32.GetWindowTextLengthW(hwnd) + 1
        buffer = ctypes.create_unicode_buffer(length)
        user32.GetWindowTextW(hwnd, buffer, length)
        if buffer.value == u'File Browser':
            h_systreeview32 = user32.FindWindowExW(hwnd, None, u'SysTreeView32', None)
            if h_systreeview32:
                window_handles['file_browser_treeview'] = h_systreeview32
                return False
    return True

hwnd = user32.FindWindowW(u'Notepad++', None)
user32.EnumChildWindows(hwnd, EnumWindowsProc(foreach_window), 0)
if 'file_browser_treeview' in window_handles:
    root_nodes = enumerate_root_nodes(window_handles['file_browser_treeview'])
    tvitem = TVITEM()
    tvitem.mask = TVIF_PARAM
    for root_node in root_nodes:
        tvitem.hitem = root_node
        user32.SendMessageW(h_systreeview32, TVM_GETITEMW, 0, ctypes.addressof(tvitem))
        path = ctypes.wstring_at(tvitem.lparam.contents.value)
        if path:
            add_number(path)
```

UPDATE: open the Python script console to see which file gets manipulated.

I tried 2 operations and the script didn’t work.

1st operation: I opened the folder containing all documents (.txt) via Folder as Workspace, then I executed the script and it displayed this message:

```
Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:22:17) [MSC v.1500 32 bit (Intel)]
Initialisation took 31ms
Ready.
Traceback (most recent call last):
  File "C:\Users\yrkj\AppData\Roaming\Notepad++\plugins\Config\PythonScript\scripts\Hello.py", line 107, in <module>
    user32.SendMessageW(h_systreeview32, TVM_GETITEMW, 0, ctypes.addressof(tvitem))
NameError: name 'h_systreeview32' is not defined
```

2nd operation: I opened all documents (.txt) in Notepad++ and ran the script; nothing changed on any page, and it didn’t display any message.

- Ekopalypse last edited by Ekopalypse

I see. I'm not at my computer, but I think if you change these lines it should work:

```python
if 'file_browser_treeview' in window_handles:
    h_systreeview32 = window_handles['file_browser_treeview']
    root_nodes = enumerate_root_nodes(h_systreeview32)
```

Thank you so much. The script works very well!
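As a side note, the renumbering logic itself does not depend on the Win32 plumbing above. Here is a minimal, portable sketch of just that part (the `number_hellos` helper is hypothetical, not from the thread):

```python
def number_hellos(text, word='Hello'):
    """Prefix each occurrence of `word` with a per-line increasing counter."""
    out = []
    for line in text.splitlines(keepends=True):
        words = line.split(' ')
        j = 0
        for i, w in enumerate(words):
            if w.strip() == word:
                j += 1
                words[i] = '{}.{}'.format(j, w)
        out.append(' '.join(words))
    return ''.join(out)

print(number_hellos('Hello world Hello\nHello again\n'))
# 1.Hello world 2.Hello
# 1.Hello again
```

This can be unit-tested without Notepad++ or Windows, which makes it easier to verify the numbering behaviour before wiring it into a plugin script.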
Thanks again for your precious help :)
https://community.notepad-plus-plus.org/topic/18604/add-number-in-each-word-by-notepad/20
The question is specific to the ComponentModel.Editor part: what's the class name that will allow me to specify which editor lets me edit a property with the "Style Builder" in VS2005? Not the Component Builder, where it gives limited options; the Style Builder is the one in the attached image.

```vb
<ComponentModel.Editor(GetType(System.Web.UI.Design.???????????????????), GetType(System.Drawing.Design.UITypeEditor)), _
 ComponentModel.DisplayName("MyStyle")> _
Public Property MyStyle() As CssStyleCollection
    Get ...
    Set ...
End Property
```

Hi, I can't view the image you have provided, but I know the one you refer to: the Style Builder dialog box. I may be wrong here, but I don't think you have access to it. And for the life of me, I can't find the namespace in which it may be contained. The closest is:

```vb
<Editor(GetType(StyleCollectionEditor), GetType(UITypeEditor))>
```

which you can use for a property of type CssStyleCollection, but this will allow you to set the style for an object, such as a server control. That being said, I think the Style Builder you refer to is applicable to HTML block or inline elements such as DIV and SPAN elements; that is why, in design view, when you click on a div or span element, you see the style in the property grid. I haven't read the article fully, but check out: for a possible solution, or at least an idea. If I am wrong, and you do find a work-around, please let me know.
https://social.msdn.microsoft.com/Forums/en-US/f7c26ad7-886b-4355-b6ab-f629dae83160/custom-controls-componentmodeleditor?forum=aspservercontrols
How can I use the bisect module on lists that are sorted descending? e.g.

```python
import bisect

x = [1.0, 2.0, 3.0, 4.0]  # normal, ascending
bisect.insort(x, 2.5)     # --> x is [1.0, 2.0, 2.5, 3.0, 4.0]
# ok, works fine for an ascending list

# however
x = [1.0, 2.0, 3.0, 4.0]
x.reverse()               # --> x is [4.0, 3.0, 2.0, 1.0], a descending list
bisect.insort(x, 2.5)     # --> x is [4.0, 3.0, 2.0, 1.0, 2.5]
# 2.5 at the end, not what I want really
```

Probably the easiest thing is to borrow the code from the library and make your own version:

```python
def reverse_insort(a, x, lo=0, hi=None):
    """Insert item x in list a, and keep it reverse-sorted assuming a
    is reverse-sorted.

    If x is already in a, insert it to the right of the rightmost x.

    Optional args lo (default 0) and hi (default len(a)) bound the
    slice of a to be searched.
    """
    if lo < 0:
        raise ValueError('lo must be non-negative')
    if hi is None:
        hi = len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if x > a[mid]:
            hi = mid
        else:
            lo = mid + 1
    a.insert(lo, x)
```
https://codedump.io/share/Aj5xErOFfQxE/1/python-bisect-it-is-possible-to-work-with-descending-sorted-lists
Opened 9 years ago
Closed 8 years ago
Last modified 6 years ago

#9816 closed (invalid) ModelForm with only FileField or ImageField can not pass validation

Description

Hi. I want to let my users change their mugshot without updating their whole profile. When I have a form with only the mugshot field, the form never passes validation. If I add another field to that form, it passes validation. My model and forms are below.

```python
class Profile(models.Model):
    """ Profile model """
    GENDER_CHOICES = (
        ('F', _('Female')),
        ('M', _('Male')),
    )
    user = models.ForeignKey(User, unique=True, verbose_name=_('user'))
    mugshot = models.ImageField(_('mugshot'), upload_to='uploads/mugshots', blank=True)
    birth_date = models.DateField(_('birth date'), blank=True, null=True)
    gender = models.CharField(_('gender'), choices=GENDER_CHOICES, max_length=1, null=True)
    occupation = models.CharField(_('occupation'), max_length=32, blank=True)
    mobile = PhoneNumberField(_('mobile'), blank=True)
```

This form never passes validation:

```python
class MugshotForm(forms.ModelForm):
    mugshot = forms.ImageField(required=True)

    class Meta:
        model = Profile
        fields = ['mugshot']
```

This form passes validation:

```python
class MugshotForm(forms.ModelForm):
    gender = forms.CharField(widget=forms.HiddenInput())
    mugshot = forms.ImageField(required=True)

    class Meta:
        model = Profile
        fields = ['mugshot', 'gender']
```

Is this a bug or did I misunderstand something?

Change History (9)

comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
comment:3 Changed 9 years ago by
comment:4 Changed 9 years ago by
comment:5 Changed 9 years ago by

I can not reproduce it either.
Here's what I did:

```python
class MyModel(models.Model):
    a = models.FileField(upload_to='.')
    b = models.CharField(max_length=123)

class FileOnlyForm(forms.ModelForm):
    class Meta:
        model = MyModel
        fields = ['a']
```

And a view that uses this modelform:

```python
def fileupload(request):
    if request.method == "POST":
        form = FileOnlyForm(request.POST, request.FILES)
        print form.is_valid()
    else:
        form = FileOnlyForm()
    return render_to_response('fileupload.html', locals(),
                              context_instance=RequestContext(request))
```

With that I was able to conclude that it works. It does NOT matter if there's only one (file) field or more.

comment:6 Changed 9 years ago by

Since neither of us can reproduce it. Reopen if you think I've tried to reproduce it wrongly.

comment:7 Changed 8 years ago by

The reason the form validates in the second example is because you have two fields specified. In the first one, you only have one. Django has really messed up weird issues if you have a list or tuple with only 1 entry (some people get this problem, others do not; I don't know why). To fix this, add a comma "," after the first field:

```python
fields = ('mugshot', )
```

That works fine for me.

Regards
Cal Leeming

comment:8 Changed 8 years ago by

Please don't reopen without a specific recreatable case that demonstrates the problem. It is not true that you need more than one field on a form for it to be able to validate. The really messed up weird issue you seem to have tripped across is a standard Python gotcha: ( 'abc' ) is not a single-element tuple in Python. Parentheses don't make a tuple, commas do (see:). So you need the trailing comma on a single element when specifying the list of fields if you specify it as a tuple. In the original description lists, not tuples, were used, so that was unlikely the cause of whatever was going on.

comment:9 Changed 6 years ago by

Milestone 1.1 deleted

I tried to reproduce it but was unable to do so... The code that I used is the one you submitted.
What I don't get is why you are using the forms.BooleanField in the form that inherits from ModelForm... If you can provide some further info so that I can reproduce it, I will try to do so. Thanks a lot!!
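The single-element tuple gotcha described in comment 8 is easy to demonstrate on its own (illustrative snippet, not from the ticket):

```python
# Parentheses alone do NOT make a tuple: this is just a string.
fields = ('mugshot')
print(type(fields).__name__)  # str
# Iterating over it yields characters, not field names.
print(list(fields)[:3])       # ['m', 'u', 'g']

# The trailing comma is what makes a one-element tuple.
fields = ('mugshot',)
print(type(fields).__name__)  # tuple
print(list(fields))           # ['mugshot']
```

This is why `fields = ('mugshot', )` behaves differently from `fields = ('mugshot')`, while the list form `fields = ['mugshot']` has no such trap.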
https://code.djangoproject.com/ticket/9816
23 August 2012 11:31 [Source: ICIS news] (adds paragraphs 4-6)

Naphtha prices for the first-half October contract climbed by $21/tonne from Wednesday’s close to $974.50-976.50/tonne CFR Japan on Thursday – the strongest since 3 May, when prices stood at $992-994/tonne CFR Japan, according to ICIS data.

The naphtha crack spread strengthened to $104.68/tonne against October Brent crude futures, marking the highest since 3 May, when the crack spread was at $106.88/tonne, the data showed.

In physical trading, Marubeni bought the spread between the first half of October and the first half of November contracts at $4.50/tonne, traders said.

The backwardation for the first-half October/first-half November contracts was assessed at $4.50/tonne, rebounding from $3.00/tonne earlier in the week, ICIS data showed.

At 18
http://www.icis.com/Articles/2012/08/23/9589320/asia-naphtha-tops-970tonne-as-crude-gains-more-than-1bbl.html
src/vof.h

Volume-Of-Fluid advection

We want to approximate the solution of the advection equations

\displaystyle \partial_t c_i + \mathbf{u}_f\cdot\nabla c_i = 0

where c_i are volume fraction fields describing sharp interfaces. This can be done using a conservative, non-diffusive geometric VOF scheme.

We also add the option to transport diffusive tracers (aka “VOF concentrations”) confined to one side of the interface, i.e. solve the equations

\displaystyle \partial_t t_{i,j} + \nabla\cdot(\mathbf{u}_f t_{i,j}) = 0

with t_{i,j} = c_i f_j (or t_{i,j} = (1 - c_i) f_j) and f_j a volumetric tracer concentration (see Lopez-Herrera et al., 2015). The list of tracers associated with the volume fraction is stored in the tracers attribute. For each tracer, the “side” of the interface (i.e. either c or 1 - c) is controlled by the inverse attribute.

We will need basic functions for volume fraction computations.

```c
#include "fractions.h"
```

The list of volume fraction fields interfaces will be provided by the user. The face velocity field uf will be defined by a solver, as well as the timestep.

The gradient of a VOF-concentration t is computed using a standard three-point scheme if we are far enough from the interface (as controlled by cmin), otherwise a two-point scheme biased away from the interface is used.

```c
foreach_dimension()
static double vof_concentration_gradient_x (Point point, scalar c, scalar t)
{
  static const double cmin = 0.5;
  double cl = c[-1], cc = c[], cr = c[1];
  if (t.inverse)
    cl = 1. - cl, cc = 1. - cc, cr = 1. - cr;
  if (cc >= cmin && t.gradient != zero) {
    if (cr >= cmin) {
      if (cl >= cmin) {
        if (t.gradient)
          return t.gradient (t[-1]/cl, t[]/cc, t[1]/cr)/Delta;
        else
          return (t[1]/cr - t[-1]/cl)/(2.*Delta);
      }
      else
        return (t[1]/cr - t[]/cc)/Delta;
    }
    else if (cl >= cmin)
      return (t[]/cc - t[-1]/cl)/Delta;
  }
  return 0.;
}
```

On trees, VOF concentrations need to be refined properly, i.e. using volume-fraction-weighted linear interpolation of the concentration.
```c
#if TREE
static void vof_concentration_refine (Point point, scalar s)
{
  scalar f = s.c;
  if ((!s.inverse && f[] <= 0.) || (s.inverse && f[] >= 1.))
    foreach_child()
      s[] = 0.;
  else {
    coord g;
    foreach_dimension()
      g.x = Delta*vof_concentration_gradient_x (point, f, s);
    double sc = s.inverse ? s[]/(1. - f[]) : s[]/f[], cmc = 4.*cm[];
    foreach_child() {
      s[] = sc;
      foreach_dimension()
        s[] += child.x*g.x*cm[-child.x]/cmc;
      s[] *= s.inverse ? 1. - f[] : f[];
    }
  }
}
```

On trees, we need to setup the appropriate prolongation and refinement functions for the volume fraction fields.

```c
event defaults (i = 0)
{
  for (scalar c in interfaces) {
    c.refine = c.prolongation = fraction_refine;
    scalar * tracers = c.tracers;
    for (scalar t in tracers) {
      t.restriction = restriction_volume_average;
      t.refine = t.prolongation = vof_concentration_refine;
      t.c = c;
    }
  }
}
#endif // TREE
```

We need to make sure that the CFL is smaller than 0.5 to ensure stability of the VOF scheme.

One-dimensional advection

The simplest way to implement a multi-dimensional VOF advection scheme is to use dimension-splitting, i.e. advect the field along each dimension successively using a one-dimensional scheme. We implement the one-dimensional scheme along the x-dimension and use the foreach_dimension() operator to automatically derive the corresponding functions along the other dimensions.

```c
foreach_dimension()
static void sweep_x (scalar c, scalar cc, scalar * tcl)
{
  vector n[];
  scalar alpha[], flux[];
  double cfl = 0.;
```

If we are also transporting tracers associated with c, we need to compute their gradient, i.e. \partial_x f_j = \partial_x(t_j/c) or \partial_x f_j = \partial_x(t_j/(1 - c)) (for higher-order upwinding), and we need to store the computed fluxes. We first allocate the corresponding lists.
```c
  scalar * tracers = c.tracers, * gfl = NULL, * tfluxl = NULL;
  if (tracers) {
    for (scalar t in tracers) {
      scalar gf = new scalar, flux = new scalar;
      gfl = list_append (gfl, gf);
      tfluxl = list_append (tfluxl, flux);
    }
```

The gradient is computed using the “interface-biased” scheme above.

```c
    foreach() {
      scalar t, gf;
      for (t,gf in tracers,gfl)
        gf[] = vof_concentration_gradient_x (point, c, t);
    }
    boundary (gfl);
  }
```

We reconstruct the interface normal \mathbf{n} and the intercept \alpha for each cell. Then we go through each (vertical) face of the grid.

```c
  reconstruction (c, n, alpha);
  foreach_face(x, reduction (max:cfl)) {
```

To compute the volume fraction flux, we check the sign of the velocity component normal to the face and compute the index i of the corresponding upwind cell (either 0 or -1).

```c
    double un = uf.x[]*dt/(Delta*fm.x[] + SEPS), s = sign(un);
    int i = -(s + 1.)/2.;
```

We also check that we are not violating the CFL condition.

```c
    if (un*fm.x[]*s/(cm[] + SEPS) > cfl)
      cfl = un*fm.x[]*s/(cm[] + SEPS);
```

If we assume that un is negative, i.e. s is -1 and i is 0, the volume fraction flux through the face of the cell is given by the dark area in the figure below. The corresponding volume fraction can be computed using the rectangle_fraction() function.

[Figure: Volume fraction flux]

When the upwind cell is entirely full or empty we can avoid this computation.

```c
    double cf = (c[i] <= 0. || c[i] >= 1.) ? c[i] :
      rectangle_fraction ((coord){-s*n.x[i], n.y[i], n.z[i]}, alpha[i],
                          (coord){-0.5, -0.5, -0.5},
                          (coord){s*un - 0.5, 0.5, 0.5});
```

Once we have the upwind volume fraction cf, the volume fraction flux through the face is simply:

```c
    flux[] = cf*uf.x[];
```

If we are transporting tracers, we compute their flux using the upwind volume fraction cf and a tracer value upwinded using the Bell–Colella–Glaz scheme and the gradient computed above.

```c
    scalar t, gf, tflux;
    for (t,gf,tflux in tracers,gfl,tfluxl) {
      double cf1 = cf, ci = c[i];
      if (t.inverse)
        cf1 = 1. - cf1, ci = 1. - ci;
      if (ci > 1e-10) {
        double ff = t[i]/ci + s*min(1., 1. - s*un)*gf[i]*Delta/2.;
        tflux[] = ff*cf1*uf.x[];
      }
      else
        tflux[] = 0.;
    }
  }
  delete (gfl); free (gfl);
```

On tree grids, we need to make sure that the fluxes match at fine/coarse cell boundaries, i.e. we need to restrict the fluxes from fine cells to coarse cells. This is what is usually done, for all dimensions, by the boundary_flux() function. Here, we only need to do it for a single dimension (x).

```c
#if TREE
  scalar * fluxl = list_concat (NULL, tfluxl);
  fluxl = list_append (fluxl, flux);
  for (int l = depth() - 1; l >= 0; l--)
    foreach_halo (prolongation, l) {
#if dimension == 1
      if (is_refined (neighbor(-1)))
        for (scalar fl in fluxl)
          fl[] = fine(fl);
      if (is_refined (neighbor(1)))
        for (scalar fl in fluxl)
          fl[1] = fine(fl,2);
#elif dimension == 2
      if (is_refined (neighbor(-1)))
        for (scalar fl in fluxl)
          fl[] = (fine(fl,0,0) + fine(fl,0,1))/2.;
      if (is_refined (neighbor(1)))
        for (scalar fl in fluxl)
          fl[1] = (fine(fl,2,0) + fine(fl,2,1))/2.;
#else // dimension == 3
      if (is_refined (neighbor(-1)))
        for (scalar fl in fluxl)
          fl[] = (fine(fl,0,0,0) + fine(fl,0,1,0) +
                  fine(fl,0,0,1) + fine(fl,0,1,1))/4.;
      if (is_refined (neighbor(1)))
        for (scalar fl in fluxl)
          fl[1] = (fine(fl,2,0,0) + fine(fl,2,1,0) +
                   fine(fl,2,0,1) + fine(fl,2,1,1))/4.;
#endif
    }
  free (fluxl);
#endif
```

We warn the user if the CFL condition has been violated.

```c
  if (cfl > 0.5 + 1e-6)
    fprintf (ferr,
             "WARNING: CFL must be <= 0.5 for VOF (cfl - 0.5 = %g)\n",
             cfl - 0.5), fflush (ferr);
```

Once we have computed the fluxes on all faces, we can update the volume fraction field according to the one-dimensional advection equation

\displaystyle \partial_t c = -\nabla_x\cdot(\mathbf{u}_f c) + c\nabla_x\cdot\mathbf{u}_f

The first term is computed using the fluxes. The second term – which is non-zero for the one-dimensional velocity field – is approximated using a centered volume fraction field cc which will be defined below.
For tracers, the one-dimensional update is simply

\displaystyle \partial_t t_j = -\nabla_x\cdot(\mathbf{u}_f t_j)

```c
  foreach() {
    c[] += dt*(flux[] - flux[1] + cc[]*(uf.x[1] - uf.x[]))/(cm[]*Delta + SEPS);
    scalar t, tc, tflux;
    for (t, tc, tflux in tracers, tcl, tfluxl)
      t[] += dt*(tflux[] - tflux[1] + tc[]*(uf.x[1] - uf.x[]))/
        (cm[]*Delta + SEPS);
  }
  boundary ({c});
  boundary (tracers);

  delete (tfluxl); free (tfluxl);
}
```

Multi-dimensional advection

The multi-dimensional advection is performed by the event below.

```c
void vof_advection (scalar * interfaces, int i)
{
  for (scalar c in interfaces) {
```

We first define the volume fraction field used to compute the divergent term in the one-dimensional advection equation above. We follow Weymouth & Yue, 2010 and use a step function which guarantees exact mass conservation for the multi-dimensional advection scheme (provided the advection velocity field is exactly non-divergent).

```c
    scalar cc[], * tcl = NULL, * tracers = c.tracers;
    for (scalar t in tracers) {
      scalar tc = new scalar;
      tcl = list_append (tcl, tc);
#if TREE
      t.restriction = restriction_volume_average;
      t.refine = t.prolongation = vof_concentration_refine;
      t.c = c;
#endif // TREE
    }
    foreach() {
      cc[] = (c[] > 0.5);
      scalar t, tc;
      for (t, tc in tracers, tcl) {
        if (t.inverse)
          tc[] = c[] < 0.5 ? t[]/(1. - c[]) : 0.;
        else
          tc[] = c[] > 0.5 ? t[]/c[] : 0.;
      }
    }
```

We then apply the one-dimensional advection scheme along each dimension. To try to minimise phase errors, we alternate dimensions according to the parity of the iteration index i.

```c
    void (* sweep[dimension]) (scalar, scalar, scalar *);
    int d = 0;
    foreach_dimension()
      sweep[d++] = sweep_x;
    for (d = 0; d < dimension; d++)
      sweep[(i + d) % dimension] (c, cc, tcl);
    delete (tcl), free (tcl);
  }
}

event vof (i++)
  vof_advection (interfaces, i);
```
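The parity-based alternation of sweep directions in `vof_advection` can be sketched in a few lines of Python (`sweep_order` is a hypothetical helper mirroring `sweep[(i + d) % dimension]`; it is not part of Basilisk):

```python
def sweep_order(i, dimension=2):
    # Mirrors the C loop: sweep[(i + d) % dimension] for d = 0..dimension-1.
    # Alternating the starting dimension with the parity of i helps reduce
    # the directional (phase) bias of the dimension-split scheme.
    return [(i + d) % dimension for d in range(dimension)]

print(sweep_order(0))               # [0, 1] -> x sweep, then y sweep
print(sweep_order(1))               # [1, 0] -> y sweep, then x sweep
print(sweep_order(2, dimension=3))  # [2, 0, 1]
```

Over successive timesteps each dimension takes a turn at being swept first, so no single direction accumulates a systematic splitting error.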
http://basilisk.fr/src/vof.h
Subject: [Boost-bugs] [Boost C++ Libraries] #4728: "iostreams::detail::mode_adapter<>" is never "flushable": flushing a filtering_ostream will not flush the "std::ostream" target --- patch included.
From: Boost C++ Libraries (noreply_at_[hidden])
Date: 2010-10-12 21:52:25

#4728: "iostreams::detail::mode_adapter<>" is never "flushable": flushing a filtering_ostream will not flush the "std::ostream" target --- patch included.

Reporter: Duncan Exon Smith <duncanphilipnorman@…> | Owner: turkanis
Type: Bugs | Status: new
Milestone: To Be Determined | Component: iostreams
Version: Boost 1.44.0 | Severity: Problem
Keywords: |

I have found a problem with Boost.Iostreams.

* I'm using Boost v1.43.0 on Gentoo Linux with `gcc-4.3.4`.
* I've reproduced it with `gcc-4.4.3`.
* I've reproduced it with Boost v1.44.0 (from the website tarball).
* Let me know if you need to know anything else about my environment.
* A simple testcase is attached at [attachment:boost_iostreams_filtering_std_ostream.cpp].
* A patch to fix the problem is attached at [attachment:boost_iostreams-mode_adaptor-flushable.patch].
* This is a different problem from #4590.

=== Details ===

I'm going to use `io` as a synonym for the `boost::iostreams` namespace.

I have come across a problem with `io::detail::mode_adapter<Mode, T>`, where `T` is a `std::ostream` or a `std::streambuf`. I came across the problem in `io::filtering_ostream`, but perhaps this class is used elsewhere also.

* `io::detail::mode_adapter<>::category` is not convertible to any of `io::flushable_tag`, `io::ostream_tag`, or `io::streambuf_tag`.
* `io::flush()` will use `io::detail::flush_device_impl<io::any_tag>::flush()` for `mode_adapter<, T>` even when `T::category` is convertible to `flushable_tag`, `ostream_tag` or `streambuf_tag`.
* As a result, `io::filtering_ostream` will not flush correctly when the device at the end of the chain is a non-boost `std::ostream` or a `std::streambuf`.
* I expect, also, that any filters in the chain that inherit from `flushable_tag` also do not get flushed correctly.
* In particular, the following methods from the STL `std::ostream` interface will ''not'' flush the stream to the device:

{{{
#!cpp
std::ostream stream(&someBuffer);
io::filtering_ostream out;
out.push(stream);

// These STL methods of flushing a stream will NOT flush "stream".
out << std::endl;
out.flush();
}}}

* My solution is to have `mode_adapter<>::category` inherit from `flushable_tag` when appropriate, and to implement `::flush()` methods:

{{{
#!cpp
}}}

--
Ticket URL: <>
Boost C++ Libraries <>
Boost provides free peer-reviewed portable C++ source libraries.

This archive was generated by hypermail 2.1.7 : 2017-02-16 18:50:04 UTC
https://lists.boost.org/boost-bugs/2010/10/14211.php
Contributing Guide

Making a Contribution

We love contributions from everyone! 🎉 It is a good idea to talk to us first if you plan to add any new functionality. Otherwise, bug reports, bug fixes and feedback on the library are always appreciated. Check out the Contributing Guidelines for more information and please follow the GitHub Flow.

The following is a set of guidelines for contributing to this project, which is hosted on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.

Please take the time to review the Code of Conduct, which all contributors are subject to on this project.

I don't want to read this whole thing, I just have a question!!!

Writing a Post

If you've been given a login, you can access the CMS and write your content there. Otherwise, we have a blog post CLI to start you off with an empty markdown file, or you can create the markdown file and send it as a pull request manually.

Example Blog Post

For author and categories values, see content/authors.json and content/categories.json.

```
---
title: An awesome title for your post.
description: An awesome description for your post.
thumbnail: /thumbnail-url/goes-here.png
author: tom
published: true
published_at: 2019-03-21T20:21:36.000Z
comments: true
category: tutorial
tags:
  - python
  - serverless
  - sms-api
spotlight: false
---

<!-- # english posts live in /content/blog/en -->

tldr or intro

## section title

some text

## conclusion

some text
```

Using the Blog Post CLI

Once you've cloned the repository and gone through the local setup guide, you can run the CLI using this command.

```
npm run blog

# > vonage-dev-blog@0.0.0 blog /Users/luke/Projects/nexmo/dev-education-poc
# > node bin/blog
#
# ℹ Vonage DevEd Post CLI
# ℹ by @lukeocodes
#
# ? Would you like to create or translate a blog post? Create
# ? What's the title for this post? <max 70 chars> An awesome ...
# ? What's the description? <max 240 chars> An awesome description ...
# ? What language would you like to create a post in? English
# ? Who's the author? Luke Oliff
# ? What's the category? Tutorial
# ? Enable comments? Yes
# ? By spotlight author? No
# ✔ Saved demo file to content/blog/en/an-awesome-title-for-your-post.md ...
```

You can then open up the file you've just created in content/blog/ and edit away.

Reporting Bugs

This section guides you through submitting a bug report.

Before Submitting A Bug Report

- Perform a cursory search to see if the problem has already been reported. If it has and the issue is still open, add a comment to the existing issue instead of opening a new one.

How Do I Submit A (Good) Bug Report?

Bugs are tracked as GitHub issues. Create an issue. When listing steps, don't just say what you did, but explain how you did it.

- Provide specific examples to demonstrate the steps. Include links to files where possible. Show how you follow the described steps and clearly demonstrate the problem. You can use this tool to record GIFs on macOS and Windows, and this tool on Linux.
- If the problem wasn't triggered by a specific action, describe what you were doing before the problem happened and share more information using the guidelines below.
- Can you reliably reproduce the issue? If not, provide details about how often the problem happens and under which conditions it normally happens.
- Include details about your configuration and environment.

Suggesting Enhancements

This section guides you through submitting a suggestion. Filling out the required template with the information it asks for helps us resolve issues faster.

Before Submitting An Enhancement Suggestion

- Enhancement suggestions are tracked as GitHub issues. Create an issue and provide the following information by filling in the template.
Your First Code Contribution

Unsure where to begin contributing? You can start by looking through these beginner and help wanted issues:

- Beginner issues - issues which should only require a few lines of code, and a test or two.
- Help wanted issues - issues which should be a bit more involved than beginner issues.

Both issue lists are sorted by total number of comments. While not perfect, number of comments is a reasonable proxy for the impact a given change will have.

Pull Requests

Please follow these steps to have your contribution considered by the maintainers:

- Follow all instructions in the template
- Adhere to the Code of Conduct
- After you submit your pull request, verify that all status checks are passing.

While the prerequisites above must be satisfied prior to having your pull request reviewed, the reviewer(s) may ask you to complete additional design work, tests, or other changes before your pull request can be ultimately accepted.

I don't want to read this whole thing, I just have a question!!!

You can join the Vonage Community Slack for any questions you might have:

- Contact our Developer Relations Team
- Reach out on Twitter - This Twitter is monitored by our Developer Relations team, but not 24/7, so please be patient!
- Join the Vonage Community Slack - Even though Slack is a chat service, sometimes it takes several hours for community members to respond, so please be patient!
- Use the #general channel for general questions or discussion
- Use the #status channel for receiving updates on our service status
- There are many other channels available; check the channel list for channels for a specific library

Alternatively, you can raise an issue on the project.
Tools

Capitalize My Title

Tag Tester

Vonage Tags
Language Tags
Brand Tags
Event Tags
Other Tags

Code Block Examples

Examples of different language code blocks. A full list can be found on the Prism website.

No Type

It barely gets formatted at all.

JavaScript

```js
code goes here
```

const hello = (val) => {
  return `Hello, ${val}`
}

hello('World')

HTML

```html
code goes here
```

Outputs:

<!doctype html>
<html>
  <body>
    <h1>Hello, World</h1>
  </body>
</html>

Ruby

```ruby
code goes here
```

Outputs:

def sum_eq_n?(arr, n)
  return true if arr.empty? && n == 0
  arr.product(arr).reject { |a,b| a == b }.any? { |a,b| a + b == n }
end

CSS

```css
code goes here
```

Outputs:

.container {
  margin: auto;
  max-width: 1200px;
  width: 100%;
}

Kotlin

```kotlin
code goes here
```

Outputs:

fun main(args: Array<String>) {
    println("Hello World!")
}

Writing Style Guide

Language

Strive for a conversational tone, as if you were teaching a workshop. It’s ok to use “we” or “you” or both, but try to apply it consistently. Watch out for awkward usage that comes from trying to stick too forcefully to one rule.

- Yes: Today we’re going to learn about Flask. Open up your terminal to begin.
- No: Let’s save our credentials in your root directory.

Keep your language direct, efficient, and active, especially on longer tutorials (you don’t want excess text when trying to read dozens of steps). Be cautious with humor, as it doesn’t always translate across languages/cultures and can make people feel excluded.

- Yes: On a new line, enter your application key.
- No: Be sure your application key has been entered following your previous line, if you know what’s good for you!

Avoid generalizations—“We all know”, “It’s commonly accepted”—unless they are relevant and you can back them up.

Try to avoid -ing words in headings and titles. For example, Set Up Your Server rather than Setting Up Your Server.
Punctuation

The Oxford comma should always be used with lists of three or more!

- Yes: This practice keeps your application secure, resilient, and easy to deploy.
- No: Next let’s sign up, download the file and install.

A period should be followed by a single space.

Hyphens are used between words when the phrase is acting as an adjective.

- Yes: Time to brush off your front-end skills.
- No: Let’s switch to the application’s front-end.

The em dash (—) can be used as an alternative to a comma, semicolon, colon, or parenthesis, but don’t get carried away! It can easily lead to run-on and confusing sentences. If you are using it, be sure to use a proper em dash (longer than a hyphen; look up how to type it on your operating system) and don’t leave space between the dash and the surrounding words.

- Yes: This application—once finished—will allow users to log in seamlessly.
- No: Follow along with this tutorial -- it will give you all the information you need.

To make an acronym plural, just add s (DON’T use an apostrophe). It is APIs not API's, SDKs not SDK's, and JWTs not JWT's.

Formatting

Inline code (single backticks) is used for typed input, such as a value entered into a field or at a terminal line, as well as names of variables, libraries, functions/methods/classes, and files.

Code blocks (triple backticks) can also be used to show text-based output. They are used for portions of code or configuration files as well as terminal commands. Anything inside a code block will not be formatted, so they are good for showing text with characters that might have a specific meaning to the markdown renderer.

Italics (use underscores) are for pointing out text-based elements of the UI (i.e. “Click on File > Open” or “Click the Save button”).

Bold (use double **) is used for emphasis and as appropriate for names of concepts/products when first introduced. It shouldn’t be overused.

Headings should use ## for sections and ### for subsections.
Capitalize the first letter of each word in your headings, apart from articles (the, a, an), prepositions (in, at, by, on, for) and conjunctions (and, or, nor, yet, for, so, but). If you are not sure, you can use one of these tools that capitalize the title correctly for you: TitleCase or Capitalize My Title (use the AP Style).

For links, try to have the link naturally included in an appropriate phrase. Rather than using “You can find more information here” or “Click here to read more”, use something like “The SMS API allows you to quickly send messages.”

Add a UTM link whenever directing the reader to sign up for a Nexmo account (replace github-repo with your own repo link).

Keep a white space line between paragraphs.

SEO

Titles should be under 71 characters and include keywords related to the technologies and languages used in the post.

Include a description, under 160 characters, that will show up for your post in search results.

Posts should include at least 4 mentions of your SEO key phrase.

The majority of image alt descriptions should include your SEO key phrase.

- Ok alt: Screenshot of chat
- Bad alt: Screenshot
- Really bad alt:
- Amazing alt: Screenshot of Client SDK conversation between two users

Include links to related articles inside the blog content.

The concluding section should include links to further reading as well as the Vonage Developer Twitter and Community Slack.
Testing different bundlers for Svelte development is an odd hobby of mine. I like my development environments smooth as butter and my feedback loops tight. First up is Vite, a young bundler from Evan You, the creator of the popular Vue.js framework. I've heard many good things about Vite and decided to try it out.

The Purpose

I am on a quest to find the best bundler setup for Svelte development. My requirements are simple.

- It must be fast
- It must support Typescript
- It must support PostCSS
- It must produce small and efficient bundles
- It must produce correct sourcemaps for debugging
- It should support HMR (Hot Module Replacement)

Let's proceed with this list as our benchmark for a good Svelte bundler setup.

Test App

For the purpose of testing I created a simple Svelte app. Its functionality is simple: you press a button and it fetches a random Kanye West tweet from Kanye as a Service. The app might be simple, maybe even naïve, but it has a few interesting parts.

- Svelte components in Typescript. I want to see if transpiling and type checking work correctly for TS.
- External Svelte library. Not all bundlers support libraries written in Svelte efficiently.
- External library dependency. I want to see if Vite supports tree shaking when bundling for production.
- External assets. It should be possible to import SVG, PNG, JSON and other external assets in our code.
- PostCSS with TailwindCSS. A good bundler should make it easy to work with SASS and PostCSS.
- Business components in Typescript. Typescript is here to stay. A good bundler should support it out-of-the-box.

With that checklist in place, let's proceed and see if Vite can handle all our requirements. Although I built an app specifically for testing different Svelte bundlers, I will walk you through how to set up Vite from scratch using a simpler Svelte app as an example.

Vite Overview

As I write this, Vite hasn't had an official release yet, but it's nearing one. Currently it's on 1.0.0-rc.4.
There are probably still a few wrinkles to iron out. Vite is not a traditional bundler, like Webpack or Rollup, but an ESM bundler. What does that mean? It means that it serves all your files and dependencies via native ES module imports, which most modern browsers support. This means super-fast reloads during development, as only the file that was changed needs to be recompiled. Vite comes with "batteries included", meaning it has sensible defaults and supports many features that you might need during development. Vite, just like Snowpack, uses ESBuild as its Typescript compiler. If you want to know more details, please read the How and Why section in Vite's README.

What's the difference between Vite and Rollup? This can be a little confusing to many. Why should you use an ESM bundler instead of a traditional one like Webpack or Rollup?

Vite Installation

There is an option to create Vite-backed apps with the create-vite-app command, but as of now there is no Svelte template, so we will set everything up manually. I will try to find some time to create a Svelte template based on my findings. For my examples I will use pnpm, a fast and disk space efficient package manager, but all the commands apply to npm as well. Let's get cranking!

Creating the project

First, we need to initialize our project and add Vite. Here are the steps.

```shell
$ mkdir vite-svelte-typescript
$ cd vite-svelte-typescript
$ pnpm init -y
$ pnpm add -D vite
```

Creating required files

Now we need to add an index.html file and a src directory where we will be keeping our app's source files. Create a src directory and add an index.html file in the root directory with the following contents.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Vite App</title>
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="/src/index.js"></script>
  </body>
</html>
```

This file will be used by Vite as the entry point, or template, to our app.
You can add anything you want there. Just make sure to point the entry JS/TS file to your app's main file.

Vite Configuration

You configure Vite by creating a vite.config.js in the root directory. It's there that you can change Vite's dev server port and set many other options. The configuration documentation is lagging behind at the moment. The best place to see what options are available is to look at Vite's config source. We don't have anything to configure yet, so we will postpone this task for now.

Vite and Svelte with vite-plugin-svelte

We are building a Svelte app, so we need to tell Vite how to deal with Svelte files. Luckily, there is a great Vite Svelte plugin we can use - vite-plugin-svelte. Install the plugin and also the Svelte framework.

```shell
$ pnpm add -D vite-plugin-svelte svelte
```

The time has come to write some Vite configuration. We will just follow the recommendation from the plugin's README file. Edit your vite.config.js and add the following.

```js
// vite.config.js
import svelte from 'vite-plugin-svelte';

export default {
  plugins: [svelte()],
  rollupDedupe: ['svelte']
};
```

Let's test drive it by creating the simplest Svelte app possible. First, create an App.svelte file in the src directory with the following contents.

```html
<!-- App.svelte -->
<h1>Hello Svelte!</h1>
```

Now, create the main app entry file index.js, also in the src directory, with the following contents.

```js
// index.js
import App from './App.svelte';
// import './index.css';

const app = new App({
  target: document.getElementById('app')
});

export default app;
```

Start the app by running pnpx vite and open the browser on localhost:3000. Bam! Now Vite knows what Svelte is. While we are at it, let's tackle the Typescript and Svelte part next.

Vite and Typescript Support

Vite has Typescript support out of the box for normal Typescript files. Like many other modern ESM bundlers, it uses ESBuild, which is written in Golang and is very fast.
It's fast because it performs only transpilation of .ts files and does NOT perform type checking. If you need that, you must run tsc --noEmit in the build script. More on that later. If you ask me, a better choice would have been the SWC compiler. It's written in Rust, is just as fast, and handles things a little better than ESBuild.

Let's add a simple timer written in Typescript and use it in our app.

```ts
// timer.ts
import { readable } from 'svelte/store';

export const enum Intervals {
  OneSec = 1,
  FiveSec = 5,
  TenSec = 10
}

export const init = (intervals: Intervals = Intervals.OneSec) => {
  return readable(0, set => {
    let current = 0;

    const timerId = setInterval(() => {
      current++;
      set(current);
    }, intervals * 1000);

    return () => clearInterval(timerId);
  });
};
```

We are using enums, a Typescript feature, in order to not get any false positives. Let's add it to our App.svelte file.

```html
<!-- App.svelte -->
<script>
  import { init } from './timer';

  const counter = init();
</script>

<h1>Hello Svelte {$counter}!</h1>
```

Yep. Seems to work. Vite transpiles Typescript files to Javascript using ESBuild. It just works!

Svelte and Typescript Support

When it comes to support for various templates and languages in Svelte files, svelte-preprocess is king. Without this plugin, Typescript support in Svelte would not be possible. Simply explained, svelte-preprocess works by injecting itself as a first-in-line preprocessor in the Svelte compilation chain. It parses your Svelte files and, depending on the type, delegates the parsing to a sub-processor, like Typescript, PostCSS, Less or Pug. The results are then passed on to the Svelte compiler. Let's install it and add it to our setup.

```shell
$ pnpm add -D svelte-preprocess typescript
```

We need to change our vite.config.js and add the svelte-preprocess library.
```js
// vite.config.js
import svelte from 'vite-plugin-svelte';
import preprocess from 'svelte-preprocess';

export default {
  plugins: [svelte({ preprocess: preprocess() })],
  rollupDedupe: ['svelte']
};
```

And change our App.svelte to use Typescript.

```html
<!-- App.svelte -->
<script lang="ts">
  import { init, Intervals } from './timer';

  const counter = init(Intervals.FiveSec);
</script>

<h1>Hello Svelte {$counter}!</h1>
```

We initialized our counter with a 5s interval and everything works as advertised. svelte-preprocess for president! Notice how little configuration we have written so far. If you have ever worked with Rollup, you will definitely notice!

svelte.config.js

If your editor shows syntax errors, it's most likely because you forgot to add svelte.config.js.

```js
const preprocess = require('svelte-preprocess');

module.exports = {
  preprocess: preprocess()
};
```

This configuration file is still a mystery to me, but I know that it's used by the Svelte Language Server, which is used in the VSCode Svelte extension and by at least one other bundler - Snowpack.

Vite and PostCSS with TailwindCSS

There are actually two parts to working with PostCSS in Vite and Svelte. The first one is the Vite part. Vite has out-of-the-box support for PostCSS. You just need to install your PostCSS plugins and set up a postcss.config.js file. Let's do that by adding PostCSS and Tailwind CSS support.

```shell
$ pnpm add -D tailwindcss && pnpx tailwindcss init
```

Create a PostCSS config with the following contents.

```js
// postcss.config.js
module.exports = {
  plugins: [require('tailwindcss')]
};
```

And add the base Tailwind styles by creating an index.css in the src directory.

```css
/* index.css */
@tailwind base;

body {
  @apply font-sans bg-indigo-200;
}

@tailwind components;
@tailwind utilities;
```

The last thing we need to do is to import index.css in our main index.js file.
```js
// index.js
import App from './App.svelte';
import './index.css';

const app = new App({
  target: document.getElementById('app')
});

export default app;
```

If you did everything right, the page should now have a light indigo background.

PostCSS in Svelte files

When it comes to Svelte and PostCSS, as usual, svelte-preprocess is your friend here. Like Vite, it has support for PostCSS. However, we need to tweak the settings a bit, as it doesn't work out of the box. According to the svelte-preprocess documentation you can do it in two ways: specify PostCSS plugins and pass them to svelte-preprocess as arguments, or install the postcss-load-config plugin, which will look for an existing postcss.config.js file. The latter seems like the best option. I mean, we already have an existing PostCSS configuration. Why not (re)use it? Let's install the postcss-load-config library.

```shell
$ pnpm add -D postcss-load-config
```

We also need to tweak our vite.config.js again.

```js
import svelte from 'vite-plugin-svelte';
import preprocess from 'svelte-preprocess';

export default {
  plugins: [svelte({ preprocess: preprocess({ postcss: true }) })],
  rollupDedupe: ['svelte']
};
```

Let's test it by adding some Tailwind directives to the style tag in App.svelte. Yep. Works fine. Notice that I added lang="postcss" to the style tag in order to make the editor happy.

Vite and SVG, or external assets support

Vite has built-in support for importing JSON and CSS files, but what about other assets like images and SVGs? It's possible too. If you import an image or an SVG into your code, it will be copied to the destination directory when bundling for production. Also, image assets smaller than 4kb will be base64 inlined. Let's add an SVG logo to our app.

```html
<img class="w-64 h-64" src={logo} />
```

However, in our case, since we are using Typescript in Svelte, we will get a type error. That's because Typescript doesn't know what an SVG is. The code will still work, but it's annoying to see this kind of error in the editor.
We can fix this by adding a Typescript type declaration file for the most common asset types. While we are at it, we can create a Typescript config file. It's not actually needed by Vite, because it does no type checking, only transpiling, but it's needed for the editor and also for the TS type checker that we will set up later.

First, install the common Svelte Typescript config.

```shell
$ pnpm add -D @tsconfig/svelte
```

Next, create a tsconfig.json in the root directory of the project.

```json
{
  "extends": "@tsconfig/svelte/tsconfig.json",
  "include": ["src/**/*"],
  "exclude": ["node_modules/*", "dist"]
}
```

The last thing we need to do is to add a Typescript declaration file in the src directory. The name is not important, but it should have a .d.ts extension - more of a convention than a requirement. I named mine types.d.ts.

```ts
// types.d.ts - "borrowed" from Snowpack
declare module '*.css';

declare module '*.svg' {
  const ref: string;
  export default ref;
}

declare module '*.bmp' {
  const ref: string;
  export default ref;
}

declare module '*.gif' {
  const ref: string;
  export default ref;
}

declare module '*.jpg' {
  const ref: string;
  export default ref;
}

declare module '*.jpeg' {
  const ref: string;
  export default ref;
}

declare module '*.png' {
  const ref: string;
  export default ref;
}

declare module '*.webp' {
  const ref: string;
  export default ref;
}
```

If you did everything correctly, you should not see any errors in your editor.

Vite and Environment Variables

It's pretty common to make use of environment variables in your code. While developing locally you might want to use a development API instance for your app, while in production you need to hit the real API. Vite supports environment variables. They must, however, be prefixed with VITE_. Vite supports many ways to import your environment variables through different .env files. You can read more about it here. For the sake of demonstration, let's set up and use an environment variable in our code.
Create an .env.local file with the following contents.

```
VITE_KANYE_API=
```

We now need to import it in our app. The way you do it is through the import.meta.env object.

```html
<!-- App.svelte -->
<script lang="ts">
  // import meta.env types from vite
  import type {} from 'vite';
  import { init, Intervals } from './timer';
  import logo from './assets/logo.svg';

  const counter = init(Intervals.FiveSec);

  const KANYE_API = import.meta.env.VITE_KANYE_API;
  console.log(KANYE_API);
</script>

<style lang="postcss">
  h1 {
    @apply text-5xl font-semibold;
  }
</style>

<h1>Hello Svelte {$counter}!</h1>
<img class="w-64 h-64" src={logo} />
```

If you open your dev tools, you should see it printed in the console.

Setting up a Smooth Workflow

Getting everything to compile and start is one thing. Getting your development environment to run smoothly is another. Let's spend a few minutes setting it up.

Linting Typescript files

We already have everything we need to typecheck our Typescript files. This should be done outside Vite by running tsc --noEmit.

Checking your Svelte files with svelte-check

Svelte has this cool CLI app called svelte-check. It's very good at catching all types of errors and warnings in your Svelte files.

Putting it all together

The last step is to put everything together. For that purpose we will use the npm-run-all package. It will help us run npm scripts in parallel. First, let's install the missing tools, along with a few other helpful utilities that we will use.

```shell
$ pnpm add -D npm-run-all svelte-check cross-env sirv-cli
```

Replace the scripts property in package.json with the following object.
```json
{
  "dev": "vite",
  "compile": "cross-env NODE_ENV=production vite build",
  "check": "svelte-check --human && tsc --noEmit",
  "watch:svelte": "svelte-check --human --watch",
  "watch:ts": "tsc --noEmit --watch",
  "start": "run-p watch:* dev",
  "build": "run-s check compile",
  "serve": "sirv dist"
}
```

Now you can simply run pnpm start and it will start the local development server and also continuously lint your Svelte and Typescript files. When you are done, just run pnpm run build. Your app will be linted before it's compiled. If you want to compile and serve the app in production mode, run pnpm run build and then pnpm run serve.

Vite Production Bundling

For production bundling Vite uses Rollup, which is known for creating very efficient bundles, so you are in safe hands. When it comes to code, you don't have to configure anything special. It just works. But we need to tell Tailwind to purge our unused styles. You do that in the tailwind.config.js file.

```js
// tailwind.config.js
module.exports = {
  purge: ['./src/**/*.svelte', 'index.html'],
  theme: {
    extend: {}
  },
  variants: {},
  plugins: []
};
```

Now both our app and styles will be lean and mean. Here are some stats from my test app.

```
[write] dist/_assets/index.03af5881.js 32.03kb, brotli: 9.50kb
[write] dist/_assets/style.89655988.css 6.37kb, brotli: 1.67kb
[write] dist/_assets/usa.29970740.svg 0.88kb
[write] dist/index.html 0.41kb, brotli: 0.17kb
Build completed in 5.17s.
```

When bundling for production, Vite injects the CSS and JS tags into index.html automatically. However, it leaves the script tag as type="module". Tread carefully if you need to support old browsers.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" href="/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Vite App</title>
    <link rel="stylesheet" href="/_assets/style.89655988.css">
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="/_assets/index.03af5881.js"></script>
  </body>
</html>
```

What about Svite?
Right. Svite is a Svelte-specific bundler that's built on top of Vite. You should definitely check it out. It's great!

Results

Let's revisit our list of requirements.

- It must be fast. Check. Vite's cold starts and reloads feel super fast.
- It must support Typescript. Check. Was easy to set up.
- It must support PostCSS. Check. Works out-of-the-box.
- It must produce small and efficient bundles. Check. Rollup is used for bundling.
- It must produce correct sourcemaps for debugging. So-so. Could be better.
- It should support HMR (Hot Module Replacement). Check. Works great.

Conclusion

My goal was to see how good Vite is for Svelte development and also to show you how to set up an efficient local development environment. I must say that I am happy with the results. So happy, in fact, that I even dare to ask whether Vite is currently the best bundler for Svelte.

If you have made it this far, you should have learned not only about Vite, but also about how to effectively set up your development environment. Many of the things we went through apply to many different bundlers, not only Vite.

Vite is built by the creator of Vue.js. Although it's a framework-agnostic bundler, you can tell that it probably has a tighter connection to Vue. You can find Vue-specific things sprinkled here and there.

What I like most about Vite is its speed and flexibility. It has sensible default configuration options that are easy to change. I was also surprised by how little configuration I had to write!

Probably the best thing is that Vite uses Rollup for creating production bundles. I've learned to trust Rollup by now, after testing many different module bundlers.

You can find the full app setup on GitHub. Watch that repo as I test more bundlers for Svelte development. Thanks for reading and happy coding!
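For quick reference, here is the final vite.config.js that the walkthrough arrives at, with the options gathered in one place (no options beyond those shown step by step above):

```js
// vite.config.js - final version from this walkthrough
import svelte from 'vite-plugin-svelte';
import preprocess from 'svelte-preprocess';

export default {
  plugins: [
    // svelte-preprocess handles Typescript inside .svelte files;
    // postcss: true makes it pick up the existing postcss.config.js.
    svelte({ preprocess: preprocess({ postcss: true }) })
  ],
  // Make sure only one copy of the Svelte runtime ends up in the bundle.
  rollupDedupe: ['svelte']
};
```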
Discussion (3)

I remember the first time I used webpack for React (before using CRA), it was such a pain to set things up and make them work properly. Maybe I was just too green. Later I basically used CRA to make things easier so I could focus on things that are more important and fun to me. Why do you think Vite is BETTER than the current default Svelte boilerplate? So far I find everything works fast enough. Or am I too green to have lower standards?! lol

When your app is small, it's not really a problem. But as it grows larger, the re-compilation will take longer, because the default boilerplate uses Rollup, which recompiles everything when you change one file. In most cases, the default boilerplate is good enough. It's simple and does its job well. You should always use whatever makes you happy and productive. It's a personal choice, but it's always good to have a few options to choose from :)

Right. When the app grows larger and speed becomes slower, we have to change certain tools, so it's good to have options. I hope one day I get to work at a Svelte production site that would require this haha!
Continuous Queries in Apache Ignite C++ 1.9

Hello! As some of you probably know, Apache Ignite 1.9 was released last week and it brings some cool features. One of them is Continuous Queries for Apache Ignite C++, and this is what I want to write about today.

A Short Introduction to Continuous Queries

Continuous Queries is the mechanism in Apache Ignite that allows you to track data modifications on caches. It consists of several parts:

- Initial Query. This is a simple query which is used when the ContinuousQuery is registered. It is useful for getting a consistent picture of the current state of the cache.
- Remote Filter. This class is deployed on the remote nodes where the cache data is stored. It is used to filter out data modification events which the user does not need. Using it, one can reduce network traffic and improve overall system performance.
- Event Listener. This is an observer class which is deployed locally on the node. It gets notified every time data is modified in the cache.

Of these three, only the Event Listener is a mandatory part of the Continuous Query, while both the Initial Query and the Remote Filter are optional.

Continuous Queries in the C++ API

Apache Ignite has a C++ API which is called Apache Ignite C++. It allows using Ignite features from native C++ applications. So what about Continuous Queries in Apache Ignite C++? Until Apache Ignite 1.9 there was no support for Continuous Queries in the C++ API. But now Continuous Queries are finally here, so let's take a look. Remote Filters are not yet supported in the C++ API (though they are planned for 2.0).

Now let's try writing some code to check how it all works. I’m going to use a simple (int -> string) cache:

```cpp
using namespace ignite;
using namespace ignite::cache;

IgniteConfiguration cfg;
// Set configuration here if you want anything non-default.

Ignite ignite = Ignition::Start(cfg);

Cache<int32_t, std::string> cache =
    ignite.GetOrCreateCache<int32_t, std::string>("mycache");
```

Now I need to define an Event Listener. My listener is going to be very simple - it only prints all the events it gets:

```cpp
class MyListener : public CacheEntryEventListener<int32_t, std::string>
{
public:
    MyListener()
    {
    }

    // This is the only method we need to declare for our listener. It gets
    // called when we receive notifications about new events.
    virtual void OnEvent(const CacheEntryEvent<int32_t, std::string>* evts, uint32_t num)
    {
        for (uint32_t i = 0; i < num; ++i)
        {
            const CacheEntryEvent<int32_t, std::string>& event = evts[i];

            // There may or may not be a value or old value in the event. If the
            // value is absent, the cache entry has been removed. If the old
            // value is absent, a new value has been added.
            std::string oldValue = event.HasOldValue() ? event.GetOldValue() : "<none>";
            std::string value = event.HasValue() ? event.GetValue() : "<none>";

            std::cout << "key=" << event.GetKey() << ", "
                      << "oldValue='" << oldValue << "', "
                      << "value='" << value << "'" << std::endl;
        }
    }
};
```

A pretty useless listener, but good enough for testing purposes. Let's finally create and start our ContinuousQuery. I'm not going to use an initial query here, as there is nothing special or new about it. You can look at the documentation if you want to see how to use one.

```cpp
// Creating my listener.
MyListener lsnr;

// Creating new continuous query.
ContinuousQuery<int32_t, std::string> qry(lsnr);

// Starting the query.
ContinuousQueryHandle<int32_t, std::string> handle = cache.QueryContinuous(qry);
```

Compiling the code and… getting an error:

```
cannot convert parameter 1 from 'MyListener' to 'ignite::Reference<T>'
```

The Ownership Problem

Looks like the ContinuousQuery constructor expects something called ignite::Reference. With a little help from the documentation, we can understand why: Ignite wants to know how to handle the ownership problem for the listener.
It does not know if it should make a copy or if it should just keep a reference. So Ignite provides us with a mechanism to deal with this problem, and it's called ignite::Reference. If some function accepts ignite::Reference<T>, it means that you can pass an instance of a type T in one of the following ways:

- ignite::MakeReference(obj) - simply pass the obj instance by reference.
- ignite::MakeReferenceFromCopy(obj) - copy obj and pass the new instance.
- MakeReferenceFromOwningPointer(ptr) - pass a pointer. You can keep a pointer to the passed object, but ownership is passed to Ignite. This means that Ignite is now responsible for the object's destruction.
- MakeReferenceFromSmartPointer(smartPtr) - pass a smart pointer. You can pass your std::shared_ptr, std::auto_ptr, boost::shared_ptr, etc., using this function. This is going to work like ordinary smart pointer passing.

So let me fix the code above. I'm going to pass my listener as a copy because it's small, has no inner state, and I have no reason to keep a reference to it in my application code:

```cpp
// Creating my listener.
MyListener lsnr;

// Creating new continuous query.
ContinuousQuery<int32_t, std::string> qry(MakeReferenceFromCopy(lsnr));

// Starting the query.
ContinuousQueryHandle<int32_t, std::string> handle = cache.QueryContinuous(qry);
```

Results

Now it works. All you need now is to add some values to your cache and watch:

```cpp
cache.Put(1, "Hello Continuous Queries!");
cache.Put(2, "Some other string");
cache.Put(1, "Rewriting first entry");
cache.Remove(2);
```

Compiling, running, and getting the following output:

```
[21:51:14]    __________  ________________
[21:51:14]   /  _/ ___/ |/ /  _/_  __/ __/
[21:51:14]  _/ // (7 7    // /  / / / _/
[21:51:14] /___/\___/_/|_/___/ /_/ /___/
[21:51:14]
[21:51:14] ver. 2.0.0-SNAPSHOT#20170315-sha1:159feab6
[21:51:14] 2017 Copyright(C) Apache Software Foundation
[21:51:14]
[21:51:14] Ignite documentation:
[21:51:14]
[21:51:14] Quiet mode.
[21:51:14] ^-- Logging to file 'C:\reps\incubator-ignite\target\release-package\work\log\ignite-3d801ffe.0.log'
[21:51:14] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[21:51:14]
[21:51:14] OS: Windows 10 10.0 amd64
[21:51:14] VM information: Java(TM) SE Runtime Environment 1.8.0_121-b13 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.121-b13
[21:51:14] Configured plugins:
[21:51:14] ^-- None
[21:51:14]
[21:51:24] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[21:51:24] Security status [authentication=off, tls/ssl=off]
[21:51:27] Performance suggestions for grid (fix if possible)
[21:51:27] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[21:51:27] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[21:51:27] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[21:51:27] ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[21:51:27] Refer to this page for more performance suggestions:
[21:51:27]
[21:51:27] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[21:51:27]
[21:51:27] Ignite node started OK (id=3d801ffe)
[21:51:27] Topology snapshot [ver=1, servers=1, clients=0, CPUs=4, heap=0.89GB]
key=1, oldValue='<none>', value='Hello Continuous Queries!'
key=2, oldValue='<none>', value='Some other string'
key=1, oldValue='Hello Continuous Queries!', value='Rewriting first entry'
key=2, oldValue='Some other string', value='<none>'
Press any key to exit.
```

That's all for today. You can find the complete code on GitHub. Next time I'm going to write more about the C++ API of Apache Ignite. I hope this was helpful.

Published at DZone with permission of Igor Sapego. See the original article here.
Opinions expressed by DZone contributors are their own.
OVERVIEW

To start with, it is natural to assume that static fields have a special life cycle and live for the life of the application. You could assume that they live in a special place in memory, like the start of memory in C, or in the perm gen with the class meta information. However, it may be surprising to learn that static fields live on the heap, can have any number of copies, and are cleaned up by the GC like any other object. This follows on from a previous discussion: Are static blocks interpreted?

When a class is obtained for linking, it may not result in the static block being initialised.

A simple example

```java
public class ShortClassLoadingMain {
    public static void main(String... args) {
        System.out.println("Start");
        Class aClass = AClass.class;
        System.out.println("Loaded");
        String s = AClass.ID;
        System.out.println("Initialised");
    }
}

class AClass {
    static final String ID;

    static {
        System.out.println("AClass: Initialising");
        ID = "ID";
    }
}
```

prints

```
Start
Loaded
AClass: Initialising
Initialised
```

You can see that you can obtain a reference to a class before it has been initialised; only when it is used does it get initialised. Each class loader which loads a class has its own copy of static fields. If you load a class in two different class loaders, these classes can have static fields with different values.

UNLOADING STATIC FIELDS

static fields are unloaded when the Class' ClassLoader is unloaded. This happens when a GC is performed and there are no strong references to it from the threads' stacks.

PUTTING THESE TWO CONCEPTS TOGETHER

Here is an example where a class prints a message when it is initialised and when its fields are finalized.
```java
class UtilityClass {
    static final String ID = Integer.toHexString(System.identityHashCode(UtilityClass.class));

    private static final Object FINAL = new Object() {
        @Override
        protected void finalize() throws Throwable {
            super.finalize();
            System.out.println(ID + " Finalized.");
        }
    };

    static {
        System.out.println(ID + " Initialising");
    }
}
```

By loading this class repeatedly, twice at a time

```java
for (int i = 0; i < 2; i++) {
    cl = new CustomClassLoader(url);
    clazz = cl.loadClass(className);
    loadClass(clazz);

    cl = new CustomClassLoader(url);
    clazz = cl.loadClass(className);
    loadClass(clazz);

    triggerGC();
}
triggerGC();
```

you can see an output like this

```
1b17a8bd Initialising
2f754ad2 Initialising
-- Starting GC
1b17a8bd Finalized.
-- End of GC
6ac2a132 Initialising
eb166b5 Initialising
-- Starting GC
6ac2a132 Finalized.
2f754ad2 Finalized.
-- End of GC
-- Starting GC
eb166b5 Finalized.
-- End of GC
```

In this log, two copies of the class are loaded first. The references to the first class/classloader are overwritten by references to the second class/classloader. The first one is cleaned up on a GC; the second one is retained. On the second loop, two more copies are initialised. The fourth one is retained; the second and third are cleaned up on a GC. Finally, the fourth copy of the static fields is cleaned up on a GC when it is no longer referenced.

THE CODE

- The first example – ShortClassLoadingMain
- The second example – LoadAndUnloadMain

Reference: Java Secret: Loading and unloading static fields from our JCG partner Peter Lawrey at Vanilla Java.
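As a complement to the first example, the load-without-initialise behaviour can also be made explicit with the three-argument form of Class.forName, whose boolean flag controls whether initialisation happens at load time. This is my own minimal sketch, not code from the article; the class and field names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class InitDemo {
    // Records the order of load/init/use events so it can be inspected.
    static final List<String> events = new ArrayList<>();

    static class Lazy {
        static { events.add("static block ran"); }
        // Not a compile-time constant, so reading it triggers <clinit>.
        static int value = 42;
    }

    public static void main(String[] args) throws Exception {
        // Load (and link) the class WITHOUT initialising it:
        // the static block does not run yet.
        Class<?> c = Class.forName("InitDemo$Lazy", false,
                InitDemo.class.getClassLoader());
        events.add("loaded " + c.getSimpleName());

        // The first real use runs the static initialiser.
        events.add("value = " + Lazy.value);

        System.out.println(events);
        // prints: [loaded Lazy, static block ran, value = 42]
    }
}
```

Passing true as the second argument (or using the one-argument Class.forName) would run the static block immediately, which is the difference the article's "Loaded" versus "Initialised" output is demonstrating.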
Hello, I'm working on an assignment due next week and I'm having quite a bit of trouble. The program is to ask the user for input of a double until they desire to stop. That part, I imagine, will be in the Test file. The master file is to store the inputs in an array list, count how many instances of doubles there were, and then output each variable, add them all together, find the average, the largest, and the smallest value. I know this is probably really easy, but I haven't been in Java for 2 years so I'm struggling, and I need to know what I'm doing wrong, if, that is, I'm doing anything right at all lol.

Example of desired output:

Please enter a double value: 3.5
Another?: Y
Please enter a double value: 4.3
Another?: Y
Please enter a double value: 7.3
Another?: N

Value 1: 3.50000
Value 2: 4.30000
Value 3: 7.30000
Sum: 15.100000
Average: 5.033333
Largest: 7.300000
Smallest: 3.500000
List cleared.

---My code so far---

[code]
public class ArrayList {
    private double sum, average, large, small;

    public ArrayList() {
        final int MAX_ARRAY = 15; //hold up to 15 values
        double[] list = new double[ MAX_ARRAY ];
    }

    public Data() {
        for(int count = 1; count < list.length; count++)
    }

    public void process(list) {
        double sum, average, large, small;
        for(int count = 1; count < list.length; count++)
            sum += list[ count ];
        for(int count = 1; count < list.length; count++)
            average = list[ count ] / count;
        for(int count = 0; count < list.lenght; count++)
            if(large < list[ count ];) {
                large = list[ count ];
                large = count;
            }
        for(int count = 0; count < list.length; count++)
            if(small > list[ count ];) {
                small = list[ count ];
                small = count;
            }
    }

    public display() {
        return list;
        return sum;
        return average;
        return large;
        return small;
    }

    public clearData() {
        list[] = 0;
        count = 0;
    }
}
[/code]

Any help is appreciated, thanks.

Edited by peter_budo: Keep It Clear - Do wrap your programming code blocks within [code] ... [/code] tags
https://www.daniweb.com/programming/software-development/threads/308976/array-lists-please-help
The demo consisted of a "code search" system, whereby you could upload a zipfile containing source code (or other text files) to the blobstore, then find every line matching a regular expression inside that file. Without further ado, let's see the mapper function that demo used:

def codesearch((file_info, reader)):
  """Searches files in a zip for matches against a regular expression."""
  params = context.get().mapreduce_spec.mapper.params
  regex = re.compile(params["regex"])
  parent_key = db.Key(params["parent_key"])
  if file_info.file_size == 0:
    return
  results = model.CodesearchResults(
      parent=parent_key,
      filename=file_info.filename)
  file_data = reader()
  for line_no, line in enumerate(file_data.split('\n')):
    if regex.search(line):
      results.match_lines.append(line_no)
      results.matches.append(line)
  if results.matches:
    yield op.db.Put(results)

This is our mapper function. It gets called once for each entry in the zip file that's being processed, and is passed a single argument. When iterating over a zipfile, that argument is a tuple, consisting of a zipfile.ZipInfo object providing information about the entry, and a zero-argument function (here, called 'reader') that when called will return the entire contents of the stored file. The reason it doesn't simply pass in the contents of the entry directly is for efficiency: If you're writing a mapper that only needs to process some files in a zip, there's no point wasting time reading them all out and passing them to the mapper, just to be discarded.

The first thing our function does is get the user-defined parameters. This is a dict of values provided by the user (and as we'll see later, optionally by our code) when they started the mapreduce process. We then extract 'regex', the regular expression to search on, and 'parent_key'. parent_key provides the key of the entity under which we should create all our result entities*.

Next up, we check the length of the file. If it's empty, we skip it.
The main reason for this check is that directories are also zip entries, and this is the easiest way to skip over directories, since we don't care about directories or empty files. Now we do the real work. First, we create a results entity under the parent key we retrieved earlier. Then, we call the passed-in reader function to retrieve the contents of the file, and we iterate over each line of it. Inside the for loop, we apply the regular expression to each line, and if it matches, we update the result entity with the line the match occurred in, and the contents of that line. You'll note that we're compiling the regular expression afresh with each call to the mapper function. This seems inefficient, but here we're relying on a feature of the Python regular expression library: The regular expression cache. It transpires that Python caches regular expressions, so that if you call compile on the same regex twice, it will use the cached copy the second time. Since the mapper function will be called over and over again, we're able to use this to good effect. Finally, we check if any lines matched in the file. If they did, we yield a 'Put' operation, instructing the mapper framework to write the new result entity to the datastore. If there were no matches in this file, we don't make the call, so the entity never gets written. That's all there is to the mapper function, but there's still a little more we need to do. You'll note that we accepted as one of our parameters a 'parent_key'. It's not particularly friendly to require users to create the parent entity and supply its key, though - it'd be far more helpful if we could allow them to supply the ID of a file in the datastore, and figure out the parent key ourselves. Well, it turns out that's possible, by using a 'validator' function. 
Validator functions get called once before the mapreduce starts, and are given the set of user-supplied parameters for the mapreduce, and allowed the opportunity to validate them (what a surprise, eh?) and modify them if they wish. Here's ours:

def codesearch_validator(user_params):
  """Validates and extends parameters for a codesearch MR."""
  file = model.ZipFile.get_by_id(int(user_params["file_id"]))
  user_params["blob_key"] = str(file.blob.key())
  parent_model = model.CodesearchJob(
      name=user_params["job_name"],
      file=file,
      regex=user_params["regex"])
  parent_model.put()
  user_params["parent_key"] = str(parent_model.key())
  return True

In this case, we expect the user to supply a 'file_id' parameter, which should be the id of an entity in our datastore with information about an uploaded file. From that, we extract the key of the blob to process, which allows our user to avoid having to enter that information by hand, as well. Then, we construct a new CodesearchJob entity, which will act as the parent entity we referred to above, and add its key as the 'parent_key' parameter. Finally, we return True to indicate that validation succeeded.

The one remaining component is the definition of the mapreduce. Here it is, from mapreduce.yaml:

mapreduce:
- name: Codesearch
  mapper:
    input_reader: mapreduce.input_readers.BlobstoreZipInputReader
    handler: mr.codesearch
    params_validator: mr.codesearch_validator
    params:
    - name: file_id
    - name: regex
    - name: job_name
    - name: processing_rate
      default: 10000
    - name: shard_count
      default: 20

None of this should be unexpected at this point. We give our mapreduce a name, and specify BlobstoreZipInputReader as the input reader class to use. For the handler, we provide the fully qualified name of the handler function we defined above, and likewise for the params_validator, we supply the fully qualified name of our validator function.
Finally, we define a set of parameters that may be provided by the user when they create the mapreduce: file_id, regex, job_name, processing_rate, and shard_count. The eagle-eyed amongst you may have noticed that the last two parameters appear nowhere in our code. That's because they're used by the mapper framework to decide how many shards to start up, and how fast to process entities. There are several parameters of this type - we already saw another one in the validator function, 'blob_key', which is used by the input reader.

That's all for today. In a future post, we'll take a look inside the BlobstoreZipInputReader, and discuss how to write your own input reader class for the mapper framework.
http://blog.notdot.net/2010/05/Exploring-the-new-mapper-API
Euclid's GCD. This article describes how to calculate in Java the greatest common divisor of two positive numbers with Euclid's algorithm.

1. Euclid's Algorithm for the greatest common divisor

The greatest common divisor (gcd) of two positive integers is the largest integer that divides both without remainder. Euclid's algorithm is based on the following property: if p > q then the gcd of p and q is the same as the gcd of p % q and q. p % q is the remainder of p which cannot be divided by q, e.g. 33 % 5 is 3. This is based on the fact that the gcd of p and q must also divide (p - q) or (p - 2q) or (p - 3q). Therefore you can subtract the largest multiple of q from p, which leaves exactly p % q.

2. Implementation in Java

Create a Java project "de.vogella.algorithms.euclid". Create the following program.

package de.vogella.algorithms.euclid;

/**
 * Calculates the greatest common divisor for two numbers.
 * <p>
 * Based on the fact that the gcd from p and q is the same as the gcd from p and
 * p % q in case p is larger than q
 *
 * @author Lars Vogel
 */
public class GreatestCommonDivisor {
  public static int gcd(int p, int q) {
    if (q == 0) {
      return p;
    }
    return gcd(q, p % q);
  }

  // Test: enable assert checks via -ea as a VM argument
  public static void main(String[] args) {
    assert (gcd(4, 16) == 4);
    assert (gcd(16, 4) == 4);
    assert (gcd(15, 60) == 15);
    assert (gcd(15, 65) == 5);
    assert (gcd(1052, 52) == 4);
  }
}
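For completeness, the same algorithm is often written iteratively, which avoids recursion entirely. This variant is my own addition, not part of the original tutorial:

```java
public class GreatestCommonDivisorIterative {

    // Iterative Euclid: repeatedly replace (p, q) with (q, p % q)
    // until the remainder is zero; the surviving value is the gcd.
    public static int gcd(int p, int q) {
        while (q != 0) {
            int remainder = p % q;
            p = q;
            q = remainder;
        }
        return p;
    }
}
```

It behaves exactly like the recursive version, including the case p < q: the first iteration simply swaps the two arguments, since p % q is then p.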
https://www.vogella.com/tutorials/JavaAlgorithmsEuclid/article.html
>>> The fact that scientific journal articles start with a documentation >>> string >>> called an abstract does not indicate that scientific English fails as a >>> human communication medium. Function docstrings say what the function >>> does >>> and how to use it without reading the code. They can be pulled out and >>> displayed elsewhere. They also guide the reading of the code. Abstracts >>> serve the same functions. >> >> >> A paper, with topic introduction, methods exposition, data/results >> description and discussion is a poor analog to a function. I would >> compare the abstract of a scientific paper to the overview section of >> a program's documentation. The great majority of the docstrings I see >> are basically signature rehashes with added type information and >> assertions, followed by a single sentence English gloss-over. If the >> code were sufficiently intuitive and expressive, that would be >> redundant. Of course, there will always be morbidly obese functions >> and coders that like to wax philosophical or give history lessons in >> comments. > > > Both abstracts and doc strings are designed to be and are read independently > of the stuff they summarize. Perhaps you do not use help(obj) as often as > some other people do. I find help() to be mostly useless because of the clutter induced by double under methods. I use IPython, and I typically will either use tab name completion with the "?" feature or %edit <obj> if I really need to dig around. I teach Python to groups from time to time as part of my job, and I usually only mention help() as something of an afterthought, since typically people react to the output like a deer in headlights. Some sort of semantic function and class search from the interpreter would probably win a lot of fans, but I don't know that it is possible without a standard annotation format and the addition of a namespace cache to pyc files.
https://mail.python.org/pipermail/python-list/2012-March/621579.html
Let us implement the Autocomplete feature in a Spring MVC application using JQuery. Autocomplete is a feature you'll see in almost all good web apps. It allows the user to select proper values from a list of items. Adding this feature is recommended if the field has many (> 20 to 25) values.

Related: Autocomplete in Java / JSP

Our requirement is simple. We will have two fields, Country and Technologies. Both these fields will have the autocomplete feature, so the user will be able to select from a list of countries and technologies. The country field can have only one value. But the technologies field can have multiple values separated by comma (,).

Things We Need

Before we start with our Spring MVC Autocomplete Example, we will need a few tools.

- JDK 1.5 or above (download)
- Tomcat 5.x or above, or any other container (Glassfish, JBoss, Websphere, Weblogic etc) (download)
- Eclipse 3.2.x or above (download)
- JQuery UI (Autocomplete) (download)
- Spring 3.0 MVC JAR files (download). Following is the list of JAR files required for this application:
  - commons-logging-1.0.4.jar
  - commons-beanutils-1.8.0.jar
  - commons-digester-2.0.jar
  - jackson-core-asl-1.9.7.jar
  - jackson-mapper-asl-1.9.7.jar

Note that depending on the current version of Spring MVC, the version number of the above jar files may change. Also note that we need the jackson mapper and jackson core jars. These are required for generating JSON from our Spring MVC Controller.

Getting Started

Let us start with our Spring 3.0 MVC based application. Open Eclipse and go to File -> New -> Project and select Dynamic Web Project in the New Project wizard screen. After selecting Dynamic Web Project, press Next. Write the name of the project, for example SpringMVC_Autocomplete. The project will get its full structure once we finish the tutorial and add all source code. Now copy all the required JAR files into the WebContent > WEB-INF > lib folder. Create this folder if it does not exist.
The Dummy Database

Normally you would need a database from which you'll fetch the values required for autocomplete. But for the sake of simplicity of this example we will write a DummyDB java class. Once the project is created, create a package net.viralpatel.spring.autocomplete and a Java class file DummyDB.java. DummyDB.java is the class that will simulate the database connection and it will provide the data for our example.

File: /src/net/viralpatel/spring/autocomplete/DummyDB.java

package net.viralpatel.spring.autocomplete;

import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class DummyDB {
    private List<String> countries;
    private List<String> tags;

    public DummyDB() {
        String data = "Afghanistan, Albania, Algeria, Andorra, Angola, Antigua & Deps," +
                "United Kingdom,United States,Uruguay,Uzbekistan,Vanuatu,Vatican City,Venezuela,Vietnam,Yemen,Zambia,Zimbabwe";
        countries = new ArrayList<String>();
        StringTokenizer st = new StringTokenizer(data, ",");

        //Parse the country CSV list and set as Array
        while (st.hasMoreTokens()) {
            countries.add(st.nextToken().trim());
        }

        String strTags = "SharePoint, Spring, Struts, Java, JQuery, ASP, PHP, JavaScript, MySQL, ASP, .NET";
        tags = new ArrayList<String>();
        StringTokenizer st2 = new StringTokenizer(strTags, ",");

        //Parse the tags CSV list and set as Array
        while (st2.hasMoreTokens()) {
            tags.add(st2.nextToken().trim());
        }
    }

    public List<String> getCountryList(String query) {
        String country = null;
        query = query.toLowerCase();
        List<String> matched = new ArrayList<String>();
        for (int i = 0; i < countries.size(); i++) {
            country = countries.get(i).toLowerCase();
            if (country.startsWith(query)) {
                matched.add(countries.get(i));
            }
        }
        return matched;
    }

    public List<String> getTechList(String query) {
        String country = null;
        query = query.toLowerCase();
        List<String> matched = new ArrayList<String>();
        for (int i = 0; i < tags.size(); i++) {
            country = tags.get(i).toLowerCase();
            if (country.startsWith(query)) {
                matched.add(tags.get(i));
            }
        }
        return matched;
    }
}

DummyDB.java contains the lists of countries and technologies as comma-separated string values, and the methods getCountryList() and getTechList() that return the countries and technologies starting with the string query passed as argument. Thus if we pass "IN" to this method, it will return all the countries starting with IN. You may want to change this code and add the database implementation here. Just a simple "SELECT * FROM <table> WHERE country LIKE " query will serve the purpose.

The Spring MVC Controller

Now we write the Spring MVC controller class that will process the request and return JSON output. For this create a class UserController.java under package net.viralpatel.spring.autocomplete.

package net.viralpatel.spring.autocomplete;

import java.util.List;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class UserController {

    private static DummyDB dummyDB = new DummyDB();

    @RequestMapping(value = "/index", method = RequestMethod.GET)
    public ModelAndView index() {
        User userForm = new User();
        return new ModelAndView("user", "userForm", userForm);
    }

    @RequestMapping(value = "/get_country_list", method = RequestMethod.GET, headers = "Accept=*/*")
    public @ResponseBody List<String> getCountryList(@RequestParam("term") String query) {
        List<String> countryList = dummyDB.getCountryList(query);
        return countryList;
    }

    @RequestMapping(value = "/get_tech_list", method = RequestMethod.GET, headers = "Accept=*/*")
    public @ResponseBody List<String> getTechList(@RequestParam("term") String query) {
        List<String> countryList = dummyDB.getTechList(query);
        return countryList;
    }
}

Note how we used the @ResponseBody annotation in the methods getCountryList() and getTechList(). Spring MVC converts the return type, which in our case is List, into JSON data. Following is the content of the spring-servlet.xml file.
File: /WebContent/WEB-INF/spring-servlet.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

    <context:component-scan base-package="net.viralpatel.spring.autocomplete" />

    <mvc:annotation-driven />

</beans>

The <mvc:annotation-driven /> tag is required here. This lets Spring process annotations like @ResponseBody.

The below User.java class is required only to bind the form with the JSP. It is not required for this example, but for the sake of Spring MVC we are using it.

File: /src/net/viralpatel/spring/autocomplete/User.java

package net.viralpatel.spring.autocomplete;

public class User {
    private String name;
    private String country;
    private String technologies;

    //Getter and Setter methods
}

The JSP View

Now add the JSP file which renders the User form. Also we will add index.jsp which redirects to the proper request.

File: /WebContent/index.jsp

<jsp:forward page="index.html"></jsp:forward>

File: /WebContent/WEB-INF/jsp/user.jsp

<%@taglib prefix="form" uri="http://www.springframework.org/tags/form"%>
<html>
<head>
<script type="text/javascript" src=""></script>
<script type="text/javascript" src=""></script>
</head>
<body>

<h2>Spring MVC Autocomplete with JQuery & JSON example</h2>

<form:form modelAttribute="userForm">
<table>
<tr>
    <th>Name</th>
    <td><form:input path="name" /></td>
</tr>
<tr>
    <th>Country</th>
    <td><form:input path="country" id="country" /></td>
</tr>
<tr>
    <th>Technologies</th>
    <td><form:input path="technologies" id="technologies" /></td>
</tr>
<tr>
    <td colspan="2">
        <input type="submit" value="Save" />
        <input type="reset" value="Reset" />
    </td>
</tr>
</table>
<br />
</form:form>

<script type="text/javascript">
function split(val) {
    return val.split(/,\s*/);
}
function extractLast(term) {
    return split(term).pop();
}

$(document).ready(function() {
    $( "#country" ).autocomplete({
        source: '${pageContext.request.contextPath}/get_country_list.html'
    });

    $( "#technologies" ).autocomplete({
        source: function (request, response) {
            $.getJSON("${pageContext.request.contextPath}/get_tech_list.html", {
                term: extractLast(request.term)
            }, response);
        },
        search: function () {
            // custom minLength
            var term = extractLast(this.value);
            if (term.length < 1) {
                return false;
            }
        },
        focus: function () {
            //;
        }
    });
});
</script>
</body>
</html>

Check the above JSP.
We have added INPUT fields for Country and Technologies. Also we used $().autocomplete() to enable autocomplete. For country it was straightforward: $( "#country" ).autocomplete(). But for technologies we did some parsing and splitting. This is because we need multiple technologies in the textbox separated by commas.

Download Source Code: SpringMVC_Autocomplete.zip (4.2 MB)

Great job. Only I have one slight problem. Not wanting to use JQuery UI just for one widget, and really don't like JQuery UI. How can you use all the same code as you have, but a different AutoComplete widget? Appreciate it.

good example Thanks

i had a doubt in the count text box you are showing the list countries starts with " i " but in the string data doesn't have those name which starts with " i "

see carefully the DummyDB.java file…String data contains all the country names and it also contains names starting with 'i'….

Good stuff.

Good one. Good example and works. Only comment is, the jackson jar's that is in the example screen needed to be updated with two more jar files that was is in the zip file.

Hi Satish, I am getting json back.. But the not getting the dropdown… Can it be bcoz of those remaining 2 jar files?

That correct nilesh you need to add 2 jar that are highlighted.

Hi Viral. Thanks for such a nicely documented blog. I tried implementing the above feature. I am able to get the json response at client side. But I am not getting the auto-populated drop down list. Any idea?

Hey great tutorial! I did everything but I got an error when I tried to run it. Can someone assist please? I get the error below:

HTTP Status 404 – /SpringMVC_Autocomplete/
type Status report
message /SpringMVC_Autocomplete/
description The requested resource (/SpringMVC_Autocomplete/) is not available.
Apache Tomcat/6.0.35

Update: sorry i forgot to put the index file in the right place…now i get a java error. one problem i noticed is that the project explorer view does not look like your example.
I figure the reason is that i am using the lastest eclipse "cocoa". Please advise, Here the java error…

HTTP Status 500 –
type Exception report
description The server encountered an internal error () that prevented it from fulfilling this request.
exception
org.apache.jasper.JasperException: java.lang.IllegalStateException: No WebApplicationContext found: no ContextLoaderListener registered?
    org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:502)
    org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:424)
java.lang.IllegalStateException: No WebApplicationContext found: no ContextLoaderListener registered?
    org.springframework.web.context.support.WebApplicationContextUtils.getRequiredWebApplicationContext(WebApplicationContextUtils.java:84)
    org.springframework.web.servlet.support.RequestContextUtils.getWebApplicationContext(RequestContextUtils.java:81)
    org.springframework.web.servlet.support.RequestContext.initContext(RequestContext.java:219)
    org.springframework.web.servlet.support.JspAwareRequestContext.initContext(JspAwareRequestContext.java:74)
    org.springframework.web.servlet.support.JspAwareRequestContext.<init>(JspAwareRequestContext.java:48)
    org.springframework.web.servlet.tags.RequestContextAwareTag.doStartTag(RequestContextAwareTag.java:76)
    org.apache.jsp.index_jsp._jspx_meth_form_005fform_005f0(index_jsp.java:184)
    org.apache.jsp.index_jsp._jspService(index_jsp.java:95)

@BigCoder, It seems Spring is unable to locate your WebApplicationContext XML file. That is what the exception hints you – java.lang.IllegalStateException: No WebApplicationContext found: no ContextLoaderListener registered? Please check for the following –

1. By default, the naming convention of the WebApplicationContext XML file is <servlet-name>-servlet.xml. For e.g., if "springmvc" were the servlet name then the WebApplicationContext XML should be available in the \WEB-INF folder with the name "springmvc-servlet.xml", or

2.
If you have named your WebApplicationContext XML file differently, not adhering to the above naming convention then you should specify the location and name of your WebApplicationContext XML in web.xml with a context parameter and a context loader listener to load it. See below for an example – contextConfigLocation /WEB-INF/ org.springframework.web.context.ContextLoaderListener Hope the above steps will solve your problem. I m getting the following error: No matching handler method found for servlet request: path ‘/json/get_country_list.htm’, method ‘GET’, parameters map[‘term’ -> array[‘j’]] My controller consists of: Should i need to add something ??? I have added all the jars as in ur zip file.. please help… @Anny, It seems the URL-Controller mappings in your WebApplicationContext XML file is incorrect. Please print the contents of your web.xml and WebApplicationContext XML to help you further. Dear Patel, I always found your tutorials very helpful and beneficial. This one however was quite a disappointment. Should you be interested why please drop me an email. Hi Peter, Thanks for the comment. The feedback that my blog visitors give is very important for me. Please feel free to share your inputs at viralpatel.net at gmail dot com. Hi Viral, first of all thanx for such a nice explanation. In our project we are using tiles framework without annotation,in that case we have to always return ModelAndview Object.due to not using annotation we cannt set responsebody type as List in that case what type of data we should return for server side autocomplete feature?? Hi, i am using autocompleter tag with struts2 , tiles framework and i want to change second autocompleter combo value based on first autocompleter combo list selection. can you please, how it will done struts-dojo-tag. tell me the whatever the changes required in java script. Hi Viral, After downloading and configuring app in eclipse+tomcat only index.jsp is coming fine. None of the actions working fine. 
Anything m missing? Thanks. By using autocomplete it works fine in IE browser but in Mozilla and Chrome there is a space between the textbox and the autocomplete list…… How to remove that space…… good viral!!!!!!! Hi I have done everything as you have explained still getting 404 error “The requested resource (/SpringAutocomplete/index.html) is not available.” can you please help Nice tutorial … although when I run this in Tomcat I get a “406” error for every http://…/get_country_list.html?term=XXX call. So there’s no autocompletion in the form. I’d better double-check that I have copied the setup correctly … Found it: I forgot to include the jackson-core-asl and jackson-mapper libraries. Now the tutorial works as expected. I get a 404 error HTTP Status 404 – type Status report descriptionThe requested resource () is not available. GlassFish Server Open Source Edition 3.1.1 PLEASE HELP! Hi Viral, Can you tell me what is ‘term’ in the below mentioned line For the country input field, where do you define the “term” parameter? I guess that I echo the same question asked by Sandy. hello i follow the tutorial everything ok but the dropdown list doesnt show up? anybody that solved this issue? not even i made single changes and works fantastic…thanks for such tutorails..go bless.. If you enter a letter into an amount field, accidentally or intentionally, when you leave the field the letter will be replaced by 0.00. If not removed this will be what you submit. The amount field can be found throughout the site on the transaction screen, reports and search screens. provide logic using the spring mvc ,hibernate.if end user enter the non neumeric value then need to set the default value like 0.00(amount field) It is a great tutorial. I made it work on IE immediately. But it does not work on Firefox. Neither Country nor Technologies has pop-up windows to display the possible choices as it does on IE. I check that the Block pop-up window is NOT checked. 
For those who made it work, does it work on Firefox? it is great tutorial. It really helped me to configure json in spring 3.0. Big thank Viral :) error with save data in order to view JSON Object. please help me, to be run … Thanks. I am getting the below error after doing all the steps as mentioned. java.lang.IllegalStateException: Mode mappings conflict between method and type level: [/get_search_list] versus [VIEW] Note: I have used my existing controller and there I have used type level mode as view for @RequestMapping. Please suggest what change I need to do to make it run. Thanks. How to implement same with DB tables. thanks for share ! dear Viral, i need some help from you we are small team and new for Java base web application development. now we have one web vase financial application project have to develop in Java. i need help for selecting MVC technology like JSF+hibernet+spring+EJB and Jsper report for report, Jboss as AS etc please guide me which combination is best with regards and thanks Anand nice stuff:) Thanks dude…works very well…just what I was looking for! I am getting following error could you help me java.lang.IllegalStateException: Neither BindingResult nor plain target object for bean name ‘country’ available as request attribute i am new to spring Hi, I am getting the below exception. I have checked everything fine. Kindly help. Jun 13, 2013 2:34:24 PM org.springframework.web.servlet.DispatcherServlet noHandlerFound WARNING: No mapping found for HTTP request with URI [/Spring3MVC_Multiple_Row_Submit/save.html] in DispatcherServlet with name ‘spring’ Thanks, Karthik. Hi, Iam getting following error.please kindly help me out of this WARNING: No mapping found for HTTP request with URI [/Spring3MVC_Multiple_Row_Submit/index.html] in DispatcherServlet with name ‘spring’ Hi Viral, I base on your source code and developed autocomplete spring and json get list country from database mysql. 
Dao Class: My Controller And user.jsp I have run successful but when I input ‘India’ textbox, it is not display list country from databse If you resolved for me, please inbox for me, my email: [email protected] Thanks Viral patel very much. Hey Viral Patel…. gr8 tutorials… but i have a problem…. i need to get the data in the autocomplete from the database and not from the dummy database… so could you help me with such an example…. i use mysql….. thank you in advance……. hi viral, i have one problem in one spring web mvc example: i have on bean in dispatcher-servlet.xml file, i inject this bean in one java class (normal java class) then it inject property, but my problem is when i inject this bean into controller class (spring annotation base controller) then i got some exception. the dispatcher-servlet.xml is-> And the controller is like -> i need multiple values in single text box separated by commas with out using spring Hi viral, when i running these application , m getting following exception:- plz rectify the save logic in controller class WARNING: No mapping found for HTTP request with URI [/SpringMVCWithJqueryAndJsons/save.html] in DispatcherServlet with name ‘spring’ Good one… download and set up to run 1) eclipse project and java resource side has error symbols but internal has no error 2) does now work as it said Hi Viral. Thanks for such a nicely documented blog. I tried implementing the above feature. I am able to get the json response at client side. But I am not getting the auto-populated drop down list. help me. Hi, thanks. I have been able to run this successfully. Just want to know if we can modify this code to include images too in our results. Just like facebook search like when we search a page or a person. So then when we click on the list item , it opens the person’s page or any other page. . Thanks NIce article with example…keep it up :) Hi Viral, Can you give me example code with ajax posting to spring mvc controller with request param. 
Awesome, Thanks Viral I tried so many example but at the end your examples are working fine. Thanks Viral !!!! hi can you explain to me how did the ‘term’ get its value in the controller? wow very nice post. can u tell me where come from the “query” in the getCountryList()? would u tell me where come from the ‘query’ ? hey where there’s no ajax and jquery in your coding Hi, anyone can explain how terms is got? because on this part of code, they don’t define “terms” even though is a @requestParam, anyway “terms” is defined on technologies javascript code but I dont know why it’s possible to get it by request.param $( “#country” ).autocomplete({ source: ‘${pageContext. request. contextPath}/get_country_list.html’ }); dear patel i want to make categories of product for Example Mobile Nokia 1100 Nokia 2600 Man’s exa1 exa2 the above type i want to make using spring mvc and mongodb i am inserting the Document parent and child type so please suggest me how can i do in jsp page The value is passing in to the controller but the suggestion is not comming in the jsp page plz help me.When i save the data the exception shown (“HTTP Status 500 – Request processing failed; nested exception is org.springframework.web.multipart.MultipartException: The current request is not a multipart request”).
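Stepping back from the comment thread to the tutorial itself: the prefix filtering that DummyDB performs in both getCountryList() and getTechList() can be factored into one generic helper. This is a sketch of mine, not code from the article:

```java
import java.util.ArrayList;
import java.util.List;

public class PrefixMatcher {

    // Return every item whose lower-cased form starts with the
    // lower-cased query - the same rule DummyDB applies per list.
    public static List<String> match(List<String> items, String query) {
        String q = query.toLowerCase();
        List<String> matched = new ArrayList<String>();
        for (String item : items) {
            if (item.toLowerCase().startsWith(q)) {
                matched.add(item);
            }
        }
        return matched;
    }
}
```

With such a helper, DummyDB's two lookup methods collapse to one-liners such as return PrefixMatcher.match(countries, query); and the same helper could back any number of additional autocomplete fields.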
http://viralpatel.net/blogs/spring-3-mvc-autocomplete-json-tutorial/
Odoo Help

I do not have an option to send a purchase order by email to a supplier. Are there any workarounds for this?

I just created a brand new database on today's release of 7.0 and installed JUST the Purchases module. The option was there and the only time it isn't is if the purchase order is already approved. If this is the case for you, you need to PRINT the PO and email the PDF.

EDIT: Regarding your comment "But i still don't see that option in openerp demo site." I do - what I see is below - I think you are looking in a different place, or I don't understand your problem.

EDIT: The visibility of the email button is controlled by the form view definition. To change the view definition:

1) Enter debug mode (From the About OpenERP window).
2) Open an approved Purchase Order.
3) Go to the debug menu (top left).
4) Select Edit Form View.
5) Change:

<button name="wkf_send_rfq" states="sent" string="Send by Email" type="object" context="{'send_rfq':True}"/>

to

<button name="wkf_send_rfq" states="sent" string="Send RFQ by Email" type="object" context="{'send_rfq':True}"/>

6) Add a new button:

<button name="wkf_send_rfq" states="approved" string="Send PO by Email" type="object" context="{'send_rfq':False}"/>

To persist these changes through upgrades, inherit and override this view either via the UI or in a module.

Thats a good news. But i still don't see that option in openerp demo site. Is there any easy way to update to the latest build?

See the edit to my answer above.

Yes but i am talking about the purchase orders page. Once you confirm an order you have no option to send that to your customer.

Re-read my answer. I posted "The option was there and the only time it isn't is if the purchase order is already approved". You are correct.
Comment: I also posted "If this is the case for you, you need to PRINT the PO and email the PDF."

Comment: Do you mean 'Supplier'? A PO is when you purchase something, not sell something. I'm also wishing there was a button "Send by Email" for a confirmed PO. In general, a "Send by Email" button makes a process doable from my smart phone or tablet, because the alternative (save a PDF, rename the PDF, write an email, attach the PDF) can be quite a drag.

Comment: See my edit above.

Comment: The code change in the form view, as provided in the answer above, creates a "Send by Email" button. The email template is correct for sending a PO, but the attached file is the RFQ instead of the PO.

Comment: I edited my answer to create a second button instead.

Comment: The button "Send RFQ by Email" doesn't appear in either the RFQ or the PO view. There is still the original "Send by Email" button in the RFQ view. The "Send PO by Email" button appears in the PO view, but the attachment is the RFQ.

Comment: OK. It is more complicated than I originally thought. Getting the correct report will require some coding. I will look at this if I have some spare time this week.

Answer: You can override the method wkf_send_rfq.
Just add a conditional statement on the order status to change the value of default_template_id:

    def wkf_send_rfq(self, cr, uid, ids, context=None):
        '''
        This function opens a window to compose an email, with the edi
        purchase template message loaded by default
        '''
        ir_model_data = self.pool.get('ir.model.data')
        try:
            template_id = ir_model_data.get_object_reference(
                cr, uid, 'purchase', 'email_template_edi_purchase')[1]
        except ValueError:
            template_id = False
        try:
            compose_form_id = ir_model_data.get_object_reference(
                cr, uid, 'mail', 'email_compose_message_wizard_form')[1]
        except ValueError:
            compose_form_id = False
        ctx = dict(context)
        ctx.update({
            'default_model': 'purchase.order',
            'default_res_id': ids[0],
            'default_use_template': bool(template_id),
            'default_template_id': template_id,
            'default_composition_mode': 'comment',
        })
        return {
            'type': 'ir.actions.act_window',
            'view_type': 'form',
            'view_mode': 'form',
            'res_model': 'mail.compose.message',
            'views': [(compose_form_id, 'form')],
            'view_id': compose_form_id,
            'target': 'new',
            'context': ctx,
        }

Comment: I'm just starting my testing of OpenERP 7 today. So far I have learned some of the email functions are controlled by the Chatter module; have you looked at that yet?

Comment: Good thinking - but you can't install the Purchases module without the modules that it depends on to work: stock, procurement, product, account and mail (Social Networking/Chatter).
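The override suggested above boils down to choosing a different template reference depending on the order state. Here is a minimal sketch of that conditional as a pure function, outside of Odoo: the 'my_module' / 'email_template_edi_purchase_po' names are hypothetical placeholders for a template you would have to define yourself, while ('purchase', 'email_template_edi_purchase') is the stock RFQ template already used in the code above.

```python
# Sketch only: maps a purchase order state to the (module, xml_id) pair
# that wkf_send_rfq would pass to get_object_reference().
# 'my_module' / 'email_template_edi_purchase_po' are hypothetical names.
def pick_purchase_template(state):
    if state in ('approved', 'done'):
        # Confirmed PO: use a template whose report attachment is the PO.
        return ('my_module', 'email_template_edi_purchase_po')
    # Draft/sent orders keep the stock RFQ template.
    return ('purchase', 'email_template_edi_purchase')

rfq_ref = pick_purchase_template('sent')      # stock RFQ template
po_ref = pick_purchase_template('approved')   # hypothetical PO template
```

Inside the real wkf_send_rfq you would replace the hard-coded ('purchase', 'email_template_edi_purchase') pair with the result of such a conditional.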
https://www.odoo.com/forum/help-1/question/emailing-purchase-orders-10315
Add static items and results by using Visual C# from data binding to a DropDownList control

This article demonstrates how to add static items and data-bound items to a DropDownList control. The sample in this article populates a DropDownList control with an initial item.

Original product version: Visual C#
Original KB number: 312489

Requirements

The following list outlines the recommended hardware and software that you need:

- Microsoft Windows
- .NET Framework
- Visual Studio .NET
- Internet Information Services (IIS)
- SQL Server

This article refers to the following .NET Framework Class Library namespace: System.Data.SqlClient.

Use Visual C# to create an ASP.NET web application

To create a new ASP.NET web application that is named DDLSample, follow these steps:

- Open Visual Studio .NET.
- On the File menu, point to New, and then select Project.
- In the New Project dialog box, select Visual C# Projects under Project Types, and then select ASP.NET Web Application under Templates.
- In the Location box, replace WebApplication1 in the default URL with DDLSample. If you are using the local server, you can leave the server name as the one that the Location box displays.

Create the sample

In the following steps, you create an .aspx page that contains a DropDownList control. The DropDownList control is data bound to columns of the Authors table from the SQL Server Pubs database.

To add a Web Form to the project, follow these steps:

- Right-click the project node in Solution Explorer, select Add, and then select Add Web Form.
- Name the .aspx page DropDown.aspx, and then select Open. Make sure that the page is open in Design view in the editor.

Add a DropDownList control to the page. In the Properties pane, change the ID of the control to AuthorList.

Add a Label control to the page after the DropDownList control. In the Properties pane, change the ID of the control to CurrentItem.

Add a Button control to the page after the Label control.
In the Properties pane, change the ID of the control to GetItem, and then change the Text property to Get Item.

Right-click the page, and then select View Code. This opens the code-behind class file in the editor.

Add the System.Data.SqlClient namespace to the code-behind class file so that the sample code functions properly. The complete list of namespaces should appear as follows:

    using System;
    using System.Collections;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Web;
    using System.Web.SessionState;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.Web.UI.HtmlControls;
    using System.Data.SqlClient;

Add the following code to the Page_Load event:

    private void Page_Load (object sender, System.EventArgs e)
    {
        if (!IsPostBack)
        {
            SqlConnection myConn = new SqlConnection (
                "Server=localhost;Database=Pubs;Integrated Security=SSPI");
            SqlCommand myCmd = new SqlCommand (
                "SELECT au_id, au_lname FROM Authors", myConn);
            myConn.Open ();
            SqlDataReader myReader = myCmd.ExecuteReader ();

            //Set up the data binding.
            AuthorList.DataSource = myReader;
            AuthorList.DataTextField = "au_lname";
            AuthorList.DataValueField = "au_id";
            AuthorList.DataBind ();

            //Close the connection.
            myConn.Close ();
            myReader.Close ();

            //Add the item at the first position.
            AuthorList.Items.Insert (0, "<-- Select -->");
        }
    }

To use Integrated Security in the connect string, change the Web.config file for the application and set the impersonate attribute of the identity configuration element to true, as shown in the following example:

    <configuration>
        <system.web>
            <identity impersonate="true" />
        </system.web>
    </configuration>

For more information, see ASP.NET Impersonation. Modify the connection string as appropriate for your environment.

Switch to the Design view in the editor for the .aspx page. Double-click GetItem.

Add the following code to the GetItem_Click event in the code-behind class file:
    private void GetItem_Click (object sender, System.EventArgs e)
    {
        string itemText = AuthorList.SelectedItem.Text;
        string itemValue = AuthorList.SelectedItem.Value;
        CurrentItem.Text = string.Format (
            "Selected Text is {0}, and Value is {1}", itemText, itemValue);
    }

On the File menu, select Save All to save the Web Form and other associated project files.

On the Build menu in the Visual Studio .NET Integrated Development Environment (IDE), select Build to build the project.

In Solution Explorer, right-click the .aspx page, and then select View In Browser. Notice that the page opens in the browser and that the drop-down list box is populated with the initial data.

Select an item in the drop-down list box. Notice that the CurrentItem Label control displays the item that you selected. In addition, notice that the list retains the current position and the static entry.

Troubleshooting

- You must place the code that adds the static item to the ListItem collection of the control after the data binding code. If you don't add the code in this order, the list is re-created by the data binding code, which overwrites the static entry.
- The sample code checks the IsPostBack property to prevent the list from being re-created. In addition, this code checks IsPostBack to retain the selected item in the current position of the list between round trips to the server.
https://docs.microsoft.com/en-us/troubleshoot/dotnet/csharp/add-static-items-results-data-binding
Scripting basics

Introduction

ImageJ and Fiji are able to run scripts written in different languages. Despite all the differences, the approach to using the ImageJ API is similar for all of them. This article introduces the basic concepts and is valid for all scripting languages.

Get an image and perform an action

First we want to learn different ways to select an image and perform an action on it. In ImageJ1 the image is represented by an ImagePlus object. The recommended way to select an ImagePlus object is to use Script Parameters:

    #@ ImagePlus imp
    #@ Integer(label='Filter radius',description='The sigma of the gaussian filter.',value=2) sig

    print(imp)

    import ij.IJ
    IJ.run(imp, "Gaussian Blur...", "sigma=" + sig)

Script Parameters are placed at the beginning of the script file. If only one @ImagePlus is used, the front most image is selected. A second Script Parameter is used to get the radius of the gaussian filter. By using print(imp) we verify that an ImagePlus object is assigned to the variable.

To perform an operation on the selected image, we use IJ.run(). Therefore we have to import the class IJ. There are three different versions of the run() method; of these, we need the one with three parameters. The first parameter is the image to perform the action on, the second parameter defines the action (called a command) and the last parameter is used to configure the action (here we set the filter radius). The easiest way to find a command is to use the Recorder.

The second approach is similar to performing this operation using the macro language:

    import ij.IJ

    imp = IJ.getImage()
    sig = IJ.getNumber('Filter radius:', 2)
    IJ.run(imp, "Gaussian Blur...", "sigma=" + sig)

The first step is to select the front most image by using IJ's method getImage(). The second step is to use the method getNumber() to show a dialog for entering the filter radius. Running the filter is the same as in the previous example.
Finally we want to use the WindowManager to select the front most image:

    import ij.IJ
    import ij.WindowManager

    imp = WindowManager.getCurrentImage()
    sig = IJ.getNumber('Filter radius:', 2)
    IJ.run(imp, "Gaussian Blur...", "sigma=" + sig)

This is nearly identical to the use of IJ.getImage() and therefore not recommended. The WindowManager class contains some useful methods that can be used to select more than one image (e.g. getImageTitles() and getIDList()).

Opening images

In ImageJ there are many different ways to open images (or, more generally, datasets). We want to introduce some of them.

The first example uses the DatasetIOService. It is part of SCIFIO, a flexible framework for SCientific Image Format Input and Output. Two types of image files are opened. The first one is an example image, downloaded from the Internet. The second image can be chosen by the user. Both datasets are displayed using the UIService that is part of the SciJava project.

    #@ DatasetIOService ds
    #@ UIService ui
    #@ String(label='Image URL', value='') fileUrl
    #@ File(label='local image') file

    // Load a sample file from the internet and a local file of your choice.
    dataset1 = ds.open(fileUrl)
    dataset2 = ds.open(file.getAbsolutePath())

    // Display the datasets.
    ui.show(dataset1)
    ui.show(dataset2)

If a script only depends on ImageJ1 functionality, one can use the function IJ.openImage(). It will return an ImagePlus object.

    #@ String(label='Image URL', value='') fileUrl
    #@ File(label='local image') file

    import ij.IJ

    // Load a sample file from the internet and a local file of your choice.
    imagePlus1 = IJ.openImage(fileUrl)
    imagePlus2 = IJ.openImage(file.getAbsolutePath())

    // Display the datasets.
    imagePlus1.show()
    imagePlus2.show()

IJ.openImage() is based on the class ij.io.Opener. You can use it directly to open images and other files (e.g. text files). The example uses the class ij.io.OpenDialog to select a file. This is an alternative to the usage of the Script Parameter @File.
    import ij.io.Opener
    import ij.io.OpenDialog

    // Use the OpenDialog to select a file.
    filePath = new OpenDialog('Select an image file').getPath()

    // Open the selected file.
    imagePlus = new Opener().openImage(filePath)

    // Display the ImagePlus.
    imagePlus.show()

ImagePlus, ImageStack and ImageProcessor Conversion

When working with the ImageJ API you will run into the situation that you have e.g. an ImageProcessor, but what you need right now is an ImagePlus. To convert one to another, use these commands:

    // ImagePlus to ImageProcessor:
    ip = imp.getProcessor()

    // ImageProcessor to ImagePlus:
    imp = new ImagePlus('title', ip)

    // ImagePlus to ImageStack:
    stack = imp.getImageStack()

    // ImageStack to ImagePlus:
    imp = new ImagePlus('title', stack)

    // ImageStack to ImageProcessor:
    ip = stack.getProcessor(nframe)

    // ImageProcessor to ImageStack:
    stack.addSlice(ip)
https://imagej.net/index.php?title=Scripting_basics&oldid=36524
John Coolidge (12,613 Points)

Error in Working with Hash Keys exercise

I'm not sure how to solve this exercise. I assume that checking the hash variable for the calories key, and if so adding a new key called "food" set to true, would look like my code below. When I submit my answer, I get "The food variable was not found". I'm not a newb to programming in general, but perhaps I've missed something here. We just covered the .store method, and if statements were covered a while back. What else could I need to check if a key exists and, if so, create another with a Boolean value?

    hash = { "name" => "Bread", "quantity" => 1, "calories" => 100 }

    if hash.has_key?("calories")
      hash.store("food", true)
    end

1 Answer

Zach Stearman (8,444 Points)

Hi John,

The challenge is not asking you to create a new hash key called "food." The challenge simply asks you to create a new variable called food. So instead of using

    hash.store("food", true)

type

    food = true

Hope this helps!

John Coolidge (12,613 Points)

Wow, do I feel a little silly. Thanks so much, Zach! I was overdoing it. :)

John Coolidge (12,613 Points)

I've also typed the above into IRB and it works without a problem.
https://teamtreehouse.com/community/error-in-working-with-hash-keys-exercise
CheckiO Weekly #0 ― Clock Angle Review

Hello, CheckiO Friends!

In this weekly CheckiO Mission Review (I hope it will be weekly) we look at various missions and the interesting (or just random) solutions for them. As before, any ideas, comments or questions are welcome and can be left on this post.

Today we will examine the "Clock Angle" mission. You are given a time in 24-hour format, and you should calculate the lesser angle, in degrees, between the hour and minute hands of an analog clock. It's a simple mathematical model, and you can solve this even if you don't have a doctoral degree in mathematics. Just look at an analog clock. An old grandmother's cuckoo clock will be nice, or find some video with an analog clock (yeah, it sounds weird).

The time is given as a string in "HH:MM" format, so first we need to get some numbers from it. You can use a regexp if you're a fan of Perl, for example, or just split by ":" and convert each part to an int. Don't forget to convert from the 24-hour to the 12-hour interval; the modulo operation is enough for this.

From here we can find the angle of each hand relative to the "zero" position (12 hours or 0 minutes). For the minute hand it's simple: we know a full circle is 360 degrees, which makes each minute 6 degrees. Thus if we multiply the minutes by 6, we get the minute angle. With the hour hand things are a little trickier, because this hand moves with each minute too. First, each hour is 360 / 12 = 30 degrees. Second, for each minute the hour hand moves 30 / 60 = 0.5 degrees – 30 degrees per hour divided by 60 minutes per hour. So the hour angle is a sum of two numbers – hours times 30 plus minutes times 0.5.

Now we have two angles; take the absolute value of the difference and you get the result. Gotcha. Yeeeaah, I did forget one liiiiitle detail, but I think you can find it out yourself.

Now let's look at some solutions by CheckiO players. The "Clear" category's top solutions look as we expected.
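The walkthrough above, including the forgotten little detail (the raw difference between the hands can exceed 180 degrees, in which case the lesser angle is 360 minus it), can be sketched step by step like this:

```python
# Step-by-step version of the walkthrough above.
def clock_angle(time):
    hours, minutes = (int(part) for part in time.split(':'))
    hours %= 12                              # 24-hour time -> 12-hour dial
    minute_angle = 6 * minutes               # 360 / 60 = 6 degrees per minute
    hour_angle = 30 * hours + 0.5 * minutes  # 30 deg per hour + 0.5 deg per minute
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360 - diff)             # the "detail": take the lesser angle

print(clock_angle("02:30"))  # -> 105.0
print(clock_angle("13:42"))  # -> 159.0
```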
Sim0000's first solution took the most votes:

    def clock_angle(time):
        hour, minutes = map(int, time.split(':'))
        angle = abs(30 * (hour % 12) - 5.5 * minutes)
        return min(angle, 360 - angle)

Sim0000 merged the minute term of the hour hand with the minute hand's angle and got 5.5 degrees per minute. At first glance the "map" may look like overhead, but it makes the function easy to extend for a second hand and keeps it DRY. For Python newbies – "int" here is a function that converts strings to integers and strips leading zeroes.

In Bukebuer's solution we see "A if C else B" constructions instead of "%" and "min":

    hour = hour if hour < 12 else hour - 12
    ...
    angle = angle if angle <= 180 else 360 - angle

In the "Creative" category, gyahun_dash's solution uses an interesting formula:

    180 - abs(180 - (h * 60 + m) % (720 / 11) * 11 / 2)

From his comment: "I've learned basic Signal Processing, so I would search and formulate a periodicity in the outputs after solving." Mmmmm, pure math (wink).

At first look, Hatrik's solution does not seem "Creative", just more like a formula. But look carefully:

    min(abs(i - abs(30*((h, h-12)[h>12]) - 5.5*m)) for i in (360, 0))

"for"?!? Yes, I think Hatrik showed how code can "hide" questionable constructions. You do a code review for your colleague while you drink your coffee and watch Twitter with one eye, aaaand you just easily skip that construction.

And now to this mission's "Scary" category of solutions. In first place we have Veky with a scary math solution (that's why we did not see him in the "Creative" category). This solution is a real nightmare for people who turn white at the mention of TRIGONOMETRY! Earlier we saw pure math; now look at complex math (I cannot resist posting it right here):

    from cmath import rect, phase
    from math import radians, degrees

    h, m = map(int, time.split(":"))
    return degrees(abs(phase(rect(1, radians(30 * h - 5.5 * m)))))

And one more solution from Hanpari – "Awful story".
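Out of curiosity (this check is not part of the original post), the trigonometric version and the direct formula can be shown to agree, up to floating point, for every minute of the day:

```python
# Sanity check: Veky's trigonometric version vs. the direct formula.
from cmath import rect, phase
from math import radians, degrees, isclose

def direct(h, m):
    angle = abs(30 * (h % 12) - 5.5 * m)
    return min(angle, 360 - angle)

def trig(h, m):
    return degrees(abs(phase(rect(1, radians(30 * h - 5.5 * m)))))

# Both measure the angular distance between the hands on the unit circle,
# so they agree for all 24 * 60 times of the day.
assert all(
    isclose(direct(h, m), trig(h, m), abs_tol=1e-9)
    for h in range(24) for m in range(60)
)
```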
This is a simple solution, but it can be told near a campfire in a dark forest when you gather with your developer friends. Take some time, you really need to read this.

That's all folks. We are going to do these reviews weekly, so this is just the first pitch. Please tell us your thoughts in the comments below. Maybe we can take a look at random "bad" solutions to some favorite missions, or take a look at more solutions per mission. Let me know if you have any ideas!
https://py.checkio.org/blog/clock-angle-review/
Top-level dispatch interface for dispatching via the dynamic dispatcher.

#include <Dispatcher.h>

Detailed Description

Top-level dispatch interface for dispatching via the dynamic dispatcher.

Definition at line 70 of file Dispatcher.h.

Add a listener that gets called whenever a new op is registered or an existing op is deregistered. Immediately after registering, this listener gets called for all previously registered ops, so it can be used to keep track of ops registered with this dispatcher.

Definition at line 79 of file Dispatcher.cpp.

Remove an operator from the dispatcher. Make sure you removed all kernels for this operator before calling this.

Definition at line 55 of file Dispatcher.cpp.

Register a new operator schema. The handle returned can be used to register kernels to this operator or to call it.

Definition at line 42 of file Dispatcher.cpp.
https://caffe2.ai/doxygen-c/html/classc10_1_1_dispatcher.html
pset_info(2) - get information about a processor set

SYNOPSIS

    #include <sys/pset.h>

    int pset_info(psetid_t pset, int *type, uint_t *numcpus,
        processorid_t *cpulist);

DESCRIPTION

The pset_info() function returns information on the processor set pset.

If type is non-null, then on successful completion the type of the processor set will be stored in the location pointed to by type. The only type supported for active processor sets is PS_PRIVATE.

If numcpus is non-null, then on successful completion the number of processors in the processor set will be stored in the location pointed to by numcpus.

If numcpus and cpulist are both non-null, then cpulist points to a buffer where a list of processors assigned to the processor set is to be stored, and numcpus points to the maximum number of processor IDs the buffer can hold. On successful completion, the list of processors up to the maximum buffer size is stored in the buffer pointed to by cpulist.

If pset is PS_NONE, the list of processors not assigned to any processor set will be stored in the buffer pointed to by cpulist, and the number of such processors will be stored in the location pointed to by numcpus. The location pointed to by type will be set to PS_NONE.

If pset is PS_MYID, the processor list and number of processors returned will be those of the processor set to which the caller is bound. If the caller is not bound to a processor set, the result will be equivalent to setting pset to PS_NONE.

RETURN VALUES

Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.

ERRORS

The pset_info() function will fail if:

EFAULT
    The location pointed to by type, numcpus, or cpulist was not null and not writable by the user.

EINVAL
    An invalid processor set ID was specified.

ENOTSUP
    The caller is in a non-global zone, the pools facility is active, and the processor is not a member of the zone's pool's processor set.
ATTRIBUTES

See attributes(5) for descriptions of the following attributes:

SEE ALSO

pooladm(1M), psrinfo(1M), psrset(1M), zoneadm(1M), processor_info(2), pset_assign(2), pset_bind(2), pset_create(2), pset_destroy(2), pset_getloadavg(3C), attributes(5)

NOTES

The processor set of type PS_SYSTEM is no longer supported.
http://docs.oracle.com/cd/E26502_01/html/E29032/pset-info-2.html
py2app / doc / changelog.rst

Release history

py2app 0.8

py2app 0.8 is a feature release

- Issue #92: Add option '- 'email' recipe, but require a new enough version of modulegraph instead. Because of this py2app now requires modulegraph 0.11 or later.

py2app 0.7.4

- Disabled the 'email' 'pkg/foo.py' to be in namespace package 'pkg' unless there is a zipfile entry for the 'pkg' folder (or there is a 'pkg/__init__.py' entry).
- '.git' and '.hg' directories while copying package data ('.svn' and 'C 'pkg
- 'raw-plugins (or 'matplotlib_plugins' in setup.py) is a list of plugins to include. Use '-' to not include backends other than those found by the import statement analysis, and '*' to include all backends (without necessarily including all of matplotlib). As an example, use --matplotlib-plugins 'python 'includes' 'Versions'
- 'Resources' folder is no longer on the python search path, it contains the scripts while Python modules and packages are located in the site-packages directory. This change is related to issue #30. The folder 'Resources
- 'argv_emulation' to False when you're using a 64-bit build of python, because that option is not supported on such builds.
- py2app now clears the temporary directory in 'build' and the output directory in 'dist' 'examples'

py2app 0.5.2

py2app 0.5.2 is a bugfix release

Bug fixes:

- Ensure that the right stub executable gets found when using the system python 2.5

py2app 0.5.1

py2app 0.5.1 is a bugfix release

Bug fixes:

- Ensure stub executables get included in the egg files
- Fix name of the bundletemplate stub executable for 32-bit builds

py2app 0.5

py2app 0.5 is a minor feature release.

Features:

- Add support for the --with-framework-name option of Python's configure script, that is: py2app now also works when the Python framework is not named 'Python.framework'.
- Add support for various build flavours of Python (32bit, 3-way, ...)
- py2app now actually works for me (ronaldoussoren@mac.com) with a python interpreter in a virtualenv environment.
- Experimental support for python 3

Bug fixes:

- Fix recipe for matplotlib: that recipe caused an exception with current versions of matplotlib and pytz.
- Use modern API's in the alias-build bootstrap code; without this 'py2app -A' will result in broken bundles on a 64-bit build of Python. (Patch contributed by James R Eagan)
- Try both 'import Image' and 'from PIL import Image' in the PIL recipe. (Patch contributed by Christopher Barker)
- The stub executable now works for 64-bit application bundles
- (Lowlevel) The application stub was rewritten to use dlopen instead of dyld APIs. This removes deprecation warnings during compilation.

py2app 0.4.3

py2app 0.4.3 is a bugfix release

Bug fixes:

- A bad format string in build_app.py made it impossible to copy the Python framework into an app bundle.

py2app 0.4.2

py2app 0.4.2 is a minor feature release

Features:

- When the '--strip' option is specified we now also remove '.dSYM' directories from the bundle.
- Remove dependency on a 'version.plist' file in the python framework
- A new recipe for PyQt 4.x. This recipe was donated by Kevin Walzer.
- A new recipe for virtualenv; this allows you to use py2app from a virtual environment.
- Adds support for converting .xib files (NIB files for Interface Builder 3)
- Introduces an experimental plugin API for data converters. A conversion plugin should be defined as an entry-point in the py2app.converter group:

      setup(
          ...
          entry_points = {
              'py2app.converter': [
                  "label = some_module:converter_function",
              ]
          },
          ...
      )

  The conversion function should be defined like this:

      from py2app.decorators import converts

      @converts('.png')
      def optimize_png(source, proposed_destination, dryrun=0):
          # Copy 'source' to 'proposed_destination'
          # The conversion is allowed to change the proposed
          # destination to another name in the same directory.
          pass

Bug fixes:

- This fixes an issue with copying a different version of Python over to an app/plugin bundle than the one used to run py2app with.
py2app 0.4.0

py2app 0.4.0 is a minor feature release (and was never formally released).

Features:

- Support for CoreData mapping models (introduced in Mac OS X 10.5)
- Support for python packages that are stored in zipfiles (such as zip_safe python eggs).

Bug fixes:

- Fix incorrect symlink target creation with an alias bundle that has included frameworks.
- Stuffit tends to extract archives recursively, which results in unzipped code archives inside py2app-created bundles. This version has a workaround for this "feature" for Stuffit.
- Be more careful about passing non-constant strings as the template argument of string formatting functions (in the app and bundle templates), to avoid crashes under some conditions.

py2app 0.3.6

py2app 0.3.6 is a minor bugfix release.

Bug fixes:

- Ensure that custom icons are copied into the output bundle
- Solve compatibility problem with some haxies and inputmanager plugins

py2app 0.3.5

py2app 0.3.5 is a minor bugfix release.

Bug fixes:

- Resolve disable_linecache issue
- Fix Info.plist and Python path for plugins

py2app 0.3.4

py2app 0.3.4 is a minor bugfix release.

Bug fixes:

- Fixed a typo in the py2applet script
- Removed some, but not all, compiler warnings from the bundle template (which is still probably broken anyway)

py2app 0.3.3

py2app 0.3.3 is a minor bugfix release.

Bug fixes:

- Fixed a typo in the argv emulation code
- Removed the unnecessary py2app.install hack (setuptools does that already)

py2app 0.3.2

py2app 0.3.2 is a major bugfix release.

Functional changes:

- Massively updated documentation
- New prefer-ppc option
- New recipes: numpy, scipy, matplotlib
- Updated py2applet script to take options, provide --make-setup

Bug fixes:

- No longer defaults to LSPrefersPPC
- Replaced stdlib usage of argvemulator with an inline version for i386 compatibility

py2app 0.3.1

py2app 0.3.1 is a minor bugfix release.
Functional changes:

- New EggInstaller example

Bug fixes:

- Now ensures that the executable is +x (when installed from egg this may not be the case)

py2app 0.3.0

py2app 0.3.0 is a major feature enhancements release.

Functional changes:

- New --xref (-x) option similar to py2exe's that produces a list of modules and their interdependencies as a HTML file
- sys.executable now points to a regular Python interpreter alongside the regular executable, so spawning sub-interpreters should work much more reliably
- Application bootstrap now detects paths containing ":" and will provide a "friendly" error message instead of just crashing.
- Application bootstrap now sets PYTHONHOME instead of a large PYTHONPATH
- Application bootstrap rewritten in C that links to CoreFoundation and Cocoa dynamically as needed, so it doesn't imply any particular version of the runtime.
- Documentation and examples changed to use setuptools instead of distutils.core, which removes the need for the py2app import
- Refactored to use setuptools, distributed as an egg.
- macholib, bdist_mpkg, modulegraph, and altgraph are now separately maintained packages available on PyPI as eggs
- macholib now supports little endian architectures, 64-bit Mach-O headers, and reading/writing of multiple headers per file (fat / universal binaries)

py2app 0.2.1

py2app 0.2.1 is a minor bug fix release.

Bug fixes:

- macholib.util.in_system_path understands SDKs now
- DYLD_LIBRARY_PATH searching is fixed
- Frameworks and excludes options should work again.

py2app 0.2.0

py2app 0.2.0 is a minor bug fix release.

Functional changes:

- New datamodels option to support CoreData. Compiles .xcdatamodel files and places them in the Resources dir (as .mom).
- New use-pythonpath option. The py2app application bootstrap will no longer use entries from PYTHONPATH unless this option is used.
- py2app now persists information about the build environment (python version, executable, build style, etc.)
  in the Info.plist and will clean the executable before rebuilding if anything at all has changed.
- bdist_mpkg now builds packages with the full platform info, so that installing a package for one platform combination will not look like an upgrade to another platform combination.

Bug fixes:

- Fixed a bug in standalone building, where a rebuild could cause an unlaunchable executable.
- Plugin bootstrap should compile/link correctly with gcc 4.
- Plugin bootstrap no longer sets PYTHONHOME and will restore PYTHONPATH after initialization.
- Plugin bootstrap swaps out thread state upon plug-in load if it is the first to initialize Python. This fixes threading issues.

py2app 0.1.8

py2app 0.1.8 is a major enhancements release:

Bugs fixed:

- Symlinks in included frameworks should be preserved correctly (fixes Tcl/Tk)
- Fixes some minor issues with alias bundles
- Removed implicit SpiderImagePlugin -> ImageTk reference in PIL recipe
- The --optimize option should work now
- weakref is now included by default
- anydbm's dynamic dependencies are now in the standard implies list
- Errors on app launch are brought to the front so the user does not miss them
- bdist_mpkg now compatible with pychecker (data_files had issues)

Options changed:

- deprecated --strip, it is now on by default
- new --no-strip option to turn off stripping of executables

New features:

- Looks for a hacked version of the PyOpenGL __init__.py so that it doesn't have to include the whole package in order to get at the stupid version file.
- New loader_files key that a recipe can return in order to ensure that non-code ends up in the .zip (the pygame recipe uses this)
- Now scans all files in the bundle and normalizes Mach-O load commands, not just extensions. This helps out when using the --package option, when including frameworks that have plugins, etc.
- An embedded Python interpreter is now included in the executable bundle (sys.executable points to it); this currently only works for framework builds of Python
- New macho_standalone tool
- New macho_find tool
- Major enhancements to the way plugins are built
- bdist_mpkg now has a --zipdist option to build zip files from the built package
- The bdist_mpkg "Installed to:" description is now based on the package install root, rather than the build root

py2app 0.1.7

py2app 0.1.7 is a bug fix release:

- The bdist_mpkg script will now set up sys.path properly, for setup scripts that require local imports.
- bdist_mpkg will now correctly accept ReadMe, License, Welcome, and background files by parameter.
- bdist_mpkg can now display a custom background again (0.1.6 broke this).
- bdist_mpkg now accepts a build-base= argument, to put build files in an alternate location.
- py2app will now accept main scripts with a .pyw extension.
- py2app's not_stdlib_filter will now ignore a site-python directory as well as site-packages.
- py2app's plugin bundle template no longer displays GUI dialogs by default, but still links to AppKit.
- py2app now ensures that the directory of the main script is added to sys.path when scanning modules.
- The py2app build command has been refactored such that it would be easier to change its behavior by subclassing.
- py2app alias bundles can now cope with editors that do atomic saves (write new file, swap names with existing file).
- macholib now has minimal support for fat binaries. It still assumes big endian and will not make any changes to a little endian header.
- Add a warning message when using the install command rather than installing from a package.
- New simple/structured example that shows how you could package an application that is organized into several folders.
- New PyObjC/pbplugin Xcode Plug-In example.
py2app 0.1.6

Since I have been slacking and the last announcement was for 0.1.4, here are the changes for the soft-launched releases 0.1.5 and 0.1.6.

py2app 0.1.6 was a major feature enhancements release:

- py2applet and bdist_mpkg scripts have been moved to Python modules so that the functionality can be shared with the tools.
- Generic graph-related functionality from py2app was moved to altgraph.ObjectGraph and altgraph.GraphUtil.
- bdist_mpkg now outputs more specific plist requirements (for future compatibility).
- py2app can now create plugin bundles (MH_BUNDLE) as well as executables.
- New recipe for supporting extensions built with sip, such as PyQt. Note that due to the way that sip works, when one sip-based extension is used, all sip-based extensions are included in your application. In practice, this means anything provided by Riverbank; I don't think anyone else uses sip (publicly).
- New recipe for PyOpenGL. This is very naive and simply includes the whole thing, rather than trying to monkeypatch their brain-dead version acquisition routine in __init__.
- Bootstrap now sets ARGVZERO and EXECUTABLEPATH environment variables, corresponding to the argv[0] and the _NSGetExecutablePath(...) that the bundle saw. This is only really useful if you need to relaunch your own application.
- More correct dyld search behavior.
- Refactored macholib to use altgraph; it can now generate GraphViz graphs, and more complex analysis of dependencies can be done.
- macholib was refactored to be easier to maintain, and the structure handling has been optimized a bit.
- The few tests that there are were refactored in py.test style.
- New PyQt example.
- New PyOpenGL example.

py2app 0.1.5

py2app 0.1.5 is a major feature enhancements release:

- Added a bdist_mpkg distutils extension, for creating an Installer metapackage from any distutils script.
- Includes PackageInstaller tool
- bdist_mpkg script
- setup.py enhancements to support bdist_mpkg functionality
- Added a PackageInstaller tool, a droplet that performs the same function as the bdist_mpkg script.
- Create a custom bdist_mpkg subclass for py2app's setup script.
- Source package now includes PJE's setuptools extension to distutils.
- Added lots of metadata to the setup script.
- py2app.modulegraph is now a top-level package, modulegraph.
- py2app.find_modules is now modulegraph.find_modules.
- Should now correctly handle paths (and application names) with unicode characters in them.
- New --strip option for py2app build command; strips all Mach-O files in output application bundle.
- New --bdist-base= option for py2app build command; allows an alternate build directory to be specified.
- New docutils recipe.
- Support for non-framework Python, such as the one provided by DarwinPorts.

py2app 0.1.4

py2app 0.1.4 is a minor bugfix release:

- The altgraph from 0.1.3 had a pretty nasty bug in it that prevented filtering from working properly, so I fixed it and bumped to 0.1.4.

py2app 0.1.3

py2app 0.1.3 is a refactoring and new features release:

- altgraph, my fork of Istvan Albert's graphlib, is now part of the distribution
- py2app.modulegraph has been refactored to use altgraph
- py2app can now create GraphViz DOT graphs with the -g option (TinyTinyEdit example)
- Moved the filter stack into py2app.modulegraph
- Fixed a bug that may have been in 0.1.2 where explicitly included packages would not be scanned by macholib
- py2app.apptemplate now contains a stripped down site module as opposed to a sitecustomize
- Alias builds are now the only ones that contain the system and user site-packages directory in sys.path
- The pydoc recipe has been beefed up to also exclude BaseHTTPServer, etc.
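The modulegraph split means that code importing py2app.find_modules would, from 0.1.5 on, need modulegraph.find_modules instead. A common way to stay compatible across such a rename is to probe module names in order; here is a generic sketch (the helper name is mine, and the standard-library names in the usage comment are stand-ins, since neither py2app nor modulegraph may be installed):

```python
import importlib

def first_importable(*names):
    """Return the first module from `names` that imports successfully.

    Useful across a package rename, e.g. py2app.find_modules becoming
    modulegraph.find_modules in py2app 0.1.5:

        find_modules = first_importable("modulegraph.find_modules",
                                        "py2app.find_modules")
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (names,))
```

The probe order puts the new name first so that updated installations never fall back to the legacy layout.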
Known issues:

- Commands marked with XXX in the help are not implemented
- Includes all files from packages; it should be smart enough to strip unused .py/.pyc/.pyo files (to save space, depending on which optimization flag is used)
- macholib should be refactored to use altgraph
- py2app.build_app and py2app.modulegraph should be refactored to search for dependencies on a per-application basis

py2app 0.1.2

(this is a good example of how to write your own recipe, and how to deal with complex applications that mix code and data files)

py2app 0.1.1

py2app 0.1.1 is primarily a bugfix release:

- Several problems related to Mac OS X 10.2 compatibility and standalone building have been resolved
- Scripts that are not in the same directory as setup.py now work
- A new recipe has been added that removes the pydoc -> Tkinter dependency
- A recipe has been added for py2app itself
- A wxPython example (superdoodle) has been added. It demonstrates not only how easy it is (finally!) to bundle wxPython applications, but also how one setup.py can deal with both py2exe and py2app.
- A new experimental tool, py2applet, has been added. Once you've built it (python setup.py py2app, of course), you should be able to build simple applications simply by dragging your main script and optionally any packages, data files, Info.plist and icon it needs.

Known issues:

- Includes all files from packages; it should be smart enough to strip unused .py/.pyc/.pyo files (to save space, depending on which optimization flag is used).
- The default PyRuntimeLocations can cause problems on machines that have a /Library/Frameworks/Python.framework installed. The workaround is to set a plist that has the following key: PyRuntimeLocations=['/System/Library/Frameworks/Python.framework/Versions/2.3/Python'] (this will be resolved soon)
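The PyRuntimeLocations workaround above is just a key in the bundle's Info.plist. How the custom plist reaches py2app depends on the version (a plist file or a setup option), so as a neutral sketch, the standard-library plistlib module can render and verify the exact fragment the changelog prescribes; the path is copied verbatim from the known-issues note:

```python
import plistlib

# The workaround plist from the known-issues note above. In a py2app
# setup script, a dict like this would be merged into the generated
# bundle's Info.plist.
PLIST = {
    "PyRuntimeLocations": [
        "/System/Library/Frameworks/Python.framework/Versions/2.3/Python"
    ],
}

def plist_xml(data: dict) -> str:
    """Render a plist dict as the XML that would appear in Info.plist."""
    return plistlib.dumps(data).decode("utf-8")

if __name__ == "__main__":
    print(plist_xml(PLIST))
```

Round-tripping through plistlib.loads is a cheap way to confirm the fragment is well-formed before shipping it in a bundle.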
Have you ever read any book about treasure exploration? Have you ever seen any film about treasure exploration? Have you ever explored for treasure yourself? If you have no such experience, you will never know what fun treasure exploring brings.

Recently, a company named EUC (Exploring the Unknown Company) plans to explore an unknown place on Mars, which is considered to be full of treasure. Because of the fast development of technology and the harsh environment for human beings, EUC sends robots to explore the treasure. To make it easy, we use a graph formed by N points (numbered from 1 to N) to represent the places to be explored. Some points are connected by one-way roads: through a road, a robot can only move from one end to the other end, but cannot move back. For some unknown reason, there is no cycle in this graph. The robots can be sent to any point from Earth by rockets. After landing, a robot can visit some points through the roads, and it can choose some points on its route to explore. Note that the routes of two different robots may contain some of the same points. For financial reasons, EUC wants to use the minimal number of robots to explore all the points on Mars. As an ICPCer with excellent programming skill, can you help EUC?

Input

The input will consist of several test cases. For each test case, two integers N (1 <= N <= 500) and M (0 <= M <= 5000) are given in the first line, indicating the number of points and the number of one-way roads in the graph respectively. Each of the following M lines contains two different integers A and B, indicating there is a one-way road from A to B (0 < A, B <= N). The input is terminated by a single line with two zeros.

Output

For each test case of the input, print a line containing the least number of robots needed.
Sample Input

1 0
2 1
1 2
2 0
0 0

Sample Output

1
1
2

Solution: This is solved the same way as the minimum disjoint path cover of a DAG, except that here the robots' routes may share points, so we first use Floyd-Warshall to compute the transitive closure. A minimum disjoint path cover of the closure then answers the original question: the result is N minus the size of a maximum bipartite matching. The transitive closure computation itself is simple; see the code.

Code:

#include <cstdio>
#include <cstring>
using namespace std;

const int MAXN = 505;

bool map[MAXN][MAXN];

struct Edge {
    int next, to;
} E[MAXN * MAXN];
int head[MAXN * 2], tot;

inline void Add(int from, int to) {
    E[++tot].next = head[from];
    E[tot].to = to;
    head[from] = tot;
}

// ---- compute the transitive closure (Floyd-Warshall) ----
void Floyd(int N) {
    for (int k = 1; k <= N; ++k)
        for (int i = 1; i <= N; ++i)
            for (int j = 1; j <= N; ++j) {
                if (map[i][j]) continue;
                if (map[i][k] && map[k][j]) {
                    map[i][j] = true;
                    Add(i, j + N);  // add the closure edge to the bipartite graph
                }
            }
}

// ---- Hungarian algorithm: search for an augmenting path from x ----
int pre[MAXN * 2];
bool used[MAXN * 2];

int Find(int x) {
    for (int i = head[x]; i; i = E[i].next) {
        int to = E[i].to;
        if (used[to]) continue;
        used[to] = true;
        if (pre[to] == 0 || Find(pre[to])) {
            pre[to] = x;
            return 1;
        }
    }
    return 0;
}

inline void init() {
    memset(map, false, sizeof map);
    memset(pre, 0, sizeof pre);
    memset(head, 0, sizeof head);
    tot = 0;
}

int main() {
    int N, M;
    while (scanf("%d %d", &N, &M) && (N || M)) {
        init();
        int from, to;
        for (int i = 1; i <= M; ++i) {
            scanf("%d %d", &from, &to);
            map[from][to] = true;
            Add(from, to + N);
        }
        Floyd(N);
        int sum = 0;  // size of the maximum matching
        for (int i = 1; i <= N; ++i) {
            memset(used, false, sizeof used);
            sum += Find(i);
        }
        printf("%d\n", N - sum);  // minimum path cover = N - matching
    }
    return 0;
}
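For clarity, the same algorithm can be sketched in Python (the function name is mine; this illustrates what the C++ solution above does, it is not competitive-programming-grade code):

```python
# Minimum path cover with shared vertices: take the transitive closure,
# then the answer is n minus a maximum bipartite matching on the closure.

def min_robots(n, edges):
    """Minimum number of robots needed to visit all n points (1-based)."""
    reach = [[False] * (n + 1) for _ in range(n + 1)]
    for a, b in edges:
        reach[a][b] = True
    # Floyd-Warshall transitive closure.
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True

    match = [0] * (n + 1)  # match[v] = left vertex matched to right vertex v

    def augment(u, seen):
        # Hungarian-style augmenting path search from left vertex u.
        for v in range(1, n + 1):
            if reach[u][v] and v not in seen:
                seen.add(v)
                if match[v] == 0 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matching = sum(augment(u, set()) for u in range(1, n + 1))
    return n - matching
```

On the sample cases above this yields 1, 1 and 2: one robot covers the single point, one robot covers the path 1 -> 2, and two disconnected points need two robots.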
Problem solve

Get help with specific problems with your technologies, process and projects.

Learning .NET: Tips for getting started with .NET development
Our "Getting Started" tip series provides an introductory look at leading-edge technology like ASP.NET AJAX, .NET 3.0 and Visual Studio Team System.

Book Review: Understanding .NET, Second Edition
Ed Tittel calls this book an effective .NET tutorial for software developers and their managers. It covers the My namespace, ASP.NET, the CLR and other important topics.

Beginning Visual Studio Team System development
With Visual Studio Team System, Microsoft brings collaboration into the SDLC. This tip will help you get the most out of the product's planning, management and testing features.

Book excerpt: Code snippets in Visual Studio 2005
This chapter from "Professional Visual Studio 2005" explains code snippets, which let developers save frequently used chunks of code and call them up whenever needed.

Beginning LINQ development, Part 1
The Language Integrated Query integrates data queries right into VB 2008 and C# 3.0 and works with objects, XML and SQL.
This tip highlights resources for getting to know LINQ.

Beginning ASP.NET AJAX development, Part 3
In the latest tip in this series, Ed Tittel considers the ASP.NET AJAX road map and links to resources for both client-side and server-side development.

Chart FX offers great data visualizations
Software FX offers an award-winning (and sizable) collection of offerings designed to enhance and extend .NET applications, particularly those created within Visual Studio.

Getting started with ASP.NET AJAX development, Part 2
In this tip we revisit ASP.NET AJAX development and present resources for getting the most out of server-side programming, the UpdatePanel and add-in controls.

Book Excerpt: ASP.NET 2.0 Web Parts in Action
An ASP.NET 2.0 app built with Web Parts has sections that communicate with the server independently, improving performance and the user experience. This book excerpt explains how to manage these applications.
Wikiversity talk:Help desk

Pakistan

Why do we people not have talk on the help desk to get some info from each other? So my question to others is: "Can somebody help me in finding the free e-resources of the Law of Pakistan?" My email address is "sajjadhussainz@gmail.com"
- I repeated this question at Wikipedia. Here is a start: w:Media laws of Pakistan (from Category:Pakistani law). --JWSchmidt 12:24, 7 September 2006 (UTC)

Redundancy

I can't help but feel that this help desk is really redundant with that at Wikipedia. Why have both? The Jade Knight 05:28, 9 October 2006 (UTC)
- Maybe we could institute a policy of "We do help with homework, we just don't give the exact answer" or thereabouts. This might be the future site of an online tutoring section. --Rayc 23:20, 16 November 2006 (UTC)

subject help desks

There is now an astronomy help desk at Topic:Astronomy/Help desk and I know of at least one other at School of Mathematics Help Desk. Perhaps we should start a page to list these to better coordinate questions? --mikeu 16:44, 6 February 2007 (UTC)
- Good point. Maybe at least we can start adding all such pages to Category:Help desks (though not all such pages have "Help" and/or "desk" in the title). Then we need to work on Wikiversity:Questions... Cormaggio beep 17:08, 6 February 2007 (UTC)
- I've added all the Help Desks I found under Category:Help Desks to the header. How does it look? StuRat 07:08, 9 April 2007 (UTC)

The Wikipedia Reference Desk

On the Wikipedia Reference Desk's talk page, there is a great discussion going on about whether they should answer all kinds of questions placed there. It's hard to explain the debate, but you could go there and see for yourselves. I suggested that they send here the questions that don't fit there. I want to know whether this would be good for this Help Desk. (My nick on Wikipedia is A.Z.) A.z.
17:04, 31 March 2007 (UTC)
- Yes, in particular there are many subjective and opinion questions there which some find objectionable. I would like to set up a formal mechanism for redirecting such questions here, as long as that's OK with everyone here. I would also be willing to answer many of the questions sent here. This might cause your number of questions to grow dramatically, as the Wikipedia Ref Desk gets quite a few subjective questions. It might also make sense to direct strictly factual questions there, since those are answered rather well at the Wikipedia Ref Desk, at present. StuRat 17:41, 2 April 2007 (UTC)
- Sorry for the very late reply to this thread. I did read this back in April, but at a busy time for me. If I had taken time to comment I would only have said, "go for it". I sometimes participate at the Wikipedia reference desks, particularly for science, but until this month I had never imagined that there was so much "drama" surrounding the rules for reference desk editing. I think that all Wikiversity editors should respect the spirit of Wikiversity:Cite sources when replying to help desk questions. It makes sense to me that the help desk will probably eventually start to gain momentum by specializing in questions that are not suited for the Wikipedia reference desk. I hope the Wikiversity community can be clever about finding ways to harness enthusiasm for discussing topics and directing that enthusiasm towards full compatibility with the educational mission of Wikiversity. Spirited discussion, including opinion, is a good starting point for exploring a topic. At Wikiversity, we should be able to move past mere opinion into scholarly research and analysis. I think it is a serious problem that so many Wikipedians either do not know that Wikiversity exists, do not understand what Wikiversity is, or just do not support the existence of Wikiversity. Such Wikipedians are always going to make difficulties for Wikiversity and Wikiversity participants who try to make links from Wikipedia to Wikiversity. I'm thinking that Wikiversity should have a learning project where we can study this problem, and at the very least keep track of efforts at Wikipedia to prevent Wikiversity participants from linking Wikipedia pages to Wikiversity pages. We have Wikiversity and Wikipedia services as a starting point, and it might make sense to think in terms of a requirement that Wikiversity have well-developed pages before trying to make links to them from Wikipedia. In the case of the help desk, this might mean having a special subpage (Wikiversity:Help desk/Wikipedia links?) that explains why Wikipedians should allow links from the Wikipedia reference desk to the Wikiversity help desk. --JWSchmidt 17:16, 10 September 2007 (UTC)
- I don't perceive there to be a "problem" on Wikipedia, where people oppose linking to Wikiversity - except for the recent medical advice stuff. Still, I would look at it from the other side, and address the question of why people would want to link to Wikiversity. Having well-developed, or simply well-structured and defined pages/projects would be a good reason to do so, I agree. Cormaggio talk 22:17, 10 September 2007 (UTC)
- To be specific, the problem I perceive at Wikipedia is when links to Wikimedia sister projects are treated like external links. Wikipedia generally enforces fairly strict rules for external links: not too many, and just select the best ones. There is another long-standing tradition of allowing internal links to any relevant pages, even if they are just stubs; that is how the stubs get attention and grow. Links to sister projects should not be treated like external links. Wikipedians should recognize the global goal of the Wikimedia Foundation and help the sister projects grow by allowing links to them, even if the linked pages are not the greatest.
--JWSchmidt 23:52, 10 September 2007 (UTC)
- I would tend to suggest that Wikipedians will tend to support treating links to fellow Wikimedia projects – Wikiversity, Wikibooks, Wiktionary, etc. – as internal links where those links tend to be consistent with Wikipedia policies and goals. On the Reference Desk at Wikipedia, our attitude towards links to Wikiversity is coloured by the practice of using such links to attempt to game Wikipedia's internal rules. I doubt many editors have any objection to links which encourage freewheeling philosophical or historical discussion on your Help Desk here. Speaking for myself, I think it's healthy and beneficial for learners of all ages and backgrounds to have the opportunity to test and present their ideas in a welcoming and safe environment.
- However, there are strong objections and concerns surrounding the links that a few editors have made to some potentially very poor and very dangerous medical advice. (The canonical example is Wikiversity:Help desk#Intracranial pressure. It is also worth bearing in mind that sometimes incorrect advice isn't so obviously questionable.) The links are created to circumvent some very specific Wikipedia policy, which is troubling. The advice offered is from amateurs, is not vetted by any professional, and is probably not reviewed even by very many nonprofessionals, and some of it is downright scary. In general, Wikipedia tries to encourage links to sources that are reputable, respected, accurate, and peer-reviewed or fact-checked. On the issue of offering medical advice (diagnoses, prognoses, or suggested treatment options) Wikiversity just doesn't come anywhere close to those standards.
- There's a world of difference between "I was told Hitler was the most evil person in history; is this true?" and "I was told my intracranial pressure was elevated; what should I do?". The relative amounts and types of harm that may result from amateurs spouting off their best guesses to those questions seem fairly obvious. Consequently, I expect that editors at Wikipedia will continue to apply common sense to the sorts of interproject links that they allow or encourage; trying to create hard-and-fast bright-line rules like "All inter-project links are good (or bad), and should be treated as internal (or external) links" is asking for trouble. TenOfAllTrades 01:24, 11 September 2007 (UTC)
- TenOfAllTrades: First, I'd like to see the definition of "medical advice" that you are applying. Second, Wikiversity does not invite participants to either ask for medical advice or provide medical advice. Wikiversity is still working out a way to work with participants who do ask for medical advice or provide medical advice; see Medical advice tutorial and Medical practice and the law. Third, I have not been able to get too concerned about the "canonical example" you mentioned. "My EEG revealed a slightly high intracranial pressure. However IMHO after downing the pressure to normal I will a bit more passive and slow. Does it make much sense?" This "question" makes no sense to me and is not even coherent English. I doubt if this person was actually subjected to EEG with a resulting diagnosis of "slightly high intracranial pressure". The person asking the question seems to be asking if slightly reducing intracranial pressure will make them more "passive and slow". This sounds like a joke about ancient theories saying the pressure of "psychic pneuma" produces bodily motion. I cannot see how the reply from "StuRat" constitutes medical advice. I hope Wikiversity can develop learning resources that will educate people about why Wikimedia projects do not want participants to ask for medical advice or provide medical advice.
If Wikipedia sends such people to Wikiversity we can try to clue them in to what a wiki website is and what they should and should not expect to get from one. --JWSchmidt 03:11, 11 September 2007 (UTC)
- I read the advice offered,
"I believe that doctors, upon finding something "abnormal" (outside the typical range), often decide, incorrectly, that they need to correct it. If this condition has been with you all your life, your body may have adapted to it and may actually function better with that pressure. It may adjust to the new pressure, or it may not, only time will tell."
as implying that no medical intervention was necessary, and further suggesting that the patient shouldn't listen to his doctor's treatment recommendations. There are several problems with the answer to this question (the following list may not be exhaustive).
- The original poster (OP) apparently doesn't have any understanding of what intracranial pressure is, or what it means. No effort was made in the response to clarify this.
- As you've noted, the original poster has made errors of English and/or medical terminology. Offering medical advice to someone who hasn't even clearly conveyed their case information is a bad idea.
- The answer discourages the OP from trusting all physicians. While informed consent, patient education, second opinions, etc. are all valuable and important parts of the practice of medicine, this is a pretty irresponsible statement coming from a random individual on the internet.
- The answer also demonstrates no particular knowledge about intracranial pressure or why it might be elevated.
- The answer discourages the OP from seeking additional, professional, competent advice ("only time will tell").
- The answer implicitly supports the OP's hypothesis (that he will be more passive and slow with a normal intracranial pressure) with the statement "your body may have adapted to it and may actually function better with that pressure". This is, flatly, nonsense.
- I admit that I hadn't considered that the question might have been a joke; I was working under the assumption that the OP was just genuinely confused, or was not writing in his own first language, or possibly (and this is a really scary case) was a minor.
- In answer to your first question, for the purposes of Wikipedia we define medical advice as offering any comment that can reasonably be interpreted as a diagnosis, prognosis, or recommended course of treatment; see w:Wikipedia:Reference desk/guidelines/Medical advice. You're welcome to define it differently on Wikiversity, of course. I can't comment on whether or not StuRat's answer would constitute the Practice of Medicine from a legal standpoint; I'm not a lawyer, nor am I likely to be familiar with the relevant law, nor do I know how one would approach the jurisdictional mess that would attach to such a case. However, if the question were a legitimate one, asked seriously by a rather ill-informed person, consider the potential harm that could arise from following the advice given here (implicitly and explicitly). TenOfAllTrades 04:14, 11 September 2007 (UTC)
- First, I think anyone describing personal health or medical test results (even if fictitious) at a Wikimedia wiki page should be directed to an explanation of why such personal accounts are not welcome. Doing so could be part of an even more stringent method of screening health-related questions than just putting restrictions on "medical advice". In my view, the difficulty with applying the Wikipedia definition of "medical advice" is that it is too easy for people to have conflicting views of what "reasonably" constitutes medical advice. In the "canonical example", the questioner presented a "medical" theory: lower intracranial pressure will result in a particular behavioral change, and then asked if that theory makes sense. There are hundreds of other similar questions that can be asked. If I masturbate will I get freckles? If I will myself to grow taller will I grow taller? If I feel a lump under my skin does that mean that I have a bacterium under my skin and I will soon have a fever? Is it productive to say that these kinds of questions are asking for medical advice? At some point, exactly where depends on our personal level of medical knowledge, we will all begin to feel we can no longer "reasonably" view some of these questions as requests for medical advice. At some point we start to find it difficult to say, "You should ask a doctor about that". I'm thinking that rather than debate what can "reasonably" be imagined to be a request for medical advice, maybe we should just reject any health-related question that is framed in a personal way. What I mean by "reject" is that if a questioner frames a question in the context of personal health information, then it might be best to direct the person asking the question to Medical advice tutorial, where they would be asked to think about what they can reasonably expect to get from a wiki website and where they are asked to remove their original question from the help desk. Admittedly, Wikiversity is only just getting around to developing that approach, but I think it might work. The second side of the equation concerns providing medical advice. You raised objections (above) to StuRat's reply, but I do not think his reply provided a diagnosis, a suggested treatment or a prognosis. I'd like to develop the Wikiversity help desk as a place where emphasis is placed on providing links to reliable sources of health-related information as discussed at Legal definitions of medical practice and medical advice. In the case of the "canonical example" what I see is a silly question that stimulated StuRat to comment on a related subject. In discussing that related subject, StuRat was not providing an answer to the original question, but was raising an interesting and widely discussed issue about how medicine is often practiced.
In pedagogical terms, it is not unusual to respond to a confused question with information that might re-direct the confused person's thinking along more productive lines of thought. I've been discussing with StuRat the idea that maybe Wikiversity can direct such confused questioners to Medical advice tutorial before providing them with any health-related information. --JWSchmidt 06:56, 11 September 2007 (UTC)

Archive

I have archived half the page. My explanation for doing this is that the page was too long, which made it hard to navigate and slow to load. The archived threads did not get new responses for more than one month. I hope everyone agrees with the archival. a.z. 04:56, 8 April 2007 (UTC)
- Looks good to me, although I would have archived up to the end of March. Perhaps I'll add a second archive for Feb-March 2007 in a week or so. StuRat 04:38, 9 April 2007 (UTC)

Curiosity and learning

I'm just curious about the activity on this page - is this page being coordinated with the help desk on Wikipedia, or are people just interested in answering as many questions as possible? ;-) I've noticed many new names on this page that I don't see anywhere else on RecentChanges, and it looks like a bit of a mini-community here. I suppose this makes me wonder how best to integrate this page with learning activities here on Wikiversity - I think people should be given BOTH the opportunity to read a well-written and/or informative article on Wikipedia, AND be able to make a space in which to learn about their interests here on Wikiversity. This is entirely related to our Learning by doing ethos (see Portal:Education/Wikiversity model), in which people can learn about subjects by asking about them and discussing. In this sense this help desk can be the seed from which a learning project/community could grow. What do you all think? Also, does anyone here frequent IRC?
It might be good to hang out in #wikiversity-en in order to sound out some ideas for how to develop active learning communities (in general or in relation to a specific question). Thoughts? Cormaggio talk 11:48, 16 May 2007 (UTC)
- There are some questions which the Wikipedia Ref Desk doesn't want to address, like those requiring original research. Three of us (myself, User:a.z., and User:Lewis) thought it was a good fit to redirect some of those questions here and answer them here. This also could help the Wikiversity Help Desk, which otherwise seemed to be in danger of lacking the critical mass needed to succeed. That is, if there is only one question a month, nobody will bother looking for new questions, so answers will take too long, and then everybody will stop asking. StuRat 20:49, 16 May 2007 (UTC)

Posterity

All these questions could be viewed as students asking questions in a classroom, so I'll ask if anything is being done to copy these questions to the appropriate department where they can be somehow integrated into a lesson (not that I'm especially clear on how to make a lesson). I guess that really just sounds like a lot of busy work for slim-to-none results. I don't know, does anybody else have some thoughts? Xaxafrad 03:46, 8 August 2007 (UTC)
- If we have an appropriate article already, and something here looks like a good addition, then that would make sense. However, in most cases a new article would need to be created, and the info in answer to a question here would barely constitute a stub if a new article were to be created for it. StuRat 15:51, 11 September 2007 (UTC)
- Perhaps some interesting stubs would be useful in stimulating interest in various schools/topics. Wikipedia's stubs in the early days varied widely in length, quality, and content, but stimulated a lot of interest in improving the stubs to articles.
Perhaps lengthy discussions of interesting topics could start to stimulate interest in evolving learning trails or lessons/resources pertinent to the topic. My current response to an empty topic title, syllabus, list of subtopics, etc. with no actual entry points is to simply click on, leaving no record of passing. My typical response to an active discussion or casual pile of interesting information is to "twikify". Mirwin 06:49, 16 September 2007 (UTC)

Medical advice

After being notified about a user dispensing amateur medical advice on the help desk, I was wondering if perhaps it might be prudent not to cover such topics, to avoid unintentional medical injury. The same would probably also apply to legal advice, since there's usually a lot of potential liability attached to providing false information. Also, in some jurisdictions, providing medical or legal advice without proper qualifications is against the law. What are your thoughts? sebmol ? 13:04, 6 September 2007 (UTC)
- I think most people agree that Wikiversity should not be a place where anyone expects to get or give medical advice. Part of the problem is that not everyone agrees on a common definition of what constitutes "medical advice" or "practicing medicine without a license". How do we make a distinction between discussing and learning about medical and legal subjects while at the same time not providing medical or legal advice? As a start towards exploring these questions I started Medical practice and the law. --JWSchmidt 13:25, 6 September 2007 (UTC)
- I think a general disclaimer that basically says "Use at your own risk, Wikiversity is not fit for any and all purposes. By reading or using Wikiversity, you agree to not hold Wikiversity or any contributors liable for any and all of your actions", kind of like the disclaimers commonly found in most software licenses including the GPL, should be enough.
--dark lama 13:34, 6 September 2007 (UTC) - I don't see this primarily as a legal how-can-we-cover-our-ass issue, which is what a disclaimer addresses. There's a dimension of morality and personal responsibility here too. I agree with JWSchmidt that there is not necessarily a common understanding of what constitutes medical advice. Perhaps it would be useful to create a definition we can all agree on as a limit not to be crossed when answering questions at the help desk. sebmol ? 16:38, 6 September 2007 (UTC) - I do not think the disclaimer, by itself, covers the Wikimedia ass if we are passive and allow Wikiversity participants to freely request and receive medical advice. We have to make a good faith and active effort to direct Wikiversity participants away from the temptation to create an exchange system for medical advice. It should not be that hard to guide participants towards factual, education-oriented discussion about medical information that can be supported by citation of reliable sources. Such discussions need to take place within the framework of our education-oriented mission, not a framework that resembles the provision of medical advice to patients. I'm trying to construct specific examples of how to do this at Legal definitions of medical practice and medical advice. There is a real distinction that can be made between giving "medical advice" and providing health-related information... it's not just a matter of semantics. I think the Wikipedia reference desk has shown that there will always be people who come to wiki information desks seeking medical advice. Rather than simply delete or ignore questions that seek medical advice, a more education-oriented approach is to have a system in place that functions to shift those people out of their unrealistic expectations and into a frame of mind that will allow them to recognize what a wiki website can and cannot do for them. --JWSchmidt 18:17, 6 September 2007 (UTC) - That's a good comment, John.
If we can facilitate an educational process around a certain medical condition, that's absolutely great. If we however seem to be giving advice to someone without any frame of reference for this (i.e. medical expertise), then I think we are doing the asker a disservice in raising their expectations that this is the place for them to get the advice they need. Cormaggio talk 11:17, 7 September 2007 (UTC) - It seems to me that the underlying theory of Wikiversity is that a group of people learning about a common interest or problem can create materials useful to themselves and others. Consider support groups for specific problems such as AIDS, cancer, diabetes... exchange of specific information can stimulate interest in studying specifics and compiling information useful to self and future others. Further, the beginning database of each participant is likely to be limited to personal experience, since most participants will not be trained medical professionals. I think we need to be careful about dictating to people how they will or will not interact at Wikiversity, or run the risk of having little interaction locally. JWSchmidt's proposal to encourage best practices in studying specific issues seems a useful approach to settling some of the controversy around this issue, as long as it is not used as a bludgeon against people seeking specific information related to personal problems or interests. Not everyone has access to trained medical personnel, and the Wikimedia and Wikiversity mission statements would seem to preclude limiting access to medical information. Mirwin 07:04, 16 September 2007 (UTC) - We have no real control over people who might read health-related information at Wikiversity and use that information in an attempt to deal with an existing medical problem. However, we do have control over the context within which health-related information is presented at Wikiversity. We know that there are people who come to wiki websites in search of medical advice.
We can teach such people that there is an important difference between using a wiki for collaborative gathering of health-related information and using a wiki to obtain medical advice. If potential Wikiversity participants cannot learn that distinction, it is better that they not participate at Wikiversity. If insisting that participants make that distinction is a "bludgeon", then I will play the role of chief bludgeon wielder. However, I don't think shifting people from an un-welcomed search for medical advice to a welcomed search for health-related information needs to be thought of as a "bludgeon". --JWSchmidt 16:09, 16 September 2007 (UTC) - I agree, I find your proposed approach to be diplomatic and valuable. The bludgeon would be non-diplomatic or unsmooth characters, leery of wasting their time typing a few extra words, posting WIKIVERSITY IS NOT FOR MEDICAL ADVICE --->> link to prohibitive policy. I do have one concern. I think there are probably many people in the world who will soon arrive at the web who cannot routinely afford professional medical advice and live in places where it is not illegal for them to assist each other. Indeed, I suspect that it is not illegal to give medical advice in the U.S. as long as one does not charge for it or misrepresent oneself. The strawman regarding legal liability could easily be extended to any and all information exchange on public wikis. Advise someone that McDonald's coffee is excellent and served hot, and one/many (and/or the Wikimedia Foundation) might easily be listed as a codefendant when the recipient of the opinion scalds their gonads. Still, sticking to general information and applicable hard data, and making it clear that how it is used is a matter of personal responsibility, is a fine way to start out in my opinion. The situation is certain to evolve and get clarified as Wikiversity grows in population and capability. Mirwin 06:19, 17 September 2007 (UTC)

A definitive answer (?)
There is a recognized difference between legal education and legal advice: advice is specific to a given current case or individual; education is of general application. The distinction can be used of medical education and medical advice as well. - Examples:

Q: My situation/symptoms are X, Y and Z. What should I do?
A: The law/diagnosis is thus (explanation), so you should do this. ( = ADVICE)

Q: What is the law surrounding (...) / what are the symptoms of and treatment for (...) ?
A: It is thus (explanation). ( = EDUCATION)

As any creative educationalist will quickly see, the difference between specific and general can be blurred, including by forms of expression as well as matters of substance. The answer to blurring is very simple: the medical or legal educationalist should not blur. What this means in practice is that legal and medical educationalists should stick to well recognized genres or paradigms of education (i.e. those established in their respective fields), rather than experimenting. In general I dislike saying "this is the answer" of anything, but in this case, I believe it is indeed "the" answer. Legal definitions of medical practice and medical advice seems to be in agreement with some of this.

Shouldn't we be a bit more specific?

- Listing possible causes of a condition with the usual treatments for each (this is tricky, because listing one condition and saying "you definitely have X and should do Y" is, indeed, medical advice). I'm also in favor of "turning" medical questions into questions we can answer. For example, a recent Q over at Wikipedia said the poster had injured their back doing heavy lifting and asked how to repair the damage. I referred them to a doctor for that, but then added some tips on proper heavy lifting, with the goal of preventing this type of injury from recurring.
StuRat 13:21, 8 September 2007 (UTC) - The first problem is, if someone comes to a wiki and asks for medical advice, just telling them "see a doctor" or "we do not give medical advice" is no guarantee that they will see a doctor rather than interpret as medical advice the health-related information you provide. When you provide medical information to someone who has tried to initiate a patient-doctor relationship, you risk entering into what is functionally a patient-doctor relationship. We do not want to facilitate the formation of a patient-doctor relationship. The second problem is that if you provide health-related information to people who have asked for medical advice, then you are encouraging other people to also ask for medical advice. Our goal is to discourage people from asking for medical advice, not encourage more people to ask for medical advice. If you create a wiki space where you allow people to ask for medical advice then you have created the equivalent of an attractive nuisance... in effect, you are "asking for trouble" or creating the conditions by which observers can reasonably assume that you are helping people create trouble for themselves. Specifically, there will be people who, since they are allowed to ask for medical advice and since by doing so they are getting health-related information, will assume that they are getting the help they need for their medical problem, and they will assume that they do not need to see a doctor (even if you tell them to see a doctor). If we knowingly create a wiki space where we know that people will believe they are getting useful medical advice, then we are helping create a patient-doctor relationship. We do not want to help create such a relationship. I agree that advising people to seek the advice of a doctor or other professional is not giving them professional advice. However, that does not mean that telling people to see a doctor is the best way to respond to a request for medical advice.
If someone is misguided enough to come to a wiki help desk and ask for medical advice, you can reasonably suspect that they have a good chance of ignoring your suggestion that they see a doctor. In other words, saying, "see a doctor" to such a person can be as fruitless as having a sign next to a swimming pool that says "danger". A child will ignore the sign and you are still legally responsible for allowing the child to get into your pool. We need the wiki equivalent of a fence that keeps children out of a pool. We need active procedures for preventing people from seeking and giving medical advice at Wikiversity. Rather than just say "see a doctor", we need to explain what a wiki website is and why we do not accept requests for medical advice. I think we should interpret a request for medical advice as a RED FLAG. The impulse to provide health-related information to such a person should be repressed and the response should be a concentrated effort to educate the person who asked for medical advice. We need to take it upon ourselves to educate such people about Wikiversity and our reasons for not entering into a patient-doctor relationship with them. I think it is absurd to say that, "my exercise advice was not intended to be medical advice so I have not given medical advice". When someone asks you for medical advice you should know that they are likely to interpret anything you say as being medical advice, even if you do not intend it to be medical advice. Your intention does not make it "safe" for you to provide health-related information to such a person. Having a desire that no child drown in your pool does not protect you from being held responsible for not having a fence around your pool. --JWSchmidt 14:48, 8 September 2007 (UTC) - But what about all the other areas where info is provided on the Internet that can potentially be misused, like chemical formulae given for explosives and illegal drugs, info on addictiveness of various drugs, info on weapons, etc. 
Couldn't any such info be viewed as an "attractive nuisance"? That is, despite any disclaimers you list, can't the reader still misuse that info to harm themselves? StuRat 15:02, 9 September 2007 (UTC) - I'm not trying to argue against having health-related information at Wikiversity. I'm not trying to argue against interacting with people who come to wiki websites asking for medical advice. I'm trying to find a way to "do a little dance" that will gracefully shift the thinking of people who ask for medical advice. I think having in place a system that can achieve this shift in thinking fits the educational mission of Wikiversity. I'd like to find a graceful and efficient way of shifting people away from thinking, "Maybe I can get medical advice at this wiki". I want to show such people how to think differently about what they should expect a wiki website can do for them. I'm not sure exactly how to do this, but I'm thinking that we can try to have a Wikiversity learning resource that tries to address this specific problem. Medical practice and the law was started as an attempt to think about how current Wikiversity participants should respond when dealing with questions that seek medical advice. That page is not for the people who ask for medical advice. We need a new main namespace page (maybe called Medical advice tutorial) that will help orient people who ask for medical advice. I'm thinking that the first response to a request for medical advice should be to just provide a link to a page such as Medical advice tutorial. That page would explain why Wikiversity does not want to encourage people to ask for medical advice at Wikimedia wiki websites and would explain how people can constructively use online health-related information without making the mistake of avoiding medical professionals when medical advice is needed.
I like "learn by doing" and I'm thinking that people who have asked for medical advice could be asked to re-word their own question in order to change it into a more general request for health-related information rather than a request for help with a personal health problem. Medical advice tutorial could provide examples of how to do this. How does this differ from just letting others re-write the original question that asked for medical advice? If the person asking the question has to re-write their own question then we know that they are thinking about the problem of asking for medical advice online. To answer the question about people doing harm by making use of information, that is where the "swimming pool analogy" breaks down. We might reasonably try to prevent all children from wandering into pools, but we do not try to prevent everyone from obtaining potentially dangerous information. Our culture values access to information, so potentially dangerous information is usually not restricted. However, I think it fits within the educational mission of Wikiversity to try to provide potentially dangerous information in a way that helps people use that information in safe ways. --JWSchmidt 16:02, 9 September 2007 (UTC) - I agree with most of what you said, but do think we should rewrite the Q (either explicitly or implicitly), rather than waiting for the original poster to do so, so we can show them how it's done and provide any answers we're qualified to give in a more timely manner. StuRat 15:45, 11 September 2007 (UTC) - I'd favor having something at the top of the help desk page (in the instructions) such as, "Do not tell us about your personal health matters. If you are tempted to ask a question here about an existing health problem, please read the Medical advice tutorial first."
In my view, when a health-related question involves information about a specific patient, then we are flirting with the establishment of what is functionally a doctor-patient relationship. The person asking the question is at risk of interpreting any reply as constituting medical advice, advice that they can use in dealing with an existing medical problem. The person asking such a question, by providing information about their personal health, has demonstrated a serious lack of understanding about the type of information they can get at a Wikimedia wiki website. I think the correct way to reply to such questions is to explain what Wikiversity is and that nobody at Wikiversity wants to hear about existing medical cases/problems. If someone does provide information about a specific medical case/problem, I think they should be asked to go to Medical advice tutorial. Upon completing that tutorial they should be able to return to the help desk and re-word their question as an impersonal general knowledge question. It is better that they do this themselves because it demonstrates that they have thought about what is going on. I'm trying to think outside of the box that defines how Wikipedia deals with "medical advice". In trying to avoid "practicing medicine without a license" I think it might be more useful to avoid anything that moves Wikiversity towards a doctor-patient relationship. There cannot be a doctor-patient relationship if the "doctor" knows nothing about the patient's health status. Does that make sense? I think any health-related information can be interpreted as medical advice, and I think we are really trying to avoid making it possible for editors to interpret health-related information at Wikiversity as constituting medical advice. --JWSchmidt 19:05, 11 September 2007 (UTC)

Straw poll

It seems to me we are in agreement on most issues.
The one area of disagreement seems to be whether responders can reformulate a question asking for medical advice themselves or whether we should require the original poster to do so. I'd like to get an idea of how many people are on each side of this relatively minor issue, so I can then abide by the consensus. So, please list your support below one of the following:

Allow responder to reformulate question

- Support. Time may be of the essence. For example, if someone asks "How can I lift my window A/C unit without aggravating my back injury?", we need to reformulate it to "What is the proper way to lift a window A/C unit?" and give the answer before they do so improperly and injure themselves further. Also, criticizing the way a question is asked and refusing to answer until it is reformulated appears to fall into the "biting the newbies" category, to me. StuRat 12:48, 17 September 2007 (UTC)

Require original poster to reformulate question

Continue the discussion

- I'm continuing the discussion, not participating in a vote. StuRat has suggested (above) that we need to quickly reformulate questions that are asked when those questions are not in the form of "general knowledge questions". The example provided by StuRat suggests that Wikiversity has some obligation to answer quickly so as to prevent people from hurting themselves. However, anyone placing themselves in the position of relying on the help desk so that they do not hurt themselves is deeply confused about what the help desk is for. How should we characterize the behavior of anyone at the help desk who is rushing to provide time-critical information that will influence a decision affecting someone's health? In my view, anyone doing this is participating in a mis-guided attempt to collaborate with the questioner who is engaged in an un-welcome use of Wikiversity project resources.
Maybe we need to link the phrase (at the top of the help desk) "general knowledge questions" to a subpage that explains what a general knowledge question is and why questioners must format their questions as general knowledge questions. Such a subpage might also be a good place to explain that providing answers to help desk questions is not a race. Providing good answers often involves reflection, research and writing a thoughtful reply that includes links to useful sources of information. Providing good answers also involves paying attention to the way questions are asked. When a question is formatted in a way that indicates the person asking the question is confused about the purpose of the help desk, then there is no need to answer the question at all. In my view, the correct response to mis-guided questions is to educate the questioner about the help desk and the nature of wiki websites. We can provide a link to a page that explains how to ask a good question. The questioner can demonstrate that they have learned the purpose of the help desk and the nature of wiki websites by re-writing their own question as a general knowledge question. In my view, if someone else re-writes the question then we are not correctly taking advantage of an educational opportunity and we would risk encouraging people to remain ignorant and mis-use the help desk again in the future. --JWS 16:45, 17 September 2007 (UTC)

Link to this page from Wikipedia

It's being discussed here whether the reference desk guidelines should link to the Wikiversity Help Desk. The reference desk is for reference only, but this Help Desk allows people to discuss things. The purposes are different. I feel they should link to one another, so people looking for reference go there and people looking for dialectical conversations come here. a.z.
03:18, 8 September 2007 (UTC)

Telling people to provide links

I very strongly disagree with the sentence added here saying that people should place emphasis on providing links to useful sources of information. This is appropriate for the Wikipedia reference desk, but I believe this help desk should allow and encourage debate, discussion, dialectical conversation, etc. In fact, the rationale when I added a link to this page from Wikipedia was that, while the reference desk is not an appropriate place for long debates, this is. There should be no problem with not adding any links. I propose that the sentence be changed to say that providing links may be useful. a.z. 04:28, 21 September 2007 (UTC) - Citation of sources is a skill that promotes constructive debates. Making controversial statements without bothering to cite sources invites others to ask for supporting sources and evidence. Providing evidence and citing sources to support controversial statements is a normal part of useful discussions. If you fail to support your controversial statements with evidence then it is natural for other people to ignore you. "should be no problem" <-- Problems can arise when there are participants in a discussion who fail to cite sources, and rather than having a useful conversation people end up asserting opinions rather than gathering and analyzing evidence. --JWSchmidt 05:28, 21 September 2007 (UTC) - Links are good in any debate, but shouldn't be required; I would say they "should be encouraged". However, when telling somebody they are wrong, proof should be provided. Otherwise, if neither party provides proof, the discussion can degenerate into a shouting match. While links are one form of proof, there are others, like logic, and "original research". For example, if someone denies that cats have an inner eyelid, one form of "original research" proof would be to provide a pic of your cat with the third eyelid showing.
If someone says "if God existed he would create the world, and the world does exist, therefore God must have made it", a logical counter-argument can be made, by explaining the flaw in their logic. In this case you could say "then, following your logic, if it rains the street gets wet, so, therefore, if the street is wet, it must have rained". Then point out that a wet street could result from a flood, burst dam or water main, snow melt, dew, etc. StuRat 15:15, 21 September 2007 (UTC) I would like to discuss those issues with you, but now I have another point, one that I would like to focus on. I don't like the sentence being on the help desk page. On this talk page, signed by the authors, it's fine, but I feel that "official" recommendations on the desk are just unhelpful. JWSchmidt, StuRat, and I and everyone could write our own signed opinions about what is good on the help desk on a page called Wikiversity:Help desk/Suggestions. That sentence was added with no signature and people who read it won't know what it is supposed to be (a policy, a recommendation, a suggestion, a guideline, etc). I would not like people on the desks "quoting" the sentence as if it were a rule. I really disliked it when I read: "When responding to help desk questions, place emphasis on providing links to useful sources of information." --JWS 03:34, 18 September 2007 (UTC) If you want to make a point that someone's argument on a thread needs a link, make it. If you want to tell people how good it is to provide links, do it using your words and arguments, as both of you did above, and sign it. If you want to prevent people from having their arguments ignored for not providing links, you can do that without making rules, without citing rules. People are not stupid and they don't need you to tell them what to do. Readers are intelligent people that can choose whether they'll accept a source-less argument or not, if they're going to take seriously an argument without logic, whatever. 
If you want them to do something, if you think they should see something differently, just convince them to do so. If you feel someone's argument is not good because it lacks links (or anything else), you may ignore it if you wish, and this is not such a big deal. I know no one started removing posts from the desk yet, no one started saying that people were breaking the rules, no one is discussing over rule-interpretation, but that's where we'll be if we don't stop rules like those from being created now. It does not harm anyone to say "there are human races" or "there are not human races" without citing any source. I mean, it may not be a good thing to do, but the post should not be removed for that, no editor should be frowned upon for doing that, no rule needs to be created to solve that. It's not a personal attack or anything like that, which are for what rules should exist. As I said, you can ignore it, you can say it's wrong, you can say people shouldn't take the argument seriously, but we don't need a rule. Just to make it clear: I think StuRat's posts on that thread are fine and interesting. If I feel I need some link to further understand what he means or to know whether he can back up his arguments, I'll just request it to him. Also just to make it clear: I have nothing against JWS nor do I dislike them. a.z. 02:37, 22 September 2007 (UTC) - I think JWS/JWSchmidt was just stating his opinion, not making a new rule. Perhaps it would have made this clearer had he added "In my opinion..." at the start. StuRat 03:47, 22 September 2007 (UTC) - Yes, that would be a good way to state his opinion. I'm referring to the sentence on the help desk, are you? It ought to have been "in my opinion, when responding to help desk questions, one should place emphasis on providing links to useful sources of information." a.z. 
04:16, 22 September 2007 (UTC) On the Wikiversity:Help desk I saw the statement, "your response is neither true nor informed," a statement that was made with no attempt to cite supporting sources. I added into the discussion a reminder (a quote from the instructions at the top of the page) that was intended to be a helpful suggestion for how to move the discussion towards a more productive exchange of information. --JWSchmidt 22:25, 22 September 2007 (UTC) - Oh, you're JWS :-) I thought you were two different users. I have no problem with you adding that sentence to the thread, but I don't want it to be unsigned on the header. It is merely an opinion, and I would not like it to become a rule. I think I have explained this fairly well above. a.z. 03:13, 23 September 2007 (UTC) - I think A.Z. might also object to the appearance of you unilaterally changing the Help Desk rules. I, for one, would prefer to see any rules change discussed here first; then, if a consensus is reached to make a rules change, we can update the header with the agreed-upon text. StuRat 15:54, 23 September 2007 (UTC)

Header

I made a header template. Any objections? a.z. 18:09, 22 September 2007 (UTC)

Traffic

Does anyone know what kind of daily traffic this site gets? How about the school of Theology? Thanks. Magosgruss 21:48 02.28.2008 (UTC) - I mean all of Wikiversity, or maybe just the main page if the first is too daunting a task. Magosgruss 03.01.2008 17:34 (UTC)

Updated Nav-Header

I've just done Template:Help desk header 0.5, an updated version of the current one - am I fine to update the current one? Terra 18:54, 20 April 2008 (UTC)

Questions/comments about archiving

The new header seems to say it will only archive sections that have gone over 19 days without a new response (which is fine) and that have at least 5 signed responses (which is NOT fine). Many, perhaps most, of the answered questions here will never have 5 signed responses.
A single signed response is quite common. Also, we don't want to keep unanswered questions here forever; they should be archived, too. Further, manual archiving (which I've been doing every couple of months) doesn't seem like it will work now, because the manually archived sections would need to be interspersed between all the automatically archived sections, which would be far more work than the block archiving I've been doing. I recommend removing any requirement for a number of signed responses in the automatic archiving system. If this can't be done, let's turn it off and go back to manual archiving. StuRat 14:47, 22 April 2008 (UTC) - I did that and manually archived March, 2008. It appears that with all that additional header info I now need to archive every month to keep the page from going over 32 KB. One other problem, though: there are two "Restore Discussion" buttons on the new archive page, but neither of them appears to restore anything; they just open up the editor as if the user wants to post a new question. StuRat 19:53, 22 April 2008 (UTC)
http://en.wikiversity.org/wiki/Wikiversity_talk:Help_desk
24 October 2011 18:35 [Source: ICIS news]

HOUSTON (ICIS)--US methyl ethyl ketone (MEK) contracts fell by 8 cents/lb ($176/tonne, €127/tonne) in October on soft demand and more available supply, sources confirmed on Monday. Some of the drop was attributed to September price weakness, but values were unchanged in September because of a lack of consensus during the period. The decrease took MEK prices to 126-130 cents/lb, as assessed by ICIS, but market sources said prices could slip even further before the end of the month. Weakness that began in August could take prices down another 7-8 cents/lb in the near term.

A producer said supply continues to outpace demand and that pressure from Asian imports also is exerting downward pressure on contract values. A buyer disagreed, however, saying sellers have been lowering prices more slowly and that some supply is still tightly controlled, although market-wide allocations are no longer being heard.

Prices had spiked globally in March after [...] But contract prices settled flat in July and appeared to signal the onset of weaker pricing.

Also, an indicator of future construction activity in the [...] The American Institute of Architects (AIA) said its monthly Architecture Billings Index (ABI) was 46.9 for the month, a drop of more than 4 points, or nearly 9%, from 51.4 in August. The September ABI score reflected a sharp decrease in demand for design services from AIA-member firms.

Feedstock pricing has been mixed. Butane for Mont Belvieu, [...]

($1 = €0.72)

For more on ME
http://www.icis.com/Articles/2011/10/24/9502496/us-oct-mek-falls-8-centslb-on-lower-demand-may-drop-further.html
Hello NTDebuggers, from time to time we see the following problem. It's another access violation, and the debug notes below are from a minidump. Here is what we need to know…

· Generally speaking, what happened to cause this AV?
· What method would you use to isolate the root cause of the failure?

There are a lot of ways to do this. We look forward to hearing your approach. We will post our methods and answer at the end of the week. If you need anything please let us know.

-------------------------------------------

Microsoft (R) Windows Debugger Version 6.8.0001.0
User Mini Dump File: Only registers, stack and portions of memory are available

0:000> k 123
ChildEBP RetAddr
0017f93c 75e4edb5 ntdll!ZwWaitForMultipleObjects+0x15
0017f9d8 75e430c3 kernel32!WaitForMultipleObjectsEx+0x11d
0017f9f4 75ef2084 kernel32!WaitForMultipleObjects+0x18
0017fa60 75ef22b1 kernel32!WerpReportFaultInternal+0x16c
0017fa74 75ebbe60 kernel32!WerpReportFault+0x70
0017fb00 7732d15a kernel32!UnhandledExceptionFilter+0x1c1
0017fb08 773000c4 ntdll!_RtlUserThreadStart+0x6f
0017fb1c 77361d05 ntdll!_EH4_CallFilterFunc+0x12
0017fb44 772eb6d1 ntdll!_except_handler4+0x8e
0017fb68 772eb6a3 ntdll!ExecuteHandler2+0x26
0017fc10 772cee57 ntdll!ExecuteHandler+0x24
0017fc10 10011127 ntdll!KiUserExceptionDispatcher+0xf
*** ERROR: Module load completed but symbols could not be loaded for crash3.exe
WARNING: Frame IP not in any known module. Following frames may be wrong.
0017ff40 0040104a 0x10011127
0017ffa0 75eb19f1 crash3+0x104a
0017ffac 7732d109 kernel32!BaseThreadInitThunk+0xe
0017ffec 00000000 ntdll!_RtlUserThreadStart+0x23

0:000> lm
start end module name
00400000 0040d000 crash3 (no symbols)
6c250000 6c288000 odbcint (deferred)
6c290000 6c2f5000 odbc32 (deferred)
72a00000 72a86000 comctl32 (deferred)
74820000 749b4000 comctl32_74820000 (deferred)
75240000 75251000 samlib (deferred)
75260000 75281000 ntmarta (deferred)
754b0000 75510000 secur32 (deferred)
75510000 75570000 imm32 (deferred)
75700000 75790000 gdi32 (deferred)
757a0000 75870000 user32 (deferred)
758a0000 758a6000 nsi (deferred)
758b0000 759f4000 ole32 (deferred)
75a00000 75aaa000 msvcrt (deferred)
75ab0000 75ba0000 rpcrt4 (deferred)
75ba0000 75c1d000 usp10 (deferred)
75c20000 75c75000 shlwapi (deferred)
75d60000 75e27000 msctf (deferred)
75e30000 75f40000 kernel32 (pdb symbols)
76140000 76189000 Wldap32 (deferred)
76190000 7624f000 advapi32 (deferred)
76250000 76d1e000 shell32 (deferred)
76d20000 76d94000 comdlg32 (deferred)
76da0000 76dcd000 ws2_32 (deferred)
77280000 77287000 psapi (deferred)
77290000 77299000 lpk (deferred)
772b0000 77400000 ntdll (pdb symbols)

Good luck and happy debugging.

Jeff-

[Update: our answer. Posted 5/13/2008]

We enjoyed seeing different people's approaches on this week's puzzler. This was a simple module unload. We loaded a lib, did a GetProcAddress, freed the lib, and called the function. The dump was a minidump created via .dump /m C:\dump file. There are various ways this type of scenario may arise. Obviously someone could unload a lib, but why? In most cases I've seen, it was due to a ref count problem in a COM object: poor accounting leads to one too many decrements, the DLL gets unloaded, and you are left with a simple crash footprint. There are quite a few ways to track this down. First of all, if you had the debugger attached and got a full dump or /ma dump, you would have seen the loaded module list.
This would have been a dead giveaway, and is part of why we did the .dump /m. There are other options you can enable that make tracking module loads easy under the debugger. I personally like "loader snaps" if I'm trying to track down module load shenanigans. To enable this, just go into the image section of the gflags tool and enable loader snaps for the exe in question. Now attach a debugger and watch the module load and GetProcAddress details scroll by. Yet another popular approach is to use Process Monitor. This tool is not only easy to set up, but it also gives you great logs with call stacks and other details such as registry accesses. This puzzler provided the bare minimum data required. We did not give you much to go on, because sometimes in real debugging scenarios you have to work with a lack of data. I really liked how many people questioned the source of the dump file. It really shows how familiar you all are with the various dump types. Great work!

A trend that I've noticed recently is cases involving paged pool depletion with high MmSt tag usage that remains after trying KB304101 (PoolUsageMaximum). These pool allocations are used by the memory manager for section object prototype PTEs. There are generally only two options when this happens: 1) upgrade to a 64-bit platform, or 2) reduce the size of the volumes. But we may want to know what mapped files are using this memory. Here is how it can be done. Start with !memusage.

5: kd> !memusage
Compiling memory usage data (99% Complete).
Zeroed: 19073 ( 76292 kb)
Free: 0 ( 0 kb)
Standby: 1468824 (5875296 kb)
Modified: 368 ( 1472 kb)
ModifiedNoWrite: 1927 ( 7708 kb)
Active/Valid: 605772 (2423088 kb)
Transition: 0 ( 0 kb)
Bad: 0 ( 0 kb)
Unknown: 0 ( 0 kb)
TOTAL: 2095964 (8383856 kb)
Building kernel map
Finished building kernel map
Scanning PFN database - (100% complete)

Following this you will see the list of mapped files and their control areas.
Usage Summary (in Kb):
Control Valid Standby Dirty Shared Locked PageTables name
…
8c62a638 1108 945868 3064 0 0 0 mapped_file( $Mft )

The control area is the address at the far left and has a Segment field that contains the total number of PTEs.

5: kd> dt 8c62a638 _CONTROL_AREA Segment->TotalNumberOfPtes
nt!_CONTROL_AREA
+0x000 Segment :
+0x004 TotalNumberOfPtes : 0x1e8b00

The MmSt allocations contain these PTEs, so all we need to do is multiply this by the size of a PTE to get the total size of the MmSt allocations for this control area. Note that there may be multiple allocations for this control area, but this number reflects the total size of all of them.

5: kd> ?? 0x1e8b00 * sizeof(nt!_MMPTE)
unsigned int 0xf45800

So now we know the MmSt size in bytes for a single control area, or mapped file. What if we would like to see the totals for all mapped files from the !memusage output? First, place the !memusage output in a text file and remove all header information. You will also need to remove all tail information, including the page file and process summaries. Every line should look like these:

8c62a638 1108 945868 3064 0 0 0 mapped_file( $Mft )
8b06ac18 516 0 0 0 0 0 No Name for File

We want to include the "No Name for File" entries since those are valid mapped files even though the name could not be located. Next strip out everything but the control area address. You can use Excel or any other tool that allows you to select and delete columns in a text file. Now we have a file with a single column of all the control areas on the system. The following debugger command script can be used to process this file.

$$ countptes.txt script
r $t2 = 0;
$$ Replace the memusage.txt file name with your file name.
.foreach /f (ca "memusage.txt") {
    r $t1 = @@c++(((nt!_CONTROL_AREA *)(0x${ca}))->Segment->TotalNumberOfPtes);
    .printf "Control Area %p : %d\n", ${ca}, @$t1;
    r $t2 = @$t2 + @$t1;
}
.printf "Total PTEs : %d\n", @$t2;
.printf "MmSt size : %d bytes\n", (@$t2 * @@c++(sizeof(nt!_MMPTE)));

The following command will execute the script.

5: kd> $$><countptes.txt

This will show the number of PTEs for each control area, followed by a summary.

Total PTEs : 62790244
MmSt size : 502321952 bytes

A common high user of MmSt allocations is $Mft. The cache manager will hold the MmSt allocations for these file system metadata files at a cost of up to 4 files per PTE. This technique can be used to determine how much MmSt pool memory $Mft is using by first using findstr at a command prompt to isolate just those values from the !memusage output.

C:\Projects>findstr /c:"$Mft" memusage.txt >mftusage.txt

After stripping out the control area addresses with Excel and running the command script, you'll have the size of the MmSt allocations for just the $Mft files. If this is consuming most of the MmSt bytes, then you are limited to the options mentioned at the beginning of this article. There may be other options if something else is the primary user, but it will likely involve reducing some heavy load on the system.

-Bryan

Written by Jeff Dailey.

Hello NTDebuggers. The best-case scenario is a live debug on the process that is running at high CPU. If you're lucky enough to have a customer / user that will let you do a remote debug, and the problem reproduces on demand, you can take the following action. You need to install the Debugging Tools for Windows and set your symbol path. If at all possible, acquire the symbols for the application you are debugging; we'll assume that you are the expert that supports said program. If it's written in house, get the symbols from the developer. If it's from a third party, that vendor may be willing to provide you with symbols for their product.
Microsoft has most of the symbols for our products available on our public symbol server (.sympath srv*DownstreamStore*). The next thing is to attach windbg.exe to the process in question. From the debuggers directory, type TLIST; this will list your processes. Get the process ID and then run WinDBG.EXE -p PROCESSID, or if you're debugging a program like EATCPU, you can run WINDBG C:\program\EATCPU.EXE. After the debugger is attached or the process is started in the debugger, reproduce the problem.

***** WARNING: Your debugger is probably out-of-date. *****
***** Check for updates. *****
CommandLine: eatcpu.exe
Symbol search path is: srv*C:\symbols*\\symbols\symbols
Executable search path is:
ModLoad: 00400000 0041a000 eatcpu.exe
ModLoad: 779b0000 77b00000 ntdll.dll
ModLoad: 76780000 76890000 C:\Windows\syswow64\kernel32.dll
ModLoad: 62bb0000 62cd1000 C:\Windows\WinSxS\x86_microsoft.vc80.debugcrt_1fc8b3b9a1e18e3b_8.0.50727.762_none_24c8a196583ff03b\MSVCR80D.dll
ModLoad: 75cb0000 75d5a000 C:\Windows\syswow64\msvcrt.dll
(1090.164): Break instruction exception - code 80000003 (first chance)
eax=00000000 ebx=00000000 ecx=712b0000 edx=00000000 esi=fffffffe edi=77a90094
eip=779c0004 esp=0017faf8 ebp=0017fb28
779c0004 cc int 3
0:000> g
(1090.11d4): Break instruction exception - code 80000003 (first chance)
eax=7efa3000 ebx=00000000 ecx=00000000 edx=77a1d894 esi=00000000 edi=00000000
eip=779c0004 esp=0109ff74 ebp=0109ffa0 iopl=0 nv up ei pl zr na pe nc
0:007> .sympath SRV*c:\websymbols*
Symbol search path is: SRV*c:\websymbols*
0:007> g
(1090.17d4): Break instruction exception - code 80000003 (first chance)
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246

Once the problem has started, run the !runaway command.

0:007> !runaway
User Mode Time
Thread Time
2:c04 0 days 0:01:08.827 <-- Note this thread: thread 2:c04 is using more CPU than any other.
7:17d4 0 days 0:00:00.000 <-- Note the other threads are using very little, if any, CPU.
6:1a4c 0 days 0:00:00.000
5:d20 0 days 0:00:00.000
4:157c 0 days 0:00:00.000
3:1b28 0 days 0:00:00.000
1:1134 0 days 0:00:00.000
0:164 0 days 0:00:00.000

0:007> ~2s <-- Using the thread number 2, set the thread context with the ~Ns command.
*** WARNING: Unable to verify checksum for eatcpu.exe
eax=cccccccc ebx=00b93c48 ecx=0000002b edx=00b937a8 esi=00000000 edi=00d9fcf0
eip=0041169c esp=00d9fcd0 ebp=00d9fd9c iopl=0 nv up ei pl nz na po nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010202
eatcpu!checkSomething+0x1c:
0041169c f3ab rep stos dword ptr es:[edi] es:002b:00d9fcf0=cccccccc

0:002> k <-- Dump the call stack using k.

If you look at the following call stack, the application's code in this thread starts where you see EATCPU; the code before that is C runtime code for _beginthread. I want to trace all the code that is running under _beginthread. The assumption here is that I'll find something looping and eating CPU. To do this I will use the WT command. However, first I need to specify a beginning address for WT to start the trace at. Let's use UF to unassemble the function that started our code, taking the return address of eatcpu!myThreadFunction.

00d9fd9c 00411657 eatcpu!checkSomething+0x1c [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 49]
00d9fe74 004115a8 eatcpu!trySomething+0x27 [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 45]
00d9ff58 62bb4601 eatcpu!myThreadFunction+0x38 [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 35]
00d9ff94 62bb459c MSVCR80D!_beginthread+0x221
00d9ffa0 768019f1 MSVCR80D!_beginthread+0x1bc
00d9ffac 77a2d109 kernel32!BaseThreadInitThunk+0xe
00d9ffec 00000000 ntdll!_RtlUserThreadStart+0x23 [d:\vistartm\base\ntos\rtl\rtlexec.c @ 2695]

0:002> uf 004115a8 <-- This command will unassemble the function at this address, beginning to end.
0:007> uf 004115a8
eatcpu!myThreadFunction [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 30]:
30 00411570 55 push ebp
30 00411571 8bec mov ebp,esp
30 00411573 81eccc000000 sub esp,0CCh
30 00411579 53 push ebx
30 0041157a 56 push esi
30 0041157b 57 push edi
30 0041157c 8dbd34ffffff lea edi,[ebp-0CCh]
30 00411582 b933000000 mov ecx,33h
30 00411587 b8cccccccc mov eax,0CCCCCCCCh
30 0041158c f3ab rep stos dword ptr es:[edi]
31 0041158e 8b4508 mov eax,dword ptr [ebp+8]
31 00411591 8945f8 mov dword ptr [ebp-8],eax

eatcpu!myThreadFunction+0x24 [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 33]:
33 00411594 b801000000 mov eax,1
33 00411599 85c0 test eax,eax
33 0041159b 7410 je eatcpu!myThreadFunction+0x3d (004115ad)

eatcpu!myThreadFunction+0x2d [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 35]:
35 0041159d 8b45f8 mov eax,dword ptr [ebp-8]
35 004115a0 8b08 mov ecx,dword ptr [eax]
35 004115a2 51 push ecx
35 004115a3 e880faffff call eatcpu!ILT+35(?trySomethingYAHHZ) (00411028)
35 004115a8 83c404 add esp,4
36 004115ab ebe7 jmp eatcpu!myThreadFunction+0x24 (00411594)

eatcpu!myThreadFunction+0x3d [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 37]:
37 004115ad 5f pop edi
37 004115ae 5e pop esi
37 004115af 5b pop ebx
37 004115b0 81c4cc000000 add esp,0CCh
37 004115b6 3bec cmp ebp,esp
37 004115b8 e8a1fbffff call eatcpu!ILT+345(__RTC_CheckEsp) (0041115e)
37 004115bd 8be5 mov esp,ebp
37 004115bf 5d pop ebp
37 004115c0 c3 ret

0:002> wt -or 00411570 <-- We will use WT to Watch Trace this code. I've selected the starting address of the myThreadFunction function. I've also specified -or to print the return value of each function. Wt produces very visual output. It allows you to quickly identify patterns in the way the code executes that would be much more difficult just using the T (TRACE) command.
8 0 [ 0] ntdll!RtlSetLastWin32Error eax = 0
>> No match on ret
8 0 [ 0] ntdll!RtlSetLastWin32Error
2 0 [ 0] eatcpu!checkSomething
1 0 [ 1] eatcpu!ILT+345(__RTC_CheckEsp)
2 0 [ 1] eatcpu!_RTC_CheckEsp eax = 0
9 3 [ 0] eatcpu!checkSomething
12 6 [ 0] eatcpu!checkSomething eax = 0
12 6 [ 0] eatcpu!checkSomething
7 0 [ 0] eatcpu!trySomething
10 3 [ 0] eatcpu!trySomething eax = 0
10 3 [ 0] eatcpu!trySomething
9 0 [ 0] eatcpu!myThreadFunction <-- I see a pattern, a loop. Beginning of loop.
1 0 [ 1] eatcpu!ILT+35(?trySomethingYAHHZ)
60 0 [ 1] eatcpu!trySomething
1 0 [ 2] eatcpu!ILT+180(?checkSomethingYAHHZ)
62 0 [ 2] eatcpu!checkSomething
5 0 [ 3] kernel32!SetLastError
16 0 [ 3] ntdll!RtlSetLastWin32Error eax = 0
64 21 [ 2] eatcpu!checkSomething
1 0 [ 3] eatcpu!ILT+345(__RTC_CheckEsp)
2 0 [ 3] eatcpu!_RTC_CheckEsp eax = 0
71 24 [ 2] eatcpu!checkSomething
74 27 [ 2] eatcpu!checkSomething eax = 0
67 102 [ 1] eatcpu!trySomething
1 0 [ 2] eatcpu!ILT+345(__RTC_CheckEsp)
2 0 [ 2] eatcpu!_RTC_CheckEsp eax = 0
70 105 [ 1] eatcpu!trySomething eax = 0
18 176 [ 0] eatcpu!myThreadFunction <-- End of loop / beginning of loop
5 0 [ 3] kernel32!SetLastError <-- Always look for what might be going wrong! The last error can give you a clue. We are setting the last error at the low level of the loop.
16 0 [ 3] ntdll!RtlSetLastWin32Error eax = 0
64 21 [ 2] eatcpu!checkSomething
74 27 [ 2] eatcpu!checkSomething eax = 0 <-- Also note checkSomething is returning ZERO; this might indicate a problem. You need to look at the code or assembler.
1 0 [ 2] eatcpu!ILT+345(__RTC_CheckEsp)
27 352 [ 0] eatcpu!myThreadFunction <-- End of loop / beginning of loop
36 528 [ 0] eatcpu!myThreadFunction <-- End of loop / beginning of loop
60 0 [ 1] eatcpu!trySomething
2 0 [ 3] eatcpu!_RTC_CheckEsp eax = 0

12930 instructions were executed in 12929 events (0 from other threads)

Function Name Invocations MinInst MaxInst AvgInst
eatcpu!ILT+180(?checkSomethingYAHHZ) 69 1 1 1
eatcpu!ILT+345(__RTC_CheckEsp) 210 1 1 1
eatcpu!ILT+35(?trySomethingYAHHZ) 70 1 1 1
eatcpu!_RTC_CheckEsp 210 2 2 2
eatcpu!checkSomething 70 60 74 73
eatcpu!myThreadFunction 1 630 630 630
eatcpu!trySomething 71 10 70 68
kernel32!SetLastError 70 5 5 5
ntdll!RtlSetLastWin32Error 70 16 16 16

0 system calls were executed

eax=cccccccc ebx=00b93c48 ecx=00000002 edx=00b937a8 esi=00000000 edi=00d9fe6c
eip=0041164c esp=00d9fda8 ebp=00d9fe74 iopl=0 nv up ei pl nz na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010206
eatcpu!trySomething+0x1c:
0041164c f3ab rep stos dword ptr es:[edi] es:002b:00d9fe6c=cccccccc

0:002> !gle <-- Now that we have broken in, let's check what the last error is using !GLE (Get Last Error). This dumps out the last error from the TEB.
LastErrorValue: (Win32) 0x57 (87) - The parameter is incorrect.
LastStatusValue: (NTSTATUS) 0xc000000d - An invalid parameter was passed to a service or function.

0:007> bp kernel32!SetLastError <-- Let's set a breakpoint on SetLastError to examine what is going on in the function calling it.
Breakpoint 1 hit
eax=cccccccc ebx=00b93c48 ecx=00000000 edx=00b937a8 esi=00d9fcd0 edi=00d9fd9c
eip=767913dd esp=00d9fcc8 ebp=00d9fd9c iopl=0 nv up ei pl zr na pe nc
kernel32!SetLastError:
767913dd 8bff mov edi,edi

0:002> kv <-- Get the call stack
ChildEBP RetAddr Args to Child
00d9fcc4 004116cb 00000057 00d9fe74 00000000 kernel32!SetLastError (FPO: [Non-Fpo]) <-- 0x57, invalid parameter error. Why?
00d9fd9c 00411657 00000000 00d9ff58 00000000 eatcpu!checkSomething+0x4b (FPO: [Non-Fpo]) (CONV: cdecl) [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 57]
00d9fe74 004115a8 00000000 00000000 00000000 eatcpu!trySomething+0x27 (FPO: [Non-Fpo]) (CONV: cdecl) [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 45]
00d9ff58 62bb4601 0017ff34 4f9f12e9 00000000 eatcpu!myThreadFunction+0x38 (FPO: [Non-Fpo]) (CONV: cdecl) [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 35]
00d9ff94 62bb459c 00b937a8 00d9ffac 768019f1 MSVCR80D!_beginthread+0x221 (FPO: [Non-Fpo])
00d9ffa0 768019f1 00b937a8 00d9ffec 77a2d109 MSVCR80D!_beginthread+0x1bc (FPO: [Non-Fpo])
00d9ffac 77a2d109 00b93c48 00d926a6 00000000 kernel32!BaseThreadInitThunk+0xe (FPO: [Non-Fpo])
00d9ffec 00000000 62bb4520 00b93c48 00000000 ntdll!_RtlUserThreadStart+0x23 (FPO: [Non-Fpo]) (CONV: stdcall) [d:\vistartm\base\ntos\rtl\rtlexec.c @ 2695]

0:002> !error 00000057 <-- Double-check using !error; this will decode the error into a human-readable string.
Error code: (Win32) 0x57 (87) - The parameter is incorrect.

0:002> uf checkSomething <-- Let's disassemble the function calling SetLastError.
eatcpu!checkSomething [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 49]:
49 00411680 55 push ebp
49 00411681 8bec mov ebp,esp
49 00411683 81ecc0000000 sub esp,0C0h
49 00411689 53 push ebx
49 0041168a 56 push esi
49 0041168b 57 push edi
49 0041168c 8dbd40ffffff lea edi,[ebp-0C0h]
49 00411692 b930000000 mov ecx,30h
49 00411697 b8cccccccc mov eax,0CCCCCCCCh
49 0041169c f3ab rep stos dword ptr es:[edi]
50 0041169e 837d0800 cmp dword ptr [ebp+8],0 <-- Check what our first parameter is on the stack at EBP+8; remember PLUS == Parameters. Note that looking at the stack, it's 00000000.
50 004116a2 741d je eatcpu!checkSomething+0x41 (004116c1) <-- If parameter 1 is 0 we are going to jump to this address; otherwise we execute the following code.
(WE JUMP)

eatcpu!checkSomething+0x24 [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 52]:
52 004116a4 8bf4 mov esi,esp <-- This would have been the good, non-errant code path.
52 004116a6 68fa000000 push 0FAh
52 004116ab ff15a8814100 call dword ptr [eatcpu!_imp__Sleep (004181a8)] <-- Note we sleep, or do some work other than eating CPU, here if we are passed non-zero.
52 004116b1 3bf4 cmp esi,esp
52 004116b3 e8a6faffff call eatcpu!ILT+345(__RTC_CheckEsp) (0041115e)
53 004116b8 b801000000 mov eax,1 <-- We are setting EAX to 1; this means we have succeeded.
53 004116bd eb15 jmp eatcpu!checkSomething+0x54 (004116d4) <-- Now we jump to the cleanup code for the function.

eatcpu!checkSomething+0x41 [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 57]:
57 004116c1 8bf4 mov esi,esp <-- This appears to be a failure case. We did not get an expected parameter, so we report an error and return zero.
57 004116c3 6a57 push 57h <-- Pushing error 0x57 on the stack: invalid parameter.
57 004116c5 ff15a4814100 call dword ptr [eatcpu!_imp__SetLastError (004181a4)] <-- Our call to SetLastError.
57 004116cb 3bf4 cmp esi,esp
57 004116cd e88cfaffff call eatcpu!ILT+345(__RTC_CheckEsp) (0041115e)
58 004116d2 33c0 xor eax,eax <-- XORing eax with eax will make EAX zero. This is an error condition.

eatcpu!checkSomething+0x54 [c:\source\eatcpu\eatcpu\eatcpu.cpp @ 60]:
60 004116d4 5f pop edi
60 004116d5 5e pop esi
60 004116d6 5b pop ebx
60 004116d7 81c4c0000000 add esp,0C0h
60 004116dd 3bec cmp ebp,esp
60 004116df e87afaffff call eatcpu!ILT+345(__RTC_CheckEsp) (0041115e)
60 004116e4 8be5 mov esp,ebp
60 004116e6 5d pop ebp
60 004116e7 c3 ret

The key thing to observe in this scenario is that when dealing with a high-CPU condition, there is often some problem at the lower level of a loop that prevents the proper execution of the code. If you're lucky, the problem is reported by some error facility in the OS or the application. In either case you can use the above technique for isolation.
The following is the sample code for EATCPU.

// eatcpu.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <windows.h>
#include <process.h>

void myThreadFunction(void *);
int doSomething(int);
int trySomething(int);
int checkSomething(int);

int _tmain(int argc, _TCHAR* argv[])
{
    int truevalue = 1;
    int falsevalue = 0;

    // Let's start some threads. These should represent worker threads in a
    // process. Some of them will do "valid work"; one will fail to do so.
    _beginthread(myThreadFunction, 12000, (void *)&truevalue);
    _beginthread(myThreadFunction, 12000, (void *)&falsevalue);
    Sleep(10 * 60000);
    return 0;
}

void myThreadFunction(void *value)
{
    int *localvalue = (int *)value;
    while (1)
    {
        trySomething(*localvalue);
    }
}

int doSomething(int value)
{
    return trySomething(value);
}

int trySomething(int value)
{
    return checkSomething(value);
}

int checkSomething(int value)
{
    if (value) // Make sure we have some valid input parameter.
    {
        // We will pretend we are doing work; this could be anything, file I/O etc.
        Sleep(250);
        return TRUE;
    }
    else
    {
        // This is an error condition. This function expects a non-zero
        // parameter and will report ZERO as an invalid parameter.
        SetLastError(ERROR_INVALID_PARAMETER);
        return FALSE;
    }
}

Hi NTDebuggers, this week's puzzler just so happens to match its number: 0x00000006 = ERROR_INVALID_HANDLE. That said, let me give you a scenario, and the challenge will be to provide the best action plan to isolate the problem. This should include an explanation of what types of code problems cause invalid handles. Scenario 1: You have a customer or client that is getting invalid handle errors. This is causing unhandled exceptions and the application is crashing. What action plan do you give the customer? In scenario 1, you don't have access to the remote machine. Your customer will execute any action plan you give them, gather the data, and provide you with the results.
Scenario 2: You have full access to the process and you can debug it live on your own personal machine; you even have source access. What do you do?

[Update: our answer. Posted 5/27/2008]

Hi NTDebuggers, if you've been debugging for a while I'm sure you've run into an Invalid Handle error 6 (aka ERROR_INVALID_HANDLE, or "The handle is invalid"). If you see it in your debugger output, it indicates you have tried to pass a handle to a call that is expecting a different handle type, or the value of the handle itself is not valid. To understand this error in more detail you have to understand what handles are and where they live in the operating system. Each process has a handle table in the kernel. Think of this as an array or table of objects in kernel mode. The handle values are actually multiples of four (the bottom bits are reserved for other uses). For this illustration we will be numbering them 1, 2, 3, etc. When you open a file you will get back a simple pointer-sized value whose entry in the handle table points to a file object. In this case let's say you opened a file and received a handle value of 2. If you envision the handle table you might see something like this.

------------- USER ADDRESS SPACE FOR YOUR PROCESS -------------

Your handle for your open file is 2

------------- KERNEL ADDRESS SPACE FOR YOUR PROCESS -------------

Handle table
Handle [1]=Pointer to ->Reg
Handle [2]=Pointer to ->File <<<<<< Your handle is 2. It points to a file object.
Handle [3]=Pointer to ->File
Handle [4]=Pointer to ->Reg
Handle [5]=Pointer to ->Process
Handle [6]=Pointer to ->Thread
Handle [7]=Pointer to ->Thread
Handle [8]=Pointer to ->File
Handle [9]=Pointer to ->File

In the debugger, you can see the handle table is referenced from the EPROCESS object for each process. Before we get into troubleshooting, let's look at a couple of scenarios for bad handles.
In the first bad handle scenario, let's say your application closes handle 2 using CloseHandle(); however, your application keeps the value 2 in a variable somewhere. In this scenario we will assume no other code has come along and opened a new object that is occupying element 2 of the handle table. If you go to make a file I/O call using handle 2, the object manager will catch this and your app will get an invalid handle exception. This is somewhat harmless, but your app may crash if you don't catch the exception, and your I/O will obviously not complete.

In the second scenario we'll say that your file I/O code closes the handle and holds on to that handle value. Meanwhile another part of your application opens a registry key or some other object that is not a file object. Now your file I/O code goes to use handle 2 again. The problem is it's not a handle to a file now. When the registry code opened the registry key, element 2 was available, so we now have a registry object in that location. If you go to do file I/O on handle 2 you will get an invalid handle message.

Now in our final scenario, we close a file handle in our file I/O code and keep the handle around. In this case, though, we have different code that is also doing file I/O. It opens a file and gets handle value 2. Now you have big trouble: if the first file I/O code writes to handle 2 (and it should not, because it closed that handle!), the write goes to a different file. This means you have code doing a write into an unknown region for that code's context and file format. This will result in file corruption and no invalid handle error.

So this really comes down to best practice. If you're writing code and close a handle, NULL IT OUT! That way you guarantee it will not be reused accidentally at some other point.

CloseHandle(MyHandle);
MyHandle = NULL;

On to debugging: so how do you track this down? For the remote scenario you could give your customer the following action plan:

1.
Start gflags and under the image file tab, enter your application name in the image edit control.
2. Check Create User mode stack trace database.
3. Start the process under windbg: "WinDBG processname.exe"
4. In windbg, run !htrace -enable
5. Do a sxe ch. This will cause windbg to break on an invalid handle exception.
6. Run sx to confirm "ch - Invalid handle - break" is set.
7. Enter g for go.
8. Let the process run until you get an invalid handle message. The debugger should break in.
9. Now all you have to do is run !htrace

!htrace will dump out the call stacks for each handle that was opened or closed. You need to take the invalid handle value and search backward to see where it was last opened, and where it was closed before that. It's likely that the close before the last open is your culprit code path. Make sure that suspect close nulls the handle value out so it does not get reused.

In the live debug scenario, follow the same procedure. In this case you have the luxury of going back and setting a breakpoint in the offending call stack that freed the handle before the reuse. You can then figure out why it was not zeroed out.

Good luck and happy debugging. -Jeff Dailey

Hello, my name is Ron Stock and I'm an Escalation Engineer on the Microsoft Platforms Global Escalation Services Team. Today I'm going to talk about pool corruption, which manifests itself in various ways. It's usually hard to track down because the culprit is long gone when the machine crashes. Tools such as Special Pool make our debug lives easier; however, tracking down corruption doesn't always have to make you pull your hair out. In some cases simply re-tracing the steps of the crash can reveal a smoking gun. Let's take a look at a real world example. First we need to be in the right context, so we set the trap frame to give us the register context when the machine crashed.
2: kd> .trap 0xfffffffff470662c
ErrCode = 00000002
eax=35303132 ebx=fd24d640 ecx=fd24d78c edx=fd24d784 esi=fd24d598 edi=fd24d610
eip=e083f7a5 esp=f47066a0 ebp=f47066e0 iopl=0 nv up ei pl nz na po nc
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010202
nt!KeWaitForSingleObject+0x25b:
e083f7a5 ff4818 dec dword ptr [eax+18h] ds:0023:3530314a=????????

From the register output we can tell that the system crashed while attempting to dereference a pointer at memory location [eax+18h]. The value stored in register eax is probably the address of a structure, given that the code is attempting to dereference offset 18 from the base of eax. Currently eax is pointing to 0x35303132, which is clearly not a valid kernel-mode address. Most kernel-mode addresses on 32-bit systems will be above the 0x80000000 range, assuming the machine is not using something like the /3GB switch. Our mission now is to determine how eax was set. First we'll unassemble the failing function using the UF command.

2: kd> uf nt!KeWaitForSingleObject
…..
e083f7a5 ff4818 dec dword ptr [eax+18h]
e083f7a8 8b4818 mov ecx,dword ptr [eax+18h]
e083f7ab 3b481c cmp ecx,dword ptr [eax+1Ch]
e083f7ae 0f836ef9ffff jae nt!KeWaitForSingleObject+0x2a3 (e083f122)

I truncated the results of the UF output to conserve space in this blog. Instruction e083f7a5 is the line of code that generated the fault, so our focus is to determine how the value of eax was set prior to running instruction e083f7a5. Based on the UF output, instruction e083f11c could have jumped to e083f7a5. Let's investigate how eax is set before instruction e083f11c jumped to the failing line.
nt!KeWaitForSingleObject+0x244:
e083f107 8d4208 lea eax,[edx+8]
e083f10a 8b4804 mov ecx,dword ptr [eax+4]
e083f10d 8903 mov dword ptr [ebx],eax
e083f10f 894b04 mov dword ptr [ebx+4],ecx
e083f112 8919 mov dword ptr [ecx],ebx
e083f114 895804 mov dword ptr [eax+4],ebx
e083f117 8b4668 mov eax,dword ptr [esi+68h]
e083f11a 85c0 test eax,eax
e083f11c 0f8583060000 jne nt!KeWaitForSingleObject+0x25b (e083f7a5) <-- Jump

Instruction e083f117 moves a value into eax, so I'm dumping the value here.

2: kd> dd esi+68h l1
fd24d600 35303132

Bingo! There's our bad value of 35303132, which is also the value of the eax register, so we probably took this code path. Just to confirm the current value of eax, I'm dumping the register, which should mirror the results of using the "r" command to get the full register set.

2: kd> r eax
Last set context:
eax=35303132

Now our focus moves to why dword ptr [esi+68h] points to the bad value. Without source code this can be challenging to narrow down; however, the !pool command comes in handy for cases like this.

2: kd> ? esi+68h
Evaluate expression: -47917568 = fd24d600

Let's examine fd24d600 a little more in detail using the !pool command. The !pool command neatly displays an entire page of 4k kernel memory, listing all of the allocations contained on the page. From the output we can determine that our address is allocated from NonPaged pool and holds some sort of thread data, evidenced by the Thre tag next to our allocation. Notice the asterisk next to fd24d578 indicating the start of our pool block. Virtual address fd24d578 is the beginning of an 8-byte pool header, and the header is followed by the actual data blob. Be aware that not all memory is allocated from the pool, so the !pool command is not always useful. I have more information on !pool later in the blog.
2: kd> !pool fd24d600
Pool page fd24d600 region is Nonpaged pool
fd24d000 size: 270 previous size: 0 (Allocated) Thre (Protected)
fd24d270 size: 10 previous size: 270 (Free) `.lk
fd24d280 size: 40 previous size: 10 (Allocated) Ntfr
fd24d2c0 size: 20 previous size: 40 (Free) CcSc
fd24d2e0 size: 128 previous size: 20 (Allocated) PTrk
fd24d408 size: 128 previous size: 128 (Allocated) PTrk
fd24d530 size: 8 previous size: 128 (Free) Mdl
fd24d538 size: 28 previous size: 8 (Allocated) Ntfn
fd24d560 size: 18 previous size: 28 (Free) Muta
*fd24d578 size: 270 previous size: 18 (Allocated) *Thre (Protected) <-- our pool block
fd24d7e8 size: 428 previous size: 270 (Allocated) Mdl
fd24dc10 size: 30 previous size: 428 (Allocated) Even (Protected)
fd24dc40 size: 30 previous size: 30 (Allocated) TCPc
fd24dc70 size: 18 previous size: 30 (Free) SeTd
fd24dc88 size: 28 previous size: 18 (Allocated) Ntfn
fd24dcb0 size: 128 previous size: 28 (Allocated) PTrk
fd24ddd8 size: 228 previous size: 128 (Allocated) tdLL

I'll dump out the contents of the allocation using the dc command, starting at the pool header for this block of memory. Remember, we expect to move a value from [esi+68] into eax. Later the code dereferences [eax+18], which leads me to believe that eax is the base of a structure. So we expect a valid kernel-mode value to be moved into eax rather than something like a string; otherwise the code wouldn't dereference an offset.

2: kd> dc fd24d578
fd24d578 0a4e0003 e5726854 00000003 00000002 ..N.Thr.........
fd24d588 eb10ee70 20000000 e08b5c60 eb136f96 p...... `\...o..
fd24d598 006e0006 00000000 fd24d5a0 fd24d5a0 ..n.......$...$.
fd24d5a8 fd24d5a8 fd24d5a8 f4707000 f4704000 ..$...$..pp..@p.
fd24d5b8 f4706d48 00000000 fd24d700 fd24d700 Hmp.......$...$.
fd24d5c8 fd24d5c8 fd24d5c8 fd270290 01000100 ..$...$...'.....
fd24d5d8 00000002 00000000 00000001 01000a02 ................
fd24d5e8 00000000 fd24d640 32110000 0200009f ....@.$....2....
2: kd> dc fd24d5f8
fd24d5f8  00000000 20202020 32313532 000a6953  ....    25125Si..  <-- appears to be a string
fd24d608  20202020 20202020 20202020 5c4e4556                VEN\
fd24d618  32313532 20202020 20202020 20202020  2512
fd24d628  00000000 00000000 00000000 00000000  ................
fd24d638  00000000 00000000 fd24d78c fd24d78c  ..........$...$.
fd24d648  00000000 fd24d784 fd24d640 30010000  ......$.@.$....0
fd24d658  00343033 00000000 00000000 00000000  304.............
fd24d668  00000000 01000000 00000000 00000000  ................
fd24d678  fd24d598 00000000 00000000 00000000  ..$.............
fd24d688  fd24d618 fd24d618 fd24d598 fd24d610  ..$...$...$...$.
fd24d698  00000000 00010102 00000000 00000000  ................
fd24d6a8  00000000 00000000 e08aeee0 00000000  ................
fd24d6b8  00000801 0000000f fd270290 0000000f  ..........'.....
fd24d6c8  fd24d5c0 fd24d6d0 00000000 00000000  ..$...$.........
fd24d6d8  00000000 00000000 00000000 00000000  ................
fd24d6e8  00000000 00000000 f4707000 06300612  .........pp...0.

Examining the memory contents above you can clearly see a string overwrite starting around 0xfd24d5f8. The memory we dereferenced, fd24d600 or [esi+68], is right in the middle of the string. The string appears to be a vendor number for a piece of hardware. After examining the setupapi.log and the OEM*.inf files in the Windows\inf directory we found a similar string and narrowed it down to a third party.

A little more on the !pool command is important to mention. The memory address of interest may not always be allocated from the pool, in which case you would encounter a message similar to this:

0: kd> !pool 80000ae5
Pool page 80000ae5 region is Unknown
80000000 is not a valid large pool allocation, checking large session pool...
80000000 is freed (or corrupt) pool
Bad allocation size @80000000, too large
***
*** An error (or corruption) in the pool was detected;
*** Pool Region unknown (0xFFFFFFFF80000000)
***
*** Use !poolval 80000000 for more details.
If this had been the case I would have enabled Special Pool to narrow down the culprit.

Introduction

Hi everyone, Bob here again with a description of Work Queues and Dispatcher Headers. For those of you that look at dumps, you may have noticed that there are always threads waiting at KeRemoveQueue. You may have wondered what this function does. Well, I’m glad you asked!

What are those threads doing?

Those threads waiting on the remove queue are worker threads. Worker threads are used when a system task cannot or does not want to do a particular task itself. For example, a thread running a DPC cannot pend and wait for a task to be done, so it sends the work to a worker thread.

How does this mechanism work?

The worker thread and the entities that are going to give the worker thread work each know of a KQUEUE structure. The KQUEUE structure is initialized and, since the queue has an embedded dispatcher object, the worker thread pends on it waiting to be signaled. That is what you see on one of these waiting stacks. Below is a KQUEUE:

typedef struct _KQUEUE {
    DISPATCHER_HEADER Header;
    LIST_ENTRY EntryListHead;
    ULONG CurrentCount;
    ULONG MaximumCount;
    LIST_ENTRY ThreadListHead;
} KQUEUE, *PKQUEUE, *RESTRICTED_POINTER PRKQUEUE;

Below is an example of a waiter:

Priority 9 BasePriority 9 PriorityDecrement 0
Child-SP          RetAddr           Call Site
fffffadc`b053dab0 fffff800`01027752 nt!KiSwapContext+0x85
fffffadc`b053dc30 fffff800`01024ef0 nt!KiSwapThread+0x3c9  <-- Waits on the dispatcher object
fffffadc`b053dc90 fffffadc`b9a380b0 nt!KeRemoveQueue+0x656
fffffadc`b053dd10 fffff800`0124b972 srv!WorkerThread+0xb0
fffffadc`b053dd70 fffff800`010202d6 nt!PspSystemThreadStartup+0x3e
fffffadc`b053ddd0 00000000`00000000 nt!KxStartSystemThread+0x16

What is a dispatcher object?

A dispatcher object can be passed into kernel routines such as KeWaitForSingleObject. This object is a synchronization object. This means that a thread can wait on this object until another thread “signals” it.
The function KeRemoveQueue is waiting for its dispatcher object to be “signaled”. Below is a dispatcher object. Basically, threads are queued on this object until the object is “signaled”. Once that happens the waiting thread is readied for execution.

nt!_DISPATCHER_HEADER
   +0x000 Type          : UChar
   +0x001 Absolute      : UChar
   +0x001 NpxIrql       : UChar
   +0x002 Size          : UChar
   +0x002 Hand          : UChar
   +0x003 Inserted      : UChar
   +0x003 DebugActive   : UChar
   +0x000 Lock          : Int4B
   +0x004 SignalState   : Int4B        <-- Set when the object is signaled.
   +0x008 WaitListHead  : _LIST_ENTRY  <-- List of waiters on this object.

Below is an actual dispatcher object for a queue:

5: kd> dt nt!_dispatcher_header fffffadcdb3ed368
   +0x000 Type          : 0x4 ''
   +0x001 Absolute      : 0 ''
   +0x001 NpxIrql       : 0 ''
   +0x002 Size          : 0x10 ''
   +0x002 Hand          : 0x10 ''
   +0x003 Inserted      : 0 ''
   +0x003 DebugActive   : 0 ''
   +0x000 Lock          : 1048580
   +0x004 SignalState   : 0
   +0x008 WaitListHead  : _LIST_ENTRY [ 0xfffffadc`db3f4ce8 - 0xfffffadc`da74dce8 ]  <-- List of threads waiting for this object

Each thread has a wait list entry for each object it is waiting for:

5: kd> dt nt!_KWAIT_BLOCK 0xfffffadc`db3f4ce8
   +0x000 WaitListEntry : _LIST_ENTRY [ 0xfffffadc`da74dce8 - 0xfffffadc`db3ed370 ]  <-- Next thread waiting for this object
   +0x010 Thread        : 0xfffffadc`db3f4bf0 _KTHREAD  <-- The thread waiting for the object
   +0x018 Object        : 0xfffffadc`db3ed368  <-- The object the thread is waiting for (queue object)
   +0x020 NextWaitBlock : 0xfffffadc`db3f4ce8 _KWAIT_BLOCK  <-- Next object this thread is waiting for (thread 0xfffffadc`db3f4bf0), if any
   +0x028 WaitKey       : 0
   +0x02a WaitType      : 0x1 ''
   +0x02b SpareByte     : 0 ''
   +0x02c SpareLong     : 1533340

What wakes up or signals the thread?

When the thread is waiting, an entity can call KeInsertQueue to insert elements in the work queue. When that event happens the thread is woken up, the system removes the entry from the work queue, and the call from KeRemoveQueue returns with the element.
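The KeInsertQueue/KeRemoveQueue pair maps closely onto the familiar producer/consumer pattern. As a rough user-mode analogy only (the kernel KQUEUE is of course not implemented this way), here is a Go sketch where a buffered channel plays the role of the queue and a goroutine plays the role of the worker thread:

```go
package main

import "fmt"

// workItem stands in for the entries that KeInsertQueue places
// on a KQUEUE's EntryListHead.
type workItem struct{ id int }

func main() {
	// The channel plays the role of the KQUEUE: the worker blocks
	// on it (like KeRemoveQueue) until a producer inserts work
	// (like KeInsertQueue). If work is already queued, the receive
	// does not pend at all.
	queue := make(chan workItem, 8)
	done := make(chan int)

	go func() { // the worker thread
		processed := 0
		for item := range queue { // blocks like KeRemoveQueue
			processed++
			_ = item // process the work item here
		}
		done <- processed
	}()

	// A DPC-like producer handing off work it cannot do itself.
	for i := 0; i < 3; i++ {
		queue <- workItem{id: i}
	}
	close(queue)
	fmt.Println("processed:", <-done) // prints "processed: 3"
}
```

The key property is the same in both worlds: the consumer does not spin, it pends on a synchronization primitive and is only readied when work arrives.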
If the thread is not waiting when the call is made, the entry is put in the queue and the next call to KeRemoveQueue will not pend.

What about synchronization objects?

When one thread wants to synchronize with another, a synchronization object (such as an event) is used. When a thread waits for an event, another thread will signal the event when a job, such as I/O, is done. The dispatcher headers above are used for all the synchronization objects. As you can see by how the structures are designed, one thread can wait for many objects. Below, this thread is waiting for a synchronization object.

THREAD fffffadff752b040  Cid 0004.2858
    fffffadcbe1c3768  NotificationEvent  <-- Object thread is waiting for.
Not impersonating
DeviceMap                 fffffa80000840f0
Owning Process            fffffadce06e15a0       Image: System
Wait Start TickCount      49664324       Ticks: 247591 (0:01:04:28.609)
Context Switch Count      1
UserTime                  00:00:00.000
KernelTime                00:00:00.000
Start Address EmcpBase (0xfffffadcbe22d810)
Stack Init fffffadcb870be00 Current fffffadcb870b940
Base fffffadcb870c000 Limit fffffadcb8706000 Call 0
Priority 8 BasePriority 8 PriorityDecrement 0

Child-SP          RetAddr           : Args to Child                                                                       : Call Site
fffffadc`b870b980 fffff800`01027752 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiSwapContext+0x85
fffffadc`b870bb00 fffff800`0102835e : 00000000`00000000 00000000`00000000 fffffadf`f752b0d8 fffffadf`f752b040 : nt!KiSwapThread+0x3c9
fffffadc`b870bb60 fffffadc`be21832b : 00000000`00000000 fffff800`00000000 00000000`00000000 fffffadc`be88b100 : nt!KeWaitForSingleObject+0x5a6
fffffadc`b870bbe0 fffffadc`be1bd0da : 00000000`00000004 00000000`00000000 fffffadc`be239c40 00000000`00000000 : EmcpBase+0xb32b
fffffadc`b870bc20 fffffadc`be22c9a1 : 00000000`00000000 fffffadc`b870bd08 fffffadc`be239c40 fffffadc`e06f6fe0 : EmcpMPAA+0xd0da
fffffadc`b870bc70 fffffadc`be22d90b : fffffadc`da2338c0 00000000`00000001 fffffadc`d9eb3c10 fffffadc`b870bd08 : EmcpBase+0x1f9a1
fffffadc`b870bce0
fffff800`0124b972 : fffffadc`d9f85780 fffffadf`f752b040 00000000`00000080 fffffadf`f752b040 : EmcpBase+0x2090b
fffffadc`b870bd70 fffff800`010202d6 : fffff800`011b1180 fffffadf`f752b040 fffff800`011b5500 00000000`00000000 : nt!PspSystemThreadStartup+0x3e
fffffadc`b870bdd0 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KxStartSystemThread+0x16

Dispatcher header from the address above:

5: kd> dt nt!_dispatcher_header fffffadcbe1c3768
   +0x000 Type          : 0 ''
   +0x002 Size          : 0x6 ''
   +0x002 Hand          : 0x6 ''
   +0x000 Lock          : 393216
   +0x008 WaitListHead  : _LIST_ENTRY [ 0xfffffadf`f752b138 - 0xfffffadf`f752b138 ]

Wait block for this thread:

5: kd> dt 0xfffffadf`f752b138 _KWAIT_BLOCK
nt!_KWAIT_BLOCK
   +0x000 WaitListEntry : _LIST_ENTRY [ 0xfffffadc`be1c3770 - 0xfffffadc`be1c3770 ]
   +0x010 Thread        : 0xfffffadf`f752b040 _KTHREAD
   +0x018 Object        : 0xfffffadc`be1c3768
   +0x020 NextWaitBlock : 0xfffffadf`f752b138 _KWAIT_BLOCK
   +0x02c SpareLong     : 1

Conclusion

I hope this gives a better understanding of Work Queues and Dispatcher Headers.

Hello all, Scott Olson here again to share another interesting issue I worked on a while back. The issue was that after upgrading to Windows XP Service Pack 2 the system would experience random bug checks with memory corruption. Interestingly, there was a very specific pattern to the corruption - it looked like a PFN address and flags were randomly placed into the page table page in several places in the process. The memory manager would never do this type of thing, and I suspected that a driver was editing user page table pages, which should never be done.

Let's take a look at the stack:

Here is the page table entry for the virtual address:

This shows that the value 15e0086f is incorrectly put into the page table pages. This bad value corresponds to a write-through mapping to a range allocated via a call to MmAllocatePagesForMdl.
The driver also has an outstanding MmProbeAndLockPages call on the pages, indicated by the reference count of 2. Thinking that this PFN value is incorrect, I decided to search for this value and see what I could find. I found a few entries, but the middle one looks like it could be an MDL allocation. So I verified this:

kd> !pool 86cacbf4 2
Pool page 86cacbf4 region is Nonpaged pool
*86cacbd0 size:   80 previous size:   28  (Allocated) *Mdl
		Pooltag Mdl : Io, Mdls

Yes, this is an MDL; let's inspect it:

Notice that the page 15e00 is in the MDL's page list. Next I wanted to see if I could find a driver that may have references to this MDL, and I found two:

................

Now let's see who owns these. This gives us a pretty convincing probability that this driver is at fault.

So now you may ask, "Why did this problem only start after applying Service Pack 2?" By default, when you install Service Pack 2, Data Execution Prevention (DEP) is enabled on systems that support it. The support for DEP is in the PAE kernel, which uses extra bits to describe the page table entries. In this crash the solution was to disable DEP until the driver could be corrected. The driver was incorrectly using the memory mappings by ignoring the extra bits in the page number and causing the memory corruption by writing to the wrong page.

For more information on default DEP settings and enabling/disabling it in Windows, see the following article:

899298  The "Understanding Data Execution Prevention" help topic incorrectly states the default setting for DEP in Windows Server 2003 Service Pack 1
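The bad value 15e0086f decomposes cleanly into a page frame number plus flag bits, which is why the corruption "looked like a PFN address and flags". As an illustration (using the standard 32-bit non-PAE x86 PTE layout, where the low 12 bits are flags and the rest is the PFN), a small Go sketch splitting the value:

```go
package main

import "fmt"

func main() {
	// The bad value found in the corrupted page table pages above.
	const pte uint32 = 0x15e0086f

	// In a 32-bit non-PAE PTE the low 12 bits are flags
	// (Present, Read/Write, Write-Through, and so on) and the
	// upper 20 bits are the page frame number.
	pfn := pte >> 12
	flags := pte & 0xFFF

	fmt.Printf("pfn=%x flags=%03x\n", pfn, flags) // pfn=15e00 flags=86f

	// Bit 3 (0x8) is the write-through (PWT) bit, matching the
	// write-through mapping described in the post.
	fmt.Println("write-through:", flags&0x8 != 0) // true
}
```

The extracted PFN, 15e00, is exactly the page found in the MDL's page list, tying the corrupt PTE value back to the driver's MmAllocatePagesForMdl range.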
import "crypto/md5"

Package md5 implements the MD5 hash algorithm as defined in RFC 1321.

MD5 is cryptographically broken and should not be used for secure applications.

Files: md5.go, md5block.go, md5block_decl.go

Constants

    BlockSize - the blocksize of MD5 in bytes.
    Size - the size of an MD5 checksum in bytes.

func New() hash.Hash

New returns a new hash.Hash computing the MD5 checksum. The Hash also implements encoding.BinaryMarshaler and encoding.BinaryUnmarshaler to marshal and unmarshal the internal state of the hash.

func Sum(data []byte) [Size]byte

Sum returns the MD5 checksum of the data.

Package md5 imports 5 packages and is imported by 16725 packages. Updated 2020-06-01.
Up to [cvs.NetBSD.org] / src / lib / libc / gen

Request diff between arbitrary revisions

Default branch: MAIN

Revision 1.18 / (download) - annotate - [select for diffs], Sat Jul 9 21:15:00 2016 UTC (18 months, 1 week ago) by dholl
Changes since 1.17: +5 -5 lines
Diff to previous 1.17 (colored)
Fix three of these strings (ones that are rarely seen)

Revision 1.17 / (download) - annotate - [select for diffs], Wed Jan 17 23:24:22 2007 UTC (11 years)
Changes since 1.16: +2 -3 lines
Diff to previous 1.16 (colored)
Remove more duplicate #includes, and a few spurious whitespaces at EOL
From Slava Semushin <slava.semushin@gmail.com>

Revision 1.16 / (download) - annotate - [select for diffs], Tue Sep 13 01:44:09 2005 UTC (12 years)
Changes since 1.15: +73 -39 lines
Diff to previous 1.15 (colored)
compat core reorg.

Revision 1.15 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:56
Changes since 1.14: +3 -7 lines
Diff to previous 1.14 (colored)
Move UCB-licensed code from 4-clause to 3-clause licence. Patches provided by Joel Baker in PR 22280, verified by myself.

Revision 1.14 / (download) - annotate - [select for diffs], Tue Aug 17 03:50:56 1999 UTC (18 years, 5 months ago) by mycroft
Changes since 1.13: +7 -2 lines
Diff to previous 1.13 (colored)
Make some needed weak aliases.

Revision 1.13 / (download) - annotate - [select for diffs], Sun Dec 6 07:05:49 1998 UTC (19 years, 1 month ago) by jonathan
Branch: MAIN
CVS Tags: netbsd-1-4-base, netbsd-1-4-RELEASE, netbsd-1-4-PATCH003, netbsd-1-4-PATCH002, netbsd-1-4-PATCH001, netbsd-1-4
Changes since 1.12: +1 -6 lines
Diff to previous 1.12 (colored)
Move warnings about sys_siglist[] and __sys_siglist to _sys_siglist.c, so that the warning is shown once at link time, not three times.

Revision 1.12 / (download) - annotate - [select for diffs], Tue Dec 1 20:31:00 1998 UTC (19 years, 1 month ago) by thorpej
Branch: MAIN
Changes since 1.11: +7 -2 lines
Diff to previous 1.11 (colored)
Warn about references to the compatibility sys_siglist[], and direct the user to include <signal.h> or <unistd.h> to generate the correct reference. Warn about references to the deprecated __sys_siglist[], and direct the user to include <signal.h> or <unistd.h> and use sys_siglist instead.

Revision 1.11 / (download) - annotate - [select for diffs], Mon Nov 30 20:42:44 1998 UTC (19 years, 1 month ago) by thorpej
Branch: MAIN
Changes since 1.10: +3 -6 lines
Diff to previous 1.10 (colored)
Don't include <sys/cdefs.h> twice. Also, don't include <signal.h> or <unistd.h>. These headers are not needed, and if included now, cause a compile error since the exported and renamed type is different.

Revision 1.10 / (download) - annotate - [select for diffs], Sat Sep 26 23:53:36 1998 UTC (19 years, 3 months ago) by christos
Branch: MAIN
Changes since 1.9: +3 -3 lines
Diff to previous 1.9 (colored)
Adapt to new signal changes (from Jason)

Revision 1.9 / (download) - annotate - [select for diffs], Sun Jul 13 19:46:16 1997 UTC (20 years)
Changes since 1.8: +3 -2 lines
Diff to previous 1.8 (colored)
Fix RCSID's

Revision 1.8.4.1 / (download) - annotate - [select for diffs], Mon Sep 16 18:40:36 1996 UTC (21 years, 4 months ago) by jtc
Branch: ivory_soap2
Changes since 1.8: +9 -3 lines
Diff to previous 1.8 (colored) next main 1.9 (colored)
snapshot namespace cleanup

Revision 1.6.2.2 / (download) - annotate - [select for diffs], Tue May 2 19:35:10 1995 UTC (22 years, 8 months ago) by jtc
Branch: ivory_soap
Changes since 1.6.2.1: +2 -2 lines
Diff to previous 1.6.2.1 (colored) next main 1.7 (colored)
#include "namespace.h"

Revision 1.8 / (download) - annotate - [select for diffs], Sat Mar 4 01:56:02 1995 UTC
Changes since 1.7: +2 -2 lines
Diff to previous 1.7 (colored)
fix up some RCS Id's i botched.

Revision 1.7 / (download) - annotate - [select for diffs], Mon Feb 27 05:51:07 1995 UTC (22 years, 10 months ago) by cgd
Branch: MAIN
Changes since 1.6: +12 -7 lines
Diff to previous 1.6 (colored)
merge with 4.4-Lite, keeping local changes. clean up Ids

Revision 1.1.1.2 / (download) - annotate - [select for diffs] (vendor branch), Sat Feb 25 09:12:35 1995 UTC (22 years, 10 months ago) by cgd
Branch: WFJ-920714, CSRG
CVS Tags: lite-2, lite-1
Changes since 1.1.1.1: +42 -7 lines
Diff to previous 1.1.1.1 (colored)
from lite, with minor name rearrangement to fit.

Revision 1.6.2.1 / (download) - annotate - [select for diffs], Fri Feb 17 10:40:32 1995 UTC (22 years, 11 months ago) by jtc
Branch: ivory_soap
Changes since 1.6: +4 -2 lines
Diff to previous 1.6 (colored)
Use "namespace.h", back out old mechanism for namespace purity.

Revision 1.6 / (download) - annotate - [select for diffs], Mon Dec 12 22:42:13 1994 UTC (23 years, 1 month ago) by jtc
Branch: MAIN
Branch point for: ivory_soap
Changes since 1.5: +70 -16 lines
Diff to previous 1.5 (colored)
Rework indirect reference support as outlined by my recent message to the tech-userlevel mailing list.

Revision 1.5 / (download) - annotate - [select for diffs], Mon Oct 10 04:46:45 1994 UTC (23 years, 3 months ago) by jtc
Branch: MAIN
Changes since 1.4: +16 -70 lines
Diff to previous 1.4 (colored)
Renamed sys_errlist[] and sys_nerr to __sys_errlist[] and __sys_nerr. The traditional API of sys_errlist[] and sys_nerr is provided by weak references if they are supported. Otherwise, we're forced to have two copies of the error message string table in the library. Fortunately, unless a program uses both sys_errlist[] and strerror(), only one of the copies will be linked into the executable. This is all to provide a clean namespace as required by ANSI. I've done the same for sys_siglist[], even though it is not required, to be consistent.

Revision 1.4 / (download) - annotate - [select for diffs], Thu Dec 2 09:53:28 1993 UTC (24 years, 1 month)
Changes since 1.3: +2 -2 lines
Diff to previous 1.3 (colored)
Add `const's to sys_siglist and sys_signame decls.

Revision 1.3 / (download) - annotate - [select for diffs], Thu Aug 26 00:45:07 1993 UTC (24 years, 4 months ago) by jtc
Branch: MAIN
Changes since 1.2: +2 -2 lines
Diff to previous 1.2 (colored)
Declare rcsid strings so they are stored in text segment.

Revision 1.2 / (download) - annotate - [select for diffs], Fri Jul 30 08:23:26 1993 UTC (24 years ago)
import java.util.Scanner;

public class Pearce_Cheryl_03_03
{
    public static void main(String[] args)
    {
        double area1;    // Area of rectangle one
        double area2;    // Area of rectangle two
        double length1;  // Length of rectangle one
        double length2;  // Length of rectangle two
        double width1;   // Width of rectangle one
        double width2;   // Width of rectangle two

        Scanner keyboard = new Scanner(System.in);

        System.out.println("Enter the length of the first rectangle, then press enter.");
        length1 = keyboard.nextDouble();
        System.out.println("Enter the width of the first rectangle, then press enter.");
        width1 = keyboard.nextDouble();
        area1 = (length1 * width1);
        System.out.println("The area of the first rectangle is " + area1 + ".");

        System.out.println("Enter the length of the second rectangle, then press enter.");
        length2 = keyboard.nextDouble();
        System.out.println("Enter the width of the second rectangle, then press enter.");
        width2 = keyboard.nextDouble();
        area2 = (length2 * width2);
        System.out.println("The area of the second rectangle is " + area2 + ".");

        if (area1 > area2)
            System.out.println("The first rectangle has a greater area than the second.");
        if (area1 < area2) // was "area 1": the stray space caused the "(" expected, not a statement, and ";" expected errors
            System.out.println("The second rectangle has a greater area than the first.");
        if (area1 == area2)
            System.out.println("The area of both rectangles is the same!");
    }
}

This is quite frustrating. Any help is appreciated!